Hacker News | georgemcbay's comments

> Hopefully it continues to get commoditized to the point where no monopoly can get a stranglehold on it

I believe this is the natural end-state for LLM based AI but the danger of these companies even briefly being worth trillions of dollars is that they are likely to start caring about (and throwing lobbying money around) AI-related intellectual property concerns that they've never shown to anyone else while building their models and I don't think it is far fetched to assume they will attempt all manner of underhanded regulatory capture in the window prior to when commoditization would otherwise occur naturally.

All three of OpenAI, Google and Anthropic have already complained about their LLMs being ripped off.

https://www.latimes.com/business/story/2026-02-13/openai-acc...

https://cloud.google.com/blog/topics/threat-intelligence/dis...

https://fortune.com/2026/02/24/anthropic-china-deepseek-thef...


Which is a wildly hypocritical tack for them to take considering how all their models were created, but I certainly wouldn’t be surprised if they did.

> I don't understand how we're still using fossil fuels.

Global politics.

Switching to renewables is seen as capitulation to China because of their lead in tech in this area, especially when you consider that renewables generally introduce battery dependence.

They don't even try to hide this anymore. Watch US Secretary of Commerce Howard Lutnick at the WEF:

https://www.youtube.com/watch?v=JY0t0h1gXzk

Explicitly stated: Don't be subservient to China.

Not vocalized, but the obvious alternative: Instead be subservient to the USA and various allied Persian Gulf (and hijacked Latin American) countries who will keep pushing the petrol alternative until it literally runs dry, even if they have to do it at gunpoint.


The general approval the average person has for tech/software workers has declined precipitously over the last decade (IMO for some pretty good reasons; as an overall industry we've done a lot of societal harm), which just adds to how cosmically funny it is that as a group we are likely to experience one of the quickest mass career disruptions from AI.

> Support maybe?

LLMs seem likely to kill or at least vastly weaken the support model.


Tangentially related: https://news.ycombinator.com/item?id=46527950 (Creators of Tailwind laid off 75% of their engineering team)

And one of the stated reasons behind it was that people weren't buying support or templates etc. from Tailwind because they were essentially getting both from LLMs.


> "to allow them to continue, in effect, with their life.”

"in effect" doing a lot of heavy lifting there.


There are a few more steps than that for someone who hasn't yet launched a Play Store app. When I launched a hobby app I had to find and maintain 20 unique testers (they have since reduced it to 12, I think? it was 20 back in 2024) willing to test the app for 2 weeks before Google let me list it on the Play Store.

Though even this wasn't that hard... I easily found more volunteer testers than I needed on a relevant subreddit (the app is a PvP stat tracking app for a videogame, and I found plenty of people willing to sign up to test my app on a subreddit dedicated to that videogame).

That said, because of the unique-testers requirement it was a much bigger pain in the ass to get listed on the Play Store than on the Apple App Store (my app is written with Kotlin Multiplatform and supports Android/iOS/Windows/WASM).


Wow, it has changed a lot since the last time I did it. I wonder if it has to do with the type of app or the permissions it requests. Back then (10 years ago, so way back then), payment was all that was required. You had to wait for some automated reviews and tests, and potentially a manual one which could flag issues depending on the permissions you requested, but it was straightforward. Apple, on the other hand, always had a manual review and was quite strict in what they would accept.

It is a requirement for all apps (my app just uses the most basic permissions for internet access and nothing else) if it is the first listing a new developer account is putting up on the store.

I think Google did it this way to put some sort of filter on the volume of submissions without having to spend much time manually reviewing everything, foisting the work onto the developer.


I'm a new Android developer, and it does require the 14 beta testers, at least as of last week when I tried to get my app published. So far I've printed flyers and asked friends, but I'm definitely struggling to get enough testers. Almost everyone I know has an iPhone, and the few people with Android (apart from myself) don't collect orchids, so my app has no relevance to them.

Apple was much easier, pay $99 and 1 week later it was published.


This is actually a pretty big hurdle for me. I don't use social media, and while I have tried to post on Reddit, it's nearly impossible to get past the new-account filters these days.

That's the exact approach I want to take though, Kotlin Multiplatform.


> I was listening to an advertisement at the time of my near-death experience.

You'll probably never forget that advertisement, which is an exciting business opportunity for Waymo.

They could partner with Spotify and other media content partners so that the Waymo can generate an adrenaline-rush near crash experience when a premium advertiser's ad is playing. /s (hopefully)


This is one of those comments that made me laugh nervously. It's straight out of Ubik or another PKD novel, which probably means it's less than 5 years away from being real.

If there’s a torment nexus to be built they’ll build it.

Might be Orhan Pamuk's or J.G. Ballard's mantle to be picked up.

> Google have more cash to burn in the AI race so can be more forgiving today in how their codex plans are used.

Even with the larger cash pile to burn, Google is in the middle of their own controversy around what many feel is a rug-pull in how Gemini "AI credits" work and are priced.

See:

https://www.theregister.com/2026/03/12/users_protest_as_goog...

https://old.reddit.com/r/google_antigravity/comments/1rv4cec...

etc


> Our token usage and number of lines changed will affect our performance review this year.

The AI-era equivalent of that old Dilbert strip about rewarding developers directly for fixing bugs ("I'm gonna write me a new minivan this afternoon!"); just substitute intentional bug creation with setting up a simple agent loop to burn tokens on random unnecessary refactoring.


> We won’t hire anybody moving forward who doesn’t have hands-on agentic programming experience.

This doesn't make a lot of sense to me even as someone who uses agentic programming.

I would understand not hiring people who are against the idea of agentic programming, but I'd take a skilled programmer (especially one who is good at code review and debugging) who never touched agentic/LLM programming (but knows they will be expected to use it) over someone with less overall programming experience (but some agentic programming experience) every single time.

I think people vastly oversell using agents as some sort of skill in its own right when the reality is that a skilled developer can pick up how to use the flows and tools on the timescale of hours/days.


I suspect it’s not about agentic coding being a special skill, and more about why a competent programmer wouldn’t have tried it by this point, and whether that is a sign of ideological objections that could cause friction with the team. Not saying I agree with that thinking, but I definitely see why a hiring manager could think that way.

I can't get into that hiring manager's head. It shouldn't matter, if the candidate can deliver business value. That's what you are hiring them for. You're not hiring them to burn LLM tokens, you're hiring them to create business value. Why would you care if he does it by hand-coding, using an LLM, or chanting magic spells at the computer?

I was only granted permission to use it a few weeks ago and haven’t had time to set it up yet

> why a competent programmer wouldn’t have tried it by this point

What does one have to do with the other? Since when is following every fad a prerequisite for competence?


Don't shoot the messenger; I'm just telling you how some hiring managers might think, not endorsing the opinion, and it's definitely not something I consider in my own hiring.

I will say it's a little weird to frame it as "every fad" though. Do you really not see any net new or lasting utility for software engineering in AI tools? If not then more power to you, but software engineering being a fast-moving field where there are (fair or unfair) expectations to keep up is nothing new.


I certainly keep an eye on these developments, but I think the jury is still out on how useful/beneficial they actually are in practice. Generating more code in less time is not a useful measure of productivity for me.

Agree the jury is still out, but "More code in less time" is a shallow strawman. The better question is what is it good at and what is it not good at, and what are the ways to best leverage those capabilities. I've seen enough use cases from enough engineers now that I firmly believe anyone saying "nope never useful" is sticking their head in the sand.

If you aren't taking advantage of it, you are not a competent software engineer in 2026.

On the contrary. That's the only kind of competent software engineer in 2026. Competent engineers don't hand things off to the tool that generates terrible code really quickly.

Many companies cannot take advantage of it. Not everyone is making toy CRUD web applications to help consumers purchase things they don't want. Some people are making safety critical applications, and many more are making highly sensitive applications.

At my job, we just got agents. Because we had to self-host them in our new data center. Our product isn't the kind that can be used with Claude or Gemini, like, legally.


So you just said that you couldn't use coding agents because you are doing "very important things," and you clutch your pearls about what other people are doing, but your company is in fact using coding agents…

Claude has been a big boost to my sense of competency. I get to point out so many poor solutions in slop PRs now

Right. Using Claude Code & friends is not some esoteric skill that needs years in the trenches to learn which magical incantations to utter.

You prompt it. That's it. Yes, there are better and worse ways of prompting; yes, there are techniques and SKILLs and MCP servers for maximizing usability, and yes, there are right ways to vibe code and wrong ways to vibe code. But it's not hard. At all.

And the last person I want to work with is the expert vibe coder who doesn't know the fundamentals well enough to have coded the same thing by hand.


Yeah, so they'll take someone who has two months of hands-on experience with Claude Code, just not someone with zero? Come on, I'll take a great programmer with zero who knows they'll need to use it over a mediocre programmer who's been doing it since Claude Code was released, and I expect to be better off for doing so within 2 weeks.
