1) The AI code maintenance question - who will maintain the AI-generated code?
2) The true cost of AI. Once the VC/PE money runs out and companies charge the full cost, what would happen to vibe coding at that point?
I think this post is a great example of a different point made in this thread. People confuse vibe-coding with LLM-assisted coding all the time (no shade to you, OP). There is an implied bias that all LLM code is bad, unmaintainable, incomprehensible. That's not necessarily the case.
1) Either you, the person owning the code, or you + LLMs, or just the LLMs in the future. All of them can work, and they can work better with a bit of prep work.
The latest models are very good at following instructions. So instead of "write a service that does X" you can use the tools to ask for specifics (e.g. write a modular service that uses concept A and concept B to do Y. It should use x y z tech stack. It should use this ruleset, these conventions. Before testing, run these linters and these formatters. Fix every env error before testing. etc.). A rough sketch of how those checks can be enforced is below.
That's the main difference between vibe-coding and LLM-assisted coding. You get to decide what you ask for. And you get to set the acceptance criteria. The key point that non-practitioners always miss is that once a capability becomes available in these models, you can layer it on top of previous capabilities and get a better end result. Higher instruction adherence -> better specs -> longer context -> better results -> better testing -> better overall loop.
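Here is that sketch: a minimal acceptance gate for generated changes, assuming a Python project that uses ruff and pytest (the tool choices and commands are my assumptions, not anything prescribed above); swap in whatever your stack actually uses:

    # Minimal sketch of a human-defined acceptance gate for LLM-generated changes.
    # Tool choices (ruff, pytest) and flags are assumptions; substitute your own
    # formatters, linters, and test runner.
    import subprocess
    import sys

    CHECKS = [
        ["ruff", "format", "--check", "."],  # formatting conventions
        ["ruff", "check", "."],              # lint ruleset
        ["pytest", "-q"],                    # the acceptance tests you wrote
    ]

    def change_is_acceptable() -> bool:
        """Return True only if every human-defined check passes."""
        for cmd in CHECKS:
            if subprocess.run(cmd).returncode != 0:
                print(f"rejected: {' '.join(cmd)} failed", file=sys.stderr)
                return False
        return True

    if __name__ == "__main__":
        sys.exit(0 if change_is_acceptable() else 1)

The specific tools don't matter; what matters is that the acceptance criteria are yours, not the model's.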
2) You are confusing the fact that some labs subsidise inference costs (for access to data, usage metrics, etc.) with the true cost of inference for a given model size. You can already get a good indication of what the cost is today for any given model size. Third-party inference shops exist today, and they are not subsidising the costs (they have no reason to). You can do the math as well and figure out an average cost per token for a given capability. And those open models are out, they're not gonna change, and you can get the same capability tomorrow or in 10 years (and likely at lower cost, since hardware improves, the inference stack improves, etc.).
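To make "do the math" concrete, here's a back-of-the-envelope sketch; every number in it is an illustrative assumption, not a real price or benchmark:

    # Rough cost-per-token estimate from rented hardware.
    # All numbers are illustrative assumptions.
    GPU_HOURLY_COST_USD = 2.50   # assumed rental price for one GPU-hour
    TOKENS_PER_SECOND = 1_500    # assumed aggregate throughput with batched inference

    tokens_per_hour = TOKENS_PER_SECOND * 3_600
    cost_per_million_tokens = GPU_HOURLY_COST_USD / tokens_per_hour * 1_000_000
    print(f"~${cost_per_million_tokens:.2f} per million tokens")  # ~$0.46 with these assumptions

Plug in the third-party rates and throughput you actually observe for a given model size and you get a floor for what unsubsidised inference costs.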
Perhaps thinking about AI generated code in terms of machine code generated by a compiler helps. Who maintains the compiled program? Nobody. If you want to make changes to it, you recompile the source.
In a similar fashion, AI generated code will be fed to another AI round and regenerated or refactored. What this also means is that in most cases nobody will care about producing code with high quality. Why bother, if the AI can refactor ("recompile") it in a few minutes?
AI assists the maintenance. A lot of posts seem to assume that once the code is committed the AIs, what, just go away? If you can write a test for a bug, it can likely be either fully or partially fixed by an AI even today.
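As a sketch of that workflow: pin the bug down as a failing test first, then hand the failure to the model. The module and the bug below are hypothetical, for illustration only:

    # A failing test that captures the bug becomes the spec the AI has to satisfy.
    # `myshop.pricing.parse_price` and its bug are hypothetical.
    from myshop.pricing import parse_price

    def test_parse_price_handles_thousands_separator():
        # Bug report: "1,299.99" currently comes back as 1.0
        assert parse_price("1,299.99") == 1299.99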
But where are they racing to? If AGI happens, capitalism is over. If AGI doesn't happen, they'll have wasted a massive amount of resources chasing a white elephant, and these companies are over.
You can’t ride this out in survival bunkers or on islands, and thinking otherwise is a pipe dream. We don’t have a true model of what this might look like, but if there’s extreme instability then wealth doesn’t serve as a safety measure; it becomes a target. You need the backing of governed armies to protect status and wealth, but in some proto-civilization model there will just be warring factions with bunker busters and maybe nukes going at each other. They’ll eventually form treaties and merge into city-states, repeating the same trend toward nation-states and democracy. Just skip the dumb bloodshed in the middle and settle on Nordic socialism from the get-go.
You're touching on the core tension in Meta's strategy. I think you're partially right, but there's more to it.
On "replacing expensive humans" agree that's part of it, but the bigger play is augmenting existing products. Meta's Q3 2025 guidance shows ad revenue still growing 21.6% YoY. They're using AI to make existing ads more effective (better targeting, higher conversion), not replacing the ad model entirely.
On the moat question: this is where the infrastructure spending makes sense. You're right that wrapping an LLM has no moat, but owning the infrastructure to train and serve your own models does. Meta has three advantages: (1) 3B+ daily users generating training data competitors can't access, (2) owning 2GW of infrastructure means $0 marginal cost for inference vs paying OpenAI/Anthropic, and (3) AI embedded in Instagram/WhatsApp/Facebook is stickier than a standalone chat app.
On ads behind a chat interface: this is the real risk. But Meta's bet seems to be: short term, AI improves existing ad products (already working); mid term, AI creates new surfaces for ads (AI-generated content, business tools); and long term, if chat wins, Meta wants to own the chat interface (Meta AI), not lose it to ChatGPT.
The $75B question is whether they're building a moat or just burning cash on commodity infrastructure. Time will tell, but the data advantage plus vertical integration gives them a shot.
What's your take? Do you think the data moat is real, or can competitors train equally good models on synthetic/public data?
The tax advantage is only available for people that itemize. Only about 10% of taxpayers do so. Inflation protection is very real and important though.
The main benefit of YC startups is not that they have 10x engineers but rather that they are starting from scratch; hence, the AI works well in a greenfield project.
Enterprise customers are a totally different ball game. Ninety percent of it is critical legacy code, and some decisions are not made to optimize dev time.
Also, for a Y Combinator startup, let's say the AI introduces a bug. Since there are no customers, no one cares.
Now imagine the same AI introduces a bug in the flight reservation system that was written 20 years ago.
I agree it's not about the 10x engineers or the greenfield. I think YC's selection process is still focused on finding distinguished individuals, but within two specific constraints.
The competition for the big LLM AI companies is not other big LLM AI companies, but rather small LLM AI companies with good-enough models. This is a classic innovator's dilemma.
For example, I can imagine a team of cardiologists creating a fine-tuned LLM.
The cardiologist checks the ECG, compares it with the LLM's results, and checks the differences. If it can reduce the error rate by like 10%, that's already really good.
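A toy sketch of that "check the differences" step, where the model's reads only ever get compared against the cardiologist's, never substituted for them (the IDs and labels below are made up):

    # Toy sketch: surface only the reads where the model and the cardiologist
    # disagree, so those get a closer look. Data and labels are invented.
    cardiologist = {"ecg_001": "normal", "ecg_002": "afib",   "ecg_003": "normal"}
    model        = {"ecg_001": "normal", "ecg_002": "normal", "ecg_003": "normal"}

    disagreements = [ecg for ecg, label in cardiologist.items() if model.get(ecg) != label]
    print(f"{len(disagreements)} of {len(cardiologist)} reads flagged for review: {disagreements}")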
My current stance on LLMs is that they're good for stuff that is painful to generate but easy to check (for you). It's easier/faster to read an email than to write it. If you're a domain expert, you can check the output, and so on.
The danger is in using it for stuff you cannot easily check, or trusting it implicitly because it is usually working.
> trusting it implicitly because it is usually working
I think this danger is understated. Humans are really prone to developing expectations based on past observations and then not thinking very critically about or paying attention to those things once those expectations are established. This is why "self driving" cars that work most of the time but demand that the driver remain attentive and prepared to take over are such a bad idea.
> The cardiologist checks the ECG, compares it with the LLM's results, and checks the differences.
Perhaps you're confusing the acronym LLM (Large Language Model) with ML (Machine Learning)?
Analyzing electrocardiogram waveform data using a text-predictor LLM doesn't make sense: No matter how much someone invests in tweaking it to give semi-plausible results part of the time, it's fundamentally the wrong tool/algorithm for the job.