Are OpenAI, Anthropic, et al. seriously building towards “world models”? I haven’t seen any real evidence of that. It seems more like they are all in on milking LLMs for all they are worth.
It is quite literally losing money at an unsustainable rate. They need a path to profitability; otherwise this is a massive boondoggle for all of the investors.
Okay, let’s calm down a bit. “Extremely important” is hyperbolic. This is novel, sure, but practically speaking, jailbreaking an LLM to say naughty things is basically worthless. LLMs are not good for anything of worth to society other than writing code and summarizing existing text.
> "While AlphaEvolve is currently being applied across math and computing, its *general* nature means it can be applied to any problem whose solution can be described as an algorithm, and automatically verified. We believe AlphaEvolve could be transformative across many more areas such as material science, drug discovery, sustainability and wider technological and business applications."
Is that not general enough for you? Or not intelligent enough?
Do you imagine AGI as a robot and not as datacenter solving all kinds of problems?
> Do you imagine AGI as a robot and not as datacenter solving all kinds of problems?
AGI means it can replace basically all human white-collar work; AlphaEvolve can't do that, while average humans can. White-collar work is mostly done by average humans, after all; if average humans can learn it, then so should an AGI.
An easier test is that the AGI must be able to beat most computer games without being trained on those games. Average humans can beat most computer games without anyone telling them how to do it; they play and learn until they beat the game 40 hours later.
AGI was always defined as an AI that could do what typical humans can do, like learn a new domain to become a professional, or play and beat most video games. If the AI can't study its way into a profession, then it's not as smart or general as an average human. So unless it can replace most professionals, it's not an AGI, because you can train a human of average intelligence to become a professional in most domains.
AlphaEvolve demonstrates that Google can build a system which can be trained to do very challenging intelligent tasks (e.g. research-level math).
Isn't it just an optimization problem from this point? E.g. right now training takes a lot of hardware and time. If they make it so efficient that training can happen in a matter of minutes and cost only a few dollars, won't that satisfy your criterion?
I'm not saying AlphaEvolve is "AGI", but it looks odd to deny it's a step towards AGI.
I think most people would agree that AlphaEvolve is not AGI, but any AGI system must be a bit like AlphaEvolve, in the sense that it must be able to iteratively interact with an external system towards some sort of goal stated both abstractly and using some metrics.
I like to think that the fundamental difference between AlphaEvolve and your typical genetic / optimization algorithms is the ability to work with the context of its goal in an abstract manner, instead of just the derivatives of the cost function with respect to the inputs, and thus to tackle problems with mind-boggling dimensionality.
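To make that contrast concrete, here's a minimal sketch (in Python, with illustrative names; none of this is AlphaEvolve's actual interface) of the kind of loop being described: candidates are rewritten by a model that sees the goal as plain text, and scored by an automated verifier, with no gradients anywhere. The LLM call is faked with a random character mutation so the toy runs end to end:

    import random
    import string

    GOAL = "sort a list"  # the goal, stated abstractly as text

    def llm_propose(parent: str, goal: str) -> str:
        # Stand-in for an LLM call: a real system would send `parent`
        # and `goal` to a model and get back a rewritten candidate.
        # Here we mutate one character so the demo actually runs.
        i = random.randrange(len(parent))
        return parent[:i] + random.choice(string.ascii_lowercase) + parent[i + 1:]

    def evaluate(candidate: str) -> int:
        # Automated verifier: scores a candidate by running/checking it,
        # no derivatives needed. Toy metric: overlap with a reference.
        reference = "sorted(xs)"
        return sum(a == b for a, b in zip(candidate, reference))

    def evolve(seed: str, generations: int = 200, pop_size: int = 8):
        population = [(evaluate(seed), seed)]
        for _ in range(generations):
            # Tournament selection: take the best of a few random parents.
            _, parent = max(random.sample(population, min(3, len(population))))
            child = llm_propose(parent, GOAL)
            population.append((evaluate(child), child))
            population = sorted(population, reverse=True)[:pop_size]
        return max(population)

    print(evolve("x" * 10))

The point is that nothing in the loop differentiates anything; the verifier only has to run the candidate and return a score, which is what lets this scale to problems where gradients don't exist.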
The "context window" seems to be a fundamental blocker preventing LLMs from replacing a white collar worker without some fundamental break through to solve it.
Please put two and two together for the rest of us clowns who can’t quite figure it out. State your beliefs plainly so we can come up with some solutions.
I can’t remember the exact date. It’s all part of a “tradition” going back to at least the 90s. First it was PC, then SJWs, then woke, then DEI. Who knows what they will call it next. It’s always complaining about the same thing, just with new verbiage.
The inconvenient truth that the vast majority of adults refuse to acknowledge is that there is no safe level of alcohol. Any drink is going to damage you, marginal though it may be. Unfortunately the healthiest thing you can do is simply never drink alcohol.
I hear this kind of phrasing frequently in the discourse nowadays, but it doesn't seem like a useful framing to me. Is there a safe amount of chocolate? A safe amount of sex? Are we supposed to stop enjoying every pleasure of life as soon as someone does a large study with high enough statistical power to show some negative effect on health, no matter how small?
The question is whether the enjoyment we derive from these things is worth the risk, not whether there is a "safe level", whatever that means.
The phrasing is important, as the discussion about personal decisions needs to start from acknowledging that it’s a poison, which has not been the case for the last few centuries. We’ve had a narrative that some wine and beer is safe, and that was incorrect.
Chocolate is beneficial in moderation, as far as we know, so I'm not sure why you brought it up. So is consensual sex with a repeat partner.
First, it's an easy way to test censorship. Second, you might flip the question: why is the Chinese govt so obsessed that they still block all mention of the event?
The question you should ask yourself is: why are these Chinese labs so "obsessed with a decades old event" that they need to specifically train their models to ignore it in the training corpus?
It really is one of the greatest photographs of all time.
If it wasn't for Tank Man, this would all have been forgotten in the West by September 1989.
We in the West also don't know enough about China to realize it is like bringing up the Kent State shootings at every mention of the US National Guard.
As if there were an article about the US National Guard helping flood victims in 2025 and someone had to mention,
"That is great but what about the Kent State shootings in 1970?!?"
Current right wing propaganda utilizes a strategy of "flood the zone", based on the "Russian firehose" approach.
This means injecting all talking points all of the time, and disrupting any criticism anywhere.
LLMs make this easier and more effective, and HN is absolutely, 100% owned by bots. And I mean that in a literal sense, given who YCombinator funds and who heads it.
A couple of points:
1. I'm not a bot. I'm just politically right; we are around, we just hide because it's still career suicide in tech. I know you don't see the right as human, but most of us are people just like you, not LLMs.
2. I just like to chat on HN; there is no mass-organized right-wing process for chatting with people, or at least none that I am a part of. To think that everyone you disagree with must be a bot or part of a conspiracy is both dehumanizing and just... an odd way to see the world.
3. The OP I responded to specifically mentioned the different responses to the deaths of well-known people, mentioning Castro and comparing him to Cheney. That was the context of my response; I'm not sure how this has now veered into organized conspiracy theories.