Are OpenAI, Anthropic, et al. seriously building towards “world models”? I haven’t seen any real evidence of that. It seems more like they are all in on milking LLMs for all they are worth.


I mentioned it in my other comment, but people like Yann LeCun, Demis Hassabis, and Fei-Fei Li do.

There are indications that OpenAI is doing this, but nothing official as far as I know, and I have not heard anything from Anthropic.


It’s become clear that some form of top down total technocratic control like China has implemented is essential for pushing humanity forward.


It is quite literally losing money at an unsustainable rate. They need a path to profitability; otherwise this is a massive boondoggle for all of the investors.


Unfortunately “all of the investors” is in reality the whole world’s markets, due to the disgusting top-heavy nature of the S&P 500 currently.

I have been talking to AI a lot about what portfolios will survive that crash. :)


And what is the conclusion, what will survive?


Stuff like this is what survived the lost decade after the tech bubble crash in 2000:

Gold, treasuries, small cap value.

https://portfoliocharts.com/2021/12/16/three-secret-ingredie...

I think I'll add in some Bitcoin to round it out. Bitcoin and gold seem to take turns going up now and are often inversely correlated; a quick sanity check of that claim is sketched below.

https://www.theblock.co/data/crypto-markets/prices/btc-pears...
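For illustration only, here is a minimal pandas sketch of how one could check the inverse-correlation claim. The prices here are synthetic stand-ins, not data from the linked chart; with real BTC and gold series, rolling correlations near -1 would support the "take turns" idea:

    import numpy as np
    import pandas as pd

    # Synthetic daily "prices" standing in for real BTC and gold data.
    rng = np.random.default_rng(0)
    days = pd.date_range("2024-01-01", periods=365, freq="D")
    btc = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.03, len(days)))), index=days)
    gold = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, len(days)))), index=days)

    # Correlate daily returns over a rolling 30-day window.
    returns = pd.DataFrame({"btc": btc.pct_change(), "gold": gold.pct_change()})
    rolling_corr = returns["btc"].rolling(30).corr(returns["gold"])
    print(rolling_corr.describe())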


Berkshire.


We are still at an early age. Think of it as a human: in the early years it burns money; as it grows older, its ability to make money increases.

Compare it with the .com bubble: most .coms died, and the ones that survived are Google, Amazon, Tencent, etc.


Yes, but you need money to survive until the later stages.


Okay, let’s calm down a bit. “Extremely important” is hyperbolic. This is novel, sure, but practically speaking, jailbreaking an LLM to say naughty things is basically worthless. LLMs are not good for anything of worth to society other than writing code and summarizing existing text.


A censored LLM might refuse to summarize text because it deems it offensive.


An LLM cannot “deem” anything.


I'm not interested in sophistry. You know perfectly well what I mean, and so does everyone else.


I’m not sure you understand what AGI is given the citations you’ve provided.


> "While AlphaEvolve is currently being applied across math and computing, its *general* nature means it can be applied to any problem whose solution can be described as an algorithm, and automatically verified. We believe AlphaEvolve could be transformative across many more areas such as material science, drug discovery, sustainability and wider technological and business applications."

Is that not general enough for you? Or not intelligent?

Do you imagine AGI as a robot, and not as a datacenter solving all kinds of problems?


> Do you imagine AGI as a robot, and not as a datacenter solving all kinds of problems?

AGI means it can replace basically all human white-collar work. AlphaEvolve can't do that, while average humans can. White-collar work is mostly done by average humans, after all; if average humans can learn it, then so should an AGI.

An easier test: an AGI must be able to beat most computer games without being trained on those games. Average humans can beat most computer games without anyone telling them how to do it; they play and learn until they beat the game 40 hours later.

AGI was always defined as an AI that can do what typical humans can do, like learn a new domain to become a professional, or play and beat most video games. If the AI can't study its way to becoming a professional, then it's not as smart or general as an average human; so unless it can replace most professionals, it's not an AGI, because you can train a human of average intelligence to become a professional in most domains.


AlphaEvolve demonstrates that Google can build a system which can be trained to do very challenging intelligent tasks (e.g. research-level math).

Isn't it just an optimization problem from this point? E.g. right now training takes a lot of hardware and time. If they make it so efficient that training can happen in a matter of minutes and cost only a few dollars, won't that satisfy your criterion?

I'm not saying AlphaEvolve is "AGI", but it looks odd to deny it's a step towards AGI.


I think most people would agree that AlphaEvolve is not AGI, but any AGI system must be a bit like AlphaEvolve, in the sense that it must be able to iteratively interact with an external system towards some sort of goal stated both abstractly and using some metrics.

I like to think that the fundamental difference between AlphaEvolve and your typical genetic / optimization algorithms is the ability to work with the context of its goal in an abstract manner instead of just the derivatives of the cost function against the inputs, thus being able to tackle problems with mind-boggling dimensionality.
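For illustration only, here is a minimal toy sketch of that propose/verify/select loop. Nothing in it is AlphaEvolve's actual machinery: the string-matching `score` stands in for a real automatic verifier, and `propose` stands in for an LLM rewriting a candidate given an abstract description of the goal.

    import random

    TARGET = "faster matrix multiply"  # toy goal; real verifiers run benchmarks

    def score(candidate: str) -> int:
        # Automatic verifier: count positions matching the target.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def propose(parent: str) -> str:
        # Stand-in for the LLM step: AlphaEvolve asks a model to rewrite
        # code using the textual context of the goal; here we just
        # mutate one character at random.
        alphabet = "abcdefghijklmnopqrstuvwxyz "
        parent = parent.ljust(len(TARGET))
        i = random.randrange(len(TARGET))
        return parent[:i] + random.choice(alphabet) + parent[i + 1:]

    best = ""
    for _ in range(20000):
        child = propose(best)
        if score(child) > score(best):
            best = child  # keep whatever the verifier prefers
    print(best, score(best))

The selection pressure comes entirely from the verifier; what distinguishes an AlphaEvolve-style system is that the proposal step can read the goal as text instead of needing gradients.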


The "context window" seems to be a fundamental blocker preventing LLMs from replacing a white collar worker without some fundamental break through to solve it.


It's too early to declare something a "fundamental blocker" while there's so much ongoing research.


There's been 'ongoing research' since the '60s.


Isn't the point that DeepMind is producing products providing value to humanity, whereas AGI looks like something that will produce mainly harm?


Please put two and two together for the rest of us clowns who can’t quite figure it out. State your beliefs plainly so we can come up with some solutions.


I can’t remember the exact date. It’s all part of a “tradition” since at least the 90s. First it was PC, then SJWs, then woke, then DEI. Who knows what they will call it next. It’s always complaining about the same thing, just with new verbiage.


The inconvenient truth that the vast majority of adults refuse to acknowledge is that there is no safe level of alcohol. Any drink is going to damage you, marginal though it may be. Unfortunately the healthiest thing you can do is simply never drink alcohol.


> no safe level of alcohol

I hear this kind of phrasing frequently in the discourse nowadays, but it doesn't seem like a useful framing to me. Is there a safe amount of chocolate? A safe amount of sex? Are we supposed to stop enjoying every pleasure of life as soon as someone does a large study with high enough statistical power to show some negative effect on health, no matter how small?

The question is whether the enjoyment we derive from these things is worth the risk, not whether there is a "safe level", whatever that means.


The phrasing is important, as the discussion about personal decisions needs to start from acknowledging that it’s a poison, which was not the case for the last few centuries. We’ve had a narrative that some wine and beer is safe, and that was incorrect.

Chocolate is beneficial in moderation as far as we know, so I'm not sure why you brought it up. So is consensual sex with a repeat partner.


Why are westerners so single-mindedly obsessed with this decades-old event?


First, it's an easy way to test censorship. Second, you might flip the question: why is the Chinese govt so obsessed that they still block all mention of the event?


I don’t get why the government doesn’t recognize the event and then mold it to its narrative, like so many other governments do.

They basically need to give it the Hollywood treatment.

I’m sure a lot of people don’t know that prior to the event, some protesters lynched soldiers and set them on fire.


They do, but prefer to use their own keywords, such as the June 4th incident.


The question you should ask yourself is why these Chinese labs are so "obsessed with a decades-old event" that they need to specifically train their models to ignore the training corpus?


It is because of tankman.

It really is one of the greatest photographs of all time.

If it wasn't for tankman, this would have all been forgotten about in the west by September 1989.

We in the west also don't know enough about China to realize it's like bringing up the Kent State shootings at every mention of the US National Guard.

As if there was an article about the US national guard helping flood victims in 2025 and someone has to mention

"That is great but what about the Kent State shootings in 1970?!?"


Kind of an odd move to pepper the HN thread on Dick Cheney’s death with non-sequitur comments about “the left”


Agreed. Besides, "left" and "right" are particularly meaningless in this context.

Cheney spent his last years being openly embraced by the same people who spent the last few decades playing the part of opposition.


The OP I responded to specifically mentioned the different responses to the deaths of well-known people, mentioning Castro and comparing him to Cheney.


Current right wing propaganda utilizes a strategy of "flood the zone", based on the "Russian firehose" approach.

This means injecting all talking points all of the time, and disrupting any criticism anywhere.

LLMs make this easier and more effective, and HN is absolutely, 100% owned by bots. In a literal sense, too, given who Y Combinator funds and who heads it.


A couple of points. 1. I'm not a bot. I'm just politically right; we are around, we just hide because it's still career suicide in tech. I know you don't see the right as human, but most of us are people just like you, not LLMs.

2. I just like to chat on HN; there is no right-wing mass-organized process for people to chat with others, or at least none that I am a part of. To think that everyone you disagree with must be a bot or part of a conspiracy is both dehumanizing and just... an odd way to see the world.

3. The OP I responded to specifically mentioned the different responses to the deaths of well-known people, mentioning Castro and comparing him to Cheney. That was the context of my response; I'm not sure how this has now veered into organized conspiracy theories.

