
It feels like every major lab is saying the same thing:

https://darioamodei.com/machines-of-loving-grace
https://www.wsj.com/video/events/the-race-for-true-ai-at-goo...

Even folks _leaving_ OpenAI, who have no incentive to drive hype, are saying that we're very close to AGI. https://x.com/sjgadler/status/1883928200029602236

Even folks like Yoshua Bengio and Hinton are saying we're close to it. The models keep getting better at an exponential.

How much evidence does one need to dispel the "this is OpenAI/sama hype" argument?



> The models keep getting better at an exponential.

Isn't it the opposite? Marginal improvements require exponentially more investment, if we believe Altman. AI is expanding into different areas, and lots of improvements have been made in less saturated fields, but performance on older benchmarks has plateaued, especially relative to compute costs.

Even if you focus on areas where growth is rapid, the history of technology shows many, many examples of rapid growth hitting different bottlenecks and stopping. Futurists have predicted common flying cars for decades and decades, but it'll be a long, long time before helicopters are how people commute to work. There are fundamental physical limitations to the concept that technological advancement does not trivialize.

Maybe the problems facing potential AGI have relatively straightforward technological solutions. Maybe, like neural networks already have shown, it will take decades of hardware advancements before advancements conceived of today can see practice. Maybe replicating human-level intelligence requires hardware much closer to the scale of the human brain than we're capable of making right now, with a hundred trillion individual synapses each more complex than any neuron in an artificial neural network.


Sam plainly wrote that intelligence scales with the log of training resources, but that was presumably written in the context of GPT-4-style LLMs. The intelligence gains we're seeing right now are not the result of a 100x increase in traditional training resources, but rather of new ways of training and agentic processes.
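
To make that concrete with a toy sketch (the function, the log base, and the numbers here are made up for illustration, not taken from Sam's post):

    import math

    # Toy model of "intelligence scales with the log of training resources":
    # if capability ~ log10(compute), each equal step in capability needs
    # roughly 10x more compute.
    def capability(compute_flops: float) -> float:
        return math.log10(compute_flops)  # arbitrary base and units

    for flops in (1e23, 1e24, 1e25, 1e26):
        print(f"{flops:.0e} FLOPs -> toy capability {capability(flops):.1f}")

    # A 100x increase in compute moves the toy score by only 2 points,
    # which is the diminishing-returns point being made above.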


These are all people running AI labs! They want investment, and what better way to get investment than to tell people you're going to create Terminator? The people leaving OpenAI are joining other labs – their livelihoods depend on AI companies receiving investment: "it is difficult to get a man to understand something, when his salary depends on his not understanding it".

> The models keep getting better at an exponential [sic].

We don't know if this is true. A lot of growth that appears exponential is often quadratic (https://longform.asmartbear.com/exponential-growth/) or follows a logistic function (e.g. Moore's law).
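
As a rough illustration (a toy sketch, not from the linked article): over a short early window, exponential, quadratic, and logistic curves are hard to tell apart, and they only diverge later.

    import numpy as np

    t = np.linspace(0, 3, 7)  # a short "early" observation window

    exponential = np.exp(t)                  # keeps compounding forever
    quadratic   = 1 + t + t**2 / 2           # polynomial, eventually much slower
    logistic    = 10 / (1 + 9 * np.exp(-t))  # saturates at a ceiling of 10

    for name, y in [("exp", exponential), ("quad", quadratic), ("logistic", logistic)]:
        print(f"{name:9s}", np.round(y, 2))

    # All three start near 1 and rise steeply at first; they only separate
    # later, which is why early "exponential" growth claims are hard to verify.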

Additionally there's a LOT of benchmark gaming going on, and a lot of the benchmark solving is not down to having a process that actually solves the problems; it just turns out that the problems already kind of lie in the span of text on the internet.


I feel insane cognitive dissonance when I read a comment like this. I know/hope what you're saying is in good faith and you aren't trolling. Yet my own experience on how good these models have become and how rapidly they're improving makes me feel like we're talking about 2 completely different things.

Screw the benchmarks; it feels insane how much utility these models already provide in my life, and they keep getting better. I guess all my problems are simple and "lie in the span of text on the internet", but the models are still extremely valuable to me.


People are capable of having really deep conversations with ELIZA (a program that just asked questions about things you said). I think there's a kind of "ooh, it's language, it must be clever" reaction, which I think is mistaking the symptom for the effect.

I'm not denying that the large language models may have some (marginal) utility, I'm saying that they're not going to magically turn into Skynet, no matter how much people dream.

I suspect they are not going to have as many applications as people claim: we are a few years in now, and the applications are trailing way behind the huge level of investment in datacenter infrastructure. There's some good coverage of this on FT Alphaville.


I like AI, and I use it every day. I'm not a software dev, but I do write software professionally, and I also use AI for designing and architecting solutions. I feel the technology is eminently useful, and it's made me maybe 20-30% more productive, which is massive. I can reconcile all this without believing a lot of the hype and outlandish claims, and without feeling any cognitive dissonance.


> How much evidence does one need to dispel the "this is OpenAI/sama hype" argument?

To dispel the AGI hype argument, we would need exactly one piece of evidence that machine AGI exists, or that we know how to build it. Until then, claiming AGI is imminent is an exaggeration, otherwise known as hype. Or maybe it's hope, but sama suggests it should be an expectation.



