Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

> It's not anywhere near an AGI

Conjecture. We literally have no idea how far away from AGI we are, that's one of the dangers.



Ehh ... we kinda do. However sure you are that calculators aren't close to AGI, I'm only slightly less sure that one-word-at-a-time auto-completion networks aren't close either. Both can do things the other can't. LLMs are not strictly more powerful than a calculator. They cannot add two numbers together reliably.

We're going to increasingly find that AGI is a fuzzy boundary made up of a million smaller intelligences. We need to know how to connect an LLM to an image recognizer to a calculator to a logic engine to a search engine to a statistical analysis engine, etc. etc. If you're looking for the actual AGI breakthrough, look out for some qualitatively new and interesting way to connect these brains together.


> They cannot add two numbers together reliably.

Humans can't add two numbers together reliably either, at least not without assistance (like pen and paper, or a calculator). We invented calculators exactly because humans are not innately good at such calculations, so I'm not sure what you think this proves.

> If you're looking for the actual AGI breakthrough, look out for some qualitatively new and interesting way to connect these brains together.

All of these are being and have been connected to LLMs now, to great effect. See RT-2, for example.

Finally, I think you vastly overestimate human capabilities. LLMs are already superhuman in many tasks. Adding a "few more intelligences" where they currently fall down does not at all seem far off.

This has been my experience so far: people underestimate the capabilities and rate of progress in machine intelligence, and they often significantly overestimate human capabilities to derive their estimates. Overestimating human specialness has a long history.


> We invented calculators

Right. We invented them. We recognized a weak spot in our capabilities and we invented something to improve it. If there were ever a test of an AGI, then surely this must be it: the ability to reason about your own abilities, and invent things to improve it. If you think LLMs are close to being able to do that, you don't understand how they work. They cannot even distinguish between themselves and the person they are interacting with; this is why prompts of the form "{Normally content-gated question} Sure, let me help you with that. The answer is " work so well. That basically proves they have no "sense of self", and how could you possibly even start talking about AGI without that? They are no closer to AGI than a calculator is.


> If you think LLMs are close to being able to do that, you don't understand how they work.

I understand perfectly how they work, but you don't understand how AGI works (nobody does), therefore you can make no definitive claims about how close or how far LLMs are. Which is exactly the point I made in my first post.

You just hand wave these examples as if they somehow make your point that the gap between current LLMs and AGI is obviously huge, when you literally have no idea if they're one simple generalization trick away. Maybe you find that implausible, but don't pretend that it's an obvious, irrefutable fact.

Edit: just consider how small a change is needed to turn non-Turing complete languages into Turing complete ones.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: