Hacker News

They are reasoning like a child. Within a year or two, they will be reasoning like an adult.


No. It is a computer program which uses statistics to generate plausible text. It does not do any form of reasoning, at all, childlike or otherwise.
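For the curious, "uses statistics to generate plausible text" can be sketched in a few lines. This toy bigram model is nothing like a real LLM (the vocabulary and probabilities below are made up for illustration), but the autoregressive loop, predict a distribution over the next token and sample from it, is the same basic shape:

```python
import random

# Toy stand-in for a language model: a map from the current token to a
# probability distribution over the next token. A real LLM conditions on
# the whole context with billions of learned parameters, but the
# generation loop below is structurally the same.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, steps, seed=0):
    """Autoregressive sampling: repeatedly look up the distribution for
    the last token and draw the next token from it."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(steps):
        dist = BIGRAMS.get(tokens[-1])
        if not dist:  # no known continuation: stop generating
            break
        words, probs = zip(*dist.items())
        tokens.append(rng.choices(words, weights=probs)[0])
    return " ".join(tokens)

print(generate("the", 3))
```

Whether that loop, scaled up enormously, counts as "reasoning" is exactly what this thread is arguing about; the sketch only shows the mechanism, not the verdict.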


You are drawing bad conclusions from whatever you take "generate plausible text" to mean.


Maybe you're the one drawing bad conclusions.


We will see who was drawing bad conclusions in a couple of years. Whatever is said here won't change that.


I'm not making any predictions for the future. Just talking about what we currently have.


Under that premise, whatever our brains are doing won't count as reasoning either.

I'd suggest you look into modern neuroscience and topics such as predictive coding if you're interested in refining your views.


Our brains work nothing like LLMs do.


Researchers in ML and neuroscience disagree with you.

You have a superficial grasp of the topic. Your refusal to engage with the literature suggests an underlying insecurity regarding machine intelligence.

Good luck navigating this topic with such a mental block; it's a great way to remain befuddled.

> in 2020 neuroscientists introduced the Tolman-Eichenbaum Machine (TEM) [1], a mathematical model of the hippocampus that bears a striking resemblance to transformer architecture.

https://news.ycombinator.com/item?id=38758572


...what? Underlying insecurity? You think I'm afraid of computers being smarter than me? Sorry but that ship sailed a long time ago, I can't even beat a chess bot from the 90s.

The fact that someone created a mathematical model does not mean it is accurate, and even if a small piece of our brain conceptually resembles an ML model, that does not mean they are equivalent.

It is an indisputable fact that our brains are completely, fundamentally different from computers. A CPU is just a bunch of transistors; our brains use both electrical and chemical signals. They are alive, and they can form new structures as they need them.

You can link fancy papers and write condescending replies all you want; the fact is that ChatGPT fails at extremely basic tasks precisely because it has absolutely no understanding of the text it spits out, even when the model contains all the knowledge necessary to solve them and much more.

I'm not saying we'll never make AGI; I'm simply saying LLMs are not it. Not on their own, anyway. I don't understand why you people are so opposed to that simple fact when the evidence is staring you in the face.



