
Counterpoint: what is it about scraping the Internet and indexing it cleverly that makes you believe it would lead to the creation of the ability to reason above its programming?

No one in neuroscience, psychology or any related field can point to reasoning or 'consciousness' or whatever you wish to call it and say it appeared from X. Yet we have this West Coast IT cultish thinking that if we throw money at it we'll just spontaneously get there. The idea that we're even 1% close should be ridiculous to anyone rationally looking at what we're currently doing.



> No one in neuroscience, psychology or any related field can point to reasoning or 'consciousness' or whatever you wish to call it and say it appeared from X.

This is not a good argument. Natural systems, the subject of neuroscience/psychology, are much harder to analyze than artificial systems. For example, it's really difficult to study atmospheric gases and figure out Boyle's or Charles's law. But put a gas in a closed chamber, change the pressure or temperature, and these laws are trivially apparent.
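
For reference, these are the standard forms of the two laws (at fixed temperature and at fixed pressure, respectively):

    P_1 * V_1 = P_2 * V_2      (Boyle's law, constant T)
    V_1 / T_1 = V_2 / T_2      (Charles's law, constant P)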

LLMs are much more legible systems than animal brains, and they are amenable to experiment. So, it is much more likely that we will be able to identify what "reasoning" is by studying these systems than animal brains.

P.S. I don't think we are there yet, whatever internet commentators might assert.


Yeah, but following your example/analogy, you have gas vs. gas, whereas here it's brain vs. LLM. So how can we then experiment? It's a simulation at best.


Both jets and birds fly but do it in a completely different way. Who said that there's only one way to achieve reasoning?


This feels like an appropriate place to share this again:

> "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra


Parrots can both fly and talk, what about that!?


This paper may be interesting to some of you:

Discretization of continuous input spaces in the hippocampal autoencoder

https://arxiv.org/pdf/2405.14600


I think it really comes down to the highly nebulous definitions. Even your comment implies that reasoning and consciousness are two names for the same thing, but I'd argue one is already here and the other will never be provable. Reasoning is working through logical steps, much like a program: a set of conditions that get checked, and a logical structure that uses that information to reach a conclusion. That's what sets it apart from gut feelings or emotional thinking; it's a traceable structure with "reasons". I can watch the LLM state base facts out loud, then begin to synthesize them, giving _reasons_ for the choices it's making, culminating in a final conclusion. It's already doing that. That is what I call reasoning. It doesn't mean it's human, and it doesn't mean it's "aware of itself"; it just means it's thinking a train of thought with concrete steps between each car. Consciousness is completely undefinable, useless as a metric, and will never be provably achieved.


I agree that reasoning and consciousness are different; however, what I do not see being discussed by the AI research community is the need to define and then develop "artificial comprehension".

At this point in time, the act of comprehension is a scientific mystery.

I'd say 'consciousness' is the ongoing, ever-present comprehension of the moment: a feedback self-conversation assessing the current situation a being finds itself in. This act requires reasoning, as comprehension is the "sandbox" in which reasoning occurs.

But what is comprehension? It's the instantaneous reverse engineering of observations for verification of reality: is what I observe normal, possible, or a threat? If one cannot "understand" an observation, then the potential that the observation is a threat grows. That "understanding" is reverse engineering the observation to identify its range of possible behavior, and therefore one's safety in relation to that observation.

Comprehension is extremely complex: arbitrary input goes in and a world model with one's safety and next actions comes out.


Thanks for this. Do you have a blog somewhere, preferably with an RSS feed?


I have a never-updated blog, but I'm not an active research scientist. I'm just a plain, ordinary, overeducated guy who's been writing software using AI, across all the things that have been called "AI", for about 45 years. One could say I'm "over-read": at one point I'd read every single Nobel Literature winner, I have finished dozens of authors, and my personal taste is mind-fuck philosophy in narrative fiction - think Clockwork Orange, Philip K Dick, and beatnik literature. I post a lot of my opinions at Quora: https://www.quora.com/profile/Blake-Senftner


I don't care about qualifications or job titles, if I read a solid piece of text that makes me think differently, I want to know more. ;) I bookmarked your Quora page and blog in my RSS reader, so if you ever start blogging... And thanks for pointing to Philip K Dick, I might actually start reading science fiction.


The assumption is that since there is already a neural network that “got there” (our brains), we should be able to achieve the same thing synthetically.

We just need to figure out how to train that network.


Neural networks are a simplification of our brains, not a replication of them. They're just a modeling method that was inspired by how biological neurons work, that's it. It's not 1-to-1 or anything.
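
For what it's worth, the "inspired by" goes about this deep: a standard artificial neuron is just a weighted sum pushed through a nonlinearity. A minimal sketch in plain Python:

    import math

    def artificial_neuron(inputs, weights, bias):
        # The entire "neuron": a weighted sum of inputs, squashed by a nonlinearity.
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

    # Example: two inputs, two weights, one bias -- nothing brain-like about it.
    print(artificial_neuron([0.5, 0.2], [0.8, -0.4], 0.1))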


Furthermore, neurons alone do not lead to consciousness. At the very least, their modulation, mainly by glial cells, is essential as well.

Personally, my money is on quantum coherence within microtubules being the mechanism of conscious experience, with the brain essentially being a quantum/classical hybrid computer.


It may be possible to argue that current work in AI leads to some definition of intelligence, which apparently is often equated with consciousness by some.

My take is that it's just unaware intelligence, like in Peter Watts' book Blindsight. A terrific read and a quite scary prospect.


It's more that if you actually work with LLMs they will display reasoning. It's not particularly good or deep reasoning (I would generally say they have a superhuman amount of knowledge but are really quite unintelligent), but it is more than simply recall.


Waters are often muddied here by our own psychology. We (as a species) tend to ascribe intelligence to things that can speak. Even more so when someone (or something, in this case) can not just speak, but articulate well.

We know these are algorithms, but how many people fall in love or make friends over nothing but a letter or text message?

Capabilities for reasoning aside, we should all be very careful of our perceptions of intelligence based solely on a machine's or algorithm's apparent ability to communicate.


> we should all be very careful of our perceptions of intelligence based solely on a machine's or algorithm's apparent ability to communicate.

I don't think that's merely an irrational compulsion. Communication can immediately demonstrate intelligence, and I think it quite clearly has, in numerous ways. The benchmarks out there cover a reasonable range of measurements that aren't subjective, and there are clear yes-or-no answers to whether the communication is showing real ways to solve problems (e.g. changing a tire, writing lines of code, solving word problems, critiquing essays), where the output proves it in the first instance.

Where there's an open question is whether you're commingling the notion of intelligence with consciousness, or identifying intelligence with AGI, or with "human-like" uniqueness, or some other special ingredient. I think your warning is important and valid in many contexts (people tend to get carried away when discussing plant "intelligence", earlier versions of "AI" like Eliza were not the real deal, and Sophia the robot being "granted citizenship" was a joke).

But this is not a case, I think, where it's a matter of intuitions leading us astray.


> Where there's an open question is in whether you're commingling the notion of intelligence with consciousness

I’m absolutely commingling these two things and that is an excellent point.

Markov chains and other algorithms that can generate text can give the appearance of intelligence without any kind of understanding or consciousness.
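
As a toy illustration of how low that bar is, here's a word-level Markov chain generator; trained on enough text, a few lines like this will happily produce plausible-looking sentences with zero understanding behind them (a rough sketch, nothing more):

    import random
    from collections import defaultdict

    def build_chain(text):
        # Map each word to the list of words that have followed it.
        chain = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def babble(chain, start, length=20):
        word, out = start, [start]
        for _ in range(length):
            if word not in chain:
                break
            word = random.choice(chain[word])
            out.append(word)
        return " ".join(out)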

I’m not personally certain that consciousness is even requisite for intelligence, given that, as far as we know, consciousness is an emergent property stemming from some level of problem-solving ability.


This seems like the classic shifting of goalposts to determine when AI has actually become intelligent. Is the ability to communicate not a form of intelligence? We don't have to pretend like these models are super intelligent, but to deny them any intelligence seems too far for me.


My intent was not to claim communication isn’t a sign of intelligence, but that the appearance of communication and our tendency to anthropomorphize behaviors that are similar to ours can result in misunderstandings as to the current capabilities of LLMs.

glenstein made a good point that I was commingling concepts of intelligence and consciousness. I think his commentary is really insightful here: https://news.ycombinator.com/item?id=42912765


AI certainly won't be intelligent while it has episodic responses to queries with no ability to learn from or even remember the conversation without it being fed back through as context. This is currently the case for LLMs. Token prediction != intelligence, no matter how intelligent it may seem. I would say adaptability is a fundamental requirement of intelligence.
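
Concretely, "fed back through as context" means something like the sketch below: the model itself is stateless, and the whole transcript is re-sent on every turn. The generate() call is a hypothetical stand-in for whatever model API you use, not a real library function.

    history = []  # the only "memory": a transcript kept outside the model

    def chat(user_message, generate):
        history.append(("user", user_message))
        # Re-send the entire conversation so far as the prompt.
        prompt = "\n".join(f"{role}: {text}" for role, text in history)
        reply = generate(prompt)  # hypothetical model call
        history.append(("assistant", reply))
        return reply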


> AI certainly won't be intelligent while it has episodic responses to queries with no ability to learn from or even remember the conversation without it being fed back through as context.

Thank God no one at the AI labs is working to remove that limitation!


And yet, it is still a current limitation and relevant to all current claims of LLM intelligence.


The guy in Memento is clearly still an intelligent human despite having no memory. These arguments always strike me as coming from a "humans are just special, okay!" place. Why are you so determined to find some way in which LLMs aren't intelligent? Why gatekeep so much?


I mean, humans have short-term and long-term memory; short-term memory is just our context window.


Are they displaying reasoning, or the outcome of reasoning, leading you to a false conclusion?

Personally, I see ChatGPT say "water doesn't freeze at 27 degrees F" and think "how can it possibly do advanced reasoning when it can't do basic reasoning?"


I'm not saying it reasons reliably, at all (nor has much success with anything particularly deep: I think in a lot of cases it's dumber than a lot of animals in this respect). But it does a form of general reasoning which other more focused AI efforts have generally struggled with, and it's a lot more successful than random chance. For example, see how ChatGPT can be persuaded to play chess. It still will try to make illegal moves sometimes, hallucinating pieces in the board state or otherwise losing the plot. But if you constrain it and only consider the legal moves, it'll usually beat the average person (i.e. someone who understands the rules but has very little experience), even if it'll be trounced by an experienced player. You can't do this just by memorisation or random guessing: chess goes off-book (i.e. into a game state that has never existed before) very quickly, so it must have some understanding of chess and how to reason about the moves to make, even if it doesn't color within the lines as well as a comparatively basic chess engine.

(Basically, I don't think there's a bright line here: saying "they can't reason" isn't very useful, instead it's more useful to talk about what kinds of things they can reason about, and how reliably. Because it's kind of amazing that this is an emergent behaviour of training on text prediction, but on the other hand because prediction is the objective function of the training, it's a very fuzzy kind of reasoning and it's not obvious how to make it more rigourous or deeper in practice)
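
To make the "only consider the legal moves" part concrete, here's a rough sketch of the kind of harness I mean. Board handling uses the python-chess package; ask_llm_for_move() is a hypothetical stand-in for however you query the model, and this is an illustration, not a benchmark.

    import random
    import chess  # pip install python-chess

    def constrained_move(board, ask_llm_for_move):
        """Ask the model for a move, but only accept a legal one."""
        legal = {board.san(m) for m in board.legal_moves}
        suggestion = ask_llm_for_move(board.fen())  # e.g. returns "Nf3"
        if suggestion in legal:
            return suggestion
        # Illegal suggestion (hallucinated piece, lost track of the position,
        # etc.) -- fall back to a random legal move so the game can continue.
        return random.choice(sorted(legal))

In a loop you'd push each accepted move with board.push_san() and alternate with the opponent's moves.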


This is the most pervasive bait-and-switch when discussing AI: "it's general reasoning."

When you ask an LLM "what is 2 + 2?" and it says "2 + 2 = 4", it looks like it's recognizing two numbers and the addition operation, and performing a calculation. It's not. It's finding a common response in its training data and returning that. That's why you get hallucinations on any uncommon math question, like multiplying two random 5-digit numbers. It's not carrying out the logical operations; it's trying to extract an answer by next-token prediction. That's not reasoning.

When you ask "will water freeze at 27F?" and it replies "No, the freezing point of water is 32F", what's happening is that it's not recognizing the 27 and 32 are numbers, that a freezing point is an upper threshold, and that any temperature lower than that threshold will therefore also be freezing. It's looking up the next token and finding nothing about how 27F is below freezing.

Again, it's not reasoning. It's not exercising any logic. Its huge training data set and tuned proximity matching help it find likely responses, and when it seems right, that's because the token relationships already existed in the training data set.

That it occasionally breaks the rules of chess just shows it has no concept of those rules, only that the next token for a chess move is most likely legal, because most of its chess training data is of legal games, not illegal moves. I'm unsurprised to find that it can beat an average player when it doesn't break the rules: most chess information in the world is about better-than-average play.

If an LLM came up with a proof no one had seen, and it checked out, that wouldn't prove it's reasoning either; it's still next-token prediction that came up with it. It found token relationships no one had noticed before, but those are inherent in the training data, not the product of a reflective intelligence doing logic.

When we discuss things like reinforcement learning and chain of reasoning, what we're really talking about are ways of restricting/strengthening those token relationships. It's back-tuning of the training data. Still not doing logic.


Put more succinctly: if it came up with a new proof in math that was then verified, and you went back and said "no, that's wrong", it would immediately present a different proof, denying the validity of its first proof, because it didn't construct anything logical that it can stand on to say "no, I'm right".


These are all examples of how they're not very good at reasoning, not that they don't reason at all. Being a perfectly consistent logical process is not a requirement for reasoning.


You're begging the question by using the term "reasoning". In what sense are they reasoning if they're not using any logic at all?


I don't think any of us are qualified to tell the difference between exhibiting reasoning and mixing examples taken from the entire internet. Maybe if the training data were small enough to comprehend in its entirety, we could say one way or the other, but as it stands none of us have read the entire internet, and we have no way of finding the Stack Overflow or Reddit conversation that most closely resembles a given chain of thought.


Yes, that's my judgement too from messing with Claude and (previously) ChatGPT. 'Ridiculous' and 'cultish' are Overton-window enforcement more than they are justified.


From its answers I conclude it is already reasoning above its programming. I do not see why someone in neuroscience or psychology would need to say it has appeared, since they do not know what reasoning is any better than the average human.

Reasoning is undefined, but a human recognizes it when it appears. I don't see consciousness as part of that story. Also, whether or not you call it emulated or play-acted reasoning apparently does not matter. The results are what they are.




