
Overall this is very good, but I have one specific note: Lesson 6 says "LLMs aren't conscious."

I think I get what you're saying there - they are not conscious in the same way that humans are - but "consciousness" is a highly debated term without a precise definition, and correspondingly philosophers have no consensus on whether machines in general are capable of it. Here is one of my favorite resources on that, the Stanford Encyclopedia of Philosophy's page on the Chinese Room Argument:

https://plato.stanford.edu/entries/chinese-room/

Things that appear conscious, or that appear to understand a language, are very hard to distinguish from things that actually are conscious or actually do understand.

Again, I think I get the intended point - some people interact with ChatGPT and "feel" there is another person on the other side, someone who experiences the world like they do. There isn't. That is good to point out. But that doesn't mean machines in general, and LLMs specifically, can't be conscious in some other manner - just as insects aren't conscious the way we are, but might be in their own way.

Overall I think the general claim "LLMs aren't conscious" is debatable on a philosophical level, so I'd suggest either defining things more concretely or leaving it out.



Philosophy aside - how can an LLM be conscious without a memory or any manifestation in the real world? It is a function that, given an input, returns an output and stops existing afterwards. You wouldn't argue that f(x) = x^2 is conscious, would you?

I would maybe accept debates about whether, for example, ChatGPT (the whole system that stores old conversations and sends the history along with the current user entry) is conscious - but just the model? Isn't that like saying the human brain (just the organ, lying on a table) is conscious?
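
To make the "just a function" point concrete, here is a minimal sketch in Python. The generate() stub is a hypothetical stand-in for the model, not any real API; the point is only the shape of the system: the model itself is a pure function of its input, and any appearance of memory comes from the surrounding application re-sending the conversation history each turn.

    # Hypothetical stand-in for a model's forward pass: a pure,
    # stateless function of its input. Nothing persists between calls.
    def generate(prompt: str) -> str:
        return f"[completion conditioned on {len(prompt)} chars of context]"

    # The "whole system": it stores old turns and re-sends them with
    # each new user message. Any apparent memory lives here, not in
    # generate().
    class ChatSession:
        def __init__(self) -> None:
            self.history: list[str] = []

        def send(self, user_message: str) -> str:
            self.history.append("User: " + user_message)
            reply = generate("\n".join(self.history))
            self.history.append("Assistant: " + reply)
            return reply

Between calls to send(), nothing of the "conversation" exists inside generate() at all; delete the history list and the continuity is gone.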


There's a great exploration of this concept in Permutation City, a science fiction novel by Greg Egan. In the book, a deterministic human brain is simulated (perfectly) in random-access order. This thought experiment addresses all three of your arguments.

I don't see why something that doesn't exist some of the time inherently couldn't be conscious. Saying that something's output is a function of its inputs also doesn't seem to preclude consciousness. Some humans don't have persistent memory, and every human (so far) has not existed for 99.99999999% of history.

I'm not trying to claim a particular definition of consciousness, but I find the counterarguments you're presenting uncompelling.


It is true that human consciousness is continuous over time, but maybe some animals have very little of that?

Or, to look at it like Black Mirror, if you upload your consciousness into a machine, are you not conscious if it pauses the simulation for a moment? Perhaps you would have no memory of that time (like in Severance), but you could still be conscious at other times.

I do agree that a model at rest, just sitting on a hard drive, doesn't seem capable of consciousness. I also agree x^2 is not conscious. But the problem, philosophically, is actually separating those cases from things we know are conscious. The point of Searle's Chinese Room argument is that he thinks no program - not x^2, not a super-AI that passes the Turing Test - truly "thinks" (experiences, understands, feels, is conscious). But that position seems really hard to defend, even if it gives the "right" answer for x^2.



