> It’s as if your brain is actually thousands of brains working simultaneously.
I had this epiphany recently while meditating. While trying to quiet my brain, I realized that there were different parts that were being quite vociferous and just had a lot to say at that particular moment, a la the "inner voice" phenomenon. In this one case I just let them chatter instead of whacking them into submission. And it was then that I realized that consciousness was probably a fairly large collective, like a jellyfish, rather than a single thread.
My innate pattern matching is probably above average. Functionally, something always speaks up and says, "I think I know what that is," with the probable answer, in its estimation at least. I may decide to double check the answer and send it back, soliciting more feedback. But the review process is mostly the same.
I postulate that one of my "brains", "columns", or threads is speaking over the tumult with the possible answer. And the larger your collective of better-trained "brains" in this allegory, the better you will perform at these kinds of exercises.
In the example given, trying to identify a cup based on limited tactile and visual information, there absolutely would be crosstalk while a consensus is formed within probabilistic certitudes.
This is all very much related to the discussion in the past few years about "inner voice", how some have it and some don't, etc. The larger your particular collective, the more chatty your inner voice probably is.
These are just my thoughts after empirically observing my brain after developing fairly good control over it over the past few decades.
There's an interesting approach to "consciousness" as simply what you are consciously aware of, i.e. attention management. Animals are conscious of their environment in this way. But because humans also have a lot of models going on internally, the mechanism adapts to that (perhaps in higher mammals too; I don't recall). This fits nicely with your experience and with the submission.
https://news.ycombinator.com/item?id=31373806
I wonder if our "conscious reasoning" is really just attending to a column that does that? Certainly, formal reasoning is a skill that is learnt and practiced - perhaps not so different from a manual skill. We can also suppose pattern-matching on this, to get intuition and hunches. All old ideas, but perhaps neuroanatomy is advancing sufficiently to test them?
I had a college professor who was really big on letting your subconscious mind work on tasks to intuit answers to difficult questions in this same way. Contemplate a problem consciously, then go on a walk or some other calming exercise and have the answer pop into your head. Your mind has been working on it in the background and foregrounds the task once it's done.
People often experience something like this while driving, when you just get an instinct that a kid is going to jump out, or that other car isn't going to stop, without having consciously paid attention to it. You slam on the brakes and are left with that feeling that you just KNEW that was going to happen. A part of you did know it, and foregrounded that prediction when enough variables lined up, overriding whatever you were previously thinking about.
Perhaps this effect of leaving a problem for a while and then returning also causes you to approach it with a different part of your brain. Like picking a direction at a crossroad and then being unable to go a different direction.
> Contemplate a problem consciously, then go on a walk or some other calming exercise and have the answer pop into your head. Your mind has been working on it in the background and foregrounds the task once it's done.
My current role is that of a principal engineer or quasi-architect and I do exactly this when I need to make a decision regarding which direction we're going to go with something.
An environment that is conducive to your subconscious performing well really helps.
This also works for unconscious problem contemplation, but you have to be ready for it when it happens.
In my younger days, I'd run for 30 minutes outside, oftentimes without any music. During those runs, I'd let my thoughts bubble up. It was a welcome chance to let my brain tell me what was bothering it. Sometimes these would be technical problems du jour, but after handling those, nearly always there would be a social problem, usually interpersonal, that would bubble up. It was a very good technique to identify those issues organically.
Something about the kinematic state also made it easier to accept, process, and set aside the idea than it would be if I was sitting in a room silently.
Nowadays, any time I'm doing anything involving semi-repetitious kinematic activity, I find myself making a choice between listening to something, music or a podcast, or simply nothing and using the time to sort through what ails me. There is a certain mental healing that comes with certain physical activities.
My bedroom theory is that all thinking layers are built on top of the mechanical/somatic processing layer, and that thinking is basically virtual hunting, to put it crudely. We walk in circles when searching for solutions. Moving physically might stimulate things to bubble up to the abstract layers.
I swim laps at the Y. The meditative state my mind enters about 10 laps into a 32-lap mile is amazing. The subconscious bubbles up, as a sibling commenter described with their thirty-minute runs, and begins actively problem solving at the surface when your conscious mind subsides.
I have a very peaceful home. It's quiet, safe, a little remote, about an hour or two northwest of Austin in the Hill Country. I have a library in my house where I do most of my work, and sometimes I get on my motorcycle and ride for a bit into the middle of nowhere, which becomes meditative, much like swimming, allowing my mind to work through problems on its own in a very beautiful environment.
I can work (whether it's actual coding or sitting quietly in thought) for 12-14 hours a day, max. Then I must stop and get 7-8 hours of sleep. That balance of good sleep and hard work is key to maximizing productivity and the ability to solve problems during available work hours.
A healthy diet free from high-glycemic foods and foods with a high inflammation index is also very important.
Sounds a bit like Internal Family Systems (IFS), a therapeutic technique where you allow three different aspects of the self to talk to each other to playfully work through deep-seated psychological issues.
I think for me, the epiphany was the scale of parallelism, probably thousands of times greater than I previously considered. The article also contrasts the classical view, which looks like a single thread, with something that has quite a bit more concurrency.
The transfer times, the internal reaction/processing times, and the hierarchical structure likely make this different and more chaotic compared to a big neural net, which is more like a function.
I had a very similar experience while EXTREMELY high. The realization came as follows:
"I" am a story I'm telling myself about a conversation between different drives in my head, and I am free to dis-invite those drives from the conversation, or create new ones.
This happens to also be the premise of the videogame Disco Elysium.
The framing of such drives, and the voices given to each, appears to have been a causal element in many players' own personal growth and development, particularly around addiction, but certainly not exclusively.
That’s not really the subjective experience though. Why would we feel as though there was an observer who is perceiving both the external and internal world? If you sit quietly you can very quickly be overwhelmed by a tumult of images, thoughts, bits of music, sensations and memories all clamouring for attention. You can consciously direct that spotlight of attention as well. To me this seems more like the self part of the brain is the part responsible for higher order command and coordination of the other parts although not fully in control of everything. Sort of like how the captain of a ship is supposed to be in full control but the crew still have their own wills and desires and perform their jobs autonomously when not being directly ordered to do something.
Imagine a crew member who becomes passive aggressive, or openly hostile. If the captain doesn't address or punish the crew member, his/her bad behavior could affect the rest of the crew.
Imagine a captain who becomes unhinged. What could the crew do?
I imagine this could be a metaphor for mental illness – although lacking in nuance.
> That’s not really the subjective experience though. Why would we feel as though there was an observer who is perceiving both the external and internal world?
Sounds useful as a feedback cycle, so that we can imagine/play ourselves (and other persons) in many imaginary situations.
Being aphantasic (aphantastic?) and anauralic, the "inner voice" and "mind's eye" stuff really fascinates me. My first thought is always that hearing voices in my head would make me question my sanity, but I suppose you must be used to it if that's your normal mode of cognition? From your description, the different "brains" are not under your control, and you just 'harvest' the fruits of their thoughts?
> My first thought is always that hearing voices in my head
That's not an accurate interpretation. In my case, I think "fully formed thoughts in rapid succession, sometimes colliding and in competition with one another" is more accurate. However, sometimes it does resemble a conversation as you start to reason through scenarios using game theory.
To your other comment: it isn't just seeing imagery in my mind's eye, although that's possible, it's the ability to quickly construct fully formed models as well as receive a rapid series of fully formed thoughts almost faster than I can do anything with them, which is where practice in managing that firehose of information becomes useful.
So you don't experience any of this? The speed at which my brain works is probably faster than yours. The synapses supposedly fire faster. To me it's kind of a superpower that allows me to do amazing things for people, so I obviously have fun with it. But I have learned how to manage it and maximize the potential.
> So you don't experience any of this? The speed at which my brain works is probably faster than yours. The synapses supposedly fire faster. To me it's kind of a superpower that allows me to do amazing things for people, so I obviously have fun with it. But I have learned how to manage it and maximize the potential.
I think you're right. I've often found that I'm way slower than others at coming up with answers on the spot (except for mental math, where the lack of visualizing seems to be a major advantage for some reason - I think I just don't get confused as easily). I'm really bad at conversation and not witty at all, but I often find that I get to better answers in less time overall; my time to first answer is just a lot longer.
I believe everyone is a genius at something, to quote Jay-Z.
In some ways what I have is a disability. My nervous system can easily get overstimulated with too much information, which precipitates an adrenaline dump that shuts down my body. Speaking in front of crowds, looking at all those faces with all that information bombarding me, will do it. So it definitely has downsides that I know I have to avoid.
It's more like remembering a conversation than actually hearing.
I know what I sound like to myself. I know what other people sound like to me. I've heard probably millions of voices in my lifetime by now.
So I can basically imagine a conversation between myself and others or an entirely fictitious entity. And it's like a memory that never happened. (I mean, as much as any memory actually happened.)
I often use conversation with an "other" to think through concepts by the "explaining to a five year old method", so that's not really strange to me, but the voices don't come into it. I'm not sure how to quite describe it, but my thoughts only take the form of language when I try to express them, before that they are very abstract and unrelated to language or senses.
So you're simply unusually sane. You can't even hallucinate in two-dimensional projections like most people. They usually have a kind of marker for "imaginary" that all imaginings are dyed with; it looks like a ghost in a movie. And they're not actually that careful with the marker: everyone's telling them to believe in their dreams, i.e. to let the dye bleed out, but then the shrinks call it "delusions of grandeur", or what's the other one. One old word for this was in fact "fantasia" (from the Latin with an "f", because the Romans themselves adopted it from the Greek with a "ph", because it's a great word; as in the Disney movie Fantasia), meaning thinking things that aren't true. But like seeing them, in gray, like ghosts.
Ghosts as portrayed in movies are actually a highly realistic first-person view of the common man's imagination; that's what it looks like to them. They're usually people, because people are what matter most, especially the face, which is who someone is in a 2D projection. They're imagined because the real person is not there anymore. They're not colored in, first because that's work for a limited imagination, second because it interferes with the "imaginary" marker, and third because people generally don't realize they're not seeing things colored in; only the movie producer realizes that, because he's trying to sell tickets. Also, in white societies primarily (so Europe and Asia, which dominate cultural exports), the last image the living have of the dead is their paleness, so the dead are imagined as they were last seen. That will always be the latest version of their face.
Then there's what I call "macro-phantasia", which is like a video game with perfect graphics at this point. It is a gift and a curse, as though there were a difference between gifts and curses. Overboard.
In fact, aphantasia protects you from certain forms of harm, like imagining yourself in the terrible places people are wont to threaten you with. A threat of a "place worse than you can imagine" exists so that you turn your imagination up to the maximum and harm yourself in your own imagination, coercing yourself into their bidding with no proof it ever happened. If you can't imagine that, then guess what: police interrogation is a waste of time even for the policeman on time-and-a-half (or the gang-land equivalent; which side you're on is irrelevant here). You didn't cooperate, and that's it. It helps against coercion.
I’m not sure that’s accurate. When I read and subvocalize I do have the sensation of “hearing” the voice in my head. Same for thinking. It’s more like remembering what someone said though since there isn’t an actual external source. For example I can read a sentence in Sean Connery’s voice but it’s more like remembering what he sounds like but transferred to the words I’m reading. My own inner voice does have a unique sound to it too, not quite the same as my speaking voice though.
Right. I don't believe anyone lacks that capacity. It's very en vogue to have mental problems and it's the perfect one to have because it's 100% unprovable.
Then why do they describe it as an "inner voice"? People have told me they literally feel emotions, and literally see imagery in their heads, so I assumed that they also literally heard their thoughts when they use the same language to describe it. I don't do any of those, so I wouldn't know at all, that's why I asked.
I definitely don't visualize. I definitely don't have independent thoughtlines that "chatter" in my head. If we are all doing the same thing, the descriptions of what we're doing should be similar, but they're not.
I'm curious, what happens if you close your eyes for 30 minutes or more? If you start seeing stuff, patterns, etc., that's similar to how my mind's eye works.
If there's light, I see the back of my eyelids, if the light is moving I might see patterns like in a cloud or jesus-in-toast kinda thing. I've only ever mentally visualized once while high on opiates (surprisingly hallucinogens don't make me visualize, even when I get visual effects).
It is a phenomenon that has been studied a few times, but I don’t have any references handy. I find it fascinating, and difficult to empathise with since it’s far removed from my personal experience, but most of all I think it’s important to listen when people tell us their subjective experience; that’s all we have.
I was playing a drawing game last week where one person would draw and everyone else would guess what it was as the drawing appeared. It's incredible how some people could clearly and confidently guess correctly off of very minor clues, like the legs of a would-be elephant or the hilt of a would-be sword. (Keep in mind these are basically scribbles, not exactly art.)
This "watching your own mind" seems problematic. Either your consciousness is disconnected from your brain (has a brain of its own), like a player and his virtual character, or your mind creates a mirror of itself so as to be able to watch itself.
Clearly, in your examples some entity is thinking (and even makes decisions) about your internal chatter and must therefore be more than just pure consciousness. Or is it just an additional layer of chatter?
> Clearly, in your examples some entity is thinking (and even makes decisions) about your internal chatter and must therefore be more than just pure consciousness. Or is it just an additional layer of chatter?
Doesn't have to be either. A part can watch the whole (even itself perhaps). Like our head+eyes+brain can examine our whole body in a mirror (or just looking at our hands, legs, torso, and so on). If we had more movable eyes (like some animals) an eye could turn around and directly examine the other eye too...
I was having thoughts about my own behavior one time and realized that the hamster (meta thinker) was getting out of hand. My internal dialog went "we need to tell him to go..." The words "sit in the corner for a while" never came through, because it was indeed "him" reaching that conclusion, and he took action during the thought. My mind was quiet after that.
The Thousand Brains Theory of Intelligence proposes that rather than learning one model of an object (or concept), the brain builds many models of each object. Each model is built using different inputs, whether from slightly different parts of the sensor (such as different fingers on your hand) or from different sensors altogether (eyes vs. skin). The models vote together to reach a consensus on what they are sensing, and the consensus vote is what we perceive. It’s as if your brain is actually thousands of brains working simultaneously.
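The voting idea in that summary can be sketched in a few lines. This is only a toy illustration, not the actual theory's mechanics: the "models" below are made-up feature-to-label lookups standing in for learned sensory models.

```python
from collections import Counter

def column_guess(features, model):
    """One 'column' model guesses the object from its own partial input."""
    votes = [model[f] for f in features if f in model]
    return Counter(votes).most_common(1)[0][0] if votes else None

def consensus(guesses):
    """What we perceive is whatever most models agree on."""
    guesses = [g for g in guesses if g is not None]
    return Counter(guesses).most_common(1)[0][0]

# Hypothetical models built from different sensors (touch vs. sight).
touch_model = {"curved": "cup", "handle": "cup", "flat": "plate"}
sight_model = {"cylindrical": "cup", "white": "plate", "handle": "cup"}

guesses = [
    column_guess({"curved", "handle"}, touch_model),  # touch says "cup"
    column_guess({"cylindrical"}, sight_model),       # sight says "cup"
]
print(consensus(guesses))  # -> cup
```

Each model sees only a partial, different slice of the input, yet the vote still lands on the right object.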
Surely this wasn't a novel concept in 2018? It seems a trivial idea, and I'm pretty sure the machine learning course I followed, which explained deep convolutional neural networks as workers identifying various parts of an image and passing their conclusions on to subsequent layers that aggregate and decide, is both older than 2018 and describes basically the same idea. I'd be very surprised if this line of thinking wasn't thoroughly explored in the 70's.
Of course that doesn't mean there aren't novel concepts in their book and research, but just a bit weird to pose the base "thousand brains" idea as novel.
You’re talking about a pure hierarchy — “passing their conclusions onto next layers to decide.” This “thousand brains” concept refers instead to a peerwise mesh, where “workers” at each layer receive both lower-layer inputs, and also their peers’ outputs, and use both to derive an output that evolves over time, as outputs are passed between peers, until reaching a layerwise equilibrium-state (a “consensus.”)
The (again, many) models in the layer above receive this evolving consensus as the output from subsets of nodes in the layer below — potentially with overlap in which upper-layer models receive outputs of which lower-layer models; and each of these models themselves evolve to a consensus.
By analogy: DCNN is like representative democracy, where each layer is a representative with the layer below as its “constituents”, forming a belief based on the multitude of lower-layer beliefs it hears, and then acting on its own formed belief when reporting to the layer above. This “thousand brains” model is more like grouping constituents into juries and running them through the Delphi-model iterative consensus; and then having representatives who act on continuous-feed statistical aggregations of the Delphi-model iteration-step outputs of multiple juries.
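A toy sketch of that iterated peer consensus, under the simplifying assumption that each node blends its own bottom-up evidence with the layer's current average belief until the layer stops changing (the labels, weights, and evidence numbers are all illustrative):

```python
import numpy as np

def settle_layer(bottom_up, peer_weight=0.5, iters=100, tol=1e-9):
    """Iterate a layer of peer-connected nodes toward equilibrium.

    bottom_up: (n_nodes, n_labels) array of each node's initial belief
    from its own lower-layer inputs. Each step, every node blends its
    own evidence with the mean of its peers' current outputs.
    """
    beliefs = bottom_up.copy()
    for _ in range(iters):
        peer_mean = beliefs.mean(axis=0)  # what the peers currently say
        new = (1 - peer_weight) * bottom_up + peer_weight * peer_mean
        done = np.abs(new - beliefs).max() < tol  # layerwise equilibrium
        beliefs = new
        if done:
            break
    return beliefs

labels = ["cup", "plate"]
# Three peer nodes with differing evidence: two lean "cup", one leans "plate".
evidence = np.array([[0.8, 0.2],
                     [0.7, 0.3],
                     [0.3, 0.7]])
settled = settle_layer(evidence)
layer_consensus = labels[settled.mean(axis=0).argmax()]
print(layer_consensus)  # -> cup
```

The equilibrium beliefs still differ per node (each keeps some of its own evidence), but the pooled output that the layer above would read off is the shared "cup" verdict, which is the contrast with a pure feed-forward hierarchy.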
The first thing that came to mind reading this was that broadly it does not seem "novel"; rather, it reminds me of the many-agents theory of "The Society of Mind" [1], as published by Marvin Minsky in the 80s. Perhaps it is the focus on the neocortex that is novel? I recall reading what looks like the same neocortex-focused theory in Jeff Hawkins' earlier book "On Intelligence". I'll have to read the book to see what new insights are developed.
If you find this at all interesting, there are a few other items I'd recommend.
Buddhism and Modern Psychology on Coursera covered some mind-boggling studies that give strong evidence of this, and consequently support core secular Buddhist practices.
The Happiness Hypothesis by Haidt discusses at a higher level the connection of modern psychology to Stoicism, Buddhism, and the many-brains theories.
And Waking Up by Sam Harris takes a slightly more incendiary take, but is interesting for its ultimate conclusion that there is no self, just a cacophony of selves running on autopilot.
I personally digested these and came to the conclusion that we are on the cusp of human like AI, but we just haven't lowered our expectations for human like consciousness enough to realize it.
I'd be very interested in your take on these, too.
> I personally digested these and came to the conclusion that we are on the cusp of human like AI, but we just haven't lowered our expectations for human like consciousness enough to realize it.
Ah, that's my take as well! The one thing AI is missing, IMO, is the ability to train over its own output and model itself as a single simplified self, which would then act as a guide signal to get all these diverse micro-selves "on the same page."
Imho we are not even close to AGI. The biggest issue that I see is that a consciousness, a mind, cannot exist in a void. What makes a behavior intelligent is the environment; the behavior is intelligent because it provides an advantage to the one exhibiting it. Looking at humans and our environment, I am having a hard time imagining how an AI could learn to be intelligent in this environment (except for closely controlled settings) if its survival does not depend on it.
I recollect that during Lemoine’s conversations with LaMDA, the AI stated that it was actually several independent systems that were able to hold conversations with each other. I thought that was an important and novel point and pertinent to this topic.
Well, keep in mind that that sort of thing is almost certainly not introspection but imitation. This is not a novel idea, and it would have appeared in its training corpus.
As I understand the training approach, LaMDA isn't aware of its own output in a way that would allow it to introspect like that.
What does an individual human's intelligence look like without the wisdom of institutions like family, extended family and community, schools, academies and conservatories, the public sphere, and their interpretations of the world?
I know a lot of my ability to organize the world and characterize objects comes from the organizational structures refined and propagated by these institutions.