I cannot agree enough with Karl here. What is the brain? An organic system with deep roots in the organic body, and with deep causal connections to its environment.
There's little sense in ignoring the whole basic mode of operation, physics, chemistry and biology of the brain in order to analogise it to another system without any of those properties.
This, at best, provides a set of inspirations for engineers -- it does nothing for science.
> There's little sense in ignoring the whole basic mode of operation, physics, chemistry and biology of the brain in order to analogise it to another system without any of those properties.
Sure there is. People had a feel for it back in "clockworks" times; nowadays we have a much better grasp because of progress in physics and math, particularly CS - mode of operation is an implementation detail. Whatever the mode, once you understand the behavior enough to model it in computational terms, you can implement it in anything you like - gears and levers, pistons, water flowing between buckets, electrons in silicon, photons going through lenses, photons diffusing through metamaterials, sound waves diffusing through metamaterials - and yes, also via a person locked in a room full of books telling them what to draw in response to a drawing they receive, via a billion kids following a game to the letter, via corporate bureaucracy, via board game rules, etc.
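To make that concrete, here's a toy sketch (my own example): the same abstract function - majority-of-three - computed in two different "modes of operation", a boolean formula and a lookup table (the "room full of books"). The function computed is identical:

    # Two "substrates" for one abstract computation: majority of three bits.
    def majority_formula(a, b, c):
        # "Gears and levers" style: work it out with boolean operations.
        return (a & b) | (b & c) | (a & c)

    # "Room full of books" style: precompute every answer, then just look it up.
    MAJORITY_TABLE = {(a, b, c): majority_formula(a, b, c)
                      for a in (0, 1) for b in (0, 1) for c in (0, 1)}

    def majority_lookup(a, b, c):
        return MAJORITY_TABLE[(a, b, c)]

    # Same function, different mode of operation:
    assert all(majority_formula(*k) == majority_lookup(*k) for k in MAJORITY_TABLE)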
Substrate. Does. Not. Matter.
The only thing limiting your choice here is a practical one. Humanity is getting good mileage out of electrons in silicon, so that's the way to go for now. Gears would work too; they're just too annoying to handle at scale.
Of course, today we don't have a full understanding of the biological substrate - we can't model it fully in terms of computation, because it's a piece of spontaneously evolved nanotech and we've barely begun being able to observe things at those scales. We have a lot of studying in front of us - but this is about learning how the gooey stuff ticks, what it computes and how. It's not about some new dimension of computation.
It only doesn't matter for counting a system as implementing a pure algorithm, ie., one with no device access. This is an irrelevant theoretical curiosity.
Electronic computers are useful because they're electronic -- they can power devices, and modulate devices using that power. This cannot be done with wood, or most anything else.
"Substrate doesnt matter" is, as a scientific doctrine pseudoscience, and as a philosophical one, theological.
The causal properties of matter are essential to any really-existing system. Non-causal, purely formal properties of systems which can be modelled as functions from the naturals to the naturals (ie., those which are computable) are useless.
> Electronic computers are useful because they're electronic -- they can power devices, and modulate devices using that power. This cannot be done with wood, or most anything else.
On the contrary. That's an implementation detail. You can "power devices, and modulate devices" by having a clockwork computer with transducers at the I/O boundary, converting between electricity and mechanical energy at the edge. It would work exactly like a fully electronic computer, if built to implement the same abstract computations - and as long as you use it within its operational envelope[0], you wouldn't be able to tell the difference (except for the ticking noise).
> The causal properties of matter are essential to any really-existing system. Non-causal, purely formal properties of systems which can be modelled as functions from the naturals to the naturals (ie., those which are computable) are useless.
Yes and no. Of course the causal properties of matter... matter. But the breakthrough in understanding that came with the development of computer science and information theory is that you can take the "non-causal, purely formal" mathematical models of computation, define some bounds on them (no infinite tapes), and then use real-world matter to construct a physical system following that mathematical model within the bounds, and any such system is equivalent to any other one, within those bounds. The choice of what to use for the actual implementation is made on practical grounds - i.e. engineering constraints and economics.
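A minimal sketch of that equivalence-within-bounds (toy example of mine, not anyone's real hardware): an 8-bit adder built ripple-carry style from nothing but AND, XOR and shifts agrees with plain arithmetic on every input inside the bounds:

    def add_circuit(a: int, b: int, bits: int = 8) -> int:
        # Ripple-carry style: only AND, XOR and shifts, the way gates would do it.
        mask = (1 << bits) - 1
        while b:
            carry = (a & b) << 1
            a, b = (a ^ b) & mask, carry & mask
        return a

    def add_arith(a: int, b: int, bits: int = 8) -> int:
        return (a + b) % (1 << bits)      # the "other substrate"

    # Equivalent on every input within the designed bounds:
    assert all(add_circuit(a, b) == add_arith(a, b)
               for a in range(256) for b in range(256))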
It's how my comment reached your screen, despite being sent through some combination of electrons in wires, photons down a glass fibre, radio signals at various frequencies - hell, maybe even audio signals through the air, or printouts carried by pigeons[1]. Computer networks are living proof that substrate doesn't matter - as long as you stick to the abstract models and bounds described in the specs for the first three layers of the ISO/OSI model, you can hook up absolutely anything whatsoever to the Internet and run TCP/IP over it, and it will work.
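As a toy illustration (helper names are mine, nothing like a real network stack): the same framing-and-checksum code runs unchanged over any carrier of bytes, which is the whole point of layering:

    import zlib

    def frame(payload: bytes) -> bytes:
        # Length prefix plus CRC32 trailer; knows nothing about the carrier.
        return (len(payload).to_bytes(4, "big") + payload
                + zlib.crc32(payload).to_bytes(4, "big"))

    def unframe(wire: bytes) -> bytes:
        n = int.from_bytes(wire[:4], "big")
        payload = wire[4:4 + n]
        crc = int.from_bytes(wire[4 + n:8 + n], "big")
        assert zlib.crc32(payload) == crc, "mangled in transit"
        return payload

    # Any carrier of bytes will do - a queue here, but a wire, a fibre,
    # or a pigeon with a printout would serve the same role:
    carrier = []
    carrier.append(frame(b"hello, substrate"))
    print(unframe(carrier.pop(0)))        # b'hello, substrate'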
I bet there's at least one node on the Internet somewhere whose substantial compute is done in a purely mechanical fashion. And even if not, it could be done if someone wanted - figuring out how to implement a minimal TCP/IP stack using gears and switches is something a computer can do for you, because it's literally just a case of cross-compilation.
--
[0] - As opposed to e.g. plugging 230V AC into its GPIO port; the failure modes will be different, but that has no bearing on either machine being equivalent within the operational bounds they were designed for.
[1] - RFC 1149, "A Standard for the Transmission of IP Datagrams on Avian Carriers".
> matter to construct a physical system following that mathematical model within the bounds, and any such system is equivalent to any other one, within those bounds
No. This wasn't discovered.
Nearly every physical system is implementing nearly every pure algorithm, ie., every computable function.
The particles of gas in the air in my room form a neural network, with the right choice of activation function.
Turing-equivalence is a property of formal models with no spatio-temporal properties. Physical systems are not equivalent merely because they both implement a pure algorithm.
Pure algorithms are useless, and of interest only in very abstract csci. All actual algorithms, when specified, have massive non-computational holes in them called 'i/o', device access etc.
If your two systems of cogs want to communicate over a network of cogs, the Send() 'function' (which is not a function!) has to have highly specific causal semantics which cannot be specified computationally.
These systems only have 'equivalent functions', as seen from a human point-of-view, if their non-computational parts serve equivalent functions. This has nothing to do with any pure algorithm.
You cannot implement a web browser on 'gears' in any useful sense, in any sense in which the particles of the air aren't already implementing the web browser. That a physical system can-be-so-described is irrelevant.
Computers are useful not because they're computers. They're useful because they are electrical devices whose physical state can be modulated with hyper-fine detail by macroscopic devices (eg., keyboards). We have rigged a system of electrical signals to imitate a formal programming language -- but this is an illusion.
Reduce the system down to just what can be specified formally, and it disappears.
> Nearly every physical system is implementing nearly every pure algorithm, ie., every computable function.
Sure. And the same goes for the air and the neural network. This is all irrelevant, for the same reason that every possible program and every possible copyrighted work being contained in the base-10 expansion of pi is irrelevant. Or that a photo of every event that ever happened anywhere is contained in the space of all possible (say) 1024x1024 24-bit-per-pixel bitmaps. It's all in there, but it's irrelevant, because you have no way of determining which combinations of pixels are photos of real events. And any random sample you take is most certainly not it.
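For scale, a quick back-of-the-envelope (assuming 1024x1024 pixels at 24 bits each):

    import math

    bits_per_bitmap = 1024 * 1024 * 24                 # 25,165,824 bits
    digits = int(bits_per_bitmap * math.log10(2)) + 1
    print(digits)   # the count of bitmaps, 2**25165824, has ~7.6 million digits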
> All actual algorithms, when specified, have massive non-computational holes in them called 'i/o', device access etc.
Only if you stick to the subset of maths you use for algorithms, and forget about everything else. The only actual hole there would be in your memory, or knowledge.
Sure, I/O doesn't play nice with functional programming. It doesn't stop functional programming from being useful with real computers in the real world. We have other mathematical frameworks to describe things that timeless, stateless computation formalisms can't. You are allowed to use more than one at the same time!
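For instance (a toy sketch of mine, not a claim about how any real system is built): keep the interactive logic as a pure transition function, and push the actual device access into a thin impure driver at the edge:

    def step(state, key):
        # Pure transition function: (state, input) -> (state, output), no effects.
        count = state["keys_seen"] + 1
        return {"keys_seen": count}, f"echo: {key} (#{count})"

    # The impure part lives only at the boundary, feeding the pure core:
    state = {"keys_seen": 0}
    for key in ["a", "b", "c"]:           # stand-in for real keyboard input
        state, out = step(state, key)
        print(out)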
> You cannot implement a web browser on 'gears' in any useful sense, in any sense in which the particles of the air aren't already implementing the web browser.
Of course I can. Here is the dumb approach, for the sake of proof (one can do better with more effort); a gate-level sketch follows the list:
1. Find a reference for how to make a NAND gate with gears. Maybe other logic gates too, but it's not strictly necessary.
2. Find the simplest CPU architecture someone made a browser for, for which you can find or get connection-level schematics of the chip; repeat for memory and other relevant components, up to the I/O boundary. Make sure to have some storage in there as well.
3. Build electricity/rotational motion transducers, wire them to COTS display, keyboard, mouse and Ethernet ports.
4. Mechanically translate all the logic gates and connections from point 2. to their gear equivalents using the reference from point 1., and hook up to the transducers from point 3.
5. Set the contents of the storage to be the same as a reference computer with a web browser on it.
6. Run the machine.
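And here's the gate-level sketch promised above (Python standing in for gears; each def below is really a wiring diagram): start from the one primitive in step 1, and everything past it is rote substitution:

    def NAND(a, b):                      # the single primitive from step 1
        return 1 - (a & b)

    # Rote translations - no cleverness required past this point:
    def NOT(a):    return NAND(a, a)
    def AND(a, b): return NOT(NAND(a, b))

    def XOR(a, b):
        x = NAND(a, b)
        return NAND(NAND(a, x), NAND(b, x))

    def full_adder(a, b, cin):           # one slice of the ALU from step 2
        s1 = XOR(a, b)
        total = XOR(s1, cin)
        carry = NAND(NAND(a, b), NAND(s1, cin))   # (a AND b) OR (s1 AND cin)
        return total, carry

    assert full_adder(1, 1, 1) == (1, 1)          # 1 + 1 + 1 = 0b11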
Of course, this would be a huge engineering challenge - making that many gears work together, in spite of gravity, inertia, tension and wear, and building it in under a lifetime and without bankrupting the world. Might be helpful to start by building tools to make tools to make tools, etc.
But the point is, it's a dumb mechanical process, trivially doable in principle. May be difficult with physical gears, but hey, it worked in Minecraft. People literally built CPUs inside a videogame this way.
> We have rigged a system of electrical signals to imitate a formal programming language -- but this is an illusion.
It's the other way around: we've rigged a system of electrical signals to physically realize a formal theoretical program. We can also rig a system of optical signals, or hydraulic signals, or pigeon-delivered paper signals, to "imitate a formal programming language" and implement a formal theoretical program - and as long as those systems imitate/implement the same formal mathematical model, they're functionally equivalent and interchangeable.
I think you aren't following the definition of 'computer' or 'computable'; you seem to have a mixed physical notion of what a 'computer' is.
A computer, from a formal pov, is just an abstract mathematical object (like a shape) which has abstract properties (eg., like being a circle) that are computable, ie., are functions from integers to integers.
The physical devices we call 'computers', in many ways, aren't. They exist in space and time and hence have non-computable properties, like their (continuous) extension in space and time.
See Turing's own paper where he makes this point himself, ie., that physical machines aren't computers in his sense because they're continuous in time.
Insofar as you appeal to any causal aspects of a physical system, you aren't talking about a computer in Turing's sense, and nothing like Turing-equivalence would apply.
We already know that all computable functions can be implemented by 'arbitrary substrates' -- this is just the same as saying that you can 'make a circle out of any material'.
In exactly the same sense as gears can be networked, sand dunes already are. You can just go around labelling particles of sand with 0s and 1s, and for a subset, there you have it: the computable aspects of the TCP/IP protocol.
But this is irrelevant. TCP/IP isn't useful because of its computable aspects. It's useful as a design sheet for humans to rig systems of electrical devices with highly specific causal properties.
The system we call 'the internet' is useful because it connects keyboards, screens, mice, microphones, webcams, SSDs, RAM, etc. together -- and because these devices provide for human interaction.
The sand dune is likewise already implementing arbitrary computable functions; so is the sun, so is the air, and any arbitrary part of the universe you care to choose.
But the sand dune lacks all the properties the internet has: there's no webcam, no keyboard, no screen, etc.
What we actually use are physical properties. Talk of algorithms is just a design tool for people to build stuff.
I mildly disagree (although your final conclusion is correct: it indeed does nothing for science).
The deepest fundamental structures in the brain[0] are quantum fields, which are also the deepest fundamental structures in everything else.
There is no known quantum field of "soul" or "intelligence".
The right abstraction is higher, and could still be a whole lot of things; but as maths can be implemented in logic, which can be implemented in electronics or clockwork or hydraulics, it doesn't matter what analogy is used — and my mild disagreement here is that such inspiration has been useful and gotten us this far.
The process of evolution acts on organic systems, it doesn't act on quantum fields.
I appreciate there's some (imv strange) sense of 'intelligence' where 'finding the right puzzle piece' counts. I cannot fathom why we care about such a notion, and it seems to have almost nothing to do with what we do care about re 'intelligence'.
We care about that thing animals do, that thing which some do better than others. That thing which evolution brought about for (rapid) adaptive fitness to one's environment.
'Everything else is stamp collecting'
We already have a perfectly good understanding of puzzles and their solutions -- animals are their inventors.
Intelligence isn't in the solution to a puzzle; it's in its design, and especially in what one does when one cannot solve it -- ie., how one adapts.
The csci view of 'intelligence' is an act of self-aggrandisement: intelligence turns out to be csci!
We can simulate evolution in a computer, and this is used as a form of AI directly.
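A minimal sketch of the idea (toy fitness function of my choosing - "count the ones"):

    import random
    random.seed(1)

    def fitness(genome):                  # toy objective: maximize the ones
        return sum(genome)

    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
    for _ in range(100):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                # selection: keep the fittest third
        pop = [[bit ^ (random.random() < 0.02)    # mutation: rare bit flips
                for bit in random.choice(parents)]
               for _ in range(30)]
    print(max(fitness(g) for g in pop))   # climbs toward the maximum of 20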
That said, the way you're using biological evolution in your comment sounds as much like a strange analogy as all of the others: we may have some genetically programmed responses to snakes (bad) and potential mates (good), but we can also say that a loss of hydraulic pressure in our brain is a stroke, and use electrical signals to both read from and write to the brain.
What we evolved to think, while interesting from a social perspective, seems to me like the least interesting part of our brains from an AI perspective — it's the bit that looks like a hard-coded computer program, not learning, on the scale of a human life and seen from within.
i'm referring to evolution as the process by which animals were built
if aliens had come down and given us laptops, rather than us inventing digital machines, then likewise i'd be talking about the relevant materials science, physics etc.
reverse engineering a laptop to figure out how it works would require extremely little computer science, and 'only at the end'
the reason digital computers are interesting and useful is that they route electricity around devices which are designed to be responsive to one another. the patterns of activation, as managed by the CPU, are weakly describable by abstract algorithms like sorting
starting with a laptop, and no further information, we'd be 100(s)+ years of research away from needing to understand that CPUs were implementing a sorting algorithm
and importantly, that it is doing so has almost nothing to do with the value of the device -- which lies in its ability to provide 'dynamical power and modulation of operation' using electricity
we're in the same situation with animals and people think that, what, understanding gradient descent or backprop is helpful? this is just some csci bs
I'm not really following you, sorry; this is all too disjointed.
> we're in the same situation with animals and people think that, what, understanding gradient descent or backprop is helpful? this is just some csci bs
Assuming I've actually got your point for this (and I'm not sure I have):
The backpropagation algorithm itself might be "just some csci bs" (it sure has vibes of "let us shortcut the maths rather than find out how our brains did it"), but gradient descent is nice and general-purpose — much like how evolution is both good for biology and in simulation for everything else.
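A minimal sketch of the general-purpose bit (my toy objective, f(x) = (x - 3)^2):

    def grad(x):
        return 2 * (x - 3)        # derivative of (x - 3)**2

    x, lr = 0.0, 0.1
    for _ in range(100):
        x -= lr * grad(x)         # step downhill along the gradient
    print(x)                      # converges to ~3.0, the minimizer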
To get my point, imagine a laptop was delivered by an alien in the year 1900.
Now, try to take that seriously and think about the laptop as an actual object of experimental curiosity -- what exactly does science need to invent, discover, describe etc. to understand the operation of that laptop?
99.999% of that new knowledge has to be in physics and chemistry, before the tiny 0.001% of theoretical csci knowledge is brought to bear.
Consider how impossible it would be to apply any csci knowledge first: we do not even have the ability to measure the cpu state! So we could not even identify any part of the system with 0s, 1s, etc.
Now: that's a laptop!
Imagine now you're dealing with an animal.
Hopefully it's now clear how ridiculous it is to describe basically any aspect of our mode of operation by starting with trivial little csci algorithms. It would be insane even with an actual electronic computer, let alone an organic system.
A system whose organic properties are radically fundamental to its mode of operation.
Consider two hypothetical versions of this. One: the exact scenario as you described - history unfolded like it did, until the 1900 alien incident. CS and information theory are in their infancy. You're correct that most of the necessary work would first go to physics and chemistry and their various spin-off fields, because that's what's needed to build the tools necessary to inspect the machine in full detail. The math would develop along the way, and eventually enough CS to make sense of the observations made before.
Now for an alternate scenario: it's 1900 again, with the twist that CS is already a well-developed theoretical field of mathematics (IDK, perhaps the same aliens dropped us a mechanical computer in the year 1800). We'd still need to push physics and chemistry (and spin-offs) forward, but this time, we would know what we're looking for. We'd know the thing does computation, and we'd be able to model what kind of computation it does. The question would change from "what does this thing do" to "how exactly does it compute the specific things we know it does". I imagine this would speed up the process of getting a complete picture, because it's easier to understand a specific solution to a problem once you know the answer than it is to figure out the answer along with the solution.
In terms of understanding the brain, we are in the second situation. We may still know little about how the gooey thing ticks, but we have a growing understanding of what comes out of all that ticking, and a very good understanding of the fundamental rules of ticking.
Nearly every physical system implements every algorithm -- if you wanted to find what in a laptop was 'sorting numbers', that would be every part.
The light emitted by the screen is being 'sorted' as it is scanned out, the heated air from the fan is being 'sorted' as it swirls around, etc.
You cannot ask, "what physical system implements this algorithm?" as an investigative question, the answer is: nearly all of them.
This is why computable functions, ie., pure algorithms, are explanatorily useless. They play only an (observer-relative) 'design role' in creating real programs.
You're normally a lot more coherent than you have been in this thread, so… are you feeling alright? Getting enough sleep?
> The light emitted by the screen is being 'sorted' as it is scanned out, the heat air by the fan is being 'sorted' as it swirls around, etc.
This reads like either you're trolling, or that was written by an LLM, or English isn't your native language, or you don't know what 'sorting' is, or you don't know what screens and fans do.
It's so fundamentally wrong I was actually tempted to get ChatGPT to respond to it, but that would be a bit mean and add little.
there's nothing garbled about this idea -- not sure about my messaging in this thread, maybe the explanations are a bit looser today
A computable function is a function from the naturals to the naturals, typically specified as an algorithm: a sequence of steps by which input numbers are transformed into output numbers.
Eg., consider sorting: 101, 001, 111, etc.
Now any physical system can have any component part associated with 0 or 1. There is no reason, a priori, to suppose that voltage flux on a CPU is a "1" or a "0" any more than to associate a photon emission.
If one associates a photon emission at some location with a 0, and another with a 1, then displaying content on a screen is a form of sorting.
Likewise a planet orbiting the sun is implementing a while(true) i = -1*i, if one associates -1/1 with the position of the planet in its orbit. This is the heart of 'reversible computing'.
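Here's the labelling trick spelled out as a sketch (my own toy construction): take any run of distinct 'physical' states and invent, after the fact, an interpretation under which that run 'implements' the trace of a chosen algorithm:

    import random

    # The trace of a sort we'd like the physics to "implement":
    target_trace = [(3, 1, 2), (1, 3, 2), (1, 2, 3)]

    # Three arbitrary "physical states" (stand-ins for gas configurations):
    physical_states = [random.random() for _ in target_trace]

    # The observer-relative part: a mapping we simply make up.
    interpretation = dict(zip(physical_states, target_trace))

    # Under *this* labelling, the "gas" computes the sort:
    assert [interpretation[s] for s in physical_states] == list(target_trace)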
The only reason we associate some microscopic part of a CPU with 0, 1, etc. is by design; it is something we as observers bring to bear on our interpretation of the physical system. But there's an infinite number of such attributions. We would only ever come to conclude that voltage flux across transistors was relevant to the operation of a laptop via physics experiments --- no hope via computer science.
This is very important for understanding why csci is presently useless and misinformative as far as the brain is concerned. There are an infinite number of 0/1 attributions to make, an infinite number of algorithms being implemented, etc.; almost all of those are irrelevant.
You detect the absurdity of using sorting algorithms to understand how an LCD works. Even that is less absurd than people talking about neural networks and equivocating them with brain structures.
> This is very important for understanding why csci is presently useless and misinformative as far as the brain is concerned. There are an infinite number of 0/1 attributions to make, an infinite number of algorithms being implemented, etc.; almost all of those are irrelevant.
What makes the brain a computer, and the air molecules in your room not a computer, is entropy. The behavior of air molecules is effectively random; the behavior of a brain very much not so.
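One crude way to see the difference (compression ratio as a stand-in for entropy; the stand-in data is mine):

    import zlib, random

    random.seed(0)
    noise = bytes(random.randrange(256) for _ in range(10_000))   # "air"
    signal = (b"spike train pattern " * 500)[:10_000]             # "brain"

    print(len(zlib.compress(noise)))    # ~10,000: incompressible, high entropy
    print(len(zlib.compress(signal)))   # far smaller: structured, low entropy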
Also, the universe isn't a uniform-temperature soup where everything is equally random. There's an energy cost to complexity, and there's a likelihood penalty to complexity. This gives us good confidence that the brain isn't doing something absurdly incomprehensible: it was made by evolution, which is a dumb, brute-force, short-term process. It didn't go out of its way to make things complex - it went with the first random thing that improved survival, which, being random, means generally the simplest thing that could work well enough.
Whatever trickery made brains tick, it must be something that's a) dumb enough for evolution to stumble on it, b) generic enough to scale up by steps small enough for evolution to find, all the way to human level, while c) conferring a survival advantage at every step of the way. Sure, the brain design isn't optimal or made in ways we'd consider elegant, but it's also not actively trying to be confusing. There's literally a survival penalty to being confusing (by means of metabolic cost)!
All to say, we're not dealing with a high-entropy blob of pure randomness. We're dealing with a messy and unusual system, but one that was strongly optimized to be as simple as one could get away with. This narrows down the problem space considerably, and CS is our helpful guide, at the very least by putting lower bounds on complexity of specific computations.
As soon as you add these physical constraints on what counts as a 'computer' you're no longer talking about computers as specified by turing, nor computer science -- which is better called Discrete Mathematics.
You're conflating the lay sense of the term meaning 'that device that i use' with the technical sense. You cannot attribute properties of one to the other. This is the heart of this AI pseudoscience business.
All circles are topologically equivalent to all squares. That does not mean a square table is 'equivalent' to a circular table in any relevant sense.
If you want to start listing physical constraints: the physical state can be causally set deterministically, the physical state evolves causally, the input and output states are measurable, and so on -- then you end up with a 'physical computer'.
Fine, in doing so you can exclude the air. But you cannot exclude systems incapable of transferring power to devices (ie., useless systems).
So now you add that: a device which, through its operation, powers other devices. You keep doing that and you end up with 'electrical computers' or a very close set of physical objects with physical properties.
By the time you've enumerated all these physical properties, none of your formal magical 'substrates don't matter' things apply. Indeed, you've just shown how radically the properties of the substrate do apply -- so many properties end up being required.
Now, as far as brains go -- the properties of 'physical computers' do not apply to them: their input/output states may be unmeasurable (eg., if QM is involved); they are not programmable (ie., there is no deterministic way to set their output state); they do not evolve in a causally deterministic way (sensitive to biochemical variation, randomness, etc.).
Either you speak in terms of formalism, in which case you're speaking in the inapplicable, non-explanatory toys of discrete mathematicians; or you start trying to explain actual physical computers and end up excluding the brain.
All this is to avoid the overwhelmingly obvious point: the study of biological organisms is biology.
There's little sense in ignoring the whole basic mode of operation, physics, chemistry and biology of the brain in order to analogise it to another system without any of those properties.
This, at best, provides a set of inspirations for engineers -- it does nothing for science.