I'm not an expert on this, but here's my current understanding:
Symbolic reasoning/AI is fantastic when you have the right concepts/words to describe a domain. Often, the hard ("intelligent") work of understanding a domain and distilling its concepts needs to be done by humans. Once this is done, it should in principle be feasible to load this "DSL" into a symbolic reasoning system, to automate the process of deduction.
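To make that concrete, here's a minimal sketch (a made-up domain in plain Python, not any particular system) of what loading such a "DSL" into a symbolic reasoner can look like: humans supply the facts and rules, and the engine mechanically forward-chains to new conclusions.

    # Toy forward-chaining over a hand-distilled domain vocabulary.
    # The facts, rules, and names are illustrative, not from any real system.
    facts = {("penguin", "opus"), ("canary", "tweety")}
    rules = [
        # (premise predicates, conclusion predicate): if every premise holds
        # for a subject, conclude the conclusion for that subject.
        (["penguin"], "bird"),
        (["canary"], "bird"),
        (["bird"], "can_fly"),
    ]

    def forward_chain(facts, rules):
        """Apply the rules repeatedly until no new facts can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            subjects = {s for _, s in derived}
            for premises, conclusion in rules:
                for s in subjects:
                    if all((p, s) in derived for p in premises):
                        if (conclusion, s) not in derived:
                            derived.add((conclusion, s))
                            changed = True
        return derived

    print(forward_chain(facts, rules))
    # Derives ("bird", "opus"), ("can_fly", "opus"), ("can_fly", "tweety"), ...
    # Note the brittleness: opus "can fly" here, because the symbols carry none
    # of the real-world nuance -- exactly the kind of gap discussed below.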
The challenge is, what happens when you don't have an appropriate distillation of a complex situation? In the late eighties and early nineties, Rodney Brooks and others [1] wrote a series of papers [2] pointing out how symbols (and the definiteness they entail) struggle with modeling the real world. There are some claimed relations to Heideggerian philosophy, but I don't grok that yet. The essential claim is that intelligence needs to be situated (in the particular domain) rather than symbolic (in an abstract domain). The "behavior-based" approach to robotics stems from that cauldron.
[1]: Authors I'm aware of include Philip Agre, David Chapman, Pattie Maes, and Lucy Suchman.
[2]: For a sampling, see the following papers and related references: "Intelligence without reason", "Intelligence without representation", "Elephants don't play chess".
> The essential claim is that intelligence needs to be situated (in the particular domain) rather than symbolic (in an abstract domain).
I think there is something (a lot) to this. Consider how much of our learning is experiential, and would be hard to put into a purely abstract symbol-manipulating system. Take "falling down", for example. We (past a certain age) know what it means to "fall", because we have fallen. We understand the idea of slipping, losing your balance, stumbling, and falling due to the pull of gravity. We know it hurts (at least potentially), and we know that skinned elbows, knees, palms, etc. are a likely consequence. And that experiential learning informs our use of the term "fall" in the metaphors and analogies we use in other domains ("the market fell 200 points today, on news from China...") and so on.
This is one reason I like to make a distinction between "human level" intelligence and "human like" intelligence. Human level intelligence is, to my way of thinking, easier to achieve, and has arguably already been achieved, depending on how you define intelligence. But human like intelligence, the kind that features that understanding of the natural world, some of what we call "common sense", etc., seems like it would be very hard to achieve without an intelligence that experiences the world as we do.
Anyway, I'm probably way off on a tangent here, since I'm really talking about embodiment, which is related to, but not exactly the same as, situatedness. But that quote reminded me of this line of thinking for whatever reason.
I'm not into AI, but for a while now, from what I've heard of it, I've been perceiving quite a gap between AI and human intelligence, which is embodied cognition. It appears to me that human reasoning concepts are to a great extent sized and paced by the physical and biological world, while this information is not accessible to a highly computational AI.
E.g. the human sense of time is closely tied to physiological timing, if only heartbeat pace.
More generally, all emotional input can steer reasoning (emotional intelligence).
The amount of confusion I see between two people from different cultures speaking the same language is amazing. 70% of communication is body language. The rest appears to be shared assumptions about what the other person just said.
I don't think we'll ever be able to have a conversation with a dolphin. We know they talk to each other, we know they're able to interact with us, but how would we ever communicate with them? Their world is so different from ours. To use the example above, a dolphin cannot "fall down", so any language concepts that we have around "falling" will be impossible for them to grok. Likewise we won't have mental concepts around sonar that they use every day, and so won't understand what they mean when they refer to that. We may be able to get to "hello, my name is Alice", but beyond that... nope.
Same with "conversational" AI - it's going to need to understand what its like to have a body, so it can understand all the language around bodies. Simulating that, and being able to make references to "falling over" as a brain-in-a-box, is going to lead to misunderstanding, for exactly the reasons you describe.
I hadn't thought about the measurements aspect, but it's true. There's been some research into trees communicating - could be a classic example. They talk (via fungal networks in their roots, apparently), but so slowly that we can't hear them.
And yes, human emotion is linked to human physiology, and hormones. A lot of human communication is about recognising and empathising with human emotion. That's going to be a tough thing for a machine to do...
I think the opposite is true. Humans think in terms of symbols to model the world around them. A child is born knowing nothing, a completely blank slate, and slowly he learns about his surroundings. He discovers he needs food, he needs to be protected and cared for. He discovers he doesn't like pain. If you talk to a 3 year old child you can have a fairly intelligent conversation about his parents, about his sense of security, because this child has built a mental model of the world as a result of being trained by his parents. This kind of training requires context and cross-referencing of information, which can only be done by inference. You can't train a child by flashing 10,000 pictures at him, because pictures are not experience; even adults can be fooled by pictures, which are only a 2D representation of 3D space. So all these experiences that a small child has of knowing about the world come to him symbolically; these symbols model the world and give even a small child the ability to reason about external things and classify them. This is human level intelligence.
Human like intelligence is training a computer to recognize pixel patterns in images so it can make rules and inferences about what these images mean. This is human like intelligence as the resulting program can accomplish human like tasks of recognition without the need for context on what these images might mean. But there is no context involved about any kind of world, this is pure statistical training.
> Humans think in terms of symbols to model the world around them. A child is born knowing nothing, a completely blank slate, and slowly he learns about his surroundings.
Actually, the research has found that newborn infants can perceive all sorts of things, like human faces and emotional communication. There is also a lot of inborn knowledge about social interactions and causality. The embodied cognition idea is looking at how we experience all that.
By the way, Kant demonstrated a couple of centuries ago that the blank slate idea was unworkable.
>Actually, the research has found that newborn infants can perceive all sorts of things, like human faces and emotional communication.
yes, that's called sensory input...
A child deprived of sensory input when newborn can die, because there is nothing there to show the baby its own existence; this is the cause of crib death (notice that crib death is not called arm death, because a baby doesn't die in its mother's arms).
>There is also a lot of inborn knowledge about social interactions and causality.
No, babies are not born with any knowledge at all, not even of the existence of society or other beings. Causality is learned from human experience; it is not known at birth.
You can't claim that the human brain learns things using purely statistical methods, and then turn around and argue that evolution cannot encode the same information into the structure of a baby using those exact same methods. Humans have lots of instinctual knowledge: geometry, facial recognition, kinesthetics, emotional processing, an affinity for symbolic language and culture, just to name a few. What we don't have is inborn knowledge of the specific details needed for socialization and survival.
Hume successfully argued that it's impossible to get causality from experience, because causes aren't in experience, only correlations or one event following another. You need a mental concept of causality to draw those connections. Hume called it a habit. Kant argued that it had to be one of the mental categories all humans structure the world of experience with. Space and time are two others. You don't get those concepts from raw sensory data. We have to be born with that capability.
> This is human like intelligence as the resulting program can accomplish human like tasks of recognition without the need for context on what these images might mean.
That's a very limited subset of what I mean by "human like intelligence". And within that specific subset, yes, AI/ML can and has achieved "human level" results. But that same ML model that can recognize cats in vectors of pixels doesn't know anything about falling down. It's never tripped, stumbled, fallen, skinned its palms, and felt the pain and seen the blood that results. It's never known the embarrassment of hearing the other AI kids laughing at it for falling, or the shame of having its AI parent shake its head and look away after it fell down. It's never been in love with the pretty girl AI (or pretty boy AI) and had to wonder "did he/she see me fall and bust my ass?"
Now, giving a computer program some part of the experience of falling is something we could do. We could load the AI into a shell of some sort, and pack it with sensors: GPS receiver, accelerometers, ultrasonic distance detectors, cameras, vibration sensors, microphones, barometric pressure sensor, temperature detector, etc., and then shove it off a shelf. Now it would "know" something about what falling actually is. And that's what I mean by the need for experiential learning in a situated / embodied setting.
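Just to be concrete about what that thin, literal slice of "falling" might look like, here's a rough sketch (all sensor names and thresholds are made up for illustration, not a real robotics API):

    from dataclasses import dataclass

    @dataclass
    class SensorFrame:
        accel_z_g: float         # vertical acceleration, in g
        impact_vibration: float  # vibration sensor reading, arbitrary units
        altitude_m: float        # barometric altitude estimate, metres

    def looks_like_a_fall(frames):
        """Crude heuristic: near free-fall, then a sharp impact, and a net drop in altitude."""
        free_fall = any(f.accel_z_g < 0.3 for f in frames)
        impact = any(f.impact_vibration > 5.0 for f in frames)
        dropped = frames[-1].altitude_m < frames[0].altitude_m - 0.5
        return free_fall and impact and dropped

    episode = [SensorFrame(1.0, 0.1, 1.2), SensorFrame(0.1, 0.2, 0.8), SensorFrame(2.5, 7.3, 0.2)]
    print(looks_like_a_fall(episode))  # True -- and yet nothing like the pain or embarrassment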
While it might be possible in principle to get that knowledge into a program in some other way, my suspicion is that it would be prohibitively difficult to the point of being effectively impossible.
I think the key term here is "concept formation" as well as "knowledge representation". How do we form concepts, and how are they represented internally to make them tractable?
Symbols are one way to represent concepts (or rather, point to them). But with symbols we are limited to surface-level transformations according to a syntax (I'm pretty sure Chomsky said something similar?). What do the concepts actually point to, though, and can we represent that underlying structure programmatically?
As I wrote in another comment, I'm very inspired by the conceptual spaces model:
I used to think ML was missing the ability to formulate abstractions until I read about autoencoders and GANs. If you have not, I suggest looking into them.
In a well-designed autoencoder, the network ends up discovering an abstract representation of the inputs and a conceptual space to express it.
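For anyone curious, here's a minimal sketch of the idea (assuming PyTorch; the layer sizes and fake data are just placeholders): the low-dimensional bottleneck is the learned "conceptual space" referred to above.

    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, input_dim=784, latent_dim=16):
            super().__init__()
            # Encoder compresses the input into a small latent vector (the "concept").
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, 128), nn.ReLU(),
                nn.Linear(128, latent_dim),
            )
            # Decoder tries to reconstruct the original input from that vector.
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ReLU(),
                nn.Linear(128, input_dim),
            )

        def forward(self, x):
            z = self.encoder(x)        # abstract representation
            return self.decoder(z), z  # reconstruction and the latent "concept"

    model = Autoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(64, 784)            # a fake batch standing in for real data
    recon, z = model(x)
    loss = nn.functional.mse_loss(recon, x)  # reconstruction error forces z to be informative
    loss.backward()
    optimizer.step()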
I posted a comment about the Heidegger philosophy side of things a few days ago. Winograd’s book Understanding Computers and Cognition is my reference; it explains the connection really well. He gives the example of hammering to argue that common-sense human intelligence is based on situatedness (related to Heidegger’s “being-in-the-world”) as opposed to manipulating symbolic representations. While engaged in hammering, you don’t have a mental model of a hammer top of mind.
An original source for the Heideggerian critique of symbolic AI projects is Hubert Dreyfus, a philosophy professor at MIT who specialized in Heidegger and argued that his colleagues in the CS department were codifying just the kind of naive views on cognition that Heidegger spent his life criticizing.
See his books “What Computers Can’t Do” and “Being-in-the-World”, and the paper “Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian”.
A basic point is that ordinary human coping does not involve conceptual thinking, schematic rules, or the manipulation of symbols. It’s sort of like “Thinking, Fast and Slow”.
We do not fundamentally live by constantly consulting our inner symbolic representation of the world, though we do that too. The more fundamental way of being is to just cope and care directly without explicit cognitive representation.
So I could attempt to codify an “expert system” for my way of coping with and caring for my cat, let’s say. But it would only be a kind of symbolic ghost of my real way of being, and it would never be sufficient. The more precise I tried to make it, the more complex it would become, until it became a gigantic mess, because it’s fundamentally an inaccurate model.
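For a sense of what that "symbolic ghost" looks like in practice, here's a toy sketch (the rules are invented for illustration, in Python); notice how every rule immediately sprouts exceptions:

    def cat_care_rules(cat):
        """A hopelessly incomplete rulebook for coping with a cat."""
        actions = []
        if cat["meowing"] and cat["bowl_empty"]:
            actions.append("feed")       # ...unless the vet said to restrict food today
        if cat["scratching_door"]:
            actions.append("open door")  # ...unless it's raining, or it's 3 a.m.
        if cat["purring"] and cat["on_lap"]:
            actions.append("stay put")   # ...unless you have to leave for work
        return actions

    print(cat_care_rules({"meowing": True, "bowl_empty": True,
                          "scratching_door": False, "purring": False, "on_lap": False}))
    # ['feed'] -- and every exception you patch in becomes another brittle rule.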
Dreyfus’s “Being-in-the-World” brings up many examples of the way the intelligence of daily life is informal, unconscious, and nonsymbolic: the way we maintain distance from other bodies, which is only roughly approximated by the idea of “personal space”, or the ways in which we live out masculinity and femininity.
“There are no beliefs to get clear about; there are only skills and practices. These practices do not arise from beliefs, rules, or principles, and so there is nothing to make explicit or spell out. We can only give an interpretation of the interpretation already in the practices.”
“Being and Time seeks to show that much of everyday activity, of the human way of being, can be described without recourse to deliberate, self-referential consciousness, and to show how such everyday activity can disclose the world and discover things in it without containing any explicit or implicit experience of the separation of the mental from the world of bodies and things.”
“The traditional approach to skills as theories has gained attention with the supposed success of expert systems. If expert systems based on rules elicited from experts were, indeed, successful in converting knowing-how into knowing-that, it would be a strong vindication of the philosophical tradition and a severe blow to Heidegger's contention that there is no evidence for the traditional claim that skills can be reconstructed in terms of knowledge. Happily for Heidegger, it turns out that no expert system can do as well as the experts whose supposed rules it is running with great speed and accuracy. Thus the work on expert systems supports Heidegger's claim that the facts and rules ‘discovered’ in the detached attitude do not capture the skills manifest in circumspective coping.”
The question then is whether the kind of work a philosopher supposedly does—formal, conscious, symbolic—is especially fundamental to intelligence. Like, is the mind in its basic function similar to an analytic philosopher or logician? In order to make artificial intelligence, should we try to develop a simulation of a logician?
But not even philosophers actually work in the schematic way of an AI based on formal logic...
I think the dichotomy between "formal, conscious, symbolic" and "informal, unconscious, non-symbolic" may be false. We will find out in a few hundred years, when AI matures. Of course I don't think we will have an AGI based on first-order logic a la the 1960s efforts. On the other hand, deep neural networks are not that far from "informal, unconscious, non-symbolic", but they are still based on formal and symbolic foundations.
Well, every dichotomy is false, probably even the dichotomy between dichotomies and non-dichotomies...
Dreyfus’s critique is about the first order (or whatever) logic programs, and I don’t think neural nets are cognitivistic in the same way, but there’s also the point that until they live in the human world as persons they will never have “human-like intelligence”.
I think it’s interesting to think of AI in a kind of post-Heideggerian way that includes the possibility that it can be desirable or necessary for us human beings to submit and “lower” ourselves to robotic or “artificial” systems, reducing the need for the AIs to actually attain humanistic ways of being. If the self-driving cars are confused by human behaviors, we can forbid humans on the roads, let’s say. Or humans might find it somehow nice to let themselves act within a robotic system, like maybe the authentic Heideggerian being in the world is also a source of anxiety (anxiety was a big theme for Heidegger after all).