
I think the opposite is true. Humans think in terms of symbols to model the world around them. A child is born knowing nothing, a completely blank slate, and slowly he learns about his surroundings. He discovers he needs food, and that he needs to be protected and cared for. He discovers he doesn't like pain. If you talk to a 3-year-old child you can have a fairly intelligent conversation about his parents and about his sense of security, because this child has built a mental model of the world as a result of being trained by his parents. This kind of training requires context and cross-referencing of information, which can only be done by inference. You can't train a child by flashing 10,000 pictures at him, because pictures are not experience; even adults can be fooled by pictures, which are only a 2D representation of 3D space. So all these experiences through which a small child comes to know the world arrive symbolically; these symbols model the world and give even a small child the ability to reason about external things and classify them. This is human-level intelligence.

Human-like intelligence is training a computer to recognize pixel patterns in images so it can make rules and inferences about what those images mean. This is human-like intelligence in the sense that the resulting program can accomplish human-like recognition tasks without needing any context for what the images might mean. But there is no context about any kind of world involved; this is pure statistical training.
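
To make concrete what "pure statistical training" looks like, here is a minimal sketch (hypothetical; it uses scikit-learn's bundled digits dataset as a stand-in, since nothing in this thread specifies a dataset or model):

    # Hypothetical illustration: "recognizing" images purely from pixel statistics.
    # The model never gets any context about what a digit is; it only correlates
    # raw pixel intensities with label indices.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()  # 8x8 grayscale images, flattened to 64 pixel values each
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0
    )

    clf = LogisticRegression(max_iter=2000)  # a pure statistical fit: weights over raw pixels
    clf.fit(X_train, y_train)

    print("test accuracy:", clf.score(X_test, y_test))  # high accuracy, no "understanding"

The point is that the fitted weights encode correlations between pixel values and labels, and nothing else; there is no model of a world in which digits exist.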



> Humans think in terms of symbols to model the world around him. A child is born knowing nothing, a completely blank slate, and slowly he learns about his surroundings.

Actually, the research has found that newborn infants can perceive all sorts of things, like human faces and emotional communication. There is also a lot of inborn knowledge about social interactions and causality. The embodied cognition idea is looking at how we experience all that.

By the way, Kant demonstrated a couple of centuries ago that the blank slate idea was unworkable.


>Actually, the research has found that newborn infants can perceive all sorts of things, like human faces and emotional communication.

Yes, that's called sensory input. A child deprived of sensory input as a newborn can die, because there is nothing there to show the baby its own existence; this is the cause of crib death (notice that crib death is not called arm death, because a baby doesn't die in its mother's arms).

>There is also a lot of inborn knowledge about social interactions and causality.

No, babies are not born with any knowledge at all, not even of the existence of society or other beings. Causality is learned from experience; it is not known at birth.


You can't insist that the human brain learns things using purely statistical methods, and then turn around and argue that evolution cannot encode the same information into the structure of a baby using those exact same methods. Humans have lots of instinctual knowledge: geometry, facial recognition, kinesthetics, emotional processing, and an affinity for symbolic language and culture, just to name a few. What we aren't born with is knowledge of the specific details needed for socialization and survival.


Hume successfully argued that it's impossible to get causality from experience, because causes aren't in experience, only correlations or one event following another. You need a mental concept of causality to draw those connections. Hume called it a habit. Kant argued that it had to be one of the mental categories all humans structure the world of experience with. Space and time are two others. You don't get those concepts from raw sensory data. We have to be born with that capability.


> This is human-like intelligence in the sense that the resulting program can accomplish human-like recognition tasks without needing any context for what the images might mean.

That's a very limited subset of what I mean by "human-like intelligence". And within that specific subset, yes, AI/ML can and has achieved "human-level" results. But the same ML model that can recognize cats in vectors of pixels doesn't know anything about falling down. It's never tripped, stumbled, fallen, skinned its palms, and felt the pain and seen the blood that results. It's never known the embarrassment of hearing the other AI kids laughing at it for falling, or the shame of having its AI parent shake its head and look away after it fell down. It's never been in love with the pretty girl AI (or pretty boy AI) and had to wonder, "did he/she see me fall and bust my ass?"

Now, giving a computer program some part of the experience of falling is something we could do. We could load the AI into a shell of some sort, pack it with sensors (GPS receiver, accelerometers, ultrasonic distance detectors, cameras, vibration sensors, microphones, a barometric pressure sensor, a temperature detector, etc.), and then shove it off a shelf. Then it would "know" something about what falling actually is. And that's what I mean by the need for experiential learning in a situated / embodied setting.
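
For a rough sense of what a fall might look like to that sensor package, here is a sketch (purely hypothetical; the sensor type and the sample values are made up, and the only real physics used is that measured acceleration drops toward zero during free fall):

    # Hypothetical sketch: deciding from raw accelerometer data that "falling" is happening.
    # During free fall the measured specific force drops toward zero, so we flag samples
    # whose acceleration magnitude is far below 1 g.
    import math
    from dataclasses import dataclass

    G = 9.81  # m/s^2

    @dataclass
    class ImuSample:
        t: float   # seconds since boot
        ax: float  # accelerometer axes, m/s^2
        ay: float
        az: float

    def is_free_fall(sample: ImuSample, threshold: float = 0.3 * G) -> bool:
        """True when the measured acceleration magnitude is near zero (free fall)."""
        magnitude = math.sqrt(sample.ax**2 + sample.ay**2 + sample.az**2)
        return magnitude < threshold

    # Resting on the shelf: the IMU reads roughly 1 g straight up.
    print(is_free_fall(ImuSample(t=0.0, ax=0.0, ay=0.0, az=9.8)))  # False
    # Shoved off: readings collapse toward zero until impact.
    print(is_free_fall(ImuSample(t=0.4, ax=0.1, ay=0.2, az=0.5)))  # True

Of course, a threshold on one sensor is nothing like "knowing" what falling is; the point of the thought experiment is that the program would at least have its own sensory record of the event to learn from.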

While it might be possible in principle to get that knowledge into a program in some other way, my suspicion is that it would be prohibitively difficult to the point of being effectively impossible.


You've obviously never had children.


[citation needed]



