
The question then is whether the kind of work a philosopher supposedly does—formal, conscious, symbolic—is especially fundamental to intelligence. Like, is the mind in its basic function similar to an analytic philosopher or logician? In order to make artificial intelligence, should we try to develop a simulation of a logician?

But not even philosophers actually work in the schematic way of an AI based on formal logic...
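For concreteness, the style of system being criticized looks roughly like this: a forward-chaining inference engine over hand-written if-then rules, where every belief the machine holds is an explicit, human-readable symbol. (A minimal Python sketch; the rules and facts are toy examples invented for illustration.)

    # Minimal sketch of GOFAI-style inference: forward chaining over
    # hand-written if-then rules. All rules and facts are toy examples.

    facts = {"socrates_is_a_man"}

    # Each rule: if all premises are in the fact base, add the conclusion.
    rules = [
        ({"socrates_is_a_man"}, "socrates_is_mortal"),
        ({"socrates_is_mortal"}, "socrates_will_die"),
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # every conclusion is an explicit, inspectable symbol

The point of the critique is that human thinking, even a logician's, does not actually proceed by anything like this explicit rule-matching.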



I think the dichotomy between "formal, conscious, symbolic" and "informal, unconscious, non-symbolic" may be false. We will find out in a few hundred years when AI matures. Of course I don't think we will have an AGI based on first-order logic à la the 1960s efforts. On the other hand, deep neural networks are not that far from "informal, unconscious, non-symbolic", but they are still built on formal and symbolic foundations.
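To make that last point concrete: the substrate of a neural net is entirely formal (matrix algebra plus a fixed nonlinearity), yet nothing in the weights is a human-readable symbol the way the rules in the sketch above are. (A minimal sketch with random, untrained weights, purely illustrative.)

    import numpy as np

    rng = np.random.default_rng(0)

    # Two-layer net: the computation itself is fully formal and symbolic
    # (matrix products and tanh) ...
    W1 = rng.normal(size=(4, 8))
    W2 = rng.normal(size=(8, 2))

    def forward(x):
        h = np.tanh(x @ W1)   # hidden activations
        return h @ W2         # output scores

    x = rng.normal(size=(1, 4))
    print(forward(x))
    # ... but no individual weight or activation denotes a concept the way
    # "socrates_is_mortal" does; whatever "meaning" a trained net has is
    # distributed across the weights rather than located in any one symbol.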


Well, every dichotomy is false, probably even the dichotomy between dichotomies and non-dichotomies...

Dreyfus’s critique is about first-order (or whatever) logic programs, and I don’t think neural nets are cognitivistic in the same way, but there’s also the point that until they live in the human world as persons they will never have “human-like intelligence”.

I think it’s interesting to think of AI in a kind of post-Heideggerian way that includes the possibility that it can be desirable or necessary for us human beings to submit and “lower” ourselves to robotic or “artificial” systems, reducing the need for the AIs to actually attain humanistic ways of being. If self-driving cars are confused by human behaviors, we can forbid humans on the roads, let’s say. Or humans might find it somehow nice to let themselves act within a robotic system; maybe authentic Heideggerian being-in-the-world is itself a source of anxiety (anxiety was a big theme for Heidegger, after all).



