It's really not clear to me what gives a human the nearly-magical ability to make decisions that a computer could not possibly make.
If anything, I think we've seen that computers are much better than humans at making decisions in an ever-expanding set of narrow contexts (eg.: chess, go, protein folding...). It's not so much a matter of "can a computer do it better", it's more a question of "when are we going to figure out how to break down the problem in a way that a computer can solve much better than a human".
Your examples are things that can mostly be broken down or modeled logically, though. Chess and Go are hard, so I guess it seems like AI for them means there's some sort of fundamental breakthrough.
But consider how a human or even a dog catches a ball. You can study physics and learn that a thrown ball follows a parabola, and that the shape of the parabola is determined by the velocity at which it leaves the thrower's hand. But none of that matters. You try to catch a ball. Miss a bunch of times. Practice. And then you get better at it. And then you can catch all manner of things coming at you at various velocities. You can unconsciously predict a ball's path accurately enough to catch it without knowing any physics. Some people can do it purely on instinct.
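Not claiming this is what a brain does, but as a toy sketch of "predict the path from samples alone, no physics knowledge encoded", a plain curve fit gets you there. The numbers here (launch speed, gravity) are made up just to generate the observations; the fitting step never sees them:

```python
import numpy as np

# Simulated "observations": the ball's height sampled early in flight.
# These constants generate the data; the learner only ever sees (t, y) pairs.
g, v0, y0 = 9.8, 12.0, 1.5
t_seen = np.linspace(0.0, 0.8, 9)
y_seen = y0 + v0 * t_seen - 0.5 * g * t_seen**2

# "Practice": fit a quadratic purely from the samples -- no physics encoded.
coeffs = np.polyfit(t_seen, y_seen, deg=2)

# Extrapolate the rest of the flight, e.g. height at t = 2.0 s,
# and compare against the true trajectory the learner never saw.
y_pred = np.polyval(coeffs, 2.0)
y_true = y0 + v0 * 2.0 - 0.5 * g * 2.0**2
print(abs(y_pred - y_true))  # near zero: the parabola was recovered from data alone
```

The fit "discovers" the parabola without anyone handing it the concept of projectile motion, which is roughly the point being argued about.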
This line of thought is moving the goal posts, but I think even for people who have some understanding of what AI is, the current state of the art doesn't match our intuition about all the things a human does to drive a car. And the sorts of mistakes that current self-driving makes don't really seem better than a human's.
I absolutely think the type of decision we're talking about can be modelled. I'm not suggesting it's trivial, and I'm not saying there's a nice and tidy closed analytical solution, which is what you appear to be focused on with your parabola/physics-based example, but I absolutely think statistical methods (or "AI/ML" if you prefer) can be applied to the problem. It's pretty much how your brain solves these sorts of problems too, there's no magic to intuition or human decision-making.
>I absolutely think the type of decision we're talking about can be modelled.
The possibility is not in dispute. Billions of biological organisms do these things every day.
But this is:
>It's pretty much how your brain solves these sorts of problems too, there's no magic to intuition or human decision-making.
Is it how the brain works?
My ball example isn't about it having a closed solution. If you throw a ball I can track it with my eyes and I know it's going to follow some path. If there is a gust of wind mid flight I can even attempt to make corrections on the fly. Someone with no notion of projectile motion can do this and never come up with the concept of projectile motion.
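The mid-flight correction part can be sketched the same way: keep refitting to only the most recent observations, so when something perturbs the flight, the prediction adapts once post-perturbation samples dominate the window. Again, all constants (throw speed, gust size, window length) are invented for illustration:

```python
import numpy as np

g = 9.8

def height(t):
    # Ball thrown upward at 12 m/s from 1.5 m; a "gust" at t = 0.5 s
    # knocks 3 m/s off its upward speed, breaking the original parabola.
    if t < 0.5:
        return 1.5 + 12.0 * t - 0.5 * g * t * t
    y_half = 1.5 + 12.0 * 0.5 - 0.5 * g * 0.5**2
    v_half = 12.0 - g * 0.5 - 3.0
    dt = t - 0.5
    return y_half + v_half * dt - 0.5 * g * dt * dt

# Track the ball, always refitting a quadratic to a sliding window of the
# last 6 observations, so old pre-gust samples age out of the estimate.
window_t, window_y = [], []
for t in np.arange(24) * 0.05:  # samples at t = 0.00, 0.05, ..., 1.15
    window_t.append(t)
    window_y.append(height(t))
    window_t, window_y = window_t[-6:], window_y[-6:]
    if len(window_t) >= 3:
        coeffs = np.polyfit(window_t, window_y, deg=2)

# By the end, the window holds only post-gust samples, so the fit
# predicts the *corrected* path rather than the original throw.
print(abs(np.polyval(coeffs, 1.5) - height(1.5)))  # near zero
```

Nothing in the tracker knows a gust happened; it just trusts recent data over old data, which is one crude way to get the "correct on the fly" behaviour.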
The "hardware", so to speak, figures this out on its own, and the same "system" can be applied to infinitely many problems without prior knowledge and get results. This would be like showing a Tesla some chess games, and the car realising this is some sort of game, then learning how to play, or inventing something new to do with the pieces.
The results so far are really terrible compared to our intuition. We have Teslas that drive straight into road barriers and can't predict that a person might reappear after moving behind an obstruction.
This is even more goal-post moving, and I admit it's very unfair. But I'm not babysitting a car.
Yeah, I concede that it absolutely is a lot of hand-waving. But we're also talking about emergent properties: it's possible to understand why they arise, what the emergent properties are, and how the underlying system works, without having full information about all of it. In fact, that's closely analogous to our thermodynamic understanding of gases, or to fluid dynamics.
I don't want to mislead, there is an incredible amount of stuff we don't know... But we do understand more than most well-informed people think, even people in adjacent fields.
I think there's some niche driving scenarios that humans will continue to beat computers at. What comes to mind most readily are meta-knowledge things like "It's 2:45 and I saw the school bus go by, right now it's letting kids off and blocking traffic up ahead so there will be no traffic on this cross-street". Or "The football game on the radio is ending soon, I better avoid the streets near the stadium because there will be a lot of pedestrians".
> It's really not clear to me what gives a human the nearly-magical ability to make decisions that a computer could not possibly make.
IMO, it's 16+ years of life experience that a computer doesn't get. Sure we train computers on what things likely are in pictures, and where they are on some sort of map. But that gives zero context into the 50,000 other things going on at any moment in a busy street.