
Humans assign a lot of, well, meaning to meaning. It turns out that you can get a really good score on tasks that you would superficially think require actual understanding, without programming any of that in.

Does this mean the neural network has learned about meaning? Does it mean that it has just gotten really good at faking it? Does it mean that meaning itself doesn't really exist, and it's just a shorthand for advanced pattern matching? Does it matter?

Honestly, we don't know. But we've been thinking about it for a very long time. See for example the famous Chinese Room thought experiment:

https://en.wikipedia.org/wiki/Chinese_room



> Does it matter?

As long as you don't make reckless assumptions, then no, it doesn't matter for some applications. That's unlike (not going to name them here) those building a cult-like belief that GPT-like models will, in the near future, perform most if not all tasks better than humans.

Where it really matters is for mission-critical applications. For example: in a Windows or Linux terminal, would you allow GPT to run terminal commands automatically based on events?


Try driving a car around without both a conceptual and a causal-systems understanding of the world - meaning matters for survival.


Try flying in 3-dimensional space in real time without both a conceptual and a causal-systems understanding of the world.

https://en.wikipedia.org/wiki/Drosophila_melanogaster#Connec...


That's easier than having to follow traffic rules in an ever-changing environment of people, cars, and kids doing random things. This is why cats and dogs, with their great perception skills, would get themselves and others killed in traffic if they tried to drive. But for sure that animal has modeled a spacetime world of events, with some memory of event patterns, to anticipate and predict what might happen next.

I live in SF and I have not yet seen one of the many AVs here drive without a driver. Once that really starts happening at any scale, we will see what happens next for sure. But there is definitely a Theranos kind of promise to AVs at the moment, and so much money riding on the tech working...


How many g's can that pull?

If a car could easily stop in the space of a meter then it would be so easy to make self-driving safe.

Not that I think a car needs to understand anything more complex than momentum, but you're not offering a very strong argument on the matter of car navigation.
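A back-of-the-envelope calculation (my numbers, not from the thread) shows why stopping in a meter is off the table: from v^2 = 2ad, the deceleration needed to stop in 1 m from city speed is an order of magnitude beyond what tires can deliver.

```python
# Deceleration needed to stop within a given distance, in multiples of g.
# Assumed scenario: 50 km/h city speed; ~1 g is roughly the dry-tire limit.
G = 9.81  # m/s^2

def stopping_g(speed_kmh: float, distance_m: float) -> float:
    """Constant deceleration to stop from speed_kmh within distance_m,
    from v^2 = 2*a*d, expressed in g's."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v ** 2 / (2 * distance_m) / G

print(round(stopping_g(50, 1.0), 1))   # stop in 1 m:  ~9.8 g (impossible)
print(round(stopping_g(50, 12.0), 1))  # stop in 12 m: ~0.8 g (near the limit)
```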


The strong argument is that a car can't stop immediately, and we humans are always predicting and anticipating what may happen next and taking precautionary action. For example: you are driving along and see kids playing in a yard with a ball. The ball is thrown heading for the street, and some kid in the lawn is looking up, running directly for the street. You would know to expect that kid to run right between the parked cars into the street, and you would slow down. Our AV friend would kill him.

We humans are constantly predicting what might happen next, based on patterns of events from systems whose causality we understand without realizing it - a basic survival skill that current AVs entirely lack.
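The "can't stop immediately" part is just kinematics. A rough illustration (all numbers assumed, not from the thread): total stopping distance is reaction distance plus braking distance, so a driver who anticipates and pre-slows before the parked cars can stop in roughly half the distance of one who only reacts when the kid appears.

```python
# Total stopping distance = reaction distance + braking distance.
# Assumed values: 1.0 s perception-reaction time, 0.7 friction coefficient.
G = 9.81       # m/s^2
MU = 0.7       # assumed dry-asphalt friction
REACT_S = 1.0  # assumed perception-reaction time, seconds

def stopping_distance(speed_kmh: float) -> float:
    """Distance covered while reacting, plus braking at constant friction."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * REACT_S + v ** 2 / (2 * MU * G)

# Reactive driver still doing 50 km/h when the kid steps out;
# anticipating driver already slowed to 30 km/h near the parked cars.
print(round(stopping_distance(50), 1))  # ~27.9 m
print(round(stopping_distance(30), 1))  # ~13.4 m
```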


Some situations are much better with intelligence, but I don't think your example is very convincing of your point either. If the kid is running toward the street then a momentum calculation is plenty.


The point is that the kid is seen running in the yard as you approach; however, parked cars obscure the view, so there is no way to know the kid will continue, once no longer visible between the cars, into the street, without anticipating that that might happen. Momentum is not helpful the moment the kid is not in direct view.

Why do you think so many animals, with such great perception, end up road kill? The point is, perception does not a safe driver make!


> Momentum is not helpful the moment the kid is not in direct view.

Assuming things still exist for one or two seconds after losing sight of them isn't a difficult task. It's still a pretty basic momentum calculation. It's not about modeling the mind of the child to know if they'll continue: the dumbest option says motion will continue and gives you the safe result here.
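To make the "dumbest option" concrete, here is a minimal sketch (a hypothetical tracker, not any AV stack's actual code) of dead-reckoning an occluded pedestrian track: hold the last observed velocity and extrapolate, and the model already predicts the kid in the street.

```python
# Constant-velocity extrapolation of a track through a brief occlusion.
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # last observed position (m); street edge at y = 0
    y: float
    vx: float  # last observed velocity (m/s)
    vy: float

def predict(track: Track, dt: float) -> tuple[float, float]:
    """Dead-reckon the occluded track forward: assume motion just continues."""
    return track.x + track.vx * dt, track.y + track.vy * dt

kid = Track(x=0.0, y=6.0, vx=0.0, vy=-3.0)  # 6 m from curb, running toward it
x, y = predict(kid, dt=2.0)                 # two seconds out of sight
print(y <= 0.0)  # True: the dumb model already puts the kid at the street
```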

> Why do you think so many animals, with such great perception, end up road kill?

Because they're not cautious around cars and/or wait for the last second on purpose? Switching to the perception of the thing getting hit is a very different context.


Well, we can see what actually happens in the near future, assuming any Waymos drive around SF without a driver - so far, all the many I've seen have had a driver.


They've had a lot of driverless Phoenix service, I think.


> meaning matters for survival

That's the root source of meaning, the most fundamental reason we assign value to states and actions. It's certainly not something that happens in just one part of the brain; it's an agent-in-environment thing.

We should give GPT a pair of legs and make its survival dependent on its behaviour to bootstrap the same.



