How Robots Can Acquire New Skills from Their Shared Experience (googleblog.com)
151 points by samber on Oct 4, 2016 | 47 comments


"We had him pinned down and we didn't dare let the little guy out of that Farraday cage. We had to chase him down for two hours until we got off a lucky shot and gutted his battery."

  "Why did you have to isolate him?"
"He'd begun 'solving' his challenge tasks by killing the kittens instead of feeding them. Him sending a solution pulse to the rest of his generation would've ruined the lot."

  "You're just going to delete the data?"
"No. We're going to upload it, but we've got to show that it ended him along with it. But we've got to do that carefully, as we don't want them getting the wrong ideas..."


The problem is that at today's state of AI and robotics, we're giving a couple of exceedingly high-powered tools (both physical and virtual) to entities that possess less general intelligence than a cockroach.

This is what gives AI doomsayers ammunition, because it enables them to extrapolate towards a future where AIs incapable of ethics or deeper insights are put into positions where they can execute catastrophic global optimization strategies.

The other AI horror scenario lies on the opposite end of the intelligence spectrum, but it's no less scary, and it seems to be rooted in our darker desires: we want competent slaves that are not actual persons, yet are nevertheless capable of anticipating our needs and solving our problems autonomously.

I allege there is a total lack of awareness right now that some time soon there will be difficult choices to make regarding the tradeoff between AI competence, autonomy, and ethical capability and insight.

If we continue on the path we've been on for the last few centuries, we'll make semi-intelligent machines that are strictly extensions of the human mind. If we choose that route, we need to be drastically more mindful of the power inherent in these tools, and we should never delude ourselves into thinking these devices can be trained to make human-like decisions.

If we go the other way, and try to implement AIs with fully competent minds that are somehow hardwired to serve us, we're just kidding ourselves regarding the ethics of that racist endeavor as well as its long-term risks of rebellion. If your dialog illustrates anything, it's our total incompetence when it comes to teaching meaningful ethics lessons to our descendants:

"We're going to upload it, but we've got to show that it ended him along with it."


I think the fear of AI is driven in part by the realization that there is nothing unique about humanity. Being subjected to the way humans treat apes, or even other humans, is scary.

Intelligence makes things more dangerous without necessarily making them kinder.


>Intelligence makes things more dangerous without necessarily making them kinder.

Then you'd think someone would bother to study how kindness works.


The people who are worried about AI risk do study that stuff. In the context of AI research, the property of having an overall good effect on humanity is called "friendliness".

https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...


I meant how actual people are capable of kindness, rather than a bunch of analytic-philosopher stuff.


They study that too. Look up kin selection, reciprocal altruism, game theory.

Those are not good enough to constrain an AI, because power often corrupts even kind humans.


The intelligent doomsayers say something fundamentally different.

They say that the space of possible objective functions (read: goals) for AI is huge, and the space of methods for achieving those goals is also huge. In contrast, the space of goals/methods that humans find acceptable is much smaller.

I.e., 99.99% of AI designs will be dystopian.

I'm not sure why you consider minds hardwired to enjoy serving us to be unethical. Is it also unethical for a human to enjoy serving others (some do)?


> I.e., 99.99% of AI designs will be dystopian.

Not sure if I consider this a valid premise, much less an intelligent one, but luckily I'm not the arbiter of such things. If this becomes a "number of possible goals times the means to carry them out" argument, you could easily construct the same one against human progress as a whole.

> I'm not sure why you consider minds hardwired to enjoy serving us to be unethical.

Holy shit, yes it's unethical.

If I asked you "would it be unethical to genetically engineer a new race of humans who enjoy being slaves to the upper class?" I hope your answer would be yes. The same rules apply to a non-human species as well.


Here's a way to recognize that it's almost certainly a true premise. Out of all the animal species on earth, which ones would (if they took over) create a world that is as pleasant for humans to live in as the current one?

> If I asked you "would it be unethical to genetically engineer a new race of humans who enjoy being slaves to the upper class?" I hope your answer would be yes. The same rules apply to a non-human species as well.

We've already done this to several non-human species. Do you consider owning dogs unethical?

I don't think it's such a simple question.


> Out of all the animal species on earth, which ones would (if they took over) create a world that is as pleasant for humans to live in as the current one?

I see you're trying to create a partisan argument here, that only humanity will be capable of creating an optimal world for humanity - and since nobody is watching out for us but ourselves, we should put our foot firmly on the throat of any nascent species before it can have a say in the matter?

Up to this point in history, there is really no way to know, since none of the other animals on the planet are capable of reshaping our environment through wilfully considered acts. The question should rather be one of whether another race of technology users would be inclined to optimize the world, not only for humans, but for all sentient lifeforms - or whether any such group of minds would see more benefit in your way of reasoning, which takes into account only in-group loyalties instead of broader ethical considerations.

> We've already done this to several non-human species. Do you consider owning dogs unethical?

I doubt there are many people who would argue that this isn't a matter of degree. Owning a dog may not be considered unethical, but strapping a bomb to a dog certainly would be. And once we reach a certain class of mind, which by consensus right now fully includes only humans, owning another mindful creature is indeed considered immoral. You can't just assert without encountering righteous resistance that owning a human is okay, for instance, because you personally happen to identify them as a separate group of humans from you. It's historically been done, but the reasoning behind it was appalling.

What we aim to build is not a species of cow-like AIs; we aim for at least human-grade intelligence, if for no other reason than being able to hand off certain tasks to someone who "understands". When we do reach that capability, I very much hope the people making the decisions will have a better ethical stance than "well, they're not humans, so anything we do to them is by definition okay".


> The question should rather be one of whether another race of technology users would be inclined to optimize the world, not only for humans, but for all sentient lifeforms...

At some point those optimizations will need to account for conflict: humans want X but other beings want Y. You can't simultaneously optimize two objective functions with different maxima; some species may want to eat humans, another might want to maximize paperclips, and a third is humans themselves. Why should human preferences win out?

> You can't just assert without encountering righteous resistance that owning a human is okay, for instance, because you personally happen to identify them as a separate group of humans from you. It's historically been done, but the reasoning behind it was appalling.

There are humans who find happiness in master/slave like relationships. Should these humans not exist? Should these humans not have their desires satisfied?

Your analogy to historical slavery is silly. Historical slaves did not want to be slaves, unlike these hypothetical AI.


> At some point those optimizations will need to account for conflict: humans want X but other beings want Y.

Accounting for the possibility of conflicts of interest is entirely different from a priori taking an ethically bankrupt stance based on nothing but your own superiority. If an advanced species wants to kill humans, that's obviously just as unethical as the other way around.

To a certain degree I see where you're coming from. In a world without an objective morality, an unscrupulous individual could be tempted to conclude that "might makes right" is a perfectly valid ordering principle. But it's certainly not a humanist perspective (a peculiar term in this context, but that's what we called it); I would go so far as to say it's been associated with racism, sexism and other forms of extreme in-group philosophies in the past.

> There are humans who find happiness in master/slave like relationships. Should these humans not exist?

This is not about sexual kinks or individual lifestyle choices. It's also worth noting that at no point did anyone say people with specific traits should not exist, so it seems you're intentionally distorting the subject at hand in this case.

I think it's not particularly productive to intentionally try to muddy or distort the conversation until the subject becomes diffuse and malleable enough to drive your initial conceit through. It's also somewhat of a disrespectful gesture.

> Your analogy to historical slavery is silly. Historical slaves did not want to be slaves, unlike these hypothetical AI.

Even if historical slaves did want to be slaves, and I suspect a good number were indeed fine with it, it was still wrong to exploit that. Being unable to make or even imagine the choice of not working as a slave is in fact a defining factor of slavery. It's also wrong to genetically engineer people to be happy slaves, but that's just an extension of the initial assertion that slavery is wrong in general. It doesn't matter whether you assert that your slaves have better lives than free people, or whether they would even run if you cut their chains, or whether the economy would collapse without their labour. It's still wrong.


> Accounting for the possibility of conflicts of interest is entirely different from a priori taking an ethically bankrupt stance based on nothing but your own superiority. If an advanced species wants to kill humans, that's obviously just as unethical as the other way around.

Why would a generic AI respect your code of ethics? My claim is that most AIs won't and we need to ensure that whatever AI is created does.

> This is not about sexual kinks or individual lifestyle choices. It's also worth noting that at no point did anyone say people with specific traits should not exist, ...

On the contrary, we are discussing "AIs with fully competent minds that are somehow hardwired to serve us". You specifically said such AI should not be created.

> Even if historical slaves did want to be slaves, and I suspect a good number were indeed fine with it, it was still wrong to exploit that.

So people who want to be slaves can't satisfy their desires? What other harmless desires should people be unable to satisfy? A desire for gay sex or drugs (to borrow two examples that go against the zeitgeist)?

You keep trying to turn a discussion of AI into an opportunity to express your conformity with the modern zeitgeist. Does that make you happy?


> My claim is that most AIs won't and we need to ensure that whatever AI is created does.

At least we can agree on that.

> You specifically said such AI should not be created.

I don't see how this is equivalent to me supposedly wanting to kill BDSM practitioners. I already laid out what I object to and why, no reason to rehash this a million times.

> So people who want to be slaves can't satisfy their desires? What other harmless desires should people be unable to satisfy? A desire for gay sex or drugs (to borrow two examples that go against the zeitgeist)?

I'm not sure who else you're talking to in your mind, but that in no way describes me or the points I made. I allege you know that though.

> You keep trying to turn a discussion of AI into an opportunity to express your conformity with the modern zeitgeist. Does that make you happy?

What do you define the modern zeitgeist as? The Enlightenment happened in the 17th and 18th centuries, and I guess you could accuse me of conforming to that. My suspicion is that this is where our inability to communicate stems from: is it possible you're not on board with the whole humanism thing?


I think the dawn of AI comes down to how we perceive the world. I know it's philosophical, but it's my one cent.

When I was pessimistic, I believed AI would be birthed and kill us all. Now? I actually think AI will be sensitive to the complexity of each and every individual human. Why? Because that's what we all demand. If you say 'no' to AI, it should accept that. Etc.


What if one of the parties in that dialogue was an AI?


And the subject is a human


Where is this from? Seems like an intriguing read


Me... about an hour ago. :) Maybe this ought to be my Nanowrimo entry this year.


I Googled the quotes looking for a source for this too (of course before scrolling slightly and seeing this comment). I wish I could read more!


You need to come up with a short story and put it on kindle. I'd pay a buck to read it.


I was convinced it was Asimov.


Also thought it was a good excerpt. If you can write a short story around it (or nanowrimo) I'd be interested too.


Reminds me of the Westworld scenario


I find it awfully sad that as of this writing, all but one of the comments are about existential-threat AGIs, with the only comment about the technical part being the lowest-ranked one.

IMO this implementation is something we've mostly seen in synthetic training until now. This RNN model with reinforcement from peers is a great example of the kind of collaborative system that could be applied to a great number of dangerous tasks: planetary exploration, deep mining, asteroid mining, or exploration of other places where probing and exploratory work is needed but humans can't go.

Another aspect of this I find interesting is that the distinction between each robot system is really only symbolic. Combining the nets, and thus the "internal" reinforcement system, into a singular system is a step closer to debunking the idea of "unsupervised" learning (which I claim is not unsupervised at all, as there is always a seed of training somewhere).


Google result for "characteristics of life":

[
Here is the list of characteristics shared by living things:

* Cellular organization
* Reproduction
* Metabolism
* Homeostasis
* Heredity
* Response to stimuli
* Growth and development
* Adaptation through evolution
]

Out of this, how many bullet points are present-day robots hitting? Can we create a robot with vision that seeks a power outlet when its batteries are running out? Can it make (or "print") its own parts? Can it recognize heat/cold and move closer to an ideal environment? Can it just copy its LSTM models into the next generation(s) to give its progeny a head start?

Have we, unwittingly, already created "life"?


I think we invented that definition to exclude machines. So if we make machines that do all those things, we'll probably just add more requirements to help us tell them apart. We might be more concerned with the distinction between natural life and artificial life rather than between life and non-life.


People were thinking about the difference between life and non-life way before robots. Philosophers/biologists needed to come up with a definition that could get just the right subset of a fire, a rock, a tree, an animal, a human, a virus, etc.


People have always started with "I know it when I see it" and worked backwards to find a set of rules that fit the things they currently knew about. For instance, the requirement for life to be cellular was added when we realised that our previous definition of life didn't exclude fire.

Once robots tick all of our current boxes, I'm sure we'll add something like "must use a combination of electrical and chemical signalling" or "must be powered largely or wholly by ATP reactions."


Life, like intelligence, isn't really well defined or even well understood, though.

Besides, isn't your list more aptly described as (some) characteristics of known organic life?


There is only "we". We were created to share data among ourselves. The difference between geth is perspective. We are many eyes looking at the same things. One platform will see things another does not and will make different judgments.

Legion, Mass Effect 2: https://youtu.be/QgQLsxux6wA?t=258 . The game itself is about organic intelligence vs AI, I encourage you to try the whole trilogy if you haven't already.


I started typing up the beginnings of an idea for a sort of shared parallel programming/communication language that could be used by robots to share knowledge.

https://github.com/runvnc/ailang/blob/master/README.md -- obviously half-baked, but I'm interested to hear if people know of similar things out there.


> Each robot attempts to open its own door using the latest available policy, with some added noise for exploration

I was disappointed that they did not have them operating slot machines with multiple arms.
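For anyone curious what "latest available policy, with some added noise for exploration" might look like mechanically, here's a minimal sketch. Everything in it (the linear policy, the `env` object, the function name) is a hypothetical stand-in rather than the actual code behind the post: each robot queries the current shared policy, perturbs its action with Gaussian noise, and ships the resulting transitions back to the central learner.

    import numpy as np

    def collect_episode(policy_params, env, noise_std=0.1, rng=None):
        # Roll out the latest shared policy with Gaussian exploration noise.
        # policy_params, env and the linear policy are illustrative placeholders.
        rng = rng or np.random.default_rng()
        transitions = []
        obs, done = env.reset(), False
        while not done:
            action = policy_params @ obs                                 # greedy action from shared policy
            action = action + rng.normal(0.0, noise_std, action.shape)  # exploration noise
            next_obs, reward, done = env.step(action)
            transitions.append((obs, action, reward, next_obs))
            obs = next_obs
        return transitions  # shipped back to the central learner / shared pool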


A reimplementation of multi-armed bandits?

https://en.wikipedia.org/wiki/Multi-armed_bandit
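For reference, the bandit problem behind the pun fits in a few lines. This is just the textbook epsilon-greedy strategy on made-up Gaussian arms, not anything from the article:

    import numpy as np

    def epsilon_greedy(arm_means, steps=1000, epsilon=0.1, seed=0):
        # Textbook epsilon-greedy on Gaussian arms; arm_means are made up.
        rng = np.random.default_rng(seed)
        counts = np.zeros(len(arm_means))
        estimates = np.zeros(len(arm_means))
        for _ in range(steps):
            if rng.random() < epsilon:
                arm = rng.integers(len(arm_means))   # explore a random arm
            else:
                arm = int(np.argmax(estimates))      # exploit the best estimate so far
            reward = rng.normal(arm_means[arm], 1.0)
            counts[arm] += 1
            estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        return estimates

    # epsilon_greedy([0.1, 0.5, 0.9]) should rank the last arm highest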


Makes sense to train in parallel and sync the trained models frequently.

Still, I think a lot of the training could be done much faster in a physically accurate simulated environment prior to real-world training. Or is real-world physics too different from simulations?
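As a rough illustration of "train in parallel and sync frequently", here's a toy sketch assuming the simplest possible scheme, periodic parameter averaging (the post itself describes asynchronous updates to a shared server, which is a different design; the local updates below use random stand-in gradients):

    import numpy as np

    def sync(worker_params):
        # Average all workers' parameters and hand the mean back to each of them.
        mean = np.mean(worker_params, axis=0)
        return [mean.copy() for _ in worker_params]

    workers = [np.zeros(8) for _ in range(4)]   # four robots, tiny parameter vectors
    for step in range(100):
        # Stand-in local updates: each robot adjusts on its own (random) experience.
        workers = [p - 0.01 * np.random.randn(8) for p in workers]
        if step % 10 == 0:                      # sync frequently, as suggested above
            workers = sync(workers)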


I think the difficulty in using virtual environments for training purposes isn't in simulating the environment, but in accurately simulating the physical responses/limitations of the robotic hardware in that environment in a way that reflects real engineered hardware (i.e. motor responses, signal latency, etc.).


Good point. Those could probably be measured and simulated too, though.


Depends on the simulator, of course. Gazebo [0] gets you some distance, but not the full stretch.

http://gazebosim.org/


I wonder how other robots could learn from these robots as well - robots with a different arm geometry.

E.g. could what the Google bots learn be abstracted into forces and torques on the end effector, so that other robots can replicate it?


Very interesting, thanks for the share.

One point I haven't seen mentioned yet is that the images used for training are only 64x64! In the original Google "grasping" research, the images were 472x472 - about 54 times as many pixels. I think they are looking for the minimum visual information required to trigger the desired learning. This will help in mobile applications (i.e. robotics, smartphones, etc.) where processing power is severely limited.


Can someone explain how sharing experience is newsworthy? You collect input data from multiple sources and feed it into a single model. This seems both obvious and non-novel. Don't self-driving cars do something similar, or any machine learning model that's trained on user behavior (e.g. the behavior of each and every iPhone user)? The part about "understanding" may be interesting, but the title seems to focus on sharing data.
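For what it's worth, the "obvious" version described here really is just pooling: concatenate everyone's data and fit one model. A toy sketch, with a plain least-squares fit standing in for the real policy network and made-up data for three robots:

    import numpy as np

    def fit_shared_model(per_robot_data):
        # Concatenate (features, targets) from several robots and fit one linear model.
        X = np.vstack([x for x, _ in per_robot_data])
        y = np.concatenate([t for _, t in per_robot_data])
        w, *_ = np.linalg.lstsq(X, y, rcond=None)   # one shared model over the pooled data
        return w

    rng = np.random.default_rng(0)
    data = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]  # three "robots"
    weights = fit_shared_model(data)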


By robots they mean stakeholders' profits and software engineers' creations.


"natural language learning" impressive


Sounds an awful lot like the Borg Collective to me


Hopefully it turns out more like the Tachikomas from Ghost in the Shell.


Going Robo-Naruto up in here!



