My cynical take is that he wants to avoid taking any moral responsibility for his AI. As long as there are regulations, they can fall back on a "we followed the rules" defense, and morality falls on the regulators.
Without regulation, they have to defend their actions by arguing that what they do is morally justified.
I have believed this to be the case for many laws and regulations, and for people's corresponding actions across every sphere of modern life.
Outsourcing personal "rights and wrongs" to a law ("well, it isn't illegal!") or regulation.
This also leads to obvious abuses. When there are books of "lines in the sand", hard statements and conditions, it only takes a dedicated and well-resourced group, company, or individual to work around those rules, while the average person likely cannot do the same.
Regulations have a heavy hand in picking and choosing market victors.
Someone smart once said, "Watch what they do, not what they say."
Unless Google is actually spending serious money lobbying for laws that impose real restrictions, this is just the bland, obligatory public speech that the culture demands.
Sundar Pichai's call for regulating artificial intelligence is a slap in the face to anyone working in the field of AI. There are obvious existential risks that he seems happy to skip over. His choice of AI as the target is pure marketing-driven FUD, mixed with a healthy dose of self-promotion.
Don't wait for him to put up any cash to rein in nuclear weapons, global pandemics, carbon emissions, bioterrorism, etc. Pichai will dismount this soapbox as soon as a new buzzword hits Twitter. https://killedbygoogle.com/
How can we seriously begin to regulate anything tech-related until we actually have people who have worked in tech, or who understand tech, in government roles in some capacity, either consulting or full-time?
This is why the people who regulate things often come from industry. You'll see a lot of (not entirely misguided) complaining about the fact that the people who regulate finance, for example, often come from a major bank like Goldman Sachs. While this can lead to questionable rules that "favor the banks" in some cases, at least someone who understands the industry is regulating it.
I’d rather that than have some idiot who knows nothing write the rules and screw everything up.
Mario Draghi also had a career at Goldman before he ran the ECB. Goldman is not called a vampire squid for nothing. In banking especially, we have seen what regulation by the industry boils down to.
There is no need to be an industry expert to become a regulator; it is on the regulated to explain the context as simply as possible.
The thread is already filled with the usual "CEOs only call for regulation when they're ahead" posts.
I find this to be such a shallow, generic libertarian take on tech. Not only does it always seem to come from people who eat up everything tech executives say on every other topic except regulation, it also reads Pichai's arguments entirely in bad faith.
Here's another take: Pichai is just a smart guy who is genuinely worried about the abuse of the technology, because there is indeed a lot of potential for state actors and others to abuse it in privacy-damaging ways. If you want to scare Google with regulation, try antitrust, not AI and privacy.
Nobody is competing with Google or the other large players on AI anyway, regulation or not. Their advantage is in data, scale, and talent. If anything, higher privacy standards might create ecosystems of privacy-focused companies in the space.
I'm someone who generally thinks we need much more tolerance for regulation in the States, and even I'm not buying Pichai's stance.
CEOs of publicly-traded corporations do not act in good faith towards anyone but shareholders. We can debate whether or not they're morally obligated to (I would say yes), but realistically, they do not.
The only reason a hired CEO would push for regulation of his own industry, besides wanting to stifle competition, would be if current (or upcoming) regulation were unpredictable and he wanted to reduce risk by getting a clear-cut answer. You could maybe make that argument around something like privacy, where there's a mounting wave of public pressure. But at least right now, AI is in no such situation.
Google probably loses a good amount of money due to its stance on defence work. I don't see how this can be attributed to some sort of cynical corporate play; it's simply company culture.
Likewise, I don't see how a lot of these companies calling for facial recognition bans can be considered to be acting in their own interest. When the government shops for large-scale infrastructure, those companies are the only real players. Remove all regulation, and the government still won't hand its data to some garage startup.
CEOs of large companies are totally able to act in good faith towards people other than their shareholders. Of course, often they do not, but they certainly can.
The arguments around caution and privacy of citizens are sound and reasonable. Everyone can make them. There is no reason to not listen to Pichai just because he runs a business.
"Pichai is just a smart guy who is genuinely worried about"
It's the CEO of one of the world's largest companies talking. What he says publicly is fundamentally corporate communications and part of the job; it's nigh impossible to separate out his 'personal view'.
"Nobody is competing with Google or the large players on AI anyway, regulation or not." - this is quite upside down, there are tons of smaller AI players competing indirectly with Google.
When Google 'open sources' some research, it may benefit them and others, but it also puts a slew of competitors out of business. It's not goodwill; it's mostly for profit. By wiping out the encroachment of adjacent industries entirely, they keep their moat and the surpluses in their core line of business. This is the logic behind open sourcing Android. If you think they're just 'nice', consider why they don't open source the rest of their software.
AI's intrinsically socialist nature is showing in comments like these. It's so weird to hear companies ask for regulation while the government is saying don't worry about it. Either Google knows something mainstream AI research doesn't, or it's some kind of weird deferential posturing about competencies.
Companies tend to only want regulation when they need to block out competitors by raising the bar to compete. Maybe they're worried about stealth startups that may already be ahead of them?
It's not even necessarily an anticompetitive motive (though it usually is). It can arise out of NIH syndrome: "well, we know this works because we did it, but we don't have as much knowledge about other approaches and they might be unsafe, so let's encourage standardizing around this." Whether Sundar is pitching it out of anticompetitive behavior or NIH syndrome is another question.
Well, also to reduce uncertainty about what the future regulatory environment will look like. If you were planning out a multi-billion dollar investment that would take years to execute, you'd want the regulation in place at the beginning when you could still plan around it rather than being surprised after you've already spent the time and money.
Yup. Just look at the current most highly regulated industries: fossil fuels and power, vehicle manufacturing, finance, pharma and healthcare, water, air transportation. There aren't many startups competing with the established companies in those spaces, and it's not because there aren't things to innovate on.
Insurance and moat building. When you're that large, the best way to minimize potential risk from new legislation involving your market is to cosy up to the regulators and make sure the regulation is written in your favour. In the process, it makes sense to sway the regulation towards raising the barrier to entry for the market.
We wouldn't want a couple of software engineers in a garage somewhere to start "the next Google", now would we?
It's called "Regulatory capture". They're trying to slow down / hinder competition, while at the same time benefiting from loopholes sewn into the legislation their lobbyists helped to create.
The Financial Times article provides a little more depth. It mentions that Google has already decided to delay the rollout of any general-purpose facial recognition tools, in anticipation of malicious use. The FT article also calls his comments a "call for a moratorium", which is quite different from a call for heavy, long-standing regulation.
His comments seem driven more by things like the use of cameras to identify and locate Uighur Muslims for the purpose of locking them in concentration camps in Xinjiang, where over 1.8 million Muslims are already detained [0,1], than by more civilian uses like unlocking cell phones and bank accounts.
Given that facial recognition is already good enough to be universal, including in Google Photos, which is exceptional at it, you would think that ship has sailed at this point.
Sure he does. They actually want global regulation so they don't have to worry about pesky little things like national entities regulating their stuff and perhaps fining them some billions down the road.
If only there was a global government to lobby to it would make things so much easier legally speaking. /s
Regulation is also great for killing would-be competitors. A classic pull-up-the-ladder move.
The government should start taxing data. Just like any other asset, data should be listed in companies' financial statements by type and quantity, and taxed. Derived models would be taxed based on their input data.
Case in point: the recent onslaught of regulation in the EU, including GDPR, was meant to push Google out of the market, but its market share only increased, with its valuation hitting a record yet again just now.
Is it a case of either us regulating AI or AI ultimately regulating us? It would be tricky to implement such regulation; it would need to be a global effort like nuclear non-proliferation. Otherwise, certain nations might pursue unregulated AI in hopes of some advantage, to the detriment of all.
What times we live in.
EDIT: updated the text to clarify that this is asked out of curiosity.
There is already an autonomous closed loop of human behavior, digital surveillance of that behavior, and AI models that train on that data and use it to influence human behavior.
I have limited knowledge in this area, but would AI not at some point behave like a virus or bacterium, propagating and adapting until it is (even if unintentionally) free of those limitations?
Protocols have always ruled us and always will. Makes me wonder: does the dehumanization and automation of our habits (offloading regular everyday things to machines) make us more free or more restricted?