Even setting aside the malicious SF stuff, Waymos have enormous advantages over humans relying on mirrors and accounting for blind spots. I never have to be concerned that a Waymo hasn't seen me.
I can't wait until the technology is just standard on cars, and they won't let drivers side-swipe or door cyclists.
> I never have to be concerned a Waymo hasn't seen me.
Funnily enough that's exactly why I don't like them. Every time one rolls by me I know that tens of photos of me and even my 3D LIDAR scan get piled into some fucking Google dataset where it will live forever :/
“The Waymo Driver's perception system takes complex data gathered from its advanced suite of car sensors, and deciphers what's around it using AI - from pedestrians to cyclists, vehicles to construction, and more. The Waymo Driver also responds to signs and signals, like traffic light colors and temporary stop signs.”
Totally fair to be concerned about pervasive surveillance for the _potential_ of privacy violation. Not sure what to do about that.
That being said, just speaking with some knowledge of current state: the scans don't live forever. At this point, all the data they collect is way too big to store even for a short period. They'll only keep data in scenarios that are helpful for improving driving performance, which is a tiny subset.
Personally identifiable information is also redacted.
You should probably be more worried about what gmail knows about you than Waymo.
True; I should have said metadata and not just data since you're right that the volume of raw images would be too big to store indefinitely. It's way more feasible to process the raw images and store the inferences, like number of persons visible in last 5 seconds, or dates and times a person who looks like me has been seen by a Waymo while my particular Android phone is nearby, or dates and times they have seen [my OCRed car number plate].
> Video Clips captured by the LPR system will automatically be deleted after 30 days; although Images are deleted when no longer needed, the data obtained from the Images may be retained indefinitely. Should any information from the LPR Dashboard be needed to assist with a security or law enforcement matter, it may be retained indefinitely, in paper and electronic form, as part of the security file until it is determined it is no longer needed; in addition, it may be shared with local law enforcement who may retain it in accordance with their own retention policy.
If anyone can share a link to a similar IRL privacy policy for Waymo I would love to read it. The one on their website is conspicuously labeled Waymo Web Privacy Policy lol
That's still not really what I'm looking for. I am curious about a “what we keep, for how long” policy for the sensors on the outside of the cars like the Flock one I linked above.
Your second link does mention cameras and microphones outside the car but doesn't mention what they keep (full video? stills? LIDAR? RADAR?) or for how long:
“Waymo’s cameras also see the world in context, as a human would, but with a 360° field of view. Our high-resolution vision system can help us detect important things in the world around us like traffic lights and construction zones. Our systems are not designed to use this data to identify individual people.”
The “Our view on your privacy” section links to the same page as your first link, and that page's “What we keep” section is explicitly only about riders:
“We will retain information we associate with your Waymo account, such as name, email and trip history, while your account remains active.”
Right. I merely shared the closest thing that I could find, and as mentioned, the first link is specific to the ride-hailing service. Notably, in the second link, there is a reference to sharing information with law enforcement, but it's generally lacking in details.
It's Google we're talking about. In no way do I trust them to take pictures of me using their city-wide camera network and not use face/body recognition to keep track of where I go, for the purposes of targeting advertisement.
People have been able, for years (indefinitely?), to get their very boring suburban houses blurred on Google Street View, even though you can find pictures of the interiors on Zillow. If Google were so cartoonishly evil, they would not let you do that.
I agree in principle that a privately run company could use information in nefarious ways internally, and that barring any additional knowledge you should not trust them.
That being said, I have an anecdote as a former Googler: the reality at Google is actually very thoughtful and favorable for users, if you ask Googlers who've worked on their software products. There are audit trails that can result in instant termination if it's determined that you accessed data without proper business justification. I've known an engineer who was fired for an insufficiently justified user lookup (and later re-hired after a deeper look -- hilariously, they made this person go through orientation again).
And safeguards / approvals required to access data, so it's not just any joe shmoe who can access the data. Wanna use some data from another Google product for your Google product? You're SOL in most cases. Even accessing training data sourced from youtube videos was so difficult that people grumbled "if I were outside of Google at OpenAI or something I'd have an easier time getting hold of youtube videos -- I'd just scrape them."
This isn't to say any of this is a fair thing to make decisions on for most people, because companies change and welp how do you actually know they're doing the right thing? Imo stronger industry-wide regulations would actually help Google because they already built so much infra to support this stuff, and forcing everyone else to spend energy getting on their level would be a competitive advantage.
The impression I have, as an outsider, is that Google hoovers up all information available to them and uses it as input to various algorithms and ML models for targeted advertising. I'm sure individual Google employees are as thoughtful as you say, but I don't think the organization views itself as its users' "enemy" or as something its users should be protected from.
I'm not afraid of employees at Google or random Google divisions obtaining unauthorised access to information about me; it's not about that. I'm certain that there's very little data that the targeted-advertising part of Google can't access.
Honest question - what's the harm in being targeted by ads? Is it just scrolling youtube more often than you should? Or is there a nefarious side that I'm failing to consider?
For me the thing I hate about location tracking and the ilk is primarily about its harmful externalities (e.g. put into use by gov't, abusive users, or by Google for anticompetitive reasons), not targeted advertising itself.
I find it disgusting that our society invests so much effort into manipulating everyone, companies spend billions on armies of psychologists, computer science experts and data centres whose only job is to manipulate people into buying things they don't need. Targeted advertising is even more disgusting than non-targeted advertising, because there you're trying to find an individual person's weaknesses for more effective manipulation. It's simply evil.
That, and the existence of targeted advertising incentivises collecting and correlating as much data about people as possible, all the time. If it wasn't for targeted ads, I'm sure that Google would've actually just used data from their city-wide surveillance networks for improving their cars (at least until a government would've asked for the data, which is also an issue). But with targeted ads in the mix, there's a huge incentive to collect it and correlate it with all the other data Google has, which is creepy.
I guess. But bringing new products to market requires distribution, and do you have a better way for people to crack that? Targeted advertising through, say, Instagram has enabled a lot of small businesses who would otherwise struggle to aggregate demand.
So it's not like pure evil. In many cases there's a service being provided to match users to products they want / that don't suck.
> with targeted ads in the mix, there's a huge incentive to collect it and correlate it with all the other data Google has, which is creepy.
Strongly agree that in theory this shit can be used nefariously. That said, Google is far from the scariest of the bunch despite being the biggest. Telecom for example wants to deep inspect your network packets, and they can tell where you are physically today, anywhere in the country without even having cameras driving around 5 US cities.
Stronger regulations around data rights and privacy have been proven to work by the EU. I don't really see another solution apart from a legislative one.
Traffic isn't the right place to be if you demand not to be seen. If you don't want your data to be stored, that's a different matter, but I'm still gonna look at you while driving so I don't crash. I have to.
> Funnily enough that's exactly why I don't like them. Every time one rolls by me I know that tens of photos of me and even my 3D LIDAR scan get piled in to some fucking Google dataset where it will live forever :/
It's not going to be stored forever.
That would be incredibly expensive.
Those cars are taking in terabytes of information each day. Scale that to tens of millions of cars.
It's just not going to happen.
Maybe an ultra compressed representation of you that shares maybe 1 bit in 1 weight somewhere in a NN will live forever.
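A rough back-of-envelope sketch of the storage argument. Every figure here (fleet size, per-car data rate, storage price) is an illustrative assumption, not an actual Waymo or cloud-provider number:

```python
# Back-of-envelope: what "keep every raw sensor recording forever" would cost.
# All inputs are assumptions for illustration only.

cars = 2_000                # assumed fleet size across served cities
tb_per_car_per_day = 4      # assumed raw output (cameras + lidar + radar)
cost_per_tb_month = 2.0     # assumed archival-tier storage price, USD/TB/month

daily_ingest_tb = cars * tb_per_car_per_day
yearly_ingest_tb = daily_ingest_tb * 365

# After just one year of retaining everything, the *monthly* bill alone:
monthly_bill = yearly_ingest_tb * cost_per_tb_month

print(f"Ingest: {daily_ingest_tb:,} TB/day, {yearly_ingest_tb:,} TB/year")
print(f"Storage bill after year 1: ${monthly_bill:,.0f}/month and growing")
```

Even with generous archival pricing, the bill grows linearly forever, which is why keeping only a tiny curated subset (or compact inferences) is the plausible outcome.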
Don’t they currently have less than 1000 cars? I don’t think they will keep every recording forever but technically, they still could at the current scale.
Human drivers have dash cams, too. Maybe without as sophisticated a data ingestion system as google, but they could theoretically put their dashcam footage on youtube if they wanted to.
A Ring camera can do image recognition and store the stretches of video where a person is in frame. I'm not sure what the privacy difference is between the two. Are lidar and radar recordings that much more of a privacy concern than video recordings?
I'm pretty sure that between traffic cameras and security cameras, lots of commuters on the street are being filmed, with or without Waymos.
What bothers me regarding surveillance and self driving cars is that an executive sympathetic to the surveillance state could build a system that allows arbitrary surveillance of vehicles or housing by license plate or location. Eric Schmidt was a regular visitor of The Pentagon and Billionaires simply live in a different world than we do. So while some driver could happen to capture me and upload it to YouTube, Waymo could, if someone wanted, have a secret operations center which allows surveillance of all sorts of people, locations, and vehicles. The same way that AT&T had a secret NSA closet that split off major fiber pipes, some data pipeline could have an invisible filter that duplicates data matching certain variables and delivers it to a surveillance partner.
There are somewhere between several hundred and a couple thousand Waymo vehicles per city being served. Even if that expands tenfold, it will be a small fraction of the number of cameras you pass by every day.
Fear not, your images and recordings will get piled on somebody's dashcam to do as their heart desires.
I've got a dashcam in my Camry recording front and back every time I drive. I have no interest in preserving those images outside of an accident, but who knows what somebody else will do.
We have no expectation of privacy in public spaces, and ultimately I would trust Google's IT security more than some dude with a dashcam.
Second this gripe. At this rate they're going to turn American cities into copies of Beijing or London, with cameras every other place you look. "Oh, but the police will then be able to subpoena footage to catch criminals more easily" -- yes, I don't want a world in which the government can instantly do that. It sucks.
man, the Googs already has a library of images of you. If there's anything about you that the Googs doesn't already know, I'd be shocked. the Googs probably knows you better than your therapist, because you've only shared with your therapist what you wanted. the Googs gets data about you from places you know nothing about.
being concerned that a Waymo car took your picture isn't invalid, but man is it a tear drop in the rain of everything else the Googs is doing.
If "the Googs" knows so much about me, why do they keep showing me ads for products I not only have zero interest in, I'm also not even the target demographic for?
Because they need to hide some of what they know about you. There have been cases where they (in this case Target, not Google) knew a 14-year-old was pregnant -- but ads for the things she needed are sensitive. So the industry has learned to show everyone a few ads that don't apply, just to give cover for the ones that do apply but that someone wouldn't admit to.
Also, some ads really are aimed at everyone. You may not be in a place where you would think about buying a car, but carmakers don't want you to forget that you could, just in case your situation changes. They also want you to think of some goods as luxuries, so you're impressed when you see someone who does have them.
> Because they need to hide some of what they know about you.
My favorite example of this is in the desktop web version of Google Maps. If you search for some place and try to plot directions from “Your Location” it will prompt for the browser's Geolocation API and will refuse to give directions at all if you don't consent to the prompt: https://i.imgur.com/fIQswnD.png
This is despite the fact that it opens the map with a perfectly-centered and reasonably-sized window around my current location. I have never seen this fail when not using any sort of VPN that moves my GeoIP. They could totally give a reasonable min-max time estimate based on that window, just like the one they show for variable traffic.
Because all the smart people at Google who worked on Quality (Ads Quality, Search Quality, etc) got promoted and moved away from those, and the revenue is good enough that google can maintain its monopoly without improving the product.
The system was never designed to show you relevant ads that you want to see and would like to buy a product from. The system has always been designed to show you profitable ads. Those have always been and will always be a completely distinct set, with only coincidental overlap if at all.
It's almost like people think that the phrase "relevant ads" means interesting to the person viewing the ads. Instead, it means you are relevant to the person buying the ad, so Googs shows you their ad. It means you are in the age range and income range, and possibly the geographic area that the person buying the ad placement finds relevant. The person viewing the ad is never relevant to Googs.
When I met my ex and we linked over what was then Google+, we found that I had been auto labeled in a photo of a protest from years before we met. They've got a lot of info that they don't surface...
Do you dislike it enough to be happy with the increased car incidents that a human dominated driving world implies?
Why are you so worried about something snapping tens of thousands of (mostly identical) pics of your body that don't tell much about you, while the world's biggest ad companies know you better than any single human ever will? I feel popular western sci-fi has made people fear companies taking some visual data of their bodies, covering minutes at most, while fully overlooking the dangers of having behavioural data about them covering years at a very granular level.
Yes, I know it's not a choice; both are bad. But I find people everywhere, including here on HN, are overly conscious of a few minutes' worth of pictures of their bodies getting uploaded to some private server, while they are nowhere near as conscious when it comes to non-visual data about them (which I would argue is mostly behavioural data).
Definitely a big privacy concern, especially for people like you who aren't using the technology, and haven't consented to giving your data.
But car crashes are the third highest cause of death in the US (https://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm). As a society, I think the benefit outweighs the cost in this case, and we can (theoretically) continue to make progress on privacy as a society. Seems like much more of a step forward than a step back to me
No, that says “Accidents (unintentional injuries)” as a category are collectively the third leading cause of death, and that category contains a lot of things.
Dooring is so incredibly preventable with simple computer vision and some kind of actuator that adds an audible alarm and mechanical 3x resistance to the door opening when a cyclist is detected. The door should still be openable in emergency but should be hard to open until the cyclist passes.
(For cars that have both a normally-used electronic door open button and a manual emergency release (e.g. Teslas), the electronic button can use the car's existing cameras to detect cyclists first before actuating the door to open. This would be a trivial software change in the specific case of Teslas. The only thing I dislike about the Tesla setup though is that most non-owners are unaware of where the mechanical emergency release is; it is not obvious and not labelled.)
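The gating logic described in the two comments above is simple enough to sketch. `detect_approaching_cyclist` is a hypothetical stand-in for whatever perception model the car already runs, and the 3x resistance multiplier and alarm behavior are the commenter's proposal, not any real car's implementation:

```python
# Sketch of door-open gating on a cyclist detection, per the proposal above.
# detect_approaching_cyclist() is a hypothetical placeholder for a real
# perception call; resistance/alarm values are illustrative only.

def detect_approaching_cyclist() -> bool:
    """Hypothetical: True if a cyclist is approaching the door's swing path."""
    return False  # placeholder; a real system would query the car's cameras

def door_open_policy(emergency_release_pulled: bool) -> dict:
    """Decide how the door should behave when an open is requested."""
    if emergency_release_pulled:
        # The mechanical release must always work normally; safety first.
        return {"resistance": 1.0, "alarm": False, "allow": True}
    if detect_approaching_cyclist():
        # Door stays openable, but with 3x mechanical resistance and an
        # audible alarm until the cyclist has passed.
        return {"resistance": 3.0, "alarm": True, "allow": True}
    return {"resistance": 1.0, "alarm": False, "allow": True}
```

The key design point in the proposal is that the door is never locked outright, only made harder to fling open, so emergency egress is preserved.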
> This would be a trivial software change in the specific case of Teslas
Tesla already has dooring prevention. If it detects a bicycle or something coming, it prevents you from opening the door the first time, and shows a warning. You can override it by trying to open it the second time, if you are sure.
Waymo already warns you if it detects road users when you open a door. They just don't actively prevent you from opening the door, but they could implement it in their next generation vehicles.
> Dooring people aside, what do you do if someone just leaves the door open when they leave their ride?!
Continue billing them for the ride and send an app notification or phone-call to their phone.
Other potential solutions: If the door is still not closed after n minutes, plead with passers-by, or offer a passing or nearby rider the chance to earn credit by closing the door.
Health insurance companies should pay for it. Their costs would come down if they subsidize the full R&D cost of this system for all car manufacturers. It would work in their favor.
They're probably too stupid to think like that, though.
Health insurance generally has a fixed profit margin (state legislation). They have little incentive to reduce cost because then the entire pie shrinks. A nice example of where well meaning legislation can completely backfire.
Of course, passing costs to all insurance companies is really the same as passing it to all people paying insurance premiums, at which point you can just use tax money to get the same effect. At which point, it's probably easier to regulate it and have the cost passed to everyone buying a car.
That would lead to ridiculously overengineered car doors. It's already incredible how such a simple thing like a door can be so unreliable on newer cars, with handles that sink into the doorframe when not in use, or a locking system that only works with battery power. I'm not sure that adding more complexity would be a net benefit for society.
It's already there; my new fancy car has it. Push the lever to open with electronic help, pull the lever twice for mechanical release. The electronic-help version checks for safety first (as long as you do it within a timeout from when the car was running/ready). We'll have to see how the fancy car does over time, but I did get one with handles on the outside that don't disappear.
You could just design infrastructure and road rules such that cyclists aren't encouraged/required to ride in the dooring zone, or even hold people accountable for their actions beyond just the cost of the damages they cause. Car based damages are so normalized that we allow reckless or negligent behavior around cars that we would never allow in any other part of our cities.
You could probably design the latch jaws to have an electronically controlled second catch. It would activate whenever a cyclist is present so if someone tries to slam the door open it would catch with the door slightly open and trigger a warning. A second pull then opens the door no matter what for safety.
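That two-stage catch behaves like a tiny state machine: a first pull with a cyclist present only releases to the secondary catch and warns, and a second pull always opens fully. A minimal sketch, with illustrative state names of my own choosing:

```python
# Minimal state machine for the two-stage latch described above.
# State names and the (new_state, warning) return shape are illustrative.

CLOSED, SECOND_CATCH, OPEN = "closed", "second_catch", "open"

def pull_handle(state: str, cyclist_present: bool) -> tuple:
    """Handle one pull of the door handle; returns (new_state, warning)."""
    if state == CLOSED and cyclist_present:
        # Door slams open only to the secondary catch; trigger the warning.
        return SECOND_CATCH, True
    # No hazard detected, or this is the second pull: open fully for safety.
    return OPEN, False
```

So `pull_handle(CLOSED, True)` stops at the secondary catch with a warning, while a repeat pull from `SECOND_CATCH` always opens the door, preserving the emergency-egress property the commenter wants.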
I bet that when this tech is in normal cars, some will have it tuned to drive much more aggressively and/or simply have that be a setting. I suspect that would be a big selling point, and driving timidly would be an anti-selling point.
Nah, insurance companies will change their coverage rates based on the feature, and / or it'll become another legally mandated feature like backup cameras.
On the other hand, I wonder why insurance companies haven't led to the ubiquity of dashcams. I thought by now every vehicle sold would have one built in.
And my suspicion is that insurance companies don't push for you to get one because it prevents them from fighting claims that they would've won had there been no evidence.
Maybe it's similar for self-driving or whatever we're talking about here (sensors?).
They don’t care because at their scale it would be a wash - you’d only come out ahead if your insured drivers were consistently and significantly better drivers than every other insurance provider you fight claims against.
Then why not offer a "dash-cam discount" to the subset of customers that the insurer believes _are_ better drivers, like those with a long history of having no accidents or tickets and tons of miles?
In the first years, maybe. However, governments are watching this data and will make it mandatory once they decide it really is better (assuming it is better in an unbiased study). There are many governments; it only takes one, and then carmakers will be weighing whether the override button is worth having.