Questionable. I guarantee the vast majority of users don't even read the massive legalese text walls companies show them before they sign up. Usability studies have shown that people don't even read small error messages, they just want to get rid of the annoying message as quickly as possible. The few of them that actually do read these things probably won't have the foggiest idea what any of it means or the risks associated with the breach of their privacy. So how could this be real informed consent?
Of course, we also have sites where this document is not shown at any time and can only be reached through a link buried in the page's footer. Sites that just write whatever terms they want into this hidden page and then say everyone is agreeing with it by virtue of using the site.
Under the GDPR, any non-essential data processing (analytics, ads, marketing, etc. fall into that) must be opt-in, and dark patterns like pre-ticked checkboxes are not allowed.
> Under the GDPR, any non-essential data processing (analytics, ads, marketing, etc. fall into that) should be opt-in
This isn't strictly true. Consent is only one lawful basis for processing under the GDPR, and it comes with a lot of strings attached that other bases don't necessarily have, which is why so many lawyers and consultants were recommending against relying on it unless it was the only way during the mad rush to GDPR compliance a few years back.
In particular, even some of the regulators have themselves indicated that marketing might be a legitimate interest of a business. Obviously the details matter here, and handing personal data over to third parties like Facebook without their knowledge or consent seems materially different to, for example, the original business sending a relevant email about a new product that is related to something that the recipient already bought from them. Time will tell how the regulators decide to handle this.
> This is a core feature of every ad platform I've seen and is absolutely not a violation of the GDPR since users are giving consent when they sign up.
That's not how it works. Hiding the "consent" in the fine print doesn't count, and at least in Germany, it's clear that you need valid consent and can't weasel out of it by claiming "legitimate interest" etc.
I already had a DPA explain this to one of the companies that decided to give my data to Facebook, and the DPA indicated that they were acting on multiple complaints in that regard.
There's a good chance they'll let you get away with a warning the first time if you haven't gotten in trouble before, but especially if you keep doing it (or if they decide that by now, you certainly should have known), expect quickly escalating fines.
I agree that this is a core feature. However, the GDPR mandates that consent must be opt-in and granular (you can consent to your data being used for one purpose but not another), and that you can't refuse service because a user declines to consent to non-essential data processing (ads fall into that).
So yes, technically you can ask the user for consent, but it has to be explicit ("we'd like to share your e-mail/phone number with our advertising partners such as Facebook, accept/decline?") and I can't imagine anyone in their right mind consenting to that.
> You've signed up for a web service and never seen ads on other sites for it? Very strange.
I sign up for stuff only when I have no other choice, for exactly this reason, and I often provide fake details. Reminds me of an ex-client whose potential customers wouldn't provide their real contact details because they were afraid of being spammed. "But do we actually spam them?" "Yes."
But you are not sharing your email with fb. The user already shared it with fb. I am only telling fb, if you have this user with this email, show him an ad. I really don't see the problem. Much better a targeted ad than ads about porn, casinos, viagra or poker.
Regardless of whether Facebook has my e-mail, services providing them with a hashed version of it for advertising purposes still allow Facebook to tell "this hash is associated with these services" even if they never had the original un-hashed email. They can combine it with all the other information they have (stolen from people's contacts which may have the unhashed e-mail along with my name and potentially phone number) and create a pretty good profile on me even if I never signed up for a Facebook account and agreed to their ToS/privacy policy.
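To illustrate why the hash still identifies you across services, here is a minimal sketch. It assumes the commonly documented custom-audience scheme of SHA-256 over a normalized (trimmed, lowercased) email address; because the hash is deterministic, every service that uploads it hands the platform the exact same identifier, which can then be linked across uploads without the plaintext address ever being shared.

```python
import hashlib

def audience_hash(email: str) -> str:
    # Typical normalization before hashing: trim whitespace, lowercase.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Two unrelated services upload the "anonymized" hash of the same address...
h1 = audience_hash("Jane.Doe@example.com ")
h2 = audience_hash("jane.doe@example.com")

# ...and the platform receives an identical, stable identifier from both,
# letting it conclude "this person uses service A and service B".
print(h1 == h2)  # True
```

The determinism that makes the scheme useful for matching is exactly what makes it a persistent cross-service identifier rather than anonymization.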
Things get murky in this area (or perhaps not, the lawyers will figure it out in time).
If Facebook is only using something like a hash of an email address in order to target ads at specific Facebook users at the request of one of their advertisers, they are probably only acting as a data processor for a very specific purpose that might be acceptable for both Facebook themselves and the advertiser under the GDPR rules.
If Facebook does anything else at all with that data, their role probably changes from a GDPR perspective. The hash is personal data, since by definition it's being used to identify a specific person. If Facebook is using the data they have associated with that hash -- for example, anything they know about the business that provided it -- to build up more of a profile on their users, they are probably now a data controller, possibly as well as a data processor in connection with the original targeted ad process. Then you get into questions about whether Facebook's users have given their suitably informed consent to Facebook or there is some other lawful basis for whatever processing is happening.
Obviously if businesses were providing actual email addresses to Facebook, or if Facebook were using that data to do things like building shadow profiles on non-Facebook users, that would be another level entirely. And AFAIK, the custom audience tools on marketing platforms like Facebook typically do accept direct uploads of literal email addresses, phone numbers or other identifying details for the audience to be targeted, so maybe the discussion about hashing above is all moot anyway.
Thank you, that's nice of you to say, but I claim no special insight here. I just happen to live in the UK where these issues are relevant and to have some professional experience dealing with them.
The German DPAs have a FAQ on this topic, and they're very clear about the fact that hashing isn't anonymization and doesn't change the fact that you're sharing PII. (The FAQ also mentions that you need consent and can't claim "legitimate interest").
Thanks for that. Would you be able to link/quote the relevant section? I'm personally interested in it, but my German language skills are extremely limited.
There are two kinds of "custom audiences": one list-based and one based on tracking pixels. I'll only quote the parts relevant to the method where customer lists are uploaded.
a.Rechtmäßiger Einsatz - Der Einsatz ist nur aufgrund einer informierten Einwilligung der Kunden zulässig. Das Hochladen der Kundenliste kann weder auf eine Rechtsgrundlage des BDSG noch des TMG gestützt werden. Diese Rechtsauffassung beruht auf einer europarechtskonformen Auslegung der geltenden deutschen Datenschutzbestimmungen und berücksichtigt die jüngsten Entscheidungen des EuGH zum Datenschutz. Im Übrigen wird das Übermitteln dieser Liste an Facebook auch auf der Basis des ab Mai 2018 geltenden Rechts, d.h. nach der Datenschutz-Grundverordnung (DS-GVO), nicht ohne Einwilligung zulässig sein.
b.Widerruf der Einwilligung - Widerruft der Betroffene seine Einwilligung, so muss er von der Kundenliste entfernt werden. Da der Webseiten-Betreiber keine Kenntnis davon hat, welche Kunden auch Nutzer auf Facebook sind und beworben werden, ist die vollständige Custom Audience-Liste unverzüglich zu aktualisieren.
(Translation: Google Translate, with misleading issues corrected manually:
Lawful use - Use is only permitted with the informed consent of the customer. The uploading of the customer list can neither be based on a legal basis of the BDSG nor the TMG. This legal opinion is based on an interpretation of the applicable German data protection regulations in accordance with European law and takes into account the most recent decisions of the ECJ on data protection. Beyond that, transmitting this list to Facebook will also not be permitted without consent according to the law applicable from May 2018, i.e. according to the General Data Protection Regulation (GDPR).
Withdrawal of consent - If the person concerned withdraws his or her consent, he or she must be removed from the customer list. Since the website operator has no knowledge of which customers are also users on Facebook and are being advertised, the complete Custom Audience list must be updated immediately.)
> I am only telling fb, if you have this user with this email, show him an ad.
You're also telling Facebook "by the way, I have a relationship with someone with this email address". That's personally identifiable information that you're sending to Facebook. Under the GDPR you can only do that if you have the explicit and freely given opt-in permission to do that from each respective person. "By using this site you agree to..." or "by signing up you agree to..." does not qualify as consent under the GDPR.
If the person does not live in Europe and you are not in Europe then the GDPR doesn't apply, of course.
If I'm not on Facebook (which I'm not), you are telling them that there most likely exists a user with this email address and an interest in your service. If many companies do this, FB might even be able to build a profile of me without me doing anything.
This is (or at least should be) no bueno under the GDPR's data minimization principle.
No way. If I sign up to, say, a mailing list, or make an account using my email address, I am NOT giving my consent for that site to use my email for targeted marketing (other than the specific mailing list I signed up for).
> This is a core feature of every ad platform I've seen and is absolutely not a violation of the GDPR.
I agree with you on this part. It is not a violation of the GDPR on the ad platform side, since you, as the data controller, are responsible for obtaining permission from the end user. The ad platform is a data processor as defined under the GDPR. I am sure the agreement between you and the ad platform states that you have permission to use the email addresses for targeted advertising purposes and bear the full legal responsibility if not.
> since users are giving consent when they sign up.
See Nextgrid's comment. Yes, the GDPR admittedly lacks on the enforcement side, and yes, I agree that this is a common practice, but that does not make it legal. Not for a data subject residing in the EU.
This is a core feature of every ad platform I've seen and is absolutely not a violation of the GDPR since users are giving consent when they sign up.
I think we'll see regulators take a different view when they get around to challenging this practice, and the businesses who get made into examples might find it an expensive lesson. Handing over personal details to big data hoarders for remarketing purposes is the epitome of behaviour the GDPR was intended to curtail. You can't just mutter the word "consent" and claim some small print on a Ts & Cs page no-one reads protects you, and regulators have shown very little sympathy so far for data controllers who have tried to weasel their way out of GDPR obligations with this kind of strategy.
Those regulators are still under-resourced and it will presumably take some time for them to get around to dealing with this issue. Right now they're still going after serious leaks and the like. But they're already handing out 9-figure fines to big name businesses for those breaches, and by default those fines go back into central government coffers. Given the current economic climate, how long do you think it will be before their governments realise that this is potentially a very lucrative revenue stream that the public is unlikely to mind, and so start pushing the funding for those regulators up? The ICO (the UK's regulator) has already significantly increased its budget and headcount since the GDPR came into effect, and is reportedly looking at ways to ringfence some of the fines to cover the litigation costs when it inevitably has to defend the big penalties it will hand down from time to time.
When the Cambridge Analytica scandal happened here in the UK, the ICO fined Facebook £500,000. That was the largest fine they could legally impose at the time. As they observed themselves, in what might charitably be considered a thinly veiled threat, under the GDPR that could have been well over £1B instead. Even an organisation the size of Facebook is going to feel that, particularly since there is nothing that says it can't be repeatedly fined on that scale if it misbehaves in multiple different ways.
A couple of potentially important issues have, as far as I know, not yet been resolved in this area.
Firstly, what happens if processing in violation of the GDPR is widespread, the businesses you give your address to are the data controllers, but you still have the likes of Facebook hoovering up huge amounts of personal data inappropriately but possibly only in a capacity of data processor? No doubt there will be some interesting legal arguments about where liability is going to be placed if Facebook was actively soliciting that sort of activity as part of its business model.
Secondly, what happens after the UK has fully separated from the EU at the end of this year, if as the government has stated we retain the GDPR in our national law? Until Brexit was relevant, the GDPR was an EU-wide measure, and typically one member state's regulator would take the lead role in any given case. Anyone breaking the GDPR's rules could be duly investigated and penalised, but only once, not in the same way by every regulator in every member state where there was offending behaviour. If the UK is no longer to be a part of that scheme, will regulators still co-ordinate in this way, or will the businesses sharing data with Facebook face a kind of double jeopardy where both the UK and a lead regulator from an EU member state can potentially fine them for the same behaviour, effectively doubling the maximum penalty they could receive?
If both of those issues were resolved in ways unfavourable to the marketing platforms like Facebook, they could be looking at huge fines for promoting this sort of scheme on the scale that they do, potentially enough to make whole strategies based on selective targeting unviable.
Agreed. If I feel violated or tracked, I'm far more likely to develop negative feelings for your product. If Facebook starts showing me more ads for your product right after I visited your site, you're definitely not getting my business.
Ask yourself this: Would you rather have targeted ads, for something you might be interested in, or completely random junk you couldn't care less about? Targeted advertising benefits both you and the advertiser.
Targeted advertising creates a liability for me by leaking which services I use to a third-party advertising partner with which I may have no relationship and whose privacy policy I never accepted (the service itself doesn't know whether I use Google/Facebook and sends them the information regardless).
If advertising was targeted at the browser level (the browser has access to the entire catalog of ads out there and then does the selection locally based on sites/services I interacted with previously) then I would be in favor of that.
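That browser-level idea can be sketched roughly as follows. This is a hypothetical illustration, not any existing API: the ad catalog (with coarse topic tags) is downloaded to the device, the interest profile is derived locally from browsing history and never leaves the browser, and the selection is a simple local match.

```python
# Hypothetical sketch of browser-local ad selection: the interest profile
# stays on the device; only the chosen ad would need to be displayed.
ad_catalog = [
    {"id": "ad-001", "topics": {"cycling", "fitness"}},
    {"id": "ad-002", "topics": {"poker", "casinos"}},
    {"id": "ad-003", "topics": {"programming", "hardware"}},
]

# Derived locally from browsing history; never sent to any server.
local_interests = {"programming", "cycling"}

def pick_ad(catalog, interests):
    # Score each ad by topic overlap with the locally stored interests.
    scored = [(len(ad["topics"] & interests), ad["id"]) for ad in catalog]
    best = max(scored)
    # Fall back to no (or purely contextual) ad when nothing matches.
    return best[1] if best[0] > 0 else None

print(pick_ad(ad_catalog, local_interests))
```

The relevance signal stays client-side, so the advertiser learns at most that an impression happened, not which sites or services produced the interest profile.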
Finally, you are omitting a third option in your comparison: how about no advertising at all? Preferring paid services over ad-supported ones, plus countermeasures like uBlock Origin, make that a real possibility. I can't recall the last time I saw a proper ad online (in fact, my problem with the parent's idea is more about the data sharing than the ads themselves, since I won't see the ads anyway).
If you feel it's a liability, it is up to you to protect yourself. Use VPNs, disposable VMs, multiple email accounts, private browsing, and whatever else you think is necessary to preserve your privacy. "Tracking" is baked into the web. The cat is out of the bag.
No advertising isn't a viable option in this world. I'd go as far as to say that the Internet, as we know it today, would not exist without targeted ads.
By that logic you could say we should go back to the Middle Ages: we don't need laws & enforcement, and if you are concerned about getting robbed or killed, it's up to you to defend yourself by wearing body armor, carrying weapons and having your own personal army.
Society has laws for a reason when its constituents decide that certain behavior is detrimental to it and should be outlawed & discouraged by the use of appropriate punishment. I don't see why this shouldn't apply here? The GDPR is in fact a step in that direction, though its enforcement is severely lacking.
> No advertising isn't a viable option in this world.
This is debatable but it's a discussion for another thread.
> I'd go as far as to say that the Internet, as we know it today, would not exist without targeted ads.
The Internet originally was about sharing information freely. It facilitated commerce to a certain extent, but commerce wasn't its core purpose. The internet as we know it today has actually become worse because of the increased focus on commerce & advertising.
The difference is enforcement. GDPR cannot be enforced worldwide. Even if it "legally" can, which is debatable, practical enforcement is another matter. Even if it could be practically enforced, accidents happen. People make mistakes. Your data could still be shared with a third party due to a bug or just plain incompetence. It's still a good idea to protect yourself.
This is a superficial view that does not account for the advertiser's ability to price discriminate via advertising. For example, say there is a Batman movie coming out, and I sign up on the Batman website to find out when/how it is released.
The movie folks now know that I am very interested in this movie. They can choose to target me for a small coupon advertisement, knowing that I will likely claim it and consider it a win.
Simultaneously they can target people on FB that they think are Batman fans (but who have not signed up for their email list) with a more generous coupon.
So while I am seeing advertisements for relevant products, I may be seeing less-generous offers than I would see in a world without tracking.
With e-commerce, the coupon bit will soon be unnecessary - you'll just see higher "personalized" prices, with no indication that they differ from what others see. Like a more targeted version of https://crow.app/blog/price-localization-with-stripe
I do get targeted ads: they're called newsletters I sign up for on services, and blogs of technical companies I keep up with. Both work well for things I am interested in.
Ask yourself this: how do you feel about the possibility that any personal information you give to any company may be handed to others without your consent (or "with your consent" behind a huge wall of "this is how we use your data, take it or leave it"), and that those companies may sell it to data aggregators who build a complete picture of you and sell it to anyone with enough cash?
I am fine with pseudo-anonymous ad targeting. You can collect personal "interests" without collecting truly identifying PII.
However, what you describe already happens and has for decades, in the offline world. Tons of personal info, like real estate and voter records, are already public in many jurisdictions anyway. Insurance companies, credit card companies, phone companies, and everyone else all take this stuff and spam the hell out of everyone.
The problem is in what the person said above: it works for them, with a 30% conversion rate... they wouldn't be doing it if they didn't make money from other people that way.
Criminal gangs could also say that crime such as theft, robbery, blackmail, extortion, etc works for them and makes them money. It doesn't mean we should be legitimizing and encouraging this behavior that most of us agree is detrimental to society.
There are plenty of scams and malware being spread through ads. Furthermore, ads are a parasite that wastes most people's time for no benefit, with no official way to opt out (a lot of services don't allow you to pay money to opt out); it's a cancer on society.
> Especially in this case where PII data is not being provided to the advertising company.
You are literally talking about capturing e-mail addresses so you can pass them to an advertising partner to target ads to these users. How is that not PII?
The emails can be hashed, turning it into a pseudo-anonymous ID. It is debatable whether that is PII. It probably comes down to whoever can afford the better legal representation.
Hashes are not a panacea. I see them suggested as a solution for anonymizing IP addresses/domain names/URLs/file names/emails all the time, but the people making these suggestions are either clueless or arguing in bad faith. It is extremely easy to brute-force the majority of such hashes. (In addition, I doubt anyone is passing around hashed emails, as it would make it slightly inconvenient to send emails to those accounts.)
It's indeed extremely easy to brute force these hashes when you have a database of the original (plaintext) data stolen from people's contacts which reduces your search space dramatically.
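A minimal sketch of that attack (with made-up example addresses): when you already hold a plaintext contact list, reversing the "anonymized" hashes isn't a brute force over all possible strings at all, just a dictionary lookup against hashes of the addresses you already know.

```python
import hashlib

def h(email: str) -> str:
    # Same deterministic hashing a service might use to "anonymize" emails.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hashes a service shared as supposedly anonymous identifiers.
shared_hashes = {h("alice@example.com"), h("bob@example.com")}

# An attacker's existing plaintext address book (e.g. harvested contacts).
known_contacts = ["alice@example.com", "carol@example.com", "bob@example.com"]

# Hash every known address and compare: any match de-anonymizes that hash.
recovered = [c for c in known_contacts if h(c) in shared_hashes]
print(recovered)  # ['alice@example.com', 'bob@example.com']
```

With a large harvested contact database, the same loop recovers most of the "anonymized" list in seconds.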
They do. Advertising's whole purpose is to manipulate people into doing what companies want them to do. Not every company has your best interests in mind. Due to this inherent conflict of interest, ads should be viewed with healthy suspicion at best.
Banning tobacco ads helped reduce smoking in my country. We should ban a whole lot more.
Content blockers work really well too. They should be integrated into browsers. That ought to reduce the conversion rates and get companies to stop spamming us with noise.
That is scummy as hell and might even get you in trouble when it comes to the GDPR if you're operating in the EU.
If I sign up for your web service the last thing I want is Facebook/Google knowing that fact.