Mental-health information 'sold to advertisers' (bbc.co.uk)
250 points by abhi3 on Sept 4, 2019 | 140 comments


I can't separate what actually happened from the sensationalizing in the article.

It says the websites had Google cookies and others. OK, anybody using Google Analytics has Google cookies. Anyone using FB plugins has FB cookies. Yes, websites certainly have a lot of cookies these days, and the information inferred by use of these can be used in a generic fashion to target and rank ads. That's worthy of scrutiny -- but this isn't what the article is about. What is the evidence of "mental health" being treated in some scary specific way?

The only substantial thing in the article supporting the headline is one charge that a site sent quiz answers to Player.qualifo.com -- which appears to be an unregistered domain.

OK, so what information was sold? When did money change hands? Is there any hard evidence that mental health information was "sold to advertisers", as is claimed in the headline? Or is this just bullshit they made up to get clicks?

For that matter, even if you wanted to do some evil psychographic ad thing, why would anyone sell that info to advertisers? What are they gonna do with it? Like, why hand them the literal data, when you could allow them to just bid on ads that may be targeted/ranked using your painstakingly collected psychographic data (a la Cambridge Analytica)? Why sell the cow when you can sell the milk?

The fearmongering and disinformation around adtech and privacy numbs us to legitimately scary things being done that should be covered more.

Makes me think of this Blake Ross rant: https://medium.com/@blakeross/don-t-outsource-your-thinking-...


> It says the websites had Google cookies and others. OK, anybody using Google Analytics has Google cookies. Anyone using FB plugins has FB cookies.

It's not that complicated: certain websites, dealing in certain kinds of sensitive information, may be legally required to seek explicit user consent before even using Google Analytics or Facebook Plugins.

Yes, Google Analytics use is damn near ubiquitous right now. That does not mean it is unambiguously OK to slap it on every website under the sun.

> What is the evidence of "mental health" being treated in some scary specific way?

No evidence is required. The law is clear. That someone protected by the GDPR had the fact that they were seeking information related to depression transmitted to Google and Facebook, via Google Analytics and Facebook scripts on the page and without their explicit consent, is in and of itself a violation of the law.


I'm not sure it's clear that generic page analytics necessarily means that one's presence on certain pages is any more sensitive than Google Maps knowing that you zoomed in on a depression clinic. I can understand this is easy with third-party analytics on specific-subject pages, but the waters may appear muddier for web server logs, DNS lookups/caches, browser history, search results, etc., where generic approaches may only be illegal depending on what's visited or looked up. What if the analytics scripts promised not to collect the URL or page content (not saying that's the case here)?

If this isn't already clarified somewhere (pardon my ignorance of the bodies/law involved), I think these common cases should be listed as clearly illegal, not as binding law but as a helper to these services. Keeping a list containing things like "it's illegal for sites remotely related to health to use third-party analytics of any kind sans prior consent" (or "it's illegal for sites remotely related to health to let the URL or content of the page be known to third parties sans prior consent") could help.

I too struggled w/ some of the sensationalizing in the article, because I read:

> Sensitive personal information about mental health is routinely being traded to advertisers around the web

as a bold headline, only to read:

> This was problematic because sensitive information could be broadcast to all of those bidding, PI said

I couldn't really tell what was actually being sold routinely, personal info or ad space. And I couldn't separate what "could be broadcast" from what was.


> I couldn't really tell what was really being sold routinely, personal info or ad space

It’s the same thing.

Look, the website deals in mental health information, right? And health information is part of a class of information that’s protected under the GDPR. Under that protection, explicit user consent is required before that info can be shared with anyone.

When the website invokes the Google ad APIs, that results in Google learning that this user, who they build an identity around via tracking cookies, is reading a website with information about a health condition. Google isn’t the website the user visited, it’s a third-party the website just informed without explicitly asking for the user’s permission. That violates the GDPR. The website makes money off of informing Google about this. That’s “selling user health information” to an advertiser — Google.

Google turns around to its ad bid network and solicits bids to show an ad to google push_id abcdefghihk123, who is visiting a website about mental health. That, again, is a GDPR violation, because disclosing information about a pseudonymous ID still counts under the GDPR (and the info can be de-anonymized easily by aggregation networks). That, again, is Google selling personal info to advertisers.

What it boils down to is that you’re thinking about this all wrong; the way Google sells ads, there’s no difference between selling personal info and selling ad space because you’re always bidding to show an ad to an individual you’re given data about. The bids are based on how much the person’s profile matches the bidder’s interests, and anything new learned about the person that can be gleaned from the current website visit is added to that profile. Advertisers are literally being told “here: this person with id 12345 and these characteristics matching your target profile is currently reading about depression. How much to show them an ad?” The bid request data they’re given in turn is enough to identify the user in any number of data aggregation services — most of which require “share and share alike” participation — so the bidder looks up the bid request data and discovers probable email address, location, income, marital status, interests, etc, and in return tells the aggregation service “this user is currently visiting doihavedepression.com”. Again, GDPR violation.

That’s the nature of modern online advertising. You’re never bidding to just show N ads on site Y. There is no such thing, in using Google’s advertising system, as selling an ad space on your site to Google that isn’t also a sale of personal information to Google and by Google to the advertisers bidding to fill that spot. As such there are entire classes of websites that probably should not be showing Google ads.
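
To make the mechanism above concrete, here is a minimal, OpenRTB-flavored sketch of the kind of payload that travels with a bid request. Every field name and value is an assumption made for illustration; this is not Google's actual schema.

    // Illustrative only: a simplified, OpenRTB-style bid request. All field
    // names and values here are assumptions, not a reproduction of any real API.
    interface BidRequest {
      id: string;                                  // auction id
      user: { id: string };                        // pseudonymous cookie/device id
      site: { domain: string; page: string };      // where the ad slot lives
      device: { ip: string; ua: string };          // joinable side data
      segments: string[];                          // interest categories attached to the user
    }

    const request: BidRequest = {
      id: "auction-7f3a",
      user: { id: "abcdefghihk123" },              // the pseudonym from the comment above
      site: {
        domain: "doihavedepression.com",           // hypothetical publisher
        page: "/depression-test/results",          // the page context itself leaks the topic
      },
      device: { ip: "203.0.113.7", ua: "Mozilla/5.0 ..." },
      segments: ["health.mental-health", "income.60-75k"],
    };

The point being that the page context and the pseudonymous user id travel together: every bidder that receives the request learns both, whether or not it wins the auction.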


> For that matter, even if you wanted to do some evil psychographic ad thing, why would anyone sell that info to advertisers? What are they gonna do with it? Like, why hand them the literal data, when you could allow them to just bid on ads that may be targeted/ranked using your painstakingly collected psychographic data (a la Cambridge Analytica)? Why sell the cow when you can sell the milk?

Because you don't have the platform. Not every company in data brokerage out there owns an advertising platform so that advertisers/ad networks can bid on ads. Furthermore, data isn't only about direct advertising. You can also analyze trends. See what influences purchases. Correlate purchase habits with news events or various other sociopolitical indices. Make your product better. Make new products. Find out which area is hip so you can open your new shop there, or buy some real estate. And the list goes on. Advertising is just the tip of the iceberg. You can exploit such data in myriad ways.


A greater problem is the divorce of data collection from consent.


That's because not everyone agrees that observational data is something that can (or should) be owned by a person.

When a human looks at another person, are they "collecting data without consent"? Most would say no. So at what point do you "own" observational data about yourself? When it's observed by a machine? So then CCTV should be illegal too, and indeed any sort of security system that records audio/video without consent.


> When a human looks at another person, are they "collecting data without consent"? Most would say no.

But if that human follows and looks at that other person 24 hours a day, would you still say that no consent is required?

> So then CCTV should be illegal too

It is, in many cases.


> But if that human follows and looks at that other person 24 hours a day, would you still say that no consent is required?

This. Scale matters. Scale matters. Scale matters.

It's such an infuriating and intellectually bankrupt sleight of hand when someone implies that because some small thing X is fine, then 10000X is fine all the same.


> 24 hours a day [...] 10000X [...]

I find it infuriating that the framing is either something small or 24 hours a day plus many orders of magnitude more. It's unfair to say "any look is invalid" because of perceived/potential scale. Scale only matters when defined.


Well, then quantify the scale at which data collection should become illegal with unambiguous terms.

I don't see anything wrong with making observations about people, especially if it helps my business. I'm allowed to do so without computers: "Ah, sir, I see you are sunburned, did you know we have ointment for that on aisle 4?" (no one would say: "How dare you observe that I am sunburned! That's a privacy violation! You didn't have my consent to observe that!")

So tell me how far I'm allowed to go with computers.


So, you would walk up to someone and say "Hello, I see you are clearly having mental problems! I have some products you are going to LOVE!" and then expect nothing but a positive response?


Observations need not always end in direct interaction with the potential customer. If I notice a lot of people in my store are depressed, for example, then why wouldn't I stock more products that depressed people buy? If I see that my clientele is mainly women, I might tweak my inventory in other ways.


Well that’s easy — with computers, you need explicit consent to even “observe that the user is sunburned”, because there are no inherent scaling limitations with computers.


So you are saying if we remove some of the scaling limitations of the human brain with biotech (i.e. by enhancing memory detail retention, figuring out a way to serialize memories to computer-compatible storage, etc.) it could become illegal to look at a person without consent (since you would effectively become a walking, breathing CCTV)?


Yes, I think that’s the logical conclusion of GDPR-style thinking about privacy. I would certainly protest against being able to index that data any which way.

I understand the flipside that you’re implying and the argument that you’re making, I just don’t agree.

When an individual has a computer-indexed memory that is admissible as evidence in court, I think it’s pretty okay to use it for all of the things that we use memories for today. But what about reselling that data? What about data sharing agreements that subsidize your implants? What about hackers?

I really hope we don’t get truly infallible, computer-backed memory.


Well, at least you are consistent with your position.

I just don't like that we rely on what amounts to DRM in order to give the human brain exemptions to privacy and copyright laws.

I feel that I "own" my memories and nobody should be allowed to tell me what I can do with them. If there's a device that lets me dump them to computers and sell them, first of all, it should be legal, and second of all, I should have that right.

I feel that you do not "own" what I observe about you with my own senses and that I do not need your permission to look at you, listen to you, or generally infer things about you. I don't see how a memory dumping device is different from computer sensors, and thus I don't have a problem with computers collecting information about me in public.


I feel you. Quite literally, I am emotionally attached to the idea of ownership and my memories seem like something I should own.

I just think the entire concept of “ownership” breaks down when you have zero- (or epsilon-)cost copies.


This is only a problem in a world with corporate personhood. Presumably the issue with people would be their use of said actions.


> But if that human follows and looks at that other person 24 hours a day, would you still say that no consent is required?

What if I'm not even following anyone; I'm just sitting on my front porch all day and I notice the same person keeps walking in front of my house? Is consent required even though I haven't moved? I've made 10 observations in 10 hours, yes, but only because the person has walked in front of my house 10 times.


I’d be fine regulating corporate use of behavioral data. “Persons” are not a major concern to me.


> Why sell the cow when you can sell the milk?

Because you want to sell beef and not milk? This analogy breaks down for a lot of the same reasons people in real life sell cows and not just milk.


Indeed. In particular, neither people selling beef nor people selling milk necessarily breed their own cows. Professional specialization is a thing.


> Why sell the cow when you can sell the milk?

Because according to medical tracking data, they are lactose-intolerant? :)


I have a rather specific condition, and I’ve seen YouTube advertisements for its drugs despite not even searching for the details of it except through an anonymizer. I’ve always wondered what the cost per click is for an advertisement for a $100,000/yr drug.


That’s so messed up. What do you think of advertisements for drugs in general? Have they been helpful in finding treatment for your condition? They are illegal where I’m from, I believe (I’ve never seen one myself except in the US).

Btw, this reminds me of this absolutely hilarious story I think I first heard of here on HN: https://ghostinfluence.com/the-ultimate-retaliation-pranking...


There’s basically one treatment, and one only that any specialist would recommend at this point, so it’s effectively moot other than the question of it being covered by whatever healthcare or insurance you can muster up.

The alternative and older drugs tend to have some rather horrific side effects (detaching your retinas, making your skin detach in some cases) but are very cheap; nobody would choose them if they had the better option.

I’m sort of confused in general what advertising for these sort of drugs actually achieves. It’s not like the patient really has a deciding ability (other than live or die if they can’t afford it), so all it does is act as a constant reminder of how financially screwed you are.


There's basically some segment of the population who will demand $SOME_DRUG_THEY_SAW_ADS_FOR and will change doctors repeatedly until they can get a prescription for it, even if there are better alternatives that were recommended by the previous doctors. Eliminating direct advertisement won't completely avoid that since there will always be blogs full of crazy nonsense or whatever, but I don't think there's any reason that consumers should be getting targeted for ads for things they can't even legally buy.


It's illegal to do that in the UK and therefore we have no (major) issues like you describe.

One of those regulations that is a net good to society.

Interesting video https://www.youtube.com/watch?v=ic_FpRG7Z_k


I’m betting these advertisements are for healthcare providers.


Here are some ideas:

- If you have a call with your doctor, who is probably specialized in that specific condition, that metadata alone would be good enough to guess the condition.

- Some doctors send appointment reminders via email; if you use Gmail, that would explain the YouTube ad.

- If you use the wifi/GPS at that doctor's office, and some other patient also uses the wifi/GPS and uses Google to search for those conditions, that would be enough to link you with them. Any website could make the link and tie all those people together as similar.

- Close friends and family researching your condition would also point to you having that condition. They would probably use Google to research it, which would explain ads on YouTube. They would also see ads for it, and might even click on them because they would be interested.


>Some doctors send appointment reminders via email, if you use gmail that would explain the youtube ad

Google stopped mining Gmail for ad targeting years ago: https://www.nytimes.com/2017/06/23/technology/gmail-ads.html


While this may be true, I recently started using Google Flights, and a couple of days later I started receiving email ads (ads in Gmail) about flight cancellation insurance.

So the link is there.


That's entirely different to what we're discussing. You will still be targeted based on sites you visited, but not based on emails you receive.


The language is not clear enough; scanning is different from metadata. I can see a lawyer defining "scan" as checking every word of the content, while metadata is just "account Foo sends email to account Bar". That would be enough to make the connection: account Foo later uses Google to search about topic A, then ads about topic A are presented to account Bar.


Those are all very plausible, but I'm surprised you didn't guess the most obvious:

His healthcare information was not transmitted through technical means. His insurer and/or doctor sold the information.


It’s also possible it was stolen. Through a FOI request against my hospital I was able to find the URLs where they store documents (they printed them and sent them to me, with the URLs at the top), which appear (though I obviously didn’t try) to be vulnerable to enumeration attacks. The files I was supplied with have sequential identifiers.


Assuming they live in the US, that is against HIPAA, and in my experience it is taken very seriously, since what you described is a crime punishable by a fine of up to $250,000 and up to 10 years in jail.


None of these sound reasonably ethical to me. I think we all agree that medical information shouldn't be sold. If you disagree then what do we do about HIPAA? Get rid of it? I feel like the things you listed above basically skirt around HIPAA. Legal, but they definitely violate the spirit of the law.


That's how Google operates: all they see is bits flowing one way and another, and the algorithm makes the connection. That's the problem: there are no safeguards or ways of saying "don't make those connections", since they leak personal info.


Are there any alternatives? It often gets very weird at the high end of drug prices, but usually in a way that inverts normal economic logic - this story from a few years ago blew my mind https://news.ycombinator.com/item?id=13995249#13996448


> I’ve always wondered what the cost per click for an advertisement for a $100,000/yr drugs is.

I doubt it's very much. The reason the price of the drug is so high is that there's zero competition. CPC in auction-style ads is driven upwards by competitive pressure.


One of the team behind this research posted an interesting walk-through of the traffic from one of the worst offenders: https://twitter.com/Bendineliot/status/1169259912184115206


Personally, it really sucks to get ads targeted at my mental health diagnoses.


I am not excusing Google for this behavior but in case you didn't know, you can let Google know which types of Ads you don't want to see based on your profile.

(Not joking) This was vital to me at work, because I share my screen most of the day and I started getting ads for a vacation cruise that were sexual in nature and unbelievably embarrassing.

https://support.google.com/accounts/answer/2662856?co=GENIE....

I would suggest turning off ALL personalized ads on your Google profile, but in case you like getting ads about Pet products and just don't want your health information shared you can just deselect all the "health", "diet", and "medicine" categories.

It really sucks to have to block targeted ads this way, and you might even need to stretch to blocking categories like "family and relationships" to get all the ads you don't like.


Why not just get an ad blocker?


That's a good question. I'm a web developer in the Marketing department, so a lot of the work I do involves how ads work with pages.

Having to constantly turn off an ad blocker is not only inconvenient but also slightly embarrassing when I get called out for not looking at things the way a customer would see them.


It's really tedious to make a lot of websites work with an ad-blocker. I only started using one when I got some malicious ads, but I still have to disable it for almost every business's site that I need to log in to. And they break in non-obvious ways, so you have to remember to try turning it off.


Well, Google is taking steps to neuter those, so...


I would be interested in more details


See: https://news.ycombinator.com/item?id=20050173 for a link to the uBlock Origin author's take on Chrome's upcoming changes, and discussion


They're not - it's clickbait sensationalism.

They're changing how extensions can access web requests, in order to increase user privacy and prevent phishing. This requires all extension devs to change how they handle web requests. uBlock Origin is making a stink about how it's "targeting them", despite it being a change that all devs have to make, and you can still block ads with the new version. Adblock Plus works fine with the new changes, for example.

More here: https://www.xda-developers.com/google-chrome-manifest-v3-ad-...


> They're not - its clickbait sensationalism

No it isn’t. Your take is absolutely inconsistent with the on-the-ground impact of the specific technical changes in Manifest v3.

> and you can still block ads with the new version.

Based on a static, fixed list of URLs that cannot be updated without resubmitting the extension to Google for reapproval, and that is matched using a fixed, dumb matching algorithm Google alone controls (negating a number of the more sophisticated pattern-matching rules uBlock Origin relies upon).

> Adblock plus works fine with the new changes, for example.

That the crappy adblocker that has largely sold out to advertisers is unaffected by this change is hardly a ringing endorsement. uBlock Origin’s dev is complaining about this new API precisely because it cripples the much more sophisticated ad blocking rules that are uBlock Origin’s entire advantage over less sophisticated, less performant, less effective ad blockers like Adblock Plus.
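
For readers who have not followed the Manifest v3 debate, here is a rough TypeScript-flavored sketch of the two APIs being argued about. The shapes only approximate the chrome.* extension APIs, and looksLikeTracker is a made-up helper; treat this as an illustration of the difference, not working extension code.

    // Stand-in for the Chrome extension API surface, so the sketch is self-contained.
    declare const chrome: any;

    // Hypothetical helper: arbitrary per-request logic (regexes, heuristics, per-site rules).
    function looksLikeTracker(url: string): boolean {
      return /analytics|doubleclick|hotjar/.test(url);
    }

    // Manifest v2 style: a blocking webRequest listener runs code for every request
    // and decides on the fly. This is what uBlock Origin's dynamic and procedural
    // filtering depends on.
    chrome.webRequest.onBeforeRequest.addListener(
      (details: { url: string }) => ({ cancel: looksLikeTracker(details.url) }),
      { urls: ["<all_urls>"] },
      ["blocking"]
    );

    // Manifest v3 style: declarativeNetRequest rules are static declarations that the
    // browser matches itself; the extension never sees the request. A rule like this
    // would normally ship in a JSON ruleset referenced from the manifest, subject to
    // store review whenever it changes.
    const staticRule = {
      id: 1,
      priority: 1,
      action: { type: "block" },
      condition: { urlFilter: "||tracker.example^", resourceTypes: ["script"] },
    };

The disagreement above is essentially about whether the second, declaration-only model can express what the first one does.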


> They're not - its clickbait sensationalism.

It's really not. The new mechanism for blocking ads is extremely limiting - not unlike what Safari provides. While Google's stated intention is to protect users' privacy, it just so happens to cripple extensions that pose the greatest threat to Google's business. If Google wasn't one of the world's largest ad companies, perhaps their statements could be taken at face value. But perhaps in that case they would have taken the time to come up with an API that could achieve the goals of both Google and uBlock Origin.

Edit:

> Adblock plus works fine with the new changes, for example.

Adblock Plus is an "ad blocker" whose main concern is protecting the interests of "good" advertisers, not those of its users.


42% of Chrome extensions have abused the web request API in its current format, and Google has long since said they're going to change how it works to protect users' privacy.

uBlock Origin can change how they block ads for the new more secure manifest, just like AdBlock plus does. Even if you don't like ABP as a company it is an example of how it is still possible to block ads on the new more secure Chrome manifest, making it sensationalism to claim otherwise.


> 42% of chrome extensions have abused the web requests API

No. The accurate quote is (my emphasis) "According to Google, 42% of MALICIOUS extensions have used the Web Request API since January 2018".[1]

Note that the source for this quote is Google itself, so they picked this one statistic for publication. We do not know what else they found which is not published, i.e. it's potentially conveniently self-serving toward their manifest v3 narrative.

I reiterate: deprecating the blocking ability of the webRequest API will break key parts of uBlock Origin ("uBO"), and will break uMatrix completely, because of the hard-coded matching algorithm of declarativeNetRequest (there are other issues) which is merely an implementation to enforce EasyList-like rules.

I know how uBO/uMatrix extensions work, I wrote them from scratch. If you want to argue why they will work fine, you will have to do better than merely repeating Google's narrative regarding changes in manifest v3.

ABP will be fine because the primary purpose of ABP is to serve as a revenue source for Eyeo GmbH through its "Acceptable Ads" product (of which Google is a partner), which can still function just fine with the declarativeNetRequest API -- as shown by its Safari iOS version.

I am not alone in my criticism, for instance the EFF: https://www.eff.org/deeplinks/2019/07/googles-plans-chrome-e...

Chrome Web Store allows extensions with remote code execution capability[2], this is the foremost issue and it's the one they could have fixed a long time ago with no API changes. That it has not been fixed is what you should be questioning.

* * *

[1] https://www.xda-developers.com/google-chrome-manifest-v3-ad-...

[2] https://twitter.com/gorhill/status/1139306139072507906


There is no privacy benefit to v3, since it still allows dynamic notification of requests made by the page, it only removes the ability for addons to inject their own responses.

It's almost like someone sat down and asked "How can we get rid of ad blockers, without impacting spyware?".


> 42% of chrome extensions have abused the web requests API in it's current format, and Google has long since said they're going to change how it works to protect users' privacy.

So we're throwing the baby out with the bathwater because Google's extension approval process is so dismal it allows through a large minority of abusive extensions, and this is supposed to make us feel better about a change in the API that gives this dismal and apparently near-useless approval process more control over the contents and behaviour of ad blockers?

> uBlock Origin can change how they block ads for the new more secure manifest, just like AdBlock plus does

Which, again, entails UBlock Origin moving from a frequently updated list of blocking rules relying on sophisticated wildcard matching functionality to a hard-coded, fixed-size static list of blocked URLs that are not allowed to contain any wildcards for redirecting to more secure or private versions of content, and which cannot use any other kind of more complex ad-blocking logic. A hard-coded list of dumb blocked domains that is infrequently updated only with Google's approval (via the same dismal process that allowed through 42% of abusive extensions above...) of a complete re-submission of the blocking extension, approved or rejected at their whim and leisure, because live updating rulesets will also be banned "for security".

"UBlock Origin will be able to work every bit as poorly, and be every bit as easily defeated as Ad-Block Plus' unsophisticated, frequently useless filters are, plus its rules will get updated far less often" is, again, not in any way the good thing you seem to be pretending it is.

Just some highlights about this "great" change that you're singing the praises of, discoverable from the very link you posted:

> uBlock Origin heavily relies on pattern matching, and the extension developer stated that it is not possible to retrofit his extension’s matching algorithm to meet the APIs requirement. The API would also require a complete extension update to simply update the filter list, which would be a far too frequent activity considering the frequency with which these filter lists are updated. Of course, these updates would also hinge on Google’s extension review criteria and processes.

Hmmm.

> The blocking list must be present in the extension at install time and can’t be updated without updating the entire extension. This is subject to Google’s extension review criteria and processes. This means you won’t be able to opt-out of something like AdBlock Plus’ Acceptable Ads program, as the proposed new API doesn’t allow for rulesets to be turned on or off in the same extension.

Hmmm.

> declarativeNetRequest’s redirect action can only redirect to a static URL; meaning that you cannot redirect using the new API from a pattern like “://www.youtube.com/embed/” to “https://www.youtube-nocookie.com/embed/%2”. This is pretty much all my Privacy Enhanced Mode for Embedded YouTube Videos extension does, by the by.

Hmmm.

> Some extensions, like the EFF’s Privacy Badger that compiles lists of and blocks web beacons and trackers based on browsing activity (a method that breaks a fair number of websites), wouldn’t survive the transition.

Hmmm.

This API redesign "for security" just so happens to expertly fuck over the most powerful privacy and ad-blocking extensions, while leaving neutered crap like Adblock Plus With Mandatory "Acceptable" Ads working.

None of this is an accident. With Ublock Origin neutered out of usefulness, driving users back to Adblock Plus with mandatory acceptable ads that just so happen to whitelist Google will, of course, be a big win for Google's ad revenue.


Sure, I did my best to turn off all personalized Google ads, and then every now and then I get something extremely specific, that has to be targeted at somebody. But is it based on my data, or something that correlated with me? Something not part of Google's data per se?


Could you imagine being diagnosed as being paranoid schizophrenic because you think there is some conspiracy to profile and target you with advertisements based on mental health? And your doctor must be in on it because instead of listening to you they just hand you drugs and tell you to take them.


I've seen online ads that mention tardive dyskinesia, which is a drug side effect that is particularly associated with psychiatric drugs such as antipsychotics. Of course, there's no way to tell exactly what algorithm/data this is based on, and whether it's my data or the data of people that I'm connected with. Seems like somewhere there should be a HIPAA violation, but that's pretty hopeless.

I think it's safe to say that anyone who is diagnosed as a paranoid schizophrenic is being targeted, which makes it all the more difficult to suppress feelings of paranoia and distinguish actual tracking from coincidences. I doubt there are any perfectly sane people being driven mad by advertising, but it's quite plausible that it has a negative effect on people trying to cope at the margins.


You just described Alex Jones's business model.


C'mon, everyone knows AJ is controlled opposition.


To avoid this: Firefox with uBlock Origin and the resist-fingerprinting setting on. Use private mode whenever you don't need logins. Bonus: use a $3/mo VPN to dodge your ISP's tracking (look up xandr.com and weep), or just use Tor Browser for browsing; it's pretty fast these days.


If you're using Firefox, give the Firefox containers a try. https://addons.mozilla.org/en-US/firefox/addon/multi-account...

Gives you everything you seek from a private session


You could get "targeted" in the sense of a prospective employer not employing you as well.


> it really sucks to get ads targeted at my mental health diagnoses

I am extremely cautious to never look up health info without using a VPN and to never discuss health in email exchanges.

After receiving a diagnosis from my doctor some years ago for a condition, a week later I received a targeted snail mail ad for the condition using my full legal name, which only the DMV, my doctor, and the property assessor use. For all web stuff, purchases, and email I use a nickname instead.

I discussed it with the doctor's office. They said they are HIPAA compliant and wouldn't say more other than referring me to their privacy policy. The privacy policy, like ALL medical privacy statements these days, contains vague language about "sharing" with third parties and partners in certain circumstances, which as written don't appear to amount to selling private medical data for advertising, but clearly do.

Currently when seeing new doctors I use a fake name, pay cash, and give them a phone number to a phone separate from my normal one, which I bought with cash and never associate with my own name. Works for prescriptions for the most part too, since I pay cash for those as well and ID is only required for narcotics.

Since then, not so many targeted ads.


> a week later I received a targeted snail mail ad for the condition using my full legal name, which only the DMV, my doctor, and the property assessor use

I had something similar happen. When I told my doctor, he blamed the drug store chain (Walgreens in this case) for selling the information. I don't know if that's true, but my doctor believes it.


I was unhappy that CVS sent me promotional snail mail based on my prescriptions, but I don't know what I can do about it, since I assume they have lawyers that told them it was legal under HIPAA.

I also get text messages from them that have links that may or may not allow the retrieval of health information, which also bugs me inasmuch as there's no way to opt out (at least not without ending all text messages).


Agree, I don't need a million ads for a useless text therapist app.


You obviously need therapy if you don't want the advertisements. /s


Is that how you really feel?


The /s was supposed to mean sarcasm, however that is probably the direction that companies will try to take the mental health profession.


Some are already offering text chat for a monthly fee as a cheaper alternative to a face-to-face talk with a therapist at an hourly rate. However, I don't know if anyone is charging for a bot yet.


I suspect that GP was riffing on typical responses from a therapist chat-bot.

https://en.wikipedia.org/wiki/ELIZA


Of course, someone developed a script for a patient too; I think it was called PARRY. Perhaps these could be combined in an adversarial learning setup...


I wish there was somewhere I could explicitly opt-in/sign-up for ads targeted at my mental health problems...or my problems in general.

As someone who hates shopping, I've been really disappointed with the quality of ad targeting for the past decade.

The ability to select categories of ads has improved things. But I wish I could just write some rule-based criteria and then take 3 hours, look through a bunch of ads, and give them ratings in order to train personalized machine learning models.


Don't know about ads, but I know Google has this feature (Google Alerts, I think) where, when something new about a particular keyword comes up, it will email you the links. I use it to track new research on my own health issue.


Why not use independent review sites? Those exist for a lot of categories of shopping.

I'm not sure one exists for prescription drugs or mental health services, though--that would be an interesting idea.


Ads are not targeted at your diagnosis - that is absurd. Ads are targeted at your browsing habits, which may reflect your diagnosis. One is illegal, one is legal. If we want to have an open discussion about advertising, making claims like that only hurts privacy advocates.


It's not that clear-cut. If particular metrics of 'browsing habits' directly reflect health information, then it is not legal to collect and use these metrics unless very specific conditions are met (which generally can't be met in the generic third-party advertiser case). Under GDPR, health data, along with some other categories, counts as especially protected data with much more stringent restrictions than the usual GDPR restrictions.

For example, if you place a cookie recording visits to a specific results page of a mental health test (e.g. a page saying "you got 14-18 points on this test which implies foobar"), or get a https://www.hotjar.com/ recording of what the user typed in that test, then that would be blatantly illegal, violating GDPR article 9.1. I'm not even talking about using that data for targeting ads; the default position (if no exceptions are met) is that you're not allowed to collect and store that data.

Collecting user browsing habits is not (in the EU) "legal by default" - it may be legal, but it may not be (e.g. if sensitive data are collected, as in this case), and it's the duty of the data controller to ensure that all the requirements are met - if it turns out that it's too difficult for you to automatically distinguish which browsing habits you are allowed to collect and which not, then the only legal option is to assume that you're not allowed to collect them.

Having your health data not be collected by random companies is an important inalienable right (e.g. privacy as one of the core rights in EU charter of human rights); while collecting and storing browsing habits of other people is a privilege, you're allowed to do that only if you can meet all the required conditions.
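
A hypothetical page-side snippet, just to illustrate the kind of collection described above. The endpoint, parameter names, and score are invented; no specific vendor's API is being reproduced here.

    // The third-party collector automatically receives the visitor's IP and its own
    // cookies with this request; the URL then adds the health-related context.
    const score = 22; // result of the on-page questionnaire (invented value)

    const beacon = new URL("https://collector.example/event");
    beacon.searchParams.set("page", location.pathname);     // e.g. /depression-test/results
    beacon.searchParams.set("quiz_score", String(score));

    // A 1x1 image request is the classic delivery vehicle for this kind of data.
    new Image().src = beacon.toString();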


That is a fair response - the article is on European countries and I was only thinking of the US in my response.


It's absurd that you think advertisers can't generate this information:

1. Get addresses of mental care clinics and offices.

2. Geofence addresses

3. Correlate devices that visited geofence addresses (using LiveRamp data) with devices that saw your ads on mental health sites (see the sketch after this list).

4. Bonus: look at the path on the pages to figure out what disease they were viewing when your ad was displayed, if you weren't already targeting specific page content.
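
A rough sketch of the correlation step (3), with invented data shapes; real location and impression feeds are far messier, and no vendor's format is being reproduced:

    // All shapes and the bounding-box geofence are assumptions for illustration.
    interface LocationPing { deviceId: string; lat: number; lon: number; ts: number }
    interface AdImpression { deviceId: string; pageUrl: string; ts: number }

    // Hypothetical geofence around a clinic, expressed as a crude bounding box.
    const clinicFence = { minLat: 51.50, maxLat: 51.51, minLon: -0.13, maxLon: -0.12 };

    function inFence(p: LocationPing): boolean {
      return p.lat >= clinicFence.minLat && p.lat <= clinicFence.maxLat &&
             p.lon >= clinicFence.minLon && p.lon <= clinicFence.maxLon;
    }

    // Devices seen inside the fence, joined against devices that were served ads
    // on mental health pages (steps 2 and 3 from the list above).
    function correlate(pings: LocationPing[], impressions: AdImpression[]): AdImpression[] {
      const visitors = new Set(pings.filter(inFence).map(p => p.deviceId));
      return impressions.filter(i => visitors.has(i.deviceId));
    }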


From a technical perspective it's possible, but geodata companies have business rules that don't allow it. No provider will allow those segments to be created (I've tried to do it). If you reach out to PlaceIQ, Factual, Oracle, etc., they will turn you away.

Healthcare targeting has been severely reduced in the past couple years because of concerns like these. You used to be able to target diseases/interest in diseases/etc but can't anymore.


This is a little comforting, but I doubt the secret rulebook is going to cover everyone's concerns.


How would they access location data unless authorised by the user?

("absurd" is a strange word to use; it might be nicer to educate without condescension).


Sorry, I used "absurd" to echo the condescension of the parent comment. Which you didn't respond to....

Location data is so easily available that is largely a commodity now, sources include GPS from "always on" apps like the Weather Network but also apps that are collecting this data without your permission. Apple and Google are constantly kicking apps that do this out of the store.

Also tricks like "local wifi" devices are being used; Apple just reduced apps' ability to sniff networks for this reason. There is another response that lists some other data sources, your cell phone company being the worst offender for many reasons.


> How would they access location data unless authorised by the user?

> > Correlate devices that visited geofence addresses (using LiveRamp data)

LiveRamp is hooked into a lot of ad services and a user's location information will routinely make its way to them. Played some ad-supported mobile game on your phone while in the Doctor's waiting room? LiveRamp has that location data.

Even if the user doesn't have GPS enabled for the game, they might have it enabled for some other background process that routinely asks for it, like Facebook or Twitter or FourSquare, who package that data for sale. Or you're on the Doctor's wifi, and most stable IPs like that are in location databases. Or they just buy it off your cell phone provider, since many happily sell location information.

LiveRamp is a gigantic business built entirely on knowing approximately where you are any time you interact with their tracking servers.


The thing is, what do you make of advertising that is very, very, specific and not aimed at a general audience...but also wrong? When someone is very certain and wrong, it means they think* they have inside information. The certainty points to it being overly intimate, yet misinterpreted.

*Please don't take this excessively literally.


This kind of thing makes me want to start a small independent ISP that automatically blocks trackers, a la Pi-Hole, unless the customer specifically opts in for them. Though, I’m not sure if there are legal hurdles in the US for doing this.


I was interested in Pi-Hole, but didn't really want to modify my existing DNS setup.

I have a server running BIND that has forwarders set to external DNS and has views setup so I can reach servers within my network with the same DNS names as externally.

I found this - https://www.pitt-pladdy.com/blog/_20170407-105402_0100_DNS_F... - and it takes the same blocklists Pi-Hole uses and generates BIND RPZ files. It accomplishes the same thing but integrates nicely into my setup. I have it updating weekly.

I mention this to say that I would consider using my ISP's DNS servers if they were Pi-Hole RPZ enabled. And since you can use any DNS you want I can't see why that would create legal issues.
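
For the curious, the idea behind that conversion looks roughly like this. This is a toy sketch, not the linked script: it reads a Pi-Hole-style hosts blocklist and emits RPZ records that make BIND answer NXDOMAIN for those names (a real RPZ zone also needs its SOA/NS header).

    import { readFileSync, writeFileSync } from "fs";

    // Lines look like "0.0.0.0 tracker.example" or sometimes just "tracker.example".
    const hosts = readFileSync("blocklist.hosts", "utf8");

    const records = hosts
      .split("\n")
      .map(line => line.trim())
      .filter(line => line.length > 0 && !line.startsWith("#"))
      .map(line => {
        const parts = line.split(/\s+/);
        return parts.length > 1 ? parts[1] : parts[0];   // take the hostname column
      })
      .map(domain => `${domain} CNAME .`);               // RPZ action: answer NXDOMAIN

    writeFileSync("blocklist.rpz", records.join("\n") + "\n");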


An ISP won't really cut it these days, since more and more internet use, particularly in developing countries, is mobile. You need to start a small independent mobile carrier.


Someone needs to call John Legere and get T-Mobile working on this.


I mean offer a virtual ISP using VPN. Sell a preconfigured router.


Net neutrality is gone, block all you want as an ISP!


The legal hurdles are billions of dollars spent by corporate ISP conglomerates that have less morals than you do, and unfortunately that makes it more difficult to compete.


More like no morals. They only have "morals" when they might get sued or fined, or when it just isn't profitable.


Can't you just offer a DNS service that does that?


Sure, that would be more cost effective, but less user-friendly. Most people don’t even know what DNS is, and it would be a nightmare walking every single customer through their router’s DHCP settings. Plus, if they cancel their subscription to my DNS, now they have to change it back.


When I read things like this, any qualms I have about using ad/tracking blockers melt away.


Companies have practices for opioid abuse surveillance and prevention that highlight how available this data is. An example:

https://www2.deloitte.com/us/en/pages/public-sector/solution...

They use insurer datasets to model at-risk populations, especially populations with high costs, to identify intervention opportunities. I saw one model where they could identify all pregnant women in a state with 98% accuracy and score them for risk of opioid dependency.

You can layer that type of data with online advertising from vendors like Google and identify opportunities to target behavioral factors that, combined with medical risk factors, present opportunities. For example, a blue-collar worker with back pain being treated with opioids has a baseline risk of abuse. If her address changes or behaviors like online gambling appear, that increases the risk of abuse.
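
As a toy illustration of what "layering" clinical and behavioral signals means in practice, something like the following; the features and weights are invented, not taken from any real model.

    // Invented feature set and weights, purely to show the shape of such a model.
    interface Signals {
      onOpioidPrescription: boolean;   // clinical baseline from claims data
      recentAddressChange: boolean;    // behavioral signal from broker/marketing data
      onlineGamblingInterest: boolean; // ad-tech interest segment
    }

    function riskScore(s: Signals): number {
      let score = 0;
      if (s.onOpioidPrescription) score += 0.3;
      if (s.recentAddressChange) score += 0.2;
      if (s.onlineGamblingInterest) score += 0.2;
      return score; // higher means flagged for "intervention"
    }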

Similar tech has been developed to combat extremist behavior.

http://chicagopolicyreview.org/2019/04/18/can-online-ads-hel...

In short, your health data is not meaningfully private.


Unfortunately, if you have the clout and money and a facile excuse, you can also get data on patients straight from the NHS itself.

'Revealed: Google AI has access to huge haul of NHS patient data' - https://www.newscientist.com/article/2086454-revealed-google...

'Data deadlines loom large for the NHS' - https://www.bmj.com/content/360/bmj.k1215


this is exactly why i view surveillance and advertising as attacks on my personal autonomy. advertisers will use anything and everything against me, whether or not i ever consented to anyone knowing or using any of my information.

if i'm surveilled, anything i do can be monetized and that monetization directly and seriously harms me and exposes me to risk that i did not opt into. if an advertiser knows that i am mentally ill on the basis of my search queries or other data which they illegitimately procure without my consent and against my active objections, they can target me for exploitation with an arsenal of dirty psychological tricks designed to get me to buy their products. if i am like most people, in the long run, they will win because i will buy at least one product which they have forced onto me.

in other words, if i am forced against my will to view a targeted advertisement, it is an inexcusable and unprovoked attack on my right to refrain from economic activity that i do not wish to undertake. it is an attempt at coercion using weaponized persuasion. it is not an attempt in good faith to improve my life or to help me.

this remains true whether that advertisement targets an area where people are explicitly exceptionally vulnerable, such as in the case of mental health, or something more mundane, like my love of fast cars and nice wine. however, in the case of mental health, it may be such that the act of targeting someone with a relevant advertisement genuinely makes their condition worse. so, as we all knew all along, these advertisements are actively, maliciously, and viciously harming people for the sake of a few clicks.

the bottom line here is that advertisers and advertiser-enablers are long overdue for their comeuppance. i'd support a ban on targeted advertisements, but that won't happen legislatively. GDPR and similar laws are a start, but they don't go nearly far enough to punish transgressors. i'd be more satisfied with criminal liability for advertisements exploiting protected classes of PII, but we'll see how things evolve.


Oh awesome. So does that mean we can finally do away with the gigantic money sink that is HIPAA? Because if we're gonna have our health information leaked, then there's no reason to keep up this charade.


I get the feeling these are just general websites with mental health information, articles and little tests, but not actual health providers. If it's just a directory and not your health insurance company or doctor, they don't fall under HIPAA. HIPAA only applies to the US. EU laws and the GDPR, which this article is discussing, are very different.


Always use a VPN and incognito mode when accessing any sites for which your interest has value to others that could harm you.

It's not surprising at all that web sites covering medical issues are also tracking interest in those issues, correlating them with identities, and selling them to third parties, perhaps including life and health insurance companies, potential employers, etc.


> And many used Hotjar, a company that provides software that allows everything users type or click on to be logged and played back.

Er, what?


It even records the mouse movements, generates heatmaps, etc. You have to make an extra effort to filter out certain input fields, like passwords or credit card details.

https://help.hotjar.com/hc/en-us/articles/115012439167-How-t...
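
The linked article covers the opt-out side of that: fields have to be marked individually to keep their contents out of recordings. A minimal sketch, assuming Hotjar's documented data-hj-suppress hook (verify against the article above):

    // Marks one sensitive field so the recording tool is supposed to exclude it.
    // Note that this is opt-out per element; everything unmarked is recorded.
    const sensitive = document.querySelector<HTMLInputElement>("#quiz-answer");
    sensitive?.setAttribute("data-hj-suppress", "");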


IBM had something like this once; I forget the name, something like Tealeaf. We were forced to use it in our mobile apps until it became obvious it was terribly broken.


Yes, every mouse movement, unsubmitted form input, scroll, highlight, and so on. This stuff can also be used to fingerprint you. LiveSession, Inspectlet, UXCam are competitors. uBlock Origin on Firefox can be configured to block these.


I was on the fence about installing an ad blocker until this.


I worked for a major DSP / advertising platform and I was asked to set up a campaign to advertise some particular drug to bipolar people in a manner that would specifically target that population.

I declined to do so and I was later fired. They were within their rights to fire me and I was within my rights to decline. That is all.


We need to start thinking ahead about who is going to run the federal agencies and "tobacco truth" type organizations funded by the billions of damages from the inevitable settlements, to avoid regulatory capture.


The question that should be asked is: which technology companies bought that information, and was that info sold to human resources departments?


Show HN: A boat for servers to keep your tech company on international waters post-digital civil rights.


Patent US7525207B2 - Water-based data center (Google LLC)

https://patents.google.com/patent/US7525207B2/en

Google's [...] floating data centers off US coasts (2013)

https://www.theguardian.com/technology/2013/oct/30/google-se...


Why waste money on floating the servers when you can sink them?

Microsoft is already piloting it:

https://www.bbc.com/news/technology-44368813


Passive cooling with no rent costs is hard to beat. Just gotta get to the bottom of the ocean first.


Never heard of one being implemented. Maybe that’s because you can’t store a googol in a floating point?

I’ll see myself out.


Jesus, imagine Google's data handling practices in international waters.


Microsoft has an actual underwater data center for energy efficiency purposes: https://news.ycombinator.com/item?id=17244525


If this is how I get rich I'll be happy and sad at the same time.


Or a satellite...


Ever tried keeping a server cool on a satellite? It's not fun.

If underwater cooling is way better than air cooling, air cooling is absurdly better than vacuum cooling in a sun oven...


Good point


That doesn't protect you from GDPR, specifically cited in TFA. The companies studied were located in France, Germany and UK and presumably dealt with EU residents' Personal Data.

So a boat in international waters (seasteading) won't help you. However, simply locating in China is probably adequate.

Obviously your comment is tongue-in-cheek, but being inaccurate it is also misinformation. You've already attracted a serious comment.


As much as I value privacy and think America needs a new constitutional amendment enshrining it in our rights, this title is clickbait and the entire article is designed to whip you into a panic. Here's the crux:

"Privacy International (PI) investigated more than 100 mental health websites in France, Germany and the UK.

It found many shared user data with third parties, including advertisers and large technology companies."

Yes. Mental health websites use third-party ads just like everyone else. Case closed.

They're not "selling mental-health information" as if they're violating HIPAA or something. They're just ordinary websites with ads and other tracking cookies.

This is more of the silly kind of half-truths that were used to put cookie warnings on nearly every site on the internet. Don't fall for it.


> They're not "selling mental-health information" as if they're violating HIPAA or something. They're just ordinary websites with ads and other tracking cookies

> This is more of the silly kind of half-truths that were used to put cookie warnings on nearly every site on the internet. Don't fall for it.

I’d suggest instead that your post is the kind of “silly half-truth” that people shouldn’t fall for.

As the article points out (and which you neglected to mention):

> And in the case of particularly sensitive data, such as health information, this consent must be explicit.

> But the PI investigation found many cookies were installed on people's devices before any consent had been given.

Contrary to your “it’s just an ordinary website, and ordinary websites use cookies, nothing to see here” narrative, these are websites that specialize in giving health advice. The knowledge that a person is seeking advice and tests for indicators re: depression is absolutely sensitive health information (insurance companies and potential employers would happily weaponize that information given half a chance), and the law therefore required explicit consent before using those cookies and forwarding those depression indicator survey answers to third-party marketing data-aggregation affiliates, which these websites did not seek.

This is a clear-cut violation.


Indeed.

I'd also add that:

> half-truths that were used to put cookie warnings on nearly every site on the internet

The half-truth here is that half-truths were used. Cookie warnings are "this site spies on you" warnings (you don't need them if you only use cookies for making the site work), which were introduced as a strong suggestion for the industry to get its act together. It didn't, so now we have GDPR.


> They're just ordinary websites with ads and other tracking cookies.

This is an interesting take. It means that for information to be sold to third parties it must be explicitly negotiated and delivered, through contractual means, as if it were a product sold with invoices emitted.

What actually happens is virtually the same, minus the explicit negotiation and invoice. Information gets sent to third parties, who cross-reference browser information and the location from which the info was sent, effectively identifying and tying the user to the mental-health-related website. That is, in my opinion, no different from explicitly selling mental health information to ad companies, and it should be treated as such.


This is indeed just business as usual on the web, but I'm not sure that should be an acceptable excuse. Clickbait or not, the article isn't wrong to describe in vivid terms how huge companies traffic in very private personal data so that they can more effectively put ads in your face. Given how willing people seem to be to trade their privacy for a few pennies in free services, maybe plain-language articles like this are what we need to communicate the consequences of where that kind of attitude can lead.


How are precise reading logs of websites that help people learn about and treat mental health conditions not "mental-health information"?

"Subject A traveled from a house (maybe not his house, you see, it's de-identified!) went to the library, selected a book on depression and another on cognitive behavorial therapy, read two chapters in the book on depression, made notes about 4 chapters from the CBT book, went back and re-read one chapter from the first book. Subject took quiz from first book which indicated a severity of 22 on the PHQ-9 depression score (it's not PHI because maybe he made up all the answers!) Subject appeared to move slowly with a sad affect through the library and while reading. Subject appeared to be interested in the 2nd chapter of the first book. Subject's other interests include 2000s romantic comedies and RPG video games. Subject's income is approximately 60k/yr and lives in a household with size 1."


If a hospital said “we can’t give you patient data, but we’ll give you your own booth to record faces, names, and let you follow patients around", people would be right to be angry.


What? The data that you've visited a mental health support website plus your IP easily takes you into GDPR sensitive data territory, and the rules around this are super strict: https://gdpr-info.eu/art-9-gdpr/.

You absolutely should not be handing that information about people to random marketing companies without consent, let alone without even informing users. In France, Germany & the UK I would expect that this is a clear & major breach of GDPR and who knows what else.

If you read the report, there are also sites where you fill in a quiz about your mental health & depression, and they forward that data to services like qualifio.com ('The leading Interactive Marketing & Data Collection Platform').

That's definitely concerning. There's no way this is reasonable behaviour, and we shouldn't accept it as such.


If it's gathering health information and outputting guidance then they are legally no longer just "ordinary websites with ads and other tracking cookies". Something like a survey that may offer guidance to, say, a suicide hotline if the survey results point that way crosses a border between passive information dissemination and active health advice. I think it's unlikely US courts would rule that such systematic gathering of health information doesn't make it a "covered entity". But that's just the US. The article focused on the EU, and there the GDPR automatically elevates the level of opt-in permission required.


[flagged]


That's not remotely true and has nothing to do with my point.


You're right about it having nothing to do with your point, but GP is correct.

Simple example: Californians can't get pre-filled tax forms because the program to provide that service was lobbied against by tax firms. There are tons of examples of oil and gas lobbying against environmental concerns, food/beverage lobbying against public health; the list goes on.

In the USA, large companies will get their way most of the time at the expense of the public. Every once in a while, the government realizes an issue and puts in some regulations. Rinse and repeat.


> Californians can't get pre-filled tax forms because the program to provide that service was lobbied against by tax firms.

Do you have information on this? From what I heard, California cancelled it after implementing it because there was not sufficient use. I think it's less lobbying and more people sticking with what they are familiar with... TurboTax or an accountant.


This policy report shows great success on all measures: https://www.taxpolicycenter.org/briefing-book/what-was-exper...

Most articles pointing out the lobbying issues are somewhat biased, so I'm struggling to find something objective enough I want to share. It was no secret at the time though, so a little Googling will probably surface some resources to look at.


They were talking about here on Earth; it may be different where you are.



