
Thank you, was the first thing I looked for.

While the intention of the bill seems good, I worry that this will become the next cookie popup. From the text of the bill:

"Any large online operator that engages in any form of behavioral or psychological research based on the activity or data of its users shall disclose to its users on a routine basis, but not less than once each 90 days, any experiments or studies that user was subjected to or enrolled in"

Based on the bill's definition of behavioral research, simple analytics would fall under its purview. This means that every site with over 100M MAU will have to have a popup disclosing that it's running analytics and A/B testing (because honestly this won't stop any of them from doing it; these things are industry standard).

I don't need to be informed that Facebook tracks what links I click on their site. I don't need Google to tell me that they have a history of every search I've made, and that they tailored those results based on my past searches. We're trying to create a safe-space web to the detriment of UX. I support a lot of the stuff around younger kids, but I think the stuff for adults is just going to become a nuisance.



Given the historical record, it's hard to imagine a time when someone would make a comment like "Regulations on human psychological experiments, what are we, a bunch of delicate flowers?!" But I have a hard time believing it would be seriously novel to conflate what goes on online with group psychological experiments. I'm not sure which side of the debate I would land on, but I do think it's a reasonable enough thing for a deliberative body to consider, and certainly reasonable enough for a healthy debate, without trying to sandbag it as "safe space".


Framing it as "human psychological experiments" evokes famous psych experiments like the marshmallow test and the Stanford Prison Experiment. Tech companies run these kinds of experiments occasionally (I believe Facebook is known to have tried to manipulate mood with a News Feed change), but I feel like this way of describing it masks the fact that 99% of large-scale A/B tests are things like:

- We slightly changed the color of the submit button

- The forward button was removed from the context menu on chat bubbles

- An infrastructure change reduces the load time of the comments on articles by 10ms

- The weekly ad relevance model update is being certified, yielding a 0.000001 increase in CTR for small segments of the market.

On average, much more mundane than "human psychological experiments".
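
For a sense of how mundane the mechanics typically are, here's a minimal sketch of the bucketing step behind this kind of test (the experiment name, variants, and hashing scheme are made up for illustration, not taken from any particular company's tooling):

    import hashlib

    def assign_variant(user_id, experiment, variants=("control", "blue_button")):
        # Deterministically bucket a user by hashing their ID with the experiment
        # name, so the same user always sees the same variant.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    # Each pageview just asks which bucket the user falls into and logs what happened.
    variant = assign_variant("user_12345", "submit_button_color_test")

That, plus a metrics query comparing the two buckets afterward, is roughly the entirety of most of these "experiments".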


In academia, those sorts of tests would still need to be approved by the human experimentation ethics board. There are no exceptions for trivial tests.


> In academia, those sorts of tests would still need to be approved by the human experimentation ethics board. There are no exceptions for trivial tests.

That's the point. It's like requiring a report to be filed whenever there is a "use of force" but then applying that rule using the Newtonian definition of force. Sat in your chair? File a report. Stood back up? File a report. Filed a report? File a report.

Worse, this kind of thing can happen retroactively. If you discover that your numbers are different than expected but you hadn't declared any experiment, comparing what changed before and after is the experiment. But you hadn't notified those users that you were doing an experiment, because you hadn't expected to have any reason to. So now you can't even have the people with the before-and-after data talk to the people who know what changes were made to the system in that time frame, because comparing that information would constitute doing the experiment.

It's like telling a car company they can't see their sales data when deciding which models to continue producing because it would constitute doing a psychological experiment on what kind of cars people like.

(On the other hand, it sounds like the law would only apply to entities the size of Facebook, and screw those guys in general. But it really is kind of a silly rule.)


We can all talk and be flippant about how trivial it is to change the colour of a button or whatever, but the sum total of all these changes is something different.

These services are running huge numbers of experiments in order to maximize engagement. Then everyone wonders what happened when tons of people on Facebook end up depressed and tons of people on YouTube end up radicalized by extremist rabbit holes.

It's death by a thousand cuts.


That's a separate problem though. The solution for that isn't to do something at the level of the individual experiments, it's to do something at the agglomeration level where the trivial individual harms are actually accumulating.

If you have some food which is infected with salmonella, you don't pick it apart with a microscope at the level of individual cells and try to separate it back out, you just throw the whole thing away and eat something else.

In this context the contaminated food is Facebook.


To continue with your analogy, Facebook is just one tainted chicken breast in the meat counter. We need to examine the entire meat packing and inspection infrastructure that gave rise to this mess.


> comparing what changed before and after is the experiment.

IIUC, in order to do that comparison you still need to collect data. You may throw that data away and your experiment ends right there, or you may do analysis on that data, but you said it yourself: it is an experiment.


Right, so what are we trying to do here then? Having a notification that you're constantly participating in an open-ended experiment with a purpose to be determined at a future date seems worse than nothing. But if you require a more specific notification before the data is collected, then after-the-fact analysis doesn't just require user notification, it's inherently prohibited.


Yeah, I'd expect the notification to be an opt-in: "Do you want to be part of this experiment?"


The experiment where they change the color of the submit button? What should cause me to care about that?

And what does opt-in even look like? No matter whether you want to "participate in the experiment" the submit button still needs to be some color for you, which is the only part of the "experiment" with any direct effect on you.

The concern with psychological experiments isn't that they're collecting data. That's a different bailiwick. The major issue with psychological experiments is that they may have significant direct psychological consequences. If you show people only news stories about mass shootings and conflict it may cause them to become violent or suicidal -- which has nothing to do with whether you collect data on it or what you do with it afterwards. The experiment itself is the harm.

Which means we would need some kind of principled and efficient way of distinguishing those kinds of "real" experiments from just measuring what happens when you make a subtle adjustment to a context menu.


Yes, and it's dumb. It's a bureaucratic nightmare that most likely inhibits progress. Not only that, this is also being used as a cudgel to silence the people who did the grievance studies hoax. [0]

[0] https://reason.com/blog/2019/01/07/peter-boghossian-portland...


Nor does it deliver a guarantee on the results you get out at the end: the Stanford Prison Experiment was faked, and academics have struggled to replicate the marshmallow test.


While I agree with your point, I do think that some kind of ethics oversight should happen over experiments like the ones you mentioned. I just think that it's absurd to expect the same from simple tests.


I thought the marshmallow test was replicated but clarified: that it showed how children react to untrusted vs. trusted adults, not that it uncovered some genetic propensity for deferred rewards.


A/B testing is usually trivial and the disclosure will be trivial too. It does not need to be any burden to the readers (like a popup), and with good tooling (and A/B testing requires tooling) it will not add much work for developers either. It might just be automatically recorded in a public log. From time to time there will be something more interesting in that log, and it will be the task of the media to analyse the logs and discover the interesting stuff. The disclosure just makes that work easier.
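
As a rough illustration of the "automatically recorded in a public log" idea (purely hypothetical; the function, file name, and fields are invented, not from the bill or any existing tool):

    import json
    from datetime import datetime, timezone

    def register_experiment(name, description, log_path="public_experiment_log.jsonl"):
        # Append one disclosure record per experiment as part of the normal
        # experiment-configuration step, so disclosure costs nothing extra.
        record = {
            "experiment": name,
            "description": description,
            "registered_at": datetime.now(timezone.utc).isoformat(),
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")

    register_experiment("submit_button_color_test",
                        "50/50 split testing a blue vs. green submit button")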


I think the hope is more along the lines that they'll have to tell you if they are running something like the emotional contagion study.

https://www.forbes.com/sites/gregorymcneal/2014/06/28/facebo...

https://www.theatlantic.com/technology/archive/2014/06/every...


The text of the law is what matters, and it's pretty clear that

> Any large online operator that engages in any form of behavioral or psychological research based on the activity or data of its users shall disclose to its users on a routine basis ... any experiments or studies that user was subjected to or enrolled in

means that every A/B test needs to be disclosed.


No. Not every A/B test.

> (6) LARGE ONLINE OPERATOR

> The term "large online operator" means any person that—

> (A) provides an online service;

> (B) has more than 100,000,000 authenticated users of an online service in any 30 day period; and

> (C) is subject to the jurisdiction of the Commission under the Federal Trade Commission Act (15 U.S.C. 41 et seq.).

Presumably there are definitions of other terms in that sentence (e.g. experiments, studies).

Here it is:

> BEHAVIORAL OR PSYCHOLOGICAL EXPERIMENTS OR RESEARCH—

> The term "behavioral or psychological experiments or research" means the study, including through human experimentation, of overt or observable actions and mental phenomena inferred from behavior, including interactions between and among individuals and the activities of social groups.

Honestly, I don't think this is clear enough. "Person clicks BLUE instead of GREEN" may or may not fall under this definition. I don't think it should, but if I have 100M+ authenticated users per month, I'm probably going to put up a notice anyway.


> Honestly, I don't think this is clear enough. "Person clicks BLUE instead of GREEN" may or may not fall under this definition. I don't think it should, but if I have 100M+ authenticated users per month, I'm probably going to put up a notice anyway.

I'm not sure why you think an A/B test is not covered by

> the study ... of overt or observable actions and mental phenomena inferred from behavior

but it seems to me to be the very thing targeted by this legislation. I agree that the end result (Google disclosing 100k A/B tests each quarter) is a grotesque tax on private industry without any social gain. However, it doesn't strike me as terribly out of line with other legislation in its effect.


Define study. Define mental phenomena. Define behavior. I just don't picture the wording being clear enough that it wouldn't be scrutinized if some company failed to disclose seemingly benign (blue vs. green) A/B tests.

I'm not a lawyer though.


I am also not a lawyer, but I spend a great deal of time reading about these things and how they're handled in the courts.

Ambiguity in law is handled in at least two different ways: in criminal matters, ambiguity is read in the most favorable light for the defendant; in regulatory matters, the interpretation adopted by the regulatory agency responsible for the law's implementation is considered binding so long as it is a "permissible construction" of the statute. The latter is commonly known as the "Chevron doctrine".[0]

The long and short of it is that this bill, if enacted, will mean whatever the Executive Branch says it means. If it's particularly egregious, then their interpretation will be challenged in court and perhaps eventually trimmed down a bit.

0: https://en.wikipedia.org/wiki/Chevron_U.S.A.,_Inc._v._Natura....


At least this has a reasonable lower limit, so it doesn't screw over small platforms.


A simple disclosure, akin to the GDPR disclosure, doesn't seem unreasonable. Meatspace psychological tests would require informed consent and IRB approval. I agree that a more nuanced definition of what requires disclosure would be preferable (e.g. link tracking and testing button colors seem benign, messing with people's moods via different content algorithms is not).


GDPR disclosure is atrocious. We literally legislated popups back into websites and effectively mandated that they exist.

I'd rather they not ask me anything and just assume consent. It's not like EU legislation is going to stop a Chinese company anyway.


Assume consent...for what, exactly? To note which links on their site you clicked, and which you ignored - OK, that may be reasonable. To share/sell that information to a multitude of other businesses, most of which you've never heard of? I'm not so sure. To follow your activity across the rest of the internet for the following month, and sell access to that data? No thanks.


When you see a popup about cookies on a site, what do you do? Read the agreement, close the site, or just click "Allow"? The vast majority of people simply click "Allow".

But even for the people who want to read the agreement, it would be much better if this were implemented as a browser feature, giving users control and consistency, instead of different popups on each site.


If I get a popup just saying that a site uses cookies, I may well allow it (knowing that my browser will clear the cookies when I close my incognito session, perhaps).

If I get a popup listing various kinds of data collection that the site wants to do, and lists of "trusted partners" it will be shared with, etc., I generally refuse everything except "essential". If the site's idea of what is "essential" sounds excessive compared to the use I expect to make of it (just how much tracking is reasonably required in order to read an article?), I simply won't use it.

And if it makes the process of refusing consent particularly opaque or cumbersome (in violation of GDPR requirements), I certainly won't trust or use the site at all (I'm looking at you, Oath...)


Does this apply to telecoms too, I wonder?


"Disclose" here should not mean opening a popup; it should mean a separate page where the user can see what data has been collected and what the algorithm decided based on that data.

Similarly, GDPR should have been about a requirement to provide a page listing all of the user's data, not about endless popups on every site asking to accept cookies. If I use a browser that supports cookies, then I already accept them.
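
Something like this shape of per-user disclosure page, say (a hypothetical sketch; all field names and values are invented for illustration):

    def disclosure_page(user_id):
        # Hypothetical per-user disclosure: what was collected, which experiments
        # the user was enrolled in, and what the algorithms decided based on it.
        return {
            "user": user_id,
            "data_collected": ["links_clicked", "search_history", "session_duration"],
            "experiments_enrolled": ["submit_button_color_test"],
            "algorithmic_decisions": [
                {"decision": "feed ranking", "based_on": ["links_clicked", "session_duration"]},
            ],
        }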


I wonder why people disagree with this. Do they think that a popup is better than a settings page where the user can see what data was gathered and how it was used? Or did my point about accepting cookies being a browser feature seem to some like an attack on privacy in general?



