Evidence of massive-scale emotional contagion through social networks (pnas.org)
100 points by danso on June 27, 2014 | hide | past | favorite | 36 comments


This study has an enormous sample size, but an incredibly small effect. They're measuring effects of less than 0.1%. Admittedly, they have the statistical power to do this, but it's not a very impressive result. "Reading Others' Emotional Posts on Social Networks Can Very, Very Slightly Affect Your Mood" doesn't sound as cool.


A "small" effect size doesn't make it any less interesting or less impressive. If anything it underlines how even the most trivial human interactions can make a meaningful impact, positively or negatively.


Huh? A small effect size is the opposite of a meaningful impact.


No, the effect size is just a statistical metric; it isn't what determines whether something is meaningful. In this case, I think it's clearly meaningful if humans have the power to affect other humans' inner states non-verbally. For example, if a single smile at a stranger makes their inner state more positive, that's impactful, and they in turn have the same power for their emotional state to affect others. A trivial encounter can, as such, make a significant positive or negative impact. This study shows that the effect is robust in both directions.

Effect size tends to get smaller as sample size grows in any kind of psychological/sociological study. This study had an N of over half a million; most studies use N < 1000. In other words, the effect size is impressive despite looking small.

Side note: It's interesting that paranormal researchers often get criticized for finding small effect sizes in their studies. (Part of that is thinking the result is more likely due to chance or noise.) Yet if you believe the effect to be real but small, it would still have paradigm-changing consequences for how we view the way brains work, a radical departure from what mainstream science currently believes. It's another example of why a small effect size can be incredibly interesting, course-altering even.


I can't say I follow you entirely, but you seem to be confusing "effect size" with p-values. At least, that's what I conclude from "Effect size tends to get smaller the bigger the sample size for any kind of psychological/sociological study".

Effect size just refers to the size of the effect. So if you do an intervention on some children and it leads to them being taller as adults than a control group of children by an average of four inches, you've got an effect size of "four inches". It's difficult to see how this can be described as "just a statistical metric"; for one thing, it has a dimension (here, inches). The effect size tells you what to expect from something.

A p-value is an answer to the statistical question "assuming the true effect size is zero, how likely is it that data at least this extreme would occur by chance?" With p-values, smaller is better. These do in fact get smaller with increasing sample size, whereas an effect size that diminishes with increasing sample size is solid evidence that the effect is not real.
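To make the distinction concrete, here's a quick simulation (all numbers hypothetical, not from the study): a fixed true effect of 0.1 standard deviations, measured at increasing N. The p-value collapses toward zero, while the estimated effect size just hovers near its true value rather than shrinking with N:

```python
import math
import random

random.seed(0)
TRUE_EFFECT = 0.1  # true difference between group means, in SD units

def one_study(n):
    """Two-sample study: returns (estimated effect size, approx. p-value)."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)]
    diff = sum(treated) / n - sum(control) / n
    se = math.sqrt(2.0 / n)              # SE of difference of means (sigma = 1)
    z = diff / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return diff, p

for n in (100, 10_000, 500_000):
    effect, p = one_study(n)
    print(f"N = {n:>7}: estimated effect = {effect:+.3f}, p = {p:.3g}")
```

Same true effect throughout; only the confidence changes.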


> These do in fact get smaller with increasing sample size, whereas an effect size that diminishes with increasing sample size is solid evidence that the effect is not real

The realness of an effect is not judged by the effect size alone; a lower p and/or a higher N give you more confidence in the result. You can run most accepted psychological effects done with smaller sample sizes over a much greater sample size and expect to see a smaller effect size. If the effect size dissipates and the p-value increases, that's where you have the biggest problem. But if your confidence grows in proportion to the shrinking of the effect size, you should still have greater trust in the realness of the effect.

The core of my point, though, is that looking at the effect size and concluding the effect is not meaningful distorts things a bit here. A small number computed over a large data set does not capture the actual meaningfulness of the impact the effect has on a single person-to-person interaction.


> You can run most accepted psychological effects done with smaller sample sizes over a much greater sample size and expect to see a smaller effect size

This is because, not to put too fine a point on it, the effects aren't real. The statistical power of a study places a lower limit on the effect size the study is capable of reporting (for a given p-value threshold). So underpowered studies report absurdly high effect sizes (for whatever phenomenon is being studied) because they're not capable of showing statistical significance for smaller effect sizes. Andrew Gelman likes to write about this.

The upshot is that, if an effect isn't real, a paper demonstrating statistical significance (traditionally, p < 0.05) will tend to report an effect size close to the minimum it can -- since the true effect size is small, you're much more likely to find data showing a large effect size than a gargantuan effect size. Then, obviously, a study with larger N will show a smaller effect size still, because it has the power to show p < 0.05 at that smaller effect size.

But the true effect size doesn't change from experiment to experiment -- the true effect size is the effect size you would measure at N = ∞. There is no reason to expect the effect size to diminish in the face of a larger study unless you expected that it was smaller than originally reported. I could measure the extra time required for an apple to hit the ground when released from 9 feet up instead of 4 feet up. The effect size, measured in seconds, will not steadily diminish as I measure more and more apples; it will stabilize at a quarter of a second.
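For what it's worth, the apple arithmetic checks out: with the standard g = 32 ft/s² approximation, fall time from height h is t = sqrt(2h/g), so the discrepancy between a 9 ft drop and a 4 ft drop is exactly a quarter second:

```python
import math

G = 32.0  # ft/s^2, the standard approximation

def fall_time(height_ft):
    """Time (seconds) for an object to free-fall height_ft feet."""
    return math.sqrt(2.0 * height_ft / G)

extra = fall_time(9) - fall_time(4)
print(extra)  # 0.75 s - 0.50 s = 0.25 s
```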

As your parent comment points out, this study, with its very large sample size, has the statistical power to report a small effect size, which is what it's doing. However, a very small effect size is just another way of saying "this effect, while just barely large enough to measure, is much too small to matter to anyone".


I think you continue to miss both my points. If you're measuring apples falling to the ground, of course you wouldn't expect the effect size to diminish with higher N. But social/psychological studies are not like a physics study; you have far less control over the variables. This is especially true with a large N, where the environment is noisier and more variables coalesce; typically, you get less precision in what you can say about the individual data points affected.

I wouldn't argue that a larger effect size here wouldn't be more impressive; of course it would. I'm just saying that a small effect size for a study of this kind does not diminish its meaningfulness, and that it's to be expected for these kinds of studies. There's an effect that we're very confident is real, works in both directions, and has real-world implications.


A noisier environment doesn't mean you expect smaller effects. It means your measurement is unstable. This problem also occurs in physics; using the standard approximation of gravity of 32 feet per second per second, the extra time required to fall 9 feet instead of 4 feet is exactly 1/4 second. Should you actually try the experiment, you'll quickly notice that your measured time varies from attempt to attempt. There is an office of the government which (among other duties) measures the weight of a coin (the same physical coin) every day, and records the result. Some days are anomalous. There's variation every day.

What larger N does is enable you to see past the noise. With a large sample, the effect of the noise in your measurements diminishes to zero, letting you estimate the effect you're looking for more accurately. So over 200,000 apple drops, I should see an average fall time discrepancy very close to 0.25 seconds; whereas with 2 apple drops, I might for whatever reason measure the time discrepancy as 2/3 second. The 0.7 seconds estimate is way off because of small N.
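That averaging-out is easy to simulate (the 0.3 s noise SD below is an arbitrary assumption, just for illustration): each measurement is the true 0.25 s discrepancy plus zero-mean noise, and while two drops can land far from the truth, the mean over 200,000 drops sits right on it:

```python
import random

random.seed(1)
TRUE_DISCREPANCY = 0.25  # seconds, the true effect
NOISE_SD = 0.3           # assumed per-measurement noise, seconds

def mean_discrepancy(n_drops):
    """Average of n_drops noisy measurements of the true discrepancy."""
    samples = [random.gauss(TRUE_DISCREPANCY, NOISE_SD) for _ in range(n_drops)]
    return sum(samples) / n_drops

print(mean_discrepancy(2))        # can be wildly off
print(mean_discrepancy(200_000))  # very close to the true 0.25 s
```

Larger N doesn't shrink the effect; it shrinks the error bars around it.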

If, as you work with larger and larger sample sizes, the effect you're measuring recedes steadily to zero, the obvious conclusion is that it's all noise.

However! We started this by talking about a different thing entirely. You say this:

> There's an effect that we're very confident is real, works in both directions, and has real-world implications.

This study has immense statistical power and a minuscule effect size. The immense statistical power means, yes, "that we're very confident [the effect] is real". That's measured (from a traditional perspective) by the p-value.

The effect size measures the real-world implications. A very small effect size means that the real-world implications are likewise very small.

As a toy example, suppose I do a study finding that feeding children between the ages of 4 and 7 meat with bones in it vs meat without bones increases their height as adults by three feet (p < 0.9). The real-world implications are huge. Our confidence in the study is low.


The physics measurements are interesting for different reasons. Those are measuring very objective things; even if the measurements vary, they vary in ways we can conceivably calculate. Some physicists even raise the idea that certain constants aren't that constant.

But it's still vastly different to appreciate statistical data coming from those kinds of experiments versus those that touch on psychology and social effects. Your height/nutrition example is convenient because we can all appreciate an effect expressed in objective units, such as inches, that we can see with our eyes. It's much harder to weigh the effect that, say, emotional states have in pure numbers.

I could continue this discussion endlessly, probably not going to get anywhere with it.


> As such, it was consistent with Facebook's Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research

Wait, what?


There's a related test that is far more basic, and more directly relevant to Facebook's bottom line, that must have been studied at some point: does seeing good/negative things in the newsfeed affect a user's re-visit behavior? If I had to take a cynical guess, I would guess that negativity, and even tragedy, would draw people in more...for the same reason that conflict makes both fiction and non-fiction media interesting.

I'm also going to cynically guess that Facebook has probably studied this but probably doesn't find it in their best interest to reveal the optimal mix of emotion.

Although positivity/negativity may be a minor factor compared to the other factors that influence what shows up on your feed...for me, my feed seems to be heavily weighted toward: "the last 20 people you've interacted with on Facebook"


> for me, my feed seems to be heavily weighted toward: "the last 20 people you've interacted with on Facebook"

Lucky. For me it's more like "the last 34543532514543254 brands you happened to glance at, oh and you have like 3 friends who sometimes post stuff too"

Except for profile picture changes. You see every single bloody profile picture change. Every single one of them.


Breaking news: human culture spreads through social contact.

It's almost like we're social animals or something. Although granted, it's very interesting that with modern distribution methods the effects can span continents rather than villages.


You'd say the 'social' in social networks would give the game away.


It's a shame that a reputable journal accepted the bogus claim of "informed consent". Not a big deal in an observational study like this, but sets a horrible precedent for EULA-based fraudulent claims of informed consent.


PNAS' policies[1] say that experiments on humans must go through review by an ethics board before they will publish them, so it's especially strange that PNAS published this without any apparent review (none is mentioned in the paper). It's not up to the researchers to decide whether informed consent was obtained, it's up to the ethics board.

[1]: http://www.pnas.org/site/authors/journal.xhtml (vii)


Having submitted to that journal before, I suspect the authors just ticked a check box that said something like "Relevant ethics approval was obtained for this paper".


Where do you complain, though? Do you write a stern letter to the editor?


Oh wow, what if someone in the negatively influenced group committed suicide? It would be unlikely that the tinkering actually caused the suicide, but the media would probably jump to this conclusion quickly.


Or flip it around - what if the next time Facebook sees you post 5 negative things in a row, it decides to positively skew what it shows you? i.e. trying to cheer up depressed people and stop suicide. It's creepy, eh?


Both sides are really creepy. A possible benefit of this experiment is gaining more insight into detecting whether people are happy or sad. Advertisers might soon be able to adapt their ads to a user's mood, thereby reinforcing the artificial connection between the company and the user. Users will feel like the brand really understands them.


That's pretty damn creepy too.


It seems far more plausible that people are picking up behavioral norms than being emotionally influenced. If none of your friends ever complain on Facebook, are you going to be the one "grump"? I didn't read the study, admittedly, but this feels off. I thought it was axiomatic by now that what people post to Facebook is 90% about how people want to be perceived, not who they really are.


Sure, the reason why people replicate each other's sentiments is often to be normative. The interesting part about publicly expressed sentiments though is that they tend to be subconsciously adopted by those expressing them even in the case of initial insincerity. This is essentially Cialdini's consistency principle.


I think that heuristic attributes more foresight to people than they actually exercise in practice. It's more like the impressions one anticipates making on others create incentives and disincentives that shape behavior...but this applies offline as well. Even IRL, everyone's behavior tends to be bounded by "how people want to be perceived, not who they really are."


The same probably applies to TV, radio, newspapers, novels, and plays. So I don't think anyone should panic.


I've read about studies where just seeing someone else smile and be happy makes others who see it also happier. Like a happy person gets on a bus full of less than happy people and the entire bus ends up happier. I don't think this is really that new, except maybe to the degree it scales the distribution. I imagine words create a smaller impact than a face to face encounter, but I'll leave that to a study to figure out.


The person who mentored me when I first became a college instructor mentioned the importance of appearing warm and engaging on the first day to remove anxiety of the new students. There's probably some truth to that "just seeing someone else smile" thing.

:)


The study basically concludes that reading sad stuff makes one sad, which in turn makes one post stuff that is also sad. The unique thing about it is that one is both a reader and an author. Otherwise, movie and TV studios have been testing and tweaking movies for a particular emotional effect since forever.


Although perhaps more important is that constant exposure to an emotional bias can instill that bias in you. There was a much less credible study of whether watching Fox News made you more conservative (the unspoken threat being that watching any consistently biased narrative would align you with that narrative), but the Facebook study has better statistics.

So one wonders: if all the kids' programming is consistently biased toward some topic, will it bias that generation? Sort of a weird inception-style hypothesis.

For me it just confirmed my own experience that being around people who are sad makes me feel sad (not sure if I'm sad for them or for me, but whatever), and so being optimistic around people might keep them going. But again, not a lot of science there.


We believe this is a significant finding. The contagion effect should be looked at seriously by organisations/brands that engage their (prospective) customers heavily through social media channels for marketing, campaigns and reputation management. A number of organisations today are setting up dedicated customer care handles on FB, Twitter, G+ and so on.

Social media is increasingly transferring power to the end users/customers. Perhaps it's high time for organisations to pay attention to whether their business creates a positive or a negative contagion effect on FB and other social channels.

The contagion effect also means we probably have more work to do on the social media monitoring and analysis front going forward!


Sometimes one can't help but wonder if the long term impact of everyone being connected, all the time, is not going to be one planet-scale hive mind!


Hodos ano kato ("the way up and down" -- Heraclitus).


Wanna see emotional contagion? Try /b/.


Good old P-nas



