Why You Should Care About the Recent Facebook Study in PNAS

The extremely well-respected Proceedings of the National Academy of Sciences (PNAS) has just published a paper that is causing some controversy in the scientific world. Volume 111, no. 24, contains the paper “Experimental evidence of massive-scale emotional contagion through social networks” by Kramer, Guillory and Hancock. The study was designed to evaluate whether changing what a user sees on Facebook would affect their mood: in other words, if I fill your feed with sad and nasty stuff, do you get sadder? There are many ways that this could be measured passively, by looking at what people had seen historically and what they then did, but that’s not the approach the researchers took. This paper would be fairly unremarkable in terms of what it sets out to do, except that the human beings who were experimented upon, over 600,000 of them, were chosen from Facebook’s citizenry – and were never explicitly notified that they were being experimented on, nor given the opportunity to give informed consent.

We have a pretty shocking record, as a scientific community, regarding informed consent for a variety of experiments (Tuskegee springs to mind – don’t read that link on a full stomach). We now have pretty strict guidelines for human experimentation, almost all of which revolve around the notion of informed consent: a participant is fully aware that they are being experimented upon, knows what is going to happen and, more importantly, knows how they could get it to stop.

So how did a large group of people that didn’t know they were being experimented upon become subjects? They used Facebook.

Facebook is pointing to some words in their Terms of Service and arguing, in effect, that telling users their data may be used for research is enough to justify experimenting with their mood.

None of the users who were part of the experiment have been notified. Anyone who uses the platform consents to be part of these types of studies when they check “yes” on the Data Use Policy that is necessary to use the service.

Facebook users consent to have their private information used “for internal operations, including troubleshooting, data analysis, testing, research and service improvement.” The company said no one’s privacy has been violated because researchers were never exposed to the content of the messages, which were rated in terms of positivity and negativity by a software algorithm programmed to read word choices and tone.

(http://www.rawstory.com/rs/2014/06/28/facebook-may-have-experimented-with-controlling-your-emotions-without-telling-you/)
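
The quote above describes posts being rated as positive or negative purely from word choice by software, and the experiment then withheld a share of one kind of emotional post from a user’s feed. As a rough illustration only, here is a minimal Python sketch of that idea; the word lists, function names and omission rate are all invented for this post and are not Facebook’s code or the study’s parameters (the study itself relied on the much larger LIWC word lists). A post counts as “positive” or “negative” if it contains even one matching word, which is roughly how the paper describes its classification.

```python
import random

# Toy word lists standing in for the real lexicon; the actual study used the
# far larger LIWC word lists. Everything here is illustrative, not real code.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible", "lonely"}

def classify_post(text):
    """Label a post 'positive', 'negative' or 'neutral' from word choice alone."""
    words = set(text.lower().split())
    if words & POSITIVE_WORDS:
        return "positive"
    if words & NEGATIVE_WORDS:
        return "negative"
    return "neutral"

def filter_feed(posts, suppress, omission_rate=0.3, seed=None):
    """Return a feed with some `suppress`-class posts withheld at random.

    `suppress` is 'positive' or 'negative'; `omission_rate` is the chance a
    matching post is dropped. Both values are invented for illustration.
    """
    rng = random.Random(seed)
    return [
        post for post in posts
        if classify_post(post) != suppress or rng.random() > omission_rate
    ]

# A user in a "negativity-reduced" condition simply sees fewer sad posts.
feed = [
    "I love this wonderful weather",
    "Feeling sad and lonely today",
    "Meeting notes attached",
]
print(filter_feed(feed, suppress="negative", seed=1))
```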

Now, the effect size reported in the paper is very small, but the researchers note that their experiment worked: they were able to change a person’s mood up or down, or produce a withdrawal effect, through manipulation. To be fair to the researchers and PNAS, apparently an IRB (Institutional Review Board) at a university signed off on this as being ethical research based on the existing Terms of Service. An IRB exists to make sure that researchers are behaving ethically and, based on the level of risk involved, to approve the research or withhold approval. Basically, you can’t use or publish research in academia that involves human or animal experimentation unless it has pre-existing ethics approval.
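
To give a sense of what “very small” means here, the sketch below shows how a standardised effect size (Cohen’s d) is computed. Every number in it is made up for illustration and none comes from the paper; the point is simply that with hundreds of thousands of participants, a difference of a few hundredths of a standard deviation can be statistically significant while remaining practically negligible.

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Standardised mean difference between two groups, using a pooled SD."""
    pooled_sd = math.sqrt(
        ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    )
    return (mean_a - mean_b) / pooled_sd

# Invented figures: percentage of positive words per post for a control group
# versus a "negativity-reduced" group of similar (large) size.
d = cohens_d(mean_a=5.30, mean_b=5.24, sd_a=6.0, sd_b=6.0,
             n_a=150_000, n_b=150_000)
print(f"Cohen's d ≈ {d:.3f}")  # ≈ 0.010: detectable at this scale, tiny in practice
```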

But let’s look at the situation. No-one knew that their mood was being manipulated up – or down. The researchers state this explicitly in their statement of significance:

…leading people to experience the same emotions without their awareness. (emphasis mine)

No-one could opt-out unless they decided to stop using Facebook but, and this is very important, they didn’t know that they had anything to opt out from! Basically, I don’t believe that I would have a snowball’s chance on a hot day of getting this past my ethics board and, I hasten to add, I strongly believe that I shouldn’t. This is unethical.

But what about the results? Given that we have some very valuable science from some very ugly acts (including the HeLa cell line, of course), can we cling to the scoundrel’s retreat that the end justified the means? Well, in a word, no. The effect seen by the researchers is there, but it’s really, really small. The techniques that they used are also mildly questionable given the size of the average Facebook post. It’s not great science. It’s terrible ethics. It shouldn’t have been done and it really shouldn’t have been published.

By publishing this, PNAS are setting a very unpleasant precedent for the future: that we can perform psychological manipulation on anyone if we hide the word ‘research’ somewhere in an agreement that they sign and we make a habit of manipulating their data stream anyway. As an Associate Editor for a respectable but far less august journal, I can tell you that my first impression on seeing this would be to check with my editor and then suggest that we flick it back, as it’s of questionable value and it’s not sufficiently ethical to meet our standards.

So why should you care? I know that a number of you reading this will shrug and say “What’s the big deal?”

Let me draw an analogy to explain the problem. Let’s say Facebook is like the traffic system: lots of cars (messages) have to get from one place to another and are controlled using traffic lights (FB’s filtering algorithms). Let’s also suppose that on a bad day’s drive, you get frustrated, which shows up in you speeding a little, tailgating and braking late because you’re in a hurry.

Now, the traffic light company wants to work out if it can change your driving style by selecting you at random and altering the lights so that you always get red lights, you get rerouted through the town sewage plant and you get jammed on the bridge for an hour. During this time, a week, you get more and more frustrated, and Facebook solemnly note that your driving got worse as you got more frustrated. Then the week is over – and magically your frustration disappears because you know it’s over? No. Because you didn’t know what was going on, you didn’t get the right to say “I’m really depressed right now, don’t do this” and you also didn’t get the right to say “Ahh – I’ve had enough. Get me out!”

You have a reasonable expectation that, despite red-light cameras and traffic systems monitoring you non-stop, your journey on a road will not change because of who you are, and it most definitely won’t be made unfair just to make you feel bad. You won’t end up driving less safely because someone wondered if they could make you do it. Facebook are, yes, giving away their service for free, but this does not give them the right to mess with people’s minds. What they can do is look at their data to see what happens from the historical record – given the size of their user base, I find it hard to see how they don’t already have enough records to put this study together ethically. In fact, if they can’t put this together from their historical record, then their defence that this was “business as usual” falls apart immediately. If it was the same as any other day, they would have had the data already, just from the sheer number of daily transactions.
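
For what it’s worth, the passive approach argued for above is not hard to sketch. The snippet below (again Python, with invented names and toy word lists) correlates the emotional tone of what users happened to see with the tone of what they then posted, using nothing but records that already exist. It can only show correlation rather than causation, but it manipulates nobody.

```python
from statistics import correlation  # available from Python 3.10

# Toy lexicon again; a real analysis would use a full sentiment lexicon.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

def tone(posts):
    """Positive-minus-negative share of a batch of posts, in [-1, 1]."""
    if not posts:
        return 0.0
    pos = sum(bool(set(p.lower().split()) & POSITIVE_WORDS) for p in posts)
    neg = sum(bool(set(p.lower().split()) & NEGATIVE_WORDS) for p in posts)
    return (pos - neg) / len(posts)

def observational_estimate(history):
    """Correlate the tone users saw with the tone they then posted.

    `history` is a list of (posts_seen, posts_written) pairs pulled from
    existing logs; nothing in anyone's feed is changed.
    """
    seen = [tone(seen_posts) for seen_posts, _ in history]
    written = [tone(written_posts) for _, written_posts in history]
    return correlation(seen, written)
```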

The big deal is that Facebook messed with people without taking into account whether those people were in a state to be messed with – in order to run a study that, ultimately, will probably be used to sell advertising. This is both unethical and immoral.

But there are two groups at fault here. That study shouldn’t have run, and it should never have been published, because the ethical approval was obviously not right. Even if PNAS were determined to publish it, I believe it should have been accompanied by a very long discussion of the ethics involved. It’s neither scientific nor ethical enough to be in the record.

Someone speculated over lunch today that this is the real study: the spread of outrage across the Internet. I doubt it, but who knows? They obviously have no issue with mucking around with people, so I guess anything goes. There’s an old saying, “Just because you can, doesn’t mean you should”, and it’s about time that people with their hands on a lot of data worked out that they may have to treat people’s data with more decency and respect if they want to stay in the data business.


2 Comments on “Why You Should Care About the Recent Facebook Study in PNAS”

  1. Bill Barth says:

    You might find this new post of James Grimmelmann’s helpful in keeping track of this issue: laboratorium.net/archive/2014/06/30/the_facebook_emotional_manipulation_study_source.

    Also, I think I made another comment here, but maybe WP ate it.

    • nickfalkner says:

      Thanks, Bill! Very useful indeed. I edited your post to turn the text into a hyperlink, which didn’t happen for some reason. Please feel free to change it back if that wasn’t your intention.
