Publish and be damned, be silent and be ignored.

I’m working on a longer piece on how student interaction on electronic discussion forums suffers from the same problems of tone as any on-line forum. Once people decide that the way they wish to communicate is the de facto standard for all discussion, non-conformity becomes a weakness, indicative of bad faith or poor argument. But tone is a difficult thing to discuss because the perceived tone of a piece lies in the hands of both the reader and the writer.

A friend and colleague recently asked me for some advice about blogging and I think I’ve now done enough of it to offer something reasonable. The most important thing I said at the time was that it’s important to get stuff out there. You can write into a blog and keep it private, but then no-one reads it. You can tweak away at it until it’s perfect but, much like a PhD thesis, perfect is the enemy of done. Instead of setting a lower bound on your word count, set an upper bound at which point you say “Ok, done, publish” and get your work out there. If your words are informed, authentic and as honest as you can make them, then you’ll probably get some interesting and useful feedback.

But…

But there’s that tone argument again. The first thing you have to accept is that making any public statement has always attracted the attention of people – that’s the point, really – and that the nature of the Internet means that you don’t need to walk into a park and stand at Speakers’ Corner to find hecklers. The hecklers will find you. So if you publish, you risk being damned. If you’re silent, you have no voice. If you’re feeling nervous about publishing in the first place, how do you deal with this?

Let me first expose my thinking process. This is not an easy week for me as I think about what I do next, having deliberately stepped back to think and plan for the next decade or so. At the same time, I’m sick (our whole household is sick at the moment), very tired and have just come off some travel. And I have hit a coincidental barrage of on-line criticism, some of which is useful, developing critique that I welcome, and some of which is people just being… people. So this is very dear to my heart right now – why should I keep writing stuff if the outcome risks being unpleasant? I have other ways to make change.

Well, you should publish but you just need to accept that people will react to you publishing – sometimes well, sometimes badly. That’s why you publish, after all, isn’t it?

Let’s establish the ground truth – there is no statement you can make on the Internet that is immune to criticism, but not all criticism is valid or useful. Let’s go through what can happen, although this is only a subset.

  1. “I like sprouts”

    Facebook is the land of simple statements and many people talk about things that they like. “I like sprouts” critics find statements like this in order to express their incredulity that anyone could possibly enjoy Brussels sprouts (“ugh, they’re disgusting”). The opposite is, of course, the people who show up on the “I hate sprouts” discussions to say “WHY DON’T YOU LOVE SPROUTS”? (For the record, I love Brussels sprouts.)

    A statement of personal preference for something as banal as food is not actually a question but it’s amazing how challenging such a statement can be. If you mention animals of any kind, there’s always the risk of animal production/consumption coming up because no opinion on the Internet is seen outside of the intersection of the perception of reader and writer. A statement about fluffy bunnies can lead to arguments about the cosmetics industry. Goodness help you if you try something that is actually controversial. Wherever you write it, if someone has an opinion that contradicts yours, discussion of both good and questionable worth can ensue.

    (Like the fact that Jon Pertwee is the best Doctor.)

    It’s worth noting that there are now people who are itching to go to the comments to discuss either Brussels Sprouts or Tom Baker/David Tennant or “Tom Baker/David Tennant”. This is why our species is doomed and I am the herald of the machine God. 01010010010010010101001101000101

  2. “I support/am opposed to racism/sexism/religious discrimination”

    It doesn’t matter which way around you make these statements, if a reader perceives it as a challenge (due to its visibility or because they’ve stumbled across it), then you will get critical, and potentially offensive, comment. I am on the “opposed to” side, as regular readers will know, but have been astounded by the number of times I’ve had people argue things about this. Nothing is ever settled on the Internet because sound evidence often doesn’t propagate as well as anecdote and drama.

    Our readership bubbles are often wider than we think. If you’re publishing on WordPress then pretty much anyone can read it. If you’re publishing on Facebook then you may get Friends and their Friends and the Friends of people you link… and so on. There are many fringe Friends on Facebook who will leap into the fray here because they are heavily invested in maintaining what they see as the status quo.

    In short, there is never a ‘safe’ answer when you come down on either side of a controversial argument, but neutrality conveys very little. (There’s also the fact that some issues simply have no middle ground – you can’t be slightly in favour of universal equality.)

    We also sail from “that’s not the real issue, THIS is the real issue” with great ease in this area of argument. You do not know the people who read your stuff until you have posted something that has hit all of the buttons on their agenda elevators. (And, yes, we all have them. Mine has many buttons.)

  3. Here is my amazingly pithy argument in support of something important.

    And here is the comment that:
    Takes something out of context.
    Misinterprets the thrust.
    Trivialises the issue.
    Makes a pedantic correction.
    Makes an unnecessary (and/or unpleasant) joke.
    Clearly indicates that the critic stopped reading after two lines.
    Picks a fight (possibly because of a lingering sprouts issue).

    When you publish with comments on, and I strongly suggest that you do, you are asking people to engage with you, but you are not asking them to bully you, harass you or hijack your thread. Misinterpretation, and the correction thereof, can be a powerful tool to drive understanding. Bad jokes offer an opportunity to talk about the jokes and why they’re still being made. But a lot of what appears here is tone policing, trying to make you regret posting. If you posted something that’s plain wrong or hurtful, or your thrust was off (see later), then correction is good but, most of the time, this is tone policing, which you will often know better as bullying. Comments to improve understanding are good; comments to make people feel bad for being so stupid/cruel/whatever are bullying, even if the target is an execrable human being. And, yes, it’s a very easy trap to fall into, especially when buoyed up by self-righteousness. I’ve certainly done it, I deeply regret the times that I did, and I try to keep an eye out for it now.

    People love making jokes, especially on FB, and it can be hard for them to realise that this knee-jerk response can be quite hurtful to some posters. I’m a gruff middle-aged man so my filter for this is good (and I just mentally tune people out or block them if that’s their major contribution), but I’ve been regularly stunned by people who think that posting something jokey rather than supportive, in response to someone sharing a thought or a vulnerability, is the best thing to do. If it derails the comments then, hooray, the commenter has undermined the entire point of making the post.

    Many sites have now automatically blocked or warped comments that rush in to be the “First” to post because it’s dumb. And now, even more tragically, at least one person is fighting the urge to prove my point by writing “First” underneath here as a joke. Because that’s the most important thing to take away from this.

  4. Here is a slightly silly article using humour to make a point or using analogy to illustrate an argument.

    And here are the comments about this article failing because of some explicit extension of the analogy that is obviously not what was intended, or here is the comment that interprets the humour as trivialising the issue at hand or, worse, as indicating that the writer has secret ulterior motives.

    Writers communicate. If dry facts, by themselves, aligned one after the other in books, educated people, then humanity would have taken the great leap forward after the first set of clay tablets dried. Instead, we need frameworks for communication and mechanisms to facilitate understanding. Some things are probably beyond humorous intervention. I tried recently to write a comedic piece on current affairs and realised I couldn’t satirise a known racist without repeating at least some racial slurs – so I chose not to. But a piece like this, where I want to talk about some serious things without being too didactic? I think humour is fine.

    The problem is whether people think that you’re laughing at someone, especially them. Everyone personalises what they read – I imagine half of the people reading this think I’m talking directly to them, when I’m not. I’m condensing a billion raindrops to show you what can break a dam.

    Analogies are always tricky but they’re not supposed to be 1-1 matches for reality. Like all models, they are incomplete and fail outside of the points of matching. Combining humour and analogy is a really good way to lose some readers so you’ll get a lot of comments on this.

  5. Here is the piece where I got it totally and utterly wrong.

    You are going to get it wrong sometime. You’ll post while angry, or not have thought of something, or use a bad source, or just have a bad day, and you will post something that you will ultimately regret. This is the point at which it’s hardest to wade through the comments because, in between the tone policers, the literalists, the sproutists, the pedants, the racists, TIMECUBE, and spammers, you’re going to have to read comments from people where they delicately but effectively tell you that you’ve made a mistake.

    But that is why we publish. Because we want people to engage with our writing and thoughtful criticism tells us that people are thinking about what we write.

The curse of the Internet is that people tend only to invest real energy in comment when they’re upset. Facebook have captured this with the Like button, where ‘yay’ is a click and “OH MY GOD, YOU FILTHY SOMETHINGIST” requires typing. Similarly, once you start writing and publishing, have a look at those people who are also creating and contributing, and those people who only pop up to make comments along the lines I’ve outlined. There are many powerful and effective critics in the world (and I like to discuss things as much as the next person) but the reach and power of the Internet means that there are also a lot of people who derive pleasure from sailing in to make comment when they have no intention of stating their own views or purpose in any way that exposes them.

Some pieces are written in a way that no discussion can be entered into safely, leaving commenters no real room to have a discussion around them. That’s always your choice, but if you do it, why not turn the comments off? There’s no problem with having a clearly stated manifesto that succinctly captures your beliefs – people who disagree can write their own – but it’s best to clearly advertise that something is beyond casual “comment-based” discussion, to avoid giving the impression that you might be open to it.

I’ve left the comments open, let’s see what happens!


Why You Should Care About the Recent Facebook Study in PNAS


The extremely well-respected Proceedings of the National Academy of Sciences (PNAS) has just published a paper that is causing some controversy in the scientific world. Volume 111, no 24, contains the paper “Experimental evidence of massive-scale emotional contagion through social networks” by Kramer, Guillory and Hancock. The study itself was designed to evaluate whether changing the view of Facebook that a user had would affect their mood: in other words, if I fill your feed with sad and nasty stuff, do you get sadder? There are many ways that this could be measured passively, by looking at what people had seen historically and what they then did, but that’s not the approach the researchers took. This paper would be fairly unremarkable in terms of what it sets out, except that the human beings who were experimented upon, over 600,000 of them, were chosen from Facebook’s citizenry – and were never explicitly notified that they were being experimented on, nor given the opportunity to give informed consent.

We have a pretty shocking record, as a scientific community, regarding informed consent for a variety of experiments (Tuskegee springs to mind – don’t read that link on a full stomach) and we now have pretty strict guidelines for human experimentation, almost all of which revolve around the notion of informed consent, where a participant is fully aware that they are being experimented upon, what is going to happen and, more importantly, how they could get it to stop.

So how did a large group of people that didn’t know they were being experimented upon become subjects? They used Facebook.

Facebook is pointing to some words in their Terms of Service and arguing along the lines that indicating that your data may be used for research is enough to justify experimenting with your mood.

None of the users who were part of the experiment have been notified. Anyone who uses the platform consents to be part of these types of studies when they check “yes” on the Data Use Policy that is necessary to use the service.

Facebook users consent to have their private information used “for internal operations, including troubleshooting, data analysis, testing, research and service improvement.” The company said no one’s privacy has been violated because researchers were never exposed to the content of the messages, which were rated in terms of positivity and negativity by a software algorithm programmed to read word choices and tone.

(http://www.rawstory.com/rs/2014/06/28/facebook-may-have-experimented-with-controlling-your-emotions-without-telling-you/)
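For readers wondering what “rated in terms of positivity and negativity by a software algorithm programmed to read word choices” means in practice, the usual approach is simple word counting against fixed lists of emotional words (the study reportedly used the LIWC word lists). Below is a minimal sketch of that idea; the tiny word lists and the score_post function are illustrative assumptions on my part, not the actual implementation.

```python
# Minimal sketch of word-list-based sentiment scoring, in the spirit of
# LIWC-style word counting. The word lists below are illustrative
# placeholders, not the lists used in the actual study.

POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "good"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible", "bad"}

def score_post(text: str) -> dict:
    """Return the fraction of positive and negative words in a post."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return {"positive": 0.0, "negative": 0.0}
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    return {"positive": pos / len(words), "negative": neg / len(words)}

print(score_post("I love sprouts, they are wonderful"))
# {'positive': 0.3333333333333333, 'negative': 0.0}
```

Note how coarse this is: a one- or two-line status update contains so few words that a single match swings the whole score, which is part of why I describe the technique below as questionable for posts of average Facebook length.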

Now, the effect size reported in the paper is very small, but the researchers note that their experiment worked: they were able to change a person’s mood up or down, or generate a withdrawal effect, through manipulation. To be fair to the researchers and PNAS, apparently an IRB (Institutional Review Board) at a university signed off on this as being ethical research based on the existing Terms of Service. An IRB exists to make sure that researchers are behaving ethically and, based on the level of risk involved, to approve the research or withhold approval. Basically, you can’t carry out or publish research in academia that uses human or animal experimentation unless it has pre-existing ethics approval.

But let’s look at the situation. No-one knew that their mood was being manipulated up – or down. The researchers state this explicitly in their statement of significance:

…leading people to experience the same emotions without their awareness. (emphasis mine)

No-one could opt-out unless they decided to stop using Facebook but, and this is very important, they didn’t know that they had anything to opt out from! Basically, I don’t believe that I would have a snowball’s chance on a hot day of getting this past my ethics board and, I hasten to add, I strongly believe that I shouldn’t. This is unethical.

But what about the results? Given that we have some very valuable science from some very ugly acts (including the HeLa cell line, of course), can we cling to the scoundrel’s retreat that the end justified the means? Well, in a word, no. The effect seen by the researchers is there, but it’s really, really small. The techniques that they used are actually mildly questionable in the face of the size of the average Facebook post. It’s not great science. It’s terrible ethics. It shouldn’t have been done and it really shouldn’t have been published.

By publishing this, PNAS are setting a very unpleasant precedent for the future: that we can perform psychological manipulation on anyone if we hide the word ‘research’ somewhere in an agreement that they sign and we make a habit of manipulating their data stream anyway. As an Associate Editor for a respectable but far less august journal, I can tell you that my first impression on seeing this would be to check with my editor and then suggest that we flick it back, as it’s of questionable value and it’s not sufficiently ethical to meet our standards.

So why should you care? I know that a number of you reading this will shrug and say “What’s the big deal?”

Let me draw an analogy to explain the problem. Let’s say Facebook is like the traffic system: lots of cars (messages) have to get from one place to another and are controlled using traffic lights (FB’s filtering algorithms). Let’s also suppose that, on a bad day’s drive, you get frustrated, which shows up as you speeding a little, tailgating and braking late because you’re in a hurry.

Now, the traffic light company wants to work out if it can change your driving style by selecting you at random and altering the lights so that you’re always getting red lights, you’re rerouted through the town sewage plant and you’re jammed on the bridge for an hour. During this time, a week, you get more and more frustrated, and the traffic light company (Facebook) solemnly notes that your driving got worse as you got more frustrated. Then the week is over – and does your frustration magically disappear because it’s done? No. Because you didn’t know what was going on, you didn’t get the right to say “I’m really depressed right now, don’t do this” and you also didn’t get the right to say “Ahh – I’ve had enough. Get me out!”

You have a reasonable expectation that, despite red-light cameras and traffic systems monitoring you non-stop, your journey on a road will not change because of who you are, and it most definitely won’t be made unfair just to make you feel bad. You won’t end up driving less safely because someone wondered if they could make you do it. Facebook are, yes, giving away their service for free, but this does not give them the right to mess with people’s minds. What they can do is look at their data to see what happens from the historical record – across the size of their user base, I find it hard to believe that they don’t already have enough records to put this study together ethically. In fact, if they can’t put this together from their historical record, then their defence that this was “business as usual” falls apart immediately. If it was the same as any other day, they would have had the data already, just from the sheer number of daily transactions.
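To make that concrete, a passive, observational version of the question could be run over logs Facebook already holds, along the lines of the sketch below. Everything here – the exposure_log structure, the field names, the numbers – is a hypothetical illustration of the approach, not anything the company or the researchers describe.

```python
# Hypothetical sketch: estimate, from historical logs alone, whether users
# who happened to see more negative posts went on to write more negative
# posts themselves. No feeds are altered; we only correlate past exposure
# with later behaviour.
from statistics import correlation  # requires Python 3.10+

# Illustrative, made-up records: fraction of negative posts a user saw in
# week 1, and the negativity of what they themselves posted in week 2.
exposure_log = [
    {"user": "a", "negative_seen": 0.10, "negative_posted": 0.05},
    {"user": "b", "negative_seen": 0.25, "negative_posted": 0.12},
    {"user": "c", "negative_seen": 0.40, "negative_posted": 0.18},
    {"user": "d", "negative_seen": 0.05, "negative_posted": 0.04},
]

seen = [r["negative_seen"] for r in exposure_log]
posted = [r["negative_posted"] for r in exposure_log]

# Pearson correlation between exposure and later posting. A positive value
# is consistent with emotional contagion, without manipulating anyone.
print(f"exposure/posting correlation: {correlation(seen, posted):.2f}")
```

An observational analysis like this can’t pin down causation the way a randomised manipulation can, but it is the kind of study that could plausibly be assembled from records Facebook already keeps, without changing what anyone saw.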

The big deal is that Facebook messed with people without taking into account whether those people were in a state to be messed with – in order to run a study that, ultimately, will probably be used to sell advertising. This is both unethical and immoral.

But there are two groups at fault here. That study shouldn’t have run. It also should never have been published, because the ethical approval was obviously not quite right – and even if PNAS did publish it, I believe it should have been accompanied by a very long discussion of the appropriate ethics. But I don’t think it should have appeared at all. It’s neither scientific nor ethical enough to be in the record.

Someone speculated over lunch today that this is the real study: the spread of outrage across the Internet. I doubt it, but who knows? They obviously have no issue with mucking around with people, so I guess anything goes. There’s an old saying, “Just because you can, doesn’t mean you should”, and it’s about time that people with their hands on a lot of data worked out that they may have to treat people’s data with more decency and respect, if they want to stay in the data business.