Facebook: Your Slightly Manipulative But Accommodating Friend

Encouraging people to sign up as organ donors seems nice, as does standing up for transgender people. Seeing how you react to manipulated input, not so much. Friend or foe?

Feature by Natasha Bissett | 16 Jul 2014

If Facebook were a person, it would be that friend who knows everyone and all their business. It’d also be a bit opportunistic, because while it’s incredibly accepting of people no matter their religion, race, occupation – or, most recently, gender self-identification – it’s also not shy about pulling manipulative tricks to get deeper into your life. In the final week of June, Facebook announced that UK users would get greater gender self-identification options, a win for transgender Britons, but it also published the findings of an experiment measuring how the positive and negative posts you see affect your own status updates. So, what do we make of Facebook’s ultimate intentions?

In the experiment, Facebook manipulated nearly 700,000 users’ news feeds, hiding selected positive or negative posts and tracking whether the affected users then posted more positive or negative content themselves. The response internet-wide has been a mix of stamping feet, apathetic shrugs and the self-righteousness of the non-Facebooker. But was the brouhaha about Facebook emotionally manipulating its users really justified?

Many people felt uncomfortable with Facebook’s actions, describing them as “creepy” or “evil”. One news commenter called the company “disgusting, manipulative monsters”, while others likened the incident to George Orwell’s dystopian novel 1984. This reaction is understandable, especially with reports on the Facebook experiment making it sound sinister. Slate, for example, reported that the experiment “intentionally manipulated users’ emotions without their knowledge”, and The Guardian described affected users as “emotional lab rats”.

While no one likes to feel manipulated or violated, Facebook already plays around with your news feed daily, so the experiment isn’t far outside the company’s usual interference in your digital life. Facebook uses algorithms to temper the digital storm of information that would otherwise bombard you. It promotes stories from people you regularly interact with, tailors content based on your likes, and pushes posts that are popular among your friends (or, for public posts, the public at large) higher up your news feed. Facebook also promotes stories among friends that are worthy of congratulations: big events like birthdays, weddings, newborn babies and new jobs.
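To make that filtering concrete, here is a minimal, purely illustrative sketch of how a feed-ranking heuristic of this kind could work. The signals, weights and function names below are invented for illustration; Facebook’s real ranking algorithm is proprietary and far more elaborate.

```python
# Illustrative toy feed-ranking heuristic. This is NOT Facebook's actual
# algorithm; the signals and weights are assumptions made for illustration.

def score_post(interaction_count, matches_liked_topics, friend_engagements, hours_old):
    """Return a relevance score for one post in a hypothetical news feed."""
    affinity = min(interaction_count / 50.0, 1.0)     # how often you interact with the poster
    interest = 1.0 if matches_liked_topics else 0.3   # tailoring based on your likes
    popularity = min(friend_engagements / 20.0, 1.0)  # how much your friends engaged with it
    freshness = 1.0 / (1.0 + hours_old / 24.0)        # newer posts rank higher
    return (2.0 * affinity + 1.5 * interest + 1.0 * popularity) * freshness

posts = [
    {"id": "friend_wedding", "interaction_count": 40, "matches_liked_topics": True,
     "friend_engagements": 30, "hours_old": 3},
    {"id": "acquaintance_lunch", "interaction_count": 2, "matches_liked_topics": False,
     "friend_engagements": 1, "hours_old": 1},
]

feed = sorted(
    posts,
    key=lambda p: score_post(p["interaction_count"], p["matches_liked_topics"],
                             p["friend_engagements"], p["hours_old"]),
    reverse=True,
)
print([p["id"] for p in feed])  # the wedding post outranks the casual update
```

Even a toy model like this shows how easily the mix of what you see can be tilted, simply by nudging a few weights or hiding certain posts altogether.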

Manipulation is one thing, but there’s also the question of what the experiment actually found. The researchers reported that omitting positive posts reduced positive content output and increased negative output, and vice versa, but the effect was tiny: a shift of less than 0.1%, equivalent to roughly one fewer emotional word per thousand words written. And the 689,003 users in the study amount to only about 0.06% of Facebook’s roughly 1.2 billion members.
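For a sense of scale, the back-of-the-envelope arithmetic looks like this, using the approximate figures quoted above rather than any re-analysis of the study’s data:

```python
# Rough scale of the Facebook emotion experiment, based on the approximate
# figures cited in this article (not a re-analysis of the study itself).
users_in_study = 689_003
facebook_users = 1_200_000_000   # approximate total user base at the time

share_of_users = users_in_study / facebook_users
print(f"Share of Facebook users in the study: {share_of_users:.3%}")   # ~0.057%

# The reported effect: a shift of under 0.1% in emotional word use,
# i.e. about one emotional word in every thousand words written.
effect_per_thousand_words = 0.001 * 1_000
print(f"Emotional words affected per 1,000 written: {effect_per_thousand_words:.0f}")
```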

Some commentators raised the question of what impact the experiment might have had on users at risk of depression or suicide. Law professor James Grimmelmann blogged that “the study harmed participants” and should have had ethical clearance as an experiment on human subjects. Without knowing who was included, and without significant psychological follow-up, it’s difficult to know whether anyone was actually pushed towards self-harm or suicide because they had a bad week on Facebook.

The other question is whether Facebook and the researchers had ethical clearance to study users, and whether informed consent was required. While many users didn’t seem surprised that Facebook would do something like this, there was plenty of disappointment about PNAS’ role in publishing potentially unethically obtained data, a concern the journal later responded to.

Facebook used its Terms of Service as its shield: all users agree to the Terms when creating an account, whether they read them or not. PNAS clarified that it felt the Terms were ethically sufficient grounds to publish the article, but admitted “it is nevertheless a matter of concern that the collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent and allowing participants to opt out.” The sticking point for many with Facebook’s defence is that the research clause in the Terms was added four months after the experiment was conducted. Academically, studies about Facebook are important for understanding the effects of a platform unrivalled in its interactivity and pervasiveness in daily life. This also isn’t the first time Facebook has used its data to research people: in 2012, researchers logged 76 million links shared by 250 million users over seven weeks to determine how influential friends were in promoting content.

From a consumer perspective, there’s disappointment about Facebook’s unrepentant response to the incident. Chief Operating Officer Sheryl Sandberg apologised for poor communication and said, “We never meant to upset you.” The lead researcher offered an apologetic explanation on his Facebook page, but it seems the company was not sorry for conducting the research, just for the way it was reported and revealed. It’s just another legal speed bump over user privacy for the company, following the one in late 2011 that resulted in beefed-up privacy settings.

Regardless of the furore and the investigations, the Facebook experiment has been a flash in the pan for most of the public. Some people left Facebook, some tightened up their settings and profiles, but the majority probably kept using the site as normal. So, was the furore justified? In many ways, the answer is no: tracking, mining and manipulating content for marketing and research purposes is Facebook’s standard modus operandi. But there are larger issues here: corporate research crossing ethical boundaries and hiding behind generic Terms of Service; our collective laxity in not reading fine-print agreements; our indifference to how much information we’re willing to give away to a corporate entity; and the faith we place in it not being abused.