Dorit Rubinstein Reiss

Facebook’s policy, banning and anti-vaccine tactics

This is a follow-up to a previous post, Abusing the Algorithm: Using Facebook Reporting to Censor Debate. To remind you: starting in late December, persons connected with the anti-vaccine organization the Australian Vaccination Network falsely reported innocuous comments from vaccine advocates as harassment. Numerous people received 12-hour bans as a result, sometimes several such bans in succession. In response, multiple bloggers called the tactic out, most notably the talented Orac.

The anti-vaccine activists’ behavior is, as explained previously, characteristic of the movement. It is also extremely problematic and deserves strong criticism. That is not the focus of this post, however; here I look at the other part of the picture: Facebook’s policies, which potentially contributed to the problem. Besides condemning the tactics, critics strongly called for a change to the Facebook banning policy that allowed this abuse.

Facebook responded. A member of Facebook’s policy team got in touch with me and explained some of the basics of how they handle claims of harassment, and which changes would be made in response to these events.

The starting point is that Facebook sees preventing harassment of users of its forum as an important part of its mission. This is not obvious: they could, instead, choose to have a free-for-all forum, and there are possible justifications for that based on free speech. An open forum would spare them the need to make tricky, inherently imperfect decisions about which content should be allowed. It would also avoid the problem of a private company that has become a gigantic social forum, in a sense a public square, controlling the speech of its participants. There may be a cost, however: words are powerful, and sometimes people can be hurt, especially since some users of Facebook are minors. Facebook made the policy choice to aim for a “safe” forum. This means they need to balance free speech with preventing attacks, a very tricky endeavor.

How is this implemented? First, contrary to my assumption, he explained, every harassment report is reviewed by a human, who decides on the response. The problem is that with multiple reviewers and a massive volume of reports (Facebook has over 1.2 billion users), Facebook needed to come up with guidelines to ensure consistency among its reviewers. The second problem, and I think Facebook is right about this, is that the content of an individual comment is not always a good indication of whether harassment is taking place: an apparently innocent comment can be harassment in the right context. So they had to come up with guidelines to protect users.

The problem was their choice of guidelines. While the assessment had several components, of major importance here was that Facebook treated the use of the reporter’s name in a comment as very strong evidence that harassment was taking place. The problem is twofold, as I already mentioned: it is natural, indeed polite, to use a person’s name when responding to their comment, so the policy is bound to be over-inclusive; and the approach is easy to abuse once anyone catches on. And apparently the anti-vaccine activists did catch on, which surprised Facebook’s policy team.
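To see why such a signal is over-inclusive, here is a minimal sketch of a name-matching heuristic of the kind described. To be clear, this is purely hypothetical: the function names, scoring, and threshold are my own illustrative assumptions, not anything Facebook has disclosed about its actual system.

```python
# Hypothetical sketch of a name-based harassment heuristic.
# Nothing here reflects Facebook's real code; the scoring,
# threshold, and names are illustrative assumptions only.

def name_match_score(comment_text: str, reporter_name: str) -> float:
    """Return a crude 'harassment' score: 1.0 if the reporter's
    name appears anywhere in the comment, 0.0 otherwise."""
    return 1.0 if reporter_name.lower() in comment_text.lower() else 0.0

def should_ban(comment_text: str, reporter_name: str,
               threshold: float = 0.5) -> bool:
    """Flag the comment when the score crosses the threshold."""
    return name_match_score(comment_text, reporter_name) >= threshold

# A polite, on-topic reply trips the heuristic just as easily as
# an abusive one, because both mention the reporter by name:
polite = "Hi Jane, here is the CDC study you asked about."
abusive = "Jane, you should be silenced."

print(should_ban(polite, "Jane"))   # True - a false positive
print(should_ban(abusive, "Jane"))  # True
```

Anyone who works out the rule can weaponize it: simply report every comment that addresses you by name, however innocuous, and the heuristic will back you up.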

Facebook had already been working on changing its bullying and harassment policies over the past couple of months, but is still figuring out how to implement them. In response to the problems raised in this case, Facebook is re-reviewing reports and working to make sure the new policies are implemented in ways that achieve correct, justifiable results.

Facebook is not inclined to share the new guidelines it is adopting to prevent harassment. They are, again correctly in my view, worried that disclosing their choices may lead to more abuse. But what they are doing is using the profiles of the banned SAVN members (graciously provided by those members; thank you, people) to examine the new policy and make sure that people who received unfair bans under the previous policy would not be banned under the new one.
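In software terms, this is essentially regression testing: re-running known false positives against the new rules to confirm they are now handled correctly. A minimal sketch of the idea, with entirely invented case data and placeholder policy functions that stand in for whatever Facebook actually uses:

```python
# Hypothetical regression test: verify that comments which drew
# unfair bans under the old rules are cleared by the new ones.
# The cases and both policy functions are invented for
# illustration; they do not reflect Facebook's actual systems.

# Known false positives: innocuous comments that triggered bans.
unfair_ban_cases = [
    {"comment": "Hi Jane, here is the CDC study you asked about.",
     "reporter": "Jane"},
    {"comment": "Thanks, Meryl, but that paper was retracted.",
     "reporter": "Meryl"},
]

def old_policy_bans(comment: str, reporter: str) -> bool:
    # Old rule: reporter's name in the comment => ban.
    return reporter.lower() in comment.lower()

def new_policy_bans(comment: str, reporter: str) -> bool:
    # Stand-in for a richer, context-aware rule; for these
    # known-innocuous cases it should never ban.
    return False

for case in unfair_ban_cases:
    # The old rule reproduces the unfair ban; the new one clears it.
    assert old_policy_bans(case["comment"], case["reporter"])
    assert not new_policy_bans(case["comment"], case["reporter"])
print("All previously unfair bans are cleared under the new policy.")
```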

Will the guidelines be perfect? Probably not; in fact, certainly not. There will probably be unjustified bans, and unjustified decisions not to ban, in the future too. It’s a tough balance to achieve, and whatever general rule is adopted will probably not work well in at least some specific cases. But the effort to improve is in place, and at the very least this suggests that future problems can be brought to Facebook’s attention and progress can continue to be made. That’s quite a bit.

More banning: AVWoS’s removal and return

In addition, on January 9, 2014, the pro-vaccine Facebook page the Anti-Vax Wall of Shame was taken down by Facebook, apparently in response to reporting by anti-vaccine activists (see here and here). Again, vaccine advocates spoke up. Upon notification, Facebook’s policy group examined the removal, concluded it was “reviewer error,” and on Friday, January 10, reinstated the page with an apology. The administrators, however, remained banned for several more days, with one administrator receiving a 30-day ban.

On Monday, January 13, Facebook undid the bans on the administrators. They explained that since the removal of the page was an error, the administrators of the page should not have been restricted from posting. They also apologized for the delay in reinstating the administrators.

About the Author
Dorit teaches law at UC Hastings, is a mom, and is a member of the Parents Advisory Board at UC Hastings. She was born in Israel and still has a large family there.