Social Networks have been around for as long as people have socialised. Tools like Facebook have only helped people maintain and manage their social networks in a more effective way. I can locate old school and university friends and maintain a connection with them, and communicate within my group of FacebookFriends™ very easily.

The internet – e-mail, blogs, social media tools – has enabled a communication revolution and has democratized media. Now anyone can easily publish anything about anything and distribute it to a very large audience. If your intentions are good, the internet can be an amazing tool. But it can equally be used to spread hate more effectively than ever. I’ve seen this first hand in the shape of sockpuppeting and anonymous blogs that have sought to defame me. Much like “one man’s terrorist is another man’s freedom fighter”, Google (which owns the platform blogspot.com) has a policy that resembles “one man’s hate speech is another man’s parody”.

Just as it has never been easier to connect with people, it has never been easier to spread hate.

Facebook is a slightly different story, because it requires a greater level of identification of participants (and while you can create a profile with a fake name, you can be more easily tracked). That’s probably one of the reasons this very site uses Facebook to manage comments. Facebook has legal obligations (and a few policies to go with them) regarding hate speech, but how can a huge company with over one billion users truly manage over a billion posts per day and separate the wheat from the chaff? Could China monitor every piece of communication uttered by (or to) every one of its citizens with anyone else anywhere in the world and be sure no-one says something bad (Heaven knows they’ve probably tried)?

Today is the International Day for the Elimination of Racial Discrimination, according to the UN. This coincides with the release of a report by the Australia-based Online Hate Prevention Institute (OHPI) called “Recognizing Hate Speech – Antisemitism on Facebook”.

The OHPI, headed by Dr Andre Oboler, has established its credibility as an expert in online hate campaigns, and has run several effective counter-campaigns against diverse strains of hate, such as Facebook memes targeting Indigenous Australians and memes mocking suicide. It’s quite staggering to see the amount of creative energy expended by the perpetrators of such nasty hate posts.

Where OHPI distinguishes itself is that rather than simply bringing hate campaigns to our attention (not too hard a task), it focuses on the systems needed to combat these campaigns. And with so many billions of Facebook posts, it is only through robust systems that online hatred can be curbed, let alone stopped.

While Facebook has a reporting system, who knows what sits behind it. After all, with billions of posts, there could easily be hundreds of thousands of reports about those posts every day. Many are likely vexatious, so Facebook faces the secondary task of finding the reports that are worthy of action. What resources are they putting toward these tasks? Most likely they have offshored the work to teams of well-meaning people who have little subject matter expertise. Their job is probably to clear their task queues as quickly as possible, and to find every excuse to push the button that produces the response “Thank you for your report. We carefully reviewed the photo you reported, but found it doesn’t violate our community standard on hate speech so we didn’t remove it.”, because that’s the easiest way to make these things go away.

This is the response OHPI received to many offensive photos with themes like Anne Frank, Hitler, the Protocols of the Elders of Zion, and Israel. Photos like this one are still there despite being reported.

Would the typical Facebook staffer even recognize a meme that used a photo of Anne Frank? They certainly seem unable to effectively recognize Holocaust denial.

The OHPI report makes several calls for Facebook to formally recognize certain categories of hate speech, to educate their staff, and to adopt international standards for recognition of antisemitism. They also make recommendations relating to Facebook’s reporting systems, and call for specific systemic improvements that would make it easier for Facebook itself to do the job it ought to be doing. Some of these recommendations would likely reduce the human workload (and therefore the cost) to Facebook of identifying hate speech.

After I suggested to OHPI that Facebook introduce a reputation system for people who report improper content (such as the star system used by eBay), they advised that they had already developed plans for a platform-independent system for tracking complaints and how companies deal with them. All they need to build it is funding.
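To give a sense of what a reporter-reputation system could look like, here is a minimal, entirely hypothetical sketch (not OHPI’s or Facebook’s actual design – the names, scoring formula, and triage rule are all my own assumptions): each reporter’s track record of upheld versus rejected reports yields a reliability score, and the review queue is sorted so that reports from historically reliable reporters are examined first.

```python
from dataclasses import dataclass

@dataclass
class Reporter:
    """Hypothetical model of one user's content-reporting history."""
    upheld: int = 0    # past reports that led to content removal
    rejected: int = 0  # past reports dismissed as unfounded

    def reputation(self) -> float:
        """Laplace-smoothed fraction of upheld reports, so a brand-new
        reporter starts at a neutral 0.5 rather than at 0 or 1."""
        return (self.upheld + 1) / (self.upheld + self.rejected + 2)

def triage(reports):
    """Order the review queue so reports filed by historically
    reliable reporters come first."""
    return sorted(reports,
                  key=lambda r: r["reporter"].reputation(),
                  reverse=True)

# A reliable reporter's report jumps ahead of a noisy reporter's.
good = Reporter(upheld=9, rejected=1)   # reputation = 10/12 ≈ 0.83
noisy = Reporter(upheld=1, rejected=9)  # reputation = 2/12 ≈ 0.17
queue = [{"id": "A", "reporter": noisy},
         {"id": "B", "reporter": good}]
print([r["id"] for r in triage(queue)])  # → ['B', 'A']
```

The smoothing matters: without it, a user’s very first report would pin their score to 0 or 1, and vexatious reporters could game the ordering with a single lucky hit. A real system would of course need far more than this, but even a crude prioritisation like it could cut the human workload the report describes.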

Above all, Facebook must recognize that they can’t do it all. They need to start communicating with and recognizing experts like OHPI and others who can provide subject matter expertise in specific areas like online hatred. This will ensure The Social Network doesn’t turn into The Hate Network.