Danny Swibel

Alex Jones, Holocaust Denial, Hate Speech and Tech

The removal of content from conspiracy theorist Alex Jones by Apple, Spotify, Google, and Facebook (though not Twitter) is only the latest controversy for a tech world whose policies on hate speech – whether viewed positively or negatively – remain confusing and inconsistent to citizens worldwide.  Tech companies have lately been squeezed from all sides because of the massive size of their platforms and the mismanagement that has come with it: by users on both the left and the right, by NGOs, and by governments across the planet.  Whether or not you agree with Jones's removal, this is not the first time the tech world has seemed reactive on such issues, nor the first time it has been unclear why the companies took the actions they did.

Last year Twitter removed verification badges (the blue check) from the accounts of white nationalists such as Richard Spencer and David Duke.  Though the blue check was originally meant to authenticate identity, it has been seen by many as an endorsement or an indicator of importance.  Removing it was seen as a way to limit the exposure of such individuals, while supporters of these figures viewed it as a line-crossing editorial decision.  For others, though, tech staying neutral and doing nothing was itself an editorial decision.  Whether it is getting flagged for hate speech, losing your verification check, or losing the ability to have ads appear alongside your videos for "controversial religious or supremacist content," where is the line for removal?  Hate speech has mostly been allowed under the guise of free speech, a safety net that lets social media companies avoid getting into the business of managing content.  But does hate speech that includes calls to violence cross the line?  There is certainly content that is more difficult to define.  Have social media companies defined these parameters clearly enough?  How does one differentiate between disagreeable speech, hate speech, and violence-inspiring speech?  When does removing certain hate speech go against free speech?  And who in these companies decides?

Certain 'fake news' stories designed to antagonize communities against one another for cynical aims can surely be considered 'incitement to violence' by many.  During the 2016 US presidential election, in which as many as 126 million Americans may have been exposed to false news stories allegedly originating from Russia, certain posts actually led to violence.  In May 2016, a Russian-linked group calling itself Heart of Texas pushed out a stream of false and frankly nasty claims, and managed to organize supporters of Hillary Clinton and Donald Trump to rally near each other, creating violent tensions between the two groups.  Should such campaigns be removed?  How should companies judge motive?  Fake news is surely detrimental to society, but it is not in itself illegal.  The question is not only who should decide the motive behind the posts, but whether social media employees should be tasked with parsing the substance of the content and intervening when the hate speech does not translate into direct calls for violence.  Despite the dangers inherent in the spread of such content, I understand why many people say no, tech should stay out of it.

But what happens when fake news is used to stir up hate and succeeds in sparking a massacre?  Digital researcher and analyst Raymond Serrato explained to the Guardian that "Facebook definitely helped certain elements of society to determine the narrative of the conflict in Myanmar."  Some 6,700 Rohingya Muslims were eventually murdered and 650,000 were forced to flee to Bangladesh in the past year.  Extensive Facebook propaganda campaigns had been launched by hardline Buddhists in Myanmar, who claimed falsely that mosques in Yangon were "stockpiling weapons in an attempt to blow up various Buddhist pagodas."  Is social media responsible for some of the violent acts that followed?  In this respect, it is easy to argue yes.  This lie-spreading campaign surely helped vilify the already marginalized Rohingya and helped justify violence against them.  In this case, should this abuse of the platform be allowed as free speech?  Or can its intent be judged as leading directly to violence, making its removal an easy choice?

Facebook, YouTube, and Spotify have been in the news for weeks over the suspension of conspiracy theorist Alex Jones for uploading a number of shows that violated their community guidelines on hate speech and child endangerment.  Perhaps it is a good thing that such questions about his removal are being posed by society at large.  Many conservatives, including Jones himself, have accused social media platforms of trying to silence right-wing voices and commentators.  Others on the left have accused the tech giants of not going far enough, suspending Alex Jones himself for hate speech while allowing his Infowars page, as well as the official Alex Jones page, to remain on the site through a technicality.  Inconsistency in these policies will only continue to bring more controversy and division.  This story is merely the latest of its kind, and there will surely be more.

Facebook CEO Mark Zuckerberg, who has tried to strike a balance between protecting free speech and stopping hate speech and misinformation, originally defended his decision to allow Infowars to remain on the site, describing it as a "free speech" issue.  He went on to explain in an interview with Recode's Kara Swisher that he did not believe fake news should actually be removed from Facebook unless it incites violence.  Even deeply offensive content such as Holocaust denial should remain, he added.  But weeks later, after seeing Infowars podcasts removed from iTunes, Mr. Zuckerberg declared that strikes against Infowars posts would count individually rather than as a single, collective strike, conceding to the view that Jones's posts were defamatory, harmful to children, dangerous, and over the line.  Alex Jones was then promptly removed from Facebook.  Does that mean Facebook is still considering a change in its philosophy, treating Holocaust denial as a form of dangerous hate speech worthy of removal?  The company has finally started to understand the ramifications of its inaction in Myanmar.  Facebook, Twitter, and YouTube continue to have the philosophical debate about their community guidelines, walking a fine line between their bottom line ($), coming across as fair and free, and the risk of being viewed as a catalyst in the unraveling of society.

YouTube's hate speech policy acknowledges the "fine line between what is and what is not considered to be hate speech." It adds that "it is generally okay to criticize a nation-state, but if the primary purpose of the content is to incite hatred against a group of people solely based on their ethnicity, or if the content promotes violence based on any of these core attributes, like religion, it violates our policy."  The videos of al-Qaeda cleric Anwar al-Awlaki were finally removed from YouTube after years of being allowed on the site, as the preacher was deemed largely responsible for inspiring a generation of American terrorists, including the Fort Hood gunman, the Boston Marathon bombers, and the perpetrators of the massacres in San Bernardino, California, and at the Orlando, Florida nightclub.

Yet still present on the platform are videos of white supremacist leader Dr. William Pierce, whose blatantly antisemitic videos overflow with depictions of Jews as nefarious, money-wielding, big-nosed, blood-drinking devils, along with calls to the public to take "Timothy McVeigh style actions" to stop them.  Many of these videos finish with classic antisemitic tropes such as "the Jews are our misfortune," a slogan used by the weekly 1920s and 1930s German tabloid Der Stürmer to call for violence against and the extermination of the Jews.  Fake news about Jews goes back millennia, and it sadly seems these bankrupt ideas will live on, but should these particular videos be so easily shared by tens of thousands of people?  They might not call for violence directly, but hasn't history shown where they can lead?

It is an ongoing and important philosophical debate: what are the best options for handling hate speech, and when does it cross into calls to violence?  Will any of us ever truly understand the rulings of these companies unless they are forced to discuss their policies in the open, not just on the floor of the US Congress or the European Parliament on individual afternoons, but constantly and consistently?  Should hate speech be allowed under the banner of free speech as long as there are no direct calls to violence?  Tech companies are finally beginning to approach these problems as journalists or academics would, rather than as engineers.  But they have still been caught denying the severity of the problem, reacting only after the fact, and discussing potential solutions behind closed doors without much explanation.

The German government has threatened fines of up to €50 million for social media giants that fail to remove certain hate speech or terror content within 24 hours.  The EU and the US government are also considering regulation of social media.  Perhaps this can be seen as a positive way to eliminate dangerous content online, but would we as citizens feel comfortable handing governments the sole mechanism of control over deciding what is what?  Given the rollback of civil liberties, press freedoms, and free thought in places like Turkey or Russia, I'd say a firm no.  Social media companies should continue to consult with civil society, academics, and users as trusted flaggers to help keep their community guidelines fair and consistent, while hearing the concerns of governments and international bodies that demand they take more action against dangerous content.

I believe the free marketplace of ideas will ultimately win, but this freedom cannot be limitless if it serves as an inspiration to violence.  Twitter recently explained that it does not want to be seen as the arbiter of truth, which in some cases is fair.  It is certainly not always easy to decide what is offensive and what can become harmful.  But some posts can be judged for what they really are with basic common sense.  Despite what Alex Jones preached, the Sandy Hook massacre in 2012 happened.  Twenty children between six and seven years of age were shot, as well as six adult staff members.  That is fact, and Mr. Jones's lies about it are defamatory and slanderous, and the kind of falsehood that can certainly lead to more violence.

Social media companies cannot be viewed like a post office, simply delivering the letter and bearing no responsibility for what is inside.  If the contents are dangerous or violent, the post office has a responsibility, just as social media does, to make sure the message is not delivered.  Conversely, social media companies should be careful to understand their limits when flagging and banning posts, so as not to create a mechanism and culture of silencing speech.  The answer to what must change lies somewhere in the middle of all these debates, with users, governments, the UN, NGOs, and the tech companies themselves deliberating carefully and collaborating meaningfully to ensure a reasonable outcome.

About the Author
Danny Swibel is a Tel Aviv-based reporter and analyst with They Can't, an organization that tracks and fights to remove anti-Semitic and extremist content online. He researches issues related to Iraq, Syria, and Islamist groups, as well as security issues related to the Israeli-Palestinian conflict. He was previously an analyst at the counter-terrorism firm Terrogence and a reporter at i24News. He holds an MA from Tel Aviv University's Middle Eastern Studies Program.