Facebook, Technology, and the End of the World

The great knowledge-spreading, democratizing tool that the internet was created to be has also revealed its downsides.  Technology unveiled to help the world can also be abused.  Surprise, surprise.  Society’s frustration with social media and the internet is growing by the hour.  Tech companies know virtually every detail about us.  Citizens’ information is being overshared, and our elections are being swayed by propaganda campaigns built on user data, campaigns that create echo chambers of information.  And despite reassuring words from tech heads like Mark Zuckerberg, who admit a “breach of trust” has occurred and promise they are doing everything they can to tackle these problems, we ultimately remain victims of the Frankenstein monster they created.  Terror, hate speech, trolling, fake news, and the erosion of citizens’ private lives continue to plague us.

As cautious citizens, we must entertain the possibility that they are not looking out for the common good.  How can companies whose bottom line is profit be expected to act fully morally and be trusted as the protectors of free society?  It’s an obvious and valid concern.  But not everything in the world is so binary, and many of the tech employees I’ve spoken with genuinely want to make sure they are not part of a system that does harm in the world.  They seem sincere in wanting to find solutions to our concerns, and are working diligently to fix our experience.  Still, if we give companies like Facebook, Twitter, and Google the benefit of the doubt that they are aware of a common good and don’t have purely cynical aims, why do things keep turning out so terribly?

It seems they either:

1. They keep being reactive rather than proactive in catching the problems that plague their networks.  We saw this recently as hate speech from neo-Nazi groups and terror recruitment campaigns by groups like ISIS spread across these networks.  When tech companies began ramping up efforts to squash this phenomenon, they were largely unprepared: lacking coherent policy on what is and is not acceptable, lacking sufficient numbers on their human reviewer teams, and lacking the language skills, judgment, and expertise to decide what needed to be removed and what should remain.

The September 2016 removal of the famous historical anti-war photograph known as “Napalm Girl” was one high-profile error in judgment by Facebook, which cited its effort to limit child nudity.  Mass complaints from users, who argued the photo was perhaps one of the single most important photographs in journalism, pushed Facebook to reverse its decision.  Another policy head-scratcher occurred when YouTube suspended blogger Logan Paul for posting a video of himself discovering a dead body in a suicide forest in Japan.  After originally deeming that the video did not violate its policy on violent and graphic content – allowing it to remain and generate revenue – the Google-owned company cracked down on him after receiving public backlash.  Many still wonder whether YouTube ever fully understood which rule Mr. Paul had actually broken; it seemed the company simply reacted to make the problem go away.  Social media sites are allowed to make mistakes sometimes, with the hope that they will learn from them and develop more specific, transparent policy.  But when they continue to exhibit a culture of reactivity, it becomes problematic.  And governments are growing tired of looking the other way.

With roughly 6,000 tweets posted every second on Twitter, totaling a whopping 500 million tweets per day, 400 hours of video added per minute on YouTube, and Facebook’s roughly 2.07 billion users worldwide posting around 30 million messages per day, these tech firms have realized that the power and reach of their algorithms must be stepped up to spot violent imagery, videos, and hate speech before they can spread and serve as inspiration for real-life violence.  During his reprimand by the US Congress, Mark Zuckerberg said that Facebook will need another 5-10 years before it can handle hate speech properly.  That’s a long time to wait.  But also, how can these algorithms be expected to get it right when the human beings who program them continue to be behind the curve and fail to grasp the problem?

2. Tech companies are miscalculating the scale of the problem and not taking full precautions.  (Taking full precautions often means going far beyond the bounds of normal precaution to ensure safety, even if some steps are later deemed unnecessary in hindsight.)  ISIS had been recruiting for years on various social media platforms.  Facebook believed the political consultancy Cambridge Analytica had destroyed the data it harvested from roughly 87 million people – 37 million more than previously reported.

3. Tech companies are caught in a loop of conflict of interest.  Full security precautions more likely than not mean less interconnectivity, as well as more money and energy spent.  Let’s first assume that the majority of employees innocently fall into the conflict-of-interest trap.  The metaphor might be that no person accused of being cheap ever actually thinks they are cheap – they justify it as being frugal and focusing on the essential.  In this reading, they really feel they are doing enough.  Now let’s look at the situation cynically, where the conflict of interest slowly pushes accountability behind the goal of winning.  The leaked June 2016 memo by Facebook executive Andrew Bosworth is the embodiment of this bleaker reality: “Maybe [Facebook] costs a life by exposing someone to bullies.  Maybe someone dies in a terrorist attack coordinated on our tools.  And still we connect people.”  The “ugly truth,” Bosworth explains, “is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good.”

Wherever we stand, the problems might be less about errors and more about structure and philosophy.  In a New York Times piece titled “Don’t Fix Facebook. Replace It,” law professor Tim Wu wrote, “The conflicts are too formidable, the pressure to amass data and promise everything to advertisers is too strong for even the well-intentioned to resist.”  Similarly, even if YouTube came to its senses about the Logan Paul video, some feel the company has built itself on rewarding creators for being outrageous, and so is naturally unprepared and unwilling to err on the side of caution.  Whatever side of this philosophical discussion one takes, whether or not Facebook’s purpose is so grand that the ends justify the means, we can all agree that it shouldn’t be a handful of tech CEOs who get to decide how to proceed.

What is the correct answer moving forward?  That’s a tricky one.  Can tech heads shield their tools from being exploited to the detriment of free society?  Can clearer, more transparent policy help make us feel safer with our data?  Can human moderators and algorithms eventually handle the task of policing hate speech?  I want tech companies to be better moderators in one respect, but I also don’t know how comfortable I feel with tech being the sole arbiter of truth, deciding in a heartbeat which news is legitimate and which is fake.  I don’t like the inconsistencies of their community guidelines in deciding what is and is not objectionable content.  And I don’t feel comfortable with tech continuing to decide everything in the dark.

European governments felt they had no choice but to threaten massive fines (up to 50 million euros) against the tech world for failure to respond in a timely manner to pages or posts deemed dangerous.  In May, new regulations take effect making it easier for people to know how their data is being used.  US lawmakers also look to be making moves, beginning to look more closely at Section 230 of the Communications Decency Act.  That provision has freed social media companies from the responsibilities of being media companies, virtually immunizing them from liability for the items shared on their sites.  Many view these steps as positive (and perhaps they are), but I don’t know how I feel about too much regulation and oversight from government either.  Seeing the more tightly controlled internet of countries like China, Turkey, Vietnam, or Russia should make us all take a step back and ponder the ramifications carefully.

Protecting the internet as a basic human right should be a top priority, alongside encouraging social media giants to sign an agreed-upon charter that puts the safety and well-being of citizens worldwide above profit.  (Perhaps I am being naive.)  Their community standards must be more specifically defined.  All people must be able to access a safe internet in order to exercise and enjoy their rights to freedom of expression and opinion.  Progress has been made in safeguarding the internet through the efforts of civil society, governments, and internet companies themselves, yet hate and abuse remain.  But this does not mean it’s the end of the world.  We can find a way.

Moving forward, we should perhaps heed the warning of the Austrian-British philosopher Karl Popper, who argued in 1945 that if a society is tolerant without limit, its capacity for tolerance will eventually be seized or destroyed by the intolerant.  Popper came to the seemingly paradoxical conclusion that in order to maintain a tolerant society, the society must be intolerant of intolerance.  In a resolution passed in July 2016, the UN Human Rights Council described the internet as having “great potential to accelerate human progress.”  If we bridge the gaps in combating the abuse, hate, and extremism that come with it, we can perhaps one day realize that potential.

About the Author
Danny Swibel is a Tel Aviv-based reporter and analyst with They Can't, an organization that tracks and fights to remove anti-Semitic and extremist content online. He researches issues related to Iraq, Syria, and Islamist groups, as well as security issues related to the Israeli-Palestinian conflict. He was previously an analyst at the counter-terrorism firm Terrogence and a reporter at i24News. He holds an MA from Tel Aviv University's Middle Eastern Studies Program.