Faye Lincoln

The Ethics of Censorship

The debate over censorship on the internet and social media platforms is raging, given the tension between corporate interests and societal harm. Many want to hold large digital platforms such as Google, Facebook, and Twitter accountable for damaging content created or shared by their users. Others believe that no content except that which is clearly illegal should be banned from such sites, based on the right to freedom of speech. Whether the information being propagated is true or false, and who is entitled to make that call, only further complicates the matter.

In January 2020, Mark Zuckerberg, CEO of Facebook, spoke at the Silicon Slopes Technology Summit held in Salt Lake City, Utah. Referring to his 2019 Senate Judiciary testimony, he addressed the issue. A central question is whether companies such as Facebook have a responsibility to censor false or harmful content on their platforms.

His response seemed reasonable at the time. Of Facebook’s total global workforce, 35,000 people monitor social media content for clearly violent and illegal postings. Beyond this demarcation, communications enter a “gray zone,” where censorship, he believes, impinges on the right to free speech. If Facebook bans such content, when has it gone too far? Reportedly, after much soul searching, Mr. Zuckerberg determined that eliminating so-called “gray zone” communications was not an appropriate risk to take.

Given the public concern over the right to free speech, as well as the risk that certain communications correlate with potential societal harm, this question needs to be addressed comprehensively. But the average person holds a point of view based on a limited slice of knowledge about what censorship, potential harm, and the rights of companies really mean in terms of social context, statutory law, and corporate liability. Few could identify the elements necessary to analyze the possibility of harm in the digital world.

To develop a basic understanding of corporate responsibility with regard to the banning or censoring of information, it is helpful to understand how digital platforms work. The most common and fundamental question people ask is whether these providers should be regulated in the same manner as companies that transmit data over telephone lines or airwaves. The answer not only provides insight into the legalities of data transmission in the digital world, but also helps us make sense of corporate accountability for monitoring harmful internet and social media content. First, however, this requires a brief explanation of how organizations that transmit internet data differ from those that host internet websites and social media platforms.

In the United States, companies such as Cisco provide the physical infrastructure to internet service providers (ISPs) through long-distance cables. Once data reaches a regional destination, providers transmit it through secondary lines to smaller geographic areas. CenturyLink, Comcast, and Google Fiber, to name a few, are both regional and community players.

Data can be transmitted wirelessly or over copper wiring, coaxial cable (electrical current transmitted through higher-quality copper wire), or fiber-optic broadband (light transmitted at tremendous speeds). As information is carried from local communities to regional and national lines, routers direct information requests from users to the proper locations and back again. As with telephone calls, we do not hold these companies responsible for users’ content. Provider systems are merely carriers of information, and callers are responsible for their actions.

Digital communications have several similarities to—and significant differences from—telephone communications. Someone who wants an online presence can sign up for a platform like WordPress, Blogger, or Medium, or register a domain with a hosting company and design their own site. In either case, readers retrieve the content via a browser. Hosting companies are generally treated as common carriers (like telephone companies), although their Terms of Service (ToS) include prohibitions on content that is unlawful or that infringes on other entities’ intellectual property.
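To make the layering concrete, the sketch below (in Python, using only the standard library, with the reserved demonstration domain example.com standing in for any hosted site) shows roughly what a browser does when a reader retrieves a page: it looks up the hosting company’s server by domain name and requests the page, while the ISPs and backbone routers in between merely forward the packets without authoring or inspecting the content. This is a simplified illustration, not a description of any particular platform’s internals.

    # A rough sketch of what a browser does when a reader visits a hosted site.
    # example.com is a reserved demonstration domain; any hosted site works the same way.
    import socket
    from urllib.request import urlopen

    domain = "example.com"

    # Step 1: DNS resolution -- find the address of the server where the site is hosted.
    server_ip = socket.gethostbyname(domain)
    print(f"{domain} resolves to {server_ip}")

    # Step 2: HTTP request -- ask that server for the page. The carriers in between
    # only forward these packets; the content itself was written by the site owner
    # and is served by the hosting company.
    with urlopen(f"http://{domain}/") as response:
        page = response.read()

    print(f"Retrieved {len(page)} bytes authored by the site owner, not by the carriers")

Whether those packets travel over copper, coaxial cable, or fiber never changes who is responsible for the words on the page; the transport layer and the content layer remain distinct.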

Hosting companies, therefore, act more like “landlords” to originating website owners, functioning as conveyors of website information. An individual or company creates a webpage, and the landlord rents space on its servers so that everyone can gain access to the site. In a landlord/tenant relationship, the landlord is not responsible for the actions of the tenant. The tenant has certain rights, as well as obligations, based on law and a contract. In the digital world, this agreement is commonly referred to as the ToS or Terms of Use, which governs the webpage owner’s responsibilities, including prohibiting illegal activities.

Given the billions of online interactions that take place daily, it is impractical to monitor them all and make decisions about content appropriateness. Technically, the internet is not owned by any one company or person; it is based on distributed technology. This makes the website originator and end user responsible for their actions. The ToS agreement governs such interactions, but the internet company does not approve, design, or monitor the content of webpages.

Social media platforms, unlike hosting companies, have a second type of relationship with content creators and are referred to as “publisher” platforms. Facebook, Twitter, and YouTube function as publishers. These digital platforms are owned products of the companies that run them, unlike hosting companies, which do not assert ownership over the content they serve. The companies design the templates that people use to connect and communicate with each other, oversee their general use, and promote preferred content. In some cases, companies that are landlords can also be publishers. Google is a landlord when hosting websites and a publisher when overseeing YouTube and Blogger.

As with a traditional publisher, there is oversight or control over how people use the product templates and what content is accessible to the public. What is different is that formal editing and approval decisions do not occur, and there is greater freedom for everyone using these sites. In this scenario, one would argue that social media platforms bear more corporate and social responsibility for their sites’ contents than hosting companies do, although not to the extent that book or newspaper publishers do.

With this basic level of understanding, it should be relatively easy to determine just how much liability both “landlord” and “publisher” sites might have based on platform content. Unlawful activities such as child pornography and sex trafficking should be prohibited by both types. An internet landlord should enforce the ToS agreement and close down the account, as long as it can reasonably screen for content. The originator of the website, and possibly the end user, should be held responsible for the content and its use. A “publisher” digital company has the additional obligation to aggressively monitor and eliminate illegal content.

What becomes more difficult to identify are gray areas in social media that are not illegal but are potentially harmful to society. Should digital platforms that are landlords or publishers be responsible for revoking the access of “gray zone” content providers? Many would say that if a high potential for social harm exists, the social media company should share liability with the originator of the content. A company’s responsibility for its actions should apply to the digital world. We have already seen that it is next to impossible for an internet host, as a landlord, to monitor all content existing on its servers. But such monitoring is possible for publisher digital platforms.

By way of example, a recent case brought by the US Department of Justice held Purdue Pharma responsible for its role in the opioid drug epidemic due to its aggressive marketing of OxyContin. As the perpetrator of this kind of marketing and abuse, Purdue knowingly harmed society, and it was appropriate to hold the company accountable. Fines and penalties totaled $8.3 billion. If social media platforms and websites presented information aligned with the inappropriate promotion of opioid use to the public, should these user sites also be held accountable?

In the case of Purdue Pharma, given that societal harm was demonstrated in a court of law, hosting companies and social media platforms should remove culpable webpages and posts. Removing this content serves to minimize culpability and meet a societal obligation. A similar example is the bar owner, who shares responsibility for ensuring that a customer does not order too many drinks and for preventing an intoxicated customer from driving. A drunk driver is a potential harm to society.

These cases are clear-cut, just as there is no question about the divisiveness and harm caused to society when social media communications result in heightened violence by individuals and groups. When the outcome is brutal attacks, such as the insurrection at the Capitol Building in Washington, DC, on January 6, or racist and anti-Semitic violence around the country, there is an implied social responsibility to remove such damaging content.

In March 2021, Mark Zuckerberg testified during hearings on social media disinformation before the Joint House Subcommittees on Consumer Protection & Commerce and Communications & Technology. He stated that communications and actions that run the risk of “imminent harm” should be removed. Facebook’s independent Oversight Board, in a decision made public on May 5, 2021, upheld Mr. Trump’s Facebook ban based on a “serious risk of violence” and his widespread influence. Facebook subsequently set these restrictions to last for a period of two years. Yet the counterargument to banning such content focuses on the position that information should not be censored, based on the fundamentals of free speech.

Impact of Law vs. Free Speech

Regardless of how society feels about censorship, one must ask two fundamental questions: what does the law say about digital platforms and their liability, and what is the relationship to the right of free speech? This analysis now takes an interesting turn. The first discussion will deal with the law governing digital liability, and the second with the constitutional impact of free speech.

The Communications Act of 1934 governs airwave frequencies and communications. In 1996, Congress passed the Communications Decency Act (CDA), whose Section 230 addresses the issue of liability for internet companies and social media providers (referred to as “interactive computer services”). The Act was passed at a time when the internet was first developing and did not yet have the complexity, breadth, and size that we see today. The goal was to provide a platform for innovation and development. Section 230 reads as follows:

Protection for “Good Samaritan” Blocking and Screening of Offensive Material

(1) Treatment of Publisher or Speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. (47 U.S.C. § 230(c)(1))

This law made clear that, regardless of whether the platform is considered a landlord or a publisher, no hosting company or social media platform is liable for second- or third-party content. Except in the case of certain federally imposed restrictions, both internet hosting providers and social media publisher platforms have no liability, including shared liability, under the law. Other major countries do not have such a law on the books. This is a primary reason electronic automation and the internet have been so innovative in the US.[1] But, regardless of harm to society, there is no requirement for any of these companies, whether Google, MSN, Facebook, or Twitter, to remove inappropriate gray-area content.

Second, let us evaluate federal law related to free speech and censorship. This is addressed in the First Amendment (1A) to the US Constitution. One does not need to be an attorney to understand what it implies:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

Censorship, at least as discussed in the media today, tends to focus on 1A arguments as to whether companies have the right to ban content. The First Amendment clearly states that Congress, as part of our democracy, is prohibited from passing laws that lead to the censoring or banning of information. Exceptions to this prohibition that have withstood constitutional scrutiny include obscenity, child pornography, libel, and slander. The right to free speech is assured by preventing Congress from passing laws that curtail freedom of speech or the press, and this applies to the digital world as well.

By extension, the federal government cannot censor information. Furthermore, through the Fourteenth Amendment to the Constitution, no state, division of a state, or service governed by the state, including local government, can pass legislation or act in a manner that bans or censors information. There have been many court cases involving public libraries, as well as schools, that wished to ban or censor reading materials. Except in the case of clearly illegal material, such action is prohibited, as the Supreme Court has upheld.[2]

Congress, as well as state and local governments, is prohibited from censoring or banning free speech. The Chinese government, in contrast, has no such constraint and censors information that criticizes the government. China is also free to invade privacy by using information for surveillance, as in the case of using facial recognition to impose travel bans. Having the government censor books, media, and electronic information is a direction no democracy can, or should, move toward. If this did occur, where would we draw the line? When does such action go too far?

The political right has argued that digital corporations should be barred from censoring any information, based on the right to free speech. Such organizations, legislators, and citizens accuse social media platforms of politically motivated censorship when they ban certain conservative groups’ communications. The political left’s groups and individuals demand the opposite: that social media content be removed if there is a high correlation with harm to society. Both are wrong within current legal parameters. The political right is incorrect because only government is prohibited from censoring information in order to ensure free speech; private companies in America are not governed by such laws. The left argues that companies such as Facebook have an obligation to both monitor and eliminate communications that are potentially harmful to society. Based on changing societal values, they may be right—but Section 230 of the CDA grants these companies immunity from liability, thereby creating incentives to take no action.

However, both parties have missed the point. Private corporations are allowed to determine what services they provide and what products they carry—and, to an extent, to whom they provide these services and products. The Constitution does not prevent companies from deciding what they choose to sell or make available to the public. It only prevents the government from passing laws or taking actions that censor content. Take Starbucks as a simple example. If a large group of customers does not like a certain type of coffee, the company is free to eliminate the product from its menu. Or, if a publishing company does not think a book will sell or considers it inappropriate for its brand, that company is free to decide not to publish the book.

If a company sells a product or service, or provides information, and its representatives believe the product or information to be somehow offensive, or even harmful to the public, that company has every right, freedom, and even obligation to remove or censor the product sold or the information provided. This extends to privately owned internet companies as well as social media platforms. Such companies remove content prohibited by their ToS, prevent users from posting, and suspend or revoke users’ access to their platforms. This does not prohibit another platform from picking up the same content; it only means that the originating platform can terminate its business relationship with a given user.

Furthermore, regardless of whether companies operate as landlords or publishers, Section 230 protects them from liability when information is banned. Alternatively, if a digital platform or internet company decides not to eliminate potentially harmful social media content that falls in the gray zone of harm to society, that company is also protected. Only the originating website or user remains at fault. This interpretation has been repeatedly upheld by the courts,[3] allowing both hosting and social media companies wide protection whether they remove content or users—or don’t.

Societal Expectations and Self-Advocacy

Clearly, there is overwhelming evidence that our culture is becoming more sophisticated when it comes to social media and its adverse impacts. We may not always be able to determine whether a site or group of sites causes harm to society, but there is growing concern over the link between certain forms of social media content and the likelihood of harm.

As America becomes more divided, especially over wealth inequality and political issues, radicalization within social media escalates, creating the potential for inciting violence in our communities. The storming of the Capitol on January 6, 2021, by QAnon, the Proud Boys, and other groups is the most obvious example. But even the George Floyd protests, which began as peaceful demonstrations, turned riotous and violent. Social media polarization has escalated the risk of this outcome.

Digital technology has enabled social media platforms to facilitate growing self-advocacy efforts that reach national and global audiences. These collective voices spread like wildfire in short periods of time. In a democracy, this allows groups of people to amplify their voice and effectiveness with fewer capital resources. While negative media campaigns and disinformation can go viral in the blink of an eye, campaigns seeking positive change can be amplified as well.

The context of societal expectations changes rapidly when people post content to multiple social media sites simultaneously. Emotional connections are far more effective when advocating positions, and misinformation only heightens emotional reactions. And, although social media reaches a global audience, it is the online impact of many smaller, more socially connected networks of family and friends that really promotes widespread awareness of the larger issues, even if it results in adverse consequences.

The result is that political advocacy has increased with the rise of digital platforms. People expect action to be taken on important issues that affect their communities. If human rights have been violated, or if something or someone is causing violent harm to society, then the growing voice of self-advocacy on social media platforms begins to change expectations. For example, the #MeToo movement started with specific instances of sexual harassment and discrimination against women. Soon, voices—not only women’s—demanding accountability grew around the country.

Preventing or Reducing Societal Harm on Digital Platforms

It does not matter that, at least under current law, a private digital company is not liable for its actions, whether it keeps content up for the public or chooses to ban it. Some form of censorship, and proactive or preemptive action against clearly harmful outcomes, is becoming acceptable, even anticipated. At the same time, there is no guarantee that the raised voice of self-advocacy will result in needed change. As we begin to address these issues, we must recognize that they will not be quickly resolved. Nevertheless, dialogue about solutions requires serious attention.

Disinformation vs. Truth: We recognize that self-advocacy is not always grounded in truth, making it harder to determine what constitutes harm to society. Disinformation is a fact of life. And while this article does not specifically address that issue, it is tightly woven into the social media platforms of today. Verifying information and rooting out misinformation is challenging. In the past, newspapers, as publishers, were responsible for verifying facts and investigating the truth, and there was great pride in this type of journalism. Today, only a handful of newspapers seem to focus on the truth. Online news stories are often based on human interest, for the purpose of selling advertisements and gaining viewership. There is no unspoken “code” for ensuring truth in social media.

Critical discussion, with viewers able to reach their own conclusions intelligently through civil discourse, is not valued. Instead, all points of view are argued as if they are right, leading to divisive and polarized positions—with little opportunity for readers to reach informed conclusions for themselves. Information has become a commodity, linked to advertising dollars and controlled through algorithms. We are no longer able to discern the nuances of issues for ourselves. Instead, the “talking heads” of social media and talk shows leave us feeling pessimistic, exhausted, and confused.

We need to develop social media platforms, models of discussion, and institutions that encourage truthful civil discourse. Given the accelerating pace of technology, we must watch for emerging models that begin to address this issue in a digital environment. For example, the persistent growth of online conferencing platforms such as Zoom and LiveStream during the COVID pandemic has created opportunities to promote balanced discussions based on sound facts and policies. These platforms can become models that help participants and viewers develop deeper thinking on unfamiliar topics.

New institutional forums must be established that can generate thoughtful discourse on the direction that digital technology and innovation are taking. Where digital technology can be beneficial, state and federal forums should help guide solutions to today’s challenges. These institutions must serve the greater good, be values-based, and stimulate creativity in thought and problem-solving.

Societal Values: Civil discourse and truth must be instilled as values in a variety of settings for the younger generation. This should grow out of our personal interactions, families, educational systems, religious settings, and local communities. Society must rise up and decide what is unacceptable behavior on digital platforms when it causes harm or is not truthful. This is where filters for certain programs and websites can be used by parents and teachers. Although this may be idealistic in the short term, we must begin now to target future expectations.

People of all ages need to learn to communicate responsibly and think critically about their reactions. Digital spaces are decoupled from physical reality. Online interactions are often undisciplined, with people proceeding in an uncivil and hurtful manner. This is often facilitated by the anonymity granted by interacting at a remove; people often type responses they would never make face-to-face. Adults need to act professionally in their communications, as we become role models for others. We have a shared responsibility to teach children to interact appropriately on social media platforms.

We must also teach children how to maintain some distance from their devices and a healthy relationship with digital socialization. When people are unable to talk to one another, focusing instead solely on their devices, this sets a poor example for the younger generation. We have lost the ability to hold deeply satisfying conversations, given that the digital medium often leads to shallower interactions. Only with our guidance will the younger generation learn to differentiate fact from fiction and bad behaviors from good ones. Decisions about what and how we communicate should be grounded first in our human relationships, and then enhanced through digital devices in a value-based, additive manner.

Economic Solutions: Even if self-advocacy efforts and social awareness of issues rise to a critical level on the national or global stage, there is no guarantee that positive change will follow. True change recognizes that systemic problems exist, and until people are willing to do the hard work of altering the systems around them, nothing will happen. This is where economics can play a crucial role. In a culture where economics is the primary driver, hitting the corporate pocketbook may have to occur before social reform takes place.

For example, social media platforms, especially those that are considered publishers, maintain accounts that routinely communicate and escalate divisiveness and violence. Even if such accounts are strongly linked to violence in society, public opinion alone may not be enough to encourage a company to close them down. After all, no liability accrues to the company under Section 230. Under these circumstances, hitting the company’s revenues is what matters. If CEOs of large companies, acting independently of one another, restricted advertising dollars or stopped purchasing products and services on a platform, its revenues would be seriously affected.

Legal Solutions: Section 230 of the CDA could be changed. Repeal of the law is probably not realistic; lobbyists will be out in force to protect the original statute, which legitimately fostered innovation. However, as long as these companies are exempt from liability, they have the right, often driven by profit, to leave potentially harmful content in place for the public to access, or to ban it. If Congress passed a law requiring companies to exclude or allow such sites, that law would violate the First Amendment right to free speech. Federal or state laws that effectively lead to censoring information are also prohibited. Alternatively, a law or regulation that broadly prohibits companies from determining the types of products, services, or information they can provide in the digital world would have severe ramifications of its own. And yet, society is demanding something.

In 2020, during the 116th Congressional session, twenty-five bills addressing Section 230 were introduced, the majority initiated by Republicans. As of March 2021, the 117th Congress had already proposed six bills, most led by House Republicans, one by Senate Democrats, and one co-sponsored by a bipartisan group of senators.[4] Given the correlation between divisive communications, extreme activist groups, and the Capitol riots of January 6, Republicans are taking the position that the “right to free speech” should prohibit private companies from censoring social media content based on bias and political motivations. Their bills focus either on repealing Section 230 or on penalizing digital organizations for closing down potentially damaging sites, since many ultra-conservative platforms have already been banned. These bills, however, fail to address the broader societal concern for harm. And even though conservatives accuse social media companies of a liberal bias, Twitter’s own internal research showed that its algorithm favored conservative politicians and news sources.[5]

To understand the legislative options, let us briefly summarize the relationship between internet hosts and interactive social platforms with regard to Section 230 and concerns about free speech:

  • Hosting companies and social media companies are exempt from liability under Section 230 because they are not considered publishers, whether they remove, or fail to remove, potentially harmful sites.
  • The law holds content creators liable for their own content.
  • Private companies have the freedom to selectively retain or ban content and platforms if they deem such material to be offensive, illegal as determined by the courts, or a safety risk to the public.
  • Congress cannot pass a law that bans or censors information except in the case of clearly illegal activity as spelled out in the law.
  • It would be difficult to repeal Section 230 outright, as it has broadly resulted in innovation. However, Congress can revise Section 230.

The focus of each of the proposed bills is categorized here. The first group summarizes the Republican bills’ intent and includes potential problems with each.

  • Prevent censorship by social media publishers by eliminating legal protections through repeal of Section 230. This risks frivolous lawsuits and constrained innovation.
  • Prohibit the posting of illegal materials by excluding such violations from social media platforms’ legal protections.
  • Discourage the banning of internet sites, other than illegal ones, by imposing fines on companies that do so. This would not necessarily address societal concerns about harm, and it interferes with private corporations’ right to determine their own programming.
  • Require social media platforms to report to the Department of Justice any groups planning, committing, promoting, or facilitating terrorism. While a worthy goal, this may put private corporations in the inappropriate role of facilitating government surveillance.

These next several bills, generated by either Democrats or bipartisan sponsorship in the Senate, could potentially move forward in some form.

  • Compel organizations to ban a site if a court has determined that it hosts illegal content or encourages illegal activities, or if broader illegal materials are posted. Exclude digital advertising from Section 230’s corporate liability immunity.
  • Require organizations to maintain transparency by developing and communicating their own policies for monitoring and taking down sites, following those policies, and maintaining a complaint system for the public. Ban illegal postings based on court orders. Allow an appeal process if a site is banned. Establish fewer responsibilities for smaller organizations.

The last bipartisan Senate bill might pass successfully and might stand up to constitutional scrutiny. As long as the requirements are not too onerous, the technology exists to maintain such monitoring systems, and frivolous complaints or lawsuits are minimized, this bill would initiate a regulatory oversight process.

There should be no question that website content creators must be held liable for their actions. Society now expects the same from social media publishers, but this responsibility can be more difficult to enforce when the digital provider exercises no real editorial authority and current law precludes requiring such oversight. In the future, it may be appropriate to bifurcate Section 230, limiting liability immunity to sites that “passively display content generated entirely by third parties” and withholding it from sites where the publisher “helps to develop problematic content.”[6]

It is even more difficult to discern all of the thorny issues of liability linked to misinformation, data privacy concerns, artificial intelligence programs, and digital advertising. These areas escalate the risk of corporate responsibility and begin to complicate the boundary between free-market competition and market-power concentration. None of the proposed Section 230 bills adequately addresses these complex issues. Generating a basic and common understanding of censorship in the digital world is a necessary first step toward social responsibility on digital platforms.

Faye Lincoln is an author and public policy analyst. Her book Values That Shape the World was released in August 2021. She evaluates historical and economic impacts on societal values and how to shape technology and innovation for a better future.  

 

[1] Electronic Frontier Foundation (EFF), “Section 230 of the Communications Decency Act,” www.eff.org/issues/cda230 (accessed 11/14/2021).

[2] American Library Association, “Challenge Support,” Apr 30, 2021. www.ala.org/tools/challengesupport (accessed 11/14/2021).

[3] EFF, “Section 230 Protections,” Jan 3, 2013. www.eff.org/issues/bloggers/legal/liability/230 (accessed 11/14/2021).

[4] Meghan Anand, Kiran Jeevanjee, et al., “All the Ways Congress Wants to Change Section 230,” Slate, Mar 23, 2021. slate.com/technology/2021/03/section-230-reform-legislative-tracker.html (accessed 11/14/2021).

[5] Dan Milmo, “Twitter admits bias in algorithm for rightwing politicians and news outlets,” The Guardian, Oct 22, 2021. www.theguardian.com/technology/2021/oct/22/twitter-admits-bias-in-algorithm-for-rightwing-politicians-and-news-outlets (accessed 11/14/2021).

[6] Valerie C. Brannon, “Liability for Content Hosts: An Overview of the Communication Decency Act’s Section 230,” Congressional Research Service, Jun 6, 2019. https://crsreports.congress.gov/product/pdf/LSB/LSB10306 (accessed 11/15/2021).

About the Author
Cultural ethicist Faye Lincoln is the author of Values That Shape the World: Ancient Precepts, Modern Concepts (Dialog Press, 2021). She has worked in high-level public policy and government relations for three decades and analyzes the value-based implications of US and Middle Eastern policy initiatives based on history, religion, and economics. She focuses on the effects of converging religious, political, and economic structures of society and culture. Ms. Lincoln views biblical history through the lens of her second-generation Holocaust experience.