First Amendment Op-Ed
Prompt: Analyze the complexities involved in the regulation of online behaviors under Section 230 of the Communications Decency Act and propose a solution
The First Amendment Tug of Words
By Carey K. Thorpe
Following Ketanji Brown Jackson’s Supreme Court confirmation hearings this month, Section 230 of the Communications Decency Act is being revisited by legislators and stands a strong chance of being taken up by the high court.
Democrats and Republicans alike have called for reform of the 1996 law, with special emphasis on the regulatory issues surrounding social media. Former President Trump argued for repeal of the provision, believing that Big Tech companies used the law to censor too much, while President Biden and his administration think platforms moderate too little.
Donald Trump’s presidency was nothing short of controversial, but whether he deserved to be silenced on Twitter is up for debate. Twitter not only removed some of his more disputed posts but removed his account altogether. This example is often brought up in discussions of free speech and content moderation as they relate to the current rule of law, which states in 47 U.S. Code, Section 230: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (Stemler, 2021) This means that companies like Twitter and Facebook can exist, allowing users to operate within the platform, while users remain liable and accountable for what they say and publish online, much the way freedom of speech operates offline. However, the law allows for the moderation of content at the discretion of the platform, with exceptions for content that violates federal criminal law, infringes copyright, or constitutes unlawful behavior. That last category, unlawful behavior, is the grey area that has created the call to amend the legislation.
It helps to understand that the Act was first created by Sen. James Exon, who wanted to remove and prevent “filth” on the Internet; the Communications Decency Act was an attempt to regulate pornographic material. However, much of it violated the First Amendment and was struck down as unconstitutional. What remained was Section 230, which was originally drafted as a legislative response to the 1995 case Stratton Oakmont, Inc. v. Prodigy Services Co.
The court’s decision meant that if a platform did any moderating of user content, it would be held liable for that content, but if it did nothing, there would be an open forum with unforeseen consequences; hence, Section 230 was created to shield companies from defamation liability for content published through their websites. Note that this was proposed at a time when less than 10% of Americans had access to the Internet, much less the ability to publish on it. Before this rule of law, bookstores, libraries, and newsstands were the only parties held liable for the content of others, and only if they knew the content was unlawful. That is a vastly different scenario from today’s capacious web and ubiquitous platforms, which allow third-party content to be published from all over the world, even when it is unpopular. The sheer magnitude of user-published content now warrants further accountability. But for whom? By whom? And for what? Those appear to be the real dilemmas.
While the Communications Decency Act containing Section 230 was proposed to regulate porn and, in the alternative, intended to prevent ‘interactive computer services’ from being sued, it has now become a partisan political target aimed at platforms like Twitter and Facebook. The call for repeal began to gather steam around the 2016 election, with Democrats such as Nancy Pelosi arguing that platforms aren’t moderating enough and Republicans like Ted Cruz arguing that they are moderating too much.
The reality is, “platforms get to decide what goes on their platforms ... it’s exercising its First Amendment right,” said Chris Cox, Section 230 co-author and former Representative.
Platform service providers are being positioned by those opposed to Section 230 to be held accountable for everything from the spread of child sexual abuse material, violent content, nonconsensual pornography, harassment, and hate speech to alleged political bias on social media. Doing so, however, would change the way the Internet we know and love operates. If social media providers were held accountable for the content of their users, they would not only be responsible for policing the Internet at an incredible rate, which could incentivize charging users fees; a rollback of Section 230 could also prompt more lawsuits against providers for any number of reasons, pushing platforms to limit their legal risk altogether by restricting users’ speech indefinitely.
As it stands, the Internet is our modern public square. The law has made it possible for people to connect, share knowledge, engage, and access information in new ways. Moreover, it allows users the same freedom of speech they would have offline, all while protecting the platform hosts. These are essentially First Amendment rights online. But if companies were not allowed to moderate according to what they find reasonable, harmful content, misinformation, and harassment could spiral into chaos, as the Capitol riot demonstrated. This kind of moderation, akin to prior restraint, is therefore warranted.

A common critique of the legislation is that social media platforms can pick and choose what they want, essentially giving them the power to censor. Jack Dorsey of Twitter weighed in on the criticism, saying that Section 230 isn’t the problem; it’s the belief that the platform isn’t acting in good faith. Trump accused Twitter of cherry-picking and censoring conservative content. But what’s often not considered is the “otherwise objectionable” text of the CDA, which gives providers the liberty to restrict material they or their users believe to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” This standard is often criticized as subjective, but in the eyes of the law, it turns on what is considered reasonable. Certain online behaviors are plainly unreasonable, like Donald Trump’s incitement of violence at the Capitol, but few platform users have the resources to fight the law’s loopholes.
In the call for reform, with over 20 proposals from Democrats and Republicans, some critics argue for a ‘good faith’ requirement to keep bad actors from exploiting the Act’s liability shield. As it stands, sites like Backpage, which enabled sex trafficking, the sharing of illicit content, and sexual abuse material, have left victims unable to seek justice against the online services where these activities could have been prevented; because of the clause’s language about ‘creating the content,’ their claims were thrown out. A good faith requirement would fix this, improving protection against unlawful content.

Additionally, Section 230 would benefit from transparency requirements about what is moderated and deleted, helping to create a politically neutral public square. What’s needed is a standard the courts can apply for what is reasonable and what is unlawful. More beneficial still would be adopting exceptions that permit state criminal prosecutions under state laws that parallel federal criminal law. Currently, the CDA carves out exceptions for intellectual property and federal law, including the Stop Enabling Sex Traffickers Act (SESTA) and the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), but the federal government still typically handles the bulk of trafficking prosecutions even with that bill in place. New exceptions that tighten the statute’s loopholes, resolving whether FOSTA also permits state civil claims concerning sex trafficking and what, exactly, constitutes “participation in a venture,” are needed to see real change to safety online.

With strong points from both sides of the fence, there doesn’t appear to be a one-size-fits-all legal regime, but what is certain is that the section must be reconsidered to accommodate the ever-growing volume of user-submitted content.