Gonzalez v. Google Has the Potential to Change the Foundations of the Internet. What Are the Human Rights Implications?


Every time there is a new development in the world of technology, we hear about it from the technologists themselves: a keynote presentation with dazzling graphics, big pictures and bold declarations that "this is the future!" We're used to tech CEOs and entrepreneurs in California deciding the direction and future of our digital landscape. But right now, nine people nearly 3,000 miles away from the buzz and hum of Silicon Valley have the power to fundamentally change the internet as we know it.

The case

On Feb. 21, 2023, the U.S. Supreme Court heard oral arguments in Gonzalez v. Google LLC. The case centers on the Gonzalez family, who lost their daughter Nohemi in a terrorist attack in Paris, France, in 2015. Nohemi was a U.S. citizen, and after the attack that killed her, ISIS claimed responsibility for the violence in a video uploaded to YouTube. The Gonzalez family subsequently sued Twitter, Facebook and Google (which owns YouTube), alleging that their services, including the ability for ISIS members to upload recruitment content and an algorithm that recommends similar videos to potential extremist viewers, were indirectly responsible for their daughter's murder at a French café.

But criticism of big tech's failure to curb extremist content is nothing new. In 2017, European leaders from the United Kingdom, France and Italy met with Google, Facebook and Microsoft at a UN summit to push for more action against online extremism. Last year, the EU unveiled a new law holding tech companies liable for user content that violates EU member states' laws. This past September, the White House announced a series of policies and initiatives aimed at combating the online spread of extremism and terrorist rhetoric. So why does this case have the eyes of tech and legal experts glued to it?

“The twenty-six words that created the internet”

Gonzalez brings into question Section 230 of the Communications Decency Act, which gives online companies immunity from liability for content posted by third parties on their platforms. The law, created in 1996, seeks to protect users' right to free speech by protecting the platforms that host it. It reads:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Dubbed the "twenty-six words that created the internet," the law set the foundation for every website and social media platform we know today. The entire business models of tech behemoths like Facebook, Twitter and YouTube rest on this sentence.

Thus, the Supreme Court has a very difficult question to answer: Should social media companies be legally immune from liability for the content they host, even if that content can foment terrorism, contribute to the killing of American citizens, and spread hateful, racist, extremist ideology?

While nearly everyone agrees that Section 230 is due for reform 27 years after its inception, the tech and human rights communities are divided on what that reform should look like. On one hand, Section 230 provides necessary protections for platforms. Yelp isn't responsible when a reviewer complains about a restaurant, just as telephone companies aren't responsible for a conversation between two callers. If Section 230 is overturned, it could incentivize social media platforms to screen users' posts and practice surveillance at a massive scale, with troubling implications for user privacy rights. More importantly, however, a sweeping reform could radically change the way that platforms operate. Those that rely on user-generated content, whether it's TikTok or Wikipedia, and those that rely on user-guided algorithms, as every social media platform does, would be forced to redesign their models in ways that could create paywalls and remove constitutionally protected speech. As Scott Wilkens, Senior Counsel at the Knight First Amendment Institute, told the Institute for Rebooting Social Media, "One has to wonder whether the absence of Section 230 immunity for recommending content to users would have inhibited the invention or development of search engines like Google or social media platforms like Facebook."

On the other hand, social media companies have struggled to crack down on the spread of extremist content on their platforms. Recent school shootings and acts of domestic terrorism illustrate the need for tech companies to do more about hateful content. The mass shooter in Buffalo, New York, who murdered 10 Black people in a grocery store, was radicalized almost wholly online through racist and anti-Semitic content. And Facebook's failure to curb material that incited violence against Rohingya Muslims in Myanmar, who faced persecution and targeted killings, demonstrates that such failures can carry genocidal implications.

Implications for human rights

While much of the legal discourse surrounding Gonzalez emphasizes the First Amendment, there is also a critical international legal dimension to requiring harsher content moderation policies. Social media platforms allow virtually anyone with access to the internet to publish content to a public audience, where it has the potential to be seen by millions. Particularly in areas facing oppression, conflict or human rights abuses, social media has popularized citizen journalism as a way to identify, document and publicize human rights violations. This is especially important in countries where institutional instability, heightened insecurity and a lack of government cooperation make it difficult for human rights investigators to collect evidence on the ground. During the past decade, conflicts in Iraq, Syria, Sudan, Libya, Yemen and Myanmar have prompted a flood of content on Facebook, Twitter, YouTube and other social media platforms. Sometimes, user-generated content is the only documented evidence of abuse. This has revolutionized the way that human rights researchers collect evidence and study conflict, as these platforms have accidentally become the world's largest digital archives of human rights abuses and war crimes.

This content is often graphic in nature, and pressure to curb it is incentivizing social media platforms to wipe it from their sites. Gonzalez is only the most recent source of pressure on social media companies to handle their problem with extremist content. Human moderators were common at first, but now most platforms use algorithms to remove content before it's viewed by any human. This presents a problem for victims, human rights groups and international criminal courts, who use this content to build evidence for international criminal proceedings. As a result, these platforms simultaneously accumulate and delete most of the world's digital evidence for such proceedings. International investigators also rely on recommendation algorithms to surface related content, as doing so makes the process of locating legally relevant material significantly easier.

Either way the Court rules, there are potentially harmful human rights implications. Doing away with Section 230 would not only fundamentally transform the digital engagement space we're familiar with, but also incentivize platforms to wipe content critical to human rights investigations. Conversely, retaining Section 230 and handing responsibility for reforming it to Congress, which could take years to produce any meaningful progress, would keep platforms legally immune from liability for content that spreads hate, foments lies and even kills.


Joe Jocas

Joe Jocas is a senior majoring in International Relations and double minoring in Applied Analytics and Economics. On campus, he is a Research Assistant for the Security and Political Economy Lab, where he studies the "resource curse," the Editor-in-Chief of the Southern California International Review, a member of Delta Phi Epsilon, and a Trip Lead with Peaks and Professors. His research interests include democracy, governance, election security, misinformation/disinformation, and emerging technologies, with regional interests in Eastern Europe and Eurasia.

jocas@usc.edu