As attention turns to the role social media is playing in the rise of hate groups and conspiracy theorists in the U.S., Rep. Tom Malinowski is the lead sponsor of a federal bill that seeks to hold social media companies accountable for content promoting hate online.
Malinowski, along with Rep. Anna Eshoo (D-CA), recently reintroduced the Protecting Americans from Dangerous Algorithms Act, which is designed to hold social media platforms with more than 10 million users accountable for their algorithmic amplification of harmful, radicalizing content that leads to offline violence.
“Social media companies have been playing whack-a-mole trying to take down QAnon conspiracies and other extremist content, but they aren’t changing the design of a social network that is built to amplify extremism,” said Malinowski in a press statement.
Removing Liability Immunity
The bill narrowly amends Section 230 of the Communications Decency Act to remove liability immunity for a platform if its algorithm is used to amplify or recommend content directly relevant to a case involving interference with civil rights, failure to prevent interference with civil rights, or acts of international terrorism.
Malinowski noted the first two laws, Reconstruction-era statutes originally designed to reach Ku Klux Klan conspirators, have been invoked in recent lawsuits against the Proud Boys, the Oath Keepers, and others following the attack on the U.S. Capitol on Jan. 6. The third statute is implicated in several lawsuits, including one against Facebook, alleging its algorithm connected Hamas terrorists with one another and enabled physical violence against Americans.
“They feed us more fearful versions of what we fear, and more hateful versions of what we hate,” stated Malinowski. “Their algorithms are based on exploiting primal human emotions—fear, anger, and anxiety—to keep users glued to their screens, and thus regularly promote and recommend White Supremacist, Anti-Semitic, and other forms of conspiracy-oriented content.”
According to its sponsors, the bill preserves the core elements of Section 230 that protect the speech of users, narrowly targeting the algorithmic promotion of content that leads to some of the worst types of offline harms, and does not seek to mandate political “neutrality” as a condition for Section 230 protections.
“This legislation puts into place the first legal incentive these huge companies have ever felt to fix the underlying architecture of their services — something they’ve shown they are capable of doing but are consciously choosing not to,” said the New Jersey Congressman.
Section 230 immunizes online platforms from legal liability for user-generated content. While the law has helped to enable the growth of the modern internet economy, supporters of the bill note it was enacted 25 years ago when many of the challenges we currently face could not have been predicted.
Large internet platforms now use sophisticated algorithms to determine the content their users see, leveraging users’ personal and behavioral data to deliver content designed to maximize engagement and the amount of time spent on their platforms.
Malinowski and Eshoo allege that these engagement-based algorithms often amplify and recommend White Supremacist, Anti-Semitic, and other conspiracy-oriented material that can intensify fringe beliefs and lead to offline violence. The proposed legislation would establish the principle that platforms should be accountable for content they proactively promote when doing so leads to specific offline violence.
Push After Capitol Riot
“When social media companies amplify extreme and misleading content on their platforms, the consequences can be deadly, as we saw on January 6th. It’s time for Congress to step in and hold these platforms accountable. That’s why I’m proud to partner with Rep. Malinowski to narrowly amend Section 230,” said Rep. Eshoo.
Following the attack on the U.S. Capitol, Malinowski and Eshoo sent letters to the CEOs of Facebook, YouTube, and Twitter urging the companies to address the fundamental design features of their social networks that facilitate the spread of extreme, radicalizing content to their users. The lawmakers called on the companies to reexamine their policies of maximizing user engagement as the basis for algorithmic sorting and promotion of news and information, and to make permanent, platform-wide design changes to limit the spread of harmful, conspiratorial content.
Last year, Malinowski’s bipartisan resolution to condemn QAnon and the dangerous conspiracy theories it promotes passed the House of Representatives 371-18. He has also led the effort in the House to restore funding for the U.S. Department of Homeland Security’s program to combat domestic terrorism and targeted violence.