As the anniversary of the January 6 insurrection draws near, Senator Brad Hoylman introduces a bill to hold tech companies accountable for promoting vaccine misinformation and hate speech on social media


NEW YORK— In the week leading up to the anniversary of the infamous Jan. 6 insurrection at the United States Capitol, and as vaccine hesitancy continues to fuel the spread of the Omicron variant, Sen. Brad Hoylman (D/WFP-Manhattan) announced new legislation (S.7568) to hold social media platforms accountable for knowingly promoting disinformation, violent hate speech, and other unlawful content that could harm others. While Section 230 of the Communications Decency Act protects social media platforms from being treated as publishers or speakers of content shared by users on their apps and websites, this legislation focuses on the active choices these companies make when implementing algorithms designed to promote the most controversial and harmful content, creating a general threat to public health and safety.

Senator Hoylman said: “Social media algorithms are specifically programmed to spread disinformation and hate speech at the expense of the public good. Prioritizing this type of content has real costs to public health and safety. So when social media companies push anti-vaccine lies and help domestic terrorists plan a riot at the U.S. Capitol, they must be held accountable. Our new legislation will hold social media companies accountable for the dangers they promote.”

For years, social media companies have claimed protection from any legal consequences of their actions regarding the content on their websites by hiding behind Section 230 of the Communications Decency Act. However, social media websites are no longer mere hosts for their users’ content. Many social media companies use complex algorithms designed to surface the most controversial and provocative content to as many users as possible. These algorithms drive engagement with their platforms, keep users hooked, and increase profits. Social media companies using these algorithms are not a neutral forum for the exchange of ideas; they are active participants in the conversation.

In October 2021, Frances Haugen, a former Facebook employee, provided shocking testimony to United States Senators, saying the company was aware of research showing its product was harmful to teens but deliberately hid this research from the public. She also testified that the company was willing to use hateful content to keep users on its website.

Algorithmic amplification on social media has been linked to many societal ills, including vaccine misinformation, encouragement of self-harm, bullying and body-image issues among young people, and extremist radicalization leading to attacks like the January 6 insurrection at the U.S. Capitol.

When a website knowingly or recklessly promotes hateful or violent content, it creates a threat to public health and safety. The conscious decision to elevate certain content is an affirmative act, distinct from the mere hosting of information, and is therefore not contemplated by the protections of Section 230 of the Communications Decency Act.

This bill will provide a tool for the attorney general, corporation counsels of cities, and private citizens to hold social media companies and others accountable when they promote content that they know, or reasonably should know:

  1. advocates the use of force, seeks to incite or produce imminent unlawful action, and is likely to produce such action;
  2. advocates self-harm, seeks to incite or produce imminent self-harm, and is likely to produce such action; or
  3. includes a false statement of fact or a fraudulent medical theory that is likely to endanger the safety or health of the public.
