States could let parents sue Big Tech for addicting kids. Here’s what that really means.


Lawmakers across the country are increasingly looking for ways to crack down on Instagram, TikTok, YouTube and others, claiming the platforms use addictive algorithms and exploit children. Last week, lawmakers in California and Minnesota advanced bills that would hold companies accountable for the consequences their platforms have on the mental health of young people. The bills coincide with calls for Washington to implement meaningful oversight of Big Tech to help keep children safe.

The California bill would allow parents to sue companies that fail to take steps to prevent child addiction. The proposal is the first of its kind in the United States and the most aggressive state-level effort to rein in Big Tech over its use of algorithmic tools that rely on children’s personal data to generate recommendations, along with other techniques designed to increase engagement. It would hold social platforms legally liable for features designed to be addictive for children, such as “Like” buttons and endless scrolling. Violators could face civil penalties of up to $25,000 per child, or damages that could include $1,000 or more per child in a class-action lawsuit, according to the Children’s Advocacy Institute at the University of San Diego School of Law, a co-sponsor of the bill.

Still, if passed, this sort of liability law would likely fail to rein in Big Tech, says Abbey Stemler, an associate professor of business law and ethics at Indiana University who specializes in internet law, regulatory theory and Big Tech data. “This law really doesn’t say anything,” she told TIME. “It’s too vague to be really enforceable.”

Where challenges remain

Dubbed the Social Media Platform Duty to Children Act, the proposal was advanced in the California Assembly on March 15 by a bipartisan pair of lawmakers, Republican Jordan Cunningham of Paso Robles and Democrat Buffy Wicks of Oakland, with backing from the Children’s Advocacy Institute. Cunningham told the Los Angeles Times that some of these companies intentionally design their apps to keep kids coming back for more. “Who should pay the social cost?” he asked. “Should it be borne by schools, parents and children, or should it be borne in part by the companies that profited from creating these products?”

California’s bill came the same day that another state, Minnesota, made progress on its own measure to protect young people from social media. A state committee voted to advance a bill that would ban social media platforms from using algorithms to recommend content to anyone under 18. The state’s Judiciary Finance and Civil Law Committee will take up the measure on March 22. Under the bill, companies would be liable for damages and a civil penalty of $1,000 for each violation of the law. “The bill would require anyone operating a social media platform with more than one million users to require that algorithm features be disabled for accounts owned by anyone under the age of 18,” the bill’s summary states.

While these types of proposals aim to force social platforms to take some responsibility for the harm inflicted by their algorithms, Stemler says a more effective strategy would be to adopt measures that address companies’ ability to access the data that feeds those algorithms in the first place.

“The reason the algorithms work is because they suck in as much data as possible about what these young people are doing,” she says. “And once they have that data, they can use it. So instead of saying, ‘Hey, don’t create addictive systems,’ we should really focus on [preventing platforms from] learning from this data. The easiest way to do this is to simply limit access to the data itself.”

Another bill introduced by Cunningham and Wicks in February, the California Age-Appropriate Design Code Act, takes a similar angle. That proposal would restrict social platforms’ collection of children’s personal and location data.

What’s going on in Congress

Congress has also moved forward with federal legislation designed to help reduce the dangers children face online. In February, Senators Richard Blumenthal and Marsha Blackburn introduced the Kids Online Safety Act, a bipartisan measure that would give children and their parents options to protect their information, disable addictive product features, and turn off algorithmic recommendations (platforms would be required to enable the strongest settings by default).

The push for these regulatory efforts is driven by the continued fallout over corporate documents leaked by Facebook whistleblower Frances Haugen. These documents showed that Meta, the parent company of Facebook and Instagram, played down its own research on the harmful effects of its platforms on young people, including eating disorders, depression, suicidal thoughts and more. This has led to a series of congressional hearings and growing calls for the biggest social media players to be held accountable for how they keep young users scrolling through content for as long as possible.

Features that encourage endless scrolling are among the most harmful to young people, according to the company’s own research. “Aspects of Instagram exacerbate each other to create a perfect storm,” reads one of the internal reports leaked by Haugen.

Algorithmic recommendation systems used by popular video platforms such as TikTok and YouTube have also drawn criticism. The New York Times reported in December that the inner workings of TikTok’s algorithm had been leaked by a source “disturbed by the app’s push for ‘sad’ content that could induce self-harm.”

As state and federal efforts expand, Stemler says it’s crucial that lawmakers get it right, and quickly.

“My concern for the mental health of this generation is serious,” she said. “There are deep issues stemming from the pandemic and isolation…technology has become how young people interact with the world.”

Write to Megan McCluskey at [email protected]
