Users, businesses, and experts defend Big Tech against algorithm lawsuits
A diverse group of people and organizations supported Big Tech’s liability shield in a Supreme Court case involving YouTube’s algorithm. They included internet users, businesses, academics and human rights experts. Some argued that removing federal legal protections for AI-driven recommendation engines would have a significant impact on the open internet.
Major tech companies like Meta, Twitter and Microsoft weighed in at the Court, along with some of Big Tech’s most vocal critics, such as Yelp and the Electronic Frontier Foundation. Reddit, along with a group of volunteer Reddit moderators, also participated in the case.
What actually happened. The controversy stems from Gonzalez v. Google, a Supreme Court case that centers on whether Google is liable for suggesting pro-ISIS content through its YouTube recommendation algorithm.
Google claims that Section 230 of the Communications Decency Act protects it from such litigation. The plaintiffs, the relatives of a victim of the 2015 ISIS attacks in Paris, claim that YouTube’s recommendation algorithm makes Google liable under an American anti-terrorism law.
Reddit’s filing stated:
Reddit is built on users “recommending” content to others through actions such as upvoting and pinning. It is important to understand the implications of the petitioners’ claim in this case. Their theory would significantly increase the potential for internet users to be sued over their online activity.
Yelp is now on the scene. Yelp, which has a history of conflict with Google, claims that its business model depends on providing honest and legitimate reviews to users. A ruling holding recommendation algorithms liable could severely impact Yelp’s operations, as it would force the company to stop sorting reviews, including filtering out those that are manipulative or fake.
Yelp wrote:
If Yelp could not analyze and recommend reviews without facing liability, Yelp would have to display all reviews submitted… Business owners could submit hundreds of positive reviews for their own business with no effort and no risk of a penalty.
Meta’s involvement. Meta, the parent company of Facebook, argued in its legal submission that if Section 230 were changed to protect platforms’ ability to remove content but not to recommend it, serious questions would arise about what it means to recommend something on the internet.
Meta’s filing stated:
“If displaying third-party content in a user’s feed qualifies as ‘recommending’ it, then many services could face liability for virtually all of the third-party material they host, because nearly every decision about how to organize, sort, or display that content could be considered ‘recommending’ it.”
Advocates for human rights intervene. The Stern Center for Business and Human Rights at New York University argued that there is no easy way to craft a rule that singles out algorithmic recommendations for liability, and that attempting to do so could suppress or eliminate significant amounts of speech, especially speech from marginalized groups.
Why we care. This case could have major implications for how tech companies operate. If the Court rules that companies are liable for the recommendations their algorithms make, the design and operation of recommendation systems could change significantly.
That could mean heavier content curation and less content being recommended to users, along with higher legal costs and greater uncertainty for these businesses.
The post Users, businesses, and experts defend big technology against algorithm lawsuits appeared first on Search Engine Land.