Supreme Court Makes Announcement That Has Democrats Absolutely Furious

OPINION: This article contains commentary which may reflect the author’s opinion

Big Tech giants have cloaked their power grabs, censorship, and other aggressive activity toward internet users behind legal protections that may be about to expire if the Supreme Court of the United States (SCOTUS) rules to strike down or narrow the federal statute known as ‘Section 230’.

“Section 230 is a section of Title 47 of the United States Code, enacted as part of the United States Communications Decency Act, that generally provides immunity for website platforms with respect to third-party content,” as the statute is commonly summarized.

The heading of the statute’s key subsection reads: Protection for ‘Good Samaritan’ Blocking and Screening of Offensive Material.

The question is, who gets to decide what is offensive?

On Monday, the Supreme Court agreed to hear a case that dives into that question and could change social media and the Internet forever.

“The case involves a man whose daughter was killed in a 2015 ISIS attack in Paris. The grieving father, Reynaldo Gonzalez, sued YouTube’s parent company, Google, under the U.S. Anti-Terrorism Act. Gonzalez claims that ISIS posted recruitment videos on YouTube, that YouTube recommended these videos to users, and that this led to his daughter’s death,” Reason reported in April 2022, in a brief description of the case.

The left warns that overturning the law would spell the end of the open internet, claiming that striking it down would likely usher in a new era of heavy-handed moderation as companies become legally liable for user posts.

“The case appears to be the court’s first test of Section 230 of the Communications Decency Act, a controversial provision that shields online platforms from lawsuits over moderation practices and user-posted content,” Axios reported on Tuesday, adding:

“Industry groups and supporters of Section 230 argue that its protections make it possible for website publishers and app services to use and moderate user-contributed content in ways that benefit their customers and society.”

The right, by contrast, sees the federal protections as cover for the left to censor its political opposition. Big Tech’s suppression of the Hunter Biden laptop story comes to mind.

Justice Clarence Thomas has already indicated he is not a fan of Section 230’s protections and has made it known he is looking for an appropriate case in which to rule on them.

SCOTUSblog notes that in 2020, Justice Clarence Thomas suggested that “in an appropriate case, we should consider whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by Internet platforms.” (Thomas made a similar suggestion earlier this year.)

“The petition in Gonzalez v. Google LLC tries to present itself as the case Thomas has been looking for,” writes SCOTUSblog’s Andrew Hamm:

“The district court dismissed Gonzalez’s claim on the ground that Section 230 nonetheless protected Google for its recommendations because ISIS produced the videos, not Google. The U.S. Court of Appeals for the 9th Circuit affirmed, concluding that Section 230 protects such recommendations, at least if the provider’s algorithm treated content on its website similarly. However, the panel in Gonzalez considered itself bound to reach this result because of a recent case on the same issue that another 9th Circuit panel decided while Gonzalez was pending. The Gonzalez panel further concluded that, if it could resolve the question itself, Section 230 would not protect a provider’s content recommendations,” Hamm continued.

Gonzalez concedes that the circuits that have faced this question are not split. Instead, he suggests that, had his case simply come out before the other 9th Circuit decision, Google would be the one seeking review. Regardless, he maintains, the providers’ financial dependence on advertisements, and hence on algorithms that can target users, makes this question of recommendations important to resolve.

Gonzalez is far from the first person to sue social media companies over terrorist acts. But these people “have yet to talk a court into agreeing with their arguments,” as Tim Cushing noted at Techdirt last fall.

“The facts in these cases are invariably awful: often people have been brutally killed and their loved ones are seeking redress for their loss. There is a natural, and perfectly reasonable, temptation to give them some sort of remedy from someone, but as we argued in our brief, that someone cannot be an internet platform,” wrote constitutional lawyer Cathy Gellis in 2017:

There are several reasons for this, including some that have nothing to do with Section 230. For instance, even if Section 230 did not exist and platforms could be liable for the harms resulting from their users’ use of their services, for them to be liable there would have to be a clear connection between the use of the platform and the harm.

Section 230 “should prevent a court from ever even reaching the tort law analysis,” added Gellis. “With Section 230, a platform should never find itself having to defend against liability for harm that may have resulted from how people used it.”

“Courts have long emphasized nontextual arguments when interpreting 230, leaving questionable precedent in their wake,” Thomas wrote in that 2020 statement. “Extending 230 immunity beyond the natural reading of the text can have serious consequences,” he added, specifying his concern about giving companies immunity from civil claims for “knowingly host[ing] illegal child pornography” and “for race discrimination.”

“We should be certain that is what the law demands,” he said.
