Judges Rule Big Tech's Free Ride on Section 230 Is Over
The Third Circuit Court ruled TikTok must face trial for promoting harmful content to children, challenging Section 230 protections and potentially impacting tech companies' business models reliant on targeted advertising.
The Third Circuit Court has ruled that TikTok must face trial for its role in promoting harmful content to children, marking a significant shift in the interpretation of Section 230 of the Communications Decency Act. This law previously provided tech companies with immunity from liability for user-generated content, allowing them to operate with minimal accountability. The case arose after a ten-year-old girl, Nylah Anderson, died after attempting a dangerous challenge promoted by TikTok's algorithm. The court's decision indicates that TikTok's algorithm, which curates content based on user demographics and interactions, constitutes its own speech, thus making the platform liable for the consequences of that speech. This ruling challenges the long-standing legal protections that have allowed tech companies to evade responsibility for harmful content, suggesting a potential end to their "free ride" under Section 230. Legal experts anticipate that this decision will prompt a reevaluation of the law and could lead to further litigation against tech platforms. The implications of this ruling extend beyond TikTok, potentially affecting the business models of major tech companies reliant on targeted advertising and user engagement.
- The Third Circuit ruled TikTok must stand trial for promoting harmful content to children.
- This decision challenges the protections offered by Section 230 of the Communications Decency Act.
- The ruling suggests that algorithms used by tech platforms can be considered their own speech, making them liable for content consequences.
- Legal experts expect this ruling to prompt a reevaluation of Section 230 and further litigation against tech companies.
- The decision could significantly impact the business models of major tech firms reliant on user engagement and targeted advertising.
Related
TikTok collected US user views on issues like abortion and gun control
The U.S. Justice Department accuses TikTok of collecting sensitive data and facilitating communication with ByteDance in China, raising national security concerns. TikTok contests the allegations and potential ban.
Uncle Sam sues TikTok for 'extensive' data harvesting from kids
The U.S. government has sued TikTok for violating children's privacy laws, alleging it allowed minors to create accounts without parental consent. The lawsuit seeks fines and a potential ban on the app.
Minds are 'not currency for social media,' says EU as TikTok kills Lite Rewards
TikTok is ending its Lite Rewards program in the EU to comply with the Digital Services Act, addressing concerns over potential addiction, especially among minors, and pledging not to introduce similar initiatives.
The TikTok Case Will Be Determined by What's Behind the Government's Black Lines
The U.S. government defends a potential TikTok ban citing national security risks from ByteDance, while TikTok challenges the evidence's credibility, raising First Amendment concerns and proposing a special master for transparency.
Appeals court revives TikTok 'blackout challenge' death suit
A U.S. appeals court revived a lawsuit against TikTok regarding the death of 10-year-old Nylah Anderson, ruling that TikTok's algorithm does not qualify for Section 230 protection, impacting future liability assessments.
- Many commenters express concern that the ruling could lead to increased liability for social media companies, particularly regarding their algorithmic content recommendations.
- There is a strong sentiment that social media platforms should be held accountable for the content they promote, especially when it comes to protecting children.
- Some believe that this ruling may benefit larger tech companies while creating barriers for startups, potentially stifling competition.
- Several users argue for a reevaluation of Section 230, suggesting it should adapt to the realities of modern algorithms and content curation.
- Concerns about government overreach and censorship are prevalent, with some fearing that increased regulation could harm free speech online.
I think the ultimate problem is that social media is not unbiased — it curates what people are shown. In that role they are no longer an impartial party merely hosting content. It seems this ruling is saying that the curation being algorithmic does not absolve the companies from liability.
In a very general sense, this ruling could be seen as a form of net neutrality. Social media platforms currently favor certain content while down-weighting other content. Sure, it might be at a different level than peering agreements between ISPs and websites, but it amounts to a similar phenomenon when most people interact with social media through the feed.
Honestly, I think I'd love to see what changes this ruling brings about. HN is quite literally the only social media site (loosely interpreted) I even have an account on anymore, mainly because of how truly awful all the sites have become. Maybe this will make social media more palatable again? Maybe not, but I'm inclined to see what shakes out.
"But by the time Nylah viewed these videos, TikTok knew that: 1) “the deadly Blackout Challenge was spreading through its app,” 2) “its algorithm was specifically feeding the Blackout Challenge to children,” and 3) several children had died while attempting the Blackout Challenge after viewing videos of the Challenge on their For You Pages. App. 31–32. Yet TikTok “took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their [For You Pages].” App. 32–33. Instead, TikTok continued to recommend these videos to children like Nylah."
We need to see another document, "App 31-32", to see what TikTok "knew". Could someone find that, please? A Pacer account may be required. Did they ignore an abuse report?
See also Gonzalez v. Google (2023), where a similar issue reached the U.S. Supreme Court.[1] That case was about whether YouTube's recommendations of Islamic State recruitment videos made Google liable for a death in an ISIS attack. The Court rejected the terrorism claim and declined to address the Section 230 question.
TikTok, Inc., via its algorithm, recommended and promoted videos posted by third parties to ten-year-old Nylah Anderson on her uniquely curated “For You Page.” One video depicted the “Blackout Challenge,” which encourages viewers to record themselves engaging in acts of self-asphyxiation. After watching the video, Nylah attempted the conduct depicted in the challenge and unintentionally hanged herself. -- https://cases.justia.com/federal/appellate-courts/ca3/22-3061/22-3061-2024-08-27.pdf?ts=1724792413
An algorithm accidentally enticed a child to hang herself. I've got code running on dozens of websites that recommends articles to read based on user demographics. There's nothing in that code that would or could prevent an article about self-asphyxiation being recommended to a child. It just depends on the clients that use the software not posting that kind of content, people with similar demographics to the child not reading it, and a child who gets the recommendation not reading it and acting it out. If those assumptions fail, should I or my employer be liable?

> Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too.
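A minimal sketch of the kind of demographic-similarity recommender described above (all names and data hypothetical) shows why: nothing in the pipeline ever inspects the content itself.

```python
# Hypothetical sketch of a demographic-similarity recommender.
# Note: nothing here looks at what an article actually contains --
# whatever users in the same demographic bucket read is what gets
# recommended to the next user in that bucket.
from collections import Counter

def recommend(target_user, users, reading_history, top_n=3):
    """users: user_id -> demographic bucket, e.g. ("10-14", "US").
    reading_history: user_id -> list of article ids read."""
    peers = [u for u, demo in users.items()
             if demo == users[target_user] and u != target_user]
    seen = set(reading_history.get(target_user, []))
    counts = Counter(article
                     for u in peers
                     for article in reading_history.get(u, [])
                     if article not in seen)
    return [article for article, _ in counts.most_common(top_n)]

# Hypothetical data: a risky article reaches a child purely because a
# demographically similar user read it first.
users = {"child_a": ("10-14", "US"),
         "child_b": ("10-14", "US"),
         "adult": ("35-44", "US")}
history = {"child_b": ["dance-tips", "risky-challenge"],
           "adult": ["tax-news"]}
```

Here `recommend("child_a", users, history)` surfaces "risky-challenge" to the child while never recommending the adult's reading; the only safeguard is the assumption that similar users don't read harmful content.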
I've agreed for years. It's a choice in selection rather than a 'natural consequence' ordering such as a chronological, threaded, or even end-user upvoted/moderated (outside the site's control) weighted sort.
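That distinction can be sketched in code (hypothetical, illustrative only): a chronological feed is a pure function of user-supplied data, while a curated feed bakes in weights the platform itself chose.

```python
# Hypothetical contrast: a "natural consequence" ordering vs. a
# platform-chosen one.
def chronological_feed(posts):
    # Pure function of user actions: newest post first.
    return sorted(posts, key=lambda p: p["ts"], reverse=True)

def curated_feed(posts, w_votes=2.0, w_recency=1.0):
    # The platform, not the user, picks these weights -- the
    # "choice in selection" the comment above points at.
    return sorted(posts,
                  key=lambda p: w_votes * p["votes"] + w_recency * p["ts"],
                  reverse=True)

# Sample data: an old high-engagement post vs. a fresh quiet one.
posts = [{"id": "old-viral", "ts": 1, "votes": 100},
         {"id": "fresh", "ts": 2, "votes": 0}]
```

With this data the chronological feed puts "fresh" first, while the curated feed promotes "old-viral" because of weights the platform selected.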
> “Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” the members wrote. “That interpretation enables Section 230 to fulfill Congress’s purpose of encouraging innovation in content presentation and moderation. The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike. Section 230’s protection remains as essential today as it was when the provision was enacted.”
[1][PDF] https://www.wyden.senate.gov/download/wyden-cox-amicus-brief...
[2] https://www.wyden.senate.gov/news/press-releases/sen-wyden-a...
However, if Section 230 is on its way out, won't this just benefit the largest tech companies that already have massive legal resources and the ability to afford ML-based or manual content moderation? The barriers to entry into the market for startups will become insurmountable. Perhaps I'm missing something here, but it sounds like the existing companies essentially got a free pass with regard to liability of user-provided content and had plenty of time to grow, and now the government is pulling the ladder up after them.
This is a pretty good take, and it relies on pre-Internet legal concepts like distributor and producer. There's this idea that our legal / governmental structures are not designed to handle the Internet age and therefore need to be revamped, but this is a counterexample that is both relevant and significant.
> So what happens going forward? Well we’re going to have to start thinking about what a world without this expansive reading of Section 230 looks like.
There was an internet before the CDA. From what I remember, it was actually pretty rad. There can be an internet after, too. Who knows what it would look like. Maybe it will be a lot less crowded, less toxic, less triggering, and less addictive without these gigantic megacorps spending beaucoup dollars to light up our amygdalas with nonsense all day.
It would be nice to see those 'For You' and YouTube's recommendations gone. Chronological timelines are the best, and will bring back some sanity. Don't like it? Don't follow it.
> Accordingly, TikTok’s algorithm, which recommended the Blackout Challenge to Nylah on her FYP, was TikTok’s own “expressive activity,” id., and thus its first-party speech.
>
> Section 230 immunizes only information “provided by another[,]” 47 U.S.C. § 230(c)(1), and here, because the information that forms the basis of Anderson’s lawsuit—i.e., TikTok’s recommendations via its FYP algorithm—is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.
Section 230 is not canceled. This is a significant but fairly narrow refinement of what constitutes original content and Stoller’s take (“The business model of big tech is over”) is vastly overstating it.
Some kinds of recommendation algorithms produce original content (speech) by selecting and arranging feeds of other user generated content and the creators of the algorithms can be sued for harms caused by those recommendations. This correctly attaches liability to risky business.
The businesses using this model need to exercise a duty of care toward the public. It’s about time they start.
More specific than being harmed by your product, Section 230 cares about content you publish and whether you are acting as a publisher (liable for content) or a platform (not liable for content). This quote is supposing what would happen if Section 230 were overturned. But in fact, there is a way that companies would protect themselves: simply don't moderate content at all. Then you act purely as a platform, and don't have to ever worry about being treated as a publisher. Of course, this would turn the whole internet into 4chan, which nobody wants. IMO, this is one of the main reasons Section 230 continues to be used in this way.
Would love to have a timeline-only, non-reinforcing content feed.
Wouldn't this be the perfect precedent case for why minors should not be allowed on social media?
Facebook has a “marketplace” but no customer support line. Google is serving people scam ads for months, leading to millions in losses. (Imagine if a newspaper did that.) And feeds are allowed to recommend content that would be beyond the pale if a human were curating it. But because “it’s just an algorithm bro” we give them a pass because they can claim plausible deniability.
If fixing this means certain companies can’t scale to a trillion dollars with no customer support, too bad. Google can’t vet every ad? They could, but choose not to. Figure it out.
And content for children should have an even higher bar than that. Kids should not be dying from watching videos.
The key thing people are missing is that TikTok is not being held responsible for the video content itself, they are being held responsible for their own code's actions. The video creator didn't share (or even attempt to share) the video with the victim- TikTok did.
If adults want to subscribe themselves to that content, that is their choice. Hell, if kids actively seek out that content themselves, I don't think companies should be responsible if they find it.
But if the company itself is the one proactively choosing to show that content to kids, that is 100% on them.
This narrative of being blind to the vagaries of their own code is playing dumb at best: we all know what the code we write does, and so do they. They just don't want to admit that it's impossible to moderate that much content themselves with automatic recommendation algorithms.
They could avoid this particular issue entirely by just showing people content they choose to subscribe to, but that doesn't allow them to inject content-based ads to a much broader audience, by showing that content to people who have not expressed interest/ subscribed to that content. And that puts this on them as a business.
> Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech. And now TikTok has to answer for it in court. Basically, the court ruled that when a company is choosing what to show kids and elderly parents, and seeks to keep them addicted to sell more ads, they can’t pretend it’s everyone else’s fault when the inevitable horrible thing happens.
If that reading is correct, then Section 230 isn't nullified, but there's something that isn't shielded from liability any more, which IIUC is basically the "Recommended For You"-type content feed curation algorithms. But I haven't read the ruling itself, so it could potentially be more expansive than that.
But assuming Matt Stoller's analysis there is accurate: frankly, I avoid those recommendation systems like the plague anyway, so if the platforms have to roll them back or at least be a little more thoughtful about how they're implemented, it's not necessarily a bad thing. There's no new liability for what users post (which is good overall IMO), but there can be liability for the platform implementation itself in some cases. But I think we'll have to see how this plays out.
Court: Social Media algos are protected speech
Social Media: Yes! Protect us
Court: Since you're speech you must be liable for harmful speech as anyone else would be
Social Media: No!!
The Ninth Circuit has a reputation as flamingly progressive (see "Grants Pass v. Johnson", where SCOTUS overruled the Ninth Circuit, which had ruled that cities couldn't prevent homeless people from sleeping outside in public parks and sidewalks). The Fifth Circuit has a reactionary reputation (see "Food and Drug Administration v. Alliance for Hippocratic Medicine", which overruled a Fifth Circuit ruling that effectively revoked the FDA approval of the abortion drug mifepristone).
So: platforms aren't publishers, they are distributors (like newsstands or pharmacies).
So they are responsible for the goods they sell.
They aren’t responsible for user content - but they are responsible for what they choose to show.
This is going to be dramatic.
I am not happy with how governments think they can dictate what internet users can and cannot see.
With respect to TikTok, parents need to have some discipline and not give smartphones to their ten-year-olds. You might as well give them a crack pipe.
I guess that's one way to say that you never experienced the early internet. In three words: rotten dot com. Makes all the N-chans look like teenagers smoking on the corner, and Facebook et al. look like toddlers in padded cribs.
This will frankly hurt any and all attempts to host any content online, and if anyone can survive it, it will be the biggest corporations alone. Section 230 also protected ISPs and hosting companies (Linode, Hetzner, etc.), after all.
Their targeting may not be intentional, but will that matter? Are they willing to be jailed in a foreign country because of their perceived inaction?
Parasites.
I think especially content for children should be much more severely restricted, as it is with other media.
It’s pretty well-known that age is easy to fake on the internet. I think that’s something that needs tightening as well. I’m not sure what the best way to approach it is though. There’s a parental education aspect, but I don’t see how general content on the internet can be restricted without putting everything behind an ID-verified login screen or mandating parental filters, which seems quite unrealistic.
Once they've weighed in, we'll know if the "free ride" really is over, and if so what ride replaces it.
This is fascinating and raises some interesting questions about where the liability starts and stops i.e. is "trending/top right now/posts from following" the same as a tailored algorithm per user? Does Amazon become culpable for products on their marketplace? etc.
For good or for bad, this century's Silicon Valley was built on Section 230, and I don't foresee it disappearing any time soon. If anything, I suspect it will be refined by future legislation rather than removed. No one wants to be the person who legislates away all online services...
If you actually want to understand something about the decision, I highly recommend Eric Goldman's blog post:
https://blog.ericgoldman.org/archives/2024/08/bonkers-opinio...
If you're going to throw up your hands and say, "Well, users posted this, not us!" then you'd better not promote or bury any content with any algorithm, period. These assholes (TikTok et al) are now getting what they asked for with their abusive behavior.
To delete the useless garbage, one has to develop an idea of where the line is supposed to be. The bias there will eventually touch all angles of human discourse. As an audience matures, it gets more obvious what they would consider interesting or annoying. More bias.
Then there are legal limits in each country, the "correct" religion, and nationalism.
Quite the shit storm.
Excellent news, frankly.
... because it's small tech that need Section 230. If anything, retraction of 230 will be the real free ride for big tech, because it will kill all chance of threatening competition at the next level down.
So, we actually have to watch out for kids, and maybe only have a 25% profit margin? Oh, so terrible! /s
I'm 100% against the political use of censorship, but 100% for the reasonable use of government to promote the general welfare, secure the blessings of liberty for ourselves, and our posterity.
Whether it was Eugene Debs being thrown in the pokey, or every Japanese civilian on the west coast, or some harmless muslim suburbanite getting waterboarded, nothing ever changes. Wake me up when they actually do something to Facebook.
The Chinese and Iranians are taking advantage of this, and that's not something I would want to entrust to them.
Section 230 absolutely should come with an asterisk that if you train an algorithm to do your dirty work you don't get to claim it wasn't your fault.
An algorithm in this context is nothing more than a reflection of what all the humans who created it designed it to do. In this case, it's to deny Medicaid to make money. For RealPage, it's to drive up rents for profit. Health insurance companies are using "AI" to deny claims and prior authorizations, forcing claimants to go through more hoops to get their coverage. Why? Because the extra hoops will discourage a certain percentage.
All of these systems come down to a waterfall of steps you need to go through. Good design will remove steps to increase the pass rate. Intentional bad design will add steps and/or lower the pass rate.
Example: in the early days of e-commerce, you had to create an account before you could shop. Someone (probably Amazon) realized they lost customers this way. The result? You could build a shopping cart all you wanted and didn't have to create an account until you checked out. By that point you're already invested, so the overall conversion rate is higher. Even later, registration itself became optional.
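The funnel intuition can be made concrete (numbers hypothetical): overall conversion is roughly the product of the per-step pass rates, so removing a step raises the total.

```python
# Hypothetical funnel model: each step passes some fraction of users,
# so overall conversion is the product of the per-step pass rates.
from math import prod

def conversion(pass_rates):
    return prod(pass_rates)

forced_signup = [0.9, 0.6, 0.8]   # browse -> mandatory signup -> checkout
guest_checkout = [0.9, 0.8]       # browse -> checkout (signup deferred)
# Dropping the signup step lifts conversion from ~0.43 to ~0.72.
```

The same arithmetic runs in reverse for intentionally bad design: every added hoop multiplies in another pass rate below 1, shrinking the number of people who make it through.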
Additionally, these big consulting companies are nothing more than leeches designed to drain the public purse.