August 29th, 2024

Judges Rule Big Tech's Free Ride on Section 230 Is Over

The Third Circuit ruled that TikTok must face trial for promoting harmful content to children, challenging Section 230 protections and potentially upending tech business models that rely on targeted advertising.

The Third Circuit has ruled that TikTok must face trial for its role in promoting harmful content to children, marking a significant shift in the interpretation of Section 230 of the Communications Decency Act. That law has long given tech companies immunity from liability for user-generated content, allowing them to operate with minimal accountability. The case arose after ten-year-old Nylah Anderson died attempting a dangerous challenge that TikTok's algorithm had promoted to her. The court held that TikTok's algorithm, which curates content based on user demographics and interactions, constitutes the platform's own speech, making TikTok liable for the consequences of that speech. The ruling challenges the long-standing legal protections that have allowed tech companies to evade responsibility for harmful content, suggesting a potential end to their "free ride" under Section 230. Legal experts anticipate that the decision will prompt a reevaluation of the law and could invite further litigation against tech platforms. Its implications extend beyond TikTok, potentially affecting the business models of major tech companies that rely on targeted advertising and user engagement.

- The Third Circuit ruled TikTok must stand trial for promoting harmful content to children.

- This decision challenges the protections offered by Section 230 of the Communications Decency Act.

- The ruling suggests that algorithms used by tech platforms can be considered their own speech, making them liable for content consequences.

- Legal experts expect this ruling to prompt a reevaluation of Section 230 and further litigation against tech companies.

- The decision could significantly impact the business models of major tech firms reliant on user engagement and targeted advertising.

AI: What people are saying
The Third Circuit Court's ruling on TikTok has ignited a significant discussion regarding the implications for social media platforms and Section 230 protections.
  • Many commenters express concern that the ruling could lead to increased liability for social media companies, particularly regarding their algorithmic content recommendations.
  • There is a strong sentiment that social media platforms should be held accountable for the content they promote, especially when it comes to protecting children.
  • Some believe that this ruling may benefit larger tech companies while creating barriers for startups, potentially stifling competition.
  • Several users argue for a reevaluation of Section 230, suggesting it should adapt to the realities of modern algorithms and content curation.
  • Concerns about government overreach and censorship are prevalent, with some fearing that increased regulation could harm free speech online.
61 comments
By @nsagent - about 2 months
The current comments seem to say this rings the death knell of social media and just leads to government censorship. I'm not so sure.

I think the ultimate problem is that social media is not unbiased — it curates what people are shown. In that role they are no longer an impartial party merely hosting content. It seems this ruling is saying that the curation being algorithmic does not absolve the companies from liability.

In a very general sense, this ruling could be seen as a form of net neutrality. Currently social media platforms favor certain content while down-weighting other content. Sure, it might be at a different level than peering agreements between ISPs and websites, but it amounts to a similar phenomenon when most people interact with social media through the feed.

Honestly, I think I'd love to see what changes this ruling brings about. HN is quite literally the only social media site (loosely interpreted) I even have an account on anymore, mainly because of how truly awful all the sites have become. Maybe this will make social media more palatable again? Maybe not, but I'm inclined to see what shakes out.

By @Animats - about 2 months
This turns on what TikTok "knew":

"But by the time Nylah viewed these videos, TikTok knew that: 1) “the deadly Blackout Challenge was spreading through its app,” 2) “its algorithm was specifically feeding the Blackout Challenge to children,” and 3) several children had died while attempting the Blackout Challenge after viewing videos of the Challenge on their For You Pages. App. 31–32. Yet TikTok “took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their [For You Pages].” App. 32–33. Instead, TikTok continued to recommend these videos to children like Nylah."

We need to see another document, "App. 31–32", to learn what TikTok "knew". Could someone find that, please? A PACER account may be required. Did they ignore an abuse report?

See also Gonzalez v. Google (2023), where a similar issue reached the U.S. Supreme Court.[1] That case asked whether YouTube's recommendations of videos encouraging support for the Islamic State's jihad made Google liable for a terrorist attack in which the plaintiffs' relative was killed. The Court rejected the terrorism claim and declined to address the Section 230 claim.

[1] https://en.wikipedia.org/wiki/Gonzalez_v._Google_LLC

By @delichon - about 2 months

  TikTok, Inc., via its algorithm, recommended and promoted videos posted by third parties to ten-year-old Nylah Anderson on her uniquely curated “For You Page.” One video depicted the “Blackout Challenge,” which encourages viewers to record themselves engaging in acts of self-asphyxiation. After watching the video, Nylah attempted the conduct depicted in the challenge and unintentionally hanged herself. -- https://cases.justia.com/federal/appellate-courts/ca3/22-3061/22-3061-2024-08-27.pdf?ts=1724792413
An algorithm accidentally enticed a child to hang herself. I've got code running on dozens of websites that recommends articles to read based on user demographics. There's nothing in that code that would or could prevent an article about self-asphyxiation being recommended to a child. It just depends on the clients that use the software not posting that kind of content, people with similar demographics to the child not reading it, and a child who gets the recommendation not reading it and acting it out. If those assumptions fail, should I or my employer be liable?
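
For concreteness, here is a minimal sketch of that kind of demographics-driven recommender (hypothetical names, not the commenter's actual code); note that nothing in the ranking path inspects what the content is about:

    # Hypothetical sketch: recommend articles by demographic similarity only.
    def similarity(a: dict, b: dict) -> float:
        # Crude overlap: fraction of shared attributes with equal values.
        keys = set(a) & set(b)
        return sum(a[k] == b[k] for k in keys) / len(keys) if keys else 0.0

    def recommend(user: dict, articles: list[dict],
                  readers: dict[str, list[dict]], n: int = 5) -> list[dict]:
        # Score each article by how similar its past readers are to this user.
        scored = [(sum(similarity(user, r) for r in readers.get(a["id"], [])), a)
                  for a in articles]
        # No step here looks at what an article is ABOUT: an article on
        # self-asphyxiation ranks like any other if demographically similar
        # users engaged with it.
        return [a for _, a in sorted(scored, key=lambda t: t[0], reverse=True)[:n]]
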
By @mjevans - about 2 months
"""The Court held that a platform's algorithm that reflects "editorial judgments" about "compiling the third-party speech it wants in the way it wants" is the platform's own "expressive product" and is therefore protected by the First Amendment.

Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too."""

I've agreed for years. It's a choice in selection rather than a 'natural consequence' ordering such as a chronological, threaded, or even end-user upvoted/moderated (outside the site's control) weighted sort.
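
To make the distinction concrete, a hedged sketch with invented field names: the first two orderings follow fixed, user-driven rules, while the third encodes the platform's own weighting choices, i.e. a selection judgment:

    posts = [
        {"id": 1, "posted_at": 100, "user_votes": 75, "predicted_engagement": 0.2},
        {"id": 2, "posted_at": 200, "user_votes": 40, "predicted_engagement": 0.9},
    ]

    # "Natural consequence" orderings: mechanical rules outside the site's control.
    chronological = sorted(posts, key=lambda p: p["posted_at"], reverse=True)
    user_voted = sorted(posts, key=lambda p: p["user_votes"], reverse=True)

    # Editorial ordering: the platform picks the weights, a choice in selection.
    W_ENGAGE, W_FRESH = 0.8, 0.2
    now = 300
    curated = sorted(posts,
                     key=lambda p: W_ENGAGE * p["predicted_engagement"]
                                   + W_FRESH / (1 + now - p["posted_at"]),
                     reverse=True)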

By @hn_acker - about 2 months
For anyone making claims about what the authors of Section 230 intended or the extent to which Section 230 applies to targeted recommendations by algorithms, the authors of Section 230 (Ron Wyden and Chris Cox) wrote an amicus brief [1] for Gonzalez v. Google (2023). Here is an excerpt from the corresponding press release [2] by Wyden:

> “Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” the members wrote. “That interpretation enables Section 230 to fulfill Congress’s purpose of encouraging innovation in content presentation and moderation. The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike. Section 230’s protection remains as essential today as it was when the provision was enacted.”

[1][PDF] https://www.wyden.senate.gov/download/wyden-cox-amicus-brief...

[2] https://www.wyden.senate.gov/news/press-releases/sen-wyden-a...

By @Xcelerate - about 2 months
I'm not at all opposed to implementing new laws that society believes will reduce harm to online users (particularly children).

However, if Section 230 is on its way out, won't this just benefit the largest tech companies, which already have massive legal resources and can afford ML-based or manual content moderation? The barriers to entry for startups will become insurmountable. Perhaps I'm missing something here, but it sounds like the existing companies essentially got a free pass on liability for user-provided content and had plenty of time to grow, and now the government is pulling the ladder up after them.

By @octopoc - about 2 months
> In other words, the fundamental issue here is not really whether big tech platforms should be regulated as speakers, as that’s a misconception of what they do. They don’t speak, they are middlemen. And hopefully, we will follow the logic of Matey’s opinion, and start to see the policy problem as what to do about that.

This is a pretty good take, and it relies on pre-Internet legal concepts like distributor and producer. There's this idea that our legal / governmental structures are not designed to handle the Internet age and therefore need to be revamped, but this is a counterexample that is both relevant and significant.

By @tboyd47 - about 2 months
Fantastic write-up. The author appears to be making more than a few assumptions about how this will play out, but I share his enthusiasm for the end of the "lawless no-man’s-land" (as he put it) era of the internet. It comes at a great time too, as we're all eagerly awaiting the AI-generated content apocalypse. Just switch one apocalypse for a kinder, more human-friendly one.

> So what happens going forward? Well we’re going to have to start thinking about what a world without this expansive reading of Section 230 looks like.

There was an internet before the CDA. From what I remember, it was actually pretty rad. There can be an internet after, too. Who knows what it would look like. Maybe it will be a lot less crowded, less toxic, less triggering, and less addictive without these gigantic megacorps spending buku dollars to light up our amygdalas with nonsense all day.

By @seydor - about 2 months
The ruling itself says that this is not about 230; it's about TikTok's curation and collation of the specific videos. TikTok is not held liable for the user content but for the part it plays in curating its 'For You' section. I guess it makes sense: manipulating people is not OK, whether it's for political purposes, as Facebook and Twitter do, or otherwise. So 230 is not over.

It would be nice to see those 'For You' sections and YouTube's recommendations gone. Chronological timelines are the best and will bring back some sanity. Don't like it? Don't follow it.

> Accordingly, TikTok’s algorithm, which recommended the Blackout Challenge to Nylah on her FYP, was TikTok’s own “expressive activity,” id., and thus its first-party speech.

> Section 230 immunizes only information “provided by another[,]” 47 U.S.C. § 230(c)(1), and here, because the information that forms the basis of Anderson’s lawsuit—i.e., TikTok’s recommendations via its FYP algorithm—is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.

By @chucke1992 - about 2 months
So basically closer and closer to governmental control over social networks. Seems like a global trend. Governments will define the rules by which communication services (and social networks) should operate.
By @skeltoac - about 2 months
Disclosures: I read the ruling before reading Matt Stoller’s article. I am a subscriber of his. I have written content recommendation algorithms for large audiences. I recommend doing one of these three things.

Section 230 is not canceled. This is a significant but fairly narrow refinement of what constitutes original content, and Stoller's take ("The business model of big tech is over") vastly overstates it.

Some kinds of recommendation algorithms produce original content (speech) by selecting and arranging feeds of other user generated content and the creators of the algorithms can be sued for harms caused by those recommendations. This correctly attaches liability to risky business.

The businesses using this model need to exercise a duty of care toward the public. It’s about time they start.

By @ssalka - about 2 months
> There is no way to run a targeted ad social media company with 40% margins if you have to make sure children aren’t harmed by your product.

More specific than being harmed by your product, Section 230 cares about content you publish and whether you are acting as a publisher (liable for content) or a platform (not liable for content). This quote is supposing what would happen if Section 230 were overturned. But in fact, there is a way that companies would protect themselves: simply don't moderate content at all. Then you act purely as a platform, and don't have to ever worry about being treated as a publisher. Of course, this would turn the whole internet into 4chan, which nobody wants. IMO, this is one of the main reasons Section 230 continues to be used in this way.

By @hnburnsy - about 2 months
To me this decision doesn't feel like it demolishes 230, but rather reduces its scope, a scope that was expanded by other court decisions. Per the article, 230 said not liable for user content and not liable for restricting content. This case is about liability for reinforcing content.

Would love to have a timeline-only, non-reinforcing content feed.

By @blueflow - about 2 months
Might be a cultural difference (I'm not from the US), but leaving a 10-year-old unsupervised with content from (potentially malicious) strangers really throws me off.

Wouldn't this be the perfect precedent for why minors should not be allowed on social media?

By @Smithalicious - about 2 months
Hurting kids, hurting kids, hurting kids -- but, of course, there is zero chance any of this makes it to the top 30 causes of child mortality. Much to complain about with big tech, but children hanging themselves is just an outlier.
By @janalsncm - about 2 months
Part of the reason social media has grown so big and been so profitable is that these platforms have scaled past their own abilities to do what normal companies are required to do.

Facebook has a “marketplace” but no customer support line. Google is serving people scam ads for months, leading to millions in losses. (Imagine if a newspaper did that.) And feeds are allowed to recommend content that would be beyond the pale if a human were curating it. But because “it’s just an algorithm bro” we give them a pass because they can claim plausible deniability.

If fixing this means certain companies can’t scale to a trillion dollars with no customer support, too bad. Google can’t vet every ad? They could, but choose not to. Figure it out.

And content for children should have an even higher bar than that. Kids should not be dying from watching videos.

By @ang_cire - about 2 months
This is wonderful news.

The key thing people are missing is that TikTok is not being held responsible for the video content itself; they are being held responsible for their own code's actions. The video creator didn't share (or even attempt to share) the video with the victim; TikTok did.

If adults want to subscribe themselves to that content, that is their choice. Hell, if kids actively seek out that content themselves, I don't think companies should be responsible if they find it.

But if the company itself is the one proactively choosing to show that content to kids, that is 100% on them.

This narrative of being blind to the vagaries of their own code is playing dumb at best: we all know what the code we write does, and so do they. They just don't want to admit that it's impossible to moderate that much content themselves with automatic recommendation algorithms.

They could avoid this particular issue entirely by just showing people content they choose to subscribe to, but that doesn't let them inject content-based ads to a much broader audience by showing content to people who never expressed interest in or subscribed to it. And that puts this on them as a business.
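
A toy contrast of the two feed policies described above, assuming a "predicted_engagement" field stands in for whatever scoring model a platform actually uses (all names hypothetical):

    def subscribed_feed(subs: set, posts: list) -> list:
        # Only content the user explicitly opted into: the user's own choice.
        return [p for p in posts if p["creator"] in subs]

    def injected_feed(subs: set, posts: list, threshold: float = 0.5) -> list:
        # The platform also pushes unsubscribed content it predicts will engage.
        # That extra selection step is the platform's choice, not the user's.
        injected = [p for p in posts
                    if p["creator"] not in subs
                    and p["predicted_engagement"] > threshold]
        return subscribed_feed(subs, posts) + injected

    posts = [
        {"creator": "alice", "title": "cooking tips", "predicted_engagement": 0.3},
        {"creator": "bob", "title": "viral challenge", "predicted_engagement": 0.9},
    ]
    print(subscribed_feed({"alice"}, posts))  # alice's post only
    print(injected_feed({"alice"}, posts))    # bob's video pushed by the platform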

By @WCSTombs - about 2 months
From the article:

> Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech. And now TikTok has to answer for it in court. Basically, the court ruled that when a company is choosing what to show kids and elderly parents, and seeks to keep them addicted to sell more ads, they can’t pretend it’s everyone else’s fault when the inevitable horrible thing happens.

If that reading is correct, then Section 230 isn't nullified, but there's something that isn't shielded from liability any more, which IIUC is basically the "Recommended For You"-type content feed curation algorithms. But I haven't read the ruling itself, so it could potentially be more expansive than that.

But assuming Matt Stoller's analysis there is accurate: frankly, I avoid those recommendation systems like the plague anyway, so if the platforms have to roll them back or at least be a little more thoughtful about how they're implemented, it's not necessarily a bad thing. There's no new liability for what users post (which is good overall IMO), but there can be liability for the platform implementation itself in some cases. But I think we'll have to see how this plays out.

By @kevwil - about 2 months
Whatever this means, I hope it means less censorship. That's all my feeble brain can focus on here: free speech good, censorship bad. :)
By @2OEH8eoCRo0 - about 2 months
I love this.

Court: Social Media algos are protected speech

Social Media: Yes! Protect us

Court: Since your algos are speech, you're liable for harmful speech as anyone else would be

Social Media: No!!

By @renewiltord - about 2 months
If I spam-filter comments, am I subject to this? That is, are the remaining comments effectively as if I were saying them?
By @deafpolygon - about 2 months
Section 230 is alive and well, and this ruling won't undo it. What will change is that US social media firms will move away from certain types of algorithmic recommendations. TikTok is owned by ByteDance, a Chinese firm, so in the long run there's no real impact.
By @telotortium - about 2 months
Anyone know what the reputation of the Third Circuit is? I want to know if this ruling is likely to hold up in the inevitable Supreme Court appeal.

The Ninth Circuit has a reputation as flamingly progressive (see "Grants Pass v. Johnson", where SCOTUS overruled the Ninth Circuit, which had ruled that cities couldn't prevent homeless people from sleeping outside in public parks and sidewalks). The Fifth Circuit has a reactionary reputation (see "Food and Drug Administration v. Alliance for Hippocratic Medicine", which overruled a Fifth Circuit ruling that effectively revoked the FDA approval of the abortion drug mifepristone).

By @intended - about 2 months
Hoo boy.

So: platforms aren't publishers, they are distributors (like newsstands or pharmacies).

So they are responsible for the goods they sell.

They aren’t responsible for user content - but they are responsible for what they choose to show.

This is going to be dramatic.

By @carapace - about 2 months
Moderation doesn't scale; it's NP-complete or worse. Massive social networks without moderation cannot work and cannot be made to work. Social networks require that the moderation system be a superset of the communication system, and that's not cost-effective (except where the two are co-extensive, e.g. Wikipedia, Hacker News, the Fediverse). We tried it because of ignorance (in the first place) and greed (subsequently). This ruling is just recognizing reality.
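
Whether or not the NP-completeness framing holds, the cost arithmetic alone is stark. A back-of-the-envelope sketch with illustrative numbers of my own choosing:

    # Back-of-the-envelope moderation scaling (all numbers illustrative).
    users = 1_000_000_000        # platform-scale user base
    posts_per_user_day = 0.5     # average items posted per user per day
    items_per_mod_day = 200      # items one human moderator can review daily

    items_per_day = users * posts_per_user_day            # 500,000,000
    moderators_needed = items_per_day / items_per_mod_day
    print(f"{moderators_needed:,.0f} full-time moderators")  # 2,500,000

A moderation workforce in the millions dwarfs any plausible headcount, which is the cost-effectiveness point.
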
By @janalsncm - about 2 months
This seems like it contradicts the case where YouTube wasn’t liable for recommending terrorist videos to someone.
By @jrockway - about 2 months
I'm not sure that Big Tech is over. Media companies have had a viable business forever. What happens here is that instead of going to social media and hearing about how to fight insurance companies, you'll just get NFL Wednesday Night Football Presented By TikTok.
By @game_the0ry - about 2 months
Pavel gets arrested, Brazil threatens Elon, now this.

I am not happy with how governments think they can dictate what internet users can and cannot see.

With respect to TikTok, parents need to have some discipline and not give smartphones to their ten-year-olds. You might as well give them a crack pipe.

By @drbojingle - about 2 months
There's no reason, as far as I'm concerned, that we shouldn't have a choice of algorithms on social media platforms. I want to be able to pick an open-source algorithm whose pros and cons I can understand. Hell, let me pick 5. Why not?
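
One hypothetical shape for that kind of algorithm choice, sketched as a pluggable ranker registry (no platform exposes such an API today; this is an assumption of the sketch):

    from typing import Callable

    # A feed ranker maps (user, candidate posts) -> ordered posts.
    Ranker = Callable[[dict, list], list]

    RANKERS: dict = {
        "chronological": lambda user, posts: sorted(
            posts, key=lambda p: p["posted_at"], reverse=True),
        "subscriptions": lambda user, posts: [
            p for p in posts if p["creator"] in user["subs"]],
    }

    def register(name: str, ranker: Ranker) -> None:
        # Open-source ranker authors publish here; users pick one (or five) by name.
        RANKERS[name] = ranker

    def build_feed(user: dict, posts: list) -> list:
        return RANKERS[user.get("ranker", "chronological")](user, posts)
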
By @falcolas - about 2 months
> the internet grew tremendously, encompassing the kinds of activities that did not exist in 1996

I guess that's one way to say that you never experienced the early internet. In three words: rotten dot com. Makes all the N-chans look like teenagers smoking on the corner, and Facebook et al. look like toddlers in padded cribs.

This will frankly hurt any and all attempts to host any content online, and if anyone can survive it, it will be the biggest corporations alone. Section 230 also protected ISPs and hosting companies (Linode, Hetzner, etc.) after all.

Their targeting may not be intentional, but will that matter? Are they willing to be jailed in a foreign country because of their perceived inaction?

By @1vuio0pswjnm7 - about 2 months
"In other words, the fundamental issue here is not really whether big tech platforms should be regulated as speakers, as that's a misconception of what they do. They don't speak, they are middlemen."

Parasites.

By @ratorx - about 2 months
I think a bigger issue in this case is the age. A 10-year old should not have access to TikTok unsupervised, especially when the ToS states the 13-year age threshold, regardless of the law’s opinion on moderation.

I think especially content for children should be much more severely restricted, as it is with other media.

It’s pretty well-known that age is easy to fake on the internet. I think that’s something that needs tightening as well. I’m not sure what the best way to approach it is though. There’s a parental education aspect, but I don’t see how general content on the internet can be restricted without putting everything behind an ID-verified login screen or mandating parental filters, which seems quite unrealistic.

By @tempeler - about 2 months
Finally, this points to the end of global social media. Jurisdiction cannot be used as a weapon; if you use it as one, others won't hesitate to use it against you.
By @drpossum - about 2 months
I hope this makes certain streaming platforms liable for the things certain podcast hosts say while they shovel money at and promote them above other content.
By @6gvONxR4sf7o - about 2 months
So under this new reading of the law, is it saying that AWS is still not liable for what someone says on Reddit, but now Reddit might be responsible for it?
By @Nasrudith - about 2 months
It is amazing how people were programmed to completely forget the meaning of Section 230 over the years just by repetition of the stupidest propaganda.
By @BurningFrog - about 2 months
Surely this will bubble up to the Supreme Court?

Once they've weighed in, we'll know if the "free ride" really is over, and if so what ride replaces it.

By @nness - about 2 months
> Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech.

This is fascinating and raises some interesting questions about where the liability starts and stops, e.g. is "trending/top right now/posts from following" the same as a per-user tailored algorithm? Does Amazon become culpable for products on its marketplace? Etc.

For good or for bad, this century's Silicon Valley was built on Section 230, and I don't foresee it disappearing any time soon. If anything, I suspect it will be supported or refined by future legislation instead of removed. No one wants to be the person who legislates away all online services...

By @tomcam - about 2 months
Have to assume dang is moderating his exhausted butt off, because the discussion on this page is vibrant and courteous. Thanks all!
By @rsingel - about 2 months
With no sense of irony, this blog is written on a platform that allows some Nazis, algorithmically promotes publishers, allows comments, and is thus only financially viable because of Section 230.

If you actually want to understand something about the decision, I highly recommend Eric Goldman's blog post:

https://blog.ericgoldman.org/archives/2024/08/bonkers-opinio...

By @skeptrune - about 2 months
My interpretation of this is it will push social media companies to take a less active role in what they recommend to their users. It should not be possible to intentionally curate content while simultaneously avoiding the burden of removing content which would cause direct harm justifying a lawsuit. Could not be more excited to see this.
By @DidYaWipe - about 2 months
While this guy's missives are not always on target (his one supporting the DOJ's laughable and absurd case against Apple being an example of failure), some are on target... and indeed this ruling correctly calls out sites for exerting editorial control.

If you're going to throw up your hands and say, "Well, users posted this, not us!" then you'd better not promote or bury any content with any algorithm, period. These assholes (TikTok et al) are now getting what they asked for with their abusive behavior.

By @linotype - about 2 months
Twitter sold at the perfect time. Wow.
By @theendisney - about 2 months
I put a few forums online that never got active users. What they did get was spam, plenty of it, a lot of it. We can imagine the sheer amount of garbage posted on HN, Reddit, Facebook, etc.

Deleting the useless garbage, one has to develop an idea of where the line is supposed to be. The bias there will eventually touch all angles of human discourse. As an audience matures, it gets more obvious what they would consider interesting or annoying. More bias.

Then there are legal limits in each country, the "correct" religion, and nationalism.

Quite the shit storm.

By @nitwit005 - about 2 months
I am puzzled why there are no arrests in this sort of case. Surely, convincing kids to kill themselves is a form of homicide?
By @endtime - about 2 months
Not that it matters, but I was curious and so I looked it up: the three-judge panel comprised one Obama-appointed judge and two Trump-appointed judges.
By @Devasta - about 2 months
This could result in the total destruction of social media sites. Facebook, TikTok, Youtube, Twitter, hell even Linkedin cannot possibly survive if they have to take responsibility for what users post.

Excellent news, frankly.

By @zmmmmm - about 2 months
What about "small tech"?

... because it's small tech that needs Section 230. If anything, retraction of 230 will be the real free ride for big tech, because it will kill all chance of threatening competition at the next level down.

By @oldgregg - about 2 months
Insane reframing. Big tech and politicians are pushing this, pulling the ladder up behind them-- X and new decentralized networks are a threat to their hegemony and this is who they are going after. Startups will not be able to afford whatever bullshit regulatory framework they force feed us. How about they mandate any social network over 10M MAU has to publish their content algorithms.. ha!
By @mikewarot - about 2 months
>There is no way to run a targeted ad social media company with 40% margins if you have to make sure children aren’t harmed by your product.

So, we actually have to watch out for kids, and maybe only have a 25% profit margin? Oh, so terrible! /s

I'm 100% against the political use of censorship, but 100% for the reasonable use of government to promote the general welfare, secure the blessings of liberty for ourselves, and our posterity.

By @hello_computer - about 2 months
This is a typical anglosphere move: Write another holy checklist (I mean, "Great Charter"), indoctrinate the plebes into thinking that they were made free because of it (they weren't), then as soon as one of the bulleted items leaves the regime's hiney exposed, have the "judges" conjure a new interpretation out of thin-air for as long as they think the threat persists.

Whether it was Eugene Debs being thrown in the pokey, or every Japanese civilian on the west coast, or some harmless muslim suburbanite getting waterboarded, nothing ever changes. Wake me up when they actually do something to Facebook.

By @stainablesteel - about 2 months
TikTok in general is great at targeting young women.

The Chinese and Iranians are taking advantage of this, and that's not something I would want to entrust to them.

By @2OEH8eoCRo0 - about 2 months
Fantastic! If I had three wishes, one of them might be to repeal Section 230.
By @trinsic2 - about 2 months
When I see CEOs and CFOs going to prison for the actions of their corporations, then I'll believe laws actually make things better. Otherwise any court decision that says some action is now illegal is just posturing.
By @phendrenad2 - about 2 months
I have no problem with this. Section 230 is almost 30 years old, written long before anyone could have imagined an ML algorithm curating user content.

Section 230 absolutely should come with an asterisk that if you train an algorithm to do your dirty work you don't get to claim it wasn't your fault.

By @jmyeet - about 2 months
What I want to sink in for people is that whenever they talk about an "algorithm", they're regurgitating propaganda specifically designed to absolve the purveyor of responsibility for anything that algorithm does.

An algorithm in this context is nothing more than a reflection of what all the humans who created it designed it to do. In this case, it's to deny Medicaid to make money. For RealPage, it's to drive up rents for profit. Health insurance companies are using "AI" to deny claims and prior authorizations, forcing claimants to go through more hoops to get their coverage. Why? Because the extra hoops will discourage a certain percentage.

All of these systems come down to a waterfall of steps you need to go through. Good design will remove steps to increase the pass rate. Intentional bad design will add steps and/or lower the pass rate.

Example: in the early days of e-commerce, you had to create an account before you could shop. Someone (probably Amazon) realized they lost customers this way. The result? You could build a shopping cart all you wanted and didn't have to create an account until you checked out. At that point you're already invested. The overall conversion rate is higher. Even later, registration itself became optional.
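
The funnel arithmetic behind that example is simple multiplication; with made-up pass rates, dropping the account-creation step raises overall conversion:

    # Illustrative funnel arithmetic (pass rates are made up).
    def conversion(pass_rates: list) -> float:
        total = 1.0
        for rate in pass_rates:
            total *= rate  # each extra step multiplies away customers
        return total

    with_signup = [0.9, 0.6, 0.8]    # browse -> create account -> checkout
    without_signup = [0.9, 0.8]      # browse -> checkout (account optional)
    print(conversion(with_signup))    # 0.432
    print(conversion(without_signup)) # 0.72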

Additionally, these big consulting companies are nothing more than leeches designed to drain the public purse