July 30th, 2024

Anthropic is scraping websites so fast it's causing problems

Anthropic faces criticism for aggressive web scraping while training its Claude model. The crawling has disrupted websites such as iFixit.com and Freelancer.com and raised ethical concerns about data usage and content creators' rights.

Anthropic has been criticized for its aggressive web scraping practices while training its Claude language model. Reports indicate that ClaudeBot has made excessive requests to various websites: iFixit.com experienced a million hits in a single day, and Freelancer.com reported an even more extreme case of 3.5 million hits in just four hours, leading it to block the bot because of the disruption to its operations. These incidents have raised concerns about the ethical implications of data scraping, especially since Anthropic reportedly ignores directives set in robots.txt files, which are intended to tell web crawlers what data can be accessed. Although Anthropic was founded by former OpenAI researchers with a commitment to developing responsible AI systems, its current practices have drawn parallels to the broader issue of plagiarism in AI model training. Other organizations, such as Read the Docs, have also called for more respectful behavior from AI crawlers. The situation highlights the ongoing tension between AI development and the rights of content creators, as companies seek to balance the need for data with ethical considerations.

12 comments
By @jsheard - 6 months
At least their bots accurately identify themselves in the User-Agent field even when they're ignoring robots.txt, so server-side blocking is on the table, for now at least.

ByteDance's crawler (Bytespider) is another one that disregards robots.txt but still identifies itself, and you should probably block it because it's very aggressive.

It's going to get annoying fast when they inevitably go full blackhat and start masquerading as normal browser traffic.
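
For illustration, a minimal WSGI-middleware sketch of that kind of User-Agent blocking (untested; the "ClaudeBot" and "Bytespider" substrings are assumptions based on the bot names above, so verify the exact tokens against your own access logs):

    # Minimal sketch: refuse requests whose User-Agent matches a known crawler.
    # The tokens below are assumptions; check real access logs for exact names.
    BLOCKED_UA_TOKENS = ("ClaudeBot", "Bytespider")

    class BlockBots:
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            ua = environ.get("HTTP_USER_AGENT", "")
            if any(token in ua for token in BLOCKED_UA_TOKENS):
                # Reject before the request reaches the application.
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Forbidden"]
            return self.app(environ, start_response)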

By @ericholscher - 6 months
For those saying "just use a CDN", it's not nearly that simple. Even behind a CDN, the crawlers on our site are hitting large files that aren't frequently accessed, which leads to high cache-miss rates:

https://fosstodon.org/@readthedocs/112877477202118215

By @l1n - 6 months
> Sites use robots.txt to tell well-behaved web crawlers what data is up for grabs and what data is off limits. Anthropic ignores it and takes your data anyway. That’s even if you’ve updated your robots.txt with the latest configuration details for Anthropic. [404 Media]

That claim doesn't seem supported by the citation: https://www.404media.co/websites-are-blocking-the-wrong-ai-s...

By @JohnFen - 6 months
It didn't take long for the "responsible" Anthropic to show its true colors.
By @lolpanda - 6 months
Cloudflare has a switch to block all unknown bots other than the well-behaved ones. Would this be a simple solution for most sites? I wonder whether the main concern here is that sites don't want to waste bandwidth/compute on AI bots, or that they don't want their content used for training.
By @jakubsuchy - 6 months
Just like Cloudflare, many providers now allow blocking: https://www.haproxy.com/blog/how-to-reliably-block-ai-crawle...

(disclaimer: I wrote this blog post)

By @superkuh - 6 months
I've noticed Anthropic bots in my logs for more than a year now, and I welcome them. I'd love for their LLM to be better at what I'm interested in. I run my website off my home connection on a desktop computer and I've never had a problem. I'm not saying my dozens of run-ins with the Anthropic bots (I've seen 3 variations so far) are totally representative, but they've been respecting my robots.txt.

They even respect extended robots.txt features like:

    User-agent: *
    Disallow: /library/*.pdf$
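
(For reference: in that extended syntax, * matches any run of characters and $ anchors the end of the path. A rough sketch of how a crawler might evaluate such a rule, translating it to a regex:)

    import re

    def rule_to_regex(rule):
        # Translate a robots.txt path rule using the extended '*' (wildcard)
        # and trailing '$' (end-anchor) syntax into a compiled regex.
        anchored = rule.endswith("$")
        body = rule[:-1] if anchored else rule
        pattern = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
        return re.compile(pattern + ("$" if anchored else ""))

    disallow = rule_to_regex("/library/*.pdf$")
    print(bool(disallow.match("/library/manuals/m100.pdf")))  # True  -> blocked
    print(bool(disallow.match("/library/index.html")))        # False -> allowed
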
I make my websites for other people to see. They are not secrets I hoard whose value goes away when copied. The more copies and derivations, the better.

I guess ideas like Creative Commons and sharing go away when the smell of money enters the water. Better lock all your text behind paywalls so the evil corporations won't get it. Just be aware: for every incorporated entity you block, you're blocking just as many humans with false positives, if not more. This anti-"scraping" hysteria is mostly profit-motivated.

By @zorrn - 6 months
I don't know if I should block Claude. I think it's really good and use it regularly, and it doesn't seem fair to say that others should provide the content while I benefit from it.
By @dzonga - 6 months
what happens when AI scrapers no longer have info to scrape?

funny thing - with WASM, the web won't be scrapable.

By @iLoveOncall - 6 months
One million hits in 24 hours is only about 11 TPS. If that's causing issues, then Anthropic isn't the problem; your application or hosting is.
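
Back-of-envelope with the figures from the article:

    # Average request rates implied by the reported numbers.
    print(1_000_000 / (24 * 60 * 60))  # iFixit: ~11.6 requests/second
    print(3_500_000 / (4 * 60 * 60))   # Freelancer: ~243 requests/second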