August 17th, 2024

AI stole my job and my work, and the boss didn't know – or care

A freelance writer lost his job to an AI at Cosmos Magazine, which used his work without consent. This incident raises concerns about transparency and the value of human authorship in journalism.


A freelance writer recounts losing his job to an AI system at Cosmos Magazine, a relationship that had previously been a fulfilling collaboration. After he submitted a column, he and other freelancers were abruptly informed that no further submissions would be accepted due to financial difficulties. It was later revealed that Cosmos had developed a custom AI service, funded by a grant, that generated articles using content from contributors without their consent. This AI was likely trained on the writer's previous work, effectively replacing him. The decision to publish AI-generated content was made without informing the editorial staff, drawing criticism from former contributors and editors. The article highlights the broader implications of AI in journalism, emphasizing the need for transparency and the irreplaceable value of the human touch in writing. The author argues that while AI can produce content, it lacks the depth and engagement that human writers provide, and calls for a system to trace the origins of written content to ensure accountability.

- A freelance writer was replaced by an AI system at Cosmos Magazine without prior notice.

- The magazine used a grant to develop an AI service that generated articles from existing content.

- Contributors were not consulted about the use of their work in training the AI.

- The incident raises concerns about transparency and the value of human authorship in journalism.

- Critics emphasize the need for accountability in AI-generated content creation.

7 comments
By @jms703 - 2 months
Maybe an unpopular opinion, but some jobs shouldn't exist. We should fire some terrible jobs that can be done by machines.

Most of the content being manufactured today is quite lazy and I'm fine with AI writing it because I won't see it anyway.

But replacing a person without helping them grow into a new and more impactful job isn't the way to do this. I wish more companies would 'fire' certain types of work and help their employees grow into other roles.

By @szastamasta - 2 months
I hope these media companies realize sooner rather than later that if their content is AI-generated, there's no reason to go there. We'll just skip the middlemen and go straight to OpenAI or whatever. No need to visit their site.
By @BeetleB - 2 months
> Cosmos Magazine – Australia's rough analog of New Scientist

It's been a few decades since I read New Scientist, for good reason. It was a fairly low-quality publication - not unlike what Popular Science became.

If Cosmos is similarly poor, then this is not surprising.

By @righthand - 2 months
I think the LLM bot infatuation leads to a closed internet: sites block all untrusted traffic, and you request access to websites for information once you're validated as a human.

There are two camps of people in the AI war: people who want this because they think it's the future, so they are trying to apply it to everything, and people who don't want, or don't care about, LLMs being added to their daily life.

LLM bots everywhere might not go away, but neither will the pushback against them. Hurt enough people with your attempts to displace their jobs and they will develop defenses as a reaction to the outright abuse of the internet's goodwill.

By @bell-cot - 2 months
IANAL, but the specifics of this case sound worth running past a good Aussie IP lawyer. Maybe all he could do would be to send a few unpleasant letters. Or maybe those specifics could make this an "interesting" test case, giving the publication a painful or fatally expensive lesson in cutting-edge IP law.
By @SoftTalker - 2 months
> Techniques exist to watermark such AI generated content – readers easily could be alerted. But that idea has already been nixed by OpenAI CEO Sam Altman, who recently declared that AI watermarking threatened at least 30 percent of the ChatGPT-maker's business.

I realize it would be an arms race, but are there any browser plugins that can alert readers to content that is likely AI-generated?

I do think that some required disclosure that "this content was AI-generated" might be a good thing. Sort of like how magazine and newspaper advertisements that are formatted to look like news articles, or advertising programs on TV that look like news programs or talk shows, have to be disclosed as advertisements.

By @mannykannot - 2 months
I see that articles on the Cosmos website, nominally from the past week, are being attributed to human authors.