August 16th, 2024

Disrupting a covert Iranian influence operation

OpenAI has banned accounts linked to an Iranian influence operation, Storm-2035, which used ChatGPT to create content on U.S. politics and other topics; the content drew little engagement and had minimal impact on public opinion.

OpenAI has taken action against a covert Iranian influence operation, identified as Storm-2035, which used ChatGPT to generate content on a range of topics, including the U.S. presidential campaign. The operation produced long-form articles and social media comments that were disseminated through multiple accounts on platforms such as X and Instagram. Despite these efforts, it did not achieve significant audience engagement: most posts received minimal interaction. OpenAI's investigation found content on issues such as the Gaza conflict, U.S. politics, and the rights of Latinx communities, mixed with non-political topics to appear more authentic. The company has banned the accounts involved, is actively monitoring for further violations, and has shared intelligence with relevant stakeholders to combat foreign influence operations. The operation's impact was assessed as low, indicating it did not meaningfully sway public opinion or political outcomes.

- OpenAI banned accounts linked to an Iranian influence operation using ChatGPT.

- The operation generated content on U.S. politics but achieved low audience engagement.

- It produced both long-form articles and social media comments on various topics.

- OpenAI is dedicated to preventing abuse of its services and sharing intelligence with stakeholders.

- The operation's impact was assessed as low, indicating minimal effectiveness in influencing public opinion.

Related

ChatGPT just (accidentally) shared all of its secret rules

ChatGPT's internal guidelines were accidentally exposed on Reddit, revealing operational boundaries and AI limitations. Discussions ensued on AI vulnerabilities, personality variations, and security measures, prompting OpenAI to address the issue.

US officials announce the takedown of an AI-powered Russian bot farm

US officials and allies dismantle a Russian AI-powered bot farm with 1,000 fake accounts spreading disinformation on social media. The operation linked to RT's digital media department highlights challenges in countering AI-driven propaganda.

US disrupts Russian government-backed disinformation campaign that relied on AI

The U.S. Justice Department disrupted a Russian-backed disinformation campaign using AI to spread propaganda in the U.S. Fake social media profiles promoted Russian interests, including misinformation about Ukraine. The operation involved RT and Kremlin support, targeting multiple countries.

Partisan bot-like accounts continue to amplify divisive content on X

Partisan bot-like accounts on X amplified divisive political content, generating over 3 billion impressions before the UK general election, spreading conspiracy theories and disinformation while lacking clear political affiliations.

Meta says: Russia's AI tactics for US election interference are failing

Meta's report reveals Russia's generative AI tactics for U.S. election interference are largely ineffective. Despite disruptions, concerns about AI's potential for disinformation persist, highlighting the need for platform collaboration.

29 comments
By @sweeter - 8 months
CNN just did a piece on private Israeli groups doing the exact same thing. The sheer scale of this is pretty scary. Literally any entity or group can spin up a ton of bots and use any AI service, local or otherwise, to attempt to sway public opinion.
By @PerilousD - 8 months
These jokers seem like the AI version of "script kiddie" hackers, and OpenAI may be engaging in a bit of humble bragging. It doesn't take considerable investments in time or money to run local LLMs, where your questions, prompts, and results are not sent home to the mothership, so the article says nothing about the real actors who may or may not be doing this. NOW, if OpenAI or Gemini or Llama, etc., showed how they analyzed social media posts, flagged the ones that were AI-generated, and explained WHY each article was flagged, that would be much more useful, actionable by at least some readers, and would put the accounts spreading the content (particularly the rebroadcast fluffers) in the spotlight.
By @simonw - 8 months
From a Google search it looks like this is one of the articles in question: https://teorator.com/index.php/2024/08/12/x-censors-trumps-t...

I requested the Internet Archive grab this copy: https://web.archive.org/web/20240816210620/https://teorator....

By @rvnx - 8 months
The headline should be "OpenAI publicly admits it supported an Iranian influence operation despite the sanctions"
By @ImHereToVote - 8 months
I wonder if it would be possible to get a list of countries that can have influence operations using ChatGPT and countries that can't.
By @kjellsbells - 8 months
One area OpenAI did not comment on was the likelihood that the AI-generated content here ends up in the training data for a later model.

Let's say the OpenAI engineers are working on ChatGPT 5 and spent last month scraping Teorator and X/Twitter, where this material ended up. How does OpenAI know that the new model is not poisoned?

This isn't just OpenAI's problem of course. Anyone training on the open Internet now has this problem.
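
The poisoning concern can be made concrete. Below is a minimal sketch of a scrape-filtering step, assuming a hypothetical `ai_likelihood` detector (stubbed here with a crude repetitiveness heuristic); this is not OpenAI's pipeline, which is unpublished, and the unreliability of real detectors is exactly why the problem is hard:

```python
# Minimal sketch of filtering AI-generated text out of a web scrape.
# ai_likelihood() is a hypothetical placeholder, not a real detector.

from collections import Counter
from dataclasses import dataclass


@dataclass
class Document:
    url: str
    text: str


def ai_likelihood(text: str) -> float:
    """Crude stand-in for an AI-text detector: scores lexical repetitiveness.

    Real detectors are ML models and are known to be unreliable, which is
    why any filter like this inevitably lets poisoned data through.
    """
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    counts = Counter(words)
    # Fraction of tokens that repeat an earlier token.
    return 1.0 - len(counts) / len(words)


def filter_scrape(docs: list[Document], threshold: float = 0.5) -> list[Document]:
    """Keep only documents the detector does not flag as likely machine-generated."""
    return [d for d in docs if ai_likelihood(d.text) < threshold]
```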

By @navaed01 - 8 months
It's great that OpenAI is using this infraction as an opportunity to posture about how open it is and how it is a company that can tame evil applications of AI, while totally missing, and not addressing, the broader concern: what if this were run on a local model? How are we spotting and stopping that?!

Side note: this is some pretty terrible propaganda. The post about Kamala, immigrants, and climate change barely makes any sense.

X just hosted Trump for a live stream; who is going to be swayed by a headline that reads "X censors Trump tweet"?

By @usefulcat - 8 months
I'd be more interested in an analysis of the likely intention of the campaign. Is it just an attempt to reduce voter turnout? If so, that doesn't seem all that useful by itself.
By @djaouen - 8 months
This is as much an indictment of ChatGPT as it is of the Iranians. According to OpenAI, their product produces output that no one in their right mind would want to read for any purpose.
By @programmarchy - 8 months
The linked PDF from Microsoft (Storm-2035 [1]) is more detailed and interesting than the blog post. However, what's missing from the reports is how they detected these operations and how they tied them to specific groups. There are a lot of claims made without all of the supporting evidence being shown.

To give them the benefit of the doubt, they likely want to keep their detection methods secret to make circumvention more difficult, and it all sounds totally plausible. But at the same time, a degree of skepticism is warranted, because Microsoft has a huge incentive to fearmonger about AI so it can lock the technology down and capture the market. And what better way to do so than with the usual bogeymen?

[1] https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcor...

By @lambdaba - 8 months
The only thing noteworthy about this is how small-scale it is, and that the perpetrators don't even bother, or don't have the means, to set up their own infrastructure.
By @bangaladore - 8 months
"...We ran these images through our DALL·E 3 classifier, which identified them as not being generated by our services..."

I'd be shocked if Twitter weren't stripping the metadata they are checking.

It is apparently C2PA per [1]

[1] https://openai.com/index/understanding-the-source-of-what-we...
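
For context, C2PA provenance data travels inside the image file itself, which is why re-encoding strips it. Below is a minimal sketch of a presence check, assuming only that the manifest's "c2pa" JUMBF label survives in the raw bytes; the real c2pa SDK does far more, including verifying the cryptographic signatures:

```python
# Rough heuristic for whether an image file carries an embedded C2PA manifest.
# C2PA data rides in JUMBF boxes (JPEG APP11 segments) labeled "c2pa"; a plain
# byte scan is NOT a substitute for the official c2pa SDK.
import sys


def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    # The ASCII label "c2pa" appears in the JUMBF content-type boxes.
    return b"c2pa" in data


if __name__ == "__main__":
    # Platforms that re-encode uploads typically strip these segments, so a
    # missing manifest says nothing about how the image was created.
    print(has_c2pa_manifest(sys.argv[1]))
```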

By @FactKnower69 - 8 months
Main takeaway is that OpenAI is reading all of your prompts :)
By @irthomasthomas - 8 months
Speaking of influence operations, the strawberry dude and "Lily" just did one of those Twitter voice group chat things where everyone tried to guess whether Lily was an AI or not. There just happened to be a Worldcoin rep in the room...
By @ComplexSystems - 8 months
Why wouldn't Iran just use Llama or something?
By @akira2501 - 8 months
Wow. What a high value target. [0]

I mean, I get that low-rent actors will use low-rent services to try to generate political garbage. Is there any evidence that this is actually having a measurable or meaningful impact?

Who exactly is fooled by these sites? And is it the sites that are the problem or the relative lack of sophistication in American education when it comes to political corruption?

[0]: https://niothinker.com/

By @katzinsky - 8 months
Yeah. Knowing my tools are judging my political positions and could self-destruct if the authors disagree just makes me love using my computer.

This is why I only use local models these days.

EDIT: Out of posts for today, but I've been pretty happy with Gemma 2. The context is short, but the performance is very good and it's easy to disable refusals.
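
For anyone curious about the local route, here is a minimal sketch of offline inference with Gemma 2 via Hugging Face transformers. The model id google/gemma-2-9b-it is gated behind the Gemma license on Hugging Face; once the weights are downloaded and cached, generation runs entirely on the local machine:

```python
# Minimal local inference with Gemma 2 via Hugging Face transformers.
# Assumes access to the gated "google/gemma-2-9b-it" weights; after the first
# download, nothing is sent to any remote service.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the C2PA standard in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```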

By @langsoul-com - 8 months
LLM-powered disinformation machines are terrifying. The barrier to entry and the sustained cost are so low.

But the social impact is significant. I'm reminded of a fake story about child kidnapping in India that caused a mob to burn the two people who were targeted alive... They were completely innocent; the mob attacked them based on fake news. Now that can happen en masse.

By @hobo_in_library - 8 months
It's hard to take an article like this at face value when they provide zero evidence for any of their claims.

This is coming soon after Trump decided to accuse Iran of being behind his assassination attempt (carried out by a white 20-year-old) and Israel literally assassinated Hamas's chief negotiator while he was visiting Iran.

It seems like the powers that be are desperate for a war with Iran and will continue beating the drum to build consent.

Reminds me of the build up to the 2003 Iraq invasion (you know, because "they have WMDs")

By @joe_the_user - 8 months
I don't think people should jump to any "this is the level Iran is at?" conclusions.

Many nations employ "patriotic citizens" in an informal and semi-formal fashion along with trained propaganda and "infowar" experts. I know of China and Israel doing this but I'd assume it's everywhere.

By @mediumsmart - 8 months
Influencing the politics of the United States? Is there an alternative choice I haven't heard of? I thought it was same-same in blue or red, as it always has been.
By @pessimizer - 8 months
Nothing like 900 million articles about secret Iranian plots from government-entangled dystopian megacorps and the pundits who love them, soon after the US starts moving troops into the Middle East to defend the progress of an ongoing genocide.

I suppose this is a great opportunity for the people whose entire income comes from the fact that the US overpays insiders to supply its own military. If Trump wins, he gets to play tough and have the papers start his term praising him to the skies for starting a war with Iran; he'll also be able to blame everything on the last administration and work closely with the people who replace Netanyahu. If Harris wins, she's made no promises and has no beliefs, and will be aided by the press in blaming Gaza on Iran.

OpenAI itself is surely running larger covert influence operations in order to affect US legislation and elections.

> Similar to the covert influence operations we reported in May, this operation does not appear to have achieved meaningful audience engagement. The majority of social media posts that we identified received few or no likes, shares, or comments. We similarly did not find indications of the web articles being shared across social media.

Sounds like Russian Facebook ads.

By @greatgib - 8 months
What they don't say in their post, but what we can infer and what is of interest, is that they probably had to inspect users' messages to determine that the accounts were being used to generate content for the influence operation.

For sure the purpose is noble, but it is a good reminder that everything you type, submit, or generate there is not private and could be snooped on by strangers!

By @commandpaul - 8 months
Given that any self-hosted open-source model would have worked just as well, I can't see this good-faith post as anything more than furthering OpenAI's long campaign for regulatory capture.