Disrupting a covert Iranian influence operation
OpenAI has banned accounts linked to an Iranian influence operation, Storm-2035, which used ChatGPT to create low-engagement content on U.S. politics and other topics, with minimal impact on public opinion.
OpenAI has taken action against a covert Iranian influence operation, identified as Storm-2035, which used ChatGPT to generate content on various topics, including the U.S. presidential campaign. The operation created long-form articles and social media comments that were disseminated through multiple accounts on platforms such as X and Instagram. Despite these efforts, the operation did not achieve significant audience engagement; most posts received minimal interaction. OpenAI's investigation found that the operation produced content on issues such as the Gaza conflict, U.S. politics, and the rights of Latinx communities, while mixing in non-political topics to appear more authentic. The company has banned the involved accounts, is actively monitoring for further violations, and has shared intelligence with relevant stakeholders to combat such activity. The operation's impact was assessed as low, indicating that it did not effectively manipulate public opinion or political outcomes.
- OpenAI banned accounts linked to an Iranian influence operation using ChatGPT.
- The operation generated content on U.S. politics but achieved low audience engagement.
- It produced both long-form articles and social media comments on various topics.
- OpenAI is dedicated to preventing abuse of its services and sharing intelligence with stakeholders.
- The operation's impact was assessed as low, indicating minimal effectiveness in influencing public opinion.
Related
ChatGPT just (accidentally) shared all of its secret rules
ChatGPT's internal guidelines were accidentally exposed on Reddit, revealing operational boundaries and AI limitations. Discussions ensued on AI vulnerabilities, personality variations, and security measures, prompting OpenAI to address the issue.
US officials announce the takedown of an AI-powered Russian bot farm
US officials and allies dismantle a Russian AI-powered bot farm with 1,000 fake accounts spreading disinformation on social media. The operation linked to RT's digital media department highlights challenges in countering AI-driven propaganda.
US disrupts Russian government-backed disinformation campaign that relied on AI
The U.S. Justice Department disrupted a Russian-backed disinformation campaign using AI to spread propaganda in the U.S. Fake social media profiles promoted Russian interests, including misinformation about Ukraine. The operation involved RT and Kremlin support, targeting multiple countries.
Partisan bot-like accounts continue to amplify divisive content on X
Partisan bot-like accounts on X amplified divisive political content, generating over 3 billion impressions before the UK general election, spreading conspiracy theories and disinformation while lacking clear political affiliations.
Meta says: Russia's AI tactics for US election interference are failing
Meta's report reveals Russia's generative AI tactics for U.S. election interference are largely ineffective. Despite disruptions, concerns about AI's potential for disinformation persist, highlighting the need for platform collaboration.
I requested the Internet Archive grab this copy: https://web.archive.org/web/20240816210620/https://teorator....
Let's say the OpenAI engineers are working on ChatGPT 5 and spent last month scraping Teorator and X/Twitter, where this material ended up. How does OpenAI know that the new model is not poisoned?
This isn't just OpenAI's problem, of course. Anyone training on the open Internet now has this problem.
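To make the concern concrete, one crude mitigation is to drop scraped documents whose source domain has been publicly attributed to an influence operation. This is a minimal sketch only; the blocklist entries and document shape here are hypothetical placeholders, and it is not how any lab actually filters its training data:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains attributed to influence operations
# (e.g., from published takedown reports). Placeholders, not real attributions.
BLOCKED_DOMAINS = {"influence-op.example", "fake-news-outlet.example"}

def is_blocked(url: str) -> bool:
    """True if the URL's host is a blocked domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

def filter_corpus(docs):
    """Keep only documents whose source URL is not on the blocklist.

    Each doc is assumed to look like {"url": ..., "text": ...}.
    """
    return (doc for doc in docs if not is_blocked(doc["url"]))
```

The catch, as the comment notes, is that this only works while the content is still attached to its source; once it has been reposted to X or quoted elsewhere, the provenance is gone and the filter sees ordinary user posts.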
Side note: this is some pretty terrible propaganda. The post about Kamala, immigrants, and climate change barely makes any sense.
X just hosted Trump for a live stream, so who exactly is being affected by a headline that reads "X censors Trump tweet"?
To give them the benefit of the doubt, they likely want to keep their detection methods secret to make circumvention more difficult. And it all sounds totally plausible, of course. But at the same time, a degree of skepticism is warranted: Microsoft has a huge incentive to fearmonger about AI so it can lock AI down and capture the market. And what better way to do that than to invoke the usual bogeymen?
[1] https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcor...
I'd be shocked if Twitter weren't stripping the metadata they are checking.
It is apparently C2PA, per [1].
[1] https://openai.com/index/understanding-the-source-of-what-we...
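For context on what that metadata is: C2PA manifests in JPEGs are carried in APP11 marker segments as JUMBF boxes, so a crude presence check looks something like the sketch below. This is a heuristic only; it scans marker segments for the "c2pa" box label rather than parsing JUMBF properly:

```python
import struct

def jpeg_segments(data: bytes):
    """Iterate over (marker, payload) pairs in a JPEG byte stream."""
    i = 2  # skip the SOI marker (FF D8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xDA:  # start of scan; entropy-coded data follows
            break
        # Segment length includes its own two length bytes.
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        yield marker, data[i + 4:i + 2 + length]
        i += 2 + length

def has_c2pa(path: str) -> bool:
    """Heuristic: C2PA manifests ride in APP11 (0xEB) JUMBF segments."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker == 0xEB and b"c2pa" in payload
               for marker, payload in jpeg_segments(data))
```

Since platforms routinely re-encode uploads, an image failing this check proves nothing about its origin, which is exactly the stripping problem raised above.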
I mean, I get that low-rent actors will use low-rent services to try to generate political garbage. Is there any evidence that this is actually having a measurable or meaningful impact?
Who exactly is fooled by these sites? And is it the sites that are the problem, or the relative lack of sophistication in American education when it comes to political corruption?
This is why I only use local models these days.
EDIT: Out of posts for today, but I've been pretty happy with Gemma 2. The context is short, but the performance is very good and it's easy to disable refusals.
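For anyone wanting to reproduce that setup, here is a minimal sketch of querying a locally served model through Ollama's HTTP API. It assumes Ollama is running (`ollama serve`) and the gemma2 model has been pulled; nothing leaves localhost:

```python
import json
import urllib.request

def ask_local(prompt: str, model: str = "gemma2") -> str:
    """Send a single non-streaming generation request to a local Ollama server."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local("Summarize the C2PA standard in one sentence."))
```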
But the social impact is significant. I'm reminded of a fake story about child kidnapping in India that caused a mob to burn alive the two people it targeted... They were completely innocent; the mob attacked them based on fake news. Now that can happen en masse.
This is coming soon after Trump decided to accuse Iran of being behind his assassination attempt (carried out by a white 20-year-old), and after Israel literally assassinated Hamas's chief negotiator while he was visiting Iran.
It seems like the powers that be are desperate for a war with Iran and will continue beating the drum to build consent.
Reminds me of the build-up to the 2003 Iraq invasion (you know, because "they have WMDs").
Many nations employ "patriotic citizens" in informal and semi-formal fashion, along with trained propaganda and "infowar" experts. I know of China and Israel doing this, but I'd assume it's everywhere.
I suppose this is a great opportunity for the people whose entire income comes from the fact that the US overpays insiders to supply its own military. If Trump wins, he gets to play tough and have the papers start his term praising him to the skies for starting a war with Iran; he'll also be able to blame everything on the last administration and work closely with the people who replace Netanyahu. If Harris wins, she's made no promises and has no beliefs, and will be aided by the press in blaming Gaza on Iran.
OpenAI itself is surely running larger covert influence operations in order to affect US legislation and elections.
> Similar to the covert influence operations we reported in May, this operation does not appear to have achieved meaningful audience engagement. The majority of social media posts that we identified received few or no likes, shares, or comments. We similarly did not find indications of the web articles being shared across social media.
Sounds like Russian Facebook ads.
For sure the purpose is noble, but it's a good reminder that everything you type, submit, or generate there is not private; it can be snooped on by strangers!