OpenAI, the company behind ChatGPT, has recently identified and shut down an influence campaign linked to the Iranian regime. The operation, aimed at disseminating false information about the US elections, used ChatGPT to generate misinformation that was then spread online.
While OpenAI did not disclose how many ChatGPT accounts it banned, it revealed that the operation, known as Storm-2035, involved at least 12 accounts on X and one on Instagram. These accounts focused on topics such as the US presidential election, the Israel-Hamas conflict, and Israel’s participation in the Paris Olympics.
The campaign’s strategy was to produce false content with ChatGPT and then spread it through social networks and websites affiliated with the Iranian regime. OpenAI says, however, that these efforts do not appear to have achieved significant audience engagement.
This revelation comes in the wake of an FBI announcement on August 12 about ongoing investigations into cyber attacks by hackers linked to the Iranian regime. Axios reported that Iran, rather than Russia, currently poses the most significant foreign government threat to the upcoming US presidential election scheduled for November 5.
Google has also reported ongoing attempts by Iranian-affiliated hackers to breach the personal accounts of individuals close to President Joe Biden, Vice President Kamala Harris, and former President Donald Trump. The tech giant identified a hacker group known as APT42, which has targeted about a dozen individuals associated with both the Democratic and Republican parties.
Microsoft’s Threat Analysis Center has observed a significant increase in malicious cyber activities by Iranian agents in the past three months. Their researchers note that, unlike Russia, Iran’s activities tend to start later in the election cycle and focus more on creating chaos than influencing the election outcome.
The tech company has identified four websites secretly operated by the Iranian regime, which present themselves as legitimate news outlets publishing articles on controversial topics. These sites appear designed to fuel political and social discord, covering a range of issues that appeal to both liberal and conservative audiences.
Patrick Warren, a professor and expert on disinformation at Clemson University, suggests that these websites may be attempting to establish credibility to disseminate false information in the future. He notes that compared to previous Iranian efforts, these sites appear more professional and credible, possibly due to the use of artificial intelligence in their creation.
While there is currently little evidence of these suspicious sites gaining a large audience or their content being widely shared on social networks, experts warn that they may still be in the early stages of establishing credibility. This could potentially allow them to publish hacked or manipulated content more effectively in the future.