OpenAI Disrupts Covert Influence Operations
OpenAI recently revealed that it had disrupted five covert influence operations that were using its AI models for deceptive activity across the internet. Over the past three months, the threat actors behind these operations used OpenAI’s models to generate short comments and longer articles in a range of languages, and to create fake names and biographies for social media accounts.
Deceptive Activities Reported by OpenAI
The campaigns, reportedly originating from Russia, China, Iran, and Israel, covered topics including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, and politics in Europe and the United States. OpenAI stated that these deceptive operations were attempts to manipulate public opinion or influence political outcomes.
Concerns About Misuse of AI Technology
OpenAI’s disclosure has raised concerns about the potential misuse of generative AI technology, which can produce human-like text, imagery, and audio quickly and easily. In response to this threat, OpenAI announced the formation of a Safety and Security Committee to oversee the training of its next AI model.
Despite these efforts, OpenAI reported that the campaigns did not achieve increased audience engagement or reach through its services. The operations relied on a mix of AI-generated and manually written text, as well as memes copied from elsewhere on the internet.
AI-Generated Content Identified on Social Media Platforms
Meta Platforms, in a recent security report, identified likely AI-generated content used deceptively on Facebook and Instagram. This included comments praising Israel’s handling of the Gaza conflict, published below posts from global news organizations and US lawmakers.
These developments add to growing concerns about the misuse of AI technology for deceptive activity online.