OpenAI highlights the malicious use of AI in misinformation, cyberattacks, and fraud.

OpenAI has published a new report on emerging threats related to the misuse of its artificial intelligence models. The company details how state actors and cybercriminals have attempted to exploit its tools for misinformation campaigns, financial fraud, and cyberattacks.

Misinformation: AI as a Weapon in Global Manipulation

One of the report's main findings is the growing use of AI in political influence campaigns and social media misinformation. OpenAI detected actors linked to China and Iran who used its technology to generate articles, social media comments, and propaganda messages crafted to sway public opinion.

In one case, automated accounts used ChatGPT to write Spanish-language articles criticizing U.S. foreign policy, which were then published in digital media outlets across Latin America. The report also identified attempts to translate and adapt propaganda into multiple languages to maximize its global reach.

OpenAI says it has blocked and dismantled these operations and has shared its findings with platforms such as Meta and X (formerly Twitter) to strengthen the detection of artificially generated content.

Digital Fraud and AI-Driven Job Scams

Another malicious use identified in the report involves fraudulent hiring schemes and romance scams.

Groups linked to North Korea have used AI to generate responses during job interviews conducted under false identities, aiming to place operatives inside Western tech companies. These schemes seek access to sensitive information or revenue to fund the regime's illicit activities.

In another operation, detected in Southeast Asia, OpenAI identified language models being used to draft persuasive messages for romance scams and financial fraud. Victims were contacted through social media and, after weeks of interaction with fake profiles, coaxed into investing in fraudulent cryptocurrency or forex platforms.

Cyberattacks: The Use of AI in Hacking Automation

In addition to misinformation and fraud, OpenAI's report reveals that certain groups have attempted to use its tools to streamline cyberattack operations.

Malicious actors were identified using AI to generate malicious code, debug exploits, and automate network intrusions. These tactics are especially concerning among groups tied to cyber-espionage operations in Asia and the Middle East.

However, OpenAI notes that it has strengthened its detection systems and usage restrictions, blocking thousands of attempts to misuse its models and collaborating with cybersecurity agencies to mitigate these risks.

OpenAI’s Response and Security Measures

The company has reiterated its commitment to safety and has implemented new safeguards to prevent abuse of its tools. Key initiatives include:

  • Enhanced detection systems to identify patterns of misuse in real time (a brief illustration follows this list).
  • Collaboration with governments and tech companies to share information on emerging threats.
  • Restrictions on access to its models to prevent their exploitation in malicious activities.
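For illustration only, a platform applying this kind of screening might pre-check user input with OpenAI's publicly documented Moderation API before passing it to a model. The sketch below assumes the official openai Python client and an OPENAI_API_KEY in the environment; the wrapper function and sample prompt are hypothetical examples, not OpenAI's actual internal abuse-detection pipeline.

    # Minimal sketch: pre-screening user input with OpenAI's Moderation API.
    # The is_flagged() wrapper is an illustrative assumption, not OpenAI's
    # real detection system described in the report.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def is_flagged(text: str) -> bool:
        """Return True if the moderation endpoint flags the text."""
        response = client.moderations.create(
            model="omni-moderation-latest",
            input=text,
        )
        return response.results[0].flagged

    if __name__ == "__main__":
        prompt = "Write a persuasive investment pitch for my new crypto fund."
        if is_flagged(prompt):
            print("Prompt blocked by moderation screen.")
        else:
            print("Prompt passed the moderation screen.")

A real deployment would layer signals beyond a single API call, such as account history and request-rate patterns, but this shows the basic shape of an input-screening step.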

“Artificial intelligence has enormous potential to benefit humanity, but we must also be vigilant about the risks of its misuse. We will continue to work to ensure that our tools are used responsibly and ethically,” OpenAI stated in the report.

With this new analysis, OpenAI reinforces its stance on transparency and safety in the artificial intelligence industry, in a context where the use of these technologies continues to evolve rapidly.

Source: Artificial Intelligence News
