The popular American investor Warren Buffett recently expressed concern in an interview about the rise of AI, a technology he “does not understand” but whose potential for scams is clear to him. In Buffett’s eyes, the AI-driven scam industry is about to explode, and coming from someone who has dedicated his life to anticipating which companies will succeed, this warning carries significant weight.
The truth is, AI’s potential to mimic all kinds of content is terrifying. The launch of ChatGPT showed for the first time that AI can imitate human behavior, in this case written text, with precision that is practically indistinguishable from the real thing. More and more texts in circulation are written entirely by ChatGPT, while human creativity takes a back seat.
ChatGPT’s ability to produce coherent text in a matter of seconds lets cyber attackers and scammers write scams in almost any language without even needing to speak it. The result is fraudulent emails and smishing (SMS phishing) messages that are far more credible than previous scam attempts, multiplying their success rate.
But AI does not stop at ChatGPT texts. In recent months, fake photos made with AI applications such as Midjourney or DALL-E have appeared in growing numbers. These photos are used for propaganda, disinformation, or digital scams, and the worst part is that they can be generated from a simple text description.
Malicious applications even allow the creation of photos or videos that depict a real person in sexual poses, something that has caused numerous problems in secondary schools in Spain. Women, particularly teenagers, are the main victims of these ‘deepfakes’, which can have devastating consequences and in some cases have even led to suicide.
Even worse, AI can create videos that also replicate the voices of the people appearing in them, making it possible to generate fake clips of public figures like Joe Biden or Pedro Sánchez saying things they have never said. Experts worry about an avalanche of ‘deepfake’ videos in the months leading up to any election, which could sway the outcome of the vote.
Warren Buffett even raises the possibility of ‘deepfakes’ being used to scam individuals by forging videos of their family members. The likeness of a person’s children could be cloned to send videos requesting money. This scam, known as the ‘distressed family member’ scam, was previously carried out by sending text messages with a falsified profile picture.
Keeping our private data out of the reach of cyber attackers is therefore an essential cybersecurity measure if we want to avoid falling victim to scams of this kind. Attackers need samples of a person’s voice or face to create these fake videos, so it is wise to minimize the amount of personal photos and videos shared on social media.
Likewise, using a VPN helps reduce the risk of data leaks from man-in-the-middle attacks thanks to its strong encryption. And, of course, keeping antivirus software updated is essential to protect our devices against malware. The threat of ‘deepfakes’ is very real, so the importance of robust cybersecurity cannot be overstated.
The lack of regulation is a problem
While artificial intelligence has great potential to help humanity, it remains an extraordinarily powerful tool that can be used for harmful purposes. The problem is that no government is regulating it adequately. Even if institutions like the European Parliament were to establish strict regulations on AI, they would have limited effect as long as AI systems operate from outside Europe.
In the absence of appropriate regulatory measures and, above all, a global agreement to keep artificial intelligence in check, a significant increase in hacks and scams where AI plays a central role can be expected. If we do not want to fall victim to these types of cyberattacks, the best thing we can do is strictly control what we publish online and remain vigilant against possible fake content.