Photo by Jorge Zapata on Unsplash
We can probably all agree that, to achieve long-term success, companies need smart automation solutions that integrate Artificial Intelligence.
We can be confident that, applied to the field of cybersecurity, and in particular to AI-powered cyberattacks, this will change the rules of the game for both criminals and victims.
Several technologies can give us a vision of what we will find in the coming months. In this article we will focus on GPT-3 (Generative Pre-trained Transformer 3), but Google Duplex is another good example of AI, used outside a cybersecurity setting, that perfectly illustrates the concept of (adversarial) human-machine interaction. Or, to be a bit more dramatic, a new generation of Text-to-Speech (TTS) and Text-to-Visual-Speech (TTVS) attacks.
Duplex is a Google project that is currently live in most of the US. It allows users to make a restaurant reservation by phone; however, instead of the user speaking directly to the restaurant employee, Duplex, with the help of Google Assistant, speaks for the user. See Google Duplex in action here for an example of positive interaction between a human and a machine.
Generative Pre-trained Transformer 3 (GPT-3)
OpenAI is an AI research and deployment company whose mission is to ensure that artificial general intelligence benefits all of humanity.
In July, OpenAI released GPT-3, a new language model trained with 175 billion parameters, 10x more than any previous non-sparse language model, capable of programming, designing and even discussing politics or economics.
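To build a rough intuition of what a language model does, here is a vastly simplified toy sketch: it predicts the next word from bigram (word-pair) frequencies in a tiny corpus. GPT-3 does something conceptually similar, predicting the next token, but with 175 billion learned parameters instead of a simple frequency table, which is what makes its output so fluent.

```python
from collections import Counter, defaultdict

# Toy training corpus; GPT-3 was trained on hundreds of billions of words.
corpus = "the model predicts the next word and the next word follows the model".split()

# Count which word follows which (bigram statistics).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent successor of `word` seen in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("next"))  # -> "word"
```

Real language models generalize far beyond their training text, but the core task, predicting what comes next, is the same.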
Here is a Twitter thread with some of the most curious cases.
Despite the huge hype, Sam Altman, CEO of OpenAI and former president of Y Combinator, literally said: “The GPT-3 hype is way too much. It is impressive but it still has serious weaknesses and sometimes makes very silly mistakes”.
But now, let us give our imagination free rein. GPT-3 was used to generate artificial news articles that were perceived to be as good as those written by specialized journalists. As the following table shows, users could not distinguish GPT-3 from specialized journalists when the model was used at its maximum power.
How can this AI approach affect Cybersecurity?
Continuing with this use case, this capability could potentially be used to impersonate trusted sources in business environments. Employees constantly suffer social engineering attacks (phishing, vishing, smishing…), but we must prepare to receive far more sophisticated attacks driven by this type of artificial intelligence.
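To illustrate why AI-generated phishing is so dangerous, consider a naive keyword-based filter of the kind that catches classic scam templates. The sketch below is purely illustrative (the phrases, names and messages are invented): a crude template email trips every rule, while a fluent, context-aware message of the kind a large language model can produce triggers none of them.

```python
# Telltale phrases from classic phishing templates (illustrative only).
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "click here immediately",
]

def naive_phishing_score(message: str) -> int:
    """Count how many known scam phrases appear in the message."""
    text = message.lower()
    return sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)

classic = "URGENT ACTION REQUIRED: click here immediately to verify your account."
fluent = ("Hi Laura, following up on yesterday's call, could you review the "
          "attached invoice before the board meeting? Thanks, Daniel.")

print(naive_phishing_score(classic))  # -> 3 (flagged)
print(naive_phishing_score(fluent))   # -> 0 (passes the filter)
```

Real email security products are far more sophisticated than this, but the underlying asymmetry holds: the more natural the attacker's text, the fewer surface signals remain for defenders to match against.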
To put the problem in perspective, the average cost per incident in large companies exceeds €7M*, while in SMEs it is estimated at €40K, with the aggravating circumstance that 60% of them face closure only six months after the incident occurs.
And attack technology is improving at a faster rate than companies are training and strengthening their employees.
Let’s get ready: it will get harder and harder to identify a scam.
* The average cost of the incident varies depending on the sector, company size and region. Contact us for more information.