Artificial intelligence is on everyone's lips, once again, one might say. Providers of all stripes are integrating the technology into their security solutions. But how far along are cybercriminals, especially when it comes to phishing? The following article takes stock of the status quo and ventures a look into the future.
Cybercriminals have for years been using every technical means available to automate their operations as far as possible, not least to evade prosecution by law enforcement. One of the most effective and easiest ways to compromise an IT system is still the phishing email. Sixty-seven percent of the security incidents investigated in Verizon's 2020 Data Breach Investigations Report could be traced back to social engineering techniques such as phishing.
The technique is popular because a fake email alias is quick to set up. Unlike a phone call, sending phishing emails costs nothing and is almost impossible for law enforcement agencies to trace. It is increasingly reported that this kind of social-engineering automation is now supported by machine learning and AI. Machine learning is already being used primarily to refine the most successful campaigns and deploy them across a wide variety of languages and cultures. That alone is cause for concern: where humans make telltale mistakes, machines can produce grammatically flawless text and excellent translations. The potential, however, is far greater.
AI Learns To Direct Human Behavior
In Australia, researchers at CSIRO's Data61, the data and digital specialist arm of the national science agency, developed and presented a systematic method for analyzing human behavior, based on a recurrent neural network and deep reinforcement learning. The method models how people make decisions and what triggers those decisions. Three experiments were carried out in which test subjects were asked to play different games against a computer. Data61 director Jon Whittle summarized the results in an article for The Conversation.
After each experiment, the machine learned from the participants' responses, identified weaknesses in their decision-making, and targeted them. It learned how to steer participants toward specific actions. Whittle openly admits that the results are, for the moment, quite abstract and apply only to limited and somewhat unrealistic situations. Even so, they are what make IT security experts worldwide frown: they show that, with enough training and data, machines could influence human decisions through their interactions.
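The principle behind such experiments can be caricatured with a toy example. The sketch below is not the CSIRO method (which used recurrent neural networks and deep reinforcement learning); it is a minimal epsilon-greedy bandit in which an agent, purely by trial and error, discovers which message framing a simulated "participant" responds to most often. All framings and probabilities here are invented for illustration.

```python
import random

# Hypothetical response rates: how often the simulated participant
# takes the target action for each message framing (invented values).
CLICK_PROB = {"neutral": 0.10, "authority": 0.35, "urgency": 0.55}

def participant_responds(framing, rng):
    """Simulate one decision by the participant."""
    return rng.random() < CLICK_PROB[framing]

def learn_best_framing(rounds=5000, epsilon=0.1, seed=42):
    """Epsilon-greedy bandit: estimate from interaction alone which
    framing most reliably steers the participant to the target action."""
    rng = random.Random(seed)
    arms = list(CLICK_PROB)
    counts = {a: 0 for a in arms}
    values = {a: 0.0 for a in arms}  # running success-rate estimates
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.choice(arms)           # explore a random framing
        else:
            arm = max(arms, key=values.get)  # exploit the best so far
        reward = 1.0 if participant_responds(arm, rng) else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return max(arms, key=values.get), values

best, estimates = learn_best_framing()
```

The point of the toy is that the agent never sees the true probabilities; it infers the participant's weakness solely from their responses, which is, in miniature, what makes the real research worrying.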
Status Quo: AI In Cybercrime
But how far along are cybercriminals themselves? AI is expected to be used primarily for spear phishing; using AI in spear phishing is comparable to fishing with a sniper rifle. Neither is currently being observed in practice, but both are theoretically possible. And that is the bigger problem.
After all, how should companies prepare for something not yet seen in practice? Of course, spear phishing is only worthwhile if the target is financially attractive enough. That is usually the case when, as in BEC or CEO fraud, the managing director or another member of the management team is impersonated in order to extract sums in the millions quickly.
Deep Fakes Influence Human Behavior
When we talk about AI in spear phishing, we are talking about deepfakes. Deepfakes in voice phishing in particular are already a common means. Voice phishing is most effective when a voice can be impersonated convincingly. Experienced criminals can manage this themselves with some vocal training, but there are also plenty of programs, such as "mixed," "researcher," or "deepfakenow," that do exactly that and rely on ML and AI methods.
Cybercriminals would then proceed exactly as with manually recorded and imitated voices: first, they gather all the information about the executive they want to impersonate that can be found on the Internet, evaluate it, and prepare the operation. They then look for weak points and get to know the employees, partly through publicly available information and partly through pretext calls to the front office.
Next, they obtain the boss's contact details, call him under a pretext, record his voice, and let their systems replicate it. They then think up a pretext for the CEO fraud, contact the accounting department, and have the supposed boss call and apply pressure. In the end, it is as simple as it sounds, because imitating voices is easy for these programs to learn.
Of course, creating deepfakes with images or videos is also possible, but the effort still outweighs the payoff. Faking images takes more time, faking videos of people even more, and considerable work is needed before the result is deceptive enough to survive a second look. Still, both images and videos are the next stage of CEO fraud that security experts expect.
Currently, a simple email or a voice deepfake is still far too successful. Employees keep falling for this type of cyber fraud, so further investment is not yet necessary. Cybercriminals, as phishing shows, always try to take the easiest route and expend only as much effort as they must.
There is a lot of hype around AI, and IT security is no exception. Cybercriminals are already using the technology today, but not to its full extent. In the long term, deepfakes will be the method of choice, because with ever-better voice imitations, fake images, or even videos, human emotions and behavior can be steered and anticipated far more effectively than with plain-text emails.
This shows the enormous potential that cybercriminals are already exploring and that IT security officers have to deal with today. Training courses that show employees what to look for, how to recognize deepfakes, and how to assess such situations should be an integral part of any IT security strategy.