The term “artificial intelligence” (AI) describes a machine’s capacity to carry out operations traditionally performed by intelligent beings such as humans. AI systems are capable of reasoning, problem-solving, generalization, planning, and learning from experience.
Although AI’s practical applications are still maturing, organizations have been using it in recent years to adapt their processes and prepare for opportunities and problems in advance. However, cybercriminals are now also using this technology to increase the effectiveness of their cyberattacks and hacks.
They achieve this by using the intelligent automation that AI systems offer to enhance traditional cyberattacks, accelerating their speed, expanding their coverage, and raising their level of sophistication. The disruption caused by AI-enabled cyberattacks is therefore three-fold: greater speed, scale, and sophistication. AI can assist a variety of attacker strategies and offers fresh methods for accomplishing attackers’ objectives.
AI’s offensive capabilities
AI’s offensive capabilities are expressed in the following ways:
- boosts the autonomy of cyberattacks and decreases the manual effort an attacker needs
- coordinates attacks to determine the optimal attack vector, the most vulnerable target, and the most effective attack window
- generates content that resembles the distribution it learned from, and can therefore hide malicious behavior
- offers ways to bypass security measures such as email filters and malware detectors
- Social engineering
- can study humans to better understand how to manipulate their trust and emotions and offers methods for choosing and tracking targets
- can automate and personalize interactions with people both offline and online, e.g. chatbots and spear-phishing emails
- can be employed to create fake online personas and impersonate real individuals in order to connect with selected victims, e.g. deepfakes and voice cloning
- Credential theft
- can mimic human behavior to replicate authentication procedures and guess credentials and is used for both initial access and credential access tactics
- offers methods for fooling biometric identification systems by imitating a user’s voice, face, keystroke patterns, and eye movements
- can guess passwords that have low entropy or are based on personal details
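The last point can be illustrated from the defender’s side. The following is a minimal sketch (function names and the 60-bit threshold are my own assumptions, not from any particular tool) of the kind of entropy estimate a password-strength checker uses to flag credentials that an AI-assisted guesser would crack quickly:

```python
import math

def estimate_entropy_bits(password: str) -> float:
    """Rough upper-bound entropy estimate: length * log2(charset size).

    The charset size is inferred from which character classes appear.
    Real attackers do far better than brute force (dictionaries,
    personal details), so treat this as an optimistic ceiling.
    """
    charset = 0
    if any(c.islower() for c in password):
        charset += 26
    if any(c.isupper() for c in password):
        charset += 26
    if any(c.isdigit() for c in password):
        charset += 10
    if any(not c.isalnum() for c in password):
        charset += 32  # rough count of printable symbols
    if charset == 0:
        return 0.0
    return len(password) * math.log2(charset)

def is_weak(password: str, threshold_bits: float = 60.0) -> bool:
    """Flag passwords whose entropy ceiling falls below the threshold."""
    return estimate_entropy_bits(password) < threshold_bits
```

For example, `is_weak("hunter2")` returns `True` (7 lowercase-and-digit characters give roughly 36 bits), while a long mixed-class passphrase clears the bar easily.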
Examples of AI-enabled cyberattacks
In spear phishing with target selection, AI can assist in choosing phishing victims via user profiling that detects and targets particular traits. The attacker first gathers online profiles from social media networks in order to profile people. Potential victims are then categorized into groups based on salient traits such as friends, interests, and hobbies. The last step involves locating and classifying clusters of interest, such as those that are “very gullible” or “high value,” which then become the targets of spear-phishing attacks.
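The profiling step described above is essentially clustering profiles by shared traits. Here is a minimal, self-contained sketch (the toy data, threshold, and function names are my own, not taken from any real attack tool) that greedily groups profiles whose interest sets overlap strongly — the same idea defenders can use to audit how “clusterable” their employees’ public profiles are:

```python
def jaccard(a: set, b: set) -> float:
    """Similarity between two trait sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_profiles(profiles: dict, threshold: float = 0.5) -> list:
    """Greedily group profiles whose traits overlap above the threshold.

    Each cluster starts from an unassigned profile; others join if they
    are similar enough to the cluster's seed. Crude, but it shows how
    publicly scraped traits make potential victims groupable.
    """
    clusters, assigned = [], set()
    for name, traits in profiles.items():
        if name in assigned:
            continue
        cluster = [name]
        assigned.add(name)
        for other, other_traits in profiles.items():
            if other not in assigned and jaccard(traits, other_traits) >= threshold:
                cluster.append(other)
                assigned.add(other)
        clusters.append(cluster)
    return clusters

profiles = {
    "alice": {"hiking", "crypto", "photography"},
    "bob":   {"hiking", "crypto", "chess"},
    "carol": {"opera", "wine"},
}
```

Running `cluster_profiles(profiles)` groups alice and bob together (their interests overlap heavily) and leaves carol in her own cluster; a real attacker would then rank such clusters by value or gullibility.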
The interests of targets are usually fed into a natural language generation (NLG) model, many of which are publicly accessible online, e.g. GPT-3. The model is then used to create customized emails or social media posts that mimic the target’s hobbies and writing style, boosting the likelihood that the attack will succeed. In fact, SNAP_R, a tool that generates phishing tweets, proved more successful at triggering victim click-through than human-written tweets.
Deep learning techniques are used by a technology known as “deep voice” to mimic a target’s voice and synthesize speech from text. Training a deep voice model requires audio samples of the person’s voice, which can be gathered from recordings of public appearances or online meetings, both widely available online. This technology enables vishing (voice phishing) attacks, several of which have succeeded and been publicly reported. In July 2019, a vishing call that impersonated the CEO of a UK-based energy company resulted in a fraudulent $243,000 money transfer.
Deepfakes, which allow an attacker to simulate a target’s face and behavior, take impersonation to a new level: no prior technology could convincingly mimic a target’s voice, facial structure, and gestures at once.
How to protect yourself
By and large, automation and artificial intelligence have made organizations more innovative and efficient than ever before. However, in the wrong hands they can be a ruthless enemy. As humans, we know that playing against a computer rarely ends in victory: if you have ever played online chess or checkers against a machine, chances are you lost. The odds are stacked against you. Similarly, leaving the burden of preventing AI-based attacks solely to the cybersecurity experts in your organization will leave your team feeling defeated and burnt out.
The best way to protect yourself against these attacks is to use common sense, spread awareness, and fact-check using multiple sources. It’s crucial for an organization to be aware of the risks and to develop a skeptical eye among its employees, as employees are the biggest vulnerability exploited by AI-enabled cyberattacks. By reporting suspicious emails, posts, and other business-related activities, you can help your organization act quickly and protect others from similar attacks.
Beyond educating and monitoring your employees, additional measures can be taken to increase overall security. In recent years, artificial intelligence has enabled malicious actors to become more sophisticated in their attack strategies. As a result, organizations are being tasked with finding sophisticated solutions to defend their assets and keep their data safe. Luckily, solutions are available that can assist in reaching this goal.
By adopting an automated solution, your organization can benefit from faster analysis and mitigation of threats through vulnerability management, network security, and application security. Equip your teams with the proper tools, and reduce the risk malicious actors pose to your organization.
Protect your organizational assets with Bright
Bright’s Dynamic Application Security Scanner enables you to secure your applications and APIs against both technical and business-logic vulnerabilities at the speed of DevOps, with minimal false positives. Avoid letting security become an afterthought, and ensure proper measures are taken to prevent attacks before they happen.
Malicious actors are out there, and although there is no one perfect solution to protect your organization from an attack, with proper security measures in place, you can reduce your organizational risk and rest easy!