In the past few years, security analysts have been able to take advantage of AI and machine learning tools to better identify cyberthreats. But as adversaries mount ever more advanced attacks, to what extent are hackers taking advantage of AI technology themselves?

Staying on top of the latest innovations in technology has helped cybersecurity professionals around the world battle the growing, malicious cyberthreat landscape. With new artificial intelligence (AI) and machine learning tools, analysts are able to identify and stop threats before they have the opportunity to cause serious damage. But threats continue to grow in sophistication, and adversaries grow smarter with more advanced attacks, which raises the question: to what extent are hackers taking advantage of AI technology?

A few years ago, data scientists John Seymour and Philip Tully revealed the results of their experiment to see who was more skilled at getting Twitter users to click on phishing links: humans or SNAP_R (Social Network Automated Phishing with Reconnaissance), an AI tool trained to study the behaviors of users and craft its own bait.

SNAP_R lured 275 out of 800 attempted victims, while the human in the study lured 49 users out of 129 attempts. The results demonstrated to data scientists and cybersecurity professionals that AI is highly capable of extracting data from people, and hackers are well aware of this. With proper training, AI and machine learning tools can convincingly mimic regular users.
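The first step in an automated attack like SNAP_R's is reconnaissance: ranking accounts by how likely they are to engage. The sketch below is a purely illustrative stand-in for that idea, not SNAP_R's actual code; the field names and weights are assumptions invented for the example.

```python
# Toy sketch: ranking social-media accounts as phishing targets by
# engagement, the kind of reconnaissance an automated phishing tool
# performs. All fields and weights here are illustrative assumptions.

def target_score(user):
    """Higher score = user judged more likely to engage with a crafted link."""
    return (
        2.0 * user["replies_per_day"]      # chatty users engage more often
        + 1.0 * user["links_clicked"]      # past link-clicking behavior
        + 0.5 * user["followers"] / 1000   # reach if the account is abused
    )

def pick_targets(users, k=2):
    """Return the k highest-scoring accounts."""
    return sorted(users, key=target_score, reverse=True)[:k]

users = [
    {"name": "alice", "replies_per_day": 12, "links_clicked": 30, "followers": 500},
    {"name": "bob",   "replies_per_day": 1,  "links_clicked": 2,  "followers": 90000},
    {"name": "carol", "replies_per_day": 25, "links_clicked": 40, "followers": 2000},
]

print([u["name"] for u in pick_targets(users)])  # → ['carol', 'alice']
```

The same scoring-and-ranking loop that defenders use to prioritize risky accounts works just as well for attackers prioritizing victims, which is why automation cuts both ways here.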

How hackers may use machine learning technologies

As stated above, SNAP_R carried out a phishing attack, in which an adversary masquerades as a legitimate person or entity to extract sensitive information. However, this is not the only type of malicious initiative hackers can perform using AI.

Below are some other ways digital adversaries may be able to access your information.

1. Using “botnets”

The Internet of Things (IoT) has been an equally trending topic in the past few years. In fact, it is projected that there will be over 20.4 billion IoT devices by 2020. The downside is that a single compromised device can give adversaries a foothold to take control of an entire network of devices.

Botnets are collections of devices infected with malware that lets an attacker control them remotely. By installing the malware, attackers can compromise a device and funnel the data being transmitted through it to an outside party. These automated bots can mimic the normal behavior of legitimate IoT devices. As cyberthreats continue to advance along with the number of IoT devices in the world, attackers may increasingly use this to their advantage.
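One reason botnets are detectable at all is that compromised devices "phone home" to a command-and-control server at suspiciously regular intervals. The following defensive sketch illustrates that common heuristic; the threshold and traffic samples are made-up assumptions, not data from the article.

```python
# Hedged sketch: flagging botnet-style "beaconing" by measuring how
# regular a device's outbound-connection intervals are. Compromised
# devices checking in with a command-and-control server tend to produce
# near-constant intervals; human-driven traffic is bursty.
import statistics

def looks_like_beaconing(timestamps, max_stdev=1.0):
    """Flag a device whose connection intervals are near-constant."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return len(intervals) >= 3 and statistics.stdev(intervals) < max_stdev

human_traffic = [0, 7, 9, 31, 40, 70]      # irregular, human-driven browsing
bot_traffic = [0, 60, 120, 180, 240, 300]  # exact 60-second check-ins

print(looks_like_beaconing(human_traffic))  # → False
print(looks_like_beaconing(bot_traffic))    # → True
```

This also shows why the article's point about bots mimicking normal device behavior matters: a smarter bot that randomizes its check-in times would defeat exactly this kind of simple heuristic.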

2. Sophisticated malware

Security teams all around the world have trained their tools to detect malware and eradicate it from a piece of hardware or a network. However, malware writers can use those same tactics to code their malware to evade detection.

In 2018, IBM Research introduced DeepLocker, a proof-of-concept malware that uses AI to execute highly targeted attacks. The project, which was built to better understand how existing AI models can be combined with malware, produced malware that conceals its intent until it reaches its intended victim. These types of attacks are so dangerous because they easily slip under the radar, extending attacker dwell time and leading to more devastating breaches.
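The core trick behind DeepLocker is that the payload stays encrypted until some attribute of the victim's environment reproduces the decryption key, so static analysis of the sample reveals nothing. DeepLocker derives that key from a neural network's output; the conceptual sketch below substitutes a plain SHA-256 hash of a trigger string, and the hostname and (harmless) payload are invented for illustration.

```python
# Conceptual sketch of the DeepLocker idea: lock a payload to a trigger
# condition so it only decrypts in the intended environment. DeepLocker
# uses a neural network to derive the key; this toy uses SHA-256 instead.
import hashlib

def derive_key(trigger: str) -> bytes:
    """Derive a 32-byte key from an environment attribute."""
    return hashlib.sha256(trigger.encode()).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Symmetric XOR 'encryption' -- illustration only, not real crypto."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The "attacker" locks a harmless stand-in payload to one hostname.
secret_payload = b"run-demo-routine"
locked = xor_bytes(secret_payload, derive_key("target-host-42"))

# On any other machine the unlock attempt yields garbage, so inspecting
# `locked` alone reveals nothing about the payload's intent.
print(xor_bytes(locked, derive_key("some-other-host")) == secret_payload)  # → False
print(xor_bytes(locked, derive_key("target-host-42")) == secret_payload)   # → True
```

The defensive takeaway is that signature scanning of the dormant sample cannot work by design; detection has to focus on behavior at unlock time.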

3. Gaining access to machine learning-powered threat analytics

An important part of any cybersecurity strategy today is incorporating machine learning to weed out false positives in your threat detection. These AI-driven technologies take some of the burden off security professionals, who are otherwise exposed to thousands of alerts per day.

However, hackers may abuse these analytics by deliberately flooding the system with alerts. Even the best machine learning technologies and security professionals can be overwhelmed by too many false positives, and while the system is recalibrating to filter out the fake threats, the hacker can slip a real attack through.
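The flooding tactic works because every triage pipeline, human or automated, has a fixed inspection budget per cycle. The simulation below is an illustrative sketch of that bottleneck; the capacity figure and alert counts are made-up assumptions.

```python
# Illustrative sketch of "alert flooding": if a triage system can only
# inspect a fixed number of alerts per cycle, an attacker who generates
# enough decoys pushes the real intrusion past that budget.

def triaged(alerts, capacity=5):
    """Return the alerts actually inspected this cycle (simple FIFO queue)."""
    return alerts[:capacity]

quiet_day = ["decoy-1", "REAL-INTRUSION", "decoy-2"]
flooded_day = [f"decoy-{i}" for i in range(50)] + ["REAL-INTRUSION"]

# On a quiet day the real alert is inspected; under flooding it is buried.
print("REAL-INTRUSION" in triaged(quiet_day))    # → True
print("REAL-INTRUSION" in triaged(flooded_day))  # → False
```

Real triage systems score and rank alerts rather than processing them first-in-first-out, but the arithmetic is the same: enough plausible-looking decoys will exhaust any fixed capacity.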

Preempting the damage

While adversaries do have access to the same AI and machine learning technologies that cybersecurity professionals do, companies still have time to prepare. Before attackers gain the ability to cause devastating effects for people around the world, cybersecurity professionals can teach their AI-powered defenses to recognize and withstand these attacks.

All AI technologies learn from whatever humans expose them to. According to MIT Technology Review, one of the best ways to protect artificial intelligence from attacks is to show it what an attack might look like from another AI-powered machine.

Google researcher Ian Goodfellow explained that generative adversarial networks (GANs), a type of AI that pits two networks against each other, can be used to learn which inputs are synthetic and which are real. One network, the generator, creates fake data, while the other, the discriminator, learns to sort the fakes from real examples.
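The essence of the adversarial setup is a two-player feedback loop: the generator improves precisely because the discriminator keeps catching it. The toy below strips the idea down to that dynamic with no neural networks at all; the "real" distribution, tolerances, and update rule are invented assumptions for illustration.

```python
# Minimal numerical sketch of the adversarial dynamic behind GANs:
# a "generator" tries to produce numbers that look like real data
# (values near 10), and a "discriminator" labels each sample real or
# fake. Every time a fake is caught, the generator nudges itself
# toward the real distribution. No neural networks, just the game.
import random

random.seed(0)
REAL_MEAN = 10.0

def discriminator(x, real_mean=REAL_MEAN, tol=1.0):
    """Call a sample 'real' if it sits close to the real data's mean."""
    return abs(x - real_mean) < tol

gen_mean = 0.0  # the generator starts out producing obvious fakes
for _ in range(200):
    fake = random.gauss(gen_mean, 0.5)
    if not discriminator(fake):
        # Caught by the discriminator: move toward the real distribution.
        gen_mean += 0.1 * (REAL_MEAN - gen_mean)

print(round(gen_mean, 1))  # close to 10: fakes now pass as real
```

In a true GAN both players are neural networks and both improve during training, which is what makes the resulting discriminator useful as a detector of synthetic inputs, Goodfellow's point above.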

We still have a long way to go before we see a complete AI-powered mega breach from hackers, so there is plenty of time for enterprises to prepare.

By Joseph Feiman

Friday, June 7, 2019
