How hackers are using AI and machine learning to target enterprises

Cybersecurity has benefited from advances in machine learning and AI. Security teams today are inundated with data about potentially suspicious activity, yet they are often still left searching for needles in haystacks. AI helps defenders find the real threats within this data by recognizing patterns in network traffic, malware indicators and user behavior.
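
As a rough illustration of that defensive use case, the sketch below trains an anomaly detector on synthetic network flow data. The feature names, values and library choice (scikit-learn's IsolationForest) are assumptions made for illustration, not a reference to any particular product.

```python
# Minimal sketch of the defensive use case: flag anomalous network flows.
# Features and values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical flow features: [bytes_sent, bytes_received, duration_seconds]
normal_traffic = rng.normal(loc=[500, 1500, 2.0], scale=[100, 300, 0.5], size=(1000, 3))

# Train on what "normal" looks like; contamination is the assumed anomaly rate.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A suspicious flow: a large upload with almost no response, over in a flash.
suspicious_flow = np.array([[50_000, 200, 0.1]])
print(detector.predict(suspicious_flow))  # -1 means "anomaly" in scikit-learn
```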

Unfortunately, attackers have found their own ways to turn these advances in AI and machine learning against us. Ready access to cloud environments makes it simple to get started with AI and to build powerful, capable learning models.

Let’s take a look at how hackers are using AI and machine learning to target enterprises, as well as ways to prevent AI-targeted cyber attacks.

3 ways attackers use AI against defenders

1. Test the success of their malware with AI-based tools

Attackers can use machine learning in several ways. The first — and easiest — is to build their own machine learning environments and model their own malware and attack practices to determine the types of events and behaviors that defenders look for.

For example, a sophisticated piece of malware can modify local system libraries and components, run processes in memory, and communicate with one or more domains owned by an attacker’s control infrastructure. All these activities in combination create a profile known as tactics, techniques and procedures (TTPs). Machine learning models can observe TTPs and use them to build discovery capabilities.

By observing and predicting how security teams detect these TTPs, adversaries can subtly and frequently adjust indicators and behaviors to stay ahead of defenders who rely on AI-based tools to detect attacks.
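
A simplified sketch of that test loop might look like the following: the attacker trains a stand-in detector on invented TTP-style features, then tones down the malware's behavior until the stand-in stops flagging it. Every feature name, value and model choice here is hypothetical.

```python
# Hypothetical sketch of an attacker testing malware against their own
# stand-in for a defender's model. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Toy TTP features: [libraries_modified, in_memory_processes, c2_domains_contacted]
benign = rng.poisson(lam=[1, 2, 0], size=(500, 3))
malicious = rng.poisson(lam=[6, 8, 3], size=(500, 3))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

# Stand-in for the kind of detector a security team might run.
surrogate = RandomForestClassifier(random_state=1).fit(X, y)

# The attacker's test loop: start from noisy, obviously malicious behavior
# and tone it down until the surrogate no longer flags it.
sample = np.array([6, 8, 3])
while surrogate.predict([sample])[0] == 1 and sample.max() > 0:
    sample = np.maximum(sample - 1, 0)
print("candidate low-profile behavior:", sample,
      "| still flagged:", bool(surrogate.predict([sample])[0]))
```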

2. Poison AI with inaccurate data

Attackers also use machine learning and AI to compromise environments by poisoning AI models with inaccurate data. Machine learning and AI models rely on properly labeled data samples to build accurate and repeatable detection profiles. By introducing benign files that look like malware, or by creating behavior patterns that turn out to be false positives, attackers can trick AI models into believing attack behaviors are not malicious. Attackers can also poison AI models by slipping in malicious files that the training process has labeled as safe.
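
The mechanism can be shown with a toy example. In the sketch below, everything is synthetic: an attacker injects malware-like samples carrying a "safe" label into the training set, and the retrained model's detection rate on genuinely malicious samples drops.

```python
# Toy sketch of data poisoning by injection. The data, features and model
# are synthetic and illustrative, not from any real detection pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
benign = rng.normal(0.0, 1.0, size=(500, 5))     # benign behavior profiles
malicious = rng.normal(2.0, 1.0, size=(500, 5))  # malicious behavior profiles
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

clean_model = LogisticRegression().fit(X_train, y_train)

# Poisoning: the attacker slips in samples that behave like malware but carry
# a "safe" label, so the retrained model learns to wave that behavior through.
poison = rng.normal(2.0, 1.0, size=(800, 5))
X_poisoned = np.vstack([X_train, poison])
y_poisoned = np.concatenate([y_train, np.zeros(800, dtype=int)])
poisoned_model = LogisticRegression().fit(X_poisoned, y_poisoned)

malicious_test = X_test[y_test == 1]
print("clean model detection rate:   ", clean_model.predict(malicious_test).mean())
print("poisoned model detection rate:", poisoned_model.predict(malicious_test).mean())
```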

3. Map existing AI models

Attackers are also actively trying to map the existing and evolving AI models used by cybersecurity providers and operations teams. By learning how these models work and what they do, adversaries can interfere with machine learning models and workflows during their training and update cycles. This lets hackers influence a model by tricking the system into favoring the attackers and their tactics. It can also let hackers bypass known models altogether by subtly altering data to avoid detection based on recognized patterns.
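
A stripped-down sketch of that mapping step, with an entirely hypothetical target model: the attacker submits probe samples, records the allow/block verdicts, and fits a local surrogate that approximates the defender's decision boundary.

```python
# Sketch of model mapping against a hypothetical black-box detector.
# The "target" is a stand-in trained here purely for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# Stand-in for the defender's deployed model (internals unknown to the attacker).
X_defender = np.vstack([rng.normal(0, 1, (500, 4)), rng.normal(3, 1, (500, 4))])
y_defender = np.array([0] * 500 + [1] * 500)
target = GradientBoostingClassifier(random_state=3).fit(X_defender, y_defender)

# The attacker only sees verdicts: submit probe samples, record allow/block.
probes = rng.uniform(-2, 5, size=(2000, 4))
verdicts = target.predict(probes)

# Train a local surrogate on the observed verdicts to map the decision boundary.
surrogate = DecisionTreeClassifier(max_depth=5, random_state=3).fit(probes, verdicts)

agreement = (surrogate.predict(probes) == verdicts).mean()
print(f"surrogate matches the target on {agreement:.0%} of probe queries")
```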

How to defend against AI-targeted attacks

Defending against AI-targeted attacks is extremely difficult. Defenders must ensure that the labels associated with data used in learning models and pattern development are accurate. Ensuring that data has accurate label identifiers is likely to make the datasets used to train models smaller, which does not help with AI efficiency.
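
One way to operationalize that label checking, sketched here with synthetic data and assumed thresholds, is to flag any training sample whose label disagrees with a cross-validated prediction and route it for manual review before the model is retrained.

```python
# Hedged sketch of a label-quality check: flag training samples whose label
# disagrees with a cross-validated prediction. Data and counts are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (500, 5)), rng.normal(2, 1, (500, 5))])
y = np.array([0] * 500 + [1] * 500)
y[rng.choice(1000, size=30, replace=False)] ^= 1  # simulate 30 poisoned labels

# Predict each sample's class with a model that never saw that sample's label.
predicted = cross_val_predict(RandomForestClassifier(random_state=4), X, y, cv=5)
suspect = np.where(predicted != y)[0]
print(f"{len(suspect)} samples queued for manual label review")
```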

For those building AI security detection models, introducing hostile techniques and tactics during modeling can help align pattern recognition with tactics seen in the wild. Researchers at Johns Hopkins University developed the TrojAI software framework to help generate AI models for Trojans and other malware patterns. MIT researchers released TextFooler, a tool that does the same for natural language patterns, which can be useful for building more resilient AI models that detect problems such as bank fraud.
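
Without reference to the TrojAI or TextFooler APIs, a generic version of that idea is to fold hostile variants of known-malicious samples into the training set so the model learns the underlying behavior rather than an exact fingerprint. The sketch below is purely illustrative.

```python
# Generic sketch of hostile augmentation during training (not the TrojAI or
# TextFooler APIs). Features, ranges and model choice are all assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X_mal = rng.normal(2, 1, (300, 5))  # known-malicious behavior profiles
X_ben = rng.normal(0, 1, (300, 5))  # benign behavior profiles
X = np.vstack([X_ben, X_mal])
y = np.array([0] * 300 + [1] * 300)

# Hostile augmentation: jitter malicious samples toward the benign region,
# mimicking attackers who tone down indicators to slip past the model.
evasive_variants = X_mal - rng.uniform(0.0, 1.5, X_mal.shape)
X_aug = np.vstack([X, evasive_variants])
y_aug = np.concatenate([y, np.ones(300, dtype=int)])

hardened = RandomForestClassifier(random_state=5).fit(X_aug, y_aug)
```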

As AI grows in importance, attackers will try to outdo defenders’ efforts with their own research. It’s critical for security teams to stay on top of attackers’ tactics to defend against them.
