How Artificial Intelligence Will Shape the Future of Malware


As we move into the future, the prospect of AI-driven systems becomes more appealing. Artificial Intelligence will help us make decisions, power our smart cities, and—unfortunately—infect our computers with nasty strains of malware.

Let’s explore what the future of AI means for malware.

What Is AI in Malware?

When we use the term “AI-driven malware,” it’s easy to imagine a Terminator-style case of an AI “gone rogue” and causing havoc. In reality, a malicious AI-controlled program wouldn’t be sending robots back through time; it would be sneakier than that.

AI-driven malware is conventional malware altered via Artificial Intelligence to make it more effective. AI-driven malware can use its intelligence to infect computers faster or make attacks more efficient. Instead of being a “dumb” program that follows pre-set code, AI-driven malware can think for itself—to an extent.

How Does AI Enhance Malware?

There are several ways that Artificial Intelligence can enhance malware. Some of these methods are still theoretical, while others have already been demonstrated in the real world.

Targeted Ransomware Demonstrated by DeepLocker

One of the scariest AI-driven malware examples is DeepLocker. Thankfully, IBM Research developed the malware as a proof-of-concept, so you won't find it in the wild.

The concept of DeepLocker was to demonstrate how AI can smuggle ransomware into a target device. Malware developers can blast ransomware across a company in a "shotgun spread," but there's a high chance it won't infect the most valuable computers. As such, the alarm may be raised before the malware reaches its most important targets.

DeepLocker was teleconferencing software that smuggled in a unique strain of WannaCry. It didn't activate the payload right away; instead, it simply performed its duties as a teleconferencing program.

As it did its job, it scanned the faces of the people who used it. Its goal was to infect a specific person's computer, so it monitored everyone using the software. When it detected the target's face, it activated the payload, locking down the PC with WannaCry.

Adaptive Worms That Learn From Detection

One theoretical use of AI in malware is a worm that "remembers" every time an antivirus detects it. Once it knows which actions cause an antivirus to spot it, it stops performing those actions and finds another way to infect the PC.

This is particularly dangerous, as modern-day antivirus tends to run on strict rules and definitions. That means all a worm needs to do is find a way in that doesn't trip the alarm. Once it does, it can inform the other strains about the hole in the defense, so they can infect other PCs more easily.

Independence From the Developer

Modern-day malware is quite "dumb": it can't think for itself or make decisions. It performs a series of tasks that the developer gave it before the infection. If the developer wants the software to do something new, they have to broadcast the next list of instructions to their malware.

This center of communication is called a “command and control” (C&C) server, and it has to be hidden very well. If the server is discovered, it could lead back to the hacker, often ending with arrests.

If the malware can think for itself, however, there is no need for a C&C server. The developer unleashes the malware and sits back as the malware does all the work. This means the developer doesn’t need to risk outing themselves while giving commands; they can just “set and forget” their malware.

Monitoring User Voices for Sensitive Information

If an AI-driven malware gets control over a target's microphone, it can listen in and record what people are saying nearby. The AI then sifts through what it heard, transcribes it into text, and sends the text back to the developer. This makes life easier for the developer, who doesn't have to sit through hours of audio recordings to find trade secrets.

How Can a Computer "Learn"?

Malware can learn from its actions through what's called "machine learning," a branch of AI concerned with how computers improve from experience. Machine learning is useful for AI developers because they don't need to code for every scenario. They tell the AI what's right and what's wrong, then let it learn through trial and error.

When an AI trained via machine learning faces an obstacle, it tries different methods to overcome it. At first, it will do a poor job of passing the challenge, but it will note what went wrong and what can be improved. Over several iterations of learning and trying, it eventually settles on a good idea of what the "correct" answer is.
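The trial-and-error loop described above can be sketched with a toy "multi-armed bandit" learner. Nothing here is malware; the actions and their hidden success rates are invented purely to illustrate how repeated attempts plus feedback converge on the best choice:

```python
import random

# A toy "trial and error" learner: three possible actions, each with an
# unknown success rate. The learner tracks which actions worked and
# gradually favors the best one. This is the essence of learning from
# feedback rather than following pre-set rules.
success_rates = {"A": 0.2, "B": 0.5, "C": 0.9}  # hidden from the learner
wins = {a: 0 for a in success_rates}
tries = {a: 0 for a in success_rates}

random.seed(42)
for step in range(1000):
    if random.random() < 0.1:
        # Explore: occasionally try a random action to gather information.
        action = random.choice(list(success_rates))
    else:
        # Exploit: pick the action with the best observed win rate so far.
        action = max(wins, key=lambda a: wins[a] / tries[a] if tries[a] else 0)
    tries[action] += 1
    if random.random() < success_rates[action]:
        wins[action] += 1

best = max(wins, key=lambda a: wins[a] / tries[a] if tries[a] else 0)
print(best)  # with enough trials, the learner settles on "C", the most reliable action
```

The learner starts out picking badly, but every attempt refines its estimate of what works, which is exactly the "note what went wrong, improve next time" cycle described above.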

A classic demonstration of this progress is an AI learning to make different simulated creatures walk. The first few generations walk as if they are drunk, but the later ones hold their posture, because the AI learned from the earlier failures and did a better job on the later models.

Malware developers can use this power of machine learning to figure out how to attack a system. If something goes wrong, the malware logs the error and notes what caused it. In the future, it will adapt its attack patterns for better results.

How Can We Defend Against Malware-Driven AI?

The big problem with machine-learning malware is that it exploits the way current antivirus software works. An antivirus likes to work via straightforward rules: if a program fits a specific profile that the antivirus knows is malicious, it blocks it.

AI-driven malware, however, won't work via hard-and-fast rules. It will continuously prod at the defenses, trying to find a way through. Once inside, it can do its job unhindered until the antivirus receives updates specific to the threat.

So, what's the best way to fight off this "smart" malware? Sometimes you need to fight fire with fire, and the best way to do that is to introduce AI-driven antivirus programs. These don't use static rules to catch malware like our current models do. Instead, they analyze what a program is doing and stop it if the behavior looks malicious.
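As a rough illustration of this behavior-based approach, here is a toy scoring function. The event names, baseline frequencies, and weights below are invented for illustration and are not taken from any real antivirus product:

```python
# Toy sketch of behavior-based detection: instead of matching a program
# against fixed signatures, score its observed actions against a baseline
# of "normal" behavior and flag large deviations.

# Baseline: how often each action appears in typical, benign programs.
BASELINE = {
    "read_file": 0.6,
    "network_send": 0.3,
    "encrypt_file": 0.0,
    "write_registry": 0.1,
}

# How suspicious an unexpected excess of each action is.
SUSPICION_WEIGHT = {
    "read_file": 0.2,
    "network_send": 1.0,
    "encrypt_file": 5.0,
    "write_registry": 1.5,
}

def suspicion_score(observed_events):
    """Score how far a program's observed behavior deviates from the baseline."""
    total = len(observed_events)
    score = 0.0
    for action, weight in SUSPICION_WEIGHT.items():
        freq = observed_events.count(action) / total
        excess = max(0.0, freq - BASELINE.get(action, 0.0))
        score += weight * excess
    return score

benign = ["read_file"] * 6 + ["network_send"] * 3 + ["write_registry"]
ransomware_like = ["read_file", "encrypt_file", "encrypt_file",
                   "encrypt_file", "network_send"]

print(suspicion_score(benign))           # 0.0: matches the baseline
print(suspicion_score(ransomware_like))  # ~3.0: mass encryption is anomalous
```

A real behavioral engine is far more sophisticated, but the principle is the same: it judges what a program does, not what its code looks like, so a malware strain that mutates its code can still be caught by its actions.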

A Future Defined by Artificial Intelligence

Basic rules and simple instructions won't define malware attacks in the future. Instead, malware will use machine learning to adapt and shape itself to counter whatever security it meets. It may not be as exciting as Hollywood's depictions of malicious AI, but the threat is very real.

If you’d like to see some less-scary examples of Artificial Intelligence, check these AI-powered websites.

Image Credit: sdecoret/Depositphotos


MakeUseOf