Reality Check: The Benefits of Artificial Intelligence


Gartner believes Artificial Intelligence (AI) security will be a top strategic technology trend in 2020, and that enterprises must gain awareness of AI's impact on the security space. However, many enterprise IT leaders still lack a comprehensive understanding of the technology and what it can realistically achieve today. It is important for leaders to question exaggerated marketing claims and over-hyped promises associated with AI so that there is no confusion about the technology's actual capabilities.

IT leaders should take a step back and consider whether their company and team are at a high enough level of security maturity to adopt advanced technology such as AI successfully. The organization's business goals and current priorities should align with the capabilities that AI can provide.

A study conducted by Widmeyer revealed that IT executives in the U.S. believe that AI will significantly change security over the next several years, enabling IT teams to evolve their capabilities as quickly as their adversaries.

Of course, AI can enhance cybersecurity and increase effectiveness, but it cannot solve every threat and cannot yet replace live security analysts. Today, security teams use modern Machine Learning (ML) in conjunction with automation to minimize false positives and increase productivity.
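For illustration, here is a minimal sketch of that pairing in Python: an unsupervised model scores events, and a simple automation rule escalates only the outliers so analysts see fewer false positives. The features, thresholds, and library choice are hypothetical, not drawn from any particular product.

```python
# A minimal sketch of ML-assisted triage: an unsupervised model scores
# login events, and simple automation suppresses low-risk alerts so
# analysts only review likely anomalies. All features are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per login event: hour of day, failed attempts,
# bytes transferred (log scale). Mostly normal traffic, plus outliers.
normal = rng.normal(loc=[13, 1, 8], scale=[3, 1, 1], size=(500, 3))
odd = np.array([[3, 25, 14], [4, 30, 15]])   # off-hours, many failures
events = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
scores = model.decision_function(events)     # lower = more anomalous

for i, score in enumerate(scores):
    if score < 0:                            # automation escalates only outliers
        print(f"event {i}: score {score:.3f} -> escalate to analyst")
    # everything else is auto-closed, reducing false-positive noise
```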

As adoption of AI in security continues to increase, it is critical that enterprise IT leaders face the current realities and misconceptions of AI, such as:

Artificial Intelligence as a Silver Bullet

AI is not a solution; it is an enhancement. Many IT decision-makers mistakenly consider AI a silver bullet that can solve all their current IT security challenges without fully understanding how to use the technology and what its limitations are. We have seen AI reduce the complexity of the security analyst's job by enabling automation, triggering the delivery of cyber incident context, and prioritizing fixes. Yet security vendors continue to tout ever more exaggerated AI-enabled capabilities of their solutions without being able to point to AI's specific outcomes.

If Artificial Intelligence is positioned as the key, standalone method for protecting an organization from cyberthreats, the overpromise of AI, coupled with the inability to clearly identify its accomplishments, can seriously damage both the strength of an organization's security program and the reputation of the security leader. In this situation, Chief Information Security Officers (CISOs) will, unfortunately, discover that AI has limitations and that the technology alone cannot deliver the desired results.

This is especially concerning given that 48% of enterprises say their budgets for AI in cybersecurity will increase by 29% this year, according to Capgemini.

Automation Versus Artificial Intelligence

We have seen progress surrounding AI in the security industry, such as the enhanced use of ML technology to recognize behaviors and find security anomalies. In most cases, security technology can now correlate irregular behavior with threat intelligence and contextual data from other systems. It can also run automated investigative actions that give an analyst a clear picture of whether something is malicious, with minimal human intervention.
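As a rough sketch of what that kind of automated investigation might look like, the snippet below correlates a flagged anomaly with a threat-intelligence feed and asset context before handing the analyst a verdict. The feed, field names, and severity logic are invented for illustration.

```python
# A hedged sketch of automated enrichment: once behavior is flagged as
# anomalous, correlate it with threat intelligence and asset context to
# hand the analyst a fuller picture. Feeds and fields are hypothetical.
from dataclasses import dataclass

THREAT_INTEL = {"203.0.113.7": "known C2 infrastructure"}  # stand-in feed
ASSET_CONTEXT = {"db-prod-01": {"criticality": "high", "owner": "payments"}}

@dataclass
class Finding:
    host: str
    remote_ip: str
    anomaly_score: float

def investigate(f: Finding) -> dict:
    """Correlate one anomaly with intel and context, no human needed yet."""
    intel = THREAT_INTEL.get(f.remote_ip)
    context = ASSET_CONTEXT.get(f.host, {})
    severity = "high" if intel and context.get("criticality") == "high" else "review"
    return {"finding": f, "intel": intel, "context": context, "severity": severity}

print(investigate(Finding("db-prod-01", "203.0.113.7", -0.42)))
```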

A security leader should consider the types of ML models in use, the biases of those models, the capabilities possible through automation, and whether their solution is intelligent enough to build integrations with, or collect the necessary data from, non-AI assets.

AI can handle the bulk of a Security Analyst's work, but not all of it. As a society, we still do not trust AI enough to take it to the next level: fully trusting AI to take corrective action on the anomalies it identifies. Those actions still require human intervention and judgment.
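A minimal sketch of that boundary, with hypothetical action names: automation can propose a corrective action, but nothing executes until an analyst signs off.

```python
# A minimal sketch of keeping a human in the loop: detection proposes a
# fix, but corrective actions wait for explicit analyst approval.
from typing import Optional

def propose_action(severity: str) -> str:
    """Automation suggests a corrective action based on severity."""
    return "isolate_host" if severity == "high" else "continue_monitoring"

def execute(action: str, approved_by: Optional[str]) -> str:
    """Nothing runs until a named analyst approves the action."""
    if approved_by is None:
        return f"{action}: queued, awaiting analyst review"
    return f"{action}: executed (approved by {approved_by})"

print(execute(propose_action("high"), approved_by=None))
print(execute(propose_action("high"), approved_by="analyst_on_call"))
```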

Biased Decisions and Human Error

It is important to consider that AI can make bad or wrong decisions. Because humans create and train the models behind AI, those models can make biased decisions based on the information they receive.

Models can be steered toward an outcome an attacker desires, and security teams should prepare for malicious insiders who try to exploit AI biases. Such deliberate attempts to influence a model's bias can prove extremely damaging, especially in the legal sector.

By feeding AI false information, bad actors can trick it into implicating someone in a crime. As an example, just last year a judge ordered Amazon to turn over Echo recordings in a double murder case. In instances such as these, a hacker could wrongfully influence ML models and manipulate AI to put an innocent person in prison. As AI is made more human, the likelihood of mistakes will increase.
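A toy example, on purely synthetic data, shows how little it takes: relabeling a slice of the training set biases a simple classifier toward the verdict the attacker wants. Real poisoning attacks are subtler, but the mechanism is the same.

```python
# A toy demonstration of training-data poisoning: an attacker flips a
# portion of "malicious" labels to "benign", biasing the model toward
# the verdict the attacker wants. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # 1 = "malicious", 0 = "benign"

clean = LogisticRegression().fit(X, y)

y_poisoned = y.copy()
malicious_rows = np.where(y == 1)[0]
y_poisoned[malicious_rows[:100]] = 0         # attacker relabels many as benign
poisoned = LogisticRegression().fit(X, y_poisoned)

X_test = rng.normal(size=(1000, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print("clean model accuracy:   ", round(clean.score(X_test, y_test), 3))
print("poisoned model accuracy:", round(poisoned.score(X_test, y_test), 3))
# The poisoned model now under-reports "malicious": the bias fed into
# the training data persists in its decisions.
```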

What’s more, IT decision-makers must take into consideration that attackers are utilizing AI and ML as an offensive capability. AI has become an important tool for attackers, and according to Forrester’s Using AI for Evil report, mainstream AI-powered hacking is just a matter of time.

AI can be leveraged for good and for evil, and it is important to understand the technology’s shortcomings and adversarial potential.

The Future of AI in Cybersecurity

Though it is critical to acknowledge AI’s realistic capabilities and its current limitations, it is also important to consider how far AI can take us. Applying AI throughout the threat lifecycle will eventually automate and enhance entire categories of Security Operations Center (SOC) activity. AI has the potential to provide clear visibility into user-based threats and enable increasingly effective detection of real threats.

IT decision-makers face many challenges when they overestimate what Artificial Intelligence alone can realistically achieve and how it affects their security strategies right now. Security leaders must acknowledge these challenges and truths if their organizations are to reap the benefits of AI today and for years to come.
