Cybersecurity practitioners are convinced they need AI — now comes the hard part

Simon Howe

Like other domains, cybersecurity has been eyeing the promise of artificial intelligence (AI) and machine learning (ML) algorithms for some time.

Even back in 2019, nearly two-thirds of respondents to a Capgemini survey thought AI would ‘help identify critical threats’. Additionally, 73% of Australian organisations believed they would eventually be unable to respond to attacks without the assistance of AI.

Analysis by PwC since then confirms that AI is ‘making organisations more resilient to cybercrime’.

Since then, certain cybersecurity functions have steadily moved to harness AI/ML. It is now particularly prevalent in malware detection, and increasingly used to automate analysis and decision-making in data-intensive areas like incident detection and response.

But cybersecurity is also a particularly challenging space for AI/ML: it is fast-moving, and the stakes are high. Algorithms put to work in security must themselves be trustworthy and secure, and that is not an easy problem to solve.

Through our work in this space over the past seven-plus years, some ground rules have emerged. In particular, the keys to success include clean data, a sound business case, and the ongoing involvement of domain and technical experts.

Getting these right will solve many of the current challenges and bring cybersecurity functions closer to the AI-augmented operating model they need.
