With human error at the fore of our security woes, AI offers a way to tackle the problem and let the technology do the work. Whether it can answer all our security questions, however, is a slightly more complex conversation.
It’s no secret that when it comes to security, the biggest weakness is always the human factor. Our propensity for error means that we continue to fall prey both to targeted cyber attacks and to those that flourish as a result of carelessness, lack of education or a basic misunderstanding of how our systems work.
With the rise of AI in security technology and our lives more broadly, there might just be a way to tackle such issues. While the use of AI in security is in many ways still nascent, it has also very quickly become integral to our security infrastructures, both at home and at work. It is described by commentators as no longer just “a nice to have”, but an essential element of security solutions. As such, it’s natural to ask just how ‘intelligent’ the technology is. If it is truly intelligent, to what extent can it address the human factor? If security’s biggest weakness is the mistakes that people continue to make, have we possibly found a way to mitigate that risk by taking humans out of the security equation and using AI instead?
The answer, unfortunately, is not simple, because security is not simple. Its complexity only increases as technology becomes more advanced and the collection of solutions in the market grows in number and scope. This can lead to overload and mismanagement of security solutions, a path that leads naturally to error and everything that comes with it, according to Tim Woods, VP of security alliances at security vendor FireMon.
“Security takes so many different shapes, but it can be an abundance of technology where you don't have the resources to manage it. [There] can be mismanagement of those technologies [and] over time, if they're mismanaged, they become less effective and they open you up to more risk. As complexity in the environments [goes] up, the probability of human error creeping into the equation also goes up.”
The complexity question doesn’t end there. Frank Dickson, research VP of cyber security products at IDC, notes that with digital transformation being top of the agenda for so many organisations, complexity is magnified.
“Everybody's talking about digital transformation. Systems that normally weren’t hooked together are now hooked together. Ways to create better experiences for our customers, such as being more responsive and more agile, mean we're implementing all these solutions. And what that's doing is exponentially increasing the complexity of IT. And complexity is the enemy of security.”
Along with complexity, the power of habit can lead to weaknesses in security, Woods notes. By repeating the same actions in our daily technology routines, we leave ourselves – and often our employers – exposed to the risk of security weakness. For hackers looking for a way in, monitoring targets’ daily routines and habits can help to identify moments of laziness or weakness. And habitual behaviour can also lead to inertia: security has become woven so tightly into the fabric of our everyday lives that many of us simply don’t carry out vital security actions, through lack of effort, desire or recollection.
For example, identity and access management often involves a number of “mundane tasks” that can get overlooked and can expose a firm to security weakness, according to Cheryl Martin, a cyber security partner in EY’s UK Financial Services practice.
“With IAM, some of those more mundane tasks absolutely must be done in a period of time, but sometimes get overlooked because of human error. For example, people leaving an application without declassifying access or thinking ‘It doesn't matter, I'll do that another week’.”
This, she says, is an area where AI absolutely can help to overcome the security weaknesses humans exhibit. “If you bring in AI and you've got the algorithm that says ‘every time someone from HR leaves this application, all of this information must be declassified’, and that’s done automatically, that’s a great way where issues can be overcome to counteract human inertia.”
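The kind of automated rule Martin describes can be sketched in a few lines. The event shape, store and function names below are illustrative assumptions, not any specific IAM product's API – the point is simply that a leaver event triggers revocation immediately, with no human in the loop to forget the task.

```python
# Minimal sketch of an automated deprovisioning rule: when an HR
# "leaver" event arrives, every entitlement tied to that user is
# revoked at once instead of waiting on a manual task.
from dataclasses import dataclass, field


@dataclass
class AccessStore:
    # user -> set of application entitlements (hypothetical structure)
    grants: dict = field(default_factory=dict)

    def revoke_all(self, user: str) -> set:
        """Remove and return every entitlement the user held."""
        return self.grants.pop(user, set())


def on_leaver_event(store: AccessStore, event: dict) -> set:
    """Fires automatically on a leaver event; ignores everything else."""
    if event.get("type") != "leaver":
        return set()
    return store.revoke_all(event["user"])


store = AccessStore(grants={"alice": {"payroll", "hr-portal"}})
revoked = on_leaver_event(store, {"type": "leaver", "user": "alice"})
print(sorted(revoked))  # entitlements that were removed
print(store.grants)     # alice no longer holds any access
```

A real deployment would subscribe such a handler to the HR system's event feed, which is what removes the “I'll do that another week” failure mode Martin highlights.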
AI can also tackle the human habit factor, FireMon’s Woods notes, pointing out that AI can cut out some of the bad habits we form when signing into applications, such as inadvertently and repeatedly exposing passwords or using the same PIN across a variety of gadgets.
“We see different types of multi-factor authentication using biometrics and face recognition and integrating all kinds of different things to make us not repeat some of the same human fail traits that we sometimes introduce into the process,” he explains.
However, there are issues to be found. Woods notes that technologies such as firewalls that are beginning to leverage AI still need to be managed effectively, otherwise they become less effective and open users up to more risk. Further, with security solutions being so “fragmented”, according to Martin, AI may be effective in one part of an organisation but not even implemented in another, leaving significant gaps in security.
“Even if I put AI into one part of the solution… it might do something for me in my own environment, but it isn't going to enable me to look at the much broader area in my end-to-end ecosystems, because I've got multiple different people who are using different security platforms and different security processes. I've got different business risk I'm interacting with, and I may well be sharing my data, so even though I might have the most fantastic AI capabilities, I'm only going to be as good as my weakest link.”
The other, and perhaps most concerning, issue is that where we make advances in AI and security, so too do the hackers. The use of emerging technologies is not limited to ‘the good guys’. Just like us, cyber criminals go to work every day to make money. They also need to find ways to automate their systems, be more efficient and, ironic as it may sound, manage their business risk.
“AI isn't just being used by the good guys,” IDC’s Dickson points out. “We think about the bad guy in the abstract, but realistically, they’re business people, so they're looking to get your critical data and profit from it. And the more time they spend trying to get to your critical data, the less money they're making, so the quicker they can do it, the better.”
While it’s clear that AI has a role to play when it comes to managing security and eliminating human error from the process, it’s questionable to what extent it can eradicate it altogether. AI can carry out the repetitive tasks that humans slip up on because they get careless, bored, forgetful or lazy. It can also operate security checks and balances at lightning speed in comparison to humans, while noticing erroneous entries into applications or irregular activity against documented behaviours. But all commentators agree that, ultimately, security needs rational thought.
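Spotting “irregular activity in documented behaviours” can be illustrated with a simple baseline-deviation check. This is a deliberately minimal sketch on hypothetical data – real products use far richer behavioural models – but it shows the shape of the idea: score new activity against a user's historical norm and flag sharp departures for a human to review.

```python
# Illustrative anomaly check: flag a value that lies far outside a
# user's historical baseline (hypothetical daily login counts).
import statistics


def is_irregular(history, value, threshold=3.0):
    """Return True if `value` is more than `threshold` standard
    deviations away from the mean of `history`."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold


# Two weeks of daily login counts for one user (made-up data).
logins = [4, 5, 3, 6, 4, 5, 4, 5, 6, 4, 3, 5, 4, 5]
print(is_irregular(logins, 5))   # a typical day -> False
print(is_irregular(logins, 60))  # a sudden spike -> True
```

The machine does this tirelessly and at speed; deciding what a flagged spike actually means for the business is exactly the rational judgement the commentators say still belongs to people.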
IDC’s Dickson notes that “the more AI tools we use, the better”, but he also points out that AI tools are designed to replicate the techniques and procedures of “our smartest security professionals”, which means that automating those processes assumes we know both what the attack will be and where it is coming from – and both require rational thought.
“It's a known process, and the issue with it is that the bad guys are always adapting. And because they're always adapting, we constantly have to be looking for new and different and crazy ways that they're going to attack us. We have to look for new openings,” he points out. “The attack surface is constantly changing and sometimes we don't see where the new vulnerabilities are. So you're always going to have to have people involved in the process to do that unstructured thinking to look for these kinds of things. AI can make us more efficient, it can make us more effective, but it can never be the solution.”
Martin agrees, pointing out that rational thinking is essential for effective security: there is no simple ‘yes’ or ‘no’ answer in security, which is more or less all that AI can currently offer.
“Do we think that security could be more effective? I think it can be, absolutely,” Martin notes. “However, I think there is a concern from some organisations that if they allow the robots to run the organisation, does the robot truly understand the business risk that the business is facing and can it dynamically change as the threats and the threat profile dynamically change? And I would say currently, with AI version 1.0, no. I think it has a place to absolutely solve some issues, but it's some of them. I think that's the key.”