A new dawn: AI and cyber security
February 2024 | FEATURE | RISK MANAGEMENT
Financier Worldwide Magazine
February 2024 Issue
Cyber security programmes help organisations protect the huge quantities of data they store and use from theft or misuse. Such data includes personally identifiable information (PII), protected health information (PHI), intellectual property (IP), proprietary data and trade secrets, among others. Without robust cyber security, organisations cannot defend against phishing schemes, ransomware attacks, identity theft and other forms of data breach and cyber attack, or the financial and reputational losses that follow.
Applications and benefits of AI in cyber security
Artificial intelligence (AI) and machine learning (ML) are being used in cyber security to tackle huge volumes of malware, detect spam and business email compromises, and analyse network traffic, among other things. They are being deployed alongside, and to enhance, more traditional tools such as antivirus protection, data-loss prevention, fraud detection, identity and access management, intrusion detection and risk management. According to Acumen Research, the global market for AI-based cyber security products was about $15bn in 2021, and is expected to explode to roughly $135bn by 2030.
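To give a flavour of the ML-based spam and email-compromise detection mentioned above, the sketch below trains a simple text classifier on a toy labelled corpus. It is a minimal illustration only, assuming Python with scikit-learn; production systems rely on far richer features, larger training sets and continuous retraining.

```python
# Minimal sketch of ML-based spam/BEC detection, assuming scikit-learn
# is installed and a labelled corpus of messages is available.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data: 1 = suspicious, 0 = legitimate (illustrative only)
messages = [
    "Urgent: wire transfer needed today, reply with account details",
    "Your invoice for March is attached, thanks for your business",
    "Password expired - click here to verify your credentials now",
    "Team meeting moved to 3pm, agenda unchanged",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a simple linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score a new message; the probability can feed an alert-triage queue
incoming = ["Please verify your account immediately or it will be locked"]
print(model.predict_proba(incoming)[0][1])  # probability of being suspicious
```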
“While AI and ML are not new technologies in the space, advancements and innovations within AI and ML are vast and creating more opportunity than ever,” says David Spillane, systems engineering director at Fortinet. “Key benefits of their use can include overall cyber security cost reductions, faster, better and smarter cyber security intelligence leading to advantages over attackers, and more opportunities for the future workforce to take on higher-value tasks.”
The ability to analyse enormous data sets and find patterns means AI can help detect attacks more accurately than humans. It generates fewer false positives and allows responses to be prioritised based on real-world risks. AI can also simulate social engineering attacks, helping cyber security teams to spot potential vulnerabilities before criminals exploit them, and it can rapidly analyse incident-related data so that swift action can be taken to contain a threat.
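To make that pattern-finding concrete, the sketch below trains a simple anomaly detector on synthetic network-flow features and ranks new flows by anomaly score, so the most suspicious traffic surfaces first. It assumes Python with scikit-learn and NumPy, and the features and thresholds are illustrative assumptions rather than any vendor's method.

```python
# Sketch of anomaly-based attack detection on network flow records,
# assuming scikit-learn and simple numeric features per flow.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" traffic: [bytes_sent, packets, duration_seconds]
normal = rng.normal(loc=[5000, 40, 2.0], scale=[1500, 10, 0.5], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New observations: one typical flow, one exfiltration-like outlier
flows = np.array([[5200, 42, 2.1], [900000, 30, 1.9]])
scores = detector.decision_function(flows)  # lower = more anomalous

# Rank alerts so analysts handle the most anomalous flows first
for flow, score in sorted(zip(flows.tolist(), scores), key=lambda p: p[1]):
    print(f"flow={flow} anomaly_score={score:.3f}")
```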
“AI, particularly when powered by well-trained large language models (LLMs), can rapidly detect and report misconfigurations and vulnerabilities and assist in combatting malicious attacks faster than humans ever could,” says Matt Hillary, vice president of security and chief information security officer at Drata. “When configured and trained accordingly, AI can help suggest and even support the remediation of vulnerabilities and response to security alerts.
“When allowed access to data sources like open source intelligence (OSINT), AI will have the unmatched ability to create a tailored dossier on how to potentially attack – or protect – an organisation. Cyber security professionals can use this capability to ultimately pit the ‘bots against the bots’ for a more robust stance in battles ahead,” he adds.
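As a hedged illustration of the LLM-assisted review Mr Hillary describes, the snippet below sends a deliberately misconfigured settings fragment to a chat model and asks it to flag issues and suggest remediations. It assumes the OpenAI Python SDK with an API key in the environment; the model name, prompt and configuration fragment are placeholders for this sketch, not his firm's tooling.

```python
# Hedged sketch of LLM-assisted misconfiguration review, assuming the
# OpenAI Python SDK and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

config_snippet = """
s3_bucket:
  public_read: true        # world-readable storage
security_group:
  ingress: 0.0.0.0/0:22    # SSH open to the entire internet
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice for this sketch
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. List misconfigurations "
                    "in the given config and suggest remediations."},
        {"role": "user", "content": config_snippet},
    ],
)
print(response.choices[0].message.content)
```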
Perhaps the most important application of AI is to help companies predict and prevent cyber attacks before they happen. “AI is a force for good and businesses can see a huge increase in cyber security capability when using it,” notes Mr Spillane. “AI can do things better, faster and smarter, and this is true when it is being used to prevent cyber attacks. It can provide vital analysis, information and threat identification more quickly to cyber security professionals, allowing them to predict what attacks might happen and to protect against them, as well as making predictions itself based on data. Staying one step ahead of threat actors is easier with AI.”
For Dane Sherrets, solutions architect at HackerOne, the emergence of generative artificial intelligence (GenAI) is one of the most important developments in the AI space of late. “Despite periodic advancements in AI research since the mid-20th century, the latest excitement around AI has been fuelled by significantly increased accessibility, largely due to the democratisation of AI tools and resources,” he says. “GenAI is popping up in all manner of software every day, with many businesses already announcing AI-powered features and user experiences.”
Humans overseeing machines
Though there are many positives to be derived from AI applications, companies need to consider and manage the risks when deploying them. “The challenge comes with implementation, and any organisation using GenAI must have security and confidentiality embedded in its approach,” urges Mr Spillane. “Giving GenAI LLMs access to large data sets comes with the risk that if those repositories of data are breached, it could be at a greater scale than the threats you are trying to protect against.
“It is essential that organisations are conservative with the data they allow GenAI LLMs to ingest, particularly for cyber security, and take their time with integration to ensure it is done safely and securely with human oversight at every step,” he adds.
Companies therefore need to ensure they deliver regular, bespoke user training alongside the systems they put in place. “As with any new technology added to an organisation’s stack, it cannot be allowed to compromise cyber security posture,” says Mr Spillane. “An important part of preventing this is ongoing user training and education to improve worker understanding around the new kinds of threats out there, and how AI systems could be attacked. Cyber security might be helped by AI, but it still requires humans, and so continued training is vital.”
Clearly, at least as far as cyber security is concerned, AI is not here to replace human intervention. “Potential risks of overreliance on AI in cyber security include taking actions or making decisions that do not mirror the same level of broad awareness, depth of understanding, creativity or integrity that a human has,” says Mr Hillary.
The other side of the coin
But even as companies expand and hone their integration of AI, the technology is of course also being used by cyber criminals to create new attack vectors and improve the success rate of existing ones. According to HackerOne’s Hacker-Powered Security Report 2023, 53 percent of hackers use AI in some way, with 61 percent saying they will use and develop hacking tools from GenAI to identify more vulnerabilities.
“While advances in GenAI may actually eradicate some common vulnerability types as the technology helps improve cyber security defences, other exploits and cyber attack techniques will explode in effectiveness,” warns Mr Sherrets. “Attacks like social engineering via deepfakes will be more convincing and fruitful than ever. GenAI lowers the barrier to entry for cyber criminals who can attack systems without having to know how to code. In addition, phishing is getting even more convincing.”
As Mr Hillary points out, cyber criminals readily use OSINT to gather information about a company, which is the first stage of a breach. “An attacker will do extensive research using OSINT and other means to evaluate an organisation’s weak spots and develop a dossier on how to potentially hack a company,” he explains. “They will look for the systems that are public facing, and the vulnerabilities associated with those systems. AI can quickly and easily answer the questions needed to create an attack plan to go after a company.
“To counteract that, organisations must keep on top of evaluating their weak spots and regularly assess their cyber security status, to make sure there are no new or public-facing vulnerabilities as an easy target for potential attackers,” he adds.
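One simple way to act on that advice is to routinely check which common service ports on one's own hosts are reachable. The sketch below performs a basic TCP connect check using only the Python standard library; the hostname is a placeholder, and such checks should only be run against systems you are authorised to test.

```python
# Minimal sketch of checking one's own public-facing exposure: a TCP
# connect check of common service ports on a host you are authorised
# to test. Standard library only; "example.com" is a placeholder.
import socket

HOST = "example.com"  # replace with a host you own
COMMON_PORTS = {21: "ftp", 22: "ssh", 80: "http", 443: "https", 3389: "rdp"}

for port, service in COMMON_PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        # connect_ex returns 0 when the port accepts connections
        is_open = sock.connect_ex((HOST, port)) == 0
        status = "open" if is_open else "closed/filtered"
        print(f"{HOST}:{port} ({service}) -> {status}")
```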
Threat to critical infrastructure
One area in which cyber attacks have been increasing is critical infrastructure. These attacks are occurring amid a rise in state-aligned groups, deepening geopolitical tensions and growing hostile cyber activity generally. In the UK, for example, the National Cyber Security Centre (NCSC) has warned of the “enduring and significant” threat to the country’s critical infrastructure.
But according to Mr Spillane, AI can be used to enhance the resilience of critical infrastructure. “By using AI to strengthen the overall cyber security posture of the organisations in charge of this infrastructure, the infrastructure itself will, by proxy, be more secure,” he says. “With the sheer complexity of critical infrastructure, AI can also help predict where vulnerabilities lie in these vast networks more quickly and easily than human teams.”
In the view of Mr Sherrets, AI has the capability to help teams detect threats faster and with more accuracy. Since GenAI can automate repetitive tasks, humans are freed up to focus on more strategic or higher-priority activities for risk mitigation. “However, if we move too quickly with GenAI, I can see overreliance on its application for vulnerability discovery and disclosure, adding more friction to security team efforts,” he says. “The dangers include ‘confidently wrong’ false positives that slow security teams down during critical stages of vulnerability remediation or false negatives that allow critical vulnerabilities to go overlooked.
“The ingestion of confidential vulnerability information could also place critical infrastructure at risk if these details fall into the hands of cyber criminals,” he adds.
Regulating AI
In this quickly evolving space, there is growing urgency to regulate AI. But regulation needs to strike a balance between fostering AI innovation and safeguarding societal interests. To date, critics argue, regulations in many jurisdictions do not sufficiently address the potential risks and ethical concerns posed by advances in AI, and it is certainly difficult for rules to keep pace with emerging AI-enabled risks.
“AI regulation is in its infancy and the rules currently in place will soon be updated and adapted – indeed, they must be if we are going to ensure the continued correct use of AI,” argues Mr Spillane. “As such, current regulations are not putting companies off AI implementation, which is a double-edged sword. While it is great that organisations want the benefits this technology can provide, most are not taking on AI in a secure way.”
Going forward, collaboration will be needed to explore the boundaries and implications of new AI technologies. “The fact that global tech companies such as DeepMind and OpenAI agreed to have their latest AI products tested before launch is a great first step toward safeguarding the technology,” suggests Mr Sherrets. “This global collaboration will help inform how to best move forward in regulating technology. However, GenAI is so new it is challenging to predict all of the effects it may have, good and bad. That means it will also take time to understand its full impact on cyber security and what regulations will result in the safest adoption and use of the technology.
“If we do not fully understand the full effects of GenAI, we cannot fully understand what regulation will help,” he continues. “That is why I think any regulations that encourage slow and steady adoption may have the best effect in the short term. Slower, safer adoption of this new emerging technology ensures the use of these models will not significantly increase an organisation’s security risk.”
Innovation in defence
In the coming years, companies will continue to experiment with AI applications in their cyber security programmes. Meanwhile, malicious actors will find bolder, more sophisticated ways to use the technology for their own ends.
We are still in the early days of understanding the risks and benefits of AI. But the digital world is more connected than ever, and it will take a collective effort from the cyber security community and the private and public sectors to contain AI-related risks.
Organisations will need to determine whether they are ready for AI to take a leading role in their defence against cyber attacks. They must be aware of AI’s potential risks and benefits, and take steps to address the challenges. Innovation on both sides of the equation must be considered to maximise the value of AI in the cyber security space.
© Financier Worldwide
BY Richard Summerfield