Smart shields: leveraging AI in defensive cyber security

October 2024 | SPOTLIGHT | RISK MANAGEMENT

Financier Worldwide Magazine

October 2024 Issue


As the world becomes increasingly reliant on digital infrastructure, the volume of sensitive data flowing through the internet has surged and will only continue to increase. The rise of remote work in response to the coronavirus (COVID-19) pandemic has further accelerated this trend and added risk factors, such as prevalent use of unsecured devices and logins from atypical locations. Such circumstances create fertile grounds for cyber criminals eager to exploit vulnerabilities, extort companies and wreak havoc.

For instance, in May 2024, the hacking group ShinyHunters claimed responsibility for stealing the personal information of over 500 million Ticketmaster customers, including credit card details and ticket sales data, and demanded a ransom from the company. Similarly, in April 2024, hackers used stolen login credentials to compromise approximately 576,000 Roku accounts. Roku discovered the breach while monitoring account activity in the wake of an earlier attack that year.

Data breaches can inflict enormous damage on a company – eroding organisational trust and costing a small fortune. In 2023, the global average cost of responding to a data breach was $4.45m, a 15 percent increase over three years. This financial burden is compounded by the rising cost of cyber insurance: average annual premiums rose to $1,589 in 2021, then increased by another 25 percent on average in 2022, with some policyholders experiencing rate hikes of up to 80 percent year over year.

Against this backdrop, artificial intelligence (AI) has emerged as a potentially revolutionary tool in defensive cyber security. Able to ingest massive amounts of information and reach appropriately reasoned decisions, AI tools could be applied in contexts such as vulnerability management, threat detection, threat alerting and incident response to help security teams develop a more proactive and adaptive defensive strategy. Enhanced defences could, in turn, improve a company’s compliance with its privacy and security obligations under applicable laws. And more efficient processes could significantly slow, or even reverse, the quickly ballooning cost of defensive cyber security, making robust protection more accessible to organisations of all sizes.

The double-edged sword of AI

Before diving into the details of leveraging AI in defensive cyber security, it is important to recognise that most technologies are a double-edged sword, and AI is no exception. In the past few years, threat actors have quickly caught on to the power of AI for developing more sophisticated cyber attacks with less effort than ever before. They have since used these tools to auto-generate highly personalised phishing communications, deepfake individuals with authority to approve large financial transactions, write streams of new malware, evade traditional security defences and adapt quickly to new countermeasures.

In one alarming case, an employee at a multinational financial firm based in Hong Kong was deceived into transferring $25m to scammers who leveraged AI-enabled deepfake technology. The scammers initially made their transaction request via email, posing as the firm’s chief financial officer (CFO). When the employee questioned the email’s legitimacy, the scammers staged a sophisticated, interactive video call in which the CFO’s face and voice were deepfaked to verbally ‘authorise’ the transaction. Trusting the apparent genuineness of the video call, the employee completed the transaction.

Harnessing AI for cyber defence

Despite such inevitable misuse, however, AI may also become a revolutionary tool in defensive cyber security to counter these threats. AI tools will likely not supplant human security experts outright, but will instead serve as invaluable assets that security teams can incorporate into their defensive arsenals. The rapidly proliferating volume of data, number of threat actors and complexity of networks render manual defence increasingly impractical, if not impossible. By integrating AI tools across a variety of contexts, organisations can alleviate this pressure, bolster defences, enhance human efficacy and improve legal compliance.

Vulnerability management. Vulnerability management is the process of identifying and remediating weaknesses in an organisation’s network and systems. At present, it involves scanning the current configuration of those systems and comparing it against the list of Common Vulnerabilities and Exposures (CVEs), which are crowdsourced and submitted to a central organisation. Approval of a new CVE can take days to months, depending on its complexity. AI tools can enhance vulnerability management by proactively predicting potential vulnerabilities based on real-time and historical data and patterns, drawn both from the company’s own systems and from public information about recent exploitations. Once a vulnerability is predicted or identified, AI tools could also generate guidance on how to remediate it, or even implement the remediation automatically upon human approval. This approach relieves bandwidth stress, effectuates near-immediate remediation and improves accuracy.
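
To make this concrete, the short Python sketch below shows what an automated triage step of this kind might look like: it matches a locally cached vulnerability feed against an asset inventory and ranks the hits by a blended risk score. The feed file, field names and scoring weights are illustrative assumptions rather than any particular vendor’s product or API.

# Minimal sketch of automated vulnerability triage. The feed path,
# field names and scoring weights below are hypothetical.
import json

def load_cve_feed(path: str) -> list[dict]:
    """Load a locally cached vulnerability feed (e.g., a scanner export)."""
    with open(path) as f:
        return json.load(f)

def risk_score(cve: dict) -> float:
    """Blend severity with exploitation signals; the weights are assumptions."""
    base = cve.get("cvss", 0.0)                             # 0-10 severity
    exploited = 2.0 if cve.get("known_exploited") else 0.0  # active exploitation
    fresh = 1.0 if cve.get("days_since_disclosure", 999) < 30 else 0.0
    return base + exploited + fresh

def triage(feed: list[dict], installed: set[str]) -> list[dict]:
    """Return feed entries affecting installed packages, riskiest first."""
    hits = [c for c in feed if c.get("package") in installed]
    return sorted(hits, key=risk_score, reverse=True)

if __name__ == "__main__":
    inventory = {"openssl", "log4j", "nginx"}   # e.g., from a CMDB export
    for cve in triage(load_cve_feed("cve_feed.json"), inventory)[:10]:
        print(f'{cve["id"]}: score={risk_score(cve):.1f} ({cve["package"]})')

In practice, the ranked output would feed a remediation queue, with a human approving any automated fix, as the paragraph above envisions.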

Threat detection. AI can also improve early and accurate detection of actual threats by quickly reviewing massive volumes of data and scrutinising irregular code, network activities or user behaviours. For example, AI tools may analyse code uploaded to a company’s production environment against real-time threat intelligence from public sources about recent malware variants that risk system harm or provide unauthorised remote access. Likewise, AI tools may continuously learn what ‘normal user activity’ looks like for a given organisation – and how it changes over time or across users – to more accurately detect deviations that could signal an intrusion or malicious activity. Lastly, AI may effectively detect threats cloaked inside phishing emails, smishing messages, scam calls or even scam video calls by analysing more comprehensive metadata, content, links, image quality and voice modulation, and proactively block such communications before they reach the user. Given that phishing and similar social engineering tactics remain a predominant cause of data breaches, the latest detection and interception tools could significantly improve a company’s security posture and legal compliance.
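
The behavioural side of this idea can be illustrated with a standard anomaly detection model. The Python sketch below is a minimal example assuming just three session features (login hour, data volume and hosts touched); it trains scikit-learn’s IsolationForest on historical activity and flags a session that deviates sharply from the learned baseline.

# Minimal sketch of behavioural anomaly detection. The features and
# the simulated 'normal' activity below are assumptions for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Assumed features per session: [login hour, MB transferred, hosts touched]
normal = np.column_stack([
    rng.normal(10, 2, 1000),   # daytime logins
    rng.normal(50, 15, 1000),  # typical data volume
    rng.poisson(3, 1000),      # a handful of internal hosts
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3 a.m. login moving 900 MB across 40 hosts should stand out.
suspicious = np.array([[3, 900, 40]])
print(model.predict(suspicious))            # -1 => flagged as anomalous
print(model.decision_function(suspicious))  # lower => more anomalous

Real deployments would draw on far richer telemetry and retrain continuously as ‘normal’ shifts, but the principle is the same: learn the baseline, then score deviations.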

Threat alert. Upon detecting potential threats, AI tools can also alert security teams more effectively. Greater sophistication and the ability to analyse mass contextual data make the alerting process quicker and more accurate, with fewer false positives. AI tools may also give security teams more contextual relevance, helping them accurately prioritise and understand threats, and may even provide actionable insights to facilitate further investigation or remediation.
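
As a simple illustration of context-aware prioritisation, the sketch below blends a detector’s confidence with contextual signals, here asset criticality and user privilege, so the most consequential alerts surface first. The fields and weights are hypothetical and would need tuning to a given organisation.

# Minimal sketch of context-aware alert prioritisation.
# Field names and weights are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str
    confidence: float       # detector score, 0 to 1
    asset_criticality: int  # 1 (low) to 5 (crown jewels)
    privileged_user: bool
    context: dict = field(default_factory=dict)

def priority(alert: Alert) -> float:
    """Weighted blend of signals; weights would be tuned per organisation."""
    score = 0.5 * alert.confidence
    score += 0.3 * (alert.asset_criticality / 5)
    if alert.privileged_user:
        score += 0.2
    return score

alerts = [
    Alert("edr", 0.9, 2, False, {"host": "kiosk-07"}),
    Alert("idp", 0.6, 5, True, {"note": "impossible travel for an admin"}),
]
for alert in sorted(alerts, key=priority, reverse=True):
    print(f"{priority(alert):.2f}  {alert.source}  {alert.context}")

Note that the lower-confidence identity alert outranks the higher-confidence endpoint alert because of its context, which is exactly the kind of prioritisation judgment described above.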

Incident response. Incident response often involves managing many different workstreams, tasks, goals and stakeholders. Moreover, organisations in most jurisdictions have a legal obligation to act swiftly to investigate and, if appropriate, notify impacted individuals and regulators. AI can streamline and expedite incident response by automating and optimising key workstreams. For example, AI can rapidly analyse forensic records, logs and network traffic to ascertain and organise forensic investigation details, such as the attack entry point, impacted systems and accounts, the threat actor’s path, indicators of compromise, and links to known threat actors. Likewise, the current process for analysing impacted data still relies on time-intensive and costly manual review even after automatic filtering tools are applied; as AI becomes more reliable, it could reduce or even eliminate the need for such manual review. The results of such AI-facilitated impacted data analysis, along with AI-generated forensic findings, can then be fed into documents to help assess legal obligations and, if necessary, prepare notifications to impacted individuals and regulatory authorities within the required, and often very short, timeframes.
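
The forensic portion of that workflow can be sketched in a few lines: scan raw logs for known indicators of compromise (IOCs) and assemble a first-pass timeline for the response team. The log format and indicator list below are invented for illustration; a real investigation would ingest many log sources and far larger threat intelligence feeds.

# Minimal sketch of automated forensic triage. The log format and the
# IOC list are hypothetical examples, not real intelligence data.
import re

IOCS = {
    "198.51.100.23": "suspected C2 address from a threat intel feed",
    "mimikatz": "credential-theft tooling",
}
LINE = re.compile(r"^(?P<ts>\S+) (?P<host>\S+) (?P<msg>.*)$")

def triage_log(lines):
    """Yield (timestamp, host, matched IOC, analyst note) for each hit."""
    for raw in lines:
        m = LINE.match(raw)
        if not m:
            continue
        for ioc, note in IOCS.items():
            if ioc in m["msg"]:
                yield m["ts"], m["host"], ioc, note

sample = [
    "2024-05-01T03:12:09Z web01 outbound connection to 198.51.100.23:443",
    "2024-05-01T03:14:40Z dc01 process started: mimikatz.exe",
]
for ts, host, ioc, note in sorted(triage_log(sample)):  # chronological order
    print(f"{ts} {host}: {ioc} ({note})")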

Tech and regulation

As cyber threats become more frequent and sophisticated, the role of AI in predicting, detecting, alerting on and responding to such threats becomes increasingly vital. Existing cyber security tools in the marketplace already leverage AI to some extent. As the technology advances and matures, the number of defence tools leveraging AI is expected to skyrocket. With such growth, however, will likely come commensurate regulation governing the use of AI and the continued safeguarding of personal or sensitive information. Data protection laws at the state, federal and international levels already require companies to implement ‘reasonable’ security practices. At some point, perhaps even in the near future, incorporating AI tools may become the standard of ‘reasonableness’ for some companies, depending on their size, industry and the type of data they process. Yet these same laws espouse the importance of data minimisation – the principle that a company should not process more personal information than is necessary for the stated purpose – which may sit in tension with AI tools that rely on mass processing of information.

Aside from general data protection laws, we are also starting to see new AI-specific laws, which limit the contexts in which AI may be deployed and the ways in which AI may impact individuals, their lives and their rights. These laws usually focus on sensitive contexts like employment and criminal justice, though it is possible they may in the future touch the cyber security context as well.

Overall, while AI presents new challenges from technical and regulatory standpoints, its potential as a formidable ally in defensive cyber security is undeniable. As these AI-enabled tools come online, companies will need to be thoughtful in evaluating not only the tool’s effectiveness, but also its appropriateness in light of company operations, data processing practices and legal obligations. But by thoughtfully harnessing AI’s capabilities, organisations can fortify their defences against evolving cyber threats, ensure compliance with their obligations, protect sensitive data in an increasingly interconnected world, and prevail in the unending arms race that is cyber security.


Matthew Baker is a partner and Michelle Molner and Lucy Soyinka are associates at Baker Botts LLP. Mr Baker can be contacted on +1 (415) 291 6213 or by email: matthew.baker@bakerbotts.com. Ms Molner can be contacted on +1 (512) 322 5415 or by email: michelle.molner@bakerbotts.com. Ms Soyinka can be contacted on +1 (214) 953 6541 or by email: lucy.soyinka@bakerbotts.com.

© Financier Worldwide



