The Evolving Cybersecurity Battlefield: How Generative Artificial Intelligence May Change Attacks and Defenses

|

The potential of generative artificial intelligence (“GenAI”) has captured the imagination of people throughout the world.  GenAI is a type of artificial intelligence (“AI”) that generates new content and ideas based on learning from large data models.  ChatGPT is perhaps the most recognized example of GenAI.  With barriers to usage dropping, organizations have found that these tools foster additional creativity and promote efficiency, and there is a growing view that GenAI will ultimately transform both the workplace and our everyday lives. 

Unfortunately, threat actors have also found GenAI to be a new target and a valuable tool to launch cyber attacks.  Fortunately, the cybersecurity community has been working to address the new threats and vulnerabilities that GenAI poses.  Cybersecurity best practices related to GenAI are beginning to emerge and defenders are finding that they too can leverage this new tool to ensure a successful defense. 

GenAI Uses in Attacks

Earlier this year, the U.S. Department of the Treasury released a report identifying cyberthreat actors’ uses of AI, including for social engineering, malware/code generation, vulnerability discovery, and disinformation.[1]  Although these areas are not new to the cybersecurity community, AI has increased the sophistication of attacks and has lowered barriers to successfully leveraging them.  Social engineering, particularly phishing using identity impersonation, is one example.  Earlier this year, an employee of a multinational company was tricked into transferring $25 million to threat actors who posed as the company’s CFO and other known staff members in a video conference call using deepfake technology.[2]  The employee’s initial concerns were assuaged because those on the call looked and sounded like people he recognized.[3] 

In addition to increased sophistication in phishing attacks, threat actors are using GenAI to increase their productivity in launching attacks, including conducting research, identifying targets, and assisting in developing code.  OpenAI, which developed ChatGPT, recently reported disrupting more than 20 cyber threat operations and networks in 2024 that attempted to use OpenAI.[4]  In doing so, OpenAI published a report identifying the key insights from its analysis to help companies understand and identify suspicious behavior.[5]  While OpenAI reports they have not yet seen evidence that threat actors have had meaningful progress in developing emerging malware, they do report that threat actors are using GenAI tools to write code and to research how they can evade detection.[6] 

Addressing GenAI Vulnerabilities

To address the new threats and vulnerabilities, cybersecurity teams have been busy developing new threat models[7] for GenAI applications developed and used by organizations.  Industry groups like Open Worldwide Application Security Project (“OWASP”)[8] have identified the top vulnerabilities related to GenAI applications,[9] which security teams can apply to threat modeling exercises to ensure the usage of appropriate mitigating controls to lower related information security risk.  As with other technologies, cybersecurity teams should not look at GenAI applications in a vacuum but rather should analyze and apply controls to upstream data inputs and outputs that are sent downstream. 

The technique of prompt injection is a vulnerability that has garnered a significant amount of attention.  Prompt injection is a specific type of cyberattack whereby a threat actor inserts malicious input into a GenAI model to bypass security controls, allowing the threat actor to manipulate the GenAI system.  Conceptually, prompt injections are like SQL injections, which have been used by threat actors for years to compromise such things as online web applications.  This could include the return of offensive content that the tool has otherwise been instructed to withhold, automatically applying a high score to a test submitted for a grade, or causing a chatbot to perform unauthorized acts on a website.  Cybersecurity teams that protect environments leveraging GenAI must address the security of overall environments, including upstream systems that control the input (e.g., detecting abnormal prompts) and systems that use the output (e.g., detecting abnormal output).  In other words, cybersecurity teams should borrow from best practices already implemented to address similar types of attacks (e.g., SQL injection). 

Using GenAI to Improve Information Security Defense

Although GenAI has introduced significant challenges for cybersecurity teams, it also heightened capabilities for a successful defense.  Cybersecurity defenders can use the same tools as cybersecurity threat actors.  Cybersecurity teams can quickly identify the potential to use new GenAI tools when performing threat modeling and risk assessment exercises.  These new tools can serve as mitigating controls for existing and arising risk.  For example, GenAI tools will continue to assist with analysis of logs, which are a critical source of information in a cybersecurity investigation, and to achieve faster review of content upstream to a particular technology.  Moreover, GenAI can be applied to identify output that is abnormal, potentially identifying threat actors’ attempts to exfiltrate data.  Organizations are even finding it helpful to use GenAI to impute the correct function to execute in response to security alerts and metrics to help security operations decrease the review, analysis, and response times. 

Conclusion

The integration of GenAI into the field of cybersecurity may present a double-edged sword.  In many ways, it enables threat actors to leverage new technologies to create increasingly sophisticated attacks that may become more difficult for organizations to detect.  But it also has the potential to be a powerful tool in the fight against cybersecurity threats, serving as a proactive defense mechanism against evolving threats.  Organizations can proactively leverage GenAI’s ability to identify, evaluate, and mitigate risk.  Reactively, GenAI can be used to identify abnormal behavior, evaluate vast amounts of data, and execute appropriate responses to attacks.  As GenAI is increasingly integrated into the workplace, the agility and productivity of AI-driven solutions will become increasingly vital to securing organizations’ information systems.  By turning GenAI into a sword, organizations can better defend their systems and sensitive information and strengthen their overall security posture.

[1] Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, U.S. Department of the Treasury (Mar. 2024), at 16 (https://home.treasury.gov/system/files/136/Managing-Artificial-Intelligence-Specific-Cybersecurity-Risks-In-The-Financial-Services-Sector.pdf).

[2] Heather Chen & Kathleen Magramo, Finance Worker Pays Out $25 Million After Video Call With Deepfake “Chief Financial Officer,” CNN (Feb. 4, 2024) (https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html).

[3] Id.

[4] Influence and Cyber Operations: An Update, OpenAI (Oct. 2024), at 3 (https://cdn.openai.com/threat-intelligence-reports/influence-and-cyber-operations-an-update_October-2024.pdf).

[5] Id.

[6] Id. at 4, 15.

[7] Threat modeling is a process for identifying, evaluating, and mitigating risk in systems or applications.

[8] OWASP is a nonprofit that produces free and open resources in the fields of internet of things (“IoT”), system software and web application security.

[9] Top 10 for LLMs and Generative AI Apps, OWASP, https://genai.owasp.org/llm-top-10/.