Editorial
The New Age of Security
By Sidra Kamal
The evolution of Artificial Intelligence (AI) and Machine Learning (ML) has been a cornerstone of digital transformation over the past decade. AI and ML have advanced rapidly from supervised learning through unsupervised, semi-supervised, reinforcement, and deep learning. The latest frontier in AI technology is Generative AI (GenAI), built on deep neural networks that learn the patterns and structures of large training corpora in order to generate similar new content. GenAI can produce various forms of content, including text, images, sound, animation, source code, and other data types.
The launch of ChatGPT (Generative Pre-trained Transformer) by OpenAI in November 2022 significantly disrupted the AI/ML community. ChatGPT demonstrated the power of GenAI by reaching the general public, revolutionizing perceptions of AI/ML. According to Netskope’s Cloud and Threat Report 2024, ChatGPT was the most popular generative AI application in 2023, accounting for 7% of enterprise usage. The tech industry is now racing to develop sophisticated Large Language Models (LLMs) capable of human-like conversation, exemplified by Microsoft’s Bing Chat (built on OpenAI’s GPT models), Google’s Bard, and Meta’s LLaMA. Within two months of its release, ChatGPT had reached an estimated 100 million users, suggesting widespread use of and familiarity with GenAI tools.
Impact of GenAI on Cybersecurity and Privacy
AI has largely replaced traditional rule-based defenses with more adaptive technology. At the same time, the evolving digital landscape is raising the sophistication of cyber threat actors. Traditionally, cyberspace faced high volumes of relatively unsophisticated intrusion attempts; AI-aided attacks have ushered in a new era, transforming attack vectors and making cyber offenders markedly more effective. Consequently, GenAI has attracted significant interest from the cybersecurity community for both defense and offense.
GenAI tools like ChatGPT can help cyber defenders safeguard systems from malicious intruders. These tools leverage information from LLMs trained on extensive cyber threat intelligence data, including vulnerabilities, attack patterns, and indicators of attack. Cyber defenders can use this information to enhance their threat intelligence capabilities by extracting insights and identifying emerging threats. GenAI tools can also analyze large volumes of log files, system output, or network traffic data during cyber incidents, speeding up and automating the incident response process. Additionally, GenAI models can foster security-aware behavior by training people to recognize sophisticated attacks, and can aid secure coding practices by generating secure code and producing test cases.
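To make the log-analysis use case concrete, the sketch below shows the kind of pre-processing a defender might do before handing log excerpts to a GenAI tool: filter a raw log down to suspicious lines and wrap them in a triage prompt. The regex patterns, sample log, and prompt wording are illustrative assumptions, not drawn from any specific product, and the actual LLM call is deliberately omitted.

```python
import re

# Illustrative patterns for suspicious log lines (an assumption; not exhaustive)
SUSPICIOUS = re.compile(r"failed password|invalid user|denied|segfault", re.I)

def build_triage_prompt(log_text: str, max_lines: int = 50) -> str:
    """Pre-filter a raw log to suspicious lines and wrap them in an
    incident-triage prompt that could be sent to an LLM."""
    hits = [line for line in log_text.splitlines() if SUSPICIOUS.search(line)]
    excerpt = "\n".join(hits[:max_lines])
    return (
        "You are assisting an incident responder. Summarize the likely "
        "attack activity in these log lines and suggest next steps:\n" + excerpt
    )

sample_log = """\
Jan 10 10:01:22 host sshd[311]: Accepted publickey for alice from 10.0.0.5
Jan 10 10:02:17 host sshd[314]: Failed password for root from 203.0.113.9
Jan 10 10:02:19 host sshd[315]: Failed password for invalid user admin from 203.0.113.9
"""
prompt = build_triage_prompt(sample_log)
```

Pre-filtering like this also keeps prompts small, which matters when sending large incident logs to a model with a limited context window.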
However, GenAI also poses significant risks if misused by cyber offenders. Attackers can extract harmful information from GenAI tools, or circumvent their ethical safeguards, to support attacks. They can leverage GenAI to create convincing social engineering attacks, phishing attacks, attack payloads, and malicious code snippets that can be compiled into executable malware files. Although OpenAI’s ethical policies restrict LLMs like ChatGPT from directly providing malicious information, attackers can bypass these restrictions using techniques such as jailbreaking and reverse psychology. According to the ISC2 AI Cyber 2024 report, 75% of respondents are moderately to extremely concerned that AI will be used for cyberattacks or other malicious activities.
GenAI for Cyber Offense
Cyber offenses are hostile actions against computer systems and networks intended to manipulate, deny, disrupt, degrade, or destroy them. While offensive actions are generally malicious, cyber defenders can also use them to test their own defenses and identify vulnerabilities. Information on cyber defense is readily available, whereas information on cyber offense is limited by legal and ethical constraints. However, easy access to LLMs like ChatGPT can help attackers circumvent these constraints.
Social Engineering Attacks
Social engineering manipulates individuals into performing actions or divulging confidential information. ChatGPT’s ability to understand context and generate human-like text can be exploited for social engineering attacks. For example, an attacker with basic personal information about a victim could use ChatGPT to generate a message that appears to come from a colleague or superior, requesting sensitive information or actions. According to a survey by LastPass in 2024, more than 95% of respondents believe dynamic content through LLMs makes detecting phishing attempts more challenging.
Phishing Attacks
Phishing attacks involve posing as trustworthy entities to extract sensitive information from victims. AI systems like ChatGPT can craft highly convincing and personalized phishing emails, effectively imitating legitimate communication from trusted entities. These AI-powered phishing emails exploit psychological principles like urgency and fear, prompting recipients to act impulsively, thereby increasing the success rate of phishing attacks. The same LastPass survey indicates that phishing will remain the top social engineering threat to businesses throughout 2024.
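The urgency and fear cues described above are also what simple defensive filters look for. The following sketch scores an email body against a short list of urgency phrases; the cue list, sample messages, and scoring scheme are illustrative assumptions for demonstration, not a production detector (real filters combine many more signals, such as sender reputation and link analysis).

```python
# Illustrative urgency cues (an assumption; a real filter would use far more signals)
URGENCY_CUES = (
    "act now",
    "urgent",
    "verify your account",
    "account suspended",
    "immediately",
)

def urgency_score(email_body: str) -> int:
    """Count how many urgency cues appear in the message body."""
    body = email_body.lower()
    return sum(cue in body for cue in URGENCY_CUES)

phishing = (
    "URGENT: your account suspended due to unusual activity. "
    "Verify your account immediately to restore access."
)
legitimate = "The quarterly all-hands meeting is moved to Thursday at 3pm."

score_phish = urgency_score(phishing)    # multiple cues match
score_legit = urgency_score(legitimate)  # no cues match
```

Notably, this is exactly the arms race the LastPass finding points to: an LLM can rewrite a phishing email to convey the same urgency without reusing any fixed phrase, defeating keyword-based scoring like the above.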
Automated Hacking
Hacking involves exploiting system vulnerabilities to gain unauthorized access or control. AI models like ChatGPT can automate hacking procedures, identifying system vulnerabilities and devising strategies to exploit them. For instance, AI-assisted tools like PentestGPT, built on ChatGPT, automate aspects of penetration testing, helping ethical hackers identify vulnerabilities. However, the same principles can be exploited by malicious actors to automate unethical hacking procedures.
Attack Payload Generation
Attack payloads execute unauthorized actions such as deleting files, harvesting data, or launching further attacks. ChatGPT can generate attack payloads when prompted with details of the target system. For example, an attacker could use ChatGPT to generate SQL injection payloads for a vulnerable database system. Although crafting effective payloads still requires detailed target information and technical knowledge, the potential for misuse is significant.
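The SQL injection scenario above is the classic textbook case, and it also illustrates the standard defense. The sketch below, using Python's built-in sqlite3 module with an in-memory database (the schema and data are invented for illustration), shows how the well-known ' OR '1'='1 payload rewrites a naively concatenated query, while a parameterized query treats the same input as inert data.

```python
import sqlite3

# Invented schema and data, purely for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name: str):
    # Vulnerable: untrusted input is concatenated straight into the SQL text
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name: str):
    # Parameterized: the driver binds the input as data, never as SQL syntax
    return conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
leaked = lookup_unsafe(payload)  # the payload rewrites the WHERE clause to match every row
blocked = lookup_safe(payload)   # no user has that literal name, so nothing is returned
```

The defense requires no knowledge of the specific payload, which is why parameterized queries remain effective even when GenAI is used to generate novel injection variants.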
Generative AI, particularly models like ChatGPT, presents both opportunities and challenges in cybersecurity. While these tools can enhance cyber defense capabilities, their potential for misuse by cyber offenders cannot be ignored. As GenAI tools become more accessible, understanding their implications from a cybersecurity perspective is crucial. According to the HiddenLayer AI Threat Landscape Report 2024, 98% of companies view some of their AI models as vital for business success, underscoring the importance of securing these transformative technologies.