Cybercriminals weaponize large language models and Agentic AI to launch smarter, harder-to-detect attacks.
Cybercriminals are increasingly exploiting AI tools like OpenAI’s GPT series to commit malicious attacks. Even though large language model (LLM) technology is revolutionising productivity like never before—there is a growing need to understand how cybercriminals manipulate it.
One common tactic is crafting highly convincing phishing emails and messages tailored to specific targets. These AI-generated texts avoid typical grammar mistakes and closely mimic real communication styles, allowing cybercriminals to bypass traditional email filters.
Another alarming trend is the use of LLMs to write polymorphic malware—code that constantly changes its form to evade antivirus detection. These polished attacks have successfully slipped past security defences, leading to major data breaches and significant financial fraud.
Attackers also exploit a vulnerability called “prompt injection,” whereby specially crafted inputs trick AI chatbots or other LLM-powered systems into revealing confidential information or performing unintended actions. For example, in Europe in 2024, a German telecom provider’s AI customer service bot was manipulated via prompt injection to disclose sensitive user data, exposing critical security gaps in AI applications.
LLMs can also be misused to steal sensitive data. If users inadvertently feed private or confidential information into AI tools, cybercriminals might extract that information later, causing privacy breaches. Moreover, AI models help automate reconnaissance tasks, enabling hackers to gather detailed information about targets and identify system vulnerabilities more efficiently, making attacks more precise and damaging.
The latest advancement in AI-assisted technology is agentic AI. Although cybercriminals are not yet widely using AI agents for large-scale hacks, cybersecurity experts warn that these AI-powered attacks are likely to appear soon in the real world.
Researchers have demonstrated that AI agents can perform complex attacks—for instance, the company Anthropic showed its AI model Claude successfully replicated an attack designed to steal sensitive information.
Although agentic AI is still in its early stages, experts say that given how quickly the technology is advancing, experts argue that it’s only a matter of time before such attacks become widespread.
Many businesses have fallen victim to AI-powered ransomware attacks where management initially thought only corporate data was targeted, but employee information was also compromised.
Attackers use AI to automate decisions on which data to steal to maximize damage and craft highly believable SMS and email scams mimicking official messages.
As AI rapidly integrates into everyday life, businesses, governments, and individuals must remain vigilant and collaborate to build stronger safeguards.
Experts recommend that tech leaders adopt a multi-layered defence: tightening access and monitoring AI usage, deploying AI-driven detection tools, and educating users to spot AI-generated scams.