How Cybercriminals Are Leveraging Artificial Intelligence for Sophisticated Cyberattacks

The widespread adoption of AI has made it a tool for both innovation and cybercrime, with malicious actors using AI to automate attacks and exploit vulnerabilities in systems and algorithms. Kaspersky’s research highlights the growing risk, emphasizing the need for robust defenses against AI-driven threats.

15 August 2024 – The rapid adoption of advanced AI systems by individuals and businesses has introduced both opportunities and significant risks. While these systems are highly adaptable for tasks like content generation and code creation through natural language prompts, their accessibility has also enabled malicious actors to exploit AI for increasingly sophisticated cyberattacks. These adversaries are now using AI to automate and accelerate their attacks, making them more complex and effective.

AI as a Double-Edged Sword

Cybercriminals have found several ways to harness AI for nefarious purposes:

  1. Malware Creation: AI tools such as ChatGPT are being used to write malicious code and to automate attacks across multiple targets.
  2. Data Theft: AI models can infer what users type on their smartphones by analyzing accelerometer data, potentially capturing sensitive information such as messages, passwords, and bank codes.
  3. Autonomous Botnets: Swarm intelligence enables autonomous botnets whose nodes communicate with one another and self-repair, restoring the malicious network after a disruption.

Recent research by Kaspersky highlights the growing threat of AI in password cracking. Passwords are typically stored not in plain text but as cryptographic hashes, which are difficult to reverse. However, password database leaks are alarmingly common, affecting companies of all sizes. In July 2024, the largest-ever leaked password compilation was published online, containing approximately 8.2 billion unique passwords.
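
To make the hashing point concrete, here is a minimal Python sketch of how a service might store a password as a salted, slow hash rather than as recoverable text. The use of PBKDF2 and the iteration count are illustrative assumptions, not a description of any particular company's setup; production systems often prefer dedicated schemes such as bcrypt, scrypt, or Argon2.

    import hashlib
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, digest); the digest cannot be reversed to recover the password."""
        salt = os.urandom(16)                      # unique random salt per user
        digest = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), salt,
            600_000,                               # many iterations to slow down guessing
        )
        return salt, digest

Because the function is one-way, an attacker who obtains a leaked database cannot read the passwords directly; they must guess candidates and hash each one, which is exactly where GPU brute force and AI-assisted guessing come in.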

Kaspersky’s Lead Data Scientist, Alexey Antonov, noted, “Our analysis of this massive data leak revealed that 32% of user passwords are weak enough to be cracked using a simple brute-force algorithm and a modern GPU in under an hour.” He added, “By training a language model on this database, we found that 78% of passwords could be cracked using AI, three times faster than traditional brute-force methods. Only 7% of passwords are strong enough to withstand prolonged attacks.”
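
To put those figures in perspective, the arithmetic behind brute-force cracking is straightforward. The short Python sketch below estimates worst-case exhaustion times for different password lengths and alphabets; the guess rate of ten billion hashes per second is an assumed ballpark for a modern GPU against a fast, unsalted hash, not a figure from Kaspersky's study.

    import string

    def exhaust_seconds(length: int, charset_size: int, guesses_per_second: float) -> float:
        """Worst-case time, in seconds, to try every password of the given length."""
        return charset_size ** length / guesses_per_second

    GPU_RATE = 1e10                                    # assumed guesses per second
    LOWER = len(string.ascii_lowercase)                # 26-character alphabet
    MIXED = len(string.ascii_letters + string.digits)  # 62-character alphabet

    for length in (6, 8, 10):
        for name, size in (("lowercase only", LOWER), ("mixed alphanumeric", MIXED)):
            days = exhaust_seconds(length, size, GPU_RATE) / 86_400
            print(f"length {length:>2}, {name:<18}: {days:12.3e} days")

Under these assumptions, a six-character lowercase password falls in a fraction of a second, while a ten-character mixed alphanumeric one takes years; language-model-based guessing narrows that gap further by trying the most human-like candidates first.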

Social Engineering and Deepfakes

AI is also being exploited for social engineering, enabling the creation of convincing content, including text, images, audio, and video. Threat actors can use large language models such as GPT-4 to generate sophisticated phishing messages, overcoming language barriers and producing personalized emails drawn from a victim's social media activity. These AI-generated phishing attempts can even mimic the writing style of a specific individual, making them far harder to detect.

Deepfakes present another significant cybersecurity challenge. What was once experimental technology has become a widespread threat, with criminals using deepfakes in celebrity impersonation scams that have caused substantial financial losses. Deepfakes are also used to hijack user accounts and to request money through audio messages that mimic the account owner's voice.

One of the most elaborate deepfake attacks occurred in Hong Kong in February 2024, when scammers used a deepfake video conference to impersonate company executives, convincing a finance worker to transfer approximately US$25 million.

AI Vulnerabilities and Cyber Defense

Beyond exploiting AI for malicious purposes, adversaries are also targeting AI algorithms themselves. These attacks include:

  1. Prompt Injection Attacks: Crafting inputs that manipulate a large language model into ignoring or overriding the restrictions set in its prompt.
  2. Adversarial Attacks: Embedding subtle, often imperceptible perturbations in images or audio that cause an AI system to make incorrect decisions, as illustrated in the sketch after this list.
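
As a concrete illustration of the second category, the sketch below implements the textbook Fast Gradient Sign Method (FGSM), a standard research technique for generating adversarial images; it is a generic example, not a reconstruction of any specific attack mentioned above. It assumes a PyTorch image classifier (model) whose inputs are scaled to the range [0, 1].

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model: torch.nn.Module, x: torch.Tensor, label: torch.Tensor,
                     epsilon: float = 0.03) -> torch.Tensor:
        """Shift every input value by at most `epsilon` in the direction
        that increases the classifier's loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)       # how wrong is the model right now?
        loss.backward()                               # gradient of the loss w.r.t. each pixel
        adversarial = x + epsilon * x.grad.sign()     # tiny, structured nudge per pixel
        return adversarial.clamp(0.0, 1.0).detach()   # stay within the valid image range

A perturbation bounded by 0.03 per pixel is typically invisible to a human viewer, yet it is often enough to flip the model's prediction, which is precisely the failure mode that adversarial attacks exploit.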

As AI becomes increasingly integrated into our daily lives through products like Apple Intelligence, Google Gemini, and Microsoft Copilot, addressing AI vulnerabilities has never been more critical.

Kaspersky’s Defensive Use of AI

Kaspersky has been leveraging AI technologies to protect its customers for many years. The company employs various AI models to detect emerging threats and continuously researches AI vulnerabilities to enhance its defenses. Kaspersky is also actively studying harmful AI techniques to provide robust protection against AI-driven cyberattacks.

Author: Terry KS
