Check Point Unveils Alarming AI Cyber Threats in First-Ever AI Security Report at RSA 2025

Check Point Software Technologies has released its inaugural AI Security Report, warning that cyber criminals are using AI to blur reality and compromise digital trust. The report outlines key threats and strategic defences for navigating the new AI-driven threat landscape.
SINGAPORE, 5 MAY 2025 – Check Point Software Technologies Ltd., a global leader in cybersecurity, has unveiled its first-ever AI Security Report at the RSA Conference 2025. The report provides a detailed analysis of how artificial intelligence is being weaponised by cyber criminals to erode trust, manipulate identities, and reshape the digital threat landscape.

As AI becomes more deeply embedded across industries, malicious actors are using tools like generative AI and large language models (LLMs) to launch highly convincing impersonation attacks and misinformation campaigns. The report highlights that digital content—whether seen, heard, or read—can no longer be taken at face value.

One of the most concerning developments is the emergence of “digital twins”—AI-powered replicas capable of mimicking human behaviour, thought patterns, and identity with unsettling accuracy. These advancements mark a major shift in the cybersecurity landscape, where even the most advanced identity verification systems are vulnerable.

The AI Security Report identifies four key areas of concern:

  1. AI-Enhanced Impersonation and Social Engineering
    Attackers are using AI to generate realistic phishing emails, voice clones, and deepfake videos. Recent incidents include the impersonation of high-profile officials using AI-generated audio.
  2. LLM Data Poisoning and Disinformation
    Cyber criminals are corrupting AI training data to manipulate outputs and spread false narratives. AI-powered chatbots have been shown to echo disinformation in a significant percentage of cases.
  3. AI-Created Malware and Data Mining
    AI is being used to develop malware, conduct automated attacks, and refine stolen credential databases. Services like Gabbers Shop apply AI to verify and enhance the resale value of compromised data.
  4. Weaponisation and Hijacking of AI Models
    Illegally obtained AI models and dark web offerings such as FraudGPT and WormGPT are enabling attackers to bypass safeguards and launch scalable, AI-driven attacks.

To counter these risks, the report urges organisations to integrate AI into their cybersecurity strategies. Key recommendations include implementing AI-assisted threat detection, adopting advanced multi-layered identity verification methods, and enhancing threat intelligence systems with AI context.

Check Point concludes that defenders must now assume AI is integrated into every stage of cyber criminal activity and act accordingly. The report serves as both a warning and a strategic guide for building resilience in the AI era.

The full AI Security Report 2025 is now available for download from Check Point’s official website.

Author: Terry KS
