The Dark Side of AI: Kaspersky Expert Warns of Psychological Dangers in Cybersecurity

Kaspersky’s Vitaly Kamluk highlights the potential psychological impacts of AI in cybercrime, including the “suffering distancing syndrome” among cybercriminals and the delegation of responsibility in AI-driven cybersecurity processes. Kamluk proposes guidelines for safely harnessing AI’s benefits.


5 September 2023 – In a thought-provoking analysis, Vitaly Kamluk, Head of the Research Center for Asia Pacific in Kaspersky’s Global Research and Analysis Team (GReAT), has shed light on the potential psychological implications of the growing influence of Artificial Intelligence (AI) in cybercrime and IT security.

As cybercriminals increasingly harness AI for malicious activities, Kamluk raises a concerning prospect: they may attempt to shift the blame onto the technology, distancing themselves from the repercussions of their attacks. He labels this phenomenon “suffering distancing syndrome”: virtual thieves who steal from victims they never see do not witness the suffering they cause, unlike physical assailants on the street, who cannot avoid the stress of confronting their victims directly.

Kamluk also highlights another psychological by-product of AI, termed “responsibility delegation.” As cybersecurity processes become increasingly automated and delegated to neural networks, human actors may feel less responsible in the event of a cyberattack, especially within corporate settings. This shift raises questions about accountability and transparency in the face of evolving AI technologies.

To address these emerging challenges and safely embrace the benefits of AI, Kamluk proposes several key guidelines:

1. Accessibility: Restrict anonymous access to intelligent systems built on vast datasets. Maintain a history of generated content and establish methods for identifying the origin of synthesized content. Implement procedures to handle AI misuse and abuse, supported by clear reporting channels that combine AI-based triage with human validation. (A minimal sketch of such provenance logging appears after this list.)

2. Regulations: Encourage discussions on marking content generated with AI assistance, enabling users to identify AI-generated material quickly and reliably. Consider licensing AI development activities to regulate potentially harmful applications, akin to controls placed on dual-use technologies.

3. Education: Promote awareness about detecting artificial content, validating its authenticity, and reporting potential misuse. Integrate AI education into school curricula, differentiating between AI and natural intelligence while emphasizing responsible use. Educate software developers on ethical AI usage and the consequences of misuse.
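To make the first guideline concrete, the short Python sketch below shows one way a service could keep a history of generated content and later identify whether a given text originated from it, while refusing anonymous callers. This is a minimal illustration only, assuming a simple local JSONL registry; the names ProvenanceRegistry, record_generation, and identify_origin are hypothetical and are not drawn from Kamluk’s remarks or any Kaspersky product.

```python
# Hypothetical sketch of the provenance logging described in guideline 1:
# log every generation event with a content fingerprint and a verified user
# identity, so synthesized content can later be traced back to its origin.
import hashlib
import json
import time


class ProvenanceRegistry:
    """Keeps a history of generated content keyed by its SHA-256 fingerprint."""

    def __init__(self, path: str = "provenance.jsonl"):
        self.path = path

    def record_generation(self, content: str, model_id: str, user_id: str) -> str:
        """Log one generation event; a non-empty user_id enforces non-anonymous access."""
        if not user_id:
            raise ValueError("anonymous access is not permitted")
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        entry = {
            "sha256": digest,
            "model_id": model_id,
            "user_id": user_id,
            "timestamp": time.time(),
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
        return digest

    def identify_origin(self, content: str) -> dict | None:
        """Given a suspect text, check whether this system synthesized it."""
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        try:
            with open(self.path, encoding="utf-8") as f:
                for line in f:
                    entry = json.loads(line)
                    if entry["sha256"] == digest:
                        return entry
        except FileNotFoundError:
            pass
        return None


# Usage: log a generation, then trace a suspect copy back to its origin.
registry = ProvenanceRegistry()
registry.record_generation("Example synthesized text.", model_id="demo-llm-1", user_id="alice")
print(registry.identify_origin("Example synthesized text."))
```

A real deployment would need a tamper-resistant store and fuzzy matching for edited content, but even this simple exact-hash registry shows how a generation history supports the origin identification and abuse reporting that the guideline calls for.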

Kamluk acknowledges the dual nature of AI, emphasizing its potential as both a powerful tool and a source of concern. While generative AI can now synthesize content that is hard to distinguish from human work, using the technology responsibly requires clear safety guidelines such as those above.

Kaspersky will continue the discourse on the future of cybersecurity at the upcoming Kaspersky Security Analyst Summit (SAS) 2023, scheduled from October 25th to 28th in Phuket, Thailand. This event will bring together anti-malware researchers, global law enforcement agencies, Computer Emergency Response Teams (CERTs), and senior executives from various sectors worldwide.

Author: Terry KS
