AI is fast becoming the new productivity engine across Malaysia’s corporate landscape. From copilots that summarise reports to chatbots that draft emails, employees are discovering clever ways to work faster and smarter. But in the rush to embrace innovation, a new blind spot has emerged: shadow AI.
Shadow AI refers to the use of unapproved or unmanaged AI tools by employees, often without the knowledge or oversight of IT or security teams. The Sophos Future of Cybersecurity in Asia Pacific and Japan 2025 report reveals that 91% of Malaysian organisations have already adopted some form of business AI tool – yet 36% also admit that staff are using unauthorised AI platforms. That means a significant portion of the workforce is experimenting with powerful tech in the shadows, without proper governance or visibility.
This kind of unmanaged access creates real problems. Sensitive company data, customer records, or intellectual property can end up being fed into public AI models – often without the user fully understanding the risk. The same Sophos report highlights that 41% of Malaysian organisations don’t even know which AI tools are in use, and 35% have discovered security vulnerabilities within the tools themselves. In some cases, these weaknesses could expose confidential data to external parties or bad actors looking for an easy in.
What’s important to understand is that most shadow AI usage isn’t malicious. It’s usually driven by employees trying to be more efficient: analysts speeding up workflows, marketers testing content generation, executives chasing quick insights. The problem arises when these tools operate outside sanctioned environments. One unvetted app. One accidental upload. One misconfiguration. That’s all it takes.
And in highly regulated sectors such as finance, telecommunications, and government-linked companies, poor AI governance isn’t just a risk; it’s a compliance landmine. Whether it’s the Personal Data Protection Act (PDPA) or industry-specific rules, data leaving the guardrails can lead to serious repercussions. Meanwhile, overworked cyber teams are left scrambling to cover more ground with fewer resources.
So, how can organisations regain control without slamming the brakes on innovation?
1. Visibility must come first
A sound AI governance model must be built on zero-trust principles and continuous monitoring. Organisations need to know who is using AI, what data it’s touching, and where that data is going. To get there, traditional cybersecurity postures need to stretch. AI introduces a whole new attack surface, and security needs to cover every layer, from data and identity to endpoints and user behaviour.
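To make that concrete, here is a minimal, hypothetical sketch of the kind of visibility check a security team might run: a short Python script that tallies outbound traffic to well-known public AI endpoints from a web-proxy log. The log format (a CSV with user, dest_host and bytes_out columns) and the domain watchlist are illustrative assumptions, not a complete inventory or a vendor-specific feature; in practice this signal would come from firewall, proxy, or CASB telemetry.

```python
# Minimal sketch: flag outbound traffic to common public AI endpoints
# using a web-proxy log. Log schema (user, dest_host, bytes_out) and the
# domain watchlist below are illustrative assumptions only.
import csv
from collections import defaultdict

# Hypothetical watchlist of public generative AI service domains.
AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
    "api.anthropic.com",
}

def summarise_ai_usage(proxy_log_path: str) -> dict:
    """Return bytes uploaded per (user, AI domain) pair found in the log."""
    usage = defaultdict(int)
    with open(proxy_log_path, newline="") as f:
        # Expects CSV columns: user, dest_host, bytes_out
        for row in csv.DictReader(f):
            host = row["dest_host"].lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                usage[(row["user"], host)] += int(row["bytes_out"])
    return dict(usage)

if __name__ == "__main__":
    for (user, host), sent in sorted(summarise_ai_usage("proxy.csv").items()):
        print(f"{user} -> {host}: {sent} bytes uploaded")
```

Even a rough report like this answers the first two questions – who is using AI, and how much data is leaving – and gives governance conversations something concrete to anchor on.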
2. AI policies must move beyond paper
Many organisations already have AI use policies, but policies alone don’t move the needle. They need awareness programmes that go beyond technical checklists. Teams must be trained to recognise when they’re interacting with external AI systems and to understand why data governance is critical, not just bureaucratic.
3. Leadership needs to steer the shift
Outright bans rarely work – they only drive AI use further underground. Instead, CISOs and executive leaders need to point teams towards approved, secured, and monitored tools. Shadow AI flourishes in environments where innovation is stifled or IT is seen as a blocker. Flip that script. Enable experimentation, but with clear rules of engagement.
We’re entering what some are calling the “Generative Age”, where AI becomes embedded in everyday work. But innovation and security can’t be seen as opposing forces. They are two sides of the same coin. Shadow AI won’t disappear; people will always look for faster, more efficient ways to get things done. The real question is whether organisations face it head-on with governance and visibility or leave the door open to the next big breach from the inside out.
At the end of the day, AI isn’t the enemy … unmonitored AI is. Malaysian businesses that act now to build clear frameworks, gain visibility, and hold people accountable will not only reduce risk but also unlock AI’s true potential in a way that’s both secure and sustainable.
This article is contributed by Aaron Bugal, Field CISO, APJ, Sophos