Anxiety is growing among Chief Information Security Officers (CISOs) in security operations centres, particularly around Chinese AI giant DeepSeek.
AI was heralded as a new dawn for business efficiency and innovation, but for the people on the front lines of corporate defence, it’s casting some very long and dark shadows.
Four in five (81%) UK CISOs believe the Chinese AI chatbot requires urgent regulation from the government. They fear that without swift intervention, the tool could become the catalyst for a full-scale national cyber crisis.
This isn’t speculative unease; it’s a direct response to a technology whose data handling practices and potential for misuse are raising alarm bells at the highest levels of enterprise security.
The findings come from research commissioned by Absolute Security for its UK Resilience Risk Index Report, based on a poll of 250 CISOs at large UK organisations. The data suggests that the theoretical threat of AI has now landed firmly on the CISO’s desk, and their reactions have been decisive.
In what would have been almost unthinkable a couple of years ago, over a third (34%) of these security leaders have already implemented outright bans on AI tools due to cybersecurity concerns. A similar number, 30 percent, have already pulled the plug on specific AI deployments within their organisations.
This retreat is not a sign of Luddism but a pragmatic response to an escalating problem. Businesses are already facing complex and hostile threats, as evidenced by high-profile incidents like the recent Harrods breach. CISOs are struggling to keep pace, and the addition of sophisticated AI tools into the attacker’s arsenal is a challenge many feel ill-equipped to handle.
A growing security readiness gap for AI platforms like DeepSeek
The core of the issue with platforms like DeepSeek lies in their potential to expose sensitive corporate data and be weaponised by cybercriminals.
Three out of five (60%) CISOs predict a direct increase in cyberattacks as a result of DeepSeek’s proliferation. An identical proportion report that the technology is already entangling their privacy and governance frameworks, making an already difficult job almost impossible.
This has prompted a shift in perspective. Once viewed as a potential silver bullet for cybersecurity, AI is now seen by a growing number of professionals as part of the problem. The survey reveals that 42 percent of CISOs now consider AI to be a bigger threat than a help to their defensive efforts.
Andy Ward, SVP International of Absolute Security, said: “Our research highlights the significant risks posed by emerging AI tools like DeepSeek, which are rapidly reshaping the cyber threat landscape.
“As concerns grow over their potential to accelerate attacks and compromise sensitive data, organisations must act now to strengthen their cyber resilience and adapt security frameworks to keep pace with these AI-driven threats.
“That’s why four in five UK CISOs are urgently calling for government regulation. They’ve witnessed how quickly this technology is advancing and how easily it can outpace existing cybersecurity defences.”
Perhaps most worrying is the admission of unpreparedness. Almost half (46%) of the senior security leaders confess that their teams are not ready to manage the unique threats posed by AI-driven attacks. They are witnessing the development of tools like DeepSeek outpacing their defensive capabilities in real-time, creating a dangerous vulnerability gap that many believe can only be closed by national-level government intervention.
“These are not hypothetical risks,” Ward continued. “The fact that organisations are already banning AI tools outright and rethinking their security strategies in response to the risks posed by LLMs like DeepSeek demonstrates the urgency of the situation.
“Without a national regulatory framework – one that sets clear guidelines for how these tools are deployed, governed, and monitored – we risk widespread disruption across every sector of the UK economy.”
Businesses are investing to avert a crisis in their AI adoption
Despite this defensive posture, businesses are not planning a full retreat from AI. The response is more a strategic pause than a permanent stop.
Businesses recognise the immense potential of AI and are actively investing to adopt it safely. In fact, 84 percent of organisations are making the hiring of AI specialists a priority for 2025.
This investment extends to the very top of the corporate ladder, with 80 percent of companies committing to AI training at the C-suite level. The strategy is dual-pronged: upskill the workforce to understand and manage the technology, and bring in the specialised talent needed to navigate its complexities.
The hope – and it is a hope, if not a prayer – is that building a strong internal foundation of AI expertise can act as a counterbalance to the escalating external threats.
The message from the UK’s security leadership is clear: they do not want to block AI innovation but to enable it to proceed safely. To do that, they require a stronger partnership with the government.
The path forward involves establishing clear rules of engagement, government oversight, a pipeline of skilled AI professionals, and a coherent national strategy for managing the potential security risks posed by DeepSeek and the next generation of powerful AI tools that will inevitably follow.
“The time for debate is over. We need immediate action, policy, and oversight to ensure AI remains a force for progress, not a catalyst for crisis,” Ward concludes.
