Cyber Security & AI Risk: Key UK Developments - September 2025
- Thibault Williams

- Sep 23, 2025
Cyber security and AI risks are evolving faster than ever, with regulators, threat actors, and industry leaders all reshaping the landscape. Summer 2025 has brought significant updates, from new legislation and regulatory alerts to practical shifts in how AI is being used across cyber operations. Here’s what you need to know, and what it means for your organisation.
Cyber Security and Resilience Bill
What’s changing:
The scope of regulatory oversight is being expanded beyond just essential services to include more digital service providers, supply chain actors, managed service providers, data centres and other critical infrastructure.
Regulators are being given stronger powers, including the ability to investigate proactively, tougher enforcement options, and (in some cases) cost-recovery mechanisms to fund oversight.
Incident reporting requirements will become more stringent: more types of incidents must be reported (including ransomware events or attacks across supply chains), sometimes within tighter timelines, to improve visibility of cyber threats.
What this means for you: Expect new obligations by 2026-27. Strengthen internal governance and assess third-party resilience now.
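The Bill's final reporting schema, incident categories, and deadlines are not yet fixed, but teams can prepare now by capturing the fields expanded reporting is likely to need. Below is a hypothetical sketch of an internal incident record in Python; every field, category, and the 24-hour window are assumptions pending the final legislation, not requirements taken from the Bill.

```python
# Hypothetical internal incident record anticipating broader reporting duties.
# All fields, categories, and the reporting window are assumptions: the Bill's
# final schema and timelines have not been published.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class IncidentRecord:
    detected_at: datetime
    category: str                 # e.g. "ransomware", "supply-chain compromise"
    affected_suppliers: list[str] = field(default_factory=list)
    reported: bool = False

    def reporting_deadline(self, window_hours: int = 24) -> datetime:
        # Placeholder window; the Bill's actual timelines are still to be set.
        return self.detected_at + timedelta(hours=window_hours)

incident = IncidentRecord(datetime.now(), "ransomware", ["ManagedITCo"])
print(incident.reporting_deadline())
```

Even a simple structure like this makes it easier to evidence third-party exposure and meet tighter timelines once the obligations land.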
NCSC Alert: Rise in AI-Powered Phishing & Deepfakes
The NCSC has warned that cybercriminals are increasingly using AI to enhance phishing attacks and create convincing deepfakes.
Phishing messages are now well-written and free of the usual errors, making them harder to detect, while deepfake audio and video are allowing attackers to impersonate trusted individuals such as colleagues or executives.
In addition, criminals are able to personalise attacks at scale, imitating communication styles and exploiting public data, so phishing campaigns are faster, more adaptive, and far more difficult to spot with traditional defences.
What this means for you: We recommend deploying LLM-based email filters and anomaly detection, and refreshing security-awareness training to counter AI-driven threats. A minimal filtering sketch follows below.
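The right implementation depends on your mail stack, but as a minimal sketch of what an LLM-based triage step can look like, the snippet below asks a model for a phishing verdict on a single message. It assumes the OpenAI Python SDK (v1+); the model name, prompt, and example email are illustrative placeholders, not a recommendation of a particular provider or product.

```python
# Minimal sketch of an LLM-based phishing triage step, assuming the OpenAI
# Python SDK (>=1.0). Model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRIAGE_PROMPT = (
    "You are an email security analyst. Classify the following message as "
    "PHISHING or LEGITIMATE and give a one-sentence reason. Pay attention to "
    "urgency cues, impersonation of executives, and requests for credentials "
    "or payments."
)

def triage_email(subject: str, body: str) -> str:
    """Ask the model for a phishing verdict on a single email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have vetted
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
    )
    return response.choices[0].message.content

print(triage_email(
    "Urgent: invoice approval needed",
    "Hi, the CFO asked me to get this paid today. Can you log in here...",
))
```

In practice, run this behind your existing secure email gateway and treat the verdict as one signal among several, never as an automatic block/allow decision.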
BT & BAE Systems Pilot Generative AI in SOCs
BAE is investigating how generative AI can fundamentally change how security operations centres (SOCs) operate, especially for alert triage, anomaly detection, and assessing vulnerability exposure.
They believe GenAI could help generate hypotheses about threats using historical data, giving SOCs a deeper, more proactive defence capability.
Similarly, BT is using AI/machine learning to improve both detection (e.g. anomaly detection) and response. They’re researching automated/semi-automated threat response and working on how humans and machines can collaborate in SOC environments.
What this means for you: AI can augment your defences, but these tools must be deployed securely and transparently.
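To give a flavour of the anomaly-detection side of this work, here is a minimal sketch using scikit-learn's IsolationForest on synthetic login telemetry. The features, numbers, and thresholds are invented for illustration; this is not BT's or BAE Systems' actual pipeline.

```python
# Minimal anomaly-detection sketch over synthetic login telemetry using
# scikit-learn's IsolationForest. Features and values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in features per login event: hour of day, bytes transferred (MB),
# and failed attempts in the preceding hour.
normal = np.column_stack([
    rng.normal(13, 2, 500),    # logins cluster around working hours
    rng.normal(50, 15, 500),   # typical transfer volumes
    rng.poisson(0.2, 500),     # failures are rare
])
suspicious = np.array([[3.0, 900.0, 8.0]])  # 3am, huge transfer, many failures

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 flags the event as anomalous
```

The design point the pilots highlight still applies here: models like this surface candidates for human analysts, and any automated response should be gated by a person until the false-positive rate is well understood.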
CDEI Guidance on AI Assurance
Published guidance from the CDEI focuses on building confidence in AI systems by measuring, evaluating, and communicating whether they meet relevant criteria such as safety, fairness, robustness, transparency, data protection, regulatory requirements, and organisational values.
The guidance stresses that assurance is not just about checking compliance, but about providing trustworthy evidence throughout an AI system's lifecycle, from design to deployment.
This guidance complements the UK's pro-innovation AI White Paper.
What this means for you: AI governance expectations are rising - especially for public-sector procurement and compliance-led industries.
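The guidance leaves metric selection to practitioners, but to make "measuring and evaluating" concrete, here is a minimal sketch of one common fairness check, demographic parity difference. The CDEI does not prescribe this particular metric, and the data below is invented for illustration.

```python
# Minimal sketch of one possible fairness check: demographic parity
# difference between two groups. Illustrative metric and data only; the CDEI
# guidance does not mandate this specific measure.
import numpy as np

def demographic_parity_difference(preds: np.ndarray, groups: np.ndarray) -> float:
    """Absolute gap in positive-outcome rates between group 0 and group 1."""
    rate_a = preds[groups == 0].mean()
    rate_b = preds[groups == 1].mean()
    return abs(rate_a - rate_b)

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # model decisions (e.g. approvals)
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute
print(demographic_parity_difference(preds, groups))  # 0.5 -> worth investigating
```

Checks like this, logged and versioned across the system's lifecycle, are exactly the kind of trustworthy evidence the guidance asks for.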

The message is clear: the regulatory bar is rising, and AI is amplifying both risks and opportunities. If you need support in strengthening your digital resilience or governing AI responsibly, the team at TMW Resilience can help. Contact us today to discuss your next steps.