CMA conduct requirements for Google search and AI Overviews

Ofcom’s consultation on combating mobile messaging scams closes on 28 January 2026. It matters for AI governance because scam campaigns increasingly scale through automated content generation and rapid targeting, so any new rules that raise detection and disruption duties for networks can indirectly shape how AI-enabled fraud is handled across telecoms ecosystems.

DMA specification and the Mills Review

According to Reuters, the Commission has opened two formal specification proceedings under the Digital Markets Act to shape how Google must provide rivals with access to certain services and data connected to AI and search. Google is reported to be warning of risks to privacy and innovation, while the Commission frames the process as a structured compliance dialogue with a six-month endpoint.

Grok deepfake enforcement and the UK data library push

According to Reuters, the EU has opened a new formal line of scrutiny around Grok after non-consensual sexualised deepfakes circulated on X, with potential DSA exposure framed around systemic risk management rather than one-off removals. The story matters because it treats generative tools as part of a platform's risk architecture, not as a separate product bolt-on.

Online safety investigations and public sector AI transparency tools

This report tracks concrete shifts in UK AI governance that change what organisations may need to do in practice. The strongest signals this fortnight were online safety enforcement moving into active investigations, and central government tightening the practical foundations for public sector AI use through data readiness and transparency tooling.