According to Reuters, the EU opened a new formal line of scrutiny around Grok after non-consensual sexualised deepfakes circulated on X, with potential DSA exposure framed around systemic risk management rather than one-off removals. The story matters because it treats generative tools as part of platform risk architecture, not as a separate product bolt-on.

According to Stateline, drugmakers are increasingly using AI across trials and regulatory workflows, including to accelerate document-heavy processes and operational decisions. The governance angle is that regulators and litigants will expect validation, traceability, and defensible records when AI affects regulated submissions and outcomes.

Regulation

  • The European Commission announced formal DSA proceedings focused on Grok and X’s recommender systems, signalling that deployment choices, risk assessment, and mitigation measures will be examined as compliance questions. For AI governance tracking, this is a clean example of platform law being used to pressure-test generative feature rollouts.

  • DSIT published a National Data Library progress update describing completed discovery work, publication of AI-ready dataset guidance, and five kickstarter projects that test new access models, including a legal-data stream intended to enable AI-powered legal guidance for SMEs. This is a practical UK governance move because it frames data access, public trust, and audit-friendly sharing as enabling infrastructure for AI adoption.

Cases

  • Google agreed to pay $68 million (per Reuters) to settle a class action alleging Google Assistant unlawfully recorded and shared private conversations triggered by false activations, with the settlement pending court approval. The practical governance lesson is that voice and ambient AI features remain high-risk without strong controls, clear user notice, and evidentiary logging of what is captured and why.

Academia

  • The Institute for Law and AI working paper ‘Automated Compliance and the Regulation of AI’ argues that AI progress can reduce the cost of compliance tasks and proposes ‘automatability triggers’ so certain obligations activate when compliance can be largely automated. For governance design, it offers a practical mechanism for staging requirements without locking in disproportionate early burdens.

Events

  • techUK is hosting a UK government virtual roundtable on SME perspectives on the EU AI Act and Cyber Resilience Act on 5 February 2026, framed around practical impact and UK policy response signals.

  • UK Finance is running an online AI Governance session on 6 February 2026, which is directly relevant for governance controls, oversight, and accountable deployment in regulated environments.

  • The Data Lab is running ‘Upskill for an AI-ready future’ with an in-person session in Glasgow on 4 February 2026, 10:30 to 12:00, aimed at Scottish SMEs and decision-makers.

  • Thomson Reuters is hosting ‘The AI lawyers swear by: Introducing CoCounsel Legal UK’ as a live webinar on 29 January 2026, 12:00 GMT.

Takeaway

The strongest signal is enforcement gravity shifting towards system design choices, especially where generative features amplify harmful content at scale. In parallel, UK work on AI-ready public data access shows how governance is also being built through infrastructure choices that determine what can be shared, audited, and trusted. Together these moves tighten expectations that AI deployment must be explainable in operational terms, not just described in principles.

Sources: European Commission, Department for Science, Innovation and Technology, Reuters, Institute for Law and AI, techUK, UK Finance