  • Scotland – AI infrastructure and water use scrutiny. Digit.FYI reports rising concern that large AI-driven data centres could be straining Scotland’s water resources, prompting calls for tighter transparency and environmental governance around AI infrastructure siting and cooling.
  • Global – AI and regulatory complexity for companies. Verdict highlights how overlapping AI, privacy and sectoral rules are driving regulatory complexity, arguing that organisations need to embed AI governance and privacy risk assessment within compliance workflows rather than treat AI as a bolt-on issue.
  • Kazakhstan – Push for ban on fully autonomous weapons. Astana Times reports Kazakhstan’s Foreign Ministry calling for an international ban on fully autonomous weapons, emphasising ethical and legal risks of AI-driven lethal systems and aligning with broader UN debates on meaningful human control.

Regulation

  • UK (health) – MHRA & NICE funded to tighten digital mental-health AI regulation. The MHRA announces a £2 million grant from Wellcome for joint work with NICE to develop clearer, proportionate regulation and evaluation of digital mental health technologies, including AI-powered tools and VR therapies. The programme will run until Autumn 2028 and includes creating a dedicated “digital mental health technology AI airlock” and exploring international reliance mechanisms.
  • UK (science) – AI for Science Strategy Expert Group terms of reference. DSIT publishes the terms of reference for the AI for Science Strategy Expert Group, setting out its advisory role to government on using AI to accelerate scientific discovery, including remit, membership and governance arrangements.
  • UK (cross-government) – Government Social Research Strategy on ethical AI use. The Government Social Research (GSR) Strategy 2025–2029 highlights “practical and ethical uses of Artificial Intelligence (AI) across the research pipeline” as part of government evidence-generation, signalling a push to integrate AI tools while maintaining robust ethical safeguards.
  • International – Thailand orders biometric Worldcoin-style project to halt and delete data. Thailand’s Personal Data Protection Commission (PDPC) and Ministry of Digital Economy and Society have ordered World (formerly Worldcoin) to suspend iris-scan enrolment and delete around 1.2 million biometric templates collected from Thai citizens, citing breaches of the Personal Data Protection Act (PDPA), including concerns over consent and purpose limitation (The Record). The order has been reported across several outlets summarising the PDPC decision.

Cases

  • United States – OpenAI files detailed response in Raine v OpenAI (teen suicide suit). In the California wrongful-death case Raine v OpenAI, OpenAI has filed its first detailed court response, described in multiple reports as denying legal responsibility for a 16-year-old’s suicide allegedly linked to extensive ChatGPT use. The filing reportedly argues that the death was “not caused” by ChatGPT, cites extensive crisis-support prompts in the logs, and challenges causation and foreseeability of harm, while acknowledging the death as “devastating” (BestMediaInfo).

  • Global music/AI – Warner Music settles with Suno and pivots from litigation to licensing. Warner Music Group has ended its lawsuit against AI music generator Suno and agreed an undisclosed partnership, reportedly involving licensed use of Warner’s catalogue and new revenue-sharing models. The settlement illustrates a shift from pure infringement claims towards negotiated licensing frameworks for generative-AI music tools (Cybernews).

Academia / Policy Commentary

  • EU – Civil-society critique of AI Act delays and “deregulation” drive. EDRi-gram’s 27 November issue reports on concerns that recent EU moves to delay parts of the AI Act, alongside the broader “Digital Omnibus” simplification agenda, risk weakening protections, with digital-rights groups warning that deregulatory pressure may prioritise industry flexibility over fundamental rights.
  • EU – Academic critique of AI Act delay as privileging big tech. An analysis on AIhub/The Conversation argues that proposals to postpone certain high-risk AI obligations signal a policy shift that “prioritises big tech over fairness”, emphasising the risk that delayed enforcement could cement market power and undercut rights protections for marginalised groups.
  • US – USPTO guidance on AI and IP reaffirmed. World IP Review reports on updated guidance from the USPTO clarifying its stance on AI in patent practice, reaffirming that AI systems cannot be named as inventors while emphasising disclosure, human contribution and misuse risks around generative AI tools in prosecution. 

Business

  • Music industry – AI licensing model emerging from Warner–Suno settlement. The Warner–Suno deal is also significant on the business side: rather than simply blocking generative-AI music tools, a major rightsholder has opted for a partnership that could normalise licensed training and revenue sharing, potentially shaping future AI–music business models and negotiations (Cybernews).
  • Global – AI as a driver of reg-tech demand. The Verdict analysis notes that as AI embeds itself in compliance and financial workflows, vendors are positioning AI-enabled regulatory technology as a way to keep pace with new AI-specific and data-protection requirements, indicating continued investment in AI governance tooling.

Adoption of AI

  • England – Technology in schools survey highlights generative-AI use. A new Department for Education report on the 2024–2025 technology in schools survey finds increasing awareness and use of generative AI tools in English schools, examining both perceived benefits and concerns around reliability, assessment integrity and data protection.
  • UK legal profession – Bar Council blog on “staying ethical with AI”. The Bar Council publishes a new blog, “Staying ethical with AI at the Bar”, reinforcing updated guidance on generative AI use by barristers, highlighting duties not to outsource legal judgement to AI tools and warning of misconduct risks where outputs are not independently verified.
  • Australia (NSW government) – Practical bulletin on generative AI in public administration. A NSW Government bulletin, summarised by Holding Redlich, sets out key considerations for agencies using generative AI, including procurement due diligence, transparency towards citizens, confidentiality and record-keeping obligations, and the need for risk-based internal policies.
  • UN / labour – Workers’ exposure to AI across development stages. AI for Good (ITU) hosts a session on workers’ exposure to AI at different stages of AI system development, illustrating how international bodies are beginning to systematise analysis of AI’s impact on labour markets, from data labelling to deployment. 

Events

  • EU / online – EU Digital Omnibus, GDPR and AI Act explained. OneTrust is hosting “EU Digital Omnibus explained: GDPR, AI Act & ePrivacy changes” on 8 December 2025, focusing on how the omnibus reforms and AI Act interact with existing EU digital laws and what this means for compliance teams.
  • EU AI Office – Workshops on high-risk AI classification. The European AI Office lists workshops on high-risk AI classification for Annex I products on 10–12 December 2025, aimed at clarifying high-risk system criteria, horizontal concepts and practical implementation for providers and deployers.
  • Global – ISO/IEC 42001 AI governance webinar. NQA will run “Inside the AI Governance Journey: Leadership insights and perspectives on ISO/IEC 42001” on 12 December 2025, providing a practitioner-oriented briefing on the new AI management system standard and its implications for AI compliance programmes. 

Takeaway

Regulatory attention today is strongly focused on public-sector and safety-critical uses of AI: the UK is tightening governance around digital mental health tools, scientific applications and government research, while professional bodies such as the Bar Council are sharpening ethical expectations for legal practitioners. Internationally, enforcement actions against biometric identity schemes and high-profile litigation over generative-AI harms continue to test the boundaries of liability and data-protection law. For AIJurium, these developments reinforce the centrality of governance frameworks that connect sector-specific regulation (health, education, justice) with broader AI-specific and data-protection obligations, particularly where vulnerable groups and high-risk applications are involved.

Sources: GOV.UK, MHRA, NICE, Department for Science, Innovation and Technology, Government Social Research Profession, Bar Council, Digit.FYI, EDRi, The Conversation, AIhub, Astana Times, The Record, Holding Redlich, AI for Good (ITU), World IP Review, Verdict, Cybernews, BestMediaInfo.