• Ofcom – AI and media literacy in the UK. Ofcom warns that AI is ‘changing the information game’ and argues that media literacy policy must account not only for deepfakes but also for how recommender systems and generative tools shape people’s understanding of news and information. The piece highlights that responsibility sits both with individuals and with platforms, broadcasters and other intermediaries, stressing that AI-era literacy must include knowing when ‘automation is at work’ and when to demand accountability from providers.

  • OECD – AI and competition in downstream markets. The OECD’s Global Forum on Competition held a roundtable on ‘Artificial Intelligence and Competitive Dynamics in Downstream Markets’, examining how AI-driven pricing, recommendation and ranking tools may entrench market power or facilitate tacit collusion in sectors such as retail, mobility and digital platforms. The session focused on how competition authorities can adapt investigative techniques and remedies to increasingly opaque algorithmic systems, including questions around data access and audit rights for enforcers.

  • GOV.UK (Prime Minister’s Office) – AI in the Prime Minister’s economic speech. In a major economic speech on a “Britain built for all”, the Prime Minister explicitly lists artificial intelligence among the regulatory areas where the government intends to “remove barriers to business”, alongside planning, industrial policy, pensions and capital investment, signalling that AI rules are now framed as part of a wider productivity and growth agenda rather than an isolated tech concern.

  • Scottish AI Alliance – national AI adoption workshops. The Alliance’s events calendar shows a dense December schedule under the AI Scotland: National AI Adoption Programme, including today’s “So You’re Thinking of Using AI: Crieff” workshop, framed as an SME’s “first step” towards responsible AI adoption and focused on foundational concepts, benefits, best practice and identifying use cases.

Regulation

  • Hungary – Act LXXV of 2025 implementing the EU AI Act enters into force. According to Digital Policy Alert and CEE Legal Matters, most provisions of Hungary’s Act LXXV of 2025, which implements the EU AI Act at national level, entered into force today, 1 December 2025. The law establishes the domestic institutional framework for AI governance, setting national procedures for notification, enforcement and sanctions that cover AI systems placed on the Hungarian market or whose outputs are used in Hungary. It designates the National Accreditation Authority as AI Notification Authority and the minister responsible for enterprise development as AI Market Surveillance Authority, provides for fines of up to HUF 13.3 billion for serious violations, and creates a new Hungarian Artificial Intelligence Council to advise on strategy and ethics.

  • Australia – updated federal AI policy and mandatory AI impact assessments. Australia’s Digital Transformation Agency has published an updated Policy for the responsible use of AI in government, strengthening governance of AI across Commonwealth agencies. From today, agencies must conduct an AI impact assessment for every use case within the policy’s scope; the policy positions these assessments as a mandatory control, alongside clearer requirements on risk management, documentation and senior accountability for AI deployments.

  • GOV.UK (Office for Digital Identities and Attributes / DSIT) – statutory register of digital identity and attribute services. Guidance on “Find registered digital identity and attribute services” has been updated to state that the “statutory register of digital identity and attribute services” is now a public list of government-registered providers certified against the UK digital identity and attributes trust framework, with an explicit note that the word “statutory” was added to mark the commencement of legislation on 1 December 2025. This quietly converts what was a policy register into a legal reference point for verifying certified digital identity services, many of which rely on automated and AI-supported verification.

Cases

  • Nieman Lab. United States – Politico newsroom arbitration on AI tools and union safeguards. An arbitrator has ruled that Politico management violated the AI safeguards in its union contract when rolling out two AI-powered editorial products (the ‘LETO’ live-speech summariser and ‘Report Builder’, which generates policy write-ups), finding that they materially affected journalists’ work without the required 60-day bargaining period and failed to meet contractual standards for ‘newsgathering’. The ruling notes that AI outputs contained ‘erroneous and even absurd’ content, underscores that AI must not be used as a shortcut around agreed editorial and ethical standards, and orders a bargaining period and remedies for past violations.

Academia

  • Pasquale, Malone & Ting – ‘Copyright, Learnright, and Fair Use: Rethinking Compensation for AI Model Training’. The Northwestern Journal of Technology and Intellectual Property has published a major article proposing a new exclusive right – a ‘learnright’ – allowing copyright holders to license or refuse AI training on their works. The authors argue that existing fair-use and licensing debates inadequately address the scale and economic impact of generative AI training, and suggest learnrights as a structured way to share revenues from AI systems while reducing legal uncertainty in ongoing training-data lawsuits.

  • Springer. Artificial Intelligence and Law – A recent open-access article, “Overcoming sentencing inconsistency - a proposal for algorithmic guidelines and juridical misalignment index”, explores how algorithmic tools could help reduce sentencing disparities, while warning that misalignment between judicial reasoning and model outputs risks entrenching opacity and bias. The analysis is directly relevant to UK and EU debates on AI in criminal justice, where transparency, explanation and institutional control over AI-assisted decisions are central concerns.

Business 

  • AIthority. SAI360 – acquisition of Plural Policy to expand AI compliance tooling. Risk and compliance company SAI360 has announced the acquisition of Plural Policy, a platform that uses AI to track and analyse regulatory change. The deal is framed as a way to strengthen AI-driven ‘regulation intelligence’ and change-management tooling for clients facing increasingly complex digital, data and AI regulatory frameworks, including the EU AI Act and related omnibus reforms. 

Adoption of AI

  • European Commission – DigComp 3.0 integrates AI competence into digital skills. The European Commission has released an updated Digital Competence Framework (DigComp 3.0), introducing over 500 learning outcomes and systematically integrating ‘AI competence’ – including generative AI – across all competence areas. The update is presented as a policy and practical tool to close digital-skills gaps, making AI understanding, critical use and basic governance concepts part of standard digital literacy efforts across the EU. 

Takeaway

AI governance is advancing through implementation rather than new legislation. Hungary’s implementing act puts national enforcement structures and sanctions behind the EU AI Act, while Australia turns AI impact assessments into mandatory controls. The UK and EU both signal that AI literacy and digital-competence frameworks are becoming core governance tools. The Politico arbitration shows labour law increasingly shaping acceptable AI deployment, and contemporary scholarship is pushing toward new rights regimes for AI training. Together, these developments reflect a global shift from abstract principles to operational, enforceable and institution-centred AI governance.

Sources: Ofcom; OECD; Prime Minister’s Office (UK); Scottish AI Alliance; Digital Policy Alert; CEE Legal Matters; Digital Transformation Agency (Australia); DSIT; Nieman Journalism Lab; Northwestern Journal of Technology and Intellectual Property; Springer (Artificial Intelligence and Law); SAI360; European Commission.