Council of Europe. The Chair of the Committee on Artificial Intelligence uses a parliamentary conference at the UK Parliament to urge MPs to treat the new Framework Convention on Artificial Intelligence as a central tool for protecting democracy, human rights and the rule of law, and to focus on ratification and implementation rather than abstract debates on technology.
OSCE Parliamentary Assembly. Special Representative Federica Onori calls for coordinated parliamentary oversight of AI at the same London conference, stressing that AI is already shaping work, communication and public life and that parliaments should prioritise monitoring and understanding AI’s impacts before legislating, so that scrutiny spans human rights, security and democratic processes.
Expleo. The latest AI Pulse survey reports that UK businesses show rising trust in domestic AI regulation while doubts persist among EU respondents, with boards increasingly treating AI governance as a strategic differentiator rather than a pure compliance burden.
Regulation
Department for Energy Security and Net Zero. Terms of reference for a new independent review task the UK’s AI Champion for Clean Energy, Lucy Yu, with mapping how AI is currently used in Great Britain’s electricity networks, identifying regulatory, data and skills barriers, and recommending, by summer 2026, governance measures to scale AI safely in grid planning and operations.
European Commission. A new Data Act Legal Helpdesk goes live to give companies, public authorities and other organisations direct support on applying the EU Data Act, complementing soft law tools such as model contract terms and guidance on reasonable compensation and trade secret protection to reduce transaction costs and improve legal certainty for data sharing and cloud switching.
EU AI Office. One year on, the voluntary AI Pact now covers more than three thousand organisations and over two hundred formal pledgers, using webinars and support platforms as a pre-compliance track for the AI Act and feeding back into forthcoming guidelines on high-risk classification, transparency, incident reporting and obligations for providers and deployers.
GOV.UK. The government launches a call for views on the future of the BBC as part of the Royal Charter review, with an associated privacy notice that explicitly mentions the use of automated tools and AI techniques to analyse consultation responses, signalling how public sector bodies must integrate data protection, algorithmic transparency and fair processing duties when deploying AI in large-scale public engagement.
Academia
European Systemic Risk Board. The Advisory Scientific Committee’s report on artificial intelligence and systemic risk identifies eleven features of AI, warns that their interaction could amplify existing vulnerabilities in financial markets and infrastructures, and calls for calibrated macroprudential responses, including adjustments to conduct, competition and prudential rules where AI-driven herding or model monoculture emerges.
Parliamentary AI governance research. New work on guidelines for AI in parliaments, coordinated by the Westminster Foundation for Democracy and academic partners, distils ethical, transparency and accountability principles into a forty-point framework that legislatures can adapt when deploying AI in internal workflows or in legislative scrutiny, with an explicit focus on preserving human decision-making authority.
FRA. The European Union Agency for Fundamental Rights’ report on high-risk AI systems proposes a structured methodology for assessing impacts on rights such as privacy, non-discrimination and access to services, recommending that providers and deployers integrate fundamental rights impact assessments into compliance with the EU AI Act and related sectoral law.
Browne Jacobson. A new practice note on AI in local government distils lessons from training sessions with public authorities, emphasising the need for clear procurement standards, recorded legal bases for data use, human review of automated outputs and tailored governance policies when councils adopt AI tools for planning, enforcement and service delivery.
Adoption of AI
People Management. A new piece on recruitment risk argues that converging employment law reforms, fraud prevention duties and AI regulation are reshaping employee screening, pushing employers towards more automated checks while raising expectations that HR teams evidence fairness, transparency and proportionality in AI-assisted vetting.
DSIT. The Department for Science, Innovation and Technology publishes an algorithmic transparency record for its Parlex tool, which uses sentence generation to help draft and refine policy documents, providing a concrete example of how central government is documenting purpose, data inputs, human oversight and risks for generative systems used inside the policy process.
Eurostat. Fresh survey data on generative AI uptake in the European Union shows that almost one third of people aged sixteen to seventy-four used a generative AI service in 2025, with wide variation between member states, reinforcing how uneven adoption rates will complicate the design of harmonised but proportionate AI rules across the bloc.
Events
IAPP. The IAPP UK Data Protection Intensive in London in March 2026 flags dedicated sessions on AI governance, DPIAs and regulatory enforcement under UK data protection law, offering a near-term venue for tracking how privacy regulators and practitioners are operationalising AI-specific guidance.
techUK. The EU AI Act Summit, scheduled for February 2026 in London, will bring regulators, in-house counsel and compliance leads together to discuss implementation, enforcement models and secondary legislation under the EU AI Act, making it a key forum for understanding practical cross-border compliance expectations.
CREATe. The AI Regulation Early Career Researchers Conference planned at the University of Glasgow in spring 2026 will convene scholars working on AI governance, intellectual property, platform regulation and data protection, and is likely to surface emerging doctrinal and empirical debates that will shape the next wave of AI law scholarship.
Takeaway
Today’s developments show AI regulation moving deeper into the plumbing of infrastructure, finance and public administration while creative workers and local authorities probe how real accountability can work in practice. The mix of grid governance reviews, cultural sector pressure on copyright and AI, systemic risk analysis and transparency records confirms that the decisive questions now concern power over data and models inside existing legal systems rather than the drafting of headline AI statutes alone.
Sources: GOV.UK, European Commission, Eurostat, Council of Europe, Expleo, Equity, Musicians' Union, ESRB, FRA, Browne Jacobson, DSIT, IAPP, City & Financial, CREATe, OSCE Parliamentary Assembly