Algorithms at Work, Human Rights Scrutiny and AI Transparency

GOV.UK. The Ministry of Justice (MoJ) has published an Engineering AI Governance Framework for its engineering teams, setting practical rules for using tools such as GitHub Copilot and for building bespoke AI systems across the development lifecycle.

European Parliament. The European Parliament has adopted a resolution calling on the European Commission to propose specific rules on algorithmic management at work, built around transparency and worker consultation. The resolution insists that employment decisions must not rely solely on automated systems, that workers have a right to know what data are processed about them, and that biometric or emotional data should not be harvested for workplace monitoring, positioning labour protection as a front line of EU AI governance.

UK Parliament. The Joint Committee on Human Rights held an oral evidence session on Human Rights and the Regulation of AI, taking testimony from legal practitioners, the Law Society and international experts to probe how UK regulation should respond to risks such as opaque automated decision making, cross border data flows and the concentration of AI power.

GOV.UK. The UK government has published a consultation prospectus on creating a single construction regulator, including plans to analyse consultation responses using an AI tool called Consult AI under strict constraints that prevent training on personal data, illustrating live experimentation with AI assisted regulatory analysis after the Grenfell Tower Inquiry. 

GOV.UK. New publications for the ten year Health Plan for England report that staff and the public see foundational digital infrastructure as a credibility test, warning that widespread deployment of advanced tools such as artificial intelligence will not be realistic without interoperable systems and a single patient record, which implicitly reframes AI rollout as dependent on basic data governance.

India Legal. The Indian government told the Lok Sabha that it has set up an expert committee on generative AI and copyright, which has already released a first working paper assessing whether the Copyright Act 1957 can cope with AI training and AI-generated works.

Klobuchar Senate. In the United States Congress, Senators Amy Klobuchar and Shelley Moore Capito have introduced the Artificial Intelligence Scam Prevention Act, a bipartisan bill that would ban the use of AI to impersonate any person with intent to defraud.

Regulation

European Commission. The Commission has released the first draft Code of Practice on Transparency of AI Generated Content under the AI Act. The draft sets out obligations for providers and deployers whose systems fall under Article 50(2) and 50(4), including marking and labelling requirements, with feedback open until late January and a second draft due in mid March 2026, making content transparency one of the earliest concrete implementation tracks of the Act.

GOV.UK. The Department for Science, Innovation and Technology annual report and accounts emphasise that artificial intelligence remains central to departmental activity, referencing an AI Opportunities Action Plan and the rebranding of the AI Safety Institute into the AI Security Institute with a focus on national security risks, signalling a continuing preference for a risk based and security anchored governance model rather than a single AI statute.

Cases

Mondaq. A United States Supreme Court order declining to review a machine learning patent case leaves in place a Federal Circuit decision treating claims that merely apply machine learning to automate tasks or analyse data as potentially unpatentable abstract ideas unless they disclose specific improvements to the underlying technology. The denial reinforces a high bar for patent protection over generic AI implementations.

McCullough Robertson Lawyers. New guidance in New South Wales and Queensland on the use of artificial intelligence in courts, including updated practice directions on expert evidence and responsible use of generative tools, illustrates how Australian judiciaries are starting to codify expectations for transparency, verification and human responsibility when AI is used in litigation workflows. 

Academia

Private Law Theory. A new paper by Ying Ye on the copyright protection of AI generated content in video games examines whether outputs co-created by game engines and users qualify as works and how authorship and originality should be assessed when generative systems supply stylistic or structural elements, offering a concrete test bed for applying copyright doctrines to hybrid human machine creativity.

Piper Alderman. An intellectual property and technology year in review from Australia identifies the National AI Plan and related reforms as key drivers of change in generative AI regulation, mapping how copyright exceptions, licensing models and liability rules are evolving as governments move from consultation to structured policy programmes.

Adoption of AI

Ministry of Housing, Communities and Local Government. The single construction regulator consultation explains that responses will be processed using the Consult AI tool under a design in which themes are drafted by the system and then checked by policy officials, with explicit safeguards that the data are not used to train models and that bias checks are applied, making this one of the clearest UK examples of regulated AI use in core policy making (GOV.UK).

Department of Health and Social Care. The analogue to digital chapter of the ten year Health Plan records that frontline staff view investment in interoperable records and basic digital tools as a prerequisite for credible AI deployment, implicitly warning that premature reliance on artificial intelligence in health without repairing core data infrastructure would undermine both trust and effectiveness (GOV.UK).

Department for Science, Innovation and Technology. The DSIT performance report notes that the AI Security Institute now concentrates on risks to national security from advanced systems and situates this within a wider AI Opportunities Action Plan, signalling that institutional capacity for AI risk assessment is becoming embedded rather than experimental in the UK administrative landscape (GOV.UK).

Events

CNPD. The Luxembourg National Commission for Data Protection is hosting a conference, “The AI Act in Action”, on 20 January 2026, focused on bridging policy and practice in the age of AI and featuring government and industry stakeholders speaking on implementation challenges.

Future Bridge. The fourth AI Legal 2026 conference will gather in house and law firm practitioners to discuss topics such as the interplay between the EU AI Act and the General Data Protection Regulation, legal research with AI and AI driven contract management, positioning AI governance as a mainstream compliance and operations theme for legal departments.

Maastricht University. “AI and the Future of Tax Law”, a two-day conference opening today (online), examines how AI reshapes tax administration, enforcement and taxpayer rights.

Takeaway

The silver thread today is that AI governance is being pulled tighter around fundamental rights and institutional practice while large scale reforms try to loosen some regulatory bolts. Parliamentary hearings and NHS analysis show rights and accountability moving to the centre of UK debates just as EU and US initiatives push toward “streamlined” AI and data rules. For AI law and governance work this combination means opportunities to shape standards in courts, health systems and corporate governance, but only if the underlying rights impacts are tracked as closely as the legislative tweaks.

Sources
European Commission, European Parliament, UK Parliament, GOV.UK, Maastricht University, McCullough Robertson Lawyers, Piper Alderman, Private Law Theory, CNPD, Future Bridge, Mondaq, India Legal, Klobuchar Senate.