AI training data and courtroom misuse under scrutiny

UK Government (DSIT). A new press release confirms that members of the former International Network of AI Safety Institutes have recommitted to joint work on benchmarks and testbeds under the renamed International Network for Advanced AI Measurement, Evaluation and Science, with an explicit focus on improving the comparability and robustness of AI measurement and evaluation practices across major economies.

Jersey Evening Post. Jersey’s Chief Minister highlighted that digital technology, including artificial intelligence, will be “crucial” to curbing future public spending, signalling plans to expand automation and data-driven decision-making across government while recognising the need to manage digital exclusion and ensure that human oversight remains central for sensitive public services.

Regulation

European Commission. The Commission has opened a formal antitrust investigation into whether Google’s use of web publishers’ and YouTube content for AI purposes breaches EU competition rules, focusing on how content is scraped, whether publishers have effective control over reuse in generative services, and whether Google’s practices distort competition in emerging AI markets.

Council of the EU. Research ministers agreed the Council’s position on amendments to the EuroHPC Regulation to support “AI gigafactories”, extending the EU’s supercomputing joint undertaking so that high-performance computing resources can be used more extensively for training frontier AI models, and setting a mandate for negotiations with Parliament on governance, access conditions and safeguards.

Government of India (DPIIT). A working paper on the AI-copyright interface proposes a hybrid “One Nation, One Licence, One Payment” framework, under which AI developers would receive a statutory blanket licence to use all lawfully accessed copyrighted content for training in exchange for revenue-based mandatory remuneration, administered by a new Copyright Royalties Collective for AI Training (CRCAT) and overseen by a government rate-setting committee.

Cases

The Times of India (India). The Supreme Court of India has identified what it describes as the first known instance of AI misuse in its filings, after a litigant in the Omkara Assets Reconstruction v Gstaad Hotels dispute submitted a rejoinder containing fabricated case law apparently generated with AI tools. The bench treated the matter as serious, raising questions about the duties of litigants and advocates when relying on AI in written submissions.

Chicago Sun-Times (USA). A Cook County Circuit Court judge has sanctioned attorney Larry Mason and his former firm, Goldberg Segalla, nearly USD 60,000 after a defence filing for the Chicago Housing Authority cited a non-existent case produced by ChatGPT, reinforcing that courts may impose significant financial penalties on lawyers and firms that fail to verify AI-assisted work and mislead the court.

Academia

Gadens. A legal insight on “Intellectual property and AI” examines how Australian IP law is being retrofitted to address generative systems, highlighting unresolved issues around ownership of training inputs, the protectability and originality of AI-assisted outputs, and the role of contracts and internal policies in allocating IP risk in AI development and deployment.

DLA Piper. Innovation Law Insights analyses an Italian Council of State decision on AI in public procurement, extracting principles for contracting authorities such as transparency over automated components of bids, ensuring equal treatment when AI tools are used by some bidders but not others, and maintaining meaningful human judgment in the assessment and supervision of AI-assisted performance.

Burges Salmon. A new Half-Yearly Disclosure Update surveys key English disclosure cases, including matters involving AI-generated evidence and GDPR-related disclosure disputes, and emphasises that parties experimenting with AI in document review or evidence generation must still meet duties of candour, preserve relevant data and manage proportionality in line with existing civil procedure rules.

Adoption of AI

Cyberagentur (Germany). The German Agency for Innovation in Cybersecurity has launched the multi-year “Forensics of Intelligent Systems” programme, funding three consortia to develop methods and tools for analysing learning AI systems and detecting manipulation in a manner that meets evidential standards in court, aiming to bridge gaps between digital forensics, AI security and legal admissibility.

Kennedys. Commentary on AI use by expert witnesses, based on the 2025 Bond Solon Expert Witness Survey, reports that around one fifth of experts now use AI tools in their work, yet stresses that courts expect transparency about any such use and that responsibility for the opinions given remains with the human expert, not the tool provider.

Events

UK Parliament. On 17 December 2025 the Joint Committee on Human Rights will hold an oral evidence session on “Human Rights and the Regulation of AI” at Westminster, continuing its inquiry into how human rights standards should guide the design and oversight of AI systems in the UK.

Council of Europe/UK Parliament. A Council of Europe conference on Artificial Intelligence will take place on 15–16 December 2025 in the Palace of Westminster, bringing together parliamentarians, regulators and experts to discuss common standards for trustworthy AI, including accountability, oversight and international coordination.

BCS SGAI. The Forty-fifth SGAI International Conference on Artificial Intelligence (AI-2025) will run in Cambridge on 16–18 December 2025, providing a long-running UK-based academic forum on AI methods and applications and offering opportunities to track how technical developments intersect with safety, governance and regulatory debates.

Takeaway

Today’s developments point to a tightening legal perimeter around AI training and use. Policymakers are moving towards structured licensing and royalty models for training data, while competition and infrastructure rules are being adapted for frontier systems, and courts are signalling that misuse of AI in litigation will attract sanctions. For practitioners, the direction of travel is towards clearer ex ante rules on content access and governance, and stronger ex post accountability when AI tools undermine evidential integrity or trust.

Sources

European Commission, Council of the EU, UK Government (DSIT), The Times of India, Department for Promotion of Industry and Internal Trade (India), Chicago Sun-Times, Gadens, DLA Piper, Burges Salmon, Cyberagentur, Kennedys, UK Parliament, Council of Europe, BCS SGAI