Parliamentary Pressure, AI Security Warnings and Expanding Data Litigation

The Guardian. Over 100 UK parliamentarians across parties have endorsed a coordinated call, led by nonprofit Control AI, for binding regulation of the most powerful AI systems, urging the Prime Minister to resist pressure to weaken rules and stressing risks comparable to nuclear weapons and pandemics if advanced systems are left largely self governed. 

NCSC. The UK National Cyber Security Centre warns that prompt injection should not be treated as a niche variant of SQL injection but a distinct and potentially more dangerous class of attack that exploits how AI systems process instructions, urging organisations to treat prompt injection as a strategic security risk in AI deployments rather than a minor technical bug. 

Tech Policy Press. A new commentary argues that current UK law and policy do not provide effective protection from chatbot related harms, highlighting gaps in consumer protection and safety standards and suggesting that regulators have been slower than the speed at which conversational AI is being integrated into everyday services. 

Regulation

Department for the Economy (Northern Ireland). A new ‘AI Strategic Direction’ document sets out Northern Ireland’s economic approach to AI, framing AI as a growth driver and outlining priorities for skills, innovation support and infrastructure while flagging the need to balance competitiveness with ethical use and risk mitigation.

OECD. The ‘AI and the global productivity divide’ working paper analyses how AI may widen or narrow productivity gaps between low and middle income countries and richer economies, stressing that without complementary policies on skills, infrastructure and governance AI could entrench inequalities rather than deliver shared growth.

Cases

JURIST/US District Court (SDNY). Reports confirm that The New York Times has filed a federal copyright lawsuit against Perplexity AI, alleging large scale unauthorised copying, distribution and display of millions of articles to power its generative AI products and search summaries, continuing the wave of newsroom versus AI litigation focused on training data and output reproduction.

Law360/US Supreme Court – The US Supreme Court has declined to hear an appeal in a machine learning patent eligibility dispute involving Recentive Analytics, leaving in place a lower court decision and signalling continuing judicial reluctance to revisit the boundaries of patent protection for AI related inventions under current US doctrine.

Gulte/Delhi High Court. Actor N. T. Rama Rao Jr (Jr NTR) has reportedly moved the Delhi High Court seeking protection of his personality rights against unauthorised commercial use, including in the context of AI enabled image and voice replication, illustrating how generative technologies are accelerating personality and publicity rights disputes in India. 

CourtListener – United States District Court (S.D.N.Y.). In the OpenAI copyright MDL, Magistrate Judge Ona Wang partly granted plaintiffs’ motion and denied OpenAI’s cross motion on text and social media discovery, holding that Rule 26 obligations rest with counsel of record, rejecting OpenAI’s reliance on Sam Altman’s and Greg Brockman’s personal counsel to filter “purely personal” messages without a log, and ordering OpenAI’s outside counsel to rerun agreed search terms on forensic images, provide hit counts and detailed logs for privacy and privilege redactions, and meet and confer on what counts as “purely personal”, while accepting the Daily News plaintiffs’ proportionality based collection process.

Academia

Verfassungsblog. A new piece on ‘Artificial Intelligence and Human Rights Courts’ examines how regional human rights courts confront cases involving surveillance systems, facial recognition and algorithmic decision making, arguing that courts must build sufficient technical capacity to assess complex AI evidence while preserving standards of proportionality and due process. 

Oxford Law Faculty (Border Criminologies). A blog on ‘Criminalisation at European Borders and the Role of Artificial Intelligence’ analyses the deployment of AI tools in migration control, warning that predictive surveillance and risk scoring can reinforce criminalisation dynamics at borders and calling for stronger human rights safeguards in legal and institutional frameworks.

DLA Piper – Innovation Law Insights. Practitioner analysis on AI in public procurement uses a recent Italian Supreme Court decision to illustrate how procurement law constrains the acquisition and deployment of AI systems, emphasising transparency of criteria, explainability of automated evaluations and the need for contracting authorities to anticipate liability when algorithms influence tender outcomes.

Adoption of AI

Regulatory Policy Committee (UK Government blog). The Regulatory Policy Committee describes how it has been experimenting with AI tools to support impact assessment scrutiny, using AI to summarise evidence and identify issues while stressing that final judgments remain human led and that experimentation must be accompanied by careful attention to bias, transparency and accountability.

UK Government – Environment Agency. A government update on drought management notes that water companies and regulators are increasingly using AI to detect leaks and optimise networks, with senior officials emphasising data sharing, innovation and analytics as part of a coordinated approach to climate related water risks and infrastructure investment. 

Geneva Bern Area (GGBa). Authorities in Geneva have inaugurated an AI health hub at Campus Biotech to advance AI driven healthcare, research and neurotechnology, positioning the hub as a focal point for collaboration between regulators, health providers and researchers on safe and effective clinical use of AI tools.

Events

Global Big Data Conference. The ‘Global Artificial Intelligence Virtual Conference’ will take place online from 15 to 17 December 2025, bringing together vendors, developers and policymakers to discuss AI applications, risk management and governance frameworks in a multi sector setting.

Ganitara International Computing and AI Conference – The Ganitara International Computing and AI Conference 2025 runs online on 14 and 15 December 2025 and will cover advances in computing and AI, offering opportunities for researchers and practitioners to engage with emerging technical trends that will shape future regulatory debates. 

Takeaway

Today’s picture combines growing political pressure in the UK for binding controls on frontier AI with more granular regional and international strategies on how AI supports growth and productivity. Litigation over training data and personality rights continues to expand the legal risk landscape, while academics and practitioners focus on how courts, borders and procurement law can realistically absorb complex AI systems. For public institutions and organisations in the UK and EU, the immediate challenge is to align experimentation and adoption with evolving regulatory expectations and a rapidly thickening body of case law and soft law guidance.

Sources: UK Government, NCSC, The Guardian, Tech Policy Press, Department for the Economy (Northern Ireland), OECD, JURIST, The AI Insider, Verfassungsblog, Oxford Law Faculty, DLA Piper, Regulatory Policy Committee, ILO, Geneva Bern Area, Global Big Data Conference, Ganitara International Computing and AI Conference, CourtListener