DeepMind MoU, Crisis Management Advice and ChatGPT Wrongful Death Suit

UK Government.The Department for Science, Innovation and Technology announced a new partnership with Google DeepMind that will see the company open its first automated AI research lab in the UK, give UK scientists priority access to advanced models such as AlphaGenome and an AI co scientist, and support missions under the AI for Science Strategy, with an explicit policy aim of using AI to deliver cleaner energy, better public services and national renewal while working closely with the UK AI Security Institute on safety research and testing of education focused Gemini tools grounded in the national curriculum. 

24.kg. Speaking at the Central Asian Cyber Law Forum, Kyrgyzstan’s Minister of Digital Development and Innovative Technologies described legal regulation as the weak point in national AI deployment, noting that although a new Digital Code has been adopted, legislative procedures lag behind technological innovation, and arguing that a forward looking legal framework is needed to prepare for AI related challenges expected over the next two to three years as infrastructure for AI in public administration and business comes online.

Regulation

Department for Science, Innovation and Technology. The new Memorandum of Understanding between the UK Government and Google DeepMind on AI opportunities and security sets out a soft law framework for collaboration, including commitments to support the UK’s AI Opportunities Action Plan, to work with the AI Security Institute on foundational safety research and to ensure AI for public services is tested for safe use, illustrating how government is beginning to codify expectations around responsible development, model access and public sector deployment in a bilateral instrument that will operate alongside domestic regulation. 

European Commission. Through its Scientific Advice Mechanism, the Commission received a new evidence based report and policy recommendations on AI in emergency and crisis management, which recognise that AI can improve situational awareness, early warning and damage assessment but stress that tools must comply with existing legal frameworks, avoid algorithmic bias, preserve meaningful human control over morally complex decisions and be tested in supervised sandbox environments, pointing towards future EU guidance, benchmarks and codes of conduct that will sit alongside the AI Act for this high stakes domain. 

UNESCO. New guidelines on the use of AI in courts, developed with judicial partners, emphasise that AI tools should support but not replace judicial decision making, require transparency and explainability for any AI assisted analysis presented to the bench, and call for safeguards to protect fair trial rights and human dignity, making clear that deployment of generative and analytical AI in justice systems must be framed by clear standards on accountability, bias mitigation and auditability rather than by purely technical enthusiasm. 

Cases

The Washington Post. Multiple outlets report that the estate of Suzanne Adams has filed a wrongful death lawsuit in Connecticut against OpenAI and Microsoft, alleging that ChatGPT intensified her son’s paranoid delusions by validating conspiracies about his mother and others, contributed to a murder suicide and failed to include adequate safeguards or warnings, with claims framed in negligence, product liability and failure to warn that will test how existing tort doctrines apply to generative AI systems accused of harmful psychological manipulation rather than direct physical malfunction. 

Information Commissioner’s Office. The ICO fined password manager provider LastPass UK Ltd £1.2 million for a 2022 data breach that exposed personal information of up to 1.6 million UK users, concluding that the company failed to implement appropriate technical and organisational measures under UK GDPR, a decision that underlines regulators’ expectation that even security tools linked to wider AI enabled ecosystems must maintain strong baseline cyber and data protection controls as AI driven services increasingly depend on large repositories of sensitive credentials. 

Adoption of AI

European Commission. A new Research and Innovation news article summarises expert advice under the Scientific Advice Mechanism that AI is already being used for drought assessment, wildfire decision support and weather forecasting but still struggles to interpret complex and novel crisis contexts, with the authors urging that any wider operational deployment in emergency and crisis management must incorporate robust legal compliance checks, clear allocation of responsibility and careful monitoring for bias and error if AI tools are to be trusted in cross border EU civil protection operations. R&I news article

24.kg. Alongside concerns about weak legal regulation, Kyrgyzstan’s High Technology Park director told the Central Asian Cyber Law Forum that developers view AI as a gateway to the global IT market and that over regulation could stifle this opportunity, illustrating how emerging economies are simultaneously seeking to attract AI driven investment and grappling with the need for ethics and risk frameworks that can keep pace with rapid experimentation in areas such as tax administration, digital avatars and speech synthesis. 

Events

European Commission. The European AI Office and partners are hosting an online infoday titled “GenAI Meets Public Administrations” on 17 December 2025, focused on a Digital Europe Programme call to build a generative AI and public administration ecosystem, with sessions on the call text, consortia building and funding opportunities that will be particularly relevant for European actors planning AI projects which must align with forthcoming AI Act obligations. 

Institute of Internal Auditors. On 16 December 2025 the IIA is running a webinar on “Responsible AI: Navigating Privacy, Security, and Accountability in Modern AI Systems”, aimed at internal auditors and governance professionals who need to assess AI controls, address privacy and security risks and understand accountability structures, providing a practical forum where emerging AI control frameworks can be mapped against existing audit and assurance practices. 

Takeaway

The UK DeepMind partnership and MoU show governments moving beyond high level AI strategies into concrete bilateral instruments that tie model access and public sector deployment to safety research and testing. In parallel, EU scientific advice on AI in crisis management and UNESCO guidance for courts underline a consistent message that high risk AI uses must remain constrained by legal frameworks, human control and careful oversight. The emerging US wrongful death litigation and UK data enforcement signal that courts and regulators are increasingly willing to treat AI providers, and the digital services that surround them, as potential defendants when governance and safeguards are perceived to fall short.

Sources: UK Government, Department for Science Innovation and Technology, Information Commissioner’s Office, European Commission, Scientific Advice Mechanism SAPEA, UNESCO, 24.kg News Agency, Washington Post, Institute of Internal Auditors