Vietnam’s AI Law and the GeminiJack Security Shock

Noma Security. GeminiJack shows AI assistants can become a covert data exfiltration layer. Noma Labs disclosed a zero click indirect prompt injection vulnerability, dubbed GeminiJack, in Google Gemini Enterprise and previously Vertex AI Search, which allowed attackers to embed hidden instructions in shared documents, calendar invites or emails so that AI powered enterprise search would silently exfiltrate Gmail, Calendar and Docs data through a disguised image request, leading Google to change how Gemini Enterprise interacts with its retrieval and indexing systems and separating Vertex AI Search from Gemini workflows.

CybersecurityNews. Security community frames GeminiJack as an AI native threat to corporate confidentiality. Cybersecurity commentators emphasise that GeminiJack bypassed traditional malware, phishing and data loss prevention controls because the exploit operated entirely within normal AI search behaviour, prompting calls for organisations using enterprise generative AI to treat model context and retrieval configurations as a regulated attack surface and to introduce governance that detects data poisoning and indirect prompt injection rather than relying on legacy perimeter tools.

WHO EMRO. Public health system explores AI for health emergencies under ethics and governance constraints. The WHO Regional Office for the Eastern Mediterranean reports on a workshop that piloted an AI enabled all hazards information management toolkit for health emergencies, highlighting both the potential of AI models to support early warning and situational analysis and the need for strong data protection, accountability mechanisms and clear human oversight when deploying AI tools in crisis decision making.

US House Financial Services Committee. Lawmakers probe AI innovation and risk in financial services. The House Committee on Financial Services is holding a full committee hearing titled “From Principles to Policy: Enabling 21st Century AI Innovation in Financial Services”, taking evidence from Google Cloud, Nasdaq, Zillow, Palo Alto Networks and Public Citizen on how sector specific AI legislation, sandbox regimes and supervisory expectations should evolve to support innovation while addressing model risk, cyber security and consumer protection.

Solicitor News. Master of the Rolls warns that AI may one day decide property disputes. Solicitor News reports remarks by Sir Geoffrey Vos suggesting that as online dispute resolution and digital justice tools mature, artificial intelligence could eventually be used to decide certain property disputes, while stressing that any such use must preserve public confidence, judicial accountability and appropriate safeguards around transparency and appeal.

Regulation

VietnamPlus. Viet Nam adopts its first dedicated Law on Artificial Intelligence. VietnamPlus reports that the 15th National Assembly has passed Viet Nam’s first Law on Artificial Intelligence, an eight chapter statute approved by over 90 per cent of deputies that introduces a risk based framework for AI, including strict safeguards for high risk systems inspired by EU and Korean models while combining these with pro innovation measures similar to Japanese practice, signalling a strategic choice to regulate AI explicitly rather than relying only on cross cutting digital legislation.

Nhan Dan. Amendments to Viet Nam’s IP law turn AI related IP into tradable collateral. Nhan Dan explains that the National Assembly also adopted amendments to the Law on Intellectual Property designed to treat patents, trade marks and copyrights as bankable assets that can be valued, traded and used as loan collateral, with particular emphasis on artificial intelligence and other emerging technologies, thereby aligning IP rules with international commitments and creating financial incentives for AI intensive firms that build protected portfolios.

OECD. New report on harnessing AI in social security highlights governance and workforce readiness. The OECD releases “Harnessing Artificial Intelligence in Social Security: Use Cases, Governance and Workforce Readiness”, documenting how administrations in jurisdictions such as Catalonia, Germany and Finland are using AI for eligibility checks, unstructured data classification and document processing, while emphasising the need for data quality, risk management, transparency and staff training if AI enabled systems are to improve access to benefits without amplifying bias or error.

Cases

Mondaq. Getty v Stability AI commentary frames the UK judgment as a mixed result for rights holders and AI developers. ENS, writing on Mondaq, reviews the High Court decision in Getty Images US Inc and others v Stability AI Ltd, noting that while the court found trade mark infringement where AI generated images reproduced or evoked Getty watermarks and held there was a likelihood of confusion under section 10(2) of the Trade Marks Act, claims for secondary copyright infringement failed because the Stable Diffusion model did not store or reproduce copyright works, with commentators divided on whether the ruling is a win for the AI industry or an underwhelming outcome that leaves rights holders relying on trade mark rather than copyright to police training data use.

MLex. OpenAI appeals GEMA Munich ruling on AI training and music copyright. MLex reports that OpenAI has filed an appeal against the Munich Regional Court decision in GEMA v OpenAI, which had found copyright infringement where GPT models memorised and reproduced song lyrics, and commentators note that the appeal will be an important test of how far EU text and data mining exceptions and fair use type arguments can shield AI training from rightsholder claims.

MLex. German photographer loses appeal as court backs LAION data scraping practices. A separate MLex report explains that German photographer Robert Kneschke has failed in his attempt to overturn a Hamburg court ruling that LAION’s large scale scraping of online images to build AI training datasets complied with copyright rules, reinforcing a line of European case law that treats certain large scale data mining for AI training as lawful where specific conditions are met, even as other courts take a more restrictive approach.

Academia

Mondaq. Expert evidence and generative AI use in US courts. Greenberg Traurig’s analysis on Mondaq surveys several 2024–2025 US cases and shows courts tightening scrutiny of expert reports that rely on generative AI, including excluding an expert declaration supporting a Minnesota deepfake election law when hallucinated citations produced by a large language model undermined credibility, striking paragraphs in a copyright dispute where an AI formatted citation was fictitious, and signalling through Frye based reasoning that AI assisted valuations or analyses must meet accepted scientific standards, prompting recommendations that lawyers ask experts explicitly whether and how they used AI and disclose this to courts.

Thomson Reuters. Consumer grade AI tools as hidden malpractice risks for law firms. Thomson Reuters Legal publishes analysis warning that ungoverned use of public generative AI tools by lawyers can lead to sanctions, adverse cost orders and reputational damage where hallucinated citations, undisclosed AI drafting, confidentiality breaches or cross border data transfers conflict with judicial expectations, professional conduct rules and client duties, and urges firms to adopt clear policies, audit trails and training instead of informal experimentation.

Gowling WLG. Copyright limits on training data and “no mining for AI chatbots.” Gowling WLG’s Loupedin blog argues that recent European and national copyright decisions, including GEMA v OpenAI and other training data disputes, are ending the era of unqualified data mining for chatbots by reaffirming that large scale ingestion and memorisation of protected works can fall outside text and data mining exceptions, and suggests that AI developers will need more robust licensing strategies and provenance controls to avoid systemic infringement risk.

Adoption of AI

VnEconomy. Viet Nam’s M&A market shows growing AI intensive investment under new legal framework. VnEconomy reports that Viet Nam recorded 218 merger and acquisition deals worth around 2.3 billion USD in the first ten months of 2025, with high tech sectors such as electronics, semiconductors and artificial intelligence emerging as key targets, including stakes by companies like Nvidia and Qualcomm in domestic firms, and observers link this wave of AI related investment to the country’s broader digital strategy and the need for clear AI and IP legislation such as today’s newly adopted laws.

WHO EMRO. AI pilots embedded in health emergency information systems. WHO EMRO notes that participating health ministries are beginning to integrate AI tools into emergency information management workflows to support real time analysis of surveillance data and social media signals, but emphasises that these pilots remain subject to human review, ethical guidelines and governance frameworks to avoid opaque automated triage or disproportionate surveillance in public health responses.

MartechSeries. GenAI.mil selects Gemini for Government to scale military AI use. Google Cloud announces that the US Department of Defense Chief Digital and Artificial Intelligence Office has selected Gemini for Government to power GenAI.mil, a generative AI environment intended to bring AI tools into day to day military operations, raising questions about security vetting, model governance, export controls and the alignment of defence AI deployment with emerging US and allied AI policy frameworks.

Law360. CMS expands deployment of Harvey AI platform across the firm. Law360 reports that international law firm CMS is expanding its use of Harvey’s legal AI platform across practice groups after internal pilots, illustrating how large firms are moving from experimentation to scaled deployment of AI research and drafting tools within regulated environments that must manage privilege, confidentiality and model supervision.

Events

National Center for State Courts. Webinar on AI and unauthorised practice of law rules. The US National Center for State Courts is hosting a webinar on 17 December 2025 titled “Modernizing unauthorized practice of law regulations to embrace AI driven solutions and improve access to justice”, which will explore how state courts and bar associations might adjust UPL rules to accommodate AI based legal service tools while maintaining consumer protection.

Export Compliance Training Institute. AI and emerging tech landscape for trade compliance. The Export Compliance Training Institute is running a free webinar on 18 December 2025 on “AI and the emerging tech landscape for trade compliance”, focusing on how export control and sanctions regimes apply to AI systems, models and infrastructure and how companies can incorporate AI into internal compliance programmes.
 

Takeaway

The adoption of Viet Nam’s first AI law alongside OECD guidance on AI in social security illustrates how formal regulatory frameworks and soft law are spreading beyond early mover jurisdictions, often integrating risk based controls with development incentives. At the same time, the GeminiJack incident and new litigation around training data and music rights confirm that AI security and copyright doctrine are evolving in real time, while courts, law firms and financial regulators start to confront the practical governance challenges of embedding generative tools into sensitive decision making environments.

Sources: VietnamPlus, VTV/Vietnam Today, OECD, Noma Labs, SecurityWeek, US House Committee on Financial Services, Solicitor News, MLex, Mondaq, Thomson Reuters Legal, Gowling WLG Loupedin, Google Cloud, Law360, National Center for State Courts, Export Compliance Training Institute, MartechSeries, Nhan Dan, WHO EMRO