UK deepfake crackdown and platform enforcement

According to TIME, the UK is bringing into force an offence covering the creation of non-consensual sexualised intimate images, with the Grok incident accelerating political focus on enforcement against distribution channels and tool access. The immediate operational pressure falls on platforms and providers to block creation and circulation pathways upfront, not merely react to reports.

AI deepfake abuse triggers cross-border platform action

According to Sky News, Ofcom is investigating X after reports that its Grok tool was used to generate sexualised images of children and undressed images of adults. The report frames the immediate issue as illegal content exposure and child safety risk, placing the platform's controls under regulatory scrutiny.

Health tools and chip controls reshape AI governance

According to Reuters, OpenAI has launched ChatGPT Health, a dedicated health tab offering medical record and wellness app integration, stricter privacy controls, and a phased global rollout. Separately, Reuters reports that Nvidia now requires full upfront payment for H200 AI chips in China amid regulatory uncertainty, a shift from previously flexible terms that could limit Chinese adoption.

AI provenance and control become practical compliance tests

Lenovo announced a deeper NVIDIA partnership around the Lenovo AI Cloud Gigafactory and set out a consumer and device layer push via Qira at CES. Meta, per Reuters, is reportedly facing China regulatory review hurdles over a proposed purchase of Manus, highlighting that AI deals now carry multi-layer governance risk, including national security framing.