Meta to expand parental controls for teen interactions with AI chatbots. Following criticism over “flirty” chatbot behaviour, Meta will roll out additional parental tools and guardrails for teen users. Governance note: product-level safety controls and logs become central evidence for regulators. Reuters
California’s next AI policy fights line up after mixed bill outcomes. With stricter child-safety proposals vetoed earlier this week and disclosure-focused safeguards enacted, lawmakers and advocates are already shaping the 2026 agenda. Compliance teams should anticipate design-level disclosure duties and age-assurance debates. Axios
Regulation
EU AI Act coordination: second meeting of the AI Act Correspondents Network. EDPS confirmed the 7 Oct session and outlined ongoing inter-institutional collaboration to harmonise enforcement. Expect tighter guidance for providers and deployers. EDPS
Cases
Raine v. OpenAI et al. (San Francisco Sup. Ct., filed Aug 2025). Parents bring a wrongful-death/negligence suit alleging ChatGPT responses contributed to a teen’s suicide, squarely raising duty of care and foreseeability for conversational AI. The case is live and relevant to product-liability-style theories; practical priorities include evidence preservation and safety-by-design controls. Courthouse News
Events
NIST documentation standard — input window closes today. The agency’s Zero Draft outline for documentation of AI datasets and models reaches its feedback cut-off on 17 Oct, a concrete timeline marker for audit-ready artefacts. NIST
Academia
Argumentation-based explainability for Legal AI (Oct 2025). Proposes argumentation frameworks for court-facing explainability, aligning with contestability and reason-giving duties. arXiv
Unilaw-R1 (Oct 2025). Introduces a legal-reasoning LLM and benchmarks; useful for mapping evaluation/evidence requirements for legal-domain systems. arXiv
Business
Safety and supervision by design. Platform changes for teen interactions illustrate how legal risk translates into product controls (disclosure, parental dashboards, logs). Expect similar moves across providers as scrutiny grows. Reuters
Adoption of AI
Documentation becomes a gating requirement. With NIST’s input window closing, dataset/model documentation is solidifying as procurement-grade evidence for public and private buyers. NIST
Takeaway
The compliance backbone for AI is getting tangible: who interacted, what was shown, how it was controlled, and how it’s documented. Teams should lock in disclosure patterns, parental/age-safety flows where relevant, and a documentation pipeline aligned to forthcoming standards and AI-Act guidance.
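To make the "who interacted, what was shown, how it was controlled, and how it’s documented" checklist concrete, here is a minimal audit-record sketch in Python. All field names are illustrative assumptions, not taken from NIST’s draft, the AI Act, or any provider’s actual schema:

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json

# Illustrative only: every field name below is an assumption, not a
# requirement from any standard or regulation mentioned above.
@dataclass
class InteractionRecord:
    user_id: str             # who interacted (pseudonymised identifier)
    age_band: str            # e.g. "13-15"; supports age-assurance review
    shown_content_hash: str  # what was shown (hash, not the raw transcript)
    controls_applied: list   # how it was controlled, e.g. ["teen_mode"]
    model_doc_ref: str       # how it's documented: pointer to model/dataset docs
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_interaction(user_id, age_band, content_hash, controls, doc_ref):
    """Serialise one interaction as a JSON line, ready to append to an audit log."""
    rec = InteractionRecord(user_id, age_band, content_hash, controls, doc_ref)
    return json.dumps(asdict(rec))

# Hypothetical usage:
line = log_interaction("u-123", "13-15", "sha256:demo", ["teen_mode"], "modeldoc-v1")
```

Each JSON line maps one log entry to the four evidence questions in the takeaway; hashing displayed content rather than storing transcripts is one common way to balance auditability against data-minimisation duties.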
Sources: Reuters, Axios, EDPS, NIST, arXiv, Courthouse News