According to the Financial Times, the House of Lords has backed an amendment to ban social media for under-16s, intensifying the UK policy debate on age checks and safety-by-design obligations that interact with automated content and recommendation systems. The bill now returns to the Commons, sustaining legislative pressure on the age-verification and enforcement pathways that will shape AI-enabled online risk controls.

According to The Times, the growing use of AI to support official public-sector decisions is raising compliance concerns around transparency, contestability, and procedural fairness in administrative law. The article frames opacity and over-reliance risks as likely flashpoints for oversight and litigation, since reasons and accountability must remain intelligible to affected individuals.

Regulation

  • According to Reuters, South Korea has implemented its AI Basic Act, introducing requirements including human oversight for specified high-impact applications and labelling duties for generative AI outputs, with an initial grace period before penalties apply. The development adds another major compliance regime that will influence cross-border product design and disclosure practices for AI providers and deployers. 

  • Ofcom and a group of international online safety regulators have jointly published shared expectations on age assurance, emphasising accuracy, reliability, fairness, and data protection compliance when implementing age checks. The statement positions age assurance as a foundational enforcement lever under the UK Online Safety Act, and is relevant where AI tools are used to estimate or infer age and to manage access controls at scale.

  • Competition Bureau Canada has released a “What We Heard” report from its consultation on algorithmic pricing, highlighting risks including anticompetitive conduct and harms linked to limited transparency. The Bureau signals that competition enforcement and policy work will increasingly scrutinise algorithm-driven pricing, including practices built on machine-learning optimisation.

Academia

  • The Institute for Law & AI has published a January 2026 working paper on “Automated Compliance and the Regulation of AI,” examining how compliance tasks may be automated and what that implies for regulatory design and enforcement. The paper’s framing is directly relevant to operationalising AI Act-style obligations via internal controls, monitoring, and auditability tooling. 

  • JIPITEC has published analysis of Article 50 transparency provisions in the EU AI Act, focusing on how labelling and disclosure obligations may function in practice for different AI system types. The discussion is a useful reference point for organisations building user-facing notices and internal decision logs to evidence compliance. 

Events

  • Araki International IP&Law is hosting a free online webinar on practical preparation for EU AI Act compliance, including governance and risk assessment perspectives, with an explicit focus on corporate implementation. The session is scheduled for 18 February 2026 and is described as Japanese-language only. 

  • Clifford Law Offices CLE is hosting a free webinar on responsible AI use in legal practice, covering ethical duties, risk mitigation, and litigation trends linked to AI-driven disputes. The session is scheduled for 19 February 2026 and is delivered online with registration required. 

Takeaway

South Korea’s AI Basic Act coming into force reinforces a global compliance trajectory towards concrete, auditable duties, particularly labelling of AI-generated content and governance controls for high-impact deployments. The parallel focus on age assurance and algorithmic pricing shows regulators increasingly treating AI-adjacent systems as enforceable points of consumer protection, competition integrity, and online safety compliance.

Sources: Reuters, Ofcom, Competition Bureau Canada, Financial Times, The Times, Institute for Law & AI, JIPITEC, Araki International IP&Law, Clifford Law CLE