According to MLex, India is shifting away from a standalone AI Act and will instead lean on existing laws to regulate artificial intelligence risks, forgoing a comprehensive new statute for now. 
According to The Korea Herald, South Korea will begin enforcement of its Artificial Intelligence Act, positioning itself as one of the first jurisdictions to bring a comprehensive AI law into effect alongside the EU. 
According to Radiology Business, U.S. healthcare executives are calling for a unified federal AI policy framework to preempt a patchwork of state AI regulations that they say hinders innovation and consistency in medical AI deployment. 
According to Bloomberg Law reporting, multiple U.S. states are targeting AI systems used to advise on employee compensation, signalling emerging sectoral regulation at the state level on AI in employment contexts. 
According to WinBuzzer, experts raised concerns about Perplexity offering free AI chatbots to law enforcement given known error rates, sparking debate over public safety use of generative AI outputs. 
According to IAPP analysis, debates continue over how to balance AI innovation with fairness and transparency, especially where copyright and algorithmic accountability intersect. 

Regulation

  • The EDPB and EDPS have issued Joint Opinion 1/2026 on the proposed ‘Digital Omnibus on AI’, putting data protection and fundamental rights concerns directly into the AI Act simplification debate. It is a reminder that any streamlining effort is likely to be judged on whether safeguards and enforceability are weakened in practice. 

  • Reuters reports that a US House panel is set to vote on the ‘AI Overwatch Act’, which would give Congress a formal review window over export licences for advanced AI chips. If it progresses, it strengthens the direction of travel towards treating frontier compute supply chains as a legislative oversight issue rather than a purely executive one. 

  • MLex reports that the European Commission is expected to miss the 2 February deadline to finalise guidance on classifying high-risk AI systems under the EU AI Act. If confirmed, the delay matters operationally because firms will be making design and conformity decisions while a key interpretive layer remains unsettled. 

  • According to Linklaters reporting on UK developments, Parliament’s Treasury Committee has urged the UK Financial Conduct Authority to provide clearer regulatory guidance on AI risks in the financial sector. 

Cases

  • CT Insider reports that a Connecticut lawyer is accused of filing briefs that included AI-generated fake case citations, with opposing counsel seeking sanctions in a contract dispute. The dispute is another practical warning that courts may treat unverifiable AI-assisted authorities as a professional conduct and costs issue, not a harmless drafting error. 

Academia

  • A 2026 chapter in Springer’s AI Law and Ethics literature maps emerging legal and ethical frameworks and highlights how data protection, product liability, and sector rules are being pulled into AI governance conversations. It is useful as a structured reference point for how compliance questions are being framed across sectors rather than inside AI law alone. 

  • A 2026 paper hosted by UC Law SF’s scholarship repository argues for governance reforms for integrating AI into corporate board decision-making and oversight. It is a practical corporate governance angle that connects accountability to board processes rather than treating AI risk as a purely technical control problem. 

  • A new research paper proposes a machine‑readable AI Deployment Authorisation Score as a global standard for determining legal permission for high‑risk AI systems, arguing this could address gaps in current governance frameworks. 

  • Another study examines internal deployment gaps in frontier AI regulations, showing how high‑risk AI used inside organisations may evade existing regulatory oversight and proposing approaches to address these blind spots. 

Events

  • The European Patent Office is running ‘Search and Examination Matters 2026’ as a free online conference on how AI is impacting patent search and examination, which is directly relevant to AI and IP process integrity. 2 to 5 February 2026. 

  • Bricker Graydon is running a free webinar on legal risks and privacy strategies where AI intersects with student data governance, including vendor contracting and bias, which may be relevant for public-sector and education compliance teams. 28 January 2026. 

Takeaway

The anchor development is the EDPB and EDPS stepping into the ‘Digital Omnibus on AI’ debate, signalling that simplification proposals will be tested against enforceable rights safeguards rather than speed alone. Alongside US movement on AI chip export oversight and another sanctions-linked AI citation dispute, the day reinforces a single governance theme: accountability is shifting from general AI aspiration to specific decision points that regulators, legislators, and courts can audit.

Sources: MLex, The Korea Herald, Radiology Business, Bloomberg Law, WinBuzzer, IAPP, arXiv, European Data Protection Board, Reuters, CT Insider, Springer, UC Law SF Scholarship Repository, European Patent Office, Austrian Standards, Bricker Graydon