The Hiroshima Process Code of Conduct sets out a globally coordinated, voluntary framework for organisations developing advanced AI systems, particularly foundation and generative models. It defines actions for responsible design, development, deployment and use, centred on risk identification, transparency, governance, cybersecurity, mitigation of societal and human rights risks, and alignment with democratic values. The Code builds on the OECD AI Principles and responds to concerns that powerful models amplify safety, security and societal harms. A core requirement instructs organisations to "take appropriate measures throughout the development of advanced AI systems… to identify, evaluate, and mitigate risks across the AI lifecycle" (Action 1). Although non-binding, the instrument is intended as a global reference point ahead of future regulatory frameworks, encouraging the emergence of international standards, governance tools and cross-sector collaboration.