G7 Leaders Create “Hiroshima AI Process” to Discuss Generative AI Regulation
The Group of Seven (G7) nations want to create an international standard for generative AI and announced the first steps toward reaching that goal during their recent meeting in Japan. The “Hiroshima AI Process” will produce a report by the end of the year on setting standards for safety and trustworthiness, according to a summary of a working lunch.
Hiroshima AI Process
The G7 pointed to a need for making AI more trustworthy and suggested that regulation has not kept pace with the technology's advances and spread. Though “the common vision and goal of trustworthy AI may vary,” the organization said it hopes to set at least initial standards for generative AI. The plan is to start a ministerial forum for the “Hiroshima AI Process,” which will lay out the issues and some options for addressing problems that are already arising, including disinformation and copyright infringement. The G7 also formally asked the Organisation for Economic Co-operation and Development (OECD) to conduct its own analysis of how different policies for generative AI may be developed. The G7 leaders' decision to include generative AI in their agreed-upon statement follows a meeting of the G7 digital ministers that similarly suggested the need for AI rules to address potential risks.
“We recognize the need to immediately take stock of the opportunities and challenges of generative AI, which is increasingly prominent across countries and sectors, and encourage international organizations…to consider analysis on the impact of policy developments and Global Partnership on AI (GPAI) to conduct practical projects,” the G7 said in its leadership communique. “In this respect, we task relevant ministers to establish the Hiroshima AI process, through a G7 working group, in an inclusive manner and in cooperation with the OECD and GPAI, for discussions on generative AI by the end of this year. These discussions could include topics such as governance, safeguard of intellectual property rights including copy rights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilization of these technologies.”
OpenAI CEO Sam Altman and other generative AI company executives have drawn much more attention from regulators in recent months. Altman, along with IBM chief privacy officer Christina Montgomery and NYU professor Gary Marcus, recently testified before Congress on the technology and the need for regulation. That testimony followed a meeting at the White House with President Biden, Vice President Harris, and senior administration officials to discuss the same topic. The evening before the hearing, Altman had dinner with several dozen lawmakers, which likely helped make the room a bit friendlier during the actual hearing. Meanwhile, the European Union is moving ahead with generative AI regulation as part of its AI Act, and China has issued strict rules surrounding the technology, especially with regard to deepfakes.