OpenAI Safety

OpenAI Grants Board Veto Power and Forms Generative AI Safety Advisory Group

OpenAI has announced new safety measures to address growing concerns about the potential dangers of generative AI. The company has formed a new safety advisory group that will make recommendations directly to the organization’s leadership and has granted the board of directors veto power over decisions related to AI safety.


Safety concerns were floated frequently as the (still unconfirmed) reason for OpenAI CEO Sam Altman’s abrupt ousting before his eventual return. Regardless of whether that was the core of the board’s argument for the firing, the evolving discussion around AI risks puts a lot of attention on OpenAI’s approach to safety. The company’s new “Preparedness Framework” claims to establish a clear methodology for identifying and addressing the most serious risks posed by the generative AI models it is developing.

OpenAI is specifically looking to minimize “catastrophic” risks, those that could damage the economy or harm human life. The company has set up new safety mechanisms divided by the development stage of the AI models involved. They include a safety systems team for in-production models like those powering ChatGPT and a preparedness team for frontier models still in development, which will attempt to spot and measure risks. There’s also the “superalignment” team for more theoretical superintelligent AI models, which OpenAI has been building up since this summer.

Each model is evaluated for concerns around cybersecurity, persuasion (for example, the spread of disinformation), model autonomy, and chemical, biological, radiological, and nuclear (CBRN) threats. Each category is scored against predefined thresholds and restrictions. Any model judged too risky will not be deployed, or will not be developed further, until mitigations bring the risk down.
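
As a rough illustration of that gating logic, here is a minimal sketch of how per-category risk scores might translate into a deploy-or-develop decision. This is not OpenAI’s implementation; the category names follow the article, while the risk levels, thresholds, and function names are assumptions made purely for the example.

```python
# Hypothetical sketch of the Preparedness Framework's gating logic.
# Category names follow the article; risk levels, thresholds, and all
# function/variable names are illustrative assumptions, not OpenAI's code.

from enum import IntEnum


class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


TRACKED_CATEGORIES = ["cybersecurity", "persuasion", "model_autonomy", "cbrn"]


def evaluate_model(scores: dict[str, Risk]) -> str:
    """Return a decision based on the worst per-category risk score."""
    worst = max(scores[category] for category in TRACKED_CATEGORIES)
    if worst >= Risk.CRITICAL:
        return "halt development until mitigations reduce the risk"
    if worst >= Risk.HIGH:
        return "do not deploy; continue development only with mitigations"
    return "eligible for deployment"


# Example: a model scoring HIGH on persuasion would be held back from deployment.
decision = evaluate_model({
    "cybersecurity": Risk.MEDIUM,
    "persuasion": Risk.HIGH,
    "model_autonomy": Risk.LOW,
    "cbrn": Risk.LOW,
})
print(decision)
```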

“We are investing in the design and execution of rigorous capability evaluations and forecasting to better detect emerging risks. In particular, we want to move the discussions of risks beyond hypothetical scenarios to concrete measurements and data-driven predictions. We also want to look beyond what’s happening today to anticipate what’s ahead. This is so critical to our mission that we are bringing our top technical talent to this work,” OpenAI explained in a blog post. “We are creating a cross-functional Safety Advisory Group to review all reports and send them concurrently to Leadership and the Board of Directors. While Leadership is the decision-maker, the Board of Directors holds the right to reverse decisions.”

The new governance process mandates that safety recommendations be sent concurrently to both the board and the leadership, including CEO Sam Altman and CTO Mira Murati. Although the leadership makes the final decision on deploying or shelving a model, the board now has the authority to reverse those decisions. The restructuring responds to earlier concerns that high-risk products or processes could be approved without adequate board oversight. Recent changes to the board’s composition, including the appointment of figures like Bret Taylor and Larry Summers, who are not AI experts, add a new dimension to the decision-making process. How well these new plans work in practice will be scrutinized just as closely as the models themselves.
