OpenAI said Tuesday it has established a new committee to make recommendations to the company’s board about safety and security, weeks after dissolving a team focused on AI safety.
In a blog post, OpenAI said the new committee would be led by CEO Sam Altman, along with board chair Bret Taylor and board member Nicole Seligman.
The announcement follows the high-profile exit this month of Jan Leike, an OpenAI executive focused on safety. Leike resigned from OpenAI after criticizing the company for underinvesting in AI safety work and saying that tensions with OpenAI’s leadership had “reached a breaking point.”
It also comes after the departure of Ilya Sutskever, who co-led OpenAI’s so-called “superalignment” team, which focused on ensuring that AI development serves human needs and priorities. Sutskever played a key role in Altman’s surprise ouster as CEO last year, only to reverse course and later throw his support behind Altman’s return.
Earlier this month, an OpenAI spokesperson told CNN that dismantling the superalignment team and reassigning those employees across the company would help it better achieve its superalignment goals.
In its blog post Tuesday, OpenAI also said it has begun training a new AI model to succeed the one currently powering ChatGPT. The company said the new AI model succeeding GPT-4 would be another step along the way to artificial general intelligence.
“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the company said.
“A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days,” the blog post added. “At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”