OpenAI CEO leads safety committee for AI model training
San Francisco: OpenAI has established a Safety and Security Committee to oversee the training of its next AI model, the company announced on Tuesday. CEO Sam Altman will lead the committee alongside directors Bret Taylor, Adam D’Angelo, and Nicole Seligman. The committee makes safety and security recommendations to OpenAI’s board.
Microsoft-backed (MSFT.O) OpenAI’s chatbots, with generative AI capabilities such as engaging in human-like conversations and creating images from text prompts, have stirred safety concerns as AI models grow more powerful.
“A new safety committee signifies OpenAI completing a move to becoming a commercial entity from a more undefined non-profit-like entity,” said D.A. Davidson managing director Gil Luria.
“That should help streamline product development while maintaining accountability.”
The committee’s initial task is to review and enhance OpenAI’s existing safety practices over the next 90 days, after which it will present its findings to the board. Following the board’s review, OpenAI plans to share the adopted recommendations publicly.
The move follows OpenAI’s disbanding of its Superalignment team earlier this month, after the departures of the team’s co-leads, former Chief Scientist Ilya Sutskever and Jan Leike.
Other members of the new committee include technical and policy experts Aleksander Madry and Lilian Weng, as well as head of alignment science John Schulman.
Newly appointed Chief Scientist Jakub Pachocki and head of security Matt Knight will also sit on the committee, contributing to safety and security oversight of OpenAI’s projects and operations.