OpenAI says it’s setting up a safety and security committee and has begun training a new AI model to supplant the GPT-4 system that underpins its ChatGPT chatbot.
The San Francisco-based startup revealed in a blog post on Tuesday that the committee will advise the full board on “critical safety and security decisions” related to its projects and operations. The move comes amid ongoing debate over AI safety at OpenAI, which came under scrutiny after researcher Jan Leike resigned, accusing the company of letting safety “take a backseat to shiny products.”
OpenAI co-founder and chief scientist Ilya Sutskever also stepped down, and the “superalignment” team the two jointly led, which focused on AI risks, was disbanded. While it did not address the controversy directly, OpenAI stated it has “recently begun training its next frontier model” and maintains that its AI models lead the industry in both capability and safety. “We welcome a robust debate at this important moment,” the company added.
AI models are prediction systems trained on extensive datasets to produce text, images, video, and human-like conversation on demand. Frontier models represent the most advanced and powerful AI systems.
The safety committee comprises company insiders, including OpenAI chief executive Sam Altman and chairman Bret Taylor, four technical and policy experts from OpenAI, and board members Adam D’Angelo, chief executive of Quora, and Nicole Seligman, former Sony general counsel.
The committee’s first task is to evaluate and strengthen OpenAI’s processes and safeguards, with plans to present its recommendations to the board within 90 days. OpenAI has committed to publicly disclosing the recommendations it decides to adopt “in a manner that is consistent with safety and security.”