After almost a decade as chief scientist of artificial intelligence startup OpenAI, Ilya Sutskever left in May. One month later, he has started his own AI company: Safe Superintelligence Inc.
“We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence,” Sutskever wrote Wednesday on X, formerly Twitter. “We will do it through revolutionary breakthroughs produced by a small cracked team.”
Sutskever is joined in his new venture by Daniel Gross, who formerly directed Apple’s AI efforts, and Daniel Levy, another ex-OpenAI researcher. The startup has offices in Tel Aviv and Palo Alto, California.
Alongside Jan Leike — who also left OpenAI in May and now works at Anthropic, an AI firm started by former OpenAI employees — Sutskever led OpenAI’s Superalignment team. The team was focused on controlling AI systems and ensuring that advanced AI wouldn’t pose a danger to humanity. It was dissolved shortly after both leaders departed.
Safe Superintelligence, as its name implies, will focus on safety efforts similar to those of Sutskever’s old team.
“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” the company’s co-founders wrote in a public letter. “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
Sutskever’s departure is part of a broader exodus: several OpenAI members, including founding member Andrej Karpathy, have left the company in recent months, and a group of former staffers last month signed an open letter raising the alarm over “serious risks” stemming from oversight and transparency issues at OpenAI.
Sutskever was one of the OpenAI board members who attempted to oust fellow co-founder and CEO Sam Altman in November; Altman was quickly reinstated. The directors cited concerns over Altman’s handling of AI safety as well as allegations of abusive behavior. Former board member Helen Toner has said that Altman’s manipulative behavior and lies created a culture that executives labeled “toxic abuse.”