Three well-known generative AI pioneers have formed Safe Superintelligence Inc., a startup that will focus on safe superintelligence (SSI).

In a post, Ilya Sutskever and Daniel Levy, both formerly of OpenAI, and Daniel Gross, a former Y Combinator partner, announced the company's launch and mission. Sutskever was OpenAI's chief scientist, and Levy was an OpenAI engineer.

Here's the Safe Superintelligence Inc. mission in a nutshell. The three founders wrote:

"SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

This way, we can scale in peace.

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

Constellation Research analyst Chirag Mehta broke down what the Safe Superintelligence launch means and the open questions it raises.

  • What does safe superintelligence mean exactly? "We at least know what AGI means, but no one can meaningfully describe what 'Safe Superintelligence' actually means," said Mehta.
  • The company may actually distract from safe AI. "This launch might well have the opposite effect: a distraction from focusing on making AI systems safe today before we cross the AGI or superintelligence Rubicon," said Mehta.
  • Researchers unite! "This effort will likely attract many researchers and technologists who have been passionate about advancing the domain but are frustrated with limitations and changing strategies of current AI companies," said Mehta.
  • Future direction of Safe Superintelligence. "It is unclear in which direction the company goes. I will be surprised if they themselves are clear about their milestones," said Mehta. "It would be worth watching who they hire, who they raise money from, and who they might work with as their design partners. That would reveal more details beyond a lofty mission statement."
  • Lines are drawn. "This will likely drive a deeper wedge into the OpenAI-Sam Altman and Stability AI networks, as many of them considered this to be the original mission of OpenAI. As M.G. cleverly put it, 'I'm reminded of Coca-Cola Classic. Safe Superintelligence sounds a lot like OpenAI Original,'" said Mehta.
  • Now hiring for the movement. "The larger enterprise software community will largely ignore this launch, but for serious AI aficionados it would be a dream to be part of a movement in Palo Alto or Tel Aviv—two magnificent cities that have largely defined the next generation landscape and are on their way to define the next one," said Mehta.