Reuters reported last week that OpenAI staff researchers wrote a letter to the board warning that an internal project named Q* could represent a breakthrough in creating AI that could surpass human intelligence in a range of fields. The letter was sent ahead of chief executive Sam Altman's firing.
The model, called Q* and pronounced "Q-Star", was able to solve basic maths problems it had not seen before, according to the tech news site The Information, which added that the pace of development behind the system had alarmed some safety researchers. The ability to reliably solve unseen maths problems would be viewed as a significant development in AI.
Neither OpenAI nor its largest backer, Microsoft, has publicly confirmed the existence of Q*, much less the possibility that it is a dangerous breakthrough in AI technology. OpenAI did not respond to requests for comment.
These sorts of claims aren’t new, either. A Google engineer claimed in 2022 that an unreleased AI system had become sentient. The claim caused a brief flurry of excitement before the engineer was fired and the company denied the claim.
The only detail the report gave about Q*'s capabilities was that it could solve certain mathematical problems at the level of grade-school students. That has led to scepticism about how significant an advance Q* really is. Elon Musk suggested his own Grok chatbot could outdo Q* by solving both maths problems and fundamental philosophical questions.
Should we be worried?