
Sam Altman’s removal from OpenAI: Is Q-Star the ‘humanity-threatening’ AI behind it?


The recent removal and swift reinstatement of Sam Altman at OpenAI, the company behind ChatGPT, has been the talk of the AI community. Although the specific reasons for Altman’s initial removal as CEO remain unclear, a recent report indicates that an AI project called Q-Star (Q*) might be linked to it, Business Standard reported.

Q-Star AI could be a potential game-changer in OpenAI’s pursuit of artificial general intelligence

According to a Reuters report, there’s a belief among some people inside OpenAI that Q-Star (Q*) is a fresh model in the works that could be a game-changer in the company’s pursuit of artificial general intelligence (AGI). OpenAI defines AGI as systems that can independently do better than humans in most ‘economically valuable tasks.’ The researchers also noted that this advancement might pose a threat to humanity.

The report also connects the chaos in the OpenAI boardroom to this significant discovery. It suggests that researchers had written a letter to the board warning them about the potential dangers of the new finding. Just one day later, Sam Altman was removed from his position.

So, what exactly is OpenAI’s Q-Star?

The report suggests that Q-Star can solve math problems at a level comparable to that of grade-school students.

While this might seem like a modest accomplishment, the new model has reportedly succeeded at such mathematical problems, especially when equipped with extensive computing resources. This success has fueled researchers’ optimism about Q-Star’s future.

Mathematics is considered a frontier in the development of generative AI. Current generative AI excels at tasks like writing and language translation by statistically predicting the next word.

However, the ability to perform math, where there is only one correct answer, implies that AI could possess enhanced reasoning capabilities, resembling human intelligence. AI researchers believe this capability could be applied to groundbreaking scientific research.

However, there are concerns surrounding this development. In a letter to the board, researchers raised alarms about the capabilities and potential risks of AI.

The specific safety issues were not specified, but there has been ongoing discussion among computer scientists about the potential dangers posed by highly intelligent machines. For example, there’s a concern that these machines might decide that destroying humanity is in their best interest.

Sam Altman’s role and departure

Sam Altman played a crucial role in securing investments and necessary computing resources from Microsoft to advance towards AGI. Alongside announcing several new tools in a recent demonstration, Altman hinted last week at a summit of world leaders in San Francisco that significant advancements in AI were on the horizon. Despite this, he was removed from his position.

“Four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime,” he said at the Asia-Pacific Economic Cooperation summit. This is broadly seen as a reference to AGI, according to Business Today.

About the author

Brendan Taylor

Brendan Taylor was a TV news producer for 5 and a half years. He is an experienced writer. Brendan covers Breaking News at Insider Paper.
