Has AGI come too fast?🤯
Who would have thought that a drama shaking the entire tech world would stem from a letter?
According to Reuters, prior to Sam Altman’s dismissal, OpenAI researchers sent a warning letter to the company’s board, cautioning that a powerful artificial intelligence could threaten humanity.
This is an AI model named Q* (pronounced "Q-star"), which insiders believe could be OpenAI's latest breakthrough toward AGI (Artificial General Intelligence).
For a long time, Silicon Valley leaders have been embroiled in endless debates over “AI safety.”
Perhaps it is precisely because of this that what could be the next revolutionary technology was prematurely exposed by insiders.
As for what the new model actually is, little is known so far, but much of the OpenAI community does not seem to welcome Q*'s arrival.
What is Q*?
Based on media reports, here is a brief introduction to Q*.
Q*'s predecessor was the GPT-zero project, launched in 2021 by Ilya Sutskever's team, which aimed to tackle the training-data problem with synthetic data.
Whereas training data for large models previously came mostly from personal data scraped online, the GPT-zero project could train on computer-generated data, removing the data-sourcing bottleneck at a stroke.
For AI companies, data is a resource, and high-quality language data in particular directly determines the quality of large models.
In the large-model race, AI companies start with billions of parameters and feed in datasets measured in terabytes. Not only can such data be exhausted, its price is also skyrocketing.
Synthetic data, then, works like a perpetual motion machine: it can generate high-quality data indefinitely, thereby resolving the data problem.
When discussing the GPT-zero project, Elon Musk commented, “Synthetic data will exceed that by a zillion.”
"It's a little sad that you can fit the text of every book ever written by humans on one hard drive."
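To make the idea concrete, here is a minimal, hypothetical sketch (not OpenAI's actual pipeline): a small program generates its own arithmetic question-and-answer pairs, so the supply of training data is limited only by compute rather than by what humans have written.

```python
# Minimal sketch of the synthetic-data idea (illustrative assumption only,
# not OpenAI's pipeline): instead of scraping human-written text, a program
# generates unlimited training pairs on the fly, so the dataset is never "used up".
import random

def generate_example() -> tuple[str, str]:
    """Produce one computer-generated (prompt, answer) pair."""
    a, b = random.randint(0, 999), random.randint(0, 999)
    return f"What is {a} + {b}?", str(a + b)

def synthetic_dataset(n: int):
    """Yield n freshly generated examples; n can grow as large as compute allows."""
    for _ in range(n):
        yield generate_example()

if __name__ == "__main__":
    for prompt, answer in synthetic_dataset(3):
        print(prompt, "->", answer)
```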
Building on GPT-zero's results, OpenAI senior researchers Jakub Pachocki and Szymon Sidor built the Q* model. Its current level may not be high (roughly elementary-school math), but professionals believe the model can solve mathematical problems it has never seen before.
This also involves another technique: the Q-learning algorithm.
Q-learning is a classic reinforcement learning algorithm with strong planning ability but limited generality and generalization; large models, by contrast, excel at near human-level generalization, also known as extrapolation.
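For readers unfamiliar with it, below is a minimal sketch of textbook tabular Q-learning on a toy chain environment. The environment, rewards, and hyperparameters are illustrative assumptions and say nothing about how Q* itself is built.

```python
# Textbook tabular Q-learning on a toy 5-state chain (illustrative only, not OpenAI's Q*).
# The agent moves left or right and is rewarded for reaching the rightmost state;
# the Q-table stores the learned value of every (state, action) pair.
import random

N_STATES, ACTIONS = 5, [0, 1]          # actions: 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state: int, action: int) -> tuple[int, float, bool]:
    """Environment dynamics: reward 1.0 only when the last state is reached."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

for _ in range(500):                   # training episodes
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection: mostly exploit, occasionally explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = Q[state].index(max(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

print(Q)  # learned action values; "go right" should dominate in every state
```

The point of the sketch is the planning side of the story: the Q-table lets the agent look ahead through value estimates, but it only works for the exact states it has seen, which is exactly the generalization gap large models are said to fill.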
Combine the two and you get a model with both planning and generalization capabilities, much closer to the human brain and capable of autonomous learning and self-improvement. The end result, quite possibly, is autonomous decision-making and even a hint of self-awareness, approaching an AGI that surpasses humans.
If the reports are true, it is easy to imagine Ilya Sutskever, representing the cautious camp, leading Altman's dismissal over disagreements about commercialization and safety.
In July this year, Ilya Sutskever formed a team dedicated to limiting potential safety threats from AI.
After Altman’s return to OpenAI, Ilya Sutskever was unable to remain on the board.
So far, OpenAI has not responded to the reports on Q*, and whether it has truly achieved AGI remains to be seen...