This is an article from our "Ask Kitty" series in our AI newsletter, Deeper Learning, where we answer your questions about AI. Today’s question was chosen by reader vote in a previous issue.
Q: How close are we to AGI, really?
A: I can answer this question for you in four words: No one knows, really.
That’s not fun or helpful, but it’s the truth. So let me try to be helpful. First off, the discrepancy across AGI estimates comes down to three things:
- How you define AGI
- How you measure intelligence
- How fast you think we’re moving on both of those things
Let’s look at the first one today.
What AI tech leaders say about AGI
The biggest, brightest brains in AI have weighed in. The result: we’re anywhere from one year to 50 years away from AGI. 😹
The experts at the top of the chart get a lot of media attention. This is in part because, at least in front of the media, they have a narrower view of how intelligence is defined (i.e., “can complete X task or test”), and in part because they’re more bullish on how quickly the technology is advancing toward machines that can replicate “intelligence.”
But let’s look at the bottom of the chart.
Andrew Ng, cofounder of Google Brain, recently wrote on LinkedIn: “When we get to AGI, it will have come slowly, not overnight… I expect the path to AGI to be one involving numerous steps forward, leading to step-by-step improvements in how intelligent our systems are.” Ng has also spoken about AGI hype being cyclical, noting that a wave of progress in deep learning 10 years ago sparked a frenzy of AGI coverage. He believes the current wave of AGI interest was sparked by progress in generative AI, and that it will wane again as we reach the limitations of generative AI.
So, will we reach the limitations of generative AI soon? In previous posts, we’ve explained some of those limitations (like context windows) and the strategies AI engineers are using to work beyond them (like retrieval-augmented generation, or RAG; see the sketch below). Many people with more conservative AGI estimates would say that new models are getting faster and cheaper, with longer context, but that they haven’t actually cracked the core of what it means to be “intelligent.”
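To make the RAG idea concrete, here’s a minimal sketch in Python. Everything in it is made up for illustration: the corpus, the query, and the word-overlap scoring function. Real systems use vector embeddings, a vector database, and an actual LLM call, but the shape of the idea is the same: retrieve only the relevant passages, then put just those into the model’s limited context window.

```python
# Minimal RAG sketch: retrieve the most relevant passages, then build
# a prompt from them instead of stuffing the whole corpus into the
# model's context window. Toy scoring stands in for real embeddings.

def score(query: str, passage: str) -> float:
    """Toy relevance score: fraction of query words found in the passage."""
    q_words = set(query.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words) / len(q_words)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

corpus = [
    "Context windows limit how much text a model can attend to at once.",
    "RAG retrieves relevant passages and adds them to the prompt.",
    "AGI timelines range from one year to fifty years, depending on whom you ask.",
]

query = "How does RAG work with context windows?"
context = "\n".join(retrieve(query, corpus))

# In a real pipeline this prompt would be sent to an LLM; we just print it.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The design point to notice: the model never sees the full corpus, only the top-k retrieved passages, which is how RAG sidesteps the context-window limit rather than removing it.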
Next week, we’ll dive into how you define “intelligence,” but here’s a sneak peek, pulling from Ben Dickson’s TechTalks article “Artificial intelligence: How to measure the “I” in AI.” DeepMind cofounder Shane Legg and AI scientist Marcus Hutter say: “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” The key words here are “wide range of environments.”
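For the extra-curious: Legg and Hutter actually turn that sentence into math in their paper “Universal Intelligence: A Definition of Machine Intelligence.” A lightly compressed version of their measure looks like this, where Υ(π) is the intelligence of agent π, E is the set of environments, K(μ) is the complexity of environment μ, and V is the reward the agent earns in that environment:

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Simpler environments get more weight (that’s the 2^{-K(μ)} term), but the sum runs over an enormous space of them, which is exactly the “wide range of environments” part of the quote.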
Come back next week and we’ll go deeper!
–
This article originally appeared in our weekly AI newsletter, Deeper Learning.