Artificial General Intelligence, or AGI, is very near, according to Elon Musk! The consensus is that AGI is the stage at which an AI model gains enough skills to outperform humans. However, things are not so clear-cut, as experts hold differing opinions on this.
Highlights:
- Artificial General Intelligence aims to develop software that can perform cognitive tasks as well as or better than humans.
- Elon Musk recently predicted that AI will probably be smarter than any single human by 2025.
- Previous remarks by OpenAI CEO Sam Altman and Google CEO Sundar Pichai contradict this belief.
Elon Musk’s Latest Prediction for AGI
Following the rise of artificial intelligence systems like GPT, Claude 3, and Gemini, AGI has become a term discussed regularly in today’s AI-driven world.
AGI is a branch of theoretical AI research that aims to develop software with human-like intelligence and the capacity for self-learning. It will be achieved when an AI system learns to perform as well as or better than humans on a wide range of cognitive tasks such as learning, reasoning, perceiving, and problem-solving.
Elon Musk responded to a viral clip from the Joe Rogan podcast featuring ‘futurist’ Ray Kurzweil about when we will achieve AGI. Musk predicted that AI will probably be smarter than any single human being by next year, that is, 2025, and that by 2029 it will be smarter than all humans combined.
Here is what he said on X:
AI will probably be smarter than any single human next year. By 2029, AI is probably smarter than all humans combined. https://t.co/RO3g2OCk9x
— Elon Musk (@elonmusk) March 13, 2024
While Kurzweil said in the clip that AI will achieve human-level intelligence by 2029, Musk’s prediction runs four years ahead of it. This makes us wonder when AI will surpass our intelligence and reach what many consider the pinnacle of human invention: AGI.
What Can an AGI System Do?
An AGI system should be capable of understanding common sense, logic, cause and effect, and background knowledge, with the ability to transfer learning and create new things. It should be able to:
- Handle different kinds of knowledge
- Understand sentiment
- Approach any task in a general way
- Think like, or better than, a human
- Handle different kinds of learning and learning algorithms
- Understand belief systems
Current AI systems such as GPT-4 and Claude 3 are types of Artificial Narrow Intelligence (ANI). An ANI is designed to perform a single task or a narrow set of tasks, based on how it has been built.
In contrast, AGI aims to perform any type of task that a human can. Models like GPT-4 and Claude 3 are types of ANI that show some signs of AGI, so future systems like GPT-4.5 and GPT-5 should get closer to achieving the broader goal.
While Artificial General Intelligence would be far better at problem-solving, personalization, and automation, it also carries potential risks: there are security and ethical concerns, as well as the threat of large-scale unemployment.
Which Current AI Systems Resemble AGI?
With OpenAI on its way to releasing GPT-4.5 Turbo and GPT-5 soon, significant progress is being made toward achieving AGI. GPT-4 itself is widely regarded as a building block toward that ultimate aim.
GPT-4.5 and GPT-5 are expected to be faster, with better processing capabilities, deeper understanding, multilingual support, emotional intelligence, multimedia support, creative content generation, increased assistance, more diverse training datasets, and more recent knowledge cutoffs.
Here is an interesting episode of the Lex Fridman podcast in which OpenAI CEO Sam Altman discusses GPT-5, the flaws of GPT-4, and the future of AGI.
Recently, US-based startup Cognition Labs launched its groundbreaking product Devin AI, which it calls the ‘first AI software engineer’.
Devin seems to be one of the systems closest to Artificial General Intelligence, since it can learn unfamiliar technologies, debug errors, build and deploy end-to-end apps, fine-tune AI models by itself, and even perform real-world jobs. It has achieved a 13.86% success rate on the SWE-Bench benchmark, which evaluates AI models on real-world software engineering tasks; a rough illustration of what that figure means is sketched below.
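For a sense of what such a figure represents, here is a minimal sketch of how a SWE-Bench-style success rate could be computed. The real harness applies each model-generated patch to a repository and runs the project’s test suite; the boolean outcomes and counts below are purely illustrative assumptions, not Cognition’s actual data.

```python
# Minimal sketch of a SWE-Bench-style success rate.
# Each outcome is True only if the generated patch applied cleanly
# and the project's previously failing tests then passed.

def success_rate(outcomes: list[bool]) -> float:
    """Fraction of benchmark tasks resolved end to end."""
    return sum(outcomes) / len(outcomes)

# Illustrative numbers only: 14 resolved out of 101 attempted.
outcomes = [True] * 14 + [False] * 87
print(f"{success_rate(outcomes):.2%}")  # -> 13.86%
```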
All these factors make Devin look close to AGI, since it can perform a range of human-like cognitive tasks largely on its own. However, since Devin has not been released to the public, we should not jump to conclusions just yet.
Another AI startup, Magic, is building what it calls a ‘coworker’, not a ‘copilot’, and claims it too is building AGI. In June 2023, the company announced its in-progress model LTM-1 on X:
Meet LTM-1: LLM with *5,000,000 prompt tokens*

That's ~500k lines of code or ~5k files, enough to fully cover most repositories.

LTM-1 is a prototype of a neural network architecture we designed for giant context windows. pic.twitter.com/neNIfTVipt

— Magic.dev (@magicailabs) June 6, 2023
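The quoted figures imply roughly 10 tokens per line of code and about 100 lines per file. Here is a back-of-the-envelope sketch of that arithmetic; both ratios are assumptions derived from the tweet, not numbers Magic has published.

```python
# Rough arithmetic behind the LTM-1 tweet. The per-line and per-file
# ratios are assumptions implied by the quoted figures, not official ones.

CONTEXT_TOKENS = 5_000_000
TOKENS_PER_LINE = 10    # implied by 5,000,000 tokens ~= 500k lines
LINES_PER_FILE = 100    # implied by ~500k lines ~= 5k files

lines_of_code = CONTEXT_TOKENS // TOKENS_PER_LINE  # 500,000 lines
files = lines_of_code // LINES_PER_FILE            # 5,000 files

print(f"~{lines_of_code:,} lines across ~{files:,} files in one prompt")
```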
Although Magic hasn’t said when it plans to release its first product, the company has stated that the AI coworker is just one step toward its ultimate goal: Magic is building toward AGI, and in particular a safer AGI, to reduce the harm it could potentially cause in the future.
What Do Other Industry Experts Think About AGI?
OpenAI’s co-founder and CEO, Sam Altman, has been one of the loudest voices speaking about the benefits AGI would bring to humanity. He believes it will be the most powerful technology we have ever invented. While speaking to Time magazine in 2023, he said:
“I think AGI will be the most powerful technology humanity has yet invented. If you think about the cost of intelligence and the equality of intelligence, the cost falling, the quality increasing by a lot, and what people can do with that. It’s a very different world. It’s the world that sci-fi has promised us for a long time—and for the first time, I think we could start to see what that’s gonna look like.”
Sam Altman, CEO of OpenAI
However, Meta’s Chief AI Scientist Yann LeCun believes that the current Large Language Models (LLMs) powering chatbots like ChatGPT and Claude are not on the right path to achieving AGI. In a discussion with the same magazine, he said:
“It’s astonishing how [LLMs] work, if you train them at scale, but it’s very limited. We see today that those systems hallucinate, they don’t really understand the real world. They require enormous amounts of data to reach a level of intelligence that is not that great in the end. And they can’t really reason. They can’t plan anything other than things they’ve been trained on. So they’re not a road towards what people call “AGI.” I hate the term. They’re useful, there’s no question. But they are not a path towards human-level intelligence.”
Yann LeCun, Chief AI Scientist at Meta
LeCun thinks that the functioning of LLMs is remarkable when they are trained at scale, but their capabilities are quite limited. These systems tend to generate outputs that are not truly grounded in reality: they often produce unreliable or incorrect information and do not understand the real world.
Moreover, they rely on vast amounts of data to reach a relatively modest level of intelligence, and they cannot reason or plan beyond the scope of their training data, which is why he argues they are not a road toward AGI.
While LLMs have their utility, in LeCun’s view they are not a path toward human-level intelligence, which is why he dislikes the term ‘AGI’ altogether.
On the other end of the spectrum, Google CEO Sundar Pichai played down the hype around AGI, stating that future systems are going to be so capable that it doesn’t really matter whether they count as “AGI”. He said the following in an interview with The New York Times:
“When is it A.G.I.? What is it? How do you define it? When do we get here? All those are good questions. But to me, it almost doesn’t matter because it is so clear to me that these systems are going to be very, very capable. And so it almost doesn’t matter whether you reached A.G.I. or not; you’re going to have systems which are capable of delivering benefits at a scale we’ve never seen before, and potentially causing real harm. Can we have an A.I. system which can cause disinformation at scale? Yes. Is it A.G.I.? It really doesn’t matter.”
Sundar Pichai, CEO of Google
This reflects another way of thinking about the future impact of AI. Pichai believes that the systems built in the future will be so beneficial that it doesn’t matter whether we have reached AGI or not, while noting that these extraordinary systems could potentially cause real harm as well.
Conclusion
It’s clear that while the pursuit of AGI will be transformational, it must be pursued with caution, maintaining a balanced approach that blends technological ambition with safety and ethical considerations. Stay tuned for further updates in this race to achieve AGI!