{"id":2538,"date":"2024-03-18T10:21:37","date_gmt":"2024-03-18T10:21:37","guid":{"rendered":"https:\/\/favtutor.com\/articles\/?p=2538"},"modified":"2024-03-19T09:38:32","modified_gmt":"2024-03-19T09:38:32","slug":"agi-elon-musk-experts-prediction","status":"publish","type":"post","link":"https:\/\/favtutor.com\/articles\/agi-elon-musk-experts-prediction\/","title":{"rendered":"AGI in 2025? Elon Musk&#8217;s Prediction Clashes with Other Experts"},"content":{"rendered":"\n<p>Artificial General Intelligence, or AGI, is very near, according to Elon Musk! The consensus is that it is the stage at which an AI model gains enough skills to outperform humans. However, things are not so clear-cut, as experts hold differing opinions.<\/p>\n\n\n\n<p><strong>Highlights:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Artificial General Intelligence aims to develop software that can perform as well as or better than humans on cognitive tasks.<\/li>\n\n\n\n<li>Elon Musk recently predicted that an AGI will probably be smarter than any single human by 2025.<\/li>\n\n\n\n<li>Previous remarks by OpenAI CEO Sam Altman and Google CEO Sundar Pichai contradict this belief.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Elon Musk&#8217;s Latest Prediction for AGI<\/strong><\/h2>\n\n\n\n<p>Following the rise of artificial intelligence systems like GPT, <a href=\"https:\/\/favtutor.com\/articles\/claude-3-benchmarks-comparison\/\">Claude 3<\/a>, and <a href=\"https:\/\/favtutor.com\/articles\/gemini-vs-gpt-4\/\">Gemini<\/a>, AGI is a term being discussed regularly in today\u2019s AI-driven world. <\/p>\n\n\n\n<p>AGI is a branch of theoretical AI research that aims to develop software with human-like intelligence and the capacity for self-learning. 
It will be achieved when an AI system learns to perform as well as or better than humans on a wide range of cognitive tasks such as learning, reasoning, perceiving, and problem-solving.<\/p>\n\n\n\n<p><strong>Elon Musk responded to a viral clip from the Joe Rogan podcast with &#8216;Futurist&#8217; Ray Kurzweil about when we will achieve AGI. He predicted that AGI will probably be smarter than any single human being by next year, that is, 2025. He also said that by 2029, it will be smarter than all humans combined.<\/strong><\/p>\n\n\n\n<p>Here is what he said on X:<\/p>\n\n\n\n<div align=\"center\"><blockquote class=\"twitter-tweet\"><p lang=\"en\" dir=\"ltr\">AI will probably be smarter than any single human next year. By 2029, AI is probably smarter than all humans combined. <a href=\"https:\/\/t.co\/RO3g2OCk9x\" target=\"_blank\">https:\/\/t.co\/RO3g2OCk9x<\/a><\/p>&mdash; Elon Musk (@elonmusk) <a href=\"https:\/\/twitter.com\/elonmusk\/status\/1767738797276451090?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">March 13, 2024<\/a><\/blockquote> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/div>\n\n\n\n<p>While Kurzweil said in the clip that AI will achieve human-level intelligence by 2029, Musk&#8217;s prediction is four years earlier. This makes us wonder when an AI will surpass our intelligence to reach the pinnacle of human invention: AGI.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What Can an AGI System Do?<\/strong><\/h3>\n\n\n\n<p>An AGI system should be capable of understanding common sense, logic, causes &amp; effects, and background knowledge, with the ability to transfer learning and create new things. 
It should be able to:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Handle different kinds of knowledge<\/li>\n\n\n\n<li>Understand sentiments<\/li>\n\n\n\n<li>Have a general way of approaching any task<\/li>\n\n\n\n<li>Think as well as or better than a human<\/li>\n\n\n\n<li>Handle different kinds of learning and learning algorithms<\/li>\n\n\n\n<li>Understand belief-based systems<\/li>\n<\/ul>\n\n\n\n<p>Current AI systems such as GPT-4 and Claude 3 are types of Artificial Narrow Intelligence (ANI). ANI is designed to perform a single task or a set of tasks based on how it has been programmed. <\/p>\n\n\n\n<p>As opposed to this, AGI aims to perform any type of task that a human can. Models like GPT-4 and Claude 3 already show some signs of AGI, so future systems like GPT-4.5 and GPT-5 will get closer to achieving the bigger concept.<\/p>\n\n\n\n<p>While Artificial General Intelligence will be far better at problem-solving, personalization, and automation, it also carries potential risks: security and ethical concerns, and the threat of large-scale unemployment.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Which Current AI Shows Resemblances to AGI?<\/strong><\/h3>\n\n\n\n<p>OpenAI\u2019s plans to release GPT-4.5 Turbo and GPT-5 soon indicate significant progress towards achieving AGI. Many believe that GPT-4 itself is a building block on the way to the ultimate aim of AGI. <\/p>\n\n\n\n<p>The GPT-4.5 and GPT-5 systems are expected to be faster and have better processing capabilities, better understanding, multilingual support, emotional intelligence, multimedia support, creative content generation, increased assistance, a diverse training dataset, and more recent knowledge cutoffs. 
<\/p>\n\n\n\n<p>Here is an interesting episode of the Lex Fridman podcast in which OpenAI CEO Sam Altman discusses GPT-5, the flaws of GPT-4, and the future of AGI.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<div class=\"jeg_video_container jeg_video_content\"><iframe title=\"Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power &amp; AGI | Lex Fridman Podcast #419\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/jvqFAi7vkBc?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/div>\n<\/div><\/figure>\n\n\n\n<p>Recently, US-based startup Cognition Labs launched their groundbreaking product <a href=\"https:\/\/favtutor.com\/articles\/devin-ai-software-engineer\/\">Devin AI<\/a>, which they are calling the \u2018first AI software engineer\u2019. <\/p>\n\n\n\n<p>Devin seems to be one of the systems closest to Artificial General Intelligence, since it can learn unfamiliar technologies, debug errors, build and deploy end-to-end apps, fine-tune AI models by itself, and perform real-world jobs. It has also achieved a 13.86% success rate on the SWE-Bench benchmark, which evaluates AI models on real-world software engineering tasks. <\/p>\n\n\n\n<p>All these factors suggest that Devin is close to AGI, since it can perform a wide range of human-like cognitive tasks. However, since Devin has not been released to the public, we should not jump to conclusions this soon.<\/p>\n\n\n\n<p>Another AI startup, Magic, is building what it calls a \u2018coworker\u2019 and not a \u2018copilot\u2019. It is claimed that Magic is also building AGI. 
In June 2023, they announced their in-progress model LTM-1 in a post on X:<\/p>\n\n\n\n<div align=\"center\"><blockquote class=\"twitter-tweet\"><p lang=\"en\" dir=\"ltr\">Meet LTM-1: LLM with *5,000,000 prompt tokens*<br><br>That&#39;s ~500k lines of code or ~5k files, enough to fully cover most repositories.<br><br>LTM-1 is a prototype of a neural network architecture we designed for giant context windows. <a href=\"https:\/\/t.co\/neNIfTVipt\" target=\"_blank\">pic.twitter.com\/neNIfTVipt<\/a><\/p>&mdash; Magic.dev (@magicailabs) <a href=\"https:\/\/twitter.com\/magicailabs\/status\/1666116935904292869?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">June 6, 2023<\/a><\/blockquote> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/div>\n\n\n\n<p>Although Magic AI hasn\u2019t mentioned when it plans to release its first product, the company has said it intends for the AI coworker to be just one step toward its ultimate goal. Magic is building towards AGI, and in particular a safer one, to reduce the harm it could potentially cause in the future.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What Do Other Industry Experts Think About AGI?<\/strong><\/h2>\n\n\n\n<p>OpenAI\u2019s co-founder and CEO, Sam Altman, has been one of the loudest voices speaking about the benefits AGI would bring to humanity. He believes that it will be the most powerful technology that we have ever invented. Speaking to <a href=\"https:\/\/time.com\/6344160\/a-year-in-time-ceo-interview-sam-altman\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Time Magazine<\/a> in 2023, he said:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cI think AGI will be the most powerful technology humanity has yet invented. 
If you think about the cost of intelligence and the equality of intelligence, the cost falling, the quality increasing by a lot, and what people can do with that. It&#8217;s a very different world. It\u2019s the world that sci-fi has promised us for a long time\u2014and for the first time, I think we could start to see what that\u2019s gonna look like.&#8221;<\/p>\n<cite>Sam Altman, CEO of OpenAI<\/cite><\/blockquote>\n\n\n\n<p>However, Meta\u2019s Chief AI Scientist Yann LeCun believes that the current Large Language Models (LLMs) powering chatbots like GPT and Claude are not on the right path to achieve AGI. In a <a href=\"https:\/\/time.com\/6694432\/yann-lecun-meta-ai-interview\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">discussion<\/a> with the same magazine, he said:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cIt\u2019s astonishing how [LLMs] work, if you train them at scale, but it\u2019s very limited. We see today that those systems hallucinate, they don&#8217;t really understand the real world. They require enormous amounts of data to reach a level of intelligence that is not that great in the end. And they can&#8217;t really reason. They can\u2019t plan anything other than things they\u2019ve been trained on. So they&#8217;re not a road towards what people call \u201cAGI.&#8221; I hate the term. They&#8217;re useful, there&#8217;s no question. But they are not a path towards human-level intelligence.&#8221;<\/p>\n<cite>Yann LeCun, Chief AI Scientist at Meta<\/cite><\/blockquote>\n\n\n\n<p>LeCun thinks that the functioning of LLMs is remarkable when trained at scale, but their capabilities are quite restricted. In his view, these systems generate outputs that are not truly grounded in reality; they often produce unreliable or incorrect information and don\u2019t understand the real world. 
<\/p>\n\n\n\n<p>Moreover, they heavily rely on vast amounts of data to achieve a relatively modest level of intelligence. They cannot reason or plan beyond the scope of their training data, which, in his view, means they cannot progress toward AGI. <\/p>\n\n\n\n<p>While LLMs have their utility, LeCun argues, they are not a path toward human-level intelligence, which is why he dislikes the term \u2018AGI\u2019.<\/p>\n\n\n\n<p>On the other end of the spectrum, Google CEO Sundar Pichai played down the hype around AGI, stating that future systems are going to be so capable that it doesn\u2019t really matter whether they count as \u201cAGI\u201d. He said the following in an <a href=\"https:\/\/www.nytimes.com\/2023\/03\/31\/technology\/google-pichai-ai.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">interview with the New York Times<\/a>:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cWhen is it A.G.I.? What is it? How do you define it? When do we get here? All those are good questions. But to me, it almost doesn\u2019t matter because it is so clear to me that these systems are going to be very, very capable. And so it almost doesn\u2019t matter whether you reached A.G.I. or not; you\u2019re going to have systems which are capable of delivering benefits at a scale we\u2019ve never seen before, and potentially causing real harm. Can we have an A.I system which can cause disinformation at scale? Yes. Is it A.G.I.? It really doesn\u2019t matter.&#8221;<\/p>\n<cite>Sundar Pichai, CEO of Google<\/cite><\/blockquote>\n\n\n\n<p>This shows another way of thinking about the future impact of AI. Pichai believes that the systems built in the future will be so capable that it doesn\u2019t matter whether we have reached AGI or not. 
He also warns that these extraordinarily capable systems could cause real harm to the world.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h2>\n\n\n\n<p>It&#8217;s clear that while the pursuit of AGI will be transformational, it must proceed with caution, blending technological ambition with safety and ethical considerations. Stay tuned for further updates in this race to achieve AGI!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Elon Musk predicts that AGI will be achieved by 2025, but this contradicts views from OpenAI, Google and Meta.<\/p>\n","protected":false},"author":18,"featured_media":2540,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jnews-multi-image_gallery":[],"jnews_single_post":null,"jnews_primary_category":{"id":"","hide":""},"footnotes":""},"categories":[57],"tags":[111,56,102,101,58,63],"class_list":["post-2538","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","tag-agi","tag-ai","tag-devin","tag-elon-musk","tag-google","tag-gpt"],"_links":{"self":[{"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/posts\/2538","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/users\/18"}],"replies":[{"embeddable":true,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/comments?post=2538"}],"version-history":[{"count":5,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/posts\/2538\/revisions"}],"predecessor-version":[{"id":2612,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/posts\/2538\/revisions\/2612"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/m
edia\/2540"}],"wp:attachment":[{"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/media?parent=2538"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/categories?post=2538"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/tags?post=2538"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}