{"id":3519,"date":"2024-04-11T08:11:30","date_gmt":"2024-04-11T08:11:30","guid":{"rendered":"https:\/\/favtutor.com\/articles\/?p=3519"},"modified":"2024-04-11T08:11:31","modified_gmt":"2024-04-11T08:11:31","slug":"llama-3-release-update","status":"publish","type":"post","link":"https:\/\/favtutor.com\/articles\/llama-3-release-update\/","title":{"rendered":"Update: Meta Confirms When Llama 3 LLM Will Release"},"content":{"rendered":"\n<p>A new model is ready to enter the fray with other LLMs in the coming weeks. Meta, at an event in London on Tuesday, confirmed its plans to release smaller versions of Llama 3 in May 2024, with the full open-source model to follow in July of the same year.<\/p>\n\n\n\n<p>Llama 3 is a large language model that will come in a range of sizes: from very small models that compete with the likes of <a href=\"https:\/\/favtutor.com\/articles\/claude-3-access\/\">Claude Haiku<\/a> or Gemini Nano, to larger models capable of detailed responses and reasoning, like GPT-4 or Claude Opus.<\/p>\n\n\n\n<p>Here is what a Meta official said at the event (via <a href=\"https:\/\/techcrunch.com\/2024\/04\/09\/meta-confirms-that-its-llama-3-open-source-llm-is-coming-in-the-next-month\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">TechCrunch<\/a>):<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;Within the next month, actually less, hopefully in a very short period of time, we hope to start rolling out our new suite of next-generation foundation models, Llama 3. 
There will be a number of different models with different capabilities, different versatilities [released] during the course of this year, starting really very soon.&#8221;<\/p>\n<cite>Nick Clegg, Meta\u2019s President of Global Affairs<\/cite><\/blockquote>\n\n\n\n<p>These comments confirm an earlier report by <a href=\"https:\/\/www.theinformation.com\/articles\/meta-platforms-to-launch-small-versions-of-llama-3-next-week\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">The Information<\/a>. The release timeline puts Llama 3 roughly a year after its predecessor, Llama 2.<\/p>\n\n\n\n<p>These smaller models are intended for faster processing and flexible deployment across devices, making them easier to integrate into existing applications.<\/p>\n\n\n\n<p>They will serve as a precursor to the release of the full, larger version of Llama 3.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>How is Llama 3 Different?<\/strong><\/h2>\n\n\n\n<p>Little is known about Llama 3 beyond that it is expected to be multimodal, capable of understanding and analyzing different types of input data. The model is not expected to have a trillion parameters like ChatGPT; it is speculated to have over 140 billion.<\/p>\n\n\n\n<p><strong>Llama 3 could be Meta&#8217;s formidable response to OpenAI&#8217;s GPT-4, Anthropic&#8217;s Claude 3, Google&#8217;s Gemini, and the myriad of other LLMs in the arena.<\/strong><\/p>\n\n\n\n<p>The smaller versions of Llama 3 will likely not be multimodal; only the largest model is expected to have multimodal capabilities.<\/p>\n\n\n\n<p>Llama 2 faced great criticism when it was released for being too limited, with fewer parameters than its competitors. 
Llama 3, which is more complex than its predecessors, is expected to deliver a significant jump in performance.<\/p>\n\n\n\n<p>It is expected to be more accurate, hallucinate less, and answer a wider range of questions, including some on more controversial topics.<\/p>\n\n\n\n<p>Meta AI intends to make Llama the most useful AI assistant in the world, but these models have a long way to go before they can catch up with Anthropic\u2019s or OpenAI\u2019s models.<\/p>\n\n\n\n<p>The major difference between these companies lies in their philosophies about where the future of AI is headed. Meta has released all of its models as open source, favoring a developer-focused approach over proprietary releases.<\/p>\n\n\n\n<p>Meta\u2019s competitors in the open-source space include newly released models like Databricks DBRX, Mistral, Stability AI, and Qwen. Even if Llama 3 is not better than the larger state-of-the-art (SOTA) models, it could still be monumental if it is SOTA for its size, for example, a 7B model that outperforms Mixtral 8x7B.<\/p>\n\n\n\n<p>The competition in the open-source market will increase significantly once it is unveiled to the public.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h2>\n\n\n\n<p>The developer community has high hopes for the release of Llama 3. 
Many hope it will compete with the best smaller models and help Meta gain a stronger foothold in the fight against other AI giants like OpenAI and Anthropic.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Find out when Meta&#8217;s upcoming open-source LLM, Llama 3, will release, and how it will differ from Llama 2.<\/p>\n","protected":false},"author":20,"featured_media":3522,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jnews-multi-image_gallery":[],"jnews_single_post":null,"jnews_primary_category":{"id":"","hide":""},"footnotes":""},"categories":[57],"tags":[56,171,172,72,81],"class_list":["post-3519","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","tag-ai","tag-llama","tag-llama-3","tag-llm","tag-meta"],"_links":{"self":[{"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/posts\/3519","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/users\/20"}],"replies":[{"embeddable":true,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/comments?post=3519"}],"version-history":[{"count":2,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/posts\/3519\/revisions"}],"predecessor-version":[{"id":3523,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/posts\/3519\/revisions\/3523"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/media\/3522"}],"wp:attachment":[{"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/media?parent=3519"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/categories?post=3519"},{"taxonomy":"post_tag","embeddable":true,"h
ref":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/tags?post=3519"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}