
Update: Meta Confirms When Llama 3 LLM Will Release

By Ruchi Abhyankar
April 11, 2024
Reading Time: 3 mins read

A new model is ready to enter the fray with other LLMs in the coming weeks. At an event in London on Tuesday, Meta confirmed its plans to release smaller versions of Llama 3 in May 2024, with the full open-source model to follow in July of the same year.

Llama 3 is a large language model that will come in a range of sizes: small models built to compete with the likes of Claude Haiku or Gemini Nano, up to larger models capable of full responses and reasoning, like GPT-4 or Claude Opus.

Here is what a Meta official said at the event (via TechCrunch):

“Within the next month, actually less, hopefully in a very short period of time, we hope to start rolling out our new suite of next-generation foundation models, Llama 3. There will be a number of different models with different capabilities, different versatilities [released] during the course of this year, starting really very soon.”

Nick Clegg, Meta’s President of Global Affairs

These comments confirm an earlier report by The Information. It has been nearly a year since the release of Meta's previous model, Llama 2, and the smaller Llama 3 models are intended for faster processing and more flexible deployment across devices, making them easier to integrate into existing applications.

These smaller models will serve as a precursor to the full, larger version of Llama 3.

How is Llama 3 Different?

Little is known about Llama 3 beyond the expectation that it will be multimodal, able to understand and analyze different types of input data. The model is not expected to reach the rumored trillion parameters of ChatGPT; speculation puts it at over 140 billion parameters.

Llama 3 could be Meta's formidable response to OpenAI's GPT-4, Anthropic's Claude 3, Google's Gemini, and the myriad other LLMs in the arena.

The smaller versions of Llama 3 will likely not be multimodal; only the largest model is expected to have multimodal capabilities.

Llama 2 drew heavy criticism at release for being too limited, with fewer parameters than its competitors. Llama 3, more complex than its predecessors, is expected to perform considerably better.

It is expected to be more accurate, hallucinate less, and answer a wider range of questions, including some on more controversial topics.

Meta AI intends to make Llama the most useful AI assistant in the world, but these models have a long way to go before they can catch up with Anthropic or OpenAI’s models.

The major difference between these companies lies in their philosophies about where the future of AI is headed. Meta has released all of its models open source, favoring a developer-focused outlook over proprietary releases.

Meta’s competitors in the open-source space include newly released models like Databricks’ DBRX and offerings from Mistral, Stability AI, and Qwen. Even if Llama 3 does not beat the larger state-of-the-art models, it could still be monumental if it is state-of-the-art for its size, for example, if Meta releases a 7B model that outperforms Mixtral 8x7B.

Competition in the open-source market will increase significantly when Llama 3 is unveiled to the public.

Conclusion

The developer community has high hopes for the release of Llama 3. Many expect it to compete with the best smaller models and to help Meta gain a stronger foothold in the fight with AI giants like OpenAI and Anthropic.

Ruchi Abhyankar

Hi, I'm Ruchi Abhyankar, a final year BTech student graduating with honors in AI and ML. My academic interests revolve around generative AI, deep learning, and data science. I am very passionate about open-source learning and am constantly exploring new technologies.

About FavTutor

FavTutor is a trusted online tutoring service that connects students with expert tutors for guidance on Computer Science subjects like Java, Python, C, C++, SQL, Data Science, Statistics, etc.

Categories

  • AI News, Research & Latest Updates
  • Trending
  • Data Structures
  • Web Development
  • Data Science

Important Subjects

  • Python Assignment Help
  • C++ Help
  • R Programming Help
  • Java Homework Help
  • Programming Help

Resources

  • About Us
  • Contact Us
  • Editorial Policy
  • Privacy Policy
  • Terms and Conditions

Website listed on Ecomswap. © Copyright 2025 All Rights Reserved.