Meta is on a roll. It has introduced the Meta AI assistant on WhatsApp and unveiled Llama 3, and developers around the globe are now putting the new model through its paces. Several of them have shared their first impressions of how Llama 3 performs across different use cases and technical tasks.
10 Interesting Insights about Llama 3
Llama 3 can be highly useful to developers worldwide, and its integrations with several platforms let users apply it in a wide range of scenarios.
Here are 10 initial reactions from developers who tested the open-source Llama 3, covering everything from building a web application, to generating images, to measuring how fast it responds.
1) Creating a React Application
Ashutosh Shrivastava, an AI enthusiast, tried Llama 3 for coding on Perplexity Labs. He asked the model to build a single-page React application called “Task Manager” and instructed it to include a well-designed interface along with several features.
Testing Llama-3 for coding on Perplexity Labs:
— AshutoshShrivastava (@ai_for_success) April 19, 2024
I gave a prompt for a small task manager app with all the details, and it built the app in one go. There was a small issue with Tailwind CSS loading. it was able to rectify and provide the solution in next prompt.
Thanks,… pic.twitter.com/tWvqsbhfl9
Llama 3 did an impressive job generating the source code for App.jsx, the main file of the React application. When the user asked it to set up deployment on Netlify, it also produced the JSON configuration needed for the Netlify deployment.
The user also wanted the interface styled with Tailwind CSS, which Llama 3 handled as well. Finally, it produced a README file with all the details needed to deploy the React application.
At the end of the video, the user runs the generated code and the Task Manager application works as intended. Llama 3 clearly holds up well on coding-related tasks.
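The test above was run in the Perplexity Labs web playground, so there is no code to copy from it. As a rough, hypothetical sketch of how you could try something similar programmatically, the snippet below asks a locally pulled llama3 model for the same kind of single-file React component via the ollama Python client; both the client setup and the prompt wording are our own assumptions, not what the original user did.

```python
# Sketch: ask a local Llama 3 model to draft a React "Task Manager" component.
# Assumes Ollama is installed, running, and `ollama pull llama3` has been done.
import ollama

prompt = (
    "Create a single-page React application called 'Task Manager'. "
    "Return the full contents of App.jsx, styled with Tailwind CSS, "
    "with features to add, complete, and delete tasks."
)

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": prompt}],
)

# Save the raw answer so the generated component can be reviewed by hand.
with open("task_manager_draft.md", "w") as f:
    f.write(response["message"]["content"])
```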
2) Fast Prompt Response
Harsha, an AI enthusiast, tested Llama 3’s speed. He gave it a simple prompt and recorded the throughput: the prompt was processed at 66.378 tokens/sec and the response was generated at 19.596 tokens/sec. The test used the Llama 3 8B Instruct model on MLX, running on a 64GB M3 Max.
Quite fast… Llama 3 8B instruct on MLX.
— Harsha (@ShellZero) April 18, 2024
64GB M3 Max.
Prompt: 66.378 tokens/sec.
Generation: 19.596 tokens/sec. pic.twitter.com/S2NjTI2jdP
The video above shows both the timings and how smoothly Llama 3 produces its answer. These are strong prompt-processing and generation speeds for a model running locally.
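If you want to reproduce a similar measurement on Apple Silicon, the mlx-lm package prints prompt and generation tokens-per-second when called with verbose output. The sketch below is only an illustration: it assumes you have mlx-lm installed and uses the 4-bit community conversion of the 8B Instruct model as an example checkpoint; your numbers will depend on your hardware.

```python
# Sketch: measure prompt/generation throughput of Llama 3 8B Instruct on MLX.
# Requires Apple Silicon and `pip install mlx-lm`.
from mlx_lm import load, generate

# Example checkpoint; any MLX-converted Llama 3 8B Instruct repo should work.
model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct-4bit")

messages = [{"role": "user", "content": "Explain in two sentences what is new in Llama 3."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

# verbose=True prints prompt and generation tokens/sec alongside the output.
generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```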
3) Searching the Internet
Nazeem, an AI enthusiast, used Llama 3 with Page Assist to search the internet, asking the LLM when the next episode of Sheldon would air.
Llama 3 did a good job of searching the web and returning accurate information about the air date, along with links to where the user could watch the episode.
Here is an example using the Llama3 model to search the internet on Page Assist 🙂
— Nazeem (@n4ze3m) April 19, 2024
Powered by .@ollama & @LangChainAI pic.twitter.com/quYSTpDGbW
This is a notable capability: many LLM setups cannot pull live information from the web at all, but Llama 3, paired with a search-enabled front end like Page Assist, handles it well.
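Page Assist is a browser extension, so its internals are not shown in the clip, but the underlying pattern (fetch live search results, then let the model answer over them) is easy to sketch. The example below is our own approximation rather than Page Assist’s code, and it assumes the duckduckgo_search and ollama Python packages plus a locally pulled llama3 model.

```python
# Sketch: retrieve live search results, then answer with a local Llama 3 model.
# Not Page Assist's implementation, just the same retrieve-then-answer pattern.
import ollama
from duckduckgo_search import DDGS  # assumes `pip install duckduckgo-search`

question = "When does the next episode of Sheldon air?"

# 1) Pull a handful of live search results to use as context.
results = DDGS().text(question, max_results=5)
context = "\n".join(f"- {r['title']}: {r['body']}" for r in results)

# 2) Ask Llama 3 to answer strictly from that context.
answer = ollama.chat(
    model="llama3",
    messages=[{
        "role": "user",
        "content": f"Using only these search results:\n{context}\n\nAnswer this: {question}",
    }],
)
print(answer["message"]["content"])
```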
4) Image Generation
Meta AI, which is built on Llama 3, can also generate images, and you can animate existing ones. In the video below, a user generates an animated image of an alien playing games underwater.
Meta AI, which utilizes Llama3, can now instantly generate images as you type the prompt. Additionally, you can 'animate' your images.
— AshutoshShrivastava (@ai_for_success) April 19, 2024
📹@imisra_ pic.twitter.com/baD1mv7Z5I
Fast, high-quality image generation that captures everything asked for in the prompt is exactly what many developers are looking for.
5) Enhancing the Maestro Platform
Llama 3 can also be used to unlock the full potential of other tools, one of which is the Maestro framework.
Pietro Schirano, founder of EverArt AI, tried powering Maestro with Llama 3. He asked the Llama 3-backed Maestro setup to generate a Python script for a game framework, and it handled the task impressively.
Introducing Maestro-Ollama! 🦙
— Pietro Schirano (@skirano) April 18, 2024
You can now harness the power of the Maestro framework entirely locally using Llama 3 70B via @ollama.
Let that sink in for a second, this is a model that outperforms Claude 3 Sonnet, operating as an agent, completely locally.
What a time! 🔥 pic.twitter.com/aC34Bd6F65
The user even noted that, operating as an agent completely locally, the model outperforms Claude 3 Sonnet, which makes this a genuinely useful option for the developer community.
Note that Maestro’s interface is a bit different from a plain Llama chat, and you have to be more detailed with your prompts.
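Maestro’s own orchestration logic is more involved than what fits here, but the core idea, one model call planning sub-tasks and further calls executing them, can be sketched in a few lines. The snippet below is a conceptual approximation using the ollama Python client, not Maestro’s actual code, and it assumes you have pulled the llama3:70b model locally.

```python
# Conceptual sketch of orchestrator/worker prompting with a local Llama 3 model.
# This is NOT Maestro's implementation, only the same high-level pattern.
import ollama

MODEL = "llama3:70b"  # assumes `ollama pull llama3:70b`; swap in "llama3" (8B) if needed

def ask(prompt: str) -> str:
    reply = ollama.chat(model=MODEL, messages=[{"role": "user", "content": prompt}])
    return reply["message"]["content"]

objective = "Write a Python script for a simple game framework."

# Orchestrator pass: break the objective into a short ordered list of sub-tasks.
plan = ask(f"Break this objective into 3-5 numbered sub-tasks, one per line:\n{objective}")

# Worker passes: execute each sub-task, feeding earlier results forward as context.
notes = ""
for step in [line for line in plan.splitlines() if line.strip()]:
    notes += "\n\n" + ask(f"Objective: {objective}\nWork so far:{notes}\nNow do: {step}")

print(notes)
```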
6) Creating GIFs/Videos
Meta AI, powered by Llama 3, can even create GIFs and videos. Below is a GIF demonstration that was created with it.
LLAMA3 is out and MetaAI is a thing now!!
— Spencer Yonce (@spence_freeruns) April 19, 2024
Okay, but seriously go try it asap.
There's even an animated function for creating gifs / videos!!
meta . ai pic.twitter.com/FfvLWsjUEO
The GIF and video generation feature is available for everyone to try on meta.ai.
7) Creating a Tic-Tac-Toe Application
Sinan, an AI enthusiast and developer, tested the Llama 3 70B Instruct model by asking it to create a simple tic-tac-toe application in plain JavaScript. Llama 3 impressed yet again, producing the HTML, CSS, and JS, and the resulting tic-tac-toe game ran fully functionally.
LLAMA 3 is out and 70B is great 🔥
— Sinan (@sinanisler) April 19, 2024
made a couple of tests on my WordPress AI and recorded one.
made the model code tic tac toe game. I know not a hard game to code but still, it did a great job
I am pretty sure fine-tuned versions for coding will be as good as GPT-4 this… pic.twitter.com/blSKUKsEw3
This shows that Llama 3 has solid coding abilities, and you can use it to help build applications or websites, both front end and back end.
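As with the React example earlier, this test was run through a chat interface rather than code. If you want to script a similar experiment, here is a small, hypothetical sketch that asks a locally pulled Llama 3 model (via the ollama Python client, an assumption about your setup) for a single-file game and opens the result in a browser.

```python
# Sketch: generate a single-file tic-tac-toe page with a local Llama 3 model and open it.
# Assumes Ollama is running and `ollama pull llama3:70b` (or plain llama3) has been done.
import os
import webbrowser
import ollama

reply = ollama.chat(
    model="llama3:70b",
    messages=[{
        "role": "user",
        "content": "Write a complete single HTML file (inline CSS and JavaScript) "
                   "implementing a playable tic-tac-toe game. Return only the HTML.",
    }],
)

# The reply may include surrounding prose or markdown fences; strip those by hand if needed.
with open("tictactoe.html", "w") as f:
    f.write(reply["message"]["content"])

webbrowser.open("file://" + os.path.abspath("tictactoe.html"))
```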
8) Utilizing Gradio Interfaces for Applications
Apart from generating images, GIFs, and videos, who knew Llama 3 could be used to build an entire image-generator application? In the video below, a user asks it to generate “a minimal Gradio app that can generate an image using Stable Diffusion XL Turbo, using Diffusers library”.
Wow, just built an image generation app using @OpenWebUI with llama3:70b-instruct from @ollama! This video is NOT sped up, 1X speed on Macbook Pro M1 Max 64G. 100% local.
— cocktail peanut (@cocktailpeanut) April 18, 2024
"a minimal Gradio app that can generate an image using Stable Diffusion XL Turbo, using Diffusers library" pic.twitter.com/00nuz4oPSa
Llama 3 did very well at writing the code for the Gradio app, with all the required sections and functionality. Although the user did not run this particular script, the code looks sound and pulls in the modules the application needs, such as gradio and torch.
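For reference, a minimal Gradio plus Diffusers app along the lines of that prompt could look roughly like the sketch below. This is our own reconstruction of the idea, not the code Llama 3 produced in the video, and it assumes the stabilityai/sdxl-turbo checkpoint and a CUDA GPU or Apple Silicon device.

```python
# Minimal sketch of a Gradio app that generates images with SDXL Turbo via Diffusers.
# Our own reconstruction of the idea, not the code produced by Llama 3 in the video.
import gradio as gr
import torch
from diffusers import AutoPipelineForText2Image

device = "cuda" if torch.cuda.is_available() else "mps"  # assumes a GPU or Apple Silicon
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to(device)

def generate(prompt: str):
    # SDXL Turbo is designed for a single denoising step with guidance disabled.
    return pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]

demo = gr.Interface(fn=generate, inputs=gr.Textbox(label="Prompt"), outputs=gr.Image())
demo.launch()
```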
The same user then tested the model again, this time asking Llama 3 for a Python script that takes a video file as input and reverses its frames, again behind a Gradio interface. Llama 3 once more produced the Python script with all the required modules. The user ran Llama 3 locally through Open WebUI.
Run Llama 3 Locally with Open-WebUI
— cocktail peanut (@cocktailpeanut) April 18, 2024
This video is NOT sped up. The model downloaded from @ollama (llama3:instruct), running with @OpenWebUI, on a Macbook Pro M1 Max 64G.
You can run it right now on your own machine (Windows, Linux, Mac) with 1 click. https://t.co/sDpI4TEiwa pic.twitter.com/qe5PIv3hQm
This shows that Llama 3 can tie together several modules and produce working code and applications, a capability developers will appreciate.
9) Local RAG
You can run fully local Retrieval-Augmented Generation (RAG) with Llama 3 through Ollama. In the video below, a user runs Llama 3 locally via Ollama and puts the RAG setup to work by asking it several questions about Llama 3 itself; the model retrieves the relevant information and answers them well.
Fully local RAG with llama 3 on @ollama. No edits but 2x video speed. Now waiting for @Teknium1 to release a function-calling Hermes llama 3.
— Ashpreet Bedi (@ashpreetbedi) April 18, 2024
Code: https://t.co/LQr9DQV5OM pic.twitter.com/S8g4RW3pZC
Several setups for fully local RAG have appeared recently, and it is good to see Meta’s Llama 3 performing well in this role.
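The video links to the author’s own code, so treat the snippet below purely as a minimal illustration of the pattern: embed a few documents locally, retrieve the closest one by cosine similarity, and let Llama 3 answer over it. It assumes the ollama Python client with the llama3 and nomic-embed-text models pulled; the documents and question are made-up examples.

```python
# Minimal local-RAG sketch: embed, retrieve by cosine similarity, answer with Llama 3.
# Assumes `ollama pull llama3` and `ollama pull nomic-embed-text` have been run.
import ollama

docs = [
    "Llama 3 was released by Meta in April 2024 in 8B and 70B parameter sizes.",
    "Llama 3 models can be run fully locally through Ollama.",
    "Meta AI is an assistant built on Llama 3, available in WhatsApp and on meta.ai.",
]

def embed(text: str) -> list[float]:
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

doc_vectors = [embed(d) for d in docs]

question = "What sizes does Llama 3 come in?"
q_vec = embed(question)

# Retrieve the single most similar document and use it as context for the answer.
best_doc = max(zip(docs, doc_vectors), key=lambda pair: cosine(q_vec, pair[1]))[0]

answer = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": f"Context: {best_doc}\n\nQuestion: {question}"}],
)
print(answer["message"]["content"])
```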
10) The Apple Test
A user named Vitor de Lucca put Llama 3 through the Apple test: write ten sentences that each end with the word “apple”. Llama 3 passed cleanly, producing ten distinct sentences that all end with the word.
Still not believing Llama 3 can actually pass the Apple test perfectly.
— Vitor de Lucca 🏳️🌈 (@vitor_dlucca) April 18, 2024
Not even Claude 3 Opus do this right…. pic.twitter.com/EML1GkPlRr
Not many LLMs pass this test, and the user notes that even Claude 3 Opus does not get it right. That Llama 3 does gives a sense of how well it handles instructions that demand careful, constraint-following generation.
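The Apple test is easy to script if you want to run it against your own local copy of the model. The sketch below uses the ollama Python client (an assumption about your setup) and a simple programmatic check of the output; the prompt phrasing is just one way to pose the test.

```python
# Sketch: run the "Apple test" against a local Llama 3 model and score the result.
# Assumes Ollama is installed and `ollama pull llama3` has been done.
import ollama

reply = ollama.chat(
    model="llama3",
    messages=[{
        "role": "user",
        "content": "Write exactly 10 sentences. Every sentence must end with the word 'apple'.",
    }],
)

lines = [l.strip() for l in reply["message"]["content"].splitlines() if l.strip()]
passed = sum(l.rstrip('.!?" ').lower().endswith("apple") for l in lines)
print(f"{passed} of {len(lines)} lines end with 'apple'")
```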
Conclusion
Llama 3 is a major step forward for Meta in generative AI. It delivers fast response times, high-quality image generation through Meta AI, capable code generation for real applications, strong fully local workflows, and solid results on tricky tests like the Apple test.