Elon Musk-backed Grok can now process a wide variety of visual inputs, including images, documents, and diagrams!
Highlights:
- xAI revealed Grok 1.5V, its first multimodal model, capable of processing visual information alongside text.
- A standout feature is its ability to convert logical diagrams into executable code.
- Grok 1.5V outperforms its peers in the new RealWorldQA benchmark that measures real-world spatial understanding.
The Capabilities of Grok 1.5V
Grok-1.5V is xAI's first-generation multimodal model, now able to process visual information in addition to text. It will be available to existing Grok users soon.
This release comes mere weeks after xAI released its updated chatbot model Grok 1.5, featuring enhanced reasoning capabilities and a 128,000-token context window. That model showed advanced coding and math capabilities and was praised for its minimal censoring and willingness to answer controversial questions.
Don’t forget that Grok 1 is also open-source now!
Grok 1.5V outperforms GPT-4 in text reading, mathematics, and real-world question answering. The real-world QA capabilities might just be the most impressive feature Grok has displayed yet.
The company described seven vision-based use cases for Grok in its blog, from building Python code from a flowchart to counting calories from the nutritional information on packaging. Here is an overview of the possible uses of this model in different domains.
Let’s take a look at all the use cases!
1) Writing Code from Diagrams
This development is impactful because it lets anyone with a strong logical foundation produce working programs without detailed knowledge of any programming language!
It empowers people to start building immediately without having to learn the finer intricacies of code.
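To illustrate the kind of translation this enables, here is a minimal sketch of our own (not an example from xAI's blog): a simple number-guessing flowchart, with each decision node turned into a Python branch.

```python
# Hypothetical flowchart, expressed as code (our own illustration):
#   Start -> set secret -> read guess -> too low? / too high? / correct -> End
def guess_feedback(secret: int, guess: int) -> str:
    """One decision node of the flowchart: compare a guess to the secret."""
    if guess < secret:
        return "too low"
    if guess > secret:
        return "too high"
    return "correct"

def play(secret: int, guesses: list) -> list:
    """The flowchart's loop: process guesses in order, stopping on success."""
    feedback = []
    for g in guesses:
        result = guess_feedback(secret, g)
        feedback.append(result)
        if result == "correct":
            break
    return feedback

print(play(7, [3, 9, 7]))  # ['too low', 'too high', 'correct']
```

The point is not the code itself but the mapping: boxes become function calls, diamonds become `if` branches, and the loop arrow becomes a `for` loop, which is exactly what diagram-to-code generation has to get right.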
2) Calculation and Real-World Understanding
AI models frequently make calculation errors even when all the information is provided to them in text form. Here, the model extracts information from the image and performs accurate mathematical calculations:
We wonder whether it will be able to handle more complicated calculations such as percentages and taxes. If so, imagine how efficient doing your taxes could be: simply upload your records and ask the model to evaluate them!
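The percentage-and-tax arithmetic involved is straightforward once the numbers have been read off the image. A sketch of our own (the item names, prices, and tax rate are invented, not Grok output):

```python
# Illustrative only: applying a flat sales-tax rate to line items that a
# vision model might have extracted from a receipt image.
def total_with_tax(line_items: dict, tax_rate: float) -> float:
    """Sum item prices and apply a flat tax rate (e.g. 0.08 for 8%)."""
    subtotal = sum(line_items.values())
    return round(subtotal * (1 + tax_rate), 2)

items = {"milk": 3.50, "bread": 2.25, "eggs": 4.00}  # hypothetical extraction
print(total_with_tax(items, 0.08))  # subtotal 9.75 -> 10.53
```

The hard part for a multimodal model is the extraction step, not the arithmetic; once the figures are structured, the calculation is a one-liner.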
3) Understanding Images
The next three examples show the depth of Grok's image understanding. From a simple child's drawing, it inferred the elements in the drawing and built a story around them:
It even explained a meme in its pop-culture context and analyzed an image of defective wood to diagnose the problem:
The possible use cases of this capability in industries like healthcare, after fine-tuning, are endless.
4) Data Extraction and Problem-Solving
Grok also shows a strong ability to extract data from images and convert it into a required format, such as a CSV file or a dataframe. It can also give detailed solutions to competitive coding problems, including the test cases:
There is a high chance this ability will be misused in competitive coding contests, a problem that platforms like LeetCode will have to solve as soon as possible:
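The extract-to-CSV workflow described above can be sketched in a few lines once the model has returned structured rows. This is our own illustration using Python's standard `csv` module; the column names and values are invented:

```python
import csv
import io

# Hypothetical rows a vision model might have extracted from a table image.
rows = [
    {"product": "widget", "units": 12, "price": 4.99},
    {"product": "gadget", "units": 7, "price": 9.50},
]

def rows_to_csv(rows: list) -> str:
    """Serialize extracted rows to CSV text, ready to save or load elsewhere."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(rows_to_csv(rows))
```

The same rows could just as easily be handed to `pandas.DataFrame(rows)` for analysis; the value the model adds is turning pixels into those structured rows in the first place.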
Grok 1.5V's standout ability is its real-world understanding, as xAI explained in its official announcement:
“In order to develop useful real-world AI assistants, it is crucial to advance a model’s understanding of the physical world. Toward this goal, we are introducing a new benchmark, RealWorldQA. This benchmark is designed to evaluate basic real-world spatial understanding capabilities of multimodal models. While many of the examples in the current benchmark are relatively easy for humans, they often pose a challenge for frontier models.”
Conclusion
The developers of Grok 1.5V anticipate significant growth in multimodal capabilities across data types including audio, video, documents, and images. Real-world understanding is a step in the right direction toward eventually achieving AGI.