Google is again changing the AI landscape with the help of its ‘Genie’, an AI that lets you create 2D game environments with ease. Let’s find out more about Google’s Genie AI and take a closer look at the technology behind it!
Highlights:
- Google announces Genie, an AI model that generates 2D video game worlds.
- It can turn a range of inputs, from single text prompts to synthetic images and even hand-drawn sketches, into playable environments.
- It is a foundation model and paves the way for various generalist agents and virtual worlds.
What Is Genie AI and How Does It Work?
Google announced the release of Genie, an AI tool that allows developers to create 2D video game environments from a single text or image prompt. Genie stands for “Generative Interactive Environments”.
According to Google, Genie is a foundation AI model trained on internet videos that can generate a variety of playable video games. The generated games can span many genres depending on the type of input provided: anything from a single text prompt to synthetic images and hand-drawn sketches.
I am really excited to reveal what @GoogleDeepMind's Open Endedness Team has been up to 🚀. We introduce Genie 🧞, a foundation world model trained exclusively from Internet videos that can generate an endless variety of action-controllable 2D worlds given image prompts. pic.twitter.com/TnQ8uv81wc
— Tim Rocktäschel (@_rockt) February 26, 2024
The gap between generative AI and human-made content continues to narrow, following the release of OpenAI’s Sora text-to-video technology and now Genie’s video game generation technology.
According to Google’s technical paper, Genie qualifies as a foundation model with its startling 11B parameters. It was trained on a large dataset of over 200,000 hours of publicly available online videos. This training lets Genie generate 2D video games frame by frame rather than relying on an actual game engine.
Genie was trained without any action labels. Its ability to pick up precise controls from online videos sets it apart; this is challenging because, in most cases, internet videos carry no labels indicating which action is being performed, or even which part of the image should be controlled.
Surprisingly, Genie not only determines which aspects of an observation are typically controllable, but also infers a variety of latent behaviors that are consistent across the generated environments.
What can Genie AI do?
Genie can create 2D video games from various sorts of input. Here we will look at the input forms it can process.
Images of Video Games
Genie can take a single image as input and produce a new interactive video game environment. The team used Imagen 2, Google’s state-of-the-art text-to-image model, to produce the starting frames, which Genie then brings to life.
Below are some of Genie’s generated video games from static images:
Sketches to Video Games
Ever imagined playing your sketches, frame by frame, as video games? Genie can do that!
Genie’s abilities go even further: it can generate video games from human-drawn inputs such as sketches.
Below are some of the generated video games from sketch-based inputs:
Real-World Images to Video Games
The ultimate power comes from turning real-world images into 2D-generated video games. Although the output is still 2D, it quite amazingly captures, frame by frame, the motions and action sequences as they would unfold in the real world.
Below are some of the generated video games from real-world image inputs:
Looking into Genie’s Architecture
The build is quite intricate.
Genie’s architecture is mainly based on a vision transformer. Google DeepMind’s team adopted an efficient ST-transformer architecture, keeping in mind the quadratic memory cost that standard transformers incur on video data. This allows for balancing model capacity with computational constraints across all model components.
An ST-transformer differs from a traditional transformer, in which every token attends to all others. Instead, it consists of L spatiotemporal blocks with interleaved spatial and temporal attention layers, each followed by a feed-forward layer (FFW). This plays a huge role in Genie’s frame-by-frame generation of 2D video games.
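To make this spatial/temporal factorization concrete, here is a minimal PyTorch sketch of one such block. This is our own illustration of the idea, not DeepMind’s code; the module names and sizes are hypothetical, and the causal mask the paper applies over time is omitted for brevity.

```python
import torch
import torch.nn as nn

class STBlock(nn.Module):
    """One spatiotemporal block: spatial attention within each frame,
    temporal attention across frames at each token position, then a
    feed-forward layer (FFW)."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffw = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, S, D = x.shape  # batch, frames, tokens per frame, channels

        # Spatial attention: tokens attend only within their own frame,
        # so the cost is O(S^2) per frame instead of O((T*S)^2) overall.
        h = x.reshape(B * T, S, D)
        n = self.norm1(h)
        h = h + self.spatial_attn(n, n, n)[0]

        # Temporal attention: each token position attends across the T frames.
        h = h.reshape(B, T, S, D).transpose(1, 2).reshape(B * S, T, D)
        n = self.norm2(h)
        h = h + self.temporal_attn(n, n, n)[0]

        # Feed-forward layer closes the block.
        h = h + self.ffw(self.norm3(h))
        return h.reshape(B, S, T, D).transpose(1, 2)  # back to (B, T, S, D)
```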
Looking further into it, the model consists of three main components:
1) Video Tokenizer
This part serves as a fundamental building block for video game generation. The video tokenizer is responsible for compressing massive amounts of video data into discrete units called tokens, which the rest of the model assembles, piece by piece, into the output 2D video.
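As a rough illustration of what tokenizing video means, here is a minimal PyTorch sketch. The paper describes Genie’s tokenizer as a VQ-VAE with an ST-transformer encoder; the simple convolutional encoder, patch size, and codebook size below are stand-in assumptions.

```python
import torch
import torch.nn as nn

class VideoTokenizer(nn.Module):
    """Sketch of a discrete video tokenizer: encode frames into latent
    patches, then snap each patch to its nearest codebook entry and
    return the integer token ids."""

    def __init__(self, vocab_size: int = 1024, dim: int = 64):
        super().__init__()
        self.encoder = nn.Conv2d(3, dim, kernel_size=8, stride=8)  # 8x8 patches
        self.codebook = nn.Embedding(vocab_size, dim)              # discrete vocabulary

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 3, H, W) -> integer tokens: (B, T, S)
        B, T, C, H, W = frames.shape
        z = self.encoder(frames.reshape(B * T, C, H, W))  # (B*T, dim, H/8, W/8)
        z = z.flatten(2).transpose(1, 2)                  # (B*T, S, dim)
        # Distance of each latent to every codebook vector; nearest = token id.
        dists = (z.unsqueeze(2) - self.codebook.weight).pow(2).sum(-1)
        return dists.argmin(dim=-1).reshape(B, T, -1)
```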
2) Latent Action Model
This component carefully examines how the video’s frames flow from one to the next. These transitions can be anything from sprinting and jumping to interacting with the game’s elements.
These are termed latent actions, and they are inferred by an encoder. The encoder takes as input all previous video frames plus the next frame, and produces a corresponding set of latent actions. A decoder then predicts the following frame using all of the prior frames as well as the latent actions.
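Here is a hedged PyTorch sketch of this encode/quantize/decode loop. The paper limits the latent action vocabulary to just eight codes so that the action space stays small and controllable; the feature-vector frames and linear layers below are simplifying assumptions standing in for the paper’s ST-transformer encoder and decoder.

```python
import torch
import torch.nn as nn

class LatentActionModel(nn.Module):
    """Sketch of the latent action model: infer which of a tiny discrete
    set of actions connects one frame to the next, and verify the action
    carries enough information by reconstructing the next frame."""

    def __init__(self, frame_dim: int = 256, n_actions: int = 8, act_dim: int = 32):
        super().__init__()
        self.encoder = nn.Linear(2 * frame_dim, act_dim)
        self.actions = nn.Embedding(n_actions, act_dim)  # the whole action space
        self.decoder = nn.Linear(frame_dim + act_dim, frame_dim)

    def forward(self, frame_t: torch.Tensor, frame_next: torch.Tensor):
        # Infer a continuous latent action from the observed transition.
        a = self.encoder(torch.cat([frame_t, frame_next], dim=-1))
        # Quantize: the nearest of the n_actions codes becomes the action id.
        dists = (a.unsqueeze(1) - self.actions.weight).pow(2).sum(-1)
        action_id = dists.argmin(dim=-1)                 # (B,)
        a_q = self.actions(action_id)
        # Reconstruct the next frame from current frame + discrete action;
        # the reconstruction loss forces the codes to capture real controls.
        pred_next = self.decoder(torch.cat([frame_t, a_q], dim=-1))
        return pred_next, action_id
```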
3) Dynamics Model
The dynamics model is a decoder-only MaskGIT transformer. At each time step it takes the tokenized video and the latent actions so far, and predicts the tokens of the next frame. Google employs an ST-transformer once more; its causal structure lets the model use the tokens and latent actions from every previous frame.
It produces predictions for all the following frames, and a cross-entropy loss between the predicted and ground-truth tokens is used to train the model.
In short, it creates the next visual frame based on the current state of the game world, including the player’s actions. Repeated step after step, this prediction process produces the appearance of an engaging, interactive gaming experience.
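As a minimal sketch of that prediction loop, the snippet below strings the pieces together: embed the token and action histories, predict logits for the next frame’s tokens, and sample them. The GRU backbone, tensor sizes, and play_step helper are illustrative stand-ins, not Genie’s actual MaskGIT implementation.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

VOCAB, N_ACTIONS, S, DIM = 1024, 8, 64, 256  # illustrative sizes

class DynamicsModel(nn.Module):
    """Sketch of the dynamics model: given the frame-token history and the
    latent-action history, predict logits over the next frame's tokens."""

    def __init__(self):
        super().__init__()
        self.token_emb = nn.Embedding(VOCAB, DIM)
        self.action_emb = nn.Embedding(N_ACTIONS, DIM)
        self.backbone = nn.GRU(DIM, DIM, batch_first=True)  # causal over time
        self.head = nn.Linear(DIM, S * VOCAB)

    def forward(self, tokens: torch.Tensor, actions: torch.Tensor):
        # tokens: (B, T, S) frame tokens; actions: (B, T) latent action ids.
        h = self.token_emb(tokens).mean(dim=2) + self.action_emb(actions)
        out, _ = self.backbone(h)
        return self.head(out[:, -1]).view(-1, S, VOCAB)  # next-frame logits

def play_step(model, tokens, actions, player_action):
    """One tick of the 'game loop': append the player's chosen action,
    sample the next frame's tokens, and (in the real system) decode them
    back to pixels with the video tokenizer."""
    actions = torch.cat([actions, player_action.view(1, 1)], dim=1)
    logits = model(tokens, actions)                  # (1, S, VOCAB)
    return Categorical(logits=logits).sample()       # (1, S) next-frame tokens

# Training uses teacher forcing with a cross-entropy loss, e.g.:
# loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB),
#                                    true_next_tokens.reshape(-1))
```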
Are there Any Limitations?
Notably, Genie is still in development and has certain restrictions. Following are its limitations as of now:
- Limited visual quality: Genie can currently only produce games at a frame rate of about one frame per second, which affects visual fidelity and playability.
- Access restricted to researchers: It is still a Google DeepMind research project and is not currently accessible to the general public.
- Ethical considerations: As with any powerful technology, the potential for misuse requires careful thought. Google is working on the ethical side to ensure responsible development and deployment.
The Future of Generalist Agents and Generative AI
Genie holds vast potential for training a multitude of generalist agents. Game environments make a useful testbed for developing AI agents, but that approach is frequently limited by the quantity of readily available games. Thanks to Genie, the AI agents of the future could be trained on an endless curriculum of newly created worlds.
Google also stated that Genie is a general method for generating virtual worlds, one that can be applied to several domains without requiring any domain knowledge at all.
We trained a smaller 2.5B model on action-free videos from RT1. As was the case for Platformers, trajectories with the same latent action sequence typically display similar behaviors. This indicates Genie can learn a consistent action space which may be amenable to training embodied generalist agents.
via Google’s announcement blog
Genie is anticipated to revolutionize creativity in a variety of fields once it is released. Its capacity to create interactive worlds from minimal input will pave the way for innovative developments in education and entertainment.
Conclusion
Genie’s announcement has set the world of generative AI abuzz, and people can’t wait to get their hands on the latest video game generation technology. However, we must not forget that this is just the initial testing phase, and Genie may yet run into further limitations.