Just a few days ago, Luma AI launched a powerful AI video-generation model called Dream Machine. Now another competitor has entered the generative AI market: Gen-3 Alpha.
Highlights:
- Runway ML just released their next series of AI video-generating models called Gen-3 Alpha.
- It is a highly powerful model that can generate videos from both text and image prompts.
- Comes with several capabilities such as photorealistic humans and fine-grained temporal control.
Runway ML’s Gen-3 Alpha is here
Gen-3 Alpha by Runway ML is a powerful text/image-to-video model trained jointly on videos and images. This training makes it more efficient and a marked improvement over the previous Gen-2 model.
Let’s take a look at the state-of-the-art features that have made Gen-3 Alpha so sought after just hours after its release.
The ability to generate photorealistic humans could make Gen-3 Alpha a top choice among video-generation tools. It opens up new narrative possibilities by producing highly expressive human characters capable of a wide range of gestures, actions, and emotions.
If you want realistic-looking characters that portray specific people and show clear emotion, Gen-3 Alpha might just be the tool for you. It can capture nearly the full gamut of body language and facial expression.
Overall, Gen-3 Alpha was built through a collaborative effort by a large team of research scientists, engineers, and artists, and it delivers high-quality cinematic shots across diverse styles, lighting conditions, and genres.
7 Prompts That You Should Try With Gen-3 Alpha
Here are 7 prompts we collected that you should try with Gen-3 Alpha, along with the videos they produced. They will also give you an idea of how to frame your own prompts to get the most out of Gen-3 Alpha’s video-generation capabilities.
Prompt 1: “A middle-aged sad bald man becomes happy as a wig of curly hair and sunglasses fall suddenly on his head.”
The video generated from this prompt is wild. It begins with an ordinary middle-aged man, eyes closed and looking depressed, but as the video rolls on his expression starts to change.
This Runway Gen-3 alpha prompt is wild! 🔥
— Ammaar Reshi (@ammaar) June 17, 2024
“A middle-aged sad bald man becomes happy as a wig of curly hair and sunglasses fall suddenly on his head.” pic.twitter.com/6NQkYQ39zY
He becomes happy as soon as the curly wig and sunglasses drop onto his head, and his whole look and aura change completely! This shift in emotion, along with the picture quality and facial expressions, is on another level from Gen-3 Alpha.
The transition is beyond words.
Prompt 2: “Handheld camera moving fast, flashlight light, in a white old wall in an old alley at night a black graffiti that spells ‘Runway’.”
Take a close look at this video. It opens just as the prompt asks: a handheld camera moving fast, lit by a flashlight, in an old dark alley at night. The camera pushes forward for a few seconds.
Gen-3 Alpha can add text inside videos. Massive work from Runway, this is really impressive! 🫡
— AshutoshShrivastava (@ai_for_success) June 17, 2024
Prompt: Handheld camera moving fast, flashlight light, in a white old wall in a old alley at night a black graffiti that spells 'Runway'. pic.twitter.com/mN2YjN9eP7
Now here comes the interesting part. The camera turns to the left, and you can see the word ‘Runway’ sprayed across the white wall as graffiti. It looks like Gen-3 Alpha can also render text inside videos; all you have to do is spell it out properly in your prompt.
Prompt 3: “A wide shot of a vast, subterranean city that explores the intricate architecture and bustling markets filled with inhabitants”
I’m amazed by the variety of shots involved in this generation. It almost feels as if several separate scenes were joined into a single video.
“A wide shot of a vast, subterranean city that explores the intricate architecture and bustling markets filled with inhabitants”
— Emily Golden (@em_golden) June 17, 2024
Generated with Gen-3 Alpha by @runwayml pic.twitter.com/z7WDBKJgsu
If you look closely, you will see the scene begins by showing the city markets inside a cave. As the scene rolls on, the camera comes out of the cave, revealing a more detailed, better-lit view of the markets, filled with inhabitants and busy with trade and goods.
This scene may be the best demonstration of the model’s smooth transitions and fine-grained temporal control, which is why it feels as though different scenes were merged together.
Prompt 4: “Subtle reflections of a woman on the window of a train moving at hyper-speed in a Japanese city.”
What I especially love about this scene is that the train moving at hyper-speed in the background looks too realistic to be a generation: the mechanics, the lighting, and the frame-to-frame animation all hold up.
@runwayml just introduced an AI model called “Gen-3 Alpha” that generates complex scenes with cinematic movements.
— The AI Pulse (@The_AIPulse) June 17, 2024
1. Prompt: Subtle reflections of a woman on the window of a train moving at hyper-speed in a Japanese city. pic.twitter.com/q8gpToWVog
What makes the scene even more immaculate are the accurate, realistic reflections caught on the woman’s face and eyes. The train lights reflecting off her face add a new aura and give the video more depth.
Prompt 5: “View out a window of a giant strange creature walking in rundown city at night, one single street lamp dimly lighting the area.”
What a breathtaking, enormous generation from Gen-3 Alpha. It has understood the prompt word for word and produced this snippet.
4. Prompt: View out a window of a giant strange creature walking in rundown city at night, one single street lamp dimly lighting the area. pic.twitter.com/a0ddn32gyZ
— The AI Pulse (@The_AIPulse) June 17, 2024
Look at this giant creature walking outside; the detail on it is remarkable. Even more interesting, its movement is rendered convincingly despite its size: a perfect slow, heavy walk for this enormous King Kong-meets-Godzilla of a creature!
Great generation and thinking from Runway ML!
Prompt 6: “Close up shot of a living flame wisp darting through a bustling fantasy market at night.”
If you have any doubts about whether Gen-3 Alpha can handle fantastical, non-realistic scenes, this generation will change your mind for good.
6. Prompt: Close up shot of a living flame wisp darting through a bustling fantasy market at night. pic.twitter.com/6BI96ARuIT
— The AI Pulse (@The_AIPulse) June 17, 2024
I’m amazed at how convincingly Gen-3 Alpha renders this flame wisp darting forward. What’s even more eye-catching is that the flame keeps cycling through colors in a rainbow fashion (purple, blue, green, and so on).
We also have to credit the background details: the lights and the market stalls make it look like a genuinely busy, lived-in place.
The scene almost feels like something out of a movie, despite the fantastical prompt.
Prompt 7: “A first-person POV shot rapidly flies through open doors to reveal a surreal waterfall cascading in the middle of the living room.”
What a regal scene from Gen-3 Alpha. Watch how the camera rolls forward and the doors swing open as if by magic. I’m most impressed by the interior design generated by Gen-3 Alpha: the choice of objects and the detail on them are excellent.
5. Prompt: A first-person POV shot rapidly flies through open doors to reveal a surreal waterfall cascading in the middle of the living room. pic.twitter.com/Tbry4rLrVY
— The AI Pulse (@The_AIPulse) June 17, 2024
Now here comes the most interesting part: the waterfall. Its motion is close to perfect, and the water flowing from one room to the next adds a whole new sense of depth. It looks as if a real, natural waterfall were placed in the living room rather than an artificial one.
Great thinking and frame construction from Gen-3 Alpha.
Overall, the prompts above show that you have to describe your scene clearly so Gen-3 Alpha can generate it for you. Spell out how the scene should unfold and how the objects should be displayed. If you want a specific custom object, character, or piece of text in the video, describe it precisely with well-chosen words.
That said, Gen-3 Alpha is clearly efficient at interpreting prompts, given the high-quality generations it produces from moderately sized prompts.
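As of this writing, Runway has not documented a public developer API for Gen-3 Alpha, so there is no real endpoint to call here. Purely as an illustration of the prompt pattern the examples above follow (camera movement, then setting, then subject and action), here is a minimal, hypothetical Python sketch; the `build_prompt` helper and its fields are our own invention, not part of any Runway SDK.

```python
# Hypothetical helper for composing Gen-3 Alpha-style prompts.
# Nothing here calls a real Runway API; it only assembles a prompt string
# in the pattern used above: camera movement, setting, subject/action.

def build_prompt(camera: str, setting: str, subject: str) -> str:
    """Join the three descriptive parts into one comma-separated prompt."""
    parts = (camera, setting, subject)
    return ", ".join(part.strip().rstrip(",.") for part in parts) + "."

if __name__ == "__main__":
    prompt = build_prompt(
        camera="Handheld camera moving fast, flashlight light",
        setting="in a white old wall in an old alley at night",
        subject="a black graffiti that spells 'Runway'",
    )
    print(prompt)
    # Handheld camera moving fast, flashlight light, in a white old wall
    # in an old alley at night, a black graffiti that spells 'Runway'.
```

A string structured this way can then be pasted into Runway’s prompt box, and it maps directly onto the camera/setting/subject pattern seen throughout the examples above.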
Conclusion
Gen-3 Alpha by Runway ML has already taken two steps forward in the race to become the best video-generation model. Its powerful features and high-quality generations make it a compelling choice for video editors, content creators, and developers worldwide.