{"id":6519,"date":"2024-12-10T08:38:26","date_gmt":"2024-12-10T08:38:26","guid":{"rendered":"https:\/\/favtutor.com\/articles\/?p=6519"},"modified":"2024-12-10T08:39:59","modified_gmt":"2024-12-10T08:39:59","slug":"sora-turbo-ai-videos-examples","status":"publish","type":"post","link":"https:\/\/favtutor.com\/articles\/sora-turbo-ai-videos-examples\/","title":{"rendered":"10 Stunning Videos made with OpenAI&#8217;s Sora Turbo"},"content":{"rendered":"\n<p>OpenAI has finally launched Sora as a standalone product in its arsenal. They first <a href=\"https:\/\/favtutor.com\/articles\/sora-ai-video-generator-openai\/\">previewed<\/a> the AI video generator in February, and it has been the most anticipated AI tool ever since. They have now released Sora Turbo, which is faster than the model they previewed earlier this year.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>10 Examples of what SORA can create<\/strong><\/h2>\n\n\n\n<p>Now that the general public can try it, people are already sharing videos of what SORA can do. We curated some of the most amazing examples to showcase its mind-blowing capabilities. 
Users can generate videos with 1080p resolution for up to 20 seconds.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>1) Fire in a Cave<\/strong><\/h3>\n\n\n\n<p>One of the simplest examples to test an AI video generator is creating &#8216;fire&#8217; videos to see how real it looks and how it changes the lighting in the background.<\/p>\n\n\n\n<div align=\"center\"><blockquote class=\"twitter-tweet\" data-media-max-width=\"560\"><p lang=\"en\" dir=\"ltr\">2\/Fire in the cave <a href=\"https:\/\/t.co\/5fCFCEN0y1\" target=\"_blank\">pic.twitter.com\/5fCFCEN0y1<\/a><\/p>&mdash; God of Prompt (@godofprompt) <a href=\"https:\/\/twitter.com\/godofprompt\/status\/1866367855807352832?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">December 10, 2024<\/a><\/blockquote> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/div>\n\n\n\n<p>As you can see, it performed well in both respects, with decent camera movements. This is a decent way to create &#8216;stock&#8217; videos for documentaries and movies.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>2) Dog on Surfboard<\/strong><\/h3>\n\n\n\n<p>Check the prompt and the output:<\/p>\n\n\n\n<p>Prompt: &#8220;<em>A golden retriever, with a shiny wet coat, skillfully balances on a surfboard as it rides a gentle wave at Pacifica Beach. The dog&#8217;s tongue hangs out in excitement, and its eyes are focused on the horizon. The backdrop includes the wide expanse of the ocean with rolling waves and a clear blue sky.<\/em>&#8221;<\/p>\n\n\n\n<p><strong>Output: <\/strong><\/p>\n\n\n\n<div align=\"center\"><blockquote class=\"twitter-tweet\" data-media-max-width=\"560\"><p lang=\"en\" dir=\"ltr\">Sora \u2014 &quot;A golden retriever, with a shiny wet coat, skillfully balances on a surfboard as it rides a gentle wave at Pacifica Beach. The dog&#39;s tongue hangs out in excitement, and its eyes are focused on the horizon. 
The backdrop includes the wide expanse of the ocean with rolling\u2026 <a href=\"https:\/\/t.co\/s5epk81RvB\" target=\"_blank\">pic.twitter.com\/s5epk81RvB<\/a><\/p>&mdash; edwin (@edwinarbus) <a href=\"https:\/\/twitter.com\/edwinarbus\/status\/1866192355763900798?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">December 9, 2024<\/a><\/blockquote> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/div>\n\n\n\n<p>It would be difficult for a casual viewer to tell whether it is a real video or a fake. The water and surf motion are also great for a short-duration video.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>3) News Report<\/strong><\/h3>\n\n\n\n<p>Now comes the key challenge for any text-to-video AI: creating faces in close-up. Here is an example of a news report with two different anchors:<\/p>\n\n\n\n<div align=\"center\"><blockquote class=\"twitter-tweet\" data-media-max-width=\"560\"><p lang=\"en\" dir=\"ltr\">Sora is launched today and is about to change everything&#8230;<br><br>But not for Europe and China (for now) \ud83d\ude05<br><br>Here&#39;s what it can already do <a href=\"https:\/\/t.co\/d5CPrfilxv\" target=\"_blank\">pic.twitter.com\/d5CPrfilxv<\/a><\/p>&mdash; Eyisha Zyer (@eyishazyer) <a href=\"https:\/\/twitter.com\/eyishazyer\/status\/1866187026346758345?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">December 9, 2024<\/a><\/blockquote> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/div>\n\n\n\n<p>While the model still struggles with text, we want to focus on the close-up shots of the people. It will be hard to distinguish what&#8217;s real and what&#8217;s not.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>4) Products Videos<\/strong><\/h3>\n\n\n\n<p>One of the users on X shared an interesting use case for Sora. 
Since you can also upload images, why not upload your product images and let Sora turn them into a short 360-degree-style video?<\/p>\n\n\n\n<div align=\"center\"><blockquote class=\"twitter-tweet\" data-media-max-width=\"560\"><p lang=\"en\" dir=\"ltr\">Static image -&gt; video product showcase <br><br>Generated using OpenAI Sora <a href=\"https:\/\/t.co\/XCS3cOLczX\" target=\"_blank\">pic.twitter.com\/XCS3cOLczX<\/a><\/p>&mdash; Jacob Posel (@dtcjacob) <a href=\"https:\/\/twitter.com\/dtcjacob\/status\/1866315755601424712?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">December 10, 2024<\/a><\/blockquote> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/div>\n\n\n\n<p>While it will not output such good results for every type of product, you can make it work with better prompting.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>5) Witch in the Forest<\/strong><\/h3>\n\n\n\n<p>When it comes to complex videos, Sora struggles. Look at this example:<\/p>\n\n\n\n<div align=\"center\"><blockquote class=\"twitter-tweet\" data-media-max-width=\"560\"><p lang=\"en\" dir=\"ltr\">8. Walking through forest <a href=\"https:\/\/t.co\/Tf9Qr8j06D\" target=\"_blank\">pic.twitter.com\/Tf9Qr8j06D<\/a><\/p>&mdash; Min Choi (@minchoi) <a href=\"https:\/\/twitter.com\/minchoi\/status\/1866191039951683785?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">December 9, 2024<\/a><\/blockquote> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/div>\n\n\n\n<p>It did a great job with the smoke and the witch. But if you look closely, the camera movement is not consistent: first it is fast, then it stutters, and then there is an abrupt cut back to fast motion. Also, the trees look like they are made of plastic, without a single movement. 
Such small details are still a problem for AI video generators.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>6) Struggles with Smoke<\/strong><\/h3>\n\n\n\n<p>One more thing that Sora, like many AI video models, struggles with is smoke:<\/p>\n\n\n\n<div align=\"center\"><blockquote class=\"twitter-tweet\" data-media-max-width=\"560\"><p lang=\"en\" dir=\"ltr\">6. Burning building <a href=\"https:\/\/t.co\/LDQQjMBpFh\" target=\"_blank\">pic.twitter.com\/LDQQjMBpFh<\/a><\/p>&mdash; Min Choi (@minchoi) <a href=\"https:\/\/twitter.com\/minchoi\/status\/1866191036319481896?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">December 9, 2024<\/a><\/blockquote> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/div>\n\n\n\n<p>It can make fire but not smoke. The smoke in the background looks fairly real, while the smoke on the front building doesn&#8217;t. This comes down to the same problem with physics that OpenAI must fix in upcoming versions to widen the use cases, especially for the entertainment industry.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>7) Mushroom people<\/strong><\/h3>\n\n\n\n<p>One of the first things people try with such a tool is making weird videos. Say hello to the mushroom people:<\/p>\n\n\n\n<div align=\"center\"><blockquote class=\"twitter-tweet\" data-media-max-width=\"560\"><p lang=\"en\" dir=\"ltr\">5\/ Mushroom people? I thought they were real&#8230; <a href=\"https:\/\/t.co\/6xPsZh9tHt\" target=\"_blank\">pic.twitter.com\/6xPsZh9tHt<\/a><\/p>&mdash; God of Prompt (@godofprompt) <a href=\"https:\/\/twitter.com\/godofprompt\/status\/1866367998610842050?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">December 10, 2024<\/a><\/blockquote> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/div>\n\n\n\n<p>We have to say that it is quite a stunning example. 
The movement is remarkably smooth here, with subtle cloth simulation that makes it look real.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>8) Blending Scenes<\/strong><\/h3>\n\n\n\n<p>Sora also provides the option to &#8216;blend&#8217; two videos, so that there is a smooth transition between them. Here is an example:<\/p>\n\n\n\n<div align=\"center\"><blockquote class=\"twitter-tweet\" data-media-max-width=\"560\"><p lang=\"en\" dir=\"ltr\">Sora&#39;s &quot;Blend&quot; option might be my favorite feature so far. I wanted to see how it would handle two very different shots. <a href=\"https:\/\/t.co\/eqj5pPxPbb\" target=\"_blank\">pic.twitter.com\/eqj5pPxPbb<\/a><\/p>&mdash; Blaine Brown (@blizaine) <a href=\"https:\/\/twitter.com\/blizaine\/status\/1866208282500530503?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">December 9, 2024<\/a><\/blockquote> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/div>\n\n\n\n<p>Apart from the hair movement, it did a really good job.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>9) Eye Movement<\/strong><\/h3>\n\n\n\n<p>Eyes are not easy to get right, but Sora presented impressive results. 
Here the user fed an AI-generated image (made with Midjourney) into Sora.<\/p>\n\n\n\n<div align=\"center\"><blockquote class=\"twitter-tweet\" data-media-max-width=\"560\"><p lang=\"en\" dir=\"ltr\">SORA FIRST TESTS \u2192 WOAH.<br><br>Sharpness&#8230;clarity&#8230;eye motion&#8230;emotion.<br><br>The subtle details are impressive.<br><br>MIDJOURNEY PROMPT:<br>cinematic still, extreme closeup side profile shot, Ancient Mayan Chieftan in the Yucatan jungle gazing anxiously, weathered skin, intricate colorful\u2026 <a href=\"https:\/\/t.co\/12KCS0raTQ\" target=\"_blank\">pic.twitter.com\/12KCS0raTQ<\/a><\/p>&mdash; Rory Flynn (@Ror_Fly) <a href=\"https:\/\/twitter.com\/Ror_Fly\/status\/1866292690452623709?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">December 10, 2024<\/a><\/blockquote> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/div>\n\n\n\n<p>The sharpness is astonishing, but the key thing is the eye movement: how the eyes focus with the camera, the blinking, and how smoothly the pupils behave.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>10) Cute Cat<\/strong><\/h3>\n\n\n\n<p>Let&#8217;s end this list with a cute cat: <\/p>\n\n\n\n<div align=\"center\"><blockquote class=\"twitter-tweet\" data-media-max-width=\"560\"><p lang=\"en\" dir=\"ltr\">Best I&#39;ve gotten so far from Sora AI video.<a href=\"https:\/\/twitter.com\/OpenAI?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">@OpenAI<\/a> <a href=\"https:\/\/t.co\/jcxKqxo6d2\" target=\"_blank\">pic.twitter.com\/jcxKqxo6d2<\/a><\/p>&mdash; Christopher Fryant (@cfryant) <a href=\"https:\/\/twitter.com\/cfryant\/status\/1866220105845137765?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">December 9, 2024<\/a><\/blockquote> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/div>\n\n\n\n<p>AI has really come a long way.<\/p>\n\n\n\n<h2 
class=\"wp-block-heading\"><strong>Takeaways<\/strong><\/h2>\n\n\n\n<p>Note that OpenAI has officially confirmed that the model still has many limitations: it can generate unrealistic results for longer videos, and image uploads are restricted to counter deepfakes. Still, the model holds up well against its competitors.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>We curated a list of 10 videos generated using OpenAI&#8217;s SORA Turbo version to show its mind-blowing capabilities.<\/p>\n","protected":false},"author":8,"featured_media":6521,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jnews-multi-image_gallery":[],"jnews_single_post":null,"jnews_primary_category":{"id":"","hide":""},"footnotes":""},"categories":[57],"tags":[56,60,62],"class_list":["post-6519","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","tag-ai","tag-openai","tag-sora"],"_links":{"self":[{"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/posts\/6519","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/comments?post=6519"}],"version-history":[{"count":4,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/posts\/6519\/revisions"}],"predecessor-version":[{"id":6524,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/posts\/6519\/revisions\/6524"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/media\/6521"}],"wp:attachment":[{"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/media?parent=6519"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/favtutor.com\/ar
ticles\/wp-json\/wp\/v2\/categories?post=6519"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/favtutor.com\/articles\/wp-json\/wp\/v2\/tags?post=6519"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}