I’ve now been using RunwayML’s latest Gen3 Alpha for a week, and I can tell you that I am simultaneously amazed and disappointed (but mostly amazed). While I have bashed Runway in the past, I can confidently say that this is the best AI video generation tool available right now. Sora and Kling may produce amazing demos, but if you can’t get your hands on them, who cares? If you really want to work with AI video generation, this is the tool I recommend.
Full disclosure: I paid $100 for a month of unlimited generations, and that’s the only circumstance under which I would consider using Runway. Generative AI is a lot like gambling, and if you enjoy the exhilarating thrill of ‘just one more prompt’, you’ll be fine. On Runway’s entry-level ($15) plan, however, you’ll blow through your credits and end up with a dozen clips, half of them useless. I’m only doing this for one month because I have some free time; I’ve been making up to a hundred clips a day, and it’s been totally worth it.
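To see why the credits evaporate so quickly, here’s a back-of-the-envelope sketch. The specific figures are my assumptions (roughly 625 monthly credits on the $15 plan and about 10 credits per second of Gen3 output, based on Runway’s published pricing at the time), not numbers from Runway itself:

```python
# Back-of-the-envelope credit math for Runway's entry-level plan.
# ASSUMPTIONS (not official figures): the $15 plan includes ~625
# credits/month, and Gen3 Alpha burns ~10 credits per second of video.
MONTHLY_CREDITS = 625        # assumed monthly allowance on the $15 plan
CREDITS_PER_SECOND = 10      # assumed Gen3 Alpha credit burn rate
CLIP_LENGTH_SECONDS = 5      # a typical short generation

seconds_of_video = MONTHLY_CREDITS / CREDITS_PER_SECOND
clips_per_month = seconds_of_video // CLIP_LENGTH_SECONDS

print(f"{seconds_of_video:.0f} seconds of video, or about "
      f"{clips_per_month:.0f} five-second clips per month")
# -> roughly a dozen clips; if half of them are duds, you keep about six.
```

Under those assumed rates, the “dozen clips, half of them useless” math above works out almost exactly.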
That said, this expense is totally unsustainable unless you have the big budget of an advertising agency or a Hollywood studio. While some of the outputs are truly amazing, it’s still not consistent enough to put together a consumer-facing product. If you were using it for pre-visualization or a prototype and saying, “This is what we want the film to feel like; now go and make it,” it would be great. There’s still too much morphing and random weirdness for it to be useful in a mainstream use case. If you’re doing an experimental art project or a video for a band, and you have the time and budget to prompt a hundred times over and cherry-pick the results, then go for it. You’ll still need to be some sort of After Effects or VFX wizard, and be patient enough to fix all the tiny glitches.
While much of the output was truly amazing, several requests produced an error message because, I suspect, the model lacked the vocabulary to describe the thing I was asking for. I don’t know how big their model is, but I assume it will get better as they add more training data. However, when it’s working with subject matter it understands, it’s out of this world. Every single day, I’ve produced results that had me shouting in excitement. Like these mountains and clouds, first attempt.
I’ve been going back and forth between Gen3 and Gen2, and there are some differences worth noting. Gen2 supports Image to Video as well as Text to Video, meaning I can start with a Midjourney image and animate it; Gen3 currently accepts only a text prompt. Gen2 also has an intuitive interface with a variety of camera controls, plus a ‘Motion Brush’ that lets you select and animate multiple regions of your image. Warning: use both of these tools sparingly, as your video will rapidly become distorted if you push them too far. Gen3 lacks these controls for now, but they are expected to roll out soon. So I’ve been using Gen2 (with these features) when I need to exert more artistic control, and Gen3 when “I’m feeling lucky”. Not everything works out, but overall I’m really impressed with the tool and really happy with my workflow. I’m sure I’ll miss it at the end of July, when I return to financial reality.
Both generations also let you upload an audio file or record your own voice in real time to add lip syncing to a generated clip. As long as the subject is facing the camera and has clearly defined facial features, this usually produces impressive results.
As a final thought, let’s look at how far Runway has come in just over a year. Every 3-4 months, I’ve been remaking the same fake beer commercial for Antarctic Amber so I have a timestamp for the ‘state of the art’ at that moment, and I made a short video showcasing the improvements. For the record, I spent 10 minutes prompting and 10 minutes editing to get a result that is a huge improvement over what was possible a year ago.
That’s it for this week; I’ll get back to the regular AI news on Thursday. Hope you’re enjoying the heatwave (I’m hiding out in my basement). As always, if you have any questions, or you’d like me to do a deeper dive on anything, just send me a message.