Ironically, I wrote in yesterday’s newsletter that as of July 1, it hadn’t arrived. However, by mid-morning it just appeared in my workspace as I was building something in Gen-2. I started testing it immediately, and I’ll just cut to the chase here: it’s unbelievable! While I have bashed on Runway in the past in this newsletter, I will say that they have taken a huge leap forward, and it is currently the best AI video generation tool you can get your hands on. Keep in mind that this is just the Alpha release, and it should be incorporating additional features later this month.

First, a few complaints. Yes, it still looks like AI video and is prone to weird glitches. Yes, it’s not consistent enough to use in a corporate branded commercial (yet). And it’s really expensive: roughly $1 for 10 seconds of video, so you’ll blow through your $15/month intro plan in a dozen attempts.
However, the quality is truly amazing and rivals everything we’ve seen from OpenAI’s Sora. And while I hit a little lag time yesterday as everybody piled on, it was quickly resolved, and most of my generations took less than a minute. While not every prompt was perfect, I was really impressed with the success rate. Their guide included a sample prompt about a woman in an orange dress in a jungle. I modified that five times to create these six clips, adding details like a beach, a temple, and praying hands. This was the result after my first ten minutes.
I’m going to assume that those amazing results we see in the demo reels are heavily cherry-picked, so be prepared to repeat your requests dozens if not hundreds of times. However, even my disasters had a certain charm and a lush render quality, like this attempt at “A corgi running through the surf, small waves, at sunset.”
This “Drone footage of the Oregon coast and Haystack Rock” is not accurate, but it’s certainly usable if you just needed to convey a dramatic beach scene.
Some of my Bigfoot attempts looked too much like scary monsters, so I wrote “Bigfoot playing the banjo, warm, happy, smiling” and received this… it’s not wrong.
Remember, these are all from text prompts without any starter image, and they each took about a minute. When you consider how much Runway has improved in the last year, it’s really amazing. They are expected to add an image-to-video capability later this month.
Full disclosure: I am paying almost $100 for one month of the unlimited plan so I can afford to experiment to my heart’s content. This would not be worth it on any of the cheaper plans, and (at least for me) it’s not sustainable for the long term. You would need the big budget of an ad agency or film production company to justify it. So how would you use this, since it’s not ready for prime time? I think it would be awesome for brainstorming, concept development, and pre-visualization. Need to sell someone on an idea? Mock it up in Runway and get approval, the same way I’m using Midjourney for images. I’ll be building things with it for the entire month of July.
Finally, Runway has really thrown down the gauntlet here, and I expect the other players to follow and the overall quality to improve drastically by the end of the year. If you are a video or motion graphics professional, I strongly recommend you get familiar with it now. This feels like an iPhone moment, or the advent of Photoshop or the internet, or even Midjourney two years ago. Nothing will ever be the same again.