Runway Rolls Out Generative AI Video Editor App for iOS

Synthetic media software startup Runway has released an iOS app with video-to-video generative AI tools. Runway’s mobile app employs its proprietary Gen-1 AI model to process uploaded videos and transform them to match a preset style from a menu or a text prompt.

Runway app users can record up to 15 seconds of video in the app or upload pre-existing footage, then browse a collection of visual styles to transform a clip into a watercolor world, replace people with charcoal sketches, or turn the footage into a stop-motion movie with clay figures. If a user has an idea that isn’t on the list, they can upload a reference image or type out the concept and the AI will translate it into a video style. Runway offers four variations on that theme as previews for the user to pick among, then spends a few minutes rendering the clip in the chosen style, though copyrighted material is not allowed.

Runway charges on a credit-per-second model, at about 14 credits per second of video. The free version of the app comes with just 525 credits and won’t make more than five seconds of video. The standard plan costs $144 a year and supplies 625 credits a month, unlimited projects, and 1080p video. The pro plan expands the AI tool menu and raises the monthly credits to 2,250 for $345 a year.
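As a rough back-of-the-envelope illustration of how far each credit bundle stretches, here is a minimal Python sketch. It assumes every credit is spent on Gen-1 video at the stated rate of about 14 credits per second and sets aside the free tier’s separate five-second output limit; the plan figures come from the paragraph above.

    # Rough estimate of how many seconds of Gen-1 video each credit bundle buys,
    # assuming a flat ~14 credits per second and no other caps (an assumption).
    CREDITS_PER_SECOND = 14

    plans = {
        "Free (one-time)": 525,
        "Standard (per month)": 625,
        "Pro (per month)": 2250,
    }

    for name, credits in plans.items():
        seconds = credits / CREDITS_PER_SECOND
        print(f"{name}: {credits} credits ≈ {seconds:.0f} seconds of video")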

“Runway is the leading next generation creative suite that has everything you need, to make anything you want. And now, its most popular tools are available right from your phone,” the app description states. “Introducing the powerful Gen-1 AI magic tool that can now convert 15-second videos into stunning new creations using image or text prompts.”

Runway recently introduced an updated AI model called Gen-2, but the mobile app only uses Gen-1 for now, with mobile access to Gen-2 in the works. Runway was one of the early contributors to the open-source AI image generator Stable Diffusion and has expanded into entertainment since its 2018 founding, including providing technology used for visual effects in the film “Everything Everywhere All at Once,” specifically when Michelle Yeoh and Stephanie Hsu’s characters travel to a universe where they are semi-mobile rocks.
