AI startup Runway on Monday released what it claims is one of the most capable AI-powered video generators to date.
Called Gen-4, the model is rolling out to the company’s individual and enterprise customers. Runway claims Gen-4 can generate consistent characters, locations, and objects across scenes, maintain a “coherent world environment,” and regenerate elements from different perspectives and positions within a scene.
“Gen-4 can use visual references in conjunction with instructions to create new images and videos with consistent style, subject matter, location, and more,” the company says, “all without the need for fine-tuning or additional training.”
Gen-4 sets a new standard for video generation and is a notable improvement over Gen-3 Alpha. It excels at producing highly dynamic videos with realistic motion, strong prompt adherence, and consistency of subjects, objects, and style, with best-in-class world understanding… pic.twitter.com/w9aco5boj7
– Runway (@runwayml) March 31, 2025
Backed by investors including Salesforce, Google, and Nvidia, Runway offers a suite of AI video tools, including video generation models such as Gen-4. The company faces stiff competition in the video generation space from rivals such as OpenAI and Google. To differentiate itself, Runway has signed deals with major Hollywood studios and earmarked millions of dollars to fund films made with AI-generated video.
Runway says Gen-4 lets users generate consistent characters across different lighting conditions by using reference images of those characters. To create a scene, a user can provide an image of the subject and describe the composition of the shot they want to generate.
With Gen-4, you can use visual references to create new images and videos with consistent styles, subjects, locations, and more, allowing for continuity and control within your stories.
To test the model’s narrative capabilities, we put together a summary… pic.twitter.com/iyz2baew2u
– Runway (@runwayml) March 31, 2025
“Gen-4 excels at generating highly dynamic videos with realistic motion and consistency of subjects, objects, and style,” Runway says. “Runway Gen-4 [also] represents a significant milestone in the ability of visual generative models to simulate real-world physics.”
Gen-4, like all video generation models, was trained on a vast number of video examples to “learn” the patterns in those videos and generate synthetic footage. Runway declines to say where its training data came from, in part out of fear of sacrificing its competitive advantage. But training details are also a potential source of IP-related lawsuits.
Fittingly, Runway faces lawsuits brought by artists who accuse it, along with other generative AI companies, of training models on copyrighted artwork. Runway claims that the doctrine known as fair use shields it from legal liability. Whether the company will prevail remains to be seen.
The stakes for Runway are rather high: the company is reportedly raising a new funding round that would value it at $4 billion. According to The Information, Runway hopes to reach $300 million in annualized revenue this year following the launch of products such as an API for its video generation models.
But the lawsuits against Runway loom, and generative AI video tools threaten to upend the film and television industry as we know it. A 2024 study commissioned by the Animation Guild, a union representing Hollywood animators and cartoonists, found that 75% of film production companies that adopted AI reduced, consolidated, or eliminated jobs after incorporating the technology. The study also estimates that by 2026, more than 100,000 US entertainment jobs will be disrupted by generative AI.