Runway has been building visual tools for the creative industry for the past seven years. Now it’s eyeing robotics as a new opportunity for its technology.
The New York-based company is known for its generative AI video and image models and for its world models that create simulated versions of the real world. Most recently, the company released its video generation model, Gen-4, in March, and its video editing model, Runway Aleph, in July.
As Runway’s world models improved and became more realistic, the company began receiving inbound interest from robotics and autonomous vehicle companies considering using the tech.
“Even though entertainment is a huge, ever-growing territory for us, we believe this ability to simulate the world is extremely useful beyond entertainment,” Germanidis said. “It’s a much more scalable and cost-effective way to train [robotic] policies that interact with the real world, whether it’s robotics or self-driving.”
Germanidis said working with robotics and autonomous vehicle companies wasn’t something Runway anticipated when it launched in 2018. It wasn’t until robotics and other companies reached out that Runway realized its models had far broader use cases than it had expected.
Robotics companies are using Runway’s technology to run training simulations, Germanidis said. He added that training robots and self-driving cars in real-world scenarios alone is costly, time-consuming, and difficult to scale.
While Runway knows its technology won’t fully replace real-world training, Germanidis said companies can get a lot of value from simulations built on Runway’s models because of their ability to be very specific.
Unlike real-world training, these models make it easy to test specific variables and situations without changing anything else in the scenario.
“You can take a step back and then simulate the effects of different actions,” he said. “If you take this turn or take this action, what will happen? Creating those rollouts from the same context is really hard to do in the physical world: essentially, keeping all other aspects of your environment the same and testing the effect of the particular action you want to take.”
Runway isn’t the only company looking to address this. Nvidia, for example, released the latest version of its Cosmos world models, along with other robot training infrastructure, earlier this month.
The company doesn’t plan to release a separate line of models for its robotics and autonomous vehicle customers, Germanidis said. Instead, Runway is fine-tuning its existing models to better serve these industries. The company is also building a dedicated robotics team.
Germanidis added that these industries weren’t part of the company’s original pitch to its investors, but that investors are on board with the expansion. Runway has raised more than $500 million at a $3 billion valuation from investors including Nvidia, Google, and General Atlantic.
“The way we think about the company is really built on principles, not on the market,” Germanidis said. “The principle is this simulation, this idea that we can build better and better representations of the world. Once we have those very powerful models, we can use them in different markets, different industries. [The] industries we expect are already there and will change further as a result of the power of generative models.”