When Bill Dally joined Nvidia's research lab in 2009, it employed only around 12 people and focused on ray tracing, a rendering technique used in computer graphics.
The once-small research lab now employs more than 400 people and has helped transform Nvidia from a 1990s video game GPU startup into a $4 trillion company fueling the artificial intelligence boom.
Today, the lab is tasked with developing the technologies needed to unleash the power of robotics and AI, and some of its work has already shipped in products. On Monday, the company announced new world AI models, libraries, and other infrastructure for robot developers.
Dally, now Nvidia's chief scientist, began consulting for the company in 2003 while at Stanford. A few years later, when he stepped down as head of Stanford's computer science department, he planned to take a sabbatical. Nvidia had a different idea.

David Kirk, who ran the lab at the time, and Nvidia CEO Jensen Huang thought a permanent position in the lab was a better idea. Dally told TechCrunch that the pair put on a "full-court press" and ultimately convinced him to join Nvidia's lab.
"It turned out to be a perfect fit for my interests and talents," Dally said. "I think everybody is always looking for the place in life where, you know, they can make the biggest contribution to the world, and for me that's definitely Nvidia."
When Dally took over the lab in 2009, expansion followed quickly. Researchers began working in areas beyond ray tracing, including circuit design and VLSI, or very-large-scale integration, the process of combining millions of transistors on a single chip.
The research lab hasn't stopped expanding since.
"We're constantly looking at exciting new areas, trying to understand where we can make the most positive difference for the company. Some of them turn out great, but with others we have a hard time saying [they were] a huge success," Dally said.
For a while, that meant building better GPUs for artificial intelligence. Nvidia began exploring the idea of AI-focused GPUs in 2010, more than a decade before the current AI frenzy.
"I said, this is amazing. It's going to completely change the world. We have to start doubling down on this," Dally said. "When I told Jensen, we started specializing our GPUs for it, developing a lot of software to support it, and engaging with researchers around the world who were doing it long before it was clearly relevant."
Physical AI Focus
With Nvidia now commanding the AI GPU market, the company is beginning to look beyond AI data centers for new sources of demand. That search led Nvidia to physical AI and robotics.
"I think eventually robots are going to be giant players in the world. Basically, we want to make the brains of every robot," Dally said. "To do that, you need to develop some key technologies."
That's where Sanja Fidler, Nvidia's vice president of AI research, comes in. Fidler joined Nvidia's lab in 2018. At the time, she was already working on simulation models for robots with a team of MIT students. Huang was intrigued when she told him about the work at a researchers' reception.
"I couldn't resist joining," Fidler told TechCrunch in an interview. "It was exactly that. You know, it was just such a great topic, and at the same time it fit with a really wonderful culture. Jensen told me, come work with us, not for us, you know?"
She joined Nvidia and created a lab in Toronto that works on Omniverse, an Nvidia platform focused on building physical AI simulations.

The first challenge in building these simulated worlds was finding the 3D data they needed, Fidler said. That meant finding enough suitable images and the technology to turn those images into usable 3D renditions.
"We invested in this technology called differentiable rendering," Fidler said. "Rendering goes [from] 3D to images or videos, right? And we wanted to go the other way."
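The core idea Fidler describes can be sketched with a toy example: if the renderer is differentiable, you can recover scene parameters from an image by gradient descent on the rendering error. The sketch below is a minimal hand-rolled illustration in Python, not Nvidia's actual pipeline; the soft-edged disk "scene" and the numerical gradient are assumptions made for clarity.

```python
# Toy inverse rendering: a differentiable "renderer" maps a scene
# parameter (a disk's radius) to a grayscale image, so gradient
# descent can recover the parameter from a target image.
import numpy as np

def render(radius, size=32):
    """Render a soft-edged disk of the given radius."""
    ys, xs = np.mgrid[0:size, 0:size]
    dist = np.sqrt((xs - size / 2) ** 2 + (ys - size / 2) ** 2)
    # Sigmoid edge keeps the image differentiable w.r.t. the radius.
    return 1.0 / (1.0 + np.exp(dist - radius))

target = render(radius=10.0)  # the "photo" whose 3D parameter we want back
radius = 4.0                  # initial guess
lr, eps = 5.0, 1e-4

def loss(r):
    # Image-space error between the current render and the target.
    return np.mean((render(r) - target) ** 2)

for _ in range(300):
    # Numerical gradient of the loss w.r.t. the scene parameter.
    grad = (loss(radius + eps) - loss(radius - eps)) / (2 * eps)
    radius -= lr * grad

print(radius)  # should approach the true radius of 10.0
```

Real differentiable renderers work the same way in spirit, but differentiate through full 3D geometry, lighting, and camera models rather than a single scalar parameter.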
World models
The Omniverse team released GANverse3D, its first model for turning images into 3D models, in 2021. Next, it had to figure out the same process for video. Fidler said the team built those 3D models and simulations with the Neural Reconstruction Engine, first announced in 2022, using videos from robots and self-driving cars.
These technologies became the backbone of the company's Cosmos family of world AI models, announced at CES in January, she added.
Now the lab is focused on making these models faster. Video games and simulations need technology that responds in real time, Fidler said, and for robots the team wants reaction times to be faster still.
"A robot doesn't need to watch the world at the same rate the world works," Fidler said. "It can watch it, like, 100 times faster. So if we can make these models significantly faster than they are today, they're going to be tremendously useful for robotics and physical AI applications."
The company continues to push toward that goal. At Monday's SIGGRAPH computer graphics conference, Nvidia unveiled a fleet of new world AI models designed to create synthetic data for training robots, along with new libraries and infrastructure software aimed at robot developers.
Despite the current hype around AI advancements and robots, particularly humanoids, Nvidia's research team remains realistic.
Both Dally and Fidler said the industry is at least a few years away from humanoids in your home, and Fidler compared the moment to the hype and timelines around autonomous vehicles.
"We've made a lot of progress, and I think we know AI is a real enabler," Dally said. "We started with visual AI for robot perception, but generative AI is enormously valuable for task and motion planning and manipulation. As we solve these individual small problems, and as the amount of data we need to train our networks grows, these robots will take off."