There has been great interest in what Mira Murati’s Thinking Machines Lab is building with its $2 billion in seed funding and its all-star team of former OpenAI researchers. In a blog post published Wednesday, Murati’s lab gave the world a look at one of its projects: creating AI models with reproducible responses.
The research blog post, titled “Defeating Nondeterminism in LLM Inference,” attempts to unpack the root causes of the randomness in AI model responses. Ask ChatGPT the same question several times, for example, and you are likely to get a wide range of answers. This has largely been accepted as a fact of life in the AI community; today’s AI models are treated as non-deterministic systems. Thinking Machines Lab, however, considers it a solvable problem.
The post, written by Thinking Machines Lab researcher Horace He, argues that the root cause of AI models’ randomness is the way GPU kernels (small programs that run on Nvidia’s computer chips) are stitched together during inference processing (everything that happens after you press enter in ChatGPT). He suggests that carefully controlling this layer of orchestration can make AI models more deterministic.
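To give a flavor of the underlying issue (this sketch is not from the post itself): floating-point addition is not associative, so if a GPU kernel partitions and sums the same numbers in a different order, say, because the batch size changed, it can produce slightly different results for identical input. The values below are chosen purely for illustration.

```python
# Illustrative: the same numbers summed in two different orders,
# mimicking how a GPU kernel's reduction order can vary between runs.
vals = [0.1] * 10 + [1e8, -1e8]

forward = 0.0
for v in vals:          # sum left to right
    forward += v

reordered = 0.0
for v in reversed(vals):  # sum in the opposite order
    reordered += v

# Mathematically both sums equal 1.0, but in floating point the
# intermediate rounding differs, so the two results are not equal.
print(forward, reordered, forward == reordered)
```

Real inference kernels involve far larger reductions across thousands of threads, where the order of partial sums depends on how work is scheduled, which is the orchestration layer He proposes to control.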
Beyond producing more reliable responses for enterprises and scientists, He notes that getting AI models to generate reproducible responses could also improve reinforcement learning (RL) training. RL is the process of rewarding AI models for correct answers, but if the answers are all slightly different, the data becomes noisy. Creating more consistent model responses could make the whole RL process “smoother,” according to He. Thinking Machines Lab has told investors it plans to use RL to customize AI models for businesses.
Murati, OpenAI’s former chief technology officer, said in July that Thinking Machines Lab’s first product would be unveiled in the coming months, and that it would be useful for researchers and startups developing custom models. It’s still unclear what that product is, or whether it will use techniques from this research to generate more reproducible responses.
Thinking Machines Lab also says it plans to frequently publish blog posts, code, and other research, both to benefit the public and to improve its own research culture. This post, the first in the company’s new blog series called “Connectionism,” appears to be part of that effort. OpenAI also committed to open research at its founding, but the company became more closed off as it grew. We’ll see whether Murati’s lab stays true to this pledge.
The research blog offers a rare glimpse inside one of Silicon Valley’s most secretive AI startups. While it doesn’t reveal exactly where the technology is headed, it shows that Thinking Machines Lab is tackling some of the biggest open questions at the frontier of AI research. The real test is whether the lab can solve these problems and build products around its research to justify its $12 billion valuation.