The challenge with deep learning models is often understanding why a model behaves the way it does. Whether it's xAI's repeated attempts to rein in Grok's bizarre politics, or ChatGPT's struggles with sycophancy and mundane hallucinations, interpreting neural networks with billions of parameters isn't easy.
Guide Labs, a San Francisco startup founded by CEO Julius Adebayo and chief scientific officer Aya Abdelsalam Ismail, is offering an answer. On Monday, the company open sourced Steelling-8B, an 8-billion-parameter LLM built on a new architecture designed to make the model's behavior easier to interpret. Every token the model generates can be traced back to its origins in the LLM's training data.
That tracing can be as simple as identifying which factual references the model is citing, or as complex as understanding how the model represents humor or gender.
“If there are a trillion ways to encode gender, and I encode it in a billion of the trillion things that I have, I have to make sure that I can find all the billion things that I encoded. And I have to be able to reliably turn it on and turn it off,” Adebayo told TechCrunch. “You can do that with the current model, but it’s very fragile…It’s kind of one of those holy grail questions.”
Adebayo began this research while completing his PhD at MIT, where he co-authored a widely cited 2018 paper showing that existing methods for understanding deep learning models are unreliable. That work ultimately led to a new way of building an LLM: developers insert concept layers into the model that classify data into categories that can be tracked. This requires more up-front data annotation, but by leveraging other AI models, the team was able to train Steelling-8B as its largest proof of concept to date.
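The concept-layer idea can be sketched in miniature. The snippet below is a hypothetical illustration, not Guide Labs' actual architecture: it assumes a small set of human-annotated concept names, reads named activations out of an opaque hidden state with one linear probe per concept, and "turns a concept off" by removing its direction from the hidden state, in the spirit of Adebayo's on/off example.

```python
import numpy as np

# Hypothetical sketch of a concept layer; names, shapes, and the probe/ablation
# mechanism are illustrative assumptions, not Guide Labs' implementation.
rng = np.random.default_rng(0)

CONCEPTS = ["finance", "humor", "gender"]  # human-annotated categories

class ConceptLayer:
    def __init__(self, hidden_dim, concepts):
        self.concepts = concepts
        # One linear probe per concept. In practice these would be trained
        # against annotated data; random weights here for illustration.
        self.W = rng.normal(size=(len(concepts), hidden_dim))

    def forward(self, h):
        # Project the opaque hidden state onto named, trackable concept
        # activations (sigmoid squashes each logit into (0, 1)).
        scores = self.W @ h
        return dict(zip(self.concepts, 1 / (1 + np.exp(-scores))))

    def ablate(self, h, concept):
        # "Turn off" a concept by subtracting the hidden state's projection
        # onto that concept's probe direction.
        w = self.W[self.concepts.index(concept)]
        return h - (h @ w) / (w @ w) * w

layer = ConceptLayer(hidden_dim=16, concepts=CONCEPTS)
h = rng.normal(size=16)
acts = layer.forward(h)            # inspect named concept activations
h_no_gender = layer.ablate(h, "gender")
```

After ablation the "gender" probe reads a zero logit (activation 0.5), since the hidden state no longer has any component along that probe's direction; the real system would need this to hold for every encoding of the concept, which is the "holy grail" Adebayo describes.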
“The kind of interpretability that people do… is model-based neuroscience, and we turn it on its head,” Adebayo said. “What we’re really doing is designing a model from scratch so that we don’t have to do any neuroscience.”

One concern with this approach is that it could eliminate some of the emergent behaviors that make LLMs so interesting: the ability to generalize in new ways about things they weren't trained on. Adebayo says that still happens in his company's model. His team tracks what it calls "discovered concepts," which the model identifies on its own.
Adebayo argues that this interpretable architecture is something everyone will eventually need. For consumer LLMs, model builders could use these techniques to block the use of copyrighted material and better control output on subjects such as violence and substance abuse. Regulated industries need more controllable LLMs: in finance, for example, models that evaluate loan applicants should weigh financial records rather than race. Scientific research is another area where interpretability matters, and where Guide Labs has developed technology. Protein folding has been a huge success for deep learning models, but scientists need more insight into why the software finds promising combinations.
“This model shows that training interpretable models is no longer a kind of science, but an engineering problem,” Adebayo said. “We’ve cracked the science and we’ve been able to extend it. There’s no reason why this kind of model can’t match the performance of frontier-level models with more parameters.”
According to Guide Labs, Steelling-8B achieves 90% of the performance of comparable existing models while using less training data, thanks to its new architecture. The company, which emerged from Y Combinator and raised a $9 million seed round from Initialized Capital in November 2024, plans next to build a larger model and begin offering API and agent access to users.
“The current way we train models is so primitive that democratizing inherent interpretability will be good for our role in humanity in the long run,” Adebayo told TechCrunch. “We’re chasing these models that are going to be super intelligent, so you don’t want something mysterious to you making decisions for you.”
