Startups

Guide Labs Debuts New Kind of Interpretable LLM

February 23, 2026

The challenge with deep learning models is often understanding why a model behaves the way it does. Whether it’s xAI’s repeated struggles to rein in Grok’s bizarre politics, or ChatGPT’s bouts of sycophancy and mundane hallucinations, making sense of a neural network with billions of parameters isn’t easy.

Guide Labs, a San Francisco startup founded by CEO Julius Adebayo and chief scientific officer Aya Abdelsalam Ismail, is offering an answer to that problem. On Monday, the company open-sourced Steelling-8B, an 8-billion-parameter LLM built on a new architecture designed to make its behavior easier to interpret. Every token the model generates can be traced back to its origins in the LLM’s training data.

That can be as simple as determining which factual references the model is citing, or as complex as probing how the model represents humor or gender.
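The article doesn’t describe how the tracing actually works, but the claimed capability — mapping generated tokens back to their sources in the training data — can be sketched as a provenance index. Everything below (class names, document IDs, the tagging scheme) is hypothetical illustration, not Guide Labs’ implementation:

```python
# Hypothetical sketch: a provenance index records which training documents
# contributed to each tracked concept, so a generated token tagged with a
# concept can be traced back to source data.

class ProvenanceIndex:
    def __init__(self):
        self.sources = {}  # concept name -> list of training-doc ids

    def record(self, concept, doc_id):
        """Note that doc_id contributed evidence for a concept."""
        self.sources.setdefault(concept, []).append(doc_id)

    def trace(self, token, token_concepts):
        """Return training-doc ids behind the concepts a token activated."""
        docs = []
        for concept in token_concepts.get(token, []):
            docs.extend(self.sources.get(concept, []))
        return sorted(set(docs))

index = ProvenanceIndex()
index.record("protein_folding", "doc-041")
index.record("protein_folding", "doc-112")
index.record("loan_underwriting", "doc-007")

# Assume the model tags each generated token with the concepts it activated.
token_concepts = {"hemoglobin": ["protein_folding"]}
origins = index.trace("hemoglobin", token_concepts)
```

In a real system the index would be built during training rather than by hand, but the lookup direction is the point: attribution flows from output token, through named concepts, back to data.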

“If there are a trillion ways to encode gender, and I encode it in a billion of the trillion things that I have, I have to make sure that I can find all the billion things that I encoded. And I have to be able to reliably turn it on and turn it off,” Adebayo told TechCrunch. “You can do that with the current model, but it’s very fragile…It’s kind of one of those holy grail questions.”

Adebayo began this line of research while completing his PhD at MIT, where he co-authored a widely cited 2018 paper showing that existing methods for understanding deep learning models are unreliable. That work ultimately led to a new way of building an LLM: developers insert concept layers into the model that sort data into categories that can be tracked. This requires more up-front data annotation, but by leveraging other AI models to do the labeling, Guide Labs was able to train Steelling-8B as its largest proof of concept to date.
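Guide Labs hasn’t published the internals of its concept layers, but the general shape of a concept-bottleneck layer can be sketched in plain Python. The concept names, weights, and ablation API below are all hypothetical illustration: a hidden vector is projected onto named, human-readable concept scores, downstream layers consume only those scores, and any concept can be inspected or switched off — the “turn it on and turn it off” intervention Adebayo describes:

```python
# Hypothetical sketch of a concept-bottleneck layer: the hidden state is
# forced through named concept scores, so downstream computation can be
# traced to (or ablated from) specific human-readable concepts.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

class ConceptLayer:
    def __init__(self, concept_weights):
        # concept_weights: {concept name: weight vector over hidden dims}
        self.concept_weights = concept_weights

    def forward(self, hidden, ablate=()):
        """Project a hidden vector onto named concept scores.

        Concepts listed in `ablate` are zeroed out, so layers after the
        bottleneck provably never see them.
        """
        return {
            name: 0.0 if name in ablate else dot(w, hidden)
            for name, w in self.concept_weights.items()
        }

# Toy example: two interpretable concepts over a 3-dim hidden state.
layer = ConceptLayer({
    "mentions_finance": [1.0, 0.0, 0.5],
    "mentions_gender":  [0.0, 1.0, 0.0],
})
hidden = [0.2, 0.8, 0.4]

scores = layer.forward(hidden)
scores_ablated = layer.forward(hidden, ablate={"mentions_gender"})
```

A real concept layer would be trained jointly with the network against the annotated concept labels; the point of the bottleneck is that because the rest of the model can only consume these named scores, an ablation like the one above is guaranteed to remove the concept’s influence rather than merely obscure it.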

“The kind of interpretability that people do… is model-based neuroscience, and we turn it on its head,” Adebayo said. “What we’re really doing is designing a model from scratch so that we don’t have to do any neuroscience.”

Image credit: Guide Labs

One concern with this approach is that it could suppress some of the emergent behaviors that make LLMs so interesting: the ability to generalize in new ways about things they were never trained on. Adebayo says that still happens in his company’s model. His team tracks what it calls “discovered concepts,” categories the model arrives at on its own.


Adebayo argues that this interpretable architecture is something everyone will eventually need. For consumer LLMs, model builders could use these techniques to block the use of copyrighted material and better control output on subjects such as violence and substance abuse. Regulated industries need more controllable LLMs, too: in finance, for example, models that evaluate loan applicants should weigh factors like financial records rather than race. Interpretability is also needed in scientific research, another area where Guide Labs has developed its technology. Protein folding has been a huge success for deep learning models, but scientists need more insight into why the software finds promising combinations.

“This model shows that training interpretable models is no longer a kind of science, but an engineering problem,” Adebayo said. “We’ve cracked the science and we’ve been able to extend it. There’s no reason why this kind of model can’t match the performance of frontier-level models with more parameters.”

According to Guide Labs, Steelling-8B achieves 90% of the performance of comparable existing models while using less training data, thanks to its new architecture. The company, which emerged from Y Combinator and raised a $9 million seed round from Initialized Capital in November 2024, plans to build a larger model next and to start offering API and agent access to users.

“The current way we train models is so primitive that democratizing inherent interpretability will be good for humanity in the long run,” Adebayo told TechCrunch. “We’re chasing models that are going to be superintelligent, and you don’t want something that’s mysterious to you making decisions for you.”

