New ‘Dragon Hatchling’ AI architecture modeled after the human brain could be a key step towards AGI, researchers claim

November 13, 2025

Researchers have designed a new type of large language model (LLM) that they propose could bridge the gap between artificial intelligence (AI) and more human-like cognition.

Researchers at AI startup Pathway, which developed the model, say that “Dragon Hatchling,” as it is called, is designed to more accurately simulate how neurons in the brain connect and strengthen through learning. They describe it as the first model that can “generalize over time,” meaning it automatically adjusts its neural wiring in response to new information.

In the study, uploaded to the preprint database arXiv on September 30, the team presented the model as a successor to the existing architectures that underpin generative AI tools such as ChatGPT and Google Gemini. They further suggested that it could provide the “missing link” between today’s AI technologies and more advanced, brain-inspired models of intelligence.

“There’s a lot of discussion going on right now, especially with reasoning models and synthetic reasoning models, about whether you can extend reasoning beyond the patterns seen in the data they retain, and whether you can generalize reasoning to more complex or longer reasoning patterns,” Adrian Kosowski, co-founder and chief scientific officer at Pathway, said on the Super Data Science podcast on October 7.

“The evidence is largely inconclusive, and the answer is generally no: currently, machines do not generalize reasoning the way humans do. We believe this is a major challenge. [The] architecture we are proposing has the potential to bring about significant change.”

A step towards AGI?

Teaching AI to think like humans is one of the field’s most important goals. However, reaching this level of simulated cognition, often referred to as artificial general intelligence (AGI), remains difficult.

A key challenge is that human thinking is inherently messy. Our thoughts rarely arrive as neat, linear sequences of connected information. Rather, the human brain is a chaotic tangle of overlapping thoughts, sensations, emotions, and impulses constantly competing for attention.

[Image: Diagram of a network of connected lines and points. Credit: JESPER KLAUSEN / SCIENCE PHOTO LIBRARY / Getty Images]

In recent years, LLMs have brought the AI industry ever closer to simulating human-like reasoning. LLMs are typically powered by transformers, a type of deep learning architecture that allows AI models to connect words and ideas over the course of a conversation. Transformers are the “brains” behind generative AI tools like ChatGPT, Gemini and Claude, allowing them to interact with users and respond with (at least most of the time) convincing levels of “awareness.”

Transformers are extremely sophisticated, but they also mark the limits of existing generative AI capabilities. One reason is that they do not learn continuously: once an LLM is trained, the parameters governing it are locked, and new knowledge can only be added through retraining or fine-tuning. When an LLM encounters something new, it simply generates a response based on what it already knows.
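To make that limitation concrete, here is a minimal, hypothetical PyTorch sketch (our own illustration, not Pathway’s code): a trained transformer can respond to an input it has never seen, but nothing at inference time alters its weights.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained transformer. Once training ends, its
# parameters are frozen: inference never changes them.
model = nn.Transformer(d_model=32, nhead=4,
                       num_encoder_layers=1, num_decoder_layers=1)
model.eval()
for p in model.parameters():
    p.requires_grad = False            # "locked" weights

snapshot = [p.clone() for p in model.parameters()]

src = torch.randn(10, 1, 32)           # an input the model has never seen
tgt = torch.randn(5, 1, 32)
with torch.no_grad():
    out = model(src, tgt)              # a response is produced...

# ...but the weights are bit-for-bit unchanged afterward. Adding real
# knowledge would require a separate retraining or fine-tuning pass.
assert all(torch.equal(a, b) for a, b in zip(snapshot, model.parameters()))
```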

Imagine dragons

Dragon Hatchling, by contrast, is designed to adapt its understanding dynamically, beyond its training data. It does this by updating its internal connections in real time as it processes new inputs, much as connections between neurons strengthen or weaken over time. This could support continuous learning, the researchers said.

Unlike typical transformer architectures, which process information sequentially through stacked layers of nodes, Dragon Hatchling’s architecture behaves like a flexible web that reorganizes itself as new information arrives. Tiny “neuronal particles” continually exchange information and adjust their connections, strengthening some and weakening others.

Over time, new pathways form that help the model retain what it has learned and apply it to future situations, effectively giving it a kind of short-term memory that shapes how it handles new inputs. Unlike a traditional LLM, however, Dragon Hatchling’s memory arises from continuous adaptation within the architecture itself, rather than from a stored context window.
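As a rough illustration of that idea, here is a toy Hebbian-style update in Python (a sketch under our own assumptions, not Pathway’s published rule): connections between co-active units strengthen while all connections slowly decay, so recent activity persists in the connection weights themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 16                                  # toy "neuronal particles"
W = rng.normal(0.0, 0.1, (n, n))        # connection strengths: the fast, plastic state
x = rng.random(n)                       # current activations

def hebbian_step(W, x, lr=0.01, decay=0.001):
    """One local plasticity update: connections between co-active units
    strengthen (Hebb's rule) while unused connections slowly decay."""
    W = W + lr * np.outer(x, x)         # strengthen co-active pairs
    return W * (1.0 - decay)            # weaken everything slightly

# Processing a stream of inputs keeps reshaping the wiring, so recent
# activity persists in W itself: short-term memory carried by the
# architecture rather than by a stored context window.
for _ in range(100):
    x = np.tanh(W @ x + rng.normal(0.0, 0.05, n))  # units exchange information
    W = hebbian_step(W, x)
```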

In testing, Dragon Hatchling performed comparably to GPT-2 on benchmark language-modeling and translation tasks. That is an impressive feat for a brand-new prototype architecture, the researchers note.

Although the paper has not yet been peer-reviewed, the team hopes the model will serve as a foundational step toward AI systems that learn and adapt autonomously. In theory, that could mean AI models that get smarter the longer they are online, for better or worse.

