Dr. Yichuan Zhang, CEO and co-founder of Voltsbit, examines the flaws in a market dominated by homogeneous AI-based models and how diversity is needed in the age of AI-natives.
Our future with artificial general intelligence (AGI) is probably not as dystopian as many fear. AI has already proven to be one of the most powerful drivers of human progress in recent years, and we are seeing exciting signs of what is possible.
Researchers are making strides toward problems that have plagued us for decades. Nuclear fusion, long joked to be perpetually 30 years away, is now being modeled and optimized with AI in ways that are revolutionizing the pace of experimentation. If this trajectory continues, the impact will extend far beyond energy to climate modeling, resource optimization, and other global challenges.
But the more pressing question is not whether AI will bring breakthroughs; it is who will benefit from them and, just as important, who can build with them. At this point, the answer is: surprisingly few.
Limits of current innovation
Most of today’s AI products are structurally the same. They are built on top of a small number of underlying models trained on roughly similar datasets. The result is a market where apparent diversity masks underlying homogeneity: different interfaces, but increasingly the same intelligence layer.
If the current trajectory continues, this small number of model providers will define more than the capabilities of AI systems. They will define the boundaries of innovation itself, and who can participate in it.
Unfortunately, a world where all AI products work the same way, because they rely on the same underlying intelligence, is a world of lost differentiation. To put it more bluntly, it is a world where technology makes it difficult to express individuality.
It is therefore essential to redistribute this capability without slowing the progress we have made. That requires change at the most important level: the intelligence layer.
Formation of the intelligence layer
For individuals and organizations to meaningfully participate in the AI-native era, they need the ability to train and own the models that power their applications. This is where live learning becomes important.
A static model, no matter how large, is a snapshot. It improves only through periodic, provider-managed retraining cycles. A live-trained model, by contrast, continually evolves in production, incorporating new data and giving its owner control over what the model learns and how. Think of it as the difference between renting intelligence and owning it.
If we want a unique, AI-native future, every individual and organization must be able to own and shape the intelligence layers that power their AI applications. And it is important that this shift happens before it is too late.
A bright future in the AI-native era
A word of warning: building the technology is only part of the challenge. The real challenge is making it accessible at scale, in a way that is easy to use and sustainable.
For live learning systems to be truly valuable, it is essential to understand how users actually interact with them, adapt them, and derive value from them. That means technical teams must observe how intelligence evolves once it is placed directly in users' hands.
What matters is not the first iteration itself, but what it reveals: how users shape the model, what kinds of feedback loops emerge, and how intelligence behaves when it is no longer centrally managed. Those insights will inform what happens next and ensure a bright future for everyone.
In the long run
The closest thing to an AGI future that benefits us all will arrive through incremental change, not instantaneous transformation. Individuals will, without doubt, increasingly lead AI-assisted lives; we are already seeing it happen. And the organizations we interact with every day will become increasingly AI-native, incorporating AI into their core operating models.
Although this is unlikely to be the final state, an early version of an AGI world could look like this in the coming years. And if that future is to arrive, as it almost certainly will, the current trajectory matters.
If the AI ecosystem continues to be dominated by a small number of underlying model providers, we will move toward a world where AI-assisted experiences are powerful but increasingly standardized. That is the wrong outcome.
Instead, we must ensure the democratization of live learning: a world where any person or organization can train and own the intelligence layer that powers an AI application, and where that intelligence continues to evolve in production through feedback and new data.
In that world, the performance and evolution of AI systems are no longer tied to the roadmaps of a few providers. The intelligence that powers them belongs to its creators and is shaped by those who use it. That is the difference between building the AI-native era and inheriting someone else's version of it.
