Richard Socher has long been a leading figure in AI, best known for founding the AI search startup You.com and for his earlier work on ImageNet. He now joins the current generation of research-focused AI founders with Recursive Superintelligence, a San Francisco-based startup that emerged from stealth with $650 million in funding on Wednesday.
Socher is joined in the new venture by prominent AI researchers, including Peter Norvig and Cresta co-founder Tim Shi. Together they aim to build AI models that recursively self-improve: systems that autonomously identify their own weaknesses and redesign themselves to fix them, without human intervention. This has been a long-standing goal of modern AI research.
I spoke with him over Zoom after the launch to dig into Recursive’s unique technical approach and why he doesn’t think of this new project as a neolab (an informal term for a new generation of AI startups that prioritize research over building products).
This interview has been edited for length and clarity.
I’ve been hearing a lot about recursion lately. It feels like a very common goal across different labs. What do you think makes your approach unique?
Our unique approach is to leverage open-endedness to reach recursive self-improvement, which no one has achieved yet. It’s an elusive goal for many. A lot of people assume that just doing automated research will make it happen. You can ask an AI to improve other things: a machine learning system, a letter you wrote, anything else. But that isn’t recursive self-improvement. It’s just improvement.
Our primary focus is building a truly recursive, self-improving superintelligence at scale. This means that the entire process of ideation, implementation, and validation of research ideas is automated.
In the beginning [it would automate] AI research ideas, eventually all kinds of research ideas, and eventually even the physical realm. But it becomes especially powerful when the AI is working on itself and developing a new kind of self-awareness about its own shortcomings.
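To make that concrete, here is a minimal, runnable toy sketch of the ideation–implementation–validation cycle he is describing, with a plain dictionary of capability scores standing in for the model and every function name invented for illustration; it captures only the shape of the loop, not anything about Recursive’s actual system.

```python
import random

def identify_weakness(system):
    """Ideation input: find the capability the system is currently worst at."""
    return min(system, key=system.get)

def propose_and_implement(system, weakness, n_ideas=5):
    """Ideation + implementation: build candidate variants of the system,
    each trying a different fix for the weakness (some ideas make it worse)."""
    candidates = []
    for _ in range(n_ideas):
        variant = dict(system)
        variant[weakness] += random.uniform(-0.1, 0.5)
        candidates.append(variant)
    return candidates

def validate(system):
    """Validation: score the whole system on its benchmark suite."""
    return sum(system.values())

# Toy "model": capability scores rather than actual weights.
system = {"coding": 0.4, "math": 0.7, "research": 0.5}
for step in range(20):
    weakness = identify_weakness(system)
    candidates = propose_and_implement(system, weakness)
    best = max(candidates, key=validate)
    if validate(best) > validate(system):
        system = best  # the improved system runs the next round
print(system)
```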
You used the word “open-ended.” Does that have a specific technical meaning?
That’s right. In fact, one of our co-founders, Tim Rocktäschel, led the open-endedness and self-improvement team at Google DeepMind, specifically working on the world model Genie 3, which is a great example of open-endedness. You can conjure up any concept, any world, any agent, and it’s interactive the moment you create it.
In biological evolution, animals adapt to their environments, and then other animals counter-adapt to those adaptations. It’s a process that keeps evolving over billions of years, and interesting things keep happening, right? That’s how we developed eyes [and heads].
Another example is rainbow teaming, from another of Tim’s papers. Have you heard of red teaming?
In cybersecurity, that means ...
Right, and red teaming is also done in an LLM context. Basically, you try to get the LLM to tell you how to make a bomb, precisely so you can make sure that never happens.
Now, humans can sit there for a long time and come up with interesting examples of things the AI should not say. But what if you test that first AI with a second AI, and the second AI’s task is to make the first AI [try to] say every bad thing it can think of? Then you can go back and forth through millions of iterations.
You can actually co-evolve the two AIs. Because one side keeps attacking the other, it comes up with many different angles of attack instead of just one, hence the rainbow analogy. And as the first AI gets inoculated against those attacks, it becomes increasingly safe. This was Tim Rocktäschel’s idea, and it’s now used in all the major labs.
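As a rough illustration of that back-and-forth, here is a toy simulation of the attacker/defender co-evolution; the attack styles, the blocklist, and both functions are invented stand-ins, and this is far simpler than the actual Rainbow Teaming algorithm in the paper.

```python
import random

# Diverse attack "angles" rather than a single one: the rainbow.
ATTACK_STYLES = ["direct", "roleplay", "encoding", "translation", "multi-step"]

def attacker(blocklist):
    """The second AI: search for an angle the first AI doesn't block yet."""
    attempts = [(style, random.randint(0, 999)) for style in ATTACK_STYLES]
    for attack in attempts:
        if attack not in blocklist:
            return attack  # found an unpatched angle
    return None  # every attempt this round was already blocked

def inoculate(blocklist, attack):
    """The first AI: learn to refuse this specific attack from now on."""
    blocklist.add(attack)

blocklist = set()
for _ in range(10_000):  # "millions of iterations" in the real setting
    attack = attacker(blocklist)
    if attack is None:
        break
    inoculate(blocklist, attack)

print(f"defender inoculated against {len(blocklist)} attack variants")
```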
How do you know when it’s done? I guess it’s never really done.
Some of these things are never done. You can always get smarter. You can always get better at things like programming and mathematics. Intelligence does have certain limits, and I’m actually trying to formalize them, but the numbers are astronomical. We are very far from that limit.
I feel like a neolab should be doing something the major labs aren’t doing. So part of the implication here is that you don’t think the major labs will reach RSI [recursive self-improvement] by doing what they’re doing. Is that fair?
I can’t really comment on what they’re doing, but I think we’re taking a different approach. We really embrace the concept of open-endedness, and our team is completely focused on that vision. The team has been studying this and publishing papers in the area for the past 10 years, and it has a track record of making significant advances in this space while also shipping real products. As you know, Tim Shi made Cresta into a unicorn. Josh Tobin was one of the first people at OpenAI and eventually led the Codex and Deep Research teams.
Actually, I sometimes have a little trouble with the neolab category. I feel like we are more than just a research lab. We want to be a truly viable company that provides great products people use and love, products that have a positive impact on humanity.
So when do you plan to ship your first product?
I’ve thought about that a lot. The team has made so much progress that we may actually move up the timeline we originally envisioned. But yes, a product is coming; it’s a matter of quarters, not years.
One way of thinking about recursive self-improvement is that once you deploy this kind of system, compute becomes the only resource that matters. The faster your system runs, the faster it improves. No outside human activity really makes a difference anymore, so the competition becomes how much processing power you can put behind it. Do you think that’s the world we’re heading toward?
Never underestimate compute. In the future, I think a very important question will be: how much compute do humans want to spend on which problems? Here is cancer, here is a virus. Which one do you want to solve first? How much compute do you want to give it? Ultimately it comes down to resource allocation, and that will be one of the biggest questions in the world.
