AI company founders have a reputation for making bold claims about the technology's potential to reshape fields, particularly the sciences. But Hugging Face co-founder and chief science officer Thomas Wolf has a more measured take.
In an essay published on X on Thursday, Wolf said he fears AI becoming "yes-men on servers" absent a breakthrough in AI research. He elaborated that current AI development paradigms won't yield AI capable of outside-the-box, creative problem-solving.
"The main mistake people usually make is thinking [people like] Newton or Einstein were just scaled-up good students, that a genius comes to life when you linearly extrapolate a top-10% student," Wolf writes. "To create an Einstein in a data center, we don't just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask."
Wolf's assertions stand in contrast to those of OpenAI CEO Sam Altman, who wrote in an essay earlier this year that "superintelligent" AI could "substantially accelerate scientific discovery." Similarly, Anthropic CEO Dario Amodei has predicted that AI could help develop cures for most types of cancer.
Wolf's problem with AI today, and with where he thinks the technology is headed, is that it doesn't generate new knowledge by connecting previously unrelated facts. Even with most of the internet at its disposal, AI as we currently understand it mostly fills in the gaps between what humans already know, Wolf said.
Some AI experts, including former Google engineer François Chollet, have expressed similar views, arguing that while AI may be capable of memorizing reasoning patterns, it's unlikely to generate "new reasoning" in novel situations.
Wolf believes that AI labs are essentially building "very obedient students." Today's AI isn't incentivized to question or propose ideas that potentially run counter to its training data, he said.
"To create an Einstein in a data center, we don't just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask," Wolf said. "One that writes, 'What if everyone is wrong about this?' when all textbooks, experts, and common knowledge suggest otherwise."
Wolf believes AI's "evaluation crisis" is partly to blame for this disillusioning state of affairs. He points to the benchmarks commonly used to measure improvements in AI systems, most of which consist of questions with clear, obvious, and "closed-ended" answers.
As a solution, Wolf proposes that the AI industry "move to a measure of knowledge and reasoning" able to elucidate whether AI can take "bold counterfactual approaches," make general proposals based on "tiny hints," and ask "non-obvious questions" that lead to "new research paths."
The trick, Wolf admits, will be figuring out what such a measure looks like. But he thinks it would be worth the effort.
"[T]he most crucial aspect of science [is] the skill of asking the right questions and challenging even what one has learned," Wolf said. "We don't need an A+ [AI] student who can answer every question with general knowledge. We need a B student who sees and questions what everyone else missed."