In a policy paper published Wednesday, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks said that the U.S. should not pursue a Manhattan Project-style push to develop AI systems with "superhuman" intelligence, also known as AGI.
The paper, titled "Superintelligence Strategy," argues that an aggressive bid by the U.S. to exclusively control superintelligent AI systems could provoke fierce retaliation from rivals, potentially in the form of a cyberattack.
"[A] Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the co-authors write. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."
The paper, co-authored by three highly influential figures in America's AI industry, comes just a few months after a U.S. congressional commission proposed a "Manhattan Project-style" effort to fund AGI development, modeled after America's atomic bomb program in the 1940s. Energy Secretary Chris Wright recently said the U.S. is at the start of "a new Manhattan Project" on AI while standing in front of a supercomputer site alongside OpenAI co-founder Greg Brockman.
The Superintelligence Strategy paper challenges the idea, championed by several American policy and industry leaders in recent months, that a government-backed program pursuing AGI is the best way to compete with China.
In the opinion of Schmidt, Wang, and Hendrycks, the U.S. finds itself in something of an AGI standoff not dissimilar to mutually assured destruction. In the same way that global powers do not seek monopolies over nuclear weapons, which could trigger a preemptive strike from an adversary, Schmidt and his co-authors argue that the U.S. should be cautious about racing toward dominating extremely powerful AI systems.
While likening AI systems to nuclear weapons may sound extreme, world leaders already consider AI to be a major military advantage. The Pentagon, for instance, says that AI is helping speed up the military's kill chain.
Schmidt et al. introduce a concept they call Mutual Assured AI Malfunction (MAIM), in which governments could proactively disable threatening AI projects rather than waiting for adversaries to weaponize AGI.
Schmidt, Wang, and Hendrycks propose that the U.S. shift its focus from winning the race to superintelligence toward deterring other countries from creating superintelligent AI. The co-authors argue the government should "expand [its] arsenal of cyberattacks to disable threatening AI projects" controlled by other nations, as well as limit adversaries' access to advanced AI chips and open source models.
The co-authors identify a dichotomy that has taken hold in the AI policy world. On one side are the "doomers," who believe that catastrophic outcomes from AI development are a foregone conclusion and advocate for countries slowing AI progress. On the other side are the "ostriches," who believe nations should accelerate AI development and essentially hope it all works out.
The paper proposes a third way: a measured approach to developing AGI that prioritizes defensive strategies.
That strategy is particularly notable coming from Schmidt, who has previously been vocal about the need for the U.S. to compete aggressively with China in developing advanced AI systems. Just a few months ago, Schmidt published an op-ed saying that DeepSeek marked a turning point in America's AI race with China.
The Trump administration appears dead set on pushing ahead in America's AI development. However, as the co-authors point out, America's decisions around AGI do not exist in a vacuum.
As the world watches America push the limits of AI, Schmidt and his co-authors suggest it may be wiser to take a defensive approach.