Should humanity pull the brakes on artificial intelligence (AI) before it puts our survival at risk? Public opinion is split over the technology's future as it continues to reshape industry and everyday life, and as AI models that match human intelligence look increasingly feasible.
But what happens when AI surpasses human intelligence? Experts call this moment the singularity: the point at which technology moves beyond artificial general intelligence (AGI) and becomes an entity that can recursively self-improve and escape human control.
Many readers in the comments believed it is already too late to slow AI's trajectory. "It's too late. I'm grateful that I'm old and won't live to see the outcome of this catastrophe," writes Kate Salgison.
Meanwhile, Cece replied: "I think everyone knows that no one will push that genie back into the bottle."
Others thought that fears of AI were exaggerated, comparing reservations about AI with public fears of past technological changes. "All new and emerging technologies have deniers, critics and often crackpots. AI is no different," Pegg said.
Related: AI is entering an "unprecedented regime." Should we stop it, and can we, before it destroys us?
This view was shared by some followers of Live Science on Instagram. "When electricity first appeared, do you think a lot of people asked this same question? People were so afraid of it, they made all sorts of disastrous predictions, none of which came true."
Others highlighted the complexity of the issue. "It's an international arms race, and the knowledge is out there. There's no good way to stop it. But we need to be careful even with AI that just keeps us occupied (millions or billions of AI agents could pose a huge displacement risk for humans, even if AI never exceeds human intelligence or reaches AGI)," writes 3jaredsjones3.
"Safeguards are needed as companies like NVIDIA try to replace their entire workforce with AI. Still, the benefits AI brings to science, health, climate change, technology, efficiency and other key goals could mitigate some of the problems."
One comment suggested a regulatory approach rather than halting AI entirely. Isopropyl proposed: "Heavily tax closed-weight LLMs [large language models], both training and inference, and allow no copyright claims on their output. Also, encourage deployment at the scale of consumer hardware rather than HPC [high-performance computing], and impose progressive taxes on larger model training."
In contrast, they suggested that smaller, specialized LLMs run by consumers themselves, outside of corporate control, would "help [the] larger public develop healthier relationship[s] to AIs."
"Those are some good ideas. Shifting the incentives away from pursuing AGI and toward making what we already have easier to use would be great," replied 3jaredsjones3.
What do you think? Should AI development keep moving forward? Share your views in the comments below.