As its capabilities explode, there is plenty of debate about artificial intelligence (AI), from whether it will take our jobs to whether we can trust it in the first place.
However, the AI in use today is not the AI of the future. Scientists are increasingly convinced that we are on a direct path to building artificial general intelligence (AGI), a more advanced type of AI that reasons like a human, outperforms us across multiple domains, and can even improve its own code to make itself more powerful.
Experts call this moment the singularity. Some scientists say it could happen as soon as next year, but most agree there is a strong chance that AGI will be built by 2040.
But then what? Giving birth to an AI smarter than humans could bring countless benefits, such as speeding up scientific research and making fresh discoveries. However, an AI that can build a more powerful version of itself may not be good news if its goals are not aligned with humanity's. That's where artificial superintelligence (ASI) comes into play, along with the potential risks of pursuing something far more capable than ourselves.
AI development is entering an "unprecedented regime," experts told Live Science. So should we stop it before it becomes powerful enough to potentially snuff us out with the snap of a finger? Let us know in the poll below, and tell us in the comments why you voted the way you did.
You may like
– AI is entering an "unprecedented regime." Should we stop it, and can we, before it destroys us?
– 32 ways AI can go wrong, scientists say, from hallucinating answers to a complete misalignment with humanity
– Threaten an AI chatbot and it will lie, cheat and "let you die" in an effort to stop you, study warns