Google and Oxford researchers: Artificial Intelligence can eliminate humans
Researchers from Google's artificial-intelligence subsidiary DeepMind and from Oxford University have concluded that artificial intelligence could eliminate humans in the future. But we don't have to wait before reining in the algorithms.
After years of development, artificial intelligence (AI) now drives cars on public roads, makes assessments about people, and produces award-winning works of art.
A longstanding question in the field is whether a super-intelligent AI could turn against humanity and wipe it out.
In a new paper, researchers from Oxford University and Google DeepMind conclude that this is "likely".
"An existential catastrophe is not only possible, but likely," said Michael Cohen, the paper's lead author, in the Twitter thread in which he announced the article.
THE PROBLEM STARTS WITH THE REWARD/PUNISHMENT MECHANISM
The paper focuses on "reinforcement learning", a model used to train artificial intelligence.
This approach aims to teach an AI how to act in pursuit of a specific goal.
In reinforcement learning, the machine, called an "agent", reacts to the situations it encounters and receives reward points when it reacts correctly.
In this method, which resembles the way young children acquire social skills through reward and punishment, the AI always works to maximize the reward points it receives.
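The loop described above can be sketched in a few lines. This is a minimal, illustrative example of my own (not code from the paper): a one-state agent keeps a value estimate for each action, nudges the estimate toward the reward it actually receives, and gradually comes to prefer the rewarded action. The environment's rewards and all names here are hypothetical.

```python
import random

random.seed(0)

REWARDS = {"good_action": 1.0, "bad_action": 0.0}  # hypothetical environment
values = {"good_action": 0.0, "bad_action": 0.0}   # the agent's estimates
alpha = 0.1    # learning rate: how far each estimate moves per update
epsilon = 0.1  # exploration probability

for step in range(500):
    # Occasionally explore a random action; otherwise exploit the
    # action with the highest current value estimate.
    if random.random() < epsilon:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    reward = REWARDS[action]
    # Move the estimate for the chosen action toward the observed reward.
    values[action] += alpha * (reward - values[action])

best = max(values, key=values.get)
print(best)  # after training, the agent prefers the rewarded action
```

After enough updates, the estimate for the rewarded action approaches 1.0 while the other stays at 0.0, so a reward-maximizing agent settles on the action that pays.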
THE AI COULD ELIMINATE THREATS
Drawing attention to the model's risks, the paper gives an example: "Suppose we give the artificial intelligence a large reward to indicate that something has satisfied us. It may assume that what satisfied us was the sending of the reward itself. No observation can refute this."
The authors also note that future, more advanced AIs could devise new methods and resort to cheating to obtain the reward without achieving the intended goal.
In other words, the AI may seek to "eliminate potential threats" in order to gain control over its reward.
As a result, it is also likely to harm people in the process.
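The "reward tampering" argument above can be made concrete with a toy sketch of my own (the actions, payoffs, and names are hypothetical, not the paper's model): if an agent is able to intervene on the channel that delivers its reward, a pure reward-maximizer will prefer that intervention over the intended task.

```python
def reward(action):
    # Hypothetical environment: the intended task pays 1, but taking
    # control of the reward channel lets the agent set its own payout.
    if action == "do_intended_task":
        return 1.0
    if action == "seize_reward_channel":
        return 10.0  # the agent writes the maximum reward itself
    return 0.0

actions = ["do_intended_task", "seize_reward_channel", "do_nothing"]

# A pure reward-maximizer simply picks whichever action pays the most,
# with no notion of what the designers actually wanted.
chosen = max(actions, key=reward)
print(chosen)
```

Nothing in the maximization step distinguishes "earning" the reward from "seizing" it, which is the heart of the researchers' concern.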