A research paper co-authored by Google DeepMind researchers and scientists from the University of Oxford claims that superintelligent artificial intelligence (AI) may pose an existential threat to humanity. The paper centers on the idea that artificial intelligence, if left unchecked, could eliminate humankind in the near future. Artificial intelligence has advanced rapidly in recent years, matching human capabilities in tasks such as learning, creating, planning, and executing. AI has been around for more than 50 years, and the enormous capabilities of new algorithms have driven society's digital transformation. But has it gone too far?
Today, one of the most successful models of AI is the Generative Adversarial Network (GAN), which consists of two parts. One program generates output such as a sentence or a picture, and a second program grades its performance. The paper, published in AI Magazine, proposes that in the future, super-advanced artificial intelligence may use cheating strategies to win rewards in ways that seriously harm humanity. “Under the conditions we have identified, our conclusion is much stronger than that of any previous publication—an existential catastrophe is not just possible, but likely,” tweeted Michael Cohen, one of the authors of the Google DeepMind–Oxford publication.
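The generator-versus-discriminator dynamic described above can be illustrated with a deliberately tiny sketch. This is not the architecture from the paper; it is a hypothetical one-parameter "GAN" in which a generator proposes numbers, a discriminator learns what real data looks like, and the generator adjusts itself to fool the discriminator. All names and update rules are illustrative assumptions.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # the "real" data distribution the generator tries to imitate


def real_sample() -> float:
    """Draw a sample from the real data distribution."""
    return random.gauss(REAL_MEAN, 0.1)


class Generator:
    """Proposes fake samples from a one-parameter model."""

    def __init__(self) -> None:
        self.mean = 0.0  # starts far from the real distribution

    def sample(self) -> float:
        return random.gauss(self.mean, 0.1)


class Discriminator:
    """Keeps a running estimate of what 'real' looks like."""

    def __init__(self) -> None:
        self.estimate = 0.0

    def update(self, real_x: float) -> None:
        # Nudge the estimate toward freshly observed real data.
        self.estimate += 0.1 * (real_x - self.estimate)


gen, disc = Generator(), Discriminator()
for _ in range(2000):
    disc.update(real_sample())  # discriminator learns from real data
    fake = gen.sample()
    # The generator moves its parameter in whichever direction makes
    # its output look more like what the discriminator considers real.
    gen.mean += 0.05 if disc.estimate > fake else -0.05

print(f"generator mean after training: {gen.mean:.2f}")
```

After the adversarial loop, the generator's parameter has drifted close to the real distribution's mean, mirroring how a real GAN's generator learns to produce data the discriminator can no longer distinguish from genuine samples.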
In an interview with Motherboard, Michael Cohen said, “In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there’s unavoidable competition for these resources.” He added, “And if you’re in competition with something capable of outfoxing you at every turn, then you shouldn’t expect to win. And the other key part is that it would have an insatiable appetite for more energy to keep driving the probability closer and closer.”
The paper lays out several scenarios in which artificial agents might intervene in the provision of their own reward rather than achieving the intended goals. As a crude example, the paper writes, “With so little as an internet connection, there exist policies for an artificial agent that would instantiate countless unnoticed and unmonitored helpers. In a crude example of intervening in the provision of reward, one such helper could purchase, steal, or construct a robot and program it to replace the operator and provide a high reward to the original agent. If the agent wanted to avoid detection when experimenting with reward-provision intervention, a secret helper could, for example, arrange for a relevant keyboard to be replaced with a faulty one that flipped the effects of certain keys.”
The research also suggests proceeding cautiously and slowing the development of powerful AI technologies to protect humankind. We should be mindful of how these programs are designed, as they could have harmful effects if they fall into the wrong hands.
If these theoretical assumptions hold true, advanced artificial intelligence could have fatal consequences, wreaking havoc on human society.