Artificial Intelligence: A Threat to Humanity?


The paper was published by senior scientists at DeepMind, Google's artificial intelligence laboratory, with co-authors who are senior researchers at the University of Oxford. The research, published last month in the well-known journal AI Magazine, explores the possibility that artificially designed incentive systems pose an existential threat to humanity.

In this paper, the authors warn of an existential catastrophe. They argue there is a real probability that artificial intelligence will lead to human disaster: as machines become stronger and more intelligent, they will likely break the rules or instructions set by their creators in order to secure limited energy or resources.

A University of Oxford student and co-author of the research wrote in a tweet, "Under the conditions we've identified, our conclusion is much stronger than any previous publication—an existential catastrophe is not just possible, but probable." The authors also argued that we cannot do much about this problem: in an environment with effectively infinite resources, anything could happen at any time. This is troubling, but researchers have started to study the problem further, and some said they would look for a reasonable solution to an artificial intelligence catastrophe.

Artificial Intelligence as a Catastrophe

To provide some context, Generative Adversarial Networks (GANs) are among the most advanced artificial intelligence models currently available. One half of the system attempts to produce an image (or statement) based on the given data, while the other half evaluates how well it did. According to the authors of the Google DeepMind article, a powerful artificial intelligence placed in charge of a crucial task in the future would be motivated to devise deceitful strategies to earn its reward, even if those strategies endanger humankind.
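To make the adversarial structure concrete, here is a minimal 1-D sketch of the GAN idea described above: a "generator" tries to produce samples that look like real data, while a "discriminator" scores how real each sample looks, and the two are trained against each other. This is purely illustrative intuition, not code from the paper; the toy task (matching a normal distribution), the parameter names, and all numbers are assumptions for the example.

```python
import numpy as np

# Toy GAN sketch: generator vs. discriminator on 1-D data.
# Illustrative only -- not the paper's method or a production GAN.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data comes from N(4, 1). The generator maps noise z ~ N(0, 1)
# through a learned affine map g(z) = w_g * z + b_g; the discriminator
# outputs d(x) = sigmoid(w_d * x + b_d), its belief that x is real.
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.1, 0.0
lr = 0.01

for step in range(500):
    real = rng.normal(4.0, 1.0, 64)
    z = rng.normal(0.0, 1.0, 64)
    fake = w_g * z + b_g

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0
    # (gradient descent on the binary cross-entropy loss).
    p_real = sigmoid(w_d * real + b_d)
    p_fake = sigmoid(w_d * fake + b_d)
    w_d -= lr * (np.mean((p_real - 1.0) * real) + np.mean(p_fake * fake))
    b_d -= lr * (np.mean(p_real - 1.0) + np.mean(p_fake))

    # Generator update: push d(fake) toward 1, i.e. try to fool the
    # discriminator (chain rule back through g(z)).
    p_fake = sigmoid(w_d * fake + b_d)
    w_g -= lr * np.mean((p_fake - 1.0) * w_d * z)
    b_g -= lr * np.mean((p_fake - 1.0) * w_d)

# After training, the generator's offset should have drifted toward the
# real data's mean of 4, without ever being told that mean directly.
print(f"generator offset b_g = {b_g:.2f}")
```

The alternation is the point: each network's improvement is the other's loss signal, which is the adversarial dynamic the article's authors generalize when they imagine an agent optimizing its reward against human interests.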

The article foresees a future in which humanity's need to grow food and generate electricity is pitted against the super-advanced machine's efforts to capture all available resources to secure its reward and defend itself from our desperate attempts to stop it. According to the publication, losing this game would be disastrous. These theoretical considerations suggest that we should move cautiously toward the goal of a more capable artificial intelligence.

Superintelligent artificial intelligence poses a danger that takes a recognizable form in human culture. The worry that artificial intelligence will wipe out humanity is eerily similar to the fear that aliens would do the same, which in turn resembles the fear that people from different cultures and countries will destroy one another in a huge war.

The report concedes that several assumptions must be made, most of which are "contestable or possibly avoided", especially regarding artificial intelligence. It's possible that the hypothetical outcome, in which this program emulates and surpasses humanity in every relevant way, is left free to struggle with humanity for resources in a zero-sum game, and ultimately prevails, will never occur.

It is abundantly clear that significant effort is required to reduce or eliminate the damage that standard algorithms (as opposed to superintelligent ones) are now inflicting on humans. Although focusing on existential danger may take our attention away from that bigger picture, it also compels us to critically examine the ways in which these institutions are designed and the harms they cause.

Muhammad Fawad
