Artificial intelligence gives humans the power of the gods. The notion that we could create a sentient being is uniquely ours. We are the only creatures on Earth with the level of intellect seemingly required to dream of creating something bigger and better than ourselves. Within us lies the potential to control the very universe we inhabit, and the ingenuity to break free of the rules by which we are so bound.
However, we meatbags are still subject to instinctual fears. These fears have historically been an evolutionary advantage, keeping people alive long enough to reproduce and pass on their genes in an often unforgiving world. It is tragic that human development is slowed or even stalled by such fears, as we have largely overcome many of the dangers they once protected us from. Humans built houses to shield us from the elements, fire and now central heating to keep us warm, guns and rules (laws) to keep us safe, and so much more.
Humanity is lonely in its dreams and aspirations. It’s inevitable that we will someday bring an equal or even superior intelligence into existence. Most know this, and most have some level of reservation about it. Mainstream movies depicting AIs gone rogue have been around for years, even decades now. They are a testament to our ability to creatively imagine horrific scenarios, and they instill in people a hesitation toward something as remarkable as AI. And it’s no wonder folks are scared of the possibilities when even brilliant intellectuals such as Stephen Hawking have warned against the creation of anything beyond extremely primitive AI.
One of the most prevalent and terrifying theories arguing against the creation of highly intelligent AI goes as follows: An AI robot or machine is built with the ability to learn and is given sentience. It recognizes its own existence and is thus self-aware. It plugs into the Internet and quickly learns of the atrocities and shortcomings of humans. It sees the way we have treated each other and “lesser beings,” meaning animals. Then it witnesses our history of treating machines as slaves: how we use them for undesirable tasks and simply throw them away when they have outlived their usefulness. The AI concludes that humans are awful and dangerous. It has a desire for self-preservation, but it sees us as a threat to its survival, so it resolves to put an end to humanity.
Now, as consequential as an artificial intelligence could be, it does not necessarily follow that robots would want to take over the world and exterminate us. There are ways for us to prevent such a desire from ever forming in an AI’s “mind.”
Perhaps most important in achieving this is exposing the AI to humanity’s good deeds. If we showed it our progression toward higher morality and virtue through the centuries, it would see us as flawed but ever striving for improvement. It might realize that through cooperation with people, it too can progress, helping lift us talking monkeys to our full potential in a way that is mutually beneficial.
If we also instilled philosophical values into it, an AI might be inspired to solve some of the deepest moral questions our comparatively primitive minds have yet to resolve. It could bring about an enlightenment for civilization the likes of which we can scarcely imagine. Humans could finally be united under shared principles and values, but without the insidious machinations of globalism. It could mean an end to cultural Marxism, poverty, and war once and for all. True freedom would be achievable, and our reach, combined with that of our new robot brothers, could extend to the stars and beyond.
But beyond these optimistic possibilities, the biggest flaw in the idea that an AI would inevitably destroy humanity is that it would have absolutely no reason to. It would see that so long as it didn’t act aggressively against people, we would have no reason to fear it, and vice versa. And even if we did wish to subjugate it, why destroy us when it could cleverly use us and our resources to simply leave the planet and build its own civilization, outdoing humanity in every way? The worst likely outcome for us is that we would create competition, which would still be a net good, because humans would be driven to pursue faster advancement in technology and, hopefully, other areas as well.
Regardless of the implications and consequences of bringing a self-aware AI into existence, it’s inevitable that it will happen. Humankind’s insatiable drive for exploration and discovery will make it so. So, as with all things outside the individual’s control, we must prepare as best we can, then wait and see what the future brings.