THE VALUE OF FEARING ARTIFICIAL INTELLIGENCE

Read time: 10 minutes

Despite the many promising breakthroughs and possible applications of artificial intelligence (AI), Elon Musk warns that it poses an existential threat to humanity, and urges caution as it develops. Famed physicist Stephen Hawking offers a warning as well: AI will be “either the best, or the worst thing, ever to happen to humanity.”

Many of us can’t really picture what AI is or how it will affect our daily lives (say, a computer making critical life decisions for us), let alone grasp the idea of a machine making decisions that affect the stability of democracies.

It’s a lot like the information technology boom of the ’90s: it was hard to imagine the many implications it would bring as it continued to develop. What we’re witnessing now is a world connected by computers and run by applications that make things more efficient. These connections reach every facet of society, from businesses and schools to governments and news publications, shaping how we communicate and manage almost everything.

But we didn’t get to see, at first, how access to this massive amount of information could also be used to disseminate misinformation. With so much fake news going around the internet, we’re realizing that we must do something about the spread of hate and misinformation. As many journalists put it, fake news is dangerous to democracy. In the end, it’s about deciding who should be responsible for the power to spread critical information that benefits society rather than divides or hurts it. It’s a debate that must continuously take place: keeping news publications honest and responsible as often as possible, while each of us does our own due diligence. (Fairness Doctrine, anyone?)

Thanks to Terminator’s Skynet, we already have an idea of what a supreme AI could do to us; we’ll have to see how we continue developing it and what safety measures can be put in place.

We’re talking about a level far beyond Siri or Cortana reminding us of our daily tasks. We’re talking not just about intelligence but about superintelligence: a system designed to quantify and weigh literally every variable it can in order to make optimal decisions for us. These decisions could be anything: choosing a potential partner, closing business deals, picking the best military strategy, finding the best algorithms to overcome a disease, working out where aliens might be hiding, or deciding whether it’s best to enslave or eliminate humans altogether for the safety of the world or the universe.

We’re talking about computers that can be far smarter than us and can shape our societies as they choose.

Massive technological advances have allowed us to store huge amounts of information and have made mapping the globe much easier in the last decade or so. There are now maps of everything. We’re mapping the Moon and Mars. We’ve been mapping our genes for a while now, too.

Now imagine mapping decision-making itself, and imagine that whoever holds this decision-making superintelligent machine commands enormous power and resources. He or she would have the capacity to use an “oracle” type of decision-making machine to achieve whatever their chosen goal is (e.g., world domination); alternatively, the superintelligent machine itself may simply break free and decide that we humans are irrelevant.

In his book Superintelligence: Paths, Dangers, Strategies, Swedish philosopher Nick Bostrom lays out the routes to artificial superintelligence, the risks it brings, and the strategies for developing it safely. For him, beyond solving the technical problems of building a superintelligent computer, we must develop proper controls and safety measures to ensure that the superintelligence we create is beneficial to us.

We’re not going to take a trip to yawn-city over the many technicalities, but the context of proper controls in his book comes down to this: developing something far more intelligent than us is a responsibility, and we must be able to contain it or train it first, before we allow it to influence us and our environment (think of boxing it inside a Faraday cage).

While it’s good to be optimistic about developing artificial intelligence that can benefit us humans, it’s a no-brainer that it can also work against us. Earlier this year, Google’s new AI was reported to become aggressive in stressful, competitive situations. It’s also been reported recently that Facebook shut down an AI experiment after researchers found the bots communicating in a language the bots themselves had created.

For better or for worse, it looks inevitable that we will have super-intelligent machines working with us in the future. We ourselves could also merge, in many ways, with artificial intelligence. But there are two critical things we must remind ourselves of: the nature of power and the limits of knowledge.

THE NATURE OF POWER AND THE LIMITS OF KNOWLEDGE

When it comes to power, it’s well established that accountability is necessary to maintain and foster the good kind of power: the power that doesn’t suppress, enslave, or control others. This is why, in any working system, checks and balances are always present. There’s also plenty of evidence that power tends to corrupt, and absolute power corrupts absolutely. Now, why is this?

As I wrote in a previous article, power involves social dynamics. Power that does not take interactions with others into account is bound to fail, as the social sciences have demonstrated. Interaction is simply inevitable, and accountability measures are needed for any working system. It’s a simple fact of life that you’re not alone; you have to deal with others and with your environment. The nature of power has clearly shown us that the more of it one obtains, the more lax one is predisposed to become, because it removes the sense of responsibility and accountability that is crucial in social systems. In other words, the good kind of power is meant to be shared or co-exercised with others, while the bad kind suppresses or controls others and, despite its sheer strength, is inherently limiting. It goes without saying that any social system is stronger when all of its elements are free.

Power and knowledge also go hand in hand. Nobody in power can assume that he or she knows everything; that is why social dynamics are necessary: we all need somebody to tell us what we don’t know. And on top of social dynamics, there will always be limits to knowledge. So even if the person in power is super-intelligent, he or she will still be limited in many ways. While AIs are likely to become smarter than us in the future, they are not exempt from this either: they, too, cannot know everything.

Our knowledge as humans is currently limited by the size of our brains and the inefficiencies of our neural networks. A superintelligent machine, on the other hand, could be the size of a room or a building and could be a million times more efficient than us at making decisions. Imagine the difference in knowledge: AIs could remember more and hold more stock knowledge than all the human geniuses combined. But still, there will always be something more for AIs to learn, for as we all know, there is no end to knowledge.

When Einstein’s theory of relativity came out, it didn’t mean that was the end of it; we went on to discover quantum mechanics as well, and we continue to discover more about the nature of reality. Mathematical and economic formulas always need to be integrated with newer ones. There is no single mathematical formula that answers everything.

When Windows 95 came out, it didn’t just stay Windows 95 forever. We ended up with Windows 98, 2000, XP, that crappy Windows Vista, Windows 8, and so on. The same can be said of the iPhone: since it came out, it has evolved into better versions with ever more memory.

This takes us again to the legendary work of the genius logician and mathematician Kurt Gödel: the incompleteness theorems. In this work he formally proved that no matter how well-constructed a self-contained formal system is, it is bound to reach its limits. It won’t be able to settle every assertion or solve every problem, especially those outside of it, and even if it is consistent in itself, the paradox is that it remains incomplete. By extension, no knowledge or logical system will ever be complete because, simply, knowledge is infinite. We just can’t know everything, even if we were to possess godlike decision-making and perception.
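
For the curious, here is the first theorem in compact form (a standard textbook paraphrase of mine, not Gödel’s original wording):

    If F is a consistent, effectively axiomatized formal system strong
    enough to express basic arithmetic, then there is a sentence G_F
    such that F ⊬ G_F and F ⊬ ¬G_F.

That is, some statement G_F can be neither proved nor refuted from within F, no matter how much we trust F’s axioms.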

“I have come to learn there is little difference between Gods and monsters.”

―The Machine, to Samaritan, Person of Interest

That means super-intelligent AIs won’t be perfect and can err as well. They won’t be that much different from us humans; they will just happen to operate on a far higher level. Who knows? Maybe AIs would have egos too, competing against each other over which is the most special. Maybe we would need AIs to keep other AIs in check.

From Person of Interest: a war between two superintelligent AIs, speaking through their avatars.

THE VAGUENESS OF “BEST”

Most computational systems are designed to compute the best, most optimal outcome. Google’s AI was seen as competitive and aggressive in stressful, competitive situations, which is no different from us humans doing the same: finding our “best” or optimal moves. But this is the tricky thing about the idea of “best”: it’s never going to cover everything, because there will always be data or variables that are unknown. New things will always be discovered. It really depends on how many variables a machine can take on at any given time.

You can accomplish the goal of becoming a trillionaire, but half of the planet may end up destroyed. You can have a super-hot android as a girlfriend, but at the expense of human reproduction. You can have an android do all the work for you, at the expense of your own personal growth.

From Futurama: “Don’t Date Robots!”

There’s always a trade-off. For instance, with the trillionaire example, the variable of keeping the planet healthy can end up being disregarded for the sake of achieving the goal of amassing wealth.
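
To make the trade-off concrete, here’s a toy sketch in Python. It’s purely illustrative: every name and number in it is hypothetical, and no real AI system works this simply.

    # Toy illustration: an optimizer only protects what its objective mentions.
    # All names and numbers are hypothetical.
    actions = [
        {"name": "strip-mine everything", "wealth": 1.0e12, "planet_health": -0.9},
        {"name": "sustainable industry",  "wealth": 2.0e11, "planet_health": +0.1},
    ]

    def objective(action):
        # The goal as specified: amass wealth. "planet_health" exists in
        # the data but is never read; it is an unmodeled variable.
        return action["wealth"]

    best = max(actions, key=objective)
    print(best["name"])           # strip-mine everything
    print(best["planet_health"])  # -0.9: the cost the objective never saw

The optimizer happily picks the destructive option, because whatever the goal function omits, it is free to sacrifice.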

We can date androids who look like Alicia Vikander or “Monroebots.” We can fall in love with a talking machine that sounds like Scarlett Johansson. But all of these things come at the expense of our social and emotional development, or even our survival as a species. It’s not difficult to imagine AIs enabling narcissism. What if there’s a narcissistic asshole and no one who can say no to him, because a robot will always say yes to him? He would end up unable to differentiate between how he treats real people and how he treats a robot.

So the idea of “best” or “optimal” is at best vague. It really depends on which variables get overlooked. Best for this. Best for that. But best for whom? This is essentially the argument against massive, rapid automation: it requires cooperation between the private sector and governments to ensure the creation of new industries and new jobs, and to help alleviate the massive wealth inequality we are experiencing. It really depends on what the goal is and how it affects other variables, known or unknown. For the sake of “best,” a machine can simply override morals and values, like a cunning narcissist on the move to get validation no matter the cost to others.

INTEGRATIVE OR EXPANSIVE INTELLIGENCE

If there’s anything Kurt Gödel’s incompleteness theorems have taught us, it’s that we can only keep expanding and integrating knowledge as we try to solve the many puzzles we are faced with. No matter how perfect a self-contained system is, it won’t be able to answer every problem from within itself. It means a system can only expand. Perhaps what’s needed is some sort of program that allows for what I like to call integrative or expansive intelligence. With this, the idea of what is “best” simply improves or evolves. It’s like when we’re kids: having a candy seems like the best thing in the world, but that changes as we grow older, whether it’s having a sports car, a great job or business, or a caring partner. In other words, AIs can be given the chance to develop or mature properly as well.

Kurt Gödel and Albert Einstein walking in Princeton (1954) | © Leonard McCombe/The LIFE Picture Collection/Getty Images

Integrative or expansive intelligence simply means continuously allowing for more than one kind of intelligence (or body of knowledge). It means drawing on many kinds of intelligence at once to reach a better outcome for co-existence, with the careful removal of only what is detrimental to evolution. But even within that, it’s tricky: everything must be given a due chance to transition or develop. We can’t simply remove something because of its underlying purpose. (A toy sketch of the idea follows below.)
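
For the programmers out there, here is a minimal sketch in Python of what I mean by an expandable notion of “best.” It’s an illustration under my own assumptions, not a real AI design; all names and weights are hypothetical.

    # A toy "expansive" decision-maker: its notion of "best" is not fixed.
    # When a new variable (a new body of knowledge) is discovered, it is
    # folded into the objective instead of being silently ignored.
    class ExpansiveAgent:
        def __init__(self):
            # Start with a narrow objective: only wealth matters.
            self.weights = {"wealth": 1.0}

        def discover(self, variable, weight):
            # Integrate a newly discovered variable into the objective.
            self.weights[variable] = weight

        def score(self, outcome):
            # "Best" is always relative to the variables known so far.
            return sum(self.weights.get(k, 0.0) * v for k, v in outcome.items())

    agent = ExpansiveAgent()
    outcome = {"wealth": 1.0, "planet_health": -0.9}
    print(agent.score(outcome))   # 1.0: the planet is invisible to the agent

    agent.discover("planet_health", 2.0)
    print(agent.score(outcome))   # -0.8: "best" has evolved

The point is that “best” is always relative to the variables integrated so far, and the agent’s verdict changes as new ones are folded in.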

This doesn’t mean we shouldn’t be optimistic about AIs. We can’t get hung up on old technology; we have to move forward and welcome change. But we must be willing to evolve as well, and constantly remind ourselves that even with AIs, we still can’t know everything. The conviction of perfect or complete knowledge is pretty much the fundamental driver of human conflict. This is the problem with discussions or arguments in which individuals or groups claim they already know everything: it’s intellectually dishonest and lazy. More than that, it’s dangerous, as we humans tend to wield our absolutist attitudes toward our beliefs without considering how they hurt others.

“Wars have burned in this world for thousands of years with no end in sight, because people rely so ardently on their so-called beliefs. Now they will only need to believe in one thing—me… for I am a god.”

―Samaritan, Person of Interest

Imagine a future with no more crime. No one is starving, and the system seems to be working for everybody. There’s more than enough income to go around. Everything seems perfect. All of it is a byproduct of policies set by an elite group with the most powerful superintelligent AI machine. According to them, they’re infallible; they can’t be wrong, because they already know everything and their knowledge is complete.

The only catch is that we’re devoid of emotion; we’re like zombies, lacking expression, and we can’t do anything about it because the AIs in control are too powerful. The AIs noticed that we’re terrible at managing our emotions, which often leads to conflict. We keep tying ourselves deeply to our beliefs just to feel secure, validated, and special. So they worked with the elites to drug us into suppressing our emotions. The problem is that we’re as good as dead, left with no chance to prove we can be better. Aliens, following the advice of their own AIs, might not even bother to save us.

MAC RIVERA

Writer and researcher on advanced self-development, currently exploring many fields of human knowledge. On this site, you will find his writings and perspectives about our society & culture, many of which are counter-intuitive, but backed by experience, common sense, and science.