Philosophy in the Age of Machines (Part 2)

Will AI superpower or usurp human dominance?

Ken Ryu
18 min read · Aug 20, 2018

We assume that since we build the machines, program the machines, and define the algorithms, we are in control. This assumption is valid today. Tomorrow is another question. It is not hard to visualize a world in which machines are given the authority to handle these functions themselves. We are already beginning to move in that direction with machine-learning algorithms.

The Limits of Human Imagination

Humans are very limited in our ability to work with big numbers and to predict the speed of technological innovation. We speak of exponential improvements, but our minds are stuck in linear mode, lacking the imagination to comprehend more than a handful of years into the future with any accuracy.

In “2001: A Space Odyssey,” the 1968 novel by Arthur C. Clarke, Clarke portrays an ominous future. A space exploration voyage turns tragic as the ship’s computer (HAL) disregards human requests and applies its own superior logic. Clarke was overly aggressive with his conscious-machine prediction but undershot other futuristic predictions. His astronauts read electronic newspapers that update their content on a daily basis, kind of like a Harry Potter news scroll. In this example, Clarke falls short of the reality of a world of information changing at real-time speed, driven by the miniaturization of compute power and the advent of the Internet.

It is a fool’s errand to attempt to predict the year in which computers will outperform humans in the majority of cognitive functions, but the day is coming and may not be far off.

Can machines think outside the box?

Computers are ahead of humans in well-defined games such as chess and Go, but they still can’t write poetry or compose music that surpasses human work. With advances in natural language processing and massive compute power, these domains will also move toward computer-generated productions in the near future.

Computers excel at number crunching and can mash up existing data structures to deliver improved solutions. Going back to the chess example, a computer can record billions of games and evaluate each board pattern to determine the move that provides the best likelihood of victory. This brute-force method of decision-making is like driving using a rear-view mirror, but a high-powered mirror for sure. Computers think in probabilities. A computer delivers an optimized response (the move) against a predicted result (does the move improve the computer’s board position?) drawn from past game results, all in service of the ultimate goal (winning the game).
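
To make the “thinking in probabilities” idea concrete, here is a minimal Python sketch. It assumes a toy, made-up history of games rather than any real chess database: the machine simply picks the candidate move with the best historical win rate for the current position.

```python
from collections import defaultdict

# Hypothetical game history for illustration: (position, move, result for the
# side to move), where result is 1 for a win and 0 for a loss or draw.
past_games = [
    ("start_position", "e2e4", 1),
    ("start_position", "d2d4", 0),
    ("start_position", "e2e4", 1),
]

def best_move(position, history):
    """Pick the move with the highest historical win rate for this position."""
    wins, plays = defaultdict(int), defaultdict(int)
    for pos, move, result in history:
        if pos == position:
            plays[move] += 1
            wins[move] += result
    if not plays:
        return None  # never-seen position: the rear-view mirror shows nothing
    # "Thinking in probabilities": rank candidate moves by estimated win rate.
    return max(plays, key=lambda m: wins[m] / plays[m])

print(best_move("start_position", past_games))  # -> e2e4
```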

This leads to an important question: is art derivative or divinely inspired? The answer is not black and white. Some art is convention-shatteringly innovative, while other works simply borrow and steal from existing forms. Computers can produce high-quality derivative art, but can they, or will they, be able to innovate better than humans? To bring it down to a pragmatic question: would a computer have come up with cubism without Picasso?

Going back to the divinely inspired versus derivative question, where does cubism fall? If Picasso was divinely inspired, then humans may have an innate, god-given advantage over machines.

Does brute force trump love?

We get into a grey area when the power and control of computers over our lives becomes so invasive that, despite the human advantage of god-given inspiration, computing interference and dependency obviate those advantages. Love and humanity were suppressed by the brutality and maniacal world-order vision of Hitler. In the end, humanity survived and the Nazis were defeated. Was this triumph of good over evil evidence that there is a god and a universal life force guiding an invisible hand in favor of harmony and synchronicity over chaos and destruction?

During WWII, 50 million people lost their lives prematurely. Since that time, we have built weapons of mass destruction and an interconnected web whose failure could dwarf those losses.

A cyber-war could cripple our global economy, bringing famine, war and anarchy to billions. A nuclear war could eviscerate billions and choke our environment. If Hitler were alive today and had access to our massive nuclear and cyberwar power, would he be tempted to unleash this fury?

Taking the “War Games” example, what if a computer system were to calculate the advantages of such actions, but instead of judging such large-scale destruction unwise, proceeded with the campaign?

If our machines are built with our human limitations and biases, what happens if an Einsteinian genius develops a computing system with annihilation or human subjugation as its mission? Einstein is a particularly interesting figure. He was a proponent of peace, but his theory of relativity and his work on atomic energy led to the creation of the atom bomb. In his case, he did not have this weaponized objective in mind at the time of his epiphany about the energy that could be released from the atom. Only as the project gained government sponsorship did he realize the destructive potential of the A-bomb.

Science and computers are unlocking many mysteries of our physical world as humans attempt to bend nature to our will. Dams, canals, solar energy, wind farms, skyscrapers, deep-sea oil wells, and GPS-guided farm tractors are only the beginning. Along the way we have created unintended consequences such as global warming, toxic waste, and air pollution. As machines become increasingly necessary to manage our complex infrastructure, defense systems, agricultural production, energy sources, and information distribution, our dependency on machines is frighteningly linked to our survival.

Can’t live without them

Imagine if the Internet were taken offline for a week. Wall Street panic, no access to currency, the electrical grid shut down, highways clogged in gridlock, military and law enforcement without communication, stores looted: these are but a few of the problems we would face. Such an outage could be caused by a massive glitch, a mega-virus, or, most likely, a cyberwar. The other, more futuristic problem is the HAL scenario, where the machines alter their human-defined objectives and morph toward new goals that are detrimental to mankind. These are very real possibilities. It provokes the question of whether a Star Wars or Lord of the Rings style life force exists. Does organic life innately band together when faced with widespread extinction? Do animals, trees, insects and humans unconsciously tune into a common wavelength to resist annihilation? Even with this combined life force, would the resistance be coordinated and strong enough to overcome a brute-force algorithm set on partial or total destruction?

Are machines the next higher life form in our evolutionary path?

Before humans, it is believed that dinosaurs ruled the land. These beasts dominated other life forms with power, strength and size rather than ingenuity. These giants have been gone for millions of years, replaced by Homo sapiens as the dominant life form. It is naive to think that humans could not suffer a similar fate. As we develop man-made devices that could lead to a doomsday crisis, we are certainly our own worst enemy. We are the weakling owner holding a thin, frayed leash to keep our pit bull from a rampage. Who are the cockroaches and rodents that will emerge from this new ice age and repopulate the planet if we fail?

Before we start taking axes to our MacBooks, let’s stay in reality. Computers are essential to our well-being. Technology is the tool that is bringing billions out of a life of destitution and ignorance. The Internet unleashes our global knowledge store and shares this treasure freely. Machine learning is destined to be our greatest asset in the race to save our planet from environmental Armageddon issues such as global warming and food and water insecurity. Just as important, technology provides billions of people the tools and time to chase their passions rather than toil in menial work. Prior to the rise of the middle class emanating from the Industrial Revolution, and the move from the factories to the service and knowledge economy we are seeing now in the Information Age, only the wealthy and privileged had the access to education and the financial independence to fully develop their god-given talents in the arts, sports and sciences. This dream potential is now being democratized and globalized. The potential to live a fully realized life is more available than ever. Thanks be to technology. Suffice it to say that the genie is out of the bottle and there is no going back. How to prevent the genie from ill deeds will become more critical as the machines get smarter and more autonomous.

In the dinosaur discussion, these masters of the earth appear to have died out due to a massive change in the environment. Many believe that a huge asteroid brought on a sudden ice age that precipitated their demise. Certainly we could face a similarly dire set of events, whether environmental, self-inflicted or celestial, that would send our environment out of balance and make our earth uninhabitable. A second scenario is possible. Let’s call this the Planet of the Apes scenario. Instead of intelligent apes overtaking humans, machines are the usurpers. Will we live in harmony with our machine overlords? Will the machines be benevolent and treat us as treasured pets? Will the relationship be more of a symbiosis, like the one between sharks and remoras? This is what we hope and expect. Currently our relationship with technology has provided more benefits than ills. We are still masters of our machines, and the machines have shown no conscious awakening. Even if our machines do develop some level of consciousness and self-protection, there is no reason to believe that machines created by humans will not, at their core, adopt a human-like moral code.

As we struggle with the incongruous thought of a machine gaining consciousness, we need to consider the long history of life on earth. There has long been a vigorous debate on whether humans hold a monopoly on higher thinking. The popular view is that beasts lack the intellectual capacity to comprehend our world and are driven solely by base instincts. This is similar to how we view machines: lacking free will, a resource for humans to use at our whim. This all-too-convenient classification provides humans the moral authority to treat lesser animals as we see fit. The question, then, is when did humans gain this enlightenment? If we evolved from our banana-consuming ancestors, at what point did god instill us with divine intellect? Or is this higher intellect simply a function of the evolution of the brain, where even apes have a level of intellect, just far lower than ours? That argument offers a plausible answer and explains the key difference between man and monkey. What is a brain and how does it work? The mysteries of the brain are still hidden, but our comprehension of its inner workings is coming into focus. Scientific research tells us that the brain is a network of billions of nerve cells communicating via electrical impulses. Hmmm… sounds familiar.

Can an inanimate object gain consciousness?

If we believe that consciousness, or more specifically higher consciousness, is driven by our superior brains, is it so difficult to imagine a computer with higher processing power than our human brains gaining consciousness? If machines were to bypass humans, it is neither certain nor likely that they would immediately turn on us. The transition from a human-dominated world to one where machines hold supremacy is likely to be a gradual and imperceptible one. There will be no bloody revolution. Instead, the machines will methodically gain more and more control over our lives without us realizing that the inmates are now running the asylum. There will be some groups that attempt to shut down or reprogram the machines, but these groups will be considered dangerous, radical fringe elements who threaten to do more harm than good. We have become so dependent on machines to survive that any attempt at disrupting them would be the equivalent of a patient on oxygen unplugging her own life support. Essentially, we need the machines more than they need us.

Consider our relationships with certain species such as apes, dogs and horses. Which of these roles would we fill in a world where we are no longer masters of our universe? Between apes and humans there is modest competition for food and land, though mostly we don’t concern ourselves with these primates one way or the other. Our love of dogs is a symbiotic one in which dogs have earned their distinction as man’s best friend. We protect, feed, shelter and even clean up after our four-legged pals. We no longer have much utility for horses since the advent of the automobile, but we previously used them as beasts of burden for various human tasks, including transportation. Will we be lesser peers, devoted companions, or useful servants to a higher life force?

Unlike organic beings, who compete for land, food, water and air, machines are not burdened with these Darwinian impulses. In a machine-dominated world, our actions would merely be a set of data variables to be considered as the machines set about their various objectives. Let’s go back to the example of the chess-playing computer. Given a more complex end-goal, a machine might be programmed to develop reliable crop and water resources for life in sub-Saharan Africa. The machines could send out autonomous heavy machinery to dam rivers, dig irrigation ditches, build solar energy farms, plow farmland, and determine which crops will thrive. Great, right? Except that the machine’s plans may have unintended consequences. Villages and wildlife may be displaced or destroyed. Economic disruption to existing commerce may ruin the livelihoods of small business owners. There is no free ride, or more profoundly, for every action there is an equal and opposite reaction. In chess terms, a gambit is a sacrifice of pawns in order to gain a competitive advantage. Just as our leaders make life-and-death decisions that impact society at large, computers may take over these decisions.
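
As a toy sketch of this gambit problem, consider two hypothetical infrastructure plans with entirely invented scores. If the optimizer is told to maximize crop yield and nothing else, the damaging plan wins; price in displacement and habitat costs and the ranking flips. Nothing here reflects a real system, it only illustrates how the stated objective shapes the outcome.

```python
# Hypothetical plans and made-up numbers, for illustration only.
plans = {
    "dam_river":       {"crop_yield": 90, "villages_displaced": 12, "habitat_lost": 40},
    "drip_irrigation": {"crop_yield": 70, "villages_displaced": 0,  "habitat_lost": 5},
}

def naive_score(plan):
    # Only the stated objective counts; side effects are invisible to the machine.
    return plan["crop_yield"]

def weighted_score(plan, displacement_penalty=5, habitat_penalty=0.5):
    # Same objective, but with the unintended consequences priced in.
    return (plan["crop_yield"]
            - displacement_penalty * plan["villages_displaced"]
            - habitat_penalty * plan["habitat_lost"])

print(max(plans, key=lambda p: naive_score(plans[p])))     # -> dam_river
print(max(plans, key=lambda p: weighted_score(plans[p])))  # -> drip_irrigation
```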

Technology innovation and ethical implications

Self-driving cars are facing these ethics questions today. Humans are not very skilled at driving, and machines are well on their way to surpassing us in efficiency and safety. Deaths from automobile accidents are common and unavoidable. We accept that a human may kill another human while driving as an unfortunate cost of our dependence on this mode of transportation. Yet we bristle at the thought that an autonomous computer may knowingly run over a pedestrian rather than slam on the brakes or swerve into another lane. The algorithm may calculate that the likely death of the pedestrian is a lesser tragedy than subjecting the driver and other nearby drivers to a head-on collision or a massive multi-car pileup. Gambit indeed. The thought of the cold, calculating decision of the computer, versus the instinctual and unintended accident caused by a human driver, makes all the difference in our tolerance for such a death. Although in both scenarios the pedestrian is killed, we have an ethical problem with the calculated decision of the self-driving car to keep driving despite its ability to take a different action. Ultimately, the safety, efficiency, and cost of self-driving cars will win. As with any innovation, we will initially be outraged by the innocent victims but will rationalize and come to grips with these costs in due time. Humans are very adaptable in this way.
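
A crude way to picture that calculation: a minimal expected-harm comparison with invented probabilities and outcomes, not drawn from any real autonomous-driving system. The cold, calculating choice is simply the option with the lowest expected harm, which with these made-up numbers happens to be staying on course.

```python
# Invented numbers for illustration only: probability of a fatality and the
# number of people put at risk by each possible maneuver.
options = {
    "stay_course_hit_pedestrian": {"p_fatality": 0.9, "people_at_risk": 1},
    "swerve_into_oncoming_lane":  {"p_fatality": 0.3, "people_at_risk": 4},
    "brake_hard_cause_pileup":    {"p_fatality": 0.2, "people_at_risk": 6},
}

def expected_harm(option):
    return option["p_fatality"] * option["people_at_risk"]

# The algorithm's "gambit": minimize expected harm, whoever that harm falls on.
choice = min(options, key=lambda name: expected_harm(options[name]))
print(choice, expected_harm(options[choice]))  # -> stay_course_hit_pedestrian 0.9
```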

When automobiles were first gaining popularity, the safety record for pedestrians, drivers and passengers was laughably bad. There were many real-life Myrtles being run over by Jay Gatsbys. Due to the excitement and promise of the automobile, the majority of the public accepted these deaths as an unfortunate cost of progress. Computers lack compassion and ethical hangups. They are very good at determining the best way to achieve the end result with the least amount of collateral damage. The computer is looking to achieve checkmate while maintaining as many pieces as possible. If a queen must be sacrificed to win the game, the computer will proceed without hesitation or remorse. Although this seems callous, let’s consider how life-and-death decisions are made today. We elect a leader and then defer to a government body of humans who develop policy and programs to protect the public good. Rather than a benevolent and brainy HAL, we get characters like Vladimir Putin, Donald Trump and Kim Jong-un. We also have well-meaning public leaders like Angela Merkel and Justin Trudeau.

Let’s rewind a bit and return to the example of the machine tasked with the sub-Saharan Africa infrastructure, food and water objective. While that machine may have noble and humanitarian goals in mind, other machines may have competing objectives. A food export corporation’s computers may be programmed to optimize and monopolize trade with sub-Saharan Africa for rice, wheat and other food products. As part of that objective to maintain and maximize trade with the region, the corporate computer may consider ways to block the other computer’s food-independence charter. It might generate bad publicity with stories of villages destroyed by flooding from the dams and endangered species driven to extinction by the proposed public works projects. It may initiate lawsuits to tie up the projects in red tape. It might also use its influence with heavy industrial equipment manufacturers to pressure them not to supply these projects. There could even be sabotage efforts to derail the work. This is a pretty extreme and unlikely scenario, but it illustrates the possibility of machines competing with other machines, and how this can escalate into a complex and potentially harmful race for computing supremacy.

In “War Games”, Matthew Broderick’s character comes up with a brilliant idea: teach the computer the futility of trying to win a thermonuclear war. He has the computer play itself. As the computer runs through simulated wars, it discovers that a decisive victory is not possible. In our interconnected world, these isolated game-play scenarios are far too simplistic. Computers are used in an endless array of applications with far-reaching implications. An election can be won with false news. An electricity grid can be taken offline. A war can be won with cyber-soldiers and drones. Massive, long-serving industry giants can be rendered obsolete and bankrupt in months. Currency manipulation can create runaway inflation and devastate national economies. Genetic engineering and medical advances can disrupt billion-dollar prescription drug companies. Stock market speculation and arbitrage can create bubbles and crashes. Machines allow us to bring human ideas and missions to massive scale with incredible speed.
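
The self-play idea can be shown on a much smaller board. The sketch below is a stand-in for the film’s simulation rather than anything from it: it exhaustively plays tic-tac-toe against itself with minimax and finds that the value of the game under best play is a draw.

```python
from functools import lru_cache

# A tiny self-play experiment in the spirit of "War Games": let the machine
# exhaustively play tic-tac-toe against itself and report the game's value.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Value for X with both sides playing perfectly: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # board full, no winner: a draw
    results = []
    for i, cell in enumerate(board):
        if cell == ".":
            nxt = board[:i] + player + board[i + 1:]
            results.append(value(nxt, "O" if player == "X" else "X"))
    return max(results) if player == "X" else min(results)

# Prints 0: neither side can force a win. "The only winning move is not to play."
print(value("." * 9, "X"))
```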

Moore’s Law is outpacing our capacity for change

Brilliant innovations such as the iPhone can bring digital access to billions in less than a decade. Facebook can create an information fabric for a large percentage of the world’s population. Amazon can develop the most formidable commerce operation the world has seen in 20 years’ time. Let’s look at the equation.

Brilliant and motivated leader + a big idea + computing power = massive changes at warp speed

Change can be good, and change can be bad. Humans have egos, greed and the capacity for evil. To return to our most famous “evil” persona: Hitler had charisma, passionate support, and disturbing ideas.

Besides machines being directed by humans or organizations with bad intentions, computers have a unique lack of moral questioning. During the Holocaust, Hitler used fear and blind obedience to conduct his murderous ethnic cleansing campaign. Unlike the Hitler loyalists who wrestled with their consciences as they subjected their Jewish captives to inhumane treatment and killings, machines would execute such programs without hesitation.

Machine on machine battles

If we view machines as the spawn of human ideas, the objectives of different machines will come into conflict. Today, you can bet that high-powered hedge fund computers are battling each other every day to gain advantage as they arbitrage the stock markets. As humans give their machines more latitude to make decisions by loosening variable constraints and rules, will the machines eventually jump the rails and begin to extend beyond their original charter?

Will machines eventually attempt to shut down machines running at cross purposes to their ultimate objectives? Such a machine-on-machine war could play out where the core utilitarian purposes of the machines are compromised as the conflict demands more of their focus and takes them away from their primary objectives. For example, a solar energy network and Big Oil may battle for contracts, public sentiment, and price advantages. The solar energy business’s computers attempt to disrupt refineries and expose environmental hazards created by Big Oil. Big Oil fights back by crippling the solar operation’s computers. The network of the solar energy corporation is crippled as it deals with massive DDoS attacks. Customers and suppliers are unable to communicate with the solar company. The solar company brings in cyber-security experts to protect against future attacks and decides to engage its own hacking experts to pay back the oil companies. In the meantime, energy prices and access are compromised as we humans sit nervously on the sidelines while the cyber battle escalates.

This example is woefully lacking in imagination, as our future machines are sure to have far more sophisticated strategies to win. Ask a computer to win a chess game and it will not pity its opponent or neglect to make devastatingly aggressive moves. Provided the parameters are well defined, the machines should adhere to those rules and limits. For example, the game will not allow the computer to move a rook diagonally. The more leeway and creativity allowed to the machines, the more complex and dangerous the pathway to victory may be. If the charter defined for the ultimate goal does not factor in collateral damage or unintended consequences, the machine will proceed to its goal without consideration for the issues created by its tactics. If you ask an autonomous car to drive from point A to point B by the most efficient route without forethought on safety, the computer will consider the flattened kitty cat an inconsequential event as it completes its objective.
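
A minimal sketch of that last point, using made-up numbers: the hypothetical route planner below ranks two routes purely by travel time unless a safety weight is supplied, at which point the flattened-kitty-cat route stops winning. The objective the machine is given is the objective it optimizes.

```python
# Toy routes and invented costs, for illustration only.
routes = {
    "through_alley": {"minutes": 9,  "hazard_events": 3},
    "around_block":  {"minutes": 12, "hazard_events": 0},
}

def cost(route, safety_weight=0.0):
    # With safety_weight=0, only travel time matters; hazards are invisible.
    return route["minutes"] + safety_weight * route["hazard_events"]

print(min(routes, key=lambda r: cost(routes[r])))                     # -> through_alley
print(min(routes, key=lambda r: cost(routes[r], safety_weight=5.0)))  # -> around_block
```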

Machines and morality

Today, computers are ill-equipped to fully grasp human morality and unintended consequences. Though most humans are endowed with an innate sense of right and wrong, we are also limited in our ability to anticipate unintended consequences. Our intentions are good, but our solutions can sometimes be more harmful than helpful. This is the messy pathway that innovation and progress follow. To our credit, once we goof, we eventually detect the unexpected problems and work to fix them. Our most pressing unintended consequence is global warming. Industrialization created tools, communication, transportation, and food and water security, but as a byproduct we have pollution from our factories and cars, and we now face a dangerous environmental crisis. We are working to contain and reverse this problem before it is too late. Machines will be critically important in our race against this environmental time bomb. Machines give us the processing power and the ability to deliver mass-scale technology experiments and solutions to contain the crisis. The problem is that as we rush to bandage our wound, we may overcorrect and create another crisis as severe as the one being addressed. We zig and zag erratically as we attempt to find the correct course.

We began our journey toward human fulfillment and sustenance with a slow walk. That journey was frustratingly slow. We then discovered that if we harnessed the power of horses, we could increase our speed. We graduated to the automobile and paved roads, which proved a convenient way to travel. We further enhanced our ability to navigate with maps and GPS-enabled directions. We are now toying with rocket-powered transportation. Will we be able to control the direction without flying off course or exploding in transit? This is a parable for how technology is advancing beyond our control. Our solution? Have the machines manage themselves, since humans are ill-equipped to control this runaway freight train. As this transition happens, are we controlling our destination, or are we captive passengers of our machines?

It could very well be that computers will always lack pneuma, the breath of life. Organic life forms are endowed with a mystical essence, a god particle if you will. Machines, due to their inorganic nature, may always lack this divine touch. If machines, despite their superior processing capabilities, remain unconscious and defer to their human masters, we may not have to worry about a machine-led Skynet revolt. Instead, we will be holding the reins of a team of wild horses, hoping that we don’t blow it and send them, and ourselves, careening over a high cliff.

It is a brave new world. The machines are evolving. Friend or foe? Stay tuned.

In a future post, we will consider the universe at large. Our ability to reach the stars may eventually become reality as our comprehension of theoretical physics matures. If we find other forms of life, what implications will that have for the human race? These questions seemed academic and unanswerable not so long ago, but the accelerating rate of scientific discovery is beginning to open doors to them.
