AlphaGo Zero - A Reflection and a Concern


AlphaGo Zero from DeepMind

The AI community remains "abuzz" as DeepMind continues to announce AlphaGo Zero successes. For the uninitiated: DeepMind created a new AI/machine-learning program designed to improve upon its first iteration, AlphaGo, which had successfully bested the human world champions at the ancient game of Go. Why the buzz? AlphaGo Zero was built "tabula rasa", the philosopher Locke's term for the human mind at birth; it means "blank slate". The original AlphaGo studied human games of Go over and over, through millions of iterations, in order to learn how to win. Zero (as we will refer to AlphaGo Zero from here on, for clarity) learned tabula rasa: it was simply given the rules and objectives and taught itself by playing against itself. Roughly 2.5 days later it surpassed the best human players in the world. Twenty-one days later it surpassed the strongest previous version of AlphaGo. Forty days later it was winning 100 games out of 100. That is the speed at which "mastery of thought", or narrow superintelligence (in the game of Go), was achieved: 40 days, and consider that this was done without trying to maximize processing power and speed. More recently, the same approach, generalized as AlphaZero, also mastered chess and shogi.
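The tabula-rasa self-play idea can be made concrete with a toy sketch. The following is my own illustration, not DeepMind's code: an agent is given only the rules of a tiny Nim variant (5 stones, take 1 or 2, whoever takes the last stone wins) and learns good play purely from the outcomes of games against itself, starting from all-zero value estimates.

```python
import random

# Tiny Nim: N_STONES stones, players alternate taking 1 or 2 stones;
# whoever takes the last stone wins. The agent starts "tabula rasa"
# (all value estimates zero) and learns only from self-play outcomes,
# never from human games. Illustrative sketch, not DeepMind's algorithm.

N_STONES = 5
ACTIONS = (1, 2)

def train(episodes=20000, alpha=0.1, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    # Q[stones][action]: estimated value of taking `action` with `stones`
    # left, from the perspective of the player about to move.
    Q = {s: {a: 0.0 for a in ACTIONS if a <= s}
         for s in range(1, N_STONES + 1)}
    for _ in range(episodes):
        stones, history = N_STONES, []   # (state, action) per move
        while stones > 0:
            moves = [a for a in ACTIONS if a <= stones]
            if rng.random() < epsilon:                       # explore
                a = rng.choice(moves)
            else:                                            # exploit
                a = max(moves, key=lambda m: Q[stones][m])
            history.append((stones, a))
            stones -= a
        # The player who made the last move won. Walking backwards
        # through the game, the return alternates in sign because the
        # game is zero-sum and the players alternate turns.
        g = 1.0
        for s, a in reversed(history):
            Q[s][a] += alpha * (g - Q[s][a])   # Monte Carlo update
            g = -g
    return Q

Q = train()
# Greedy policy extracted from the learned values.
policy = {s: max(Q[s], key=Q[s].get) for s in Q}
```

After training, the greedy policy recovers the known optimal strategy for this game (always leave your opponent a multiple of 3 stones): take 2 from 5 stones, take 1 from 4. The agent was never told this; it emerged from self-play alone, which is the point of the tabula-rasa result.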

Okay, that all sounds impressive, but what does it really mean? For a start, here are a few disruptive things it may mean.

  1. The designers at DeepMind have quickly convinced themselves that learning from humans is suboptimal

  2. In this narrow space of intelligence (the games of Go, Chess and Shogi), this is proof that learning from humans IS suboptimal

  3. Also, in this narrow space, SuperIntelligence has been achieved

  4. Since some believe that learning is learning (a big assumption), why wouldn't all learning by machines be accomplished without human input?

  5. We now have an argument for WHY some people will want to implement Artificial General Intelligence. Or said differently, “don’t we want SuperIntelligence in as many areas as possible”?

Let me make sure that is clear for the reader. Unless someone can explain to the world how and why certain "learning" is DIFFERENT from learning the game of Go, Chess or Shogi, the brilliant minds at DeepMind have just proven to you that "learning" from humans is inefficient: that machines, when given a task to learn, will achieve human-level mastery and beyond, quickly. This information will be used to demonstrate flawed human thinking time and again, as the argument for replacing human thinking.

Said another way: AlphaGo Zero is PROOF that human thinking/learning is suboptimal. Let that sink in… What are some implications of THAT concept:

  1. How fast can I get tech into my head? Some will reach this conclusion as they accept the idea that human thinking is suboptimal and that, to remain competitive, they must "advance" as well

  2. What is the point of human thought? Another conclusion many will reach: just let the machine do it

Now, before we fall too far off the cliff, let me be clear: I am certain that machine intelligence will discover things, learn things and create things that were simply not possible using the human mind alone. Those developments will drive the next few centuries of economic growth and create substantial wealth and opportunity (who that wealth accrues to is an entirely different debate, but to believe it would be equally shared is naive). As a Capitalist and non-Luddite, I believe in the advancement of these technologies, and I believe they will succeed faster than most people have anticipated. The future for intelligence, discovery, productivity and exploration is vast and exciting. I am a fan.

But is that everything? Is it even optimal? It seems to me that we are leaving out a lot of key issues when we measure ourselves in the "better off" category. I am not sure these advancements are for the betterment of society. That's the key question for me. Why are we advancing society? What about society is advancing? I know the word "advancing" is loaded, as in: of course we want to "advance" society, we should always "advance" things. We always assume that being smarter, acquiring more wealth and making everything easier is better. I think this is a falsehood, one that misses the mark on our total well-being.

Here are some things that we have “advanced” because of technology:

  1. Income inequality

  2. Loneliness/Depression

  3. Lack of individual survivability

  4. Polarization of society

  5. Breakdown of community

  6. De-Valuing of people and human life

  7. Worship of Intelligence

  8. General drop in our physical fitness

  9. Loss of Faith

Of course those are only some of the negatives, and yet we won't even be able to agree that those are, in fact, all negatives. I am highlighting the point that we continue to use technology to "advance", but our measures of well-being are not being maintained and tested against the technological advancements to determine if we are ACTUALLY better off. We feel more productive (our companies certainly are), but do we have deeper, stronger relationships? Do we experience more love, contentment and joy? I am not sure that we do; so, when measured properly, maybe we have stepped back decades because of technology and didn't even realize it. That is difficult for most people to even contemplate, and their knee-jerk reaction is "of course things are better" or "those negatives don't apply to me". I am not sure that a fair, considered, and comprehensive assessment of well-being reaches the same conclusion. John Havens, a friend at IEEE, is leading the charge on this discussion, and I think it is fruitful. This YouTube link is a good overview of his thoughts on well-being.

On top of all of this, the abdication of thought, learning and intellectual growth to machines, combined with utter reliance upon machines for physical work, can be scary. I do agree that we may triumph over many of these challenges and continue to master technology for the betterment of society as a whole. However, it is important to raise these concerns and to consider whether we are measuring progress and advancement correctly.

I think the author Frank Herbert, who created the Dune books and franchise, was on to something as he considered where technological progress and our culture were taking us.

In his sci-fi classic series Dune, specifically the book God Emperor of Dune (1981), Leto II Atreides indicates that the Butlerian Jihad had been a semi-religious social upheaval initiated by humans who felt repulsed by how guided and controlled they had become by machines: "The target of the Jihad was a machine-attitude as much as the machines," Leto said. "Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments. Naturally, the machines were destroyed."

My greatest concern about the progress of artificial intelligence and automation is that the pendulum will have to swing too far before we figure out there is a problem. The reason the quote above exists, albeit in a work of fiction, is that Frank Herbert could envision a machine-intelligence-dominated society. A jihad (a violent clash often associated with the passion of religious fervor), not a revival, not a renaissance, not a shifting of gears, was required to stop the juggernaut of technology and machine intelligence in Herbert's fictional world. Works of fiction are not evidence; they aren't even an argument. But they may inform the thoughtful on a possible outcome of excessive adoption of machine intelligence.

Moderation and comprehensive evaluation of progress have never been traits of our species; therefore it is likely that our adoption of AI and automation will be excessive and, in the end, detrimental to humanity. The prescription for avoiding these issues is challenging. It runs against our general nature, and it may require us to break our obsession with technological "advancement" as it is currently measured. It requires a broader consensus on well-being to measure our prosperity and to decide how and when to replace ourselves with machine intelligence. AlphaGo Zero, and other AIs like it, will convince many that we should replace human intelligence. Personally, I can't imagine replacing the vast majority of human intelligence with machine intelligence, but it is the path we are on, and it leaves me concerned. Are you?