Answering Nick Bostrom's Desiderata (Part 1 of 4)

*italicized sections are direct excerpts from Bostrom’s paper

EFFICIENCY (from Bostrom’s 4 sections)

From these observations, we distill the following desiderata:

● Expeditious progress. This can be divided into two components: (a) The path leads with high probability to the development of superintelligence and its use to achieve technological maturity and unlock our cosmic endowment. (b) AI progress is speedy, and socially beneficial products and applications are made widely available in a timely fashion.

ForHumanity is concerned about a rush toward, or a fixed timeframe for, achievement. We agree that the applications and potential breakthroughs associated with Machine Superintelligence are life-altering, and for that reason we accept that there are significant consequences as well as benefits. Adequately addressing those consequences and crafting intelligent, inclusive solutions for all stakeholders should take priority over the endgame. Furthermore, ForHumanity is skeptical that our world can achieve the positive consensus required to avoid significant missteps even from a slow and considered approach, let alone from a rush to implementation. ForHumanity values each human being’s current existence and contribution to society in the present. Therefore, we advocate for a path of GREAT resistance, rather than least resistance. We are the tortoise in this discussion.

Bostrom refers to a SuperIntelligence that can “obviate the need for human labour and massively increase total factor productivity”. At ForHumanity, we prefer to view the whole picture of well-being and productivity on behalf of the human race. Here, Bostrom is focused on the economic math, which, candidly, I agree with. However, the sizable dislocations of “work” could lead to potential disaster on a societal, moral and quality-of-life level. Today, we cannot fully understand the impact of life without work: how it may diminish our creative process, our sense of self-worth and the value of our finite existence, not to mention the potential for aberrant behavior. The phrase “idle hands are the devil’s workshop” was not created as simple flowery language; it reflects some truth or character flaw in our humanity. It may not be an absolute truth, but it is a challenge. To be well fed, clothed and sheltered is good. To be bored, dull-minded and questioning the value of one’s existence probably means that human society is WORSE off, rather than improved, even if the machines are productive and thoughtful.

A reference to a cosmic endowment is beyond arrogant. I despise the word choice “endowment”, which means “an income or form of property given or bequeathed to someone”. Is the cosmos an opportunity, an adventure and a vast unknown? Certainly it is. Has it been given to us for our use? Absolutely not. Leaving aside questions of ownership, and of other life that may compete with us for the opportunities that exist in the cosmos, the word choice implies a positive economic benefit that is far from certain, especially when the costs are unknown, if not ignored.

● AI safety. Techniques are developed that make it possible (without excessive cost, delay, or performance penalty) to ensure that advanced AIs behave as intended. A good alignment solution would enable control of both external and internal behaviour (thus making it possible to avoid intrinsically undesirable types of computation without sacrificing much in terms of performance; c.f. “mindcrime” discussed below).

Today, we already have our top minds in neural networks and machine learning admitting that they don’t fully understand how their techniques achieved their goals. In certain circumstances we are choosing to value the ends and ignore the means. This is largely unacceptable and must be corrected for us to have any ability to “properly align” machine SuperIntelligence. Furthermore, why are we doing this? Why are we choosing to replace ourselves in the thinking and potentially creative process? Are we so dissatisfied with our lives and our errors?

As I ponder all of these questions, I always find myself concerned about being perceived as a Luddite. In a microeconomic sense, the VAST majority of technological advances are beneficial; they wouldn’t occur otherwise. The creative process looks at an operation, identifies weaknesses and aims to improve upon them. That is why technology can’t be halted. We would have to collectively choose to fight something innate in ourselves, “the will to improve”. And even if we collectively agreed to a halt, invariably there would be cheaters, and in the endgame of SuperIntelligence, a cheater might be handsomely rewarded. So ForHumanity does NOT advocate the halting of technological improvements. Safety and delay are an entirely different discussion. Should there be broad safety protocols? Absolutely. Should there be national and extra-national contingency plans for the forcible acquisition of rogue technology? Absolutely.

Finally, on alignment: our world exists in a state of separation and dislocation, often by choice. This represents the single greatest deterrent to positive alignment; there is very little that represents a global consensus on alignment. That is not to say that the world should not try to achieve some consensus and minimum standards. ForHumanity will gladly participate in discussions designed to enhance the safety of AI and preserve the rights of all humanity. We applaud any effort to do so, and advocate for the simplest and most basic of ideas to facilitate consensus and raise the level of safety, which currently stands on the ground floor.

Here is what I know/believe, in advance: there is the greatest incentive to cheat and to skirt safety and alignment concerns. Therefore, authorities must prepare immediately to monitor progress and to evaluate specific, implementable safety protocols and a standard of ethics today. Ideally this authority (a Watchdog) is extra-national, supported by national governments, and includes input from a broad-based think tank of more than technologists. To eliminate religion/faith-based thinking from this discussion would be to alienate a large portion of the human race. Corporations and governments should openly submit to the doctrine of safety and ethics prescribed by this organization, not dissimilar to the global climate change proposals. And therein lies a further challenge: getting key actors to cooperate and participate when it is clearly against their best interest. For example, today the United States, acting as a sovereign nation trying to advance its own national agenda, may be loath to give up its current advantage in AI by submitting to the limits of this extra-national organization (not dissimilar to the rejection of the Kyoto Protocol). A more practical precedent may exist in nuclear weapons technology, whereby those that have it ought to cooperate from the outset. China, the United States, Great Britain and Europe ought to collaborate at the outset of AI on the most basic of safety protocols, like an unavoidable kill switch and something akin to Asimov’s Three Laws.

ForHumanity also advocates for the right to “opt out” of this future society. The cost-benefit calculations in this desideratum are already out of line with some societal norms, and we estimate that an increasing number of people will look to “unplug” for a variety of reasons, such as the voting rights of machines, what constitutes life, privacy and the value of work. The use of, and association with, machine SuperIntelligence should be a choice. And that choice should be available to ALL people. As a society, our most prized possession is freedom; we want to ensure that freedom of choice is maximized and cherished for humanity with respect to machine SuperIntelligence.

● Conditional stabilization. The path is such that if avoiding catastrophic global coordination failure requires that temporary or permanent stabilization is undertaken or that a singleton is established, then the needed measures are available and are implemented in time to avert catastrophe.

As ForHumanity is not nation-state dependent, and is focused on achieving the optimum solutions for the benefit of all mankind, this is an easy desideratum to agree with. Any malevolence generated by an independent actor, designed to marginalize or eradicate a segment of humanity, should be opposed with the full force of all. However, this presupposes that it is an obvious threat (aka Skynet, from the 1984 movie The Terminator).

ForHumanity would also argue that a large degree of group-think associated with machine SuperIntelligence being a “be-all and end-all” solution for all of mankind fails to appropriately value core traits of humanity itself. Faith, Hope, Work Ethic, Compassion, Grace and Love are traits which we expect will remain elusive for AI. Human decisions made using these traits are often sub-optimal based on economic or even personal welfare value judgments, but that doesn’t make them wrong or meaningless. Many of our best decisions on a personal level may have been self-detrimental, but that is who we are, who we become and how we learn. That is the nature of our humanity and it is to be embraced, NOT optimized away.

Singular, global control designed to optimize the production and implementation of Machine SuperIntelligence violates a series of valuable tenets of humanity and nature. It certainly does not provide political diversification, and it is highly unlikely to provide social/cultural diversification. Rather, it will inspire a conformity and optimization that ForHumanity would find to be, ironically, “sub-optimal”. We would prefer coordinated, consensus-driven, separate actors achieving an optimal result as a function of diversification.

● Non-turbulence. The path avoids excessive efficiency losses from chaos and conflict. Political systems maintain stability and order, adapt successfully to change, and mitigate socially disruptive impacts.

The primary source of turbulence is likely to be the job market. ForHumanity believes that an increasing number of jobs will be eliminated by AI and automation. While there are those who argue for job creation, we believe that those job gains will be more than overwhelmed by the job losses. Each new job created will require smaller and smaller amounts of human labor as the cost of automation continues to fall and its capabilities continue to rise. Therefore it is our conclusion that dramatic shifts in the workforce will create the largest rifts in society and lead to the greatest degree of inefficiency as societies deal with a problem that is unprecedented. Because AI and automation will be seen as the culprits for this dislocation, a natural political reaction will be to CREATE friction and inefficiencies. So while we agree with the author that an efficient process is preferred over an inefficient one, we believe that political challenges will rise to create drag and a more inefficient process. A discussion of solutions to this inefficiency is beyond the scope of Bostrom’s paper or this critique, but it is necessary.

Part 2 — to follow.

Dec 15, 2016

Automation and the Destruction of Work — Not If, but When…

It is simply a function of TIME. The nature, speed and quality of our innovative capacity has become extraordinary. Look around: if you see some work being done and don’t believe that that job will eventually be done faster and more efficiently by a machine, then you are kidding yourself. The ONLY question is TIME. How long will it take us to build a robotic hand that seamlessly opens doorknobs of all kinds? (Check the DARPA challenges, which are making good progress.) How long before the thought and surveillance skills of a police officer are replicated by an agile robot with FAR greater sensory capability? Humans have an innate drive to see a challenge and overcome it. To not reach this conclusion is to doubt our willpower and turn your back on centuries of technological advances.

Now that is not to say that EVERY single job will be replaced. I am very comfortable with the idea that innovation will create NEW jobs; history and practicality dictate that. When you reach a new horizon, the view is different. New opportunities and new areas of study arise, and we will have to react to our new realities. So in the near term, after every innovation and new horizon, new work will be created. But if you can imagine a new job being created by innovation, you should be able to imagine the same new job being replaced by automation; it is only a question of how much TIME it takes for the automation to catch up.

What is work? For me, a simple way to examine work is as a combination of two elements: PHYSICAL LABOR + INTELLIGENCE. Every job has a different mix of those two elements. If you are a ditch digger, the job takes a lot of physical labor and not a lot of intelligence, whereas being a college professor doesn’t require a lot of physical labor but certainly uses a lot of intelligence. For centuries, man has developed tools that aid us in our work: tools to help us dig that ditch (like a backhoe) or tools that help our intelligence (like calculators and computers). We became more and more productive and could produce more output per employee. The more the tools aided us, the more productive we could be. This increase in productivity and profitability drives innovation. That is why automation is inevitable. Anything involving humans has limitations. We get tired, we need sleep and we are not interested in working 24/7. Thus there will always be a drive by the owners of capital to decrease reliance on human labor and replace it with AI & Robotics: AI for the “intelligence” and robotics for the physical labor. Machines operating just on par with humans are ALREADY more than 3x more efficient based on the 24/7 model. Machines also don’t require health care, annual salary increases, sick and vacation days, lunch breaks or time on their Facebook accounts.
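The “more than 3x” claim above follows from simple arithmetic on working hours. A minimal sketch, assuming a rough human work-year of 8 hours a day for 250 days (these figures are illustrative assumptions, not data from any study):

```python
# Annual working hours: a machine running 24/7 vs. an assumed human work-year.
human_hours = 8 * 250      # ~2,000 hours per year
machine_hours = 24 * 365   # 8,760 hours per year

ratio = machine_hours / human_hours
print(f"Machine-to-human hours ratio: {ratio:.2f}x")  # 4.38x, i.e. "more than 3x"
```

Even before accounting for sick days, vacation or lunch breaks, the round-the-clock machine gets more than four human work-years of hours out of each calendar year.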

Now you might question my equation of work, and I’m okay with that. Pastors, imams and rabbis do work, and that requires more than intelligence + labor. I understand that there is a human element missing from my equation; however, that human element is relevant in fewer and fewer jobs. Sales is often referenced by people, yet Amazon seems to do all right without salespeople. So what “humanity” do we bring to the equation? That’s a discussion for another time, but suffice it to say I do not predict that ALL jobs will go away, just many of them. Some jobs may persist because humans actually prefer humans over machines for that “human touch” (think of all those annoying automated help lines; they haven’t mastered those yet). Furthermore, I recognize that there is nuance to both physical labor and intelligence. It is not a one-size-fits-all world, and I think that is why most people find it difficult to project automation replacing most work. I don’t find that challenging, because I know the right motivations are there: increased profit and the desire to innovate and improve what we do. If a job proves too difficult to automate today, that challenge drives innovation and the desire to prove the doubters wrong. That is why I believe that comprehensive automation is simply a function of TIME, not ability.

So if you accept my premise, then the only remaining question about automation replacing MOST work is WHEN. When is the robot nimble, dexterous and precise enough? When is the AI fine-tuned enough to replace the intelligence portion of our equation? I own and operate a hedge fund, previously viewed as a “high intelligence” job, and I can tell you that that job has become 100% automated, so it certainly can be done and will be done. In each field there are programmers and technicians working to figure it out.

Many argue that the replacement of most work by automation is too far away to matter to us. I can’t tell you how troubled I am by this argument. It assumes that the solutions we need for our future problems are at our fingertips. Let’s say that the maximum job losses from automation are 100 years away. If the problems and changes to society take us 102 years to solve, we are already 2 years too late. We simply do not know how long it will take to figure things out. In some cases we may already be too late. Time is not a reason for us to avoid thinking about the consequences of automation. Delayed impact is a natural excuse for this generation, and we have a great track record of passing the buck down the road and leaving our kids to figure it out (see Social Security and Medicare). I suspect that is actually how most people view the problem. But I believe that waiting for the problem will not work for the truck drivers in this country: 2.4 million American jobs likely gone over the next decade as autonomous trucks take over. Think that is science fiction? Budweiser already completed its first 1,000-mile autonomous delivery. Amazon just announced cashier-less convenience stores; that’s another 3.6 million American jobs gone. When someone can tell me what these people will do for jobs, I will rest a little easier. My organization will try to work with these people today to plan for a future that they choose, rather than one that is chosen for them.

The final argument that many make about new innovations is that they will create many more jobs. I have already agreed that each new innovation creates new jobs. Pre-Internet, who would have guessed what a Web Development Specialist did for work? Was it like a beekeeper for spiders, coaxing them to make more intricate designs? So yes, new jobs will be created; I am convinced. It’s the number of new jobs that concerns me. Furthermore, it is the speed at which each new job converts to being fully automated that should have us concerned. As machines begin to program and teach other machines, the learning curve and the innovation speed will accelerate greatly. I am arguing that every single process we do requires less and less human input each day, and that the decline is accelerating. It will likely catch our society off guard.

Lastly, there is finite capital in the world buying finite goods, and those goods require less and less human input. That means, simply, less human work to produce the goods and services that we consume. I have heard arguments for “fake” or “fantasy” work, but that is a discussion for another day. Work as we know it is changing and being eliminated, one automation at a time. Join us at ForHumanity as we endeavor to understand what this means for our society and attempt to prepare for this new reality.

December 8, 2016

Thoughts on the #FutureofWork #Delange2016 Conference:


1) A remarkable collection of speakers. The multi-disciplinary approach of the Scientia team at Rice made for a well-rounded discussion. Bravo to Moshe Vardi for putting together a first-class program.

2) I was blown away by the siloed thinking of the practitioners in the AI and Robotics fields. I want to be careful to have empathy here, as that laser focus is clearly vital for each of them (Veloso, Kumar and Banavar) to be successful in their field, but it puts them deep in the weeds and makes it impossible for them to have a good grasp of the forest.

3) That mentality scares me when it comes to the people who are self-selecting to provide oversight in the area of AI & Robotics; Diane Bailey rightly scoffed at the self-selection bias. What might be worse is that they will be choosing the 5 non-profit board members to join them.

4) The audience, and by extension the world, is concerned. The panelists, broadly speaking, spent their time arguing “why not to worry”, and I suggest that the audience left unconvinced.

5) There were two primary arguments for why not to worry:

a) All technology has historically been for the benefit of mankind; it resulted in economic growth and job creation, so of course it will happen again.

b) If there are things to worry about, they are many, many years away.

6) Let’s critique those arguments. On the first: most of the arguments centered on historical mistrust of technology through the years. We heard frequent quotes from speakers citing past authors who felt that the technology of their day was going to lead to dire consequences. “So what is different now?” they argued. “Think of all the amazing new jobs that technology will create.” These arguments are good for making us question whether our concerns are reasonably founded, but they are NOT arguments as to why this time might not be different. Tomorrow I aim to post why I believe we are on the precipice of an era of technology that will destroy the need for human labor.

On the second argument, that these concerns might be decades and decades away and that by extension we have better things to worry about: this is sadly an obvious red herring. How are we supposed to know how long it will take to solve these problems, or to be ready to deal with the changes that technology brings? If the problem is 50 years away, but the solution will take us 49.5 years to achieve, then we’d better get to work. As Yogi Berra once opined, “it’s gotten late EARLY”. So don’t try to appease my concern with ambiguous timeframes. The timeframe for the solutions is equally ambiguous.

7) Universal Basic Income was a hearty topic of conversation and was generally reviled, for a series of reasons worthy of their own blog post, so that will be a follow-on piece.

8) The conference was on the Future of Work, but many speakers were concerned that even if work did NOT change, there would likely be a continuation, if not an acceleration (as a function of leverage), of the income inequality gap. I find these arguments ring very, very true.

December 1, 2016

Capitalism + Artificial Intelligence & Robotics = Socialism (Universal Basic Income)

Breaking down an entire lifetime of beliefs

As I have dived deeper and deeper into the future of Artificial Intelligence (AI) and Robotics and their impact on life, society and work, I have reached a conclusion that has shattered me.

For the majority of my 44 years, I have been a capitalist. My capitalist ideals informed my governmental preferences: smaller government and less government involvement in markets of all kinds. I always made exceptions for certain elements, like border security and the military; those were clearly the domain of the federal government. In 1787, I would have been a staunch Federalist standing beside Alexander Hamilton, arguing that the federal government was uniquely qualified to solve THOSE problems. Since 1787, however, the federal government’s power has grown exponentially into all facets of our lives. Having lived amidst this pervasive federal government for most of my 44 years, I have been a staunch anti-Federalist. Jefferson and Madison would have been proud, as my frequent, if not constant, answer to a problem has been: “that’s best solved by individuals, the community and free markets; the federal government has no place and is probably distorting that market”.

So on this capitalist foundation, as I have considered a service like welfare, I have understood its rationale and disliked it from afar. I believed that the markets could sort out support for the unemployed, and that welfare probably encouraged some amount of unemployment. But unfortunately I am no longer able to hold out hope that free markets can solve the problem of unemployment, because NOW and in the FUTURE, capitalism will CAUSE substantial unemployment.

Saying that last sentence hurt. A robust economy should create jobs (says my 44 years of capitalist training) and create wealth for all, even if the owners of capital profit handsomely. But in a world of AI & Robotics, those centuries-old ideas, starting with Adam Smith’s Wealth of Nations, are now broken.

What is work? I see it as a combination of two things: Physical Effort + Intelligence. Some jobs are easier on one half of the equation. For instance, being a college professor is fairly easy on the physical labor side of things and requires a substantial portion of Intelligence, while stocking a shelf doesn’t require a lot of mind power but certainly requires plenty of Physical Effort. Since we discovered fire, humans have developed tools that aid our work. Calculators and personal computers assisted on the intelligence side, while bulldozers and farm combines allowed us to accomplish more physically. These tools were complements to our intelligence and our physical effort. With the advent of AI & Robotics, we are no longer employing a complementary tool; we are now fully replacing both intelligence and physical effort. And that is where it all breaks down.

Now don’t get me wrong: we are in the early stages of AI & Robotics. There is still substantial work to be done, and today humans are driving the development. So in the near term, we have new jobs being created. It’s just that they aren’t sustainable. Machines are already programming machines. Robots are already building robots, and when the two are combined, there is no further need for human involvement. Even if there were a need for a human, the leverage that he or she could achieve is nearly infinite. It is fairly easy to envision, for example, one or two people overseeing the entire manufacturing run of an automobile.

So that gets me to the breakdown. Capitalism will drive AI & Robotics all the way to the end, all the way to the marginalization of human work. It has to; it is simply doing what it does, which is to maximize profit for the owner of capital. As the owner of capital, I can employ someone and pay them a wage, and they will work for me for about 8 hours per day, 250 days a year. They will need vacation and sick days, health insurance and workers’ compensation, and next year they will want slightly higher wages to do the same job. Or I can replace the job with a robot, which requires an upfront investment that is substantially higher, but my return on investment is immediate. The robot will work 24 hours per day, 365 days per year; that is more than 3x more productive. No sick days, no workers’ comp, no wage increase next year. From the capitalist perspective, this is what we call a “no-brainer”. Granted, this is a simplified cost-benefit analysis, and in many cases TODAY the human remains the BETTER choice for the work, but that is simply a function of reducing the cost of automation and creating automation that can accomplish more and more. Both of those things are progressing, so it is simply a matter of time.
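The simplified cost-benefit logic above can be sketched as a break-even calculation. Every dollar figure below is a hypothetical placeholder chosen only to illustrate the shape of the decision, not a real cost, and the sketch ignores the robot’s extra 24/7 productivity, which would only shorten the break-even:

```python
# Hypothetical break-even: upfront robot cost vs. recurring human labor cost.
# All numbers are illustrative assumptions, not real data.
annual_wage = 50_000          # assumed starting salary
annual_overhead = 15_000      # assumed health insurance, workers' comp, etc.
wage_growth = 0.03            # assumed yearly raise

robot_upfront = 250_000       # assumed one-time purchase and installation
robot_annual = 10_000         # assumed power and maintenance

def years_to_break_even(max_years=30):
    """Return the first year in which cumulative human cost exceeds
    cumulative robot cost (upfront + running), or None if never."""
    human_total, robot_total = 0.0, float(robot_upfront)
    wage = annual_wage
    for year in range(1, max_years + 1):
        human_total += wage + annual_overhead
        robot_total += robot_annual
        wage *= 1 + wage_growth
        if human_total > robot_total:
            return year
    return None

print(years_to_break_even())  # 5 under these illustrative numbers
```

Under these assumptions the robot pays for itself in five years, and because the wage compounds while the robot’s running cost does not, the gap only widens after that.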

Thus… the breakdown, the place where my head exploded. Capitalism will lead to 60–80% unemployment in the future. Said differently, Capitalism is now DESTROYING jobs, we have reached the turning point and it kills me to say it, because there is a follow on. Substantial structural unemployment of this kind is a problem that can ONLY be solved by the federal government. Universal Basic Income becomes the only solution. So Capitalism + AI & Robotics = Socialism (Universal Basic Income).

Mind completely blown.

More to follow as we dive deeper into many of these elements, question the assumptions covered above and challenge ourselves to think about solutions.

I started ForHumanity to raise awareness among the masses about the expansion of AI & Robotics (not to fight it), to proactively examine rules, laws and practices to protect humanity’s rights in this new world, and to always advocate for humanity. I hope you’ll join me.

Future topics include:

  1. Won’t AI and Robotics create a whole bunch of new jobs, like all previous technology?
  2. Sure, AI & Robotics can replace some work, but not MY JOB.
  3. What is Universal Basic Income (UBI)? Is it the only solution?
  4. What is society like without work?
  5. Many, many more topics.