Machine SuperIntelligence

Answering Nick Bostrom's Desiderata (Part 4 of 4)

*Bostrom’s excerpts are in italics

Responsibility and wisdom. The seminal applications of advanced AI are shaped by an agency (individual or distributed) that has an expansive sense of responsibility and the practical wisdom to see what needs to be done in radically unfamiliar circumstances.

So Bostrom’s primary concern underlying this section of the desiderata is that the machine SuperIntelligence might become so good at persuasion that the idea of “voluntary consent” becomes meaningless, as we humans would be powerless to resist this “super” persuasion. I accept this premise and agree with Bostrom that this should be a concern, and quite honestly one that should be avoided from the outset.

To avoid these challenges we must be proactive at every step along the way, designing controls, ethics, standards and data inputs that are consistent with our ideals and desires as all of humanity. Determining those standards is an enormous task; seeing them implemented and adhered to is an even larger one. Here I am entirely with Bostrom, in that I believe NOW is the time to be concerned and proactive.

One of the things that confuses me about machine SuperIntelligence is the all-encompassing nature that forecasters assign to it. Flip the switch and instantly it is in charge of everything, doing all of the thinking, exploring everything to be explored. I simply don’t think that needs to be true — or maybe I’m missing something. Are we building a tool? Or are we so dissatisfied with our own efforts as a species that we truly want to replace ourselves? Do we believe that life will be so much better when everything has been thought of and discovered? I’m dismayed to tell you that I think AI developers and dreamers believe the latter. Everyone is in a rush to cure cancer, live forever and discover ways to travel at light speed, so since we can’t do it now, let’s create a SuperIntelligence, let it run completely unconstrained, and nothing but sunshine and happiness will come out the other end and all problems will be solved. Why is intelligence the answer to every question? Sometimes the answer to “how do I get my baby to stop crying?” is simply the loving touch of the baby’s mother, not the knowledge that the baby wants its mother.

Furthermore, unconstrained, what if we don’t like the answers? What if the SuperIntelligence tells us we have 5 billion too many people? What if it suggests plowing under the majority of the earth’s farms in order to build solar arrays to provide it with enough power to continue its great work? We simply don’t know the conclusions that will be reached. Unconstrained, we can hardly fathom the range of outcomes, so why do it? ForHumanity supports the continued expansion of AI research and development. ForHumanity also advocates for the use of AI, even general AI and machine SuperIntelligence, as a tool rather than as an unconstrained, sentient entity.

Our mission at ForHumanity is perfectly aligned with Bostrom’s Responsibility & Wisdom desideratum. We aim to be broadly representative of all of us, a daunting task, I grant. Our four orders of operation are our four A’s. AWARENESS — We will strive to raise awareness of these issues to the masses and solicit quality feedback to enhance our representative nature. ANALYSIS — We will engage the AI and surrounding communities to discuss and iterate proposals for policy and standards and endeavor to be a value-added input into the process. ADVOCACY — where appropriate, ForHumanity intends to engage policy makers to help craft the laws and standards by which AI must abide. AUDIT — to act as an interactive watchdog, working directly with the developers of AI to ensure that best practices are maintained and that policy and industry standards are being adhered to on behalf of humanity.

Via the 4 A’s, ForHumanity unequivocally agrees with Prof. Bostrom’s assertion that there needs to be a proactive process to consider the criteria by which machine SuperIntelligence is being developed. Furthermore, ForHumanity aims to be integrated into the development process to ensure that the standards are implemented according to the principles and consistent with best practices. Finally, if machine SuperIntelligence is achieved, ForHumanity would be well positioned to participate in the governance of the MSI on behalf of humanity. However, ForHumanity will always side with the broad goals and directives of humanity, even at the expense of machine SuperIntelligence.


Answering Nick Bostrom's Desiderata (Part 3 of 4)


Mind crime prevention. AI is governed in such a way that maltreatment of sentient digital minds is avoided or minimized.

Wow, is this a troubling question. I understand it, I truly do. Furthermore, I am afraid of being on the same side of history that treated African-Americans as less than human and ignored women as voters for a century and a half. But I am also a science fiction fan. I watch Humans and Westworld, and certainly those androids don’t deserve to be mistreated. So it comes down to where the line gets drawn. Are we creating sentient, feeling partners, co-inhabitants and companions? Or are we creating tools as partners, co-inhabitants and companions?

To begin with, ForHumanity can agree that like any property, it (meaning the sentient machine) is afforded certain protections from theft, abuse and damage caused by those who do not own the property. However, if these sentient digital minds are only property, then the owner is free to do WHATEVER with that property as long as it doesn’t infringe the rights of others. There may be some who quibble with my interpretation of the law here, but I am not trying to make a purely legalistic argument, rather a moral/intent-based one. If we upgrade these sentient minds to the level of our treatment of pets, then the owner is held to a reasonable standard of good care. The reason for this upgrade in protection is comfortably understood by most in society: we attribute the ability to “feel” to pets, and the majority of society is uncomfortable seeing a “feeling being” harmed. ForHumanity can support this level of protection for these sentient minds because as human beings we value good treatment. We care to avoid causing hurt and pain to beings that can “feel”. But already we are pushing the boundaries: can a sentient digital mind feel? Is there a difference between the electrical impulses in our brains and the electrical impulses in a neural-network AI when it comes to something like pain?

So this gets us to the uncomfortable, theoretical space of “what does it mean to be human?” This also takes us to the unquantifiable realms of faith. I know that science struggles with the subjects of faith, but somewhere between 60–80% of the world’s population believes in God. Therefore, for many, there is a uniqueness that is attributable to human beings. There is a value to the soul that places us above all other animals. This belief would extend to machines as well. Even if everyone accepted that our brains were simply electrical impulses, like a machine, the majority would hold that our souls have value. So, as with animals, much of humanity will ascribe a specialness to ourselves that places us over and above a sentient digital mind. I suspect that view is in conflict with the existing AI community.

So here is where I see the danger occurring. As the proponents of the sentient digital mind push towards personhood rights, a significant portion of society will push back. I believe this will lead to a genuine schism in society. If society thought that legalizing gay marriage was a challenging issue, the right of a machine to vote or the right to marry a machine will create an uproar. I actually think it will be a breaking point for many and will create the impetus for some societies to break apart. Potentially states or entire countries may reject the movement towards machine personhood. This will be magnified with increased control by machines, especially in the wake of the expected decrease in work as a result of automation. This schism will then get magnified by efforts at genetic modification, cloning and altogether tampering with the “miracle of life”. The idea of living forever, digital downloads of minds and miracle science that could extend life indefinitely are anathema to those who believe in God. Further, these efforts would actually be viewed by many as “playing God” and thus will drive people apart.

From the perspective of science, the concepts of “living forever” and editing genes to eliminate certain diseases seem to be noble goals. I think science is missing some important things about humans. It is our faults and errors that often define us… even positively. It is our eventual death that gives each day meaning. Science may rob humanity of these beautiful traits and not even know that it is stealing. Do Not Resuscitate orders will become Do Not Perpetuate orders. Beings whose lives become extended will likely benefit from that knowledge and experience, and that will further divide them from those who choose mortality, creating a sort of master race.

I could go on and on about this topic and I likely will expand on it elsewhere, but with regards to an answer for Professor Bostrom’s Desiderata, this point is decidedly murky. ForHumanity can support the point to an extent and then will switch sides very quickly if the point is carried too far and treads on some of the key factors that define our humanity.

Population policy. Procreative choices, concerning what new beings to bring into existence, are made in a coordinated manner and with sufficient foresight to avoid unwanted Malthusian dynamics and political erosion.

Why, why and why? Here I got frustrated with Bostrom. I have one thing to say: “The One Child Policy”. For those who are not aware, the Chinese government in 1979 instituted a policy of one child per family. The policy has been so successful that it was phased out in 2015, sarcasm intended.

Essentially, this policy has destroyed the future for China by destroying the generational balance. Created to stem the tide of overpopulation, it has massively slowed China’s birth rate and the expansion of the overall Chinese population, so it achieved its goal. Here’s the problem: it created such a ticking time bomb on social welfare, public policy, gender balance and a million other things beyond the scope of this blog (type “one child policy” into Google for numerous articles on how this failed) that it is hardly a model for future planning. And why would we get this right the next time?

But that is not even the argument. The argument is that the right to procreate is one of our most fundamental and personal rights. ForHumanity is absolutely opposed to any authority imposing any sort of restriction or associated planning on the right to have children. That decision, every time, should be made solely by the mother and the father. I don’t actually believe that this requires further discussion. The answer is “No”.

Now, in a world of scarce resources, where human beings are making their own choice to preserve a limited supply of resources, maybe we can have a discussion. However, the entire previous section of the desiderata was arguing for a multitude of wealth, not a scarcity. Thus this request can be nothing more than authoritarian. Even in a world of scarce resources, you would focus on the resource allocation. If that resource was food and the family chose to have seven children instead of two, then that is their choice to make using the exact same resources. ForHumanity is opposed to any form of central planning or legislated policy with regards to procreation.

Answering Nick Bostrom's Desiderata (Part 2 of 4)

*italicized sections are direct excerpts from Bostrom’s paper

Universal benefit. All humans who are alive at the transition get some share of the benefit, in compensation for the risk externality to which they were exposed.

Bostrom’s point about existential risk is extremely concerning to ForHumanity, and highlights our raison d’etre. These developments in Machine SuperIntelligence (MSI) are likely to occur with much of the public blind to the developments. Then MSI will arrive on the scene and change everything instantly, like the atomic bomb at Hiroshima. Awareness, and also a certain amount of non-proprietary transparency from AI developers, will reap great benefits for the public and facilitate universal benefits. Certainly, substantial expansion of the economy will benefit the world broadly. It is the distribution of that wealth that is the problem. Many will NOT participate in the MSI race, and thus their reliance on the generosity of spirit of the creators of this intelligence is extreme.

ForHumanity is supportive of an MSI development process that is driven by corporations, instead of governments, insofar as they are more likely to make the optimum decisions to achieve SuperIntelligence. Corporations today are more global than any nation, often having offices in many of the countries where machine learning is being advanced. Furthermore, those corporations are often publicly traded and thus are already creating a broader, albeit non-inclusive, distribution of the wealth via their shareholders. We also recognize that corporate control has plenty of challenges. Corporations need not be moral. They are also profit-maximizing, and this is where the problem lies. Once Machine SuperIntelligence is achieved, it should instantly no longer be owned by the corporation or collaborative that achieved the goal. This of course flies in the face of 250 years of capitalistic thought.

Ideally, when that MSI switch is flipped, the group that created it would hand it off to humanity at-large and collect its substantial bonus, but NOT continue to reap the ongoing rewards nor “control it”, whatever that may mean. To weaponize machine SuperIntelligence or make it federally owned, in the manner of the Manhattan Project (a naturally militaristic endeavor for its time), would both polarize the process and introduce natural bureaucracy, without reasonable benefit. We believe that this would quickly turn into a race to the bottom and would likely lead to increased risk, which is why we prefer corporate control for the time being. However, this begs the question of independent oversight. Here ForHumanity advocates for a powerful NGO-esque watchdog group, which can interface freely with AI corporate developers, endorse safe practices and then prepare the masses, governments and the world for AI as we approach general intelligence. Also, this group, if properly governed and guided, might become the steward of a general MSI.

Magnanimity. A wide range of resource-satiable values (ones to which there is little objection aside from cost-based considerations), are realized if and when it becomes possible to do so using a minute fraction of total resources. This may encompass basic welfare provisions and income guarantees to all human individuals. It may also encompass many community goods, ethical ideals, aesthetic or sentimental projects, and various natural expressions of generosity, kindness, and compassion.

The economic explosion that some predict will accompany machine SuperIntelligence seems lofty to me. The projections assume a level of resource availability that simply may not exist on earth. Furthermore, scalability remains a function of physical construction (whether that is chips and motherboards or actual automation) and cannot be achieved instantaneously. However, the basic concept of an equitable distribution of the wealth created by an MSI-driven wealth explosion is an easy idea to agree with, so we support Bostrom on this point.

However, that outcome appears more likely as an outlier. It further assumes that humanity does not choose another pathway: constrained machine SuperIntelligence. I have heard some discussion in the community; people refer to it as the “control problem”, but maybe it is our optimum outcome. Instead of “turning on a switch and watching the SuperIntelligence go”, why not have natural circuit breakers built in, where humanity can take a pause, re-assess potential impacts and then continue the process? For instance, if machine SuperIntelligence were NEVER allowed to control currency/capital and instead always relied upon a beneficial human owner or owners, then you have a natural human circuit breaker (think of an appropriations committee). Or if MSI were unable to self-replicate, you have strong bounds under which the intelligence can operate. Finally, the power required for MSI will likely be enormous at the outset. Strong limits on the MSI’s ability to access power may provide the control measures that humanity requires, to at least assess next steps and create a process that is more manageable and more easily understood by humans. This approach probably leads to a dramatically more gradual expansion of both knowledge and wealth creation.
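The circuit-breaker idea can be sketched in code. The sketch below is purely illustrative: the protected resource names and the `human_approves` callback are invented for this example, not an actual safety API.

```python
# Toy sketch of a human "circuit breaker": any action that touches a
# protected resource class is held until a human owner signs off.
PROTECTED = {"capital", "self_replication", "power_draw"}

class CircuitBreakerError(Exception):
    """Raised when the human owner vetoes a protected action."""

def execute(action, resources, human_approves):
    # `human_approves` stands in for the human owner or committee;
    # it is asked separately about each protected resource touched.
    for resource in sorted(PROTECTED & set(resources)):
        if not human_approves(resource):
            raise CircuitBreakerError(f"human veto on {resource!r}")
    return action()

# A task that only uses unprotected compute runs without a pause...
print(execute(lambda: "done", ["compute"], lambda r: False))
# ...while one drawing on capital proceeds only with explicit approval.
print(execute(lambda: "done", ["capital", "compute"], lambda r: r == "capital"))
```

The point of the design is simply that the pause is structural, not voluntary: the intelligence cannot reach the protected resource class at all without a human in the loop.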

However, should MSI lead to a significant explosion of resources and subsequent wealth, there is no question that this ought to be shared as widely and as readily as possible. Currently, the world does not operate like this. The income inequality gap is at an all-time high and likely to accelerate despite significant humanitarian and charitable efforts to shift wealth from a wealthy class to a less privileged group. The mind shift that needs to occur to properly redistribute this wealth seems to run counter to the desires of our individual humanity. Many of us have a natural desire to compete, to win and to expect to be rewarded for our victories. I am more concerned about our ability to succeed in this endeavor than about our ability to actually achieve MSI. The result of a failure to achieve an equitable redistribution of wealth is that many will remain impoverished and oppressed. We support the idea of “Magnanimity”, but remain skeptical of humanity’s ability to achieve this successfully. That skepticism will not prevent us from trying.

Continuity. The path affords a reasonable degree of continuity such as to (i) maintain order and provide the institutional stability needed for actors to benefit from opportunities for trade behind the current veil of ignorance, including social safety nets; and (ii) prevent concentration and permutation from radically exceeding the levels implicit in the current social contract.

The current growth rate of wealth inequality globally can be largely attributed to the growth and application of technology. Technology reduces costs and increases efficiency, which is how wealth is created. This wealth accrues to shareholders and capital owners. In addition, automation increases the return to capital and decreases the return to employees as their labor input is decreased. This accelerates the inequality.
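As a toy illustration of that shift (all numbers here are invented for the example, not real economic data):

```python
# Toy illustration: automation reduces the labor hours needed for the
# same revenue, so the capital share of income rises at labor's expense.
def factor_shares(revenue, hourly_wage, labor_hours):
    labor_income = hourly_wage * labor_hours
    # Whatever revenue is not paid out as wages accrues to capital owners.
    return labor_income / revenue, (revenue - labor_income) / revenue

# Before automation: 1000 in revenue requires 30 hours of labor at 20/hour.
labor_before, capital_before = factor_shares(1000, 20, 30)  # 0.6 labor, 0.4 capital
# After automation: the same 1000 in revenue needs only 10 labor hours.
labor_after, capital_after = factor_shares(1000, 20, 10)    # 0.2 labor, 0.8 capital
print(capital_before, capital_after)
```

The revenue is unchanged, but the share flowing to labor falls from 60% to 20%, which is the acceleration of inequality described above.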

Taking MSI to an extreme, where it is the “last invention humans ever need create”, you then have all of the wealth concentrated in the owner of the MSI, or potentially the MSI itself. This represents a massive problem and must be solved. If one subscribes to the notion that an MSI, discovered and unleashed at the corporate level, will be turned over to humanity at-large, then you’ve dealt with the issue. However, that is a big “if”. At least in the United States, contract law provides no precedent for ownership by humanity at-large; instead, the only semblance of large-scale universal control is at the nation-state level, where we have the powers of eminent domain. This may actually be WORSE. Imagine a corporation is about to turn on an MSI and the United States or some other country steps in and says, “I’ll take that”. The fear it would engender in other nations might lead to poor decisions by other nation-states. Decisions made out of fear. So the ownership question flies in the face of “continuity”.

Job destruction and the future of work is another substantial “discontinuity”. I won’t go into the argument here, as I’ve made it before, as have many others, but it is sufficient to say that today’s society is ill prepared for 50–80% unemployment. The wealth inequity and the potential damage to individual self-worth that it will create are massive upheavals to society as it stands today. So I don’t see how “continuity” is even remotely achievable.

On a separate thought on continuity, I see the advent of MSI creating a substantial divide in society as machines become more integrated into it, especially when they begin to achieve “personhood” and the associated rights and privileges (lest we think this is sci-fi talk, the EU parliament has already recommended some steps toward “personhood” for machines). I can see where a schism is likely to form. I suspect this schism will result in a large number of communities and people “opting out” of a machine-intensive society. The “opt-out” option is a fundamental (constitutional in the US?) human right that I know will be challenged time and again by government and by society at-large in a machine-driven world. It is vital we protect this right now to maximize continuity for all. Luddite is already a pejorative word. It, like all other monikers designed to demean, is prejudicial, and our society today should support the rights of those who choose NOT to participate in a machine-driven/controlled society. I dare say that society should protect and guarantee those rights, as I suspect this group will be a minority. Continuity ought to mean “continuity” from today, for all, based upon their right to life, liberty and the pursuit of happiness, even if that means a life without MSI and automation.

Distilling Nick Bostrom’s Machine SuperIntelligence (MSI) Desiderata

In the process of answering Bostrom’s desiderata on Machine SuperIntelligence (MSI), I’ve gotten a series of requests from friends and followers such as “Desiderata?” or “what the heck did he just say?”. I also got a number of comments like “I couldn’t get through it”, “fell asleep” and “too dense”. All of this reminded me that part of the mission at ForHumanity is to make sure that all people can understand the challenges that face us from AI and machine SuperIntelligence. So, with no disrespect meant towards Professor Bostrom, I would like to distill his desiderata down into a quick read and bullet points. Professor Bostrom does some nice work in creating this document. ForHumanity doesn’t agree with all of it and will be prescribing some alternative thoughts, but unless the masses are more aware of the issues, how can we expect them to respond at all?

  1. Desiderata — a fancy word for “the essentials”. Nick Bostrom is the author of the book SuperIntelligence, which moved Gates, Hawking and Musk to believe that “AI is the single greatest challenge in human history”. Bostrom is trying to lay out for us the things that we need to do for a successful discovery and implementation of Machine SuperIntelligence.
  2. Machine SuperIntelligence — there are two kinds of artificial intelligence: Narrow and General. We do not have General AI today; we have many, many pockets of narrow AI. The simplest example of narrow AI is a calculator. It does computations faster than we can. Narrow AI is everywhere today: in your Amazon, your Facebook, your smartphone translators and your car. General AI is defined as a machine intelligence that thinks about all the things that humans think about, oftentimes better than we do. Lots of movies have been made about this; do not discount the visions that you see in these movies, they are based in aspects of reality.
  3. When will we have General AI? There is no agreement on this point. Some believe as early as 2025 (10% of researchers), others argue that it is more like 30 years away (50% of researchers).
  4. Why should I care about General AI? Many in AI research agree with Bostrom that the discovery and implementation of Machine SuperIntelligence might be the last thing that human beings EVER discover. That means a VERY different world from today, no matter how you think about it.
  5. So again, why should I care NOW? The best answer that I can give you is that AI is being developed and enhanced EVERY DAY, and we don’t know how long it will take us to create the right safeguards, the right guidelines, the right oversight, the right laws and the right changes in society to deal with the implementation of Machine SuperIntelligence. If Machine SuperIntelligence arrives in 2030, but the solutions needed to make it SAFE arrive in 2033, we are already 3 years TOO LATE.
  6. So back to Bostrom’s Essentials — I am going to list them, explain briefly and then leave a note of agreement or disagreement from ForHumanity’s perspective
  7. Efficiency — Bostrom would like Machine SuperIntelligence (MSI) to arrive as quickly and as easily as possible, eliminating as many obstacles as possible because it would reap such great rewards. Bostrom wants everyone who can benefit to benefit as soon as possible. (DISAGREE)
  8. MSI risk — also known to Terminator movie watchers as SKYNET. The idea here is that the introduction of MSI could create a disaster scenario. Bostrom argues for considerable work in this space. (AGREE)

  9. Stability and global coordination — since it has never existed, I hope this isn’t an essential. Bostrom argues that rogue application by nations or organizations for their exclusive benefit would be a bad thing and thus we should all come together in harmony. Good luck with that… (AGREE — but really?)
  10. Reducing negative impact of MSI — Bostrom is concerned that the move to MSI will have some revolutionary impacts and that some of those will be negative. He argues we should deal with them, but makes little or no attempt to figure out what they might be. Here’s one: how about the possibility of 60–80% unemployment in the traditional sense? That might be a challenge to deal with. So here I agree with Bostrom that we want to reduce the negative impacts; ForHumanity just plans to try to deal with them. (AGREE)
  11. Universal Benefit — Bostrom argues that all humankind should benefit from MSI. Of course we agree; the How? seems to be the challenge. (AGREE)
  12. Total human welfare — here Bostrom relies upon Robin Hanson’s spectacular estimates of future wealth as a result of MSI. For example, Hanson takes current global GDP of 77.6 trillion USD and expands it to 71 quadrillion USD. Yeah… if we have it, great, but let’s not plan on it. (AGREE — lovely idea)
  13. Continuity — it is argued that these changes should be met in an orderly, law-abiding fashion, where everyone is treated fairly and taken care of properly. (AGREE)
  14. Mind Crime Prevention — here Bostrom started to lose me. Arguing already for the rights and protections of a machine mind seems premature; however, European Union authorities have already recommended it for actual policy. This issue is very tricky and I believe will create a schism in society. For now: (DISAGREE, but this needs a much longer explanation)
  15. Population control — Bostrom argues that because people will be living forever, we can’t have too many people, so births should be controlled and coordinated. First, living forever, really? Digitally downloading ourselves into machines, really? These are essentials? This issue is filled with difficult theoretical, philosophical and religious questions, so I will simply say that ForHumanity believes in the right NOT to participate in that world as well. (DISAGREE)
  16. Group control, ownership and governance — Bostrom argues that MSI should be governed by an enlightened, selfless and generous group. This group should be thoughtful and creative in their solutions to the new problems created by the MSI — but isn’t that what we created the MSI for? I am being snarky, of course. Human beings make mistakes; we have flaws and they are beautiful. I hope that we have a chance to be in charge and that we respond with grace, self-sacrifice and enlightened governance for all people. (AGREE)

So that was Bostrom’s desiderata; I hope this summary was helpful for everyone. I will of course continue to delve deeply into the work, as it deserves a thoughtful and thorough response. ForHumanity agrees with the importance of the discussion; our challenge is to bring practical solutions to many of these issues and to aid all of us in achieving Bostrom’s magnificent Machine SuperIntelligence world.

Answering Nick Bostrom's Desiderata (Part 1 of 4)

*italicized sections are direct excerpts from Bostrom’s paper

EFFICIENCY (from Bostrom’s 4 sections)

From these observations, we distill the following desiderata:

● Expeditious progress. This can be divided into two components: (a) The path leads with high probability to the development of superintelligence and its use to achieve technological maturity and unlock our cosmic endowment. (b) AI progress is speedy, and socially beneficial products and applications are made widely available in a timely fashion.

ForHumanity is concerned about a rush or a fixed timeframe for achievement. We agree that the applications and potential breakthroughs associated with Machine SuperIntelligence are life-altering, and for that reason we accept that there are significant consequences as well as benefits. Adequately addressing and considering those consequences, and crafting intelligent and inclusive solutions for all stakeholders, should take priority over the endgame. Furthermore, ForHumanity is skeptical about our world being able to achieve the positive consensus required to avoid significant missteps from a rush to implementation, let alone from a slow and considered approach. ForHumanity values each human being’s current existence and contribution to society in the present. Therefore, we advocate for a path of GREAT resistance, rather than least resistance. We are the tortoise in this discussion.

Bostrom refers to a SuperIntelligence that can “obviate the need for human labour and massively increase total factor productivity”. At ForHumanity, we prefer to view the whole picture of well-being and productivity on behalf of the human race. Here, Bostrom is focused on the economic math, which candidly I agree with. However, the sizable dislocations of “work” could lead to potential disaster on a societal, moral and quality-of-life level. Today, we cannot fully understand the impact of life without work: how it may diminish our creative process, our sense of self-worth and the value of our finite existence, not to mention the potential for aberrant behavior. The phrase “idle hands are the devil’s workshop” was not created as simple flowery language; it reflects some truth or character flaw in our humanity. It may not be an absolute truth, but it is a challenge. To be well fed, clothed and sheltered is good. To be bored, dull-minded and questioning the value of one’s existence probably means that human society is WORSE off, rather than improved, even if the machines are productive and thoughtful.

A reference to a cosmic endowment is beyond arrogant. I despise the word choice “endowment”, which means “an income or form of property given or bequeathed to someone”. Is the cosmos an opportunity, an adventure and a vast unknown? Certainly it is. Has it been given to us for our use? Absolutely not. Leaving aside questions of ownership and other life that may compete with us for the opportunities that exist in the cosmos, the word choice implies a positive economic benefit that is far from certain, especially when the costs are unknown, if not ignored.

● AI safety. Techniques are developed that make it possible (without excessive cost, delay, or performance penalty) to ensure that advanced AIs behave as intended. A good alignment solution would enable control of both external and internal behaviour (thus making it possible to avoid intrinsically undesirable types of computation without sacrificing much in terms of performance; c.f. “mindcrime” discussed below).

Today, we already have our top minds in neural networks and machine learning admitting that they don’t fully understand how their techniques achieve their goals. In certain circumstances we are choosing to value the ends and ignore the means. This is largely unacceptable and must be corrected for us to have any ability to “properly align” machine SuperIntelligence. Furthermore, why are we doing this? Why are we choosing to replace ourselves in the thinking and potentially creative process? Are we so dissatisfied with our lives and our errors?

As I ponder all of these questions, I always find myself concerned about being perceived as a Luddite. In a microeconomic sense, the VAST majority of technological advances are beneficial; they wouldn’t occur otherwise. The creative process looks at an operation, identifies weaknesses and aims to improve upon them. That is why technology can’t be halted. We would have to collectively choose to fight something innate in ourselves, “the will to improve”. And even if collectively we agreed to a halt, invariably there would be cheaters, and in the end game of SuperIntelligence, a cheater might be handsomely rewarded. So ForHumanity does NOT advocate the halting of technological improvements. Safety and delay are an entirely different discussion. Should there be broad safety protocols? Absolutely. Should there be national and extra-national contingency plans for the forcible acquisition of rogue technology? Absolutely.

Finally on alignment, our world exists in a state of separation and dislocation, often by choice. This represents the single greatest deterrent to positive alignment: there is very little that represents a global consensus on alignment. That is not to say that the world should not try to achieve some consensus and minimum standards. ForHumanity will gladly participate in discussions designed to enhance the safety of AI and preserve the rights of all humanity. We applaud any effort to do so and advocate for the simplest and most basic of ideas to facilitate consensus and raise the level of safety, which currently stands on the ground floor.

Here is what I know/believe, in advance: there is the greatest incentive to cheat and to skirt safety and alignment concerns. Therefore, authorities must prepare immediately to monitor progress and to evaluate specific, implementable safety protocols and a standard of ethics today. Ideally this authority (a watchdog) would be extra-national, supported by national governments, and would include input from a broad-based think tank of more than technologists. To eliminate religion/faith-based thinking from this discussion would be to alienate a large portion of the human race. Corporations and governments should openly submit to the doctrine of safety and ethics prescribed by this organization, not dissimilar to the global Climate Change proposals. And therein lies a further challenge: getting key actors to cooperate and participate when it is clearly against their best interest. For example, today the United States, acting as a sovereign nation advancing its own national agenda, may be loath to give up its current advantage in AI by submitting to the limits of this extra-national organization (not dissimilar to the rejection of the Kyoto protocols). A more practical model may lie in how nuclear weapons technology is handled, whereby those that have it cooperate from the outset. China, the United States, Great Britain and Europe ought to collaborate at the outset of AI on the most basic of safety protocols, like an unavoidable kill switch and something akin to Asimov’s Three Laws.

ForHumanity also advocates for the right to “opt out” of this future society. The cost-benefit calculations in these desiderata are already out of line with some societal norms, and we estimate that an increasing number of people will look to “un-plug” for a variety of reasons, such as “voting rights of machines”, “what constitutes life”, “privacy” and “the value of work”. The use of and association with machine SuperIntelligence should be a choice. And that choice should be available to ALL people. As a society, our most prized possession is freedom; we want to ensure that the freedom of choice is maximized and cherished for humanity with respect to machine SuperIntelligence.

● Conditional stabilization. The path is such that if avoiding catastrophic global coordination failure requires that temporary or permanent stabilization is undertaken or that a singleton is established, then the needed measures are available and are implemented in time to avert catastrophe.

As ForHumanity is not nation-state dependent, and is focused on achieving the optimal solutions for the benefit of all mankind, this is an easy point to agree with. Any malevolence generated by an independent actor, designed to marginalize or eradicate a segment of humanity, should be opposed with the full force of all. However, this presupposes that it is an obvious threat (à la Skynet, from the 1984 film The Terminator).

ForHumanity would also argue that a large degree of group-think associated with machine SuperIntelligence being a “be-all and end-all” solution for all of mankind fails to appropriately value core traits of humanity itself. Faith, Hope, Work Ethic, Compassion, Grace and Love are traits which we expect will remain elusive for AI. Human decisions made using these traits are often sub-optimal based on economic or even personal welfare value judgments, but that doesn’t make them wrong or meaningless. Many of our best decisions on a personal level may have been self-detrimental, but that is who we are, who we become and how we learn. That is the nature of our humanity and it is to be embraced, NOT optimized away.

Singular, global control designed to optimize the production and implementation of machine SuperIntelligence violates a series of valuable tenets of humanity and nature. It certainly does not provide political diversification, and it is highly unlikely to provide social/cultural diversification. Rather, it will inspire a conformity and optimization that ForHumanity would find ironically “sub-optimal”. We would prefer coordinated, consensus-driven separate actors to achieve an optimal result as a function of diversification.

● Non-turbulence. The path avoids excessive efficiency losses from chaos and conflict. Political systems maintain stability and order, adapt successfully to change, and mitigate socially disruptive impacts.

The primary source of turbulence is likely to be in the job market. ForHumanity believes that an increasing number of jobs will be eliminated by AI and automation. While there are those who argue for job creation, we believe that those job gains will be more than overwhelmed by the job losses. Each new job created will require smaller and smaller amounts of human labor as the cost of automation continues to fall and the capabilities continue to rise. Therefore it is our conclusion that the dramatic shifts in the workforce will create the largest rifts in society and lead to the greatest degree of inefficiency as societies deal with a problem that is unprecedented. As AI and automation will be seen as the culprits for this dislocation, a natural political reaction will be to CREATE friction and inefficiencies. So while we agree with the author that an efficient process is preferred over an inefficient one, we believe that political challenges will rise to create drag and a more inefficient process. A discussion of solutions to this inefficiency is beyond the scope of Bostrom’s paper or this critique, but it is necessary.

Part 2 — to follow.

Dec 15, 2016

Automation and the Destruction of Work — Not If, but when…

It is simply a function of TIME. The nature, speed and quality of our innovative capacity has become extraordinary. Look around: if you see some work being done and don’t believe that that job will eventually be done faster and more efficiently by a machine, then you are kidding yourself. The ONLY question is TIME. How long will it take us to build a robotic hand that seamlessly opens door knobs of all kinds? (Check the DARPA challenges; good progress is being made.) How long before the thought and surveillance skills of a police officer are replicated by an agile robot with FAR greater sensory capability? Humans have an innate drive to see a challenge and overcome it. To not reach this conclusion is to doubt our willpower and turn your back on centuries of technological advances.

Now that is not to say that EVERY single job will be replaced. I am very comfortable with the idea that innovation will create NEW jobs. History and practicality dictate that. When you reach a new horizon, the view is different. New opportunities arise, along with new areas of study, and we will have to react to our new realities. So in the near term, after every innovation and new horizon, new work will be created. But if you can imagine a new job being created by innovation, you should be able to imagine the same new job being replaced by automation; it is only a question of how much TIME it takes for the automation to catch up.

What is work? For me, a simple way to examine work is that it is a combination of two elements: PHYSICAL LABOR + INTELLIGENCE. Every job has a different mix of those two elements. If you are a ditch digger, that takes a lot of physical labor and not a lot of intelligence. Being a college professor doesn’t require a lot of physical labor, but certainly uses a lot of intelligence. For centuries, man has developed tools that aid us in our work. Tools to help us dig that ditch (like a backhoe) or tools that help our intelligence (like calculators and computers). We became more and more productive and could produce more output per employee. The more the tools aided us, the more productive we could be. This increase in productivity and profitability drives innovation. That is why automation is inevitable. Anything involving humans has limitations. We get tired, we need sleep and we are not interested in working 24/7. Thus there will always be a drive by the owners of capital to decrease reliance on human labor and replace it with AI & Robotics: AI for the “intelligence” and robotics for the physical labor. Machines operating just on par with humans are ALREADY 3x more efficient based on the 24/7 model. Machines also don’t require health care, annual salary increases, sick and vacation days, lunch or time on their Facebook accounts.
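The “more than 3x” claim above is just hours arithmetic. Here is a back-of-envelope sketch; the working-hours figures are illustrative assumptions, not data from any study:

```python
# Assumed figures (illustrative only):
HUMAN_HOURS_PER_DAY = 8
HUMAN_DAYS_PER_YEAR = 250        # net of weekends, vacation and sick days
MACHINE_HOURS_PER_YEAR = 24 * 365  # runs continuously, no breaks

human_hours_per_year = HUMAN_HOURS_PER_DAY * HUMAN_DAYS_PER_YEAR  # 2,000 hrs
ratio = MACHINE_HOURS_PER_YEAR / human_hours_per_year

print(f"Human:   {human_hours_per_year} hrs/yr")
print(f"Machine: {MACHINE_HOURS_PER_YEAR} hrs/yr")
print(f"Ratio:   {ratio:.2f}x")  # ~4.38x, i.e. "more than 3x"
```

Even a machine that merely matches a human hour-for-hour more than quadruples annual output on these assumptions, before counting benefits, raises or turnover.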

Now you might question my equation of work, and I’m okay with that. Pastors, Imams and Rabbis do work that requires more than intelligence + labor. I understand that there is a human element missing from my equation; however, that human element is relevant in fewer and fewer jobs. Sales is often cited, yet Amazon seems to do all right without salespeople. So what “humanity” do we bring to the equation? That’s a discussion for another time, but suffice it to say I do not predict that ALL jobs will go away, just many of them. Some jobs may persist because humans actually prefer humans over machines, because of that “human touch” (think of all those annoying automated help lines; they haven’t mastered those yet). Furthermore, I recognize that there is nuance to both physical labor and intelligence. It is not a one-size-fits-all world, and I think that is why most people find it difficult to project automation replacing most work. I don’t find it challenging, because I know the right motivations are there: increased profit and the desire to innovate and improve what we do. If a job proves too difficult to automate today, that challenge drives innovation and a desire to prove the doubters wrong. That is why I believe that comprehensive automation is simply a function of TIME, not ability.

So if you accept my premise, then the only remaining question of automation replacing MOST work is WHEN. When is the robot nimble, dexterous and precise enough? When is the AI fine-tuned enough to replace the intelligence portion of our equation? I own and operate a hedge fund, previously viewed as a “high intelligence” job, and I can tell you that the job has become 100% automated, so it certainly can be done and will be done. In each field there are programmers and technicians working to figure it out.

Many argue that the replacement of most work by automation is too far away to matter to us. I can’t tell you how troubled I am by this argument. It assumes that the solutions we need for our future problems are at our fingertips. Let’s say that the maximum job losses from automation are 100 years away. If the problems and changes to society take us 102 years to solve, we are already 2 years too late. We simply do not know how long it will take to figure things out. In some cases we may already be too late. Time is not a reason for us to avoid thinking about the consequences of automation. Delayed impact is a natural excuse for this generation, and we have a great track record of passing the buck down the road and leaving our kids to figure it out (see Social Security and Medicare). I suspect that is actually how most people view the problem. But I believe that waiting for the problem will not work for the truck drivers in this country: 2.4 million American jobs likely gone over the next decade as autonomous trucks take over. Think that is science fiction? Budweiser already completed its first 1,000-mile delivery. Amazon just announced cashier-less convenience stores; that’s another 3.6 million American jobs gone. When someone can tell me what these people will do for jobs, I will rest a little easier. My organization will try to work with these people today to plan for a future that they choose, rather than one that is chosen for them.

The final argument that many make about new innovations is that they will create many more jobs. I have already agreed that each new innovation creates new jobs. Pre-Internet, who would have guessed what a Web Development Specialist did for work? Was it like a beekeeper for spiders, coaxing them to make more intricate designs? So yes, new jobs will be created; I am convinced. It’s the number of new jobs that I am concerned about. Furthermore, it is the speed at which each new job converts to being fully automated that should concern us. As machines begin to program and teach other machines, the learning curve and the innovation speed will accelerate greatly. I am arguing that every single process we do requires less and less human input each day and that this decline is accelerating. It will likely catch our society off guard.

Lastly, there is finite capital in the world buying finite goods and those goods require less and less human input. That means simply… less human work to produce the goods and services that we consume. I have heard arguments for “fake” or “fantasy” work, but that is a discussion for another day. Work as we know it is changing and being eliminated, one automation at a time. Join us at ForHumanity as we endeavor to understand what this means for our society and attempt to prepare us for this new reality.

December 8, 2016

Thoughts on the #FutureofWork #Delange2016 Conference:

HTTP://DELANGE.RICE.EDU/CONFERENCE_X/INTERACTIVE_AGENDA.HTML

1) A remarkable collection of speakers. The multi-disciplinary approach of the Scientia team at Rice made for a well-rounded discussion. Bravo to Moshe Vardi for putting together a first-class program.

2) I was blown away by the siloed thinking of the practitioners in the AI and Robotics fields. I want to be careful to have empathy for this, as it is clearly vital for each of them (Veloso, Kumar and Banavar) to be successful in their field, but that laser focus puts them deep in the weeds and makes it impossible for them to have a good grasp of the forest.

3) That mentality scares me about the people who are self-selecting to provide oversight in the area of AI & Robotics, such as partnershiponAI.org. Diane Bailey rightly scoffed at the self-selection bias. What might be worse is that they will be choosing the 5 non-profit board members to join them.

4) The audience and by extension the world is concerned. The panelists, broadly speaking, spent their time arguing “why not to worry” and I suggest that the audience left unconvinced.

5) There were two primary arguments for why not to worry:

a) All technology has historically been for the benefit of mankind. It resulted in economic growth and job creation, so of course it will happen again.

b) If there are things to worry about, they are many, many years away.

6) Let’s critique those arguments. On the first: most of the arguments centered on historical mistrust of technology through the years. We heard frequent quotes from past authors who felt that the technology of their day was going to lead to dire consequences. “So what is different now?” the speakers argued. “Think of all the amazing new jobs that technology will create.” These arguments are good for making us question whether our concerns are reasonably founded, but they are NOT arguments as to why this time might not be different. Tomorrow I aim to post why I believe that we are on the precipice of an era of technology which will destroy the need for human labor.

With respect to the second argument, that these concerns might be decades and decades away and by extension we have better things to worry about: this is sadly an obvious red herring. How are we supposed to know how long it will take to solve these problems or be ready to deal with the changes that technology brings? If the problem is 50 years away, but the solution will take us 49.5 years to achieve, then we’d better get to work. As Yogi Berra once opined, “it’s gotten late EARLY”. So don’t try to appease my concern with ambiguous timeframes. The timeframe for the solutions is equally ambiguous.

7) Universal Basic Income was a hearty topic of conversation and was generally reviled, for a series of reasons worthy of its own blog post; that will be a follow-on piece.

8) The conference was on the Future of Work, but many speakers were concerned that even if work did NOT change, there would likely be a continuation, if not an acceleration (as a function of leverage), of the income inequality gap. I find these arguments ring very, very true.

December 1, 2016

Capitalism + Artificial Intelligence & Robotics = Socialism (Universal Basic Income)

Breaking down an entire lifetime of beliefs

As I have dived deeper and deeper into the future of Artificial Intelligence (AI) and Robotics and their impact on life, society and work, I have reached a conclusion that has shattered me.

For the majority of my 44 years, I have always been a capitalist. My capitalist ideals informed my governmental preferences, smaller government and less government involvement in markets of all kinds. I always made exceptions for certain elements, like border security and the military - those were clearly the domain of the Federal Government. In 1787, I would have been a staunch Federalist standing beside Alexander Hamilton arguing that the Federal Government was uniquely qualified to solve THOSE problems. However, since 1787, the Federal government’s power has grown exponentially into all facets of our lives. During the majority of my 44 years, having lived amidst this pervasive federal government, I have been a staunch anti-federalist. Jefferson and Madison would have been proud as my frequent, if not constant, answer to a problem has been “that’s best solved by individuals, the community and free markets. The federal government has no place and is probably distorting that market”.

So on this capitalist foundation, as I have considered a service like welfare, I have understood its rationale and disliked it from afar. I believed that the markets could sort out support for the unemployed, and that welfare probably encouraged some amount of unemployment. But unfortunately I am no longer able to hold out hope that free markets can solve the problem of unemployment, because NOW and in the FUTURE, capitalism will CAUSE substantial unemployment.

Saying that last sentence hurt. A robust economy should create jobs (says my 44 years of capitalist training) and create wealth for all, even if the owners of capital profit handsomely. But in a world of AI & Robotics, those centuries-old ideas, starting with Adam Smith’s Wealth of Nations, are now broken.

What is work? Well, I see it as a combination of two things: Physical Effort + Intelligence. Some jobs are easier on one half of the equation. Being a college professor is fairly easy on the physical labor side of things and requires a substantial portion of Intelligence, while stocking a shelf doesn’t require a lot of mind power but certainly requires plenty of Physical Effort. Since we discovered fire, humans have developed tools that aided our work. Calculators and personal computers assisted on the intelligence side, while bulldozers and farm combines allowed us to physically accomplish more. These tools were complements to our intelligence and our physical effort. With the advent of AI & Robotics, we are no longer employing a complementary tool; we are now fully replacing both intelligence and physical effort. And that is where it all breaks down.

Now don’t get me wrong, we are in the early stages of AI & Robotics. There is still substantial work to be done. Today, Humans are driving the development. So in the near term, we have new jobs being created. It’s just that they aren’t sustainable. Machines are already programming machines. Robots are already building robots and when the two are combined, there is no further need for human involvement. Or even if there were a need for a human, the leverage that he/she could achieve is nearly infinite. It is fairly easy to envision one or two people overseeing the entire manufacturing run of an automobile today for example.

So that gets me to the breakdown. Capitalism will drive AI & Robotics all the way to the end, all the way to the marginalization of human work. It has to; it is simply doing what it does, which is to maximize profit for the owner of capital. As the owner of capital, I can employ someone and pay them a wage, and they will work for me for about 8 hrs per day, 250 days a year. They will need vacation and sick days, health insurance and workers’ compensation, and next year they will want slightly higher wages to do the same job. Or I can replace the job with a robot, which requires an upfront investment that is substantially higher, but my return on investment is immediate. The robot will work 24 hrs per day, 365 days per year. That is more than 3x more productive. No sick days, no workers’ comp, no wage increase next year. From the capitalist perspective, this is what we call a “no-brainer”. Granted, this is a simplified cost-benefit analysis, and in many cases TODAY the human remains the BETTER choice for the work, but that is simply a matter of reducing the cost of the automation and creating automation that can accomplish more and more. Both of those things are progressing, so it is simply a matter of time.
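The “no-brainer” above can be sketched as a simple payback calculation. Every figure here is a made-up illustration (no real robot pricing or wage data), and the model deliberately ignores financing costs, maintenance surprises and wage growth, matching the simplified cost-benefit analysis described:

```python
def payback_years(robot_cost, human_annual_cost, robot_annual_upkeep,
                  productivity_multiple):
    """Years until the robot's upfront cost is repaid by the wages it
    displaces, under flat annual figures (a deliberately crude model)."""
    # One robot does the work of `productivity_multiple` humans (the 24/7
    # hours advantage), so it displaces that many salaries, minus its
    # own running costs.
    annual_saving = human_annual_cost * productivity_multiple - robot_annual_upkeep
    return robot_cost / annual_saving

# Illustrative inputs: a $250k robot vs. a $50k/yr worker, $10k/yr of
# upkeep, and the ~4.4x hours advantage of round-the-clock operation.
years = payback_years(250_000, 50_000, 10_000, 4.38)
print(f"Payback: {years:.1f} years")  # ~1.2 years on these assumptions
```

On these invented numbers the robot pays for itself in just over a year, and every year after that is pure margin for the owner of capital, which is the whole point: as robot costs fall and capabilities rise, the calculation tips this way for more and more jobs.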

Thus… the breakdown, the place where my head exploded. Capitalism will lead to 60–80% unemployment in the future. Said differently, Capitalism is now DESTROYING jobs. We have reached the turning point, and it kills me to say it, because there is a follow-on: substantial structural unemployment of this kind is a problem that can ONLY be solved by the federal government. Universal Basic Income becomes the only solution. So Capitalism + AI & Robotics = Socialism (Universal Basic Income).

Mind completely blown.

More to follow as we dive deeper into many of these elements, question the assumptions covered above and challenge ourselves to think about solutions.

I started ForHumanity.org to raise awareness among the masses about the expansion of AI & Robotics (not to fight it), to proactively examine rules, laws and practices to protect humanity’s rights in this new world, and to always advocate for humanity. I hope you’ll join me.

Future topics include:

  1. Won’t AI and Robotics create a whole bunch of new jobs, like all previous Technology?
  2. Sure AI & Robotics can replace some work but not MY JOB
  3. What is Universal Basic Income (UBI)? Is it the only solution?
  4. What is society like without work?
  5. Many, many more topics