Answering Nick Bostrom's Desiderata (Part 2 of 4)

*Italicized sections are direct excerpts from Bostrom’s paper.*

*Universal benefit. All humans who are alive at the transition get some share of the benefit, in compensation for the risk externality to which they were exposed.*

Bostrom’s point about existential risk is extremely concerning to ForHumanity and highlights our raison d’être. Developments in Machine SuperIntelligence (MSI) are likely to occur with much of the public blind to them. Then MSI will arrive on the scene and change everything instantly, like the atomic bomb at Hiroshima. Awareness, along with a certain amount of non-proprietary transparency from AI developers, will reap great benefits for the public and facilitate universal benefit. Certainly, a substantial expansion of the economy will benefit the world broadly. It is the distribution of that wealth that is the problem. Many will NOT participate in the MSI race, and thus their reliance on the generosity of spirit of the creators of this intelligence is extreme.

ForHumanity is supportive of an MSI development process driven by corporations rather than governments, insofar as corporations are more likely to make the optimal decisions to achieve SuperIntelligence. Corporations today are more global than any nation, often having offices in many of the countries where machine learning is being advanced. Furthermore, those corporations are often publicly traded and thus already create a broader, albeit non-inclusive, distribution of wealth via their shareholders. We also recognize that corporate control has plenty of challenges. Corporations need not be moral, and they are profit-maximizing; this is where the problem lies. Once Machine SuperIntelligence is achieved, it should instantly cease to be owned by the corporation or collaborative that achieved the goal. This, of course, flies in the face of 250 years of capitalist thought.

Ideally, when that MSI switch is flipped, the group that created it would hand it off to humanity at-large and collect its substantial bonus, but NOT continue to reap the ongoing rewards nor “control it,” whatever that may mean. Unlike the Manhattan Project, a naturally militaristic endeavor for its time, to weaponize machine SuperIntelligence or make it federally owned would both polarize the process and introduce bureaucracy, without reasonable benefit. We believe this would quickly turn into a race to the bottom and would likely increase risk, which is why we prefer corporate control for the time being. However, this raises the question of independent oversight. Here ForHumanity advocates for a powerful NGO-esque watchdog group that can interface freely with corporate AI developers, endorse safe practices, and then prepare the masses, governments, and the world for AI as we approach general intelligence. If properly governed and guided, this group might also become the steward of a general MSI.

*Magnanimity. A wide range of resource-satiable values (ones to which there is little objection aside from cost-based considerations), are realized if and when it becomes possible to do so using a minute fraction of total resources. This may encompass basic welfare provisions and income guarantees to all human individuals. It may also encompass many community goods, ethical ideals, aesthetic or sentimental projects, and various natural expressions of generosity, kindness, and compassion.*

The economic explosion some associate with machine SuperIntelligence seems lofty to me. The projections assume a level of resource availability that simply may not exist on earth. Furthermore, scalability remains a function of physical construction (whether that is chips and motherboards or actual automation) and cannot be achieved instantaneously. However, the basic concept of an equitable distribution of the wealth created by an MSI-driven wealth explosion is an easy idea to agree with, so we support Bostrom on this point.

However, that outcome appears more likely as an outlier. It further assumes that humanity does not choose another pathway: constrained machine SuperIntelligence. I have heard some discussion of this in the community (people refer to it as the “control problem”), but maybe it is our optimum outcome. Instead of “turning on a switch and watching the SuperIntelligence go,” why not build in natural circuit breakers where humanity can take a pause, reassess potential impacts, and then continue the process? For instance, if machine SuperIntelligence were NEVER allowed to control currency/capital, and instead always relied upon a beneficial human owner or owners, then you have a natural human circuit breaker (think of an appropriations committee). Or if MSI were unable to self-replicate, you have strong bounds within which the intelligence can operate. Finally, the power required for MSI will likely be enormous at the outset. Strong limits on the MSI’s ability to access power may provide the control measures that humanity requires, at least to assess next steps and to create a process that is more manageable and more easily understood by humans. This approach probably leads to a dramatically more gradual expansion of both knowledge and wealth creation.
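The circuit-breaker idea above can be sketched in a few lines of Python. This is purely an illustrative toy, not a real control mechanism; every name in it (the gated actions, the approver) is a hypothetical stand-in for the human sign-off the text describes:

```python
class CircuitBreaker:
    """Toy human-in-the-loop gate: certain action categories always
    require approval from a designated human owner before proceeding."""

    # Hypothetical examples of the gated capabilities discussed above
    GATED = {"transfer_funds", "self_replicate", "increase_power_draw"}

    def __init__(self, approver):
        # approver: a callable (action name -> bool) standing in for a
        # human owner or appropriations-committee-style body
        self.approver = approver

    def request(self, action):
        if action in self.GATED and not self.approver(action):
            return ("paused", action)   # humanity takes a pause
        return ("executed", action)     # ungated or approved work proceeds

# A gatekeeper that denies every sensitive request by default:
breaker = CircuitBreaker(approver=lambda action: False)
```

The design point is that the pause is structural, not voluntary: the gated categories sit outside the system's own authority, so resuming always requires a human decision.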

However, should MSI lead to a significant explosion of resources and subsequent wealth, there is no question that it ought to be shared as widely and as readily as possible. Currently, the world does not operate like this. The income inequality gap is at an all-time high and likely to widen despite significant humanitarian and charitable efforts to shift wealth from the wealthy to the less privileged. The mind shift needed to properly redistribute this wealth seems to run counter to our individual human desires. Many of us have a natural desire to compete, to win, and to be rewarded for our victories. I have more concern about our ability to succeed in this endeavor than about our ability to achieve MSI. The result of failing to achieve an equitable redistribution of wealth is that many will remain impoverished and oppressed. We support the idea of “Magnanimity,” but remain skeptical of humanity’s ability to achieve it. That skepticism will not prevent us from trying.

*Continuity. The path affords a reasonable degree of continuity such as to (i) maintain order and provide the institutional stability needed for actors to benefit from opportunities for trade behind the current veil of ignorance, including social safety nets; and (ii) prevent concentration and permutation from radically exceeding the levels implicit in the current social contract.*

The current growth in global wealth inequality can be largely attributed to the growth and application of technology. Technology reduces costs and increases efficiency, which is how wealth is created. This wealth accrues to shareholders and capital owners. In addition, automation increases the return to capital and decreases the return to employees as their labor input shrinks. This accelerates the inequality.
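The shift of returns from labor to capital can be shown with a toy calculation. All the figures below are invented purely for illustration; the point is only that when automation produces the same output with fewer paid hours, the residual claimed by capital owners grows:

```python
def income_shares(output, labor_hours, wage):
    """Split output between labor (wages paid) and capital (the residual)."""
    labor_income = labor_hours * wage
    capital_income = output - labor_income
    return labor_income / output, capital_income / output

# Before automation: 100 units of output from 60 paid hours at wage 1.0
before = income_shares(100, 60, 1.0)  # labor share 0.60, capital share 0.40

# After automation: the same 100 units from only 20 paid hours
after = income_shares(100, 20, 1.0)   # labor share 0.20, capital share 0.80
```

Output is unchanged, yet labor's share falls from 60% to 20%, with the difference accruing to whoever owns the machines.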

Taking MSI to an extreme, where it is the “last invention humans ever need create,” you then have all of the wealth concentrated in the owner of the MSI, or potentially in the MSI itself. This represents a massive problem and must be solved. If one subscribes to the notion that an MSI, discovered and unleashed at the corporate level, will be turned over to humanity at-large, then you’ve dealt with the issue. But that is a big “if.” At least in the United States, contract law provides no precedent for ownership by humanity at-large; the only semblance of large-scale universal control is at the nation-state level, through the powers of eminent domain. This may actually be WORSE. Imagine a corporation is about to turn on an MSI and the United States or some other country steps in and says, “I’ll take that.” The fear that would engender in other nations might lead other nation-states to make poor decisions, decisions made out of fear. So the ownership question flies in the face of “continuity.”

Job destruction and the future of work is another substantial “discontinuity.” I won’t repeat the argument here, as I and many others have made it before, but suffice it to say that today’s society is ill prepared for 50–80% unemployment. The wealth inequity and potential damage to individual self-worth it would create represent massive upheavals to society as it stands today. So I don’t see how “continuity” is even remotely achievable.

On a separate thought on continuity, I see the advent of MSI creating a substantial divide in society. As machines become more integrated into society, and especially when they begin to achieve “personhood” and the associated rights and privileges (lest we think this is sci-fi talk, the EU Parliament has already recommended some steps toward “personhood” for machines), a schism is likely to form. I suspect this schism will result in a large number of communities and people “opting out” of a machine-intensive society. The “opt-out” option is a fundamental (constitutional, in the US?) human right that I know will be challenged time and again by government and by society at-large in a machine-driven world. It is vital that we protect this right now to maximize continuity for all. “Luddite” is already a pejorative word; like all other monikers designed to demean, it is prejudicial, and our society should support the rights of those who choose NOT to participate in a machine-driven and machine-controlled society. I dare say society should protect and guarantee those rights, as I suspect this group will be a minority. Continuity ought to mean continuity from today, for all, based upon their right to life, liberty, and the pursuit of happiness, even if that means a life without MSI and automation.