Answering Nick Bostrom's Desiderata (Part 4 of 4)

*Bostrom’s excerpts are in italics

Responsibility and wisdom. The seminal applications of advanced AI are shaped by an agency (individual or distributed) that has an expansive sense of responsibility and the practical wisdom to see what needs to be done in radically unfamiliar circumstances.

So Bostrom’s primary concern underlying this section of the desiderata is that the machine SuperIntelligence might become so good at persuasion that the idea of “voluntary consent” would become meaningless, as we humans would be powerless to resist this “super” persuasion. I accept this premise and agree with Bostrom that this should be a concern, and quite honestly one that should be avoided from the outset.

To avoid these challenges we must be proactive at every step along the way, designing controls, ethics, standards and data inputs that are consistent with our ideals and desires as all of humanity. Determining those standards is an enormous task; seeing them implemented and adhered to is an even larger one. Here I am entirely with Bostrom, in that I believe NOW is the time to be concerned and proactive.

One of the things that confuses me about machine SuperIntelligence is the all-encompassing nature that forecasters assign to it. Flip the switch and instantly it is in charge of everything, doing all of the thinking, exploring everything to be explored. I simply don’t think that needs to be true, or maybe I’m missing something. Are we building a tool? Or are we so dissatisfied with our own efforts as a species that we truly want to replace ourselves? Do we believe that life will be so much better when everything has been thought of and discovered? I’m dismayed to tell you that I think AI developers and dreamers believe the latter. Everyone is in a rush to cure cancer, live forever and discover ways to travel at light speed, so since we can’t do it now, let’s create a SuperIntelligence, let it run completely unconstrained, and nothing but sunshine and happiness will come out the other end and all problems will be solved. Why is intelligence the answer to every question? Sometimes the answer to “how do I get my baby to stop crying?” is simply the loving touch of the baby’s mother, not the knowledge that the baby wants its mother.

Furthermore, if it runs unconstrained, what if we don’t like the answers? What if the SuperIntelligence tells us we have 5 billion too many people? What if it suggests plowing under the majority of the earth’s farms to build solar arrays that provide it with enough power to continue its great work? We simply don’t know the conclusions that will be reached. Unconstrained, we can hardly fathom the range of outcomes, so why do it? ForHumanity supports the continued expansion of AI research and development. ForHumanity also advocates for the use of AI, even general AI and machine SuperIntelligence, as a tool rather than as an unconstrained, sentient entity.

Our mission at ForHumanity is perfectly aligned with Bostrom’s Responsibility & Wisdom desideratum. We aim to be broadly representative of all of us, a daunting task, I grant. Our four orders of operation are our four A’s:

- AWARENESS: We will strive to raise awareness of these issues among the masses and solicit quality feedback to enhance our representative nature.
- ANALYSIS: We will engage the AI and surrounding communities to discuss and iterate proposals for policy and standards, and endeavor to be a value-added input into the process.
- ADVOCACY: Where appropriate, ForHumanity intends to engage policy makers to help craft the laws and standards by which AI must abide.
- AUDIT: We will act as an interactive watchdog, working directly with the developers of AI to ensure that best practices are maintained and that policy and industry standards are being adhered to on behalf of humanity.

Via the 4 A’s, ForHumanity unequivocally agrees with Prof. Bostrom’s assertion that there needs to be a proactive process to consider the criteria by which machine SuperIntelligence is being developed. Furthermore, ForHumanity aims to be integrated into the development process to ensure that standards are implemented according to these principles and consistent with best practices. Finally, if machine SuperIntelligence is achieved, ForHumanity would be well positioned to participate in the governance of the MSI on behalf of humanity. However, ForHumanity will always side with the broad goals and directives of humanity, even at the expense of machine SuperIntelligence.
