Distilling Nick Bostrom’s Machine SuperIntelligence (MSI) Desiderata

In the process of answering Bostrom’s desiderata on Machine SuperIntelligence (MSI), I’ve gotten a series of requests from friends and followers such as, “Desiderata?” or “What the heck did he just say?” I also got a number of comments like, “I couldn’t get through it,” “fell asleep,” and “too dense.” All of this reminded me that part of the mission at ForHumanity is to make sure that all people can understand the challenges that face us from AI and Machine SuperIntelligence, so with no disrespect meant toward Professor Bostrom, I would like to distill his desiderata down into a quick read and bullet points. Professor Bostrom does some nice work in creating this document. ForHumanity doesn’t agree with all of it and will be prescribing some alternative thoughts, but unless the masses are more aware of the issues, how can we expect them to respond at all?

  1. Desiderata — a fancy word for “the essentials.” Nick Bostrom is the author of the book SuperIntelligence, which shook Gates, Hawking, and Musk into believing that “AI is the single greatest challenge in human history.” Bostrom is trying to lay out for us the things that we need for a successful discovery and implementation of Machine SuperIntelligence.
  2. Machine SuperIntelligence — there are two kinds of artificial intelligence: narrow and general. We do not have General AI today. We have many, many pockets of narrow AI. The simplest example of narrow AI is a calculator; it does computations faster than we can. Narrow AI is everywhere today: in your Amazon, your Facebook, your smartphone translators, and your car. General AI is defined as a machine intelligence that thinks about all the things that humans think about, oftentimes better than we do. We do not have General AI today. Lots of movies have been made about this; do not discount the visions that you see in these movies, because they are based in aspects of reality.
  3. When will we have General AI? There is no agreement on this point. Some believe it could come as early as 2025 (10% of researchers), while others argue that it is more like 30 years away (50% of researchers).
  4. Why should I care about General AI? Many in AI research agree with Bostrom that the discovery and implementation of Machine SuperIntelligence might be the last thing that human beings EVER discover. That means a VERY different world from today, no matter how you think about it.
  5. So again, why should I care NOW? The best answer that I can give you is that AI is being developed and enhanced EVERY DAY, and we don’t know how long it will take us to create the right safeguards, the right guidelines, the right oversight, the right laws, and the right changes in society to deal with the implementation of Machine SuperIntelligence. If Machine SuperIntelligence arrives in 2030, but the solutions needed to make it SAFE arrive in 2033, we are already three years TOO LATE.
  6. So back to Bostrom’s essentials — I am going to list them, explain each briefly, and then leave a note of agreement or disagreement from ForHumanity’s perspective.
  7. Efficiency — Bostrom would like Machine SuperIntelligence (MSI) to arrive as quickly and as easily as possible, eliminating as many obstacles as possible because it would reap such great rewards. Bostrom wants everyone who can benefit to benefit as soon as possible. (DISAGREE)
  8. MSI risk — also known to Terminator movie watchers as SKYNET. The idea here is that the introduction of MSI could create a disaster scenario. Bostrom argues for considerable work in this space. (AGREE)

9. Stability and global coordination — since it has never existed, I hope this isn’t an essential. Bostrom argues that rogue application by nations or organizations for their exclusive benefit would be a bad thing, and thus we should all come together in harmony. Good luck with that… (AGREE — but really?)

10. Reducing negative impact of MSI — Bostrom is concerned that the move to MSI will have some revolutionary impacts, and some of those will be negative. He argues we should deal with them, but makes little or no attempt to figure out what they might be. Here’s one: how about the possibility of 60–80% unemployment in the traditional sense? That might be a challenge to deal with. So here I agree with Bostrom that we want to reduce the negative impacts; ForHumanity just plans to try to deal with them. (AGREE)

11. Universal benefit — Bostrom argues that all humankind should benefit from MSI. Of course we agree; the “how” seems to be the challenge. (AGREE)

12. Total human welfare — here Bostrom relies upon Robin Hanson’s spectacular estimates of future wealth as a result of MSI. For example, Hanson takes current global GDP of 77.6 trillion USD and expands it to 71 quadrillion USD. Yeah… if we get there, great, but let’s not plan on it. (AGREE — lovely idea)

13. Continuity — it is argued that these changes should be met in an orderly, law-abiding fashion, where everyone is treated fairly and taken care of properly. (AGREE)

14. Mind crime prevention — here Bostrom started to lose me. Arguing already for the rights and protections of a machine mind seems premature; however, European Union authorities have already recommended it for actual policy. This issue is very tricky, and I believe it will create a schism in society. For now, ForHumanity will disagree. (DISAGREE, but this needs a much longer explanation)

15. Population control — Bostrom argues that because people will be living forever, we can’t have too many people, and that births should be controlled and coordinated. First, living forever, really? Digitally downloading ourselves into machines, really? These are essentials? This issue is filled with difficult theoretical, philosophical, and religious questions, so I will simply say that ForHumanity believes in the right to NOT participate in that world as well. (DISAGREE)

16. Group control, ownership, and governance — Bostrom argues that MSI should be governed by an enlightened, selfless, and generous group. This group should be thoughtful and creative in their solutions to the new problems created by the MSI — but isn’t that what we created the MSI for? I am being snarky, of course. Human beings make mistakes; we have flaws, and they are beautiful. I hope that we have a chance to be in charge and that we respond with grace, self-sacrifice, and enlightened governance for all people. (AGREE)

So that was Bostrom’s desiderata; I hope this summary was helpful for everyone. I will, of course, continue to delve deeply into the work, as it deserves a thoughtful and thorough response. ForHumanity agrees with the importance of the discussion, and it is my challenge to bring practical solutions to many of these issues and to aid all of us in achieving Bostrom’s magnificent Machine SuperIntelligence world.