Answering Nick Bostrom's Desiderata (Part 3 of 4)

 

Mind crime prevention. AI is governed in such a way that maltreatment of sentient digital minds is avoided or minimized.

Wow, is this a troubling question. I understand it, I truly do. Furthermore, I am afraid of being on the same side of history that treated African-Americans as less than human and ignored women as voters for a century and a half. But I am also a science fiction fan. I watch Humans and Westworld, and certainly those androids don’t deserve to be mistreated. So it comes down to where the line gets drawn. Are we creating sentient, feeling partners, co-inhabitants, and companions? Or are we creating tools as partners, co-inhabitants, and companions?

To begin with, ForHumanity can agree that, like any property, it (meaning the sentient machine) is afforded certain protections from theft, abuse, and damage caused by those who do not own the property. However, if these sentient digital minds are only property, then the owner is free to do WHATEVER with that property as long as it doesn’t infringe on the rights of others. There may be some who quibble with my interpretation of the law here, but I am not trying to make a purely legalistic argument, just a moral/intent-based one. If we upgrade these sentient minds to the level of our treatment of pets, then the owner is held to a reasonable standard of good care. The reason for this upgrade in protection, as most in society comfortably understand it, is that we attribute the ability to “feel” to pets, and the majority of society is uncomfortable seeing a “feeling being” harmed. ForHumanity can support this level of protection for these sentient minds because, as human beings, we value good treatment. We care to avoid causing hurt and pain to beings that can “feel.” But already we are pushing the boundaries: can a sentient digital mind feel? Is there a difference between the electrical impulses in our brains and the electrical impulses in a neural network AI when it comes to something like pain?

So this gets us to the uncomfortable, theoretical space of “what does it mean to be human?” It also takes us to the unquantifiable realms of faith. I know that science struggles with the subjects of faith, but somewhere between 60–80% of the world’s population believes in God. Therefore, for many, there is a uniqueness attributable to human beings. There is a value to the soul that places us above all other animals. This belief would extend to machines as well. Even if everyone accepted that our brains were simply electrical impulses, like a machine, the majority would hold that our souls have value. So, as with animals, much of humanity will ascribe a specialness to ourselves that places us over and above a sentient digital mind. I suspect that view is in conflict with the existing AI community.

So here is where I see the danger occurring. As the proponents of the sentient digital mind push towards personhood rights, a significant portion of society will push back. I believe this will lead to a genuine schism in society. If society thought that legalizing gay marriage was a challenging issue, a machine’s right to vote or the right to marry a machine will create an uproar. I actually think it will be a breaking point for many and will create the impetus for some societies to break apart. States or even entire countries may reject the movement towards machine personhood. This will be magnified by increased control by machines, especially in the wake of the expected decrease in work as a result of automation. The schism will be magnified further by efforts at genetic modification, cloning, and other tampering with the “miracle of life.” The idea of living forever, digital downloads of minds, and miracle science that could extend life indefinitely are anathema to those who believe in God. Further, these efforts would be viewed by many as “playing God” and thus will drive people apart.

From the perspective of science, the concepts of “living forever” and editing genes to eliminate certain diseases seem to be noble goals. I think science is missing some important things about humans. It is our faults and errors that often define us… even positively. It is our eventual death that gives each day meaning. Science may rob humanity of these beautiful traits and not even know that it is stealing. Do Not Resuscitate orders will become Do Not Perpetuate orders. Beings whose lives are extended will likely benefit from that knowledge and experience, and that will further divide them from those who choose mortality, creating a sort of master race.

I could go on and on about this topic, and I likely will expand on it elsewhere, but with regard to an answer for Professor Bostrom’s desiderata, this point is decidedly murky. ForHumanity can support the point to an extent but will switch sides very quickly if it is carried too far and treads on some of the key factors that define our humanity.

Population policy. Procreative choices, concerning what new beings to bring into existence, are made in a coordinated manner and with sufficient foresight to avoid unwanted Malthusian dynamics and political erosion.

Why, why, and why? Here I got frustrated with Bostrom. I have one thing to say: “the One-Child Policy.” For those who are not aware, the Chinese government in 1979 instituted a policy of one child per family. The policy has been so successful that it was phased out in 2015, sarcasm intended.

Essentially, this policy has destroyed China’s future by destroying its generational balance. Created to stem the tide of overpopulation, it massively slowed China’s birth rate and the growth of the overall Chinese population, so it achieved its goal. Here’s the problem: it created such a ticking time bomb for social welfare, public policy, gender balance, and a million other things beyond the scope of this blog (type “one-child policy” into Google for numerous articles on how this failed) that it is hardly a model for future planning. And why would we get this right the next time?

But that is not even the argument. The argument is that the right to procreate is one of our most fundamental and personal rights. ForHumanity is absolutely opposed to any authority imposing any sort of restriction or associated planning on the right to have children. That decision, every time, should be made solely by the mother and the father. I don’t actually believe this requires further discussion. The answer is “No.”

Now, in a world of scarce resources, where human beings are making their own choice to preserve a limited supply of resources, maybe we can have a discussion. However, the entire previous section of the desiderata argued for an abundance of wealth, not scarcity. Thus this request can be nothing more than authoritarian. Even in a world of scarce resources, you would focus on the resource allocation. If that resource were food and the family chose to have seven children instead of two, then that is their choice to make using the exact same resources. ForHumanity is opposed to any form of central planning or legislated policy with regard to procreation.