Artificial Intelligence

New C-Suite Position - AI and Automation Risk Management

Many of you will be familiar with the challenges that Facebook is facing.

The Cambridge Analytica saga is a scandal of Facebook's own making | John Harris
Big corporate scandals tend not to come completely out of the blue. As with politicians, accident-prone companies… (www.theguardian.com)

Internal disagreements have swirled about how data has been used. Was it sold? Was it manipulated? Was it breached? The saga has put the company itself at risk and highlighted the need for a new position at the C-Suite level, one that most companies have avoided until now: AI and Automation Risk Management.

Data is the new currency; data is the new oil. It is the lifeblood of all Artificial Intelligence algorithms and machine and deep learning models. It is the way that our machines are being trained. It is the basis for the best and brightest to begin to uncover new tools for healthcare, new ways to protect security, new ways to sell products. In 2017, Google and Facebook alone had combined revenue close to $60 billion from advertising, all due to data. However, usage of that data is at risk because of perceived abuses by Facebook and others.

Data is also, and more importantly, about PEOPLE. Data is PERSONAL, even if there have been attempts to anonymize it. People need an advocate, inside the company, defending their legal, implied and human rights. This is a dynamic marketplace, with new rules, regulations and laws being considered and implemented all of the time. AI and Automation face some substantial challenges in their development. Here is a short list, and NONE of these is a priority for engineers and programmers, despite the occasional altruistic commentary to the contrary. As you will see, the advancement of AI and Automation requires full-time attention:

  1. Ethical machines and algorithms — There are millions and millions of decisions being made by machines and algorithms. Many of these decisions are meant to be based upon our value system as human beings. That means our Ethics need to be coded into the machines, and this is no easy task given that there is no single set of Ethics. Deriving profit from Ethics is tenuous at best, and there is certain to be a cost.
  2. Data and decision bias — Our society is filled with bias, our perceptions are filled with bias, and thus our data is filled with bias. If we do not take steps to correct bias in the data, then our algorithmic decisions will be biased. However, correcting for bias may not be as profitable, which is why it needs to be debated in the C-suite (a minimal sketch of the kind of test involved follows this list).
  3. Privacy — There is a significant pushback forming on what privacy online means. GDPR in Europe is a substantial set of laws providing people with increased transparency, privacy and, in some cases, the right to be forgotten. Compliance with GDPR is one responsibility of the AI Risk Manager.
  4. Cybersecurity and Usage Security (a Know-Your-Customer process for data usage) — Companies already engage in cybersecurity, but the responsibility is higher when you are protecting customer data. Furthermore, companies should adopt the finance industry standard of “Know Your Customer (KYC)”. Companies must know and understand how data is being used by clients to prevent abuses or illegal behavior.
  5. Safety (can the machines that are being built be controlled so as to avoid unintentional consequences to human life) — This idea is a little farther afield for most; however, now that an Uber autonomous vehicle has been involved in a fatality, it is front and center. The AI Risk Manager’s job is to consider the possibilities of how a machine may injure a human being, whether through hacking, negligence, or system failure.
  6. Future of Work (as jobs are destroyed, how does the company treat existing employees and the displaced workers/communities) — This is the PR challenge of the role, depending on how the company chooses to engage its community. But imagine for a moment taking a factory with 1,000 employees and automating the whole thing. That’s 1,000 people directly affected, and a community potentially devastated if all 1,000 employees were laid off.
  7. Legal implications of cutting-edge technology (in partnership with the Legal team or outside counsel) — GDPR compliance, legal liability of machines, new regulations and their implementation. These are the domain of the AI Risk Manager in conjunction with counsel and outside counsel.
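
To make the bias item above concrete, here is a minimal sketch, in Python, of the kind of test an AI Risk Manager might commission: checking whether a model's approval rate differs materially across groups (a demographic-parity check). The records, group labels and the four-fifths threshold are hypothetical illustrations, not any company's real data or standard.

```python
# A minimal, hypothetical demographic-parity check.
# Records, group labels and the 0.8 threshold (the "four-fifths rule")
# are illustrative assumptions, not a real audit standard for any firm.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Ratio of the lowest to the highest group approval rate."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
print(rates)                     # approx {'A': 0.67, 'B': 0.33}
print(parity_gap(rates) >= 0.8)  # False -> flag for C-suite review
```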

This voice is a C-suite job and must have equal authority to the sources of revenue in order to stand a chance of protecting the company and protecting the data, i.e. the people who create the data.

I am not here to tell you to stop using data. However, to believe that each of these companies, whose primary purpose is not compliance but profit, will always use this data prudently is naive at best. Engineers and programmers solve problems; they have not been trained to consider existential risks such as feelings, privacy rights, and bias. They see data of all kinds as “food” and “input” to their models. To be fair, I don’t see anything wrong with their approach. However, the company cannot let that exist unfettered. It is risking its entire reputation on using data, which is actually using PEOPLE’S PERSONAL AND OFTEN PRIVATE INFORMATION, to get results.

For many new companies and new initiatives, data is the lifeblood of the effort. It deserves to be protected and safeguarded. Beyond that, since data is about people, it deserves to be treated with respect, consideration, and fairness. Everyone is paying more and more attention to issues like these, and companies must react. People and their data need an advocate in the C-Suite. I recommend a Chief of AI and Automation Risk Management.

AI Safety - The Concept of Independent Audit

For months I have regularly tweeted a response to my colleagues in the AI Safety community about being a supporter of independent audit, but recently it has become clear to me that I have insufficiently explained how this works and why it is so powerful. This blog will attempt to do that. Unfortunately it is likely to be longer than my typical blog, so apologies in advance.

Independent audit, defined: ForHumanity, or a similar entity that exists not for profit but for the benefit of humanity, would conduct detailed, transparent and iterative audits on all developers of AI. Our audit would review the following SIX ELEMENTS OF SAFEAI on behalf of humanity (a toy sketch of how such a checklist might be scored follows the list):

  1. Control — This is an analysis of the on/off-switch problem that plagues AI. Can AI be controlled by its human operators? Today it is easier than it will be tomorrow, as AI is given more access and more autonomy.
  2. Safety — Can the AI system harm humanity? This is a broader analysis designed to examine the protocols by which the AI will manage its behavior. Has it been programmed to avoid human loss at all costs? Will it minimize human loss if there is no other choice? Has this concept even been considered?
  3. Ethics/Standards — IEEE’s Ethically Aligned Design is laying out a framework that may be adopted by AI developers to represent best practices on ethics and standards of operation. There are 11 subgroups designing practical standards for their specific areas of expertise (the P7000 groups). ForHumanity’s audit would look to “enforce” these standards.
  4. Privacy — Are global best practices being followed by the company’s AI? Today, to the best of my knowledge, GDPR in Europe is the gold standard for privacy and would be the model that we would audit against.
  5. Cybersecurity — Regarding all human data and interactions with the company's AI, are the security protocols consistent with industry best practices? Are users safe? If something fails, what can be done about it?
  6. Bias — Have the data sets and algorithms been tested to identify bias? Is the bias being corrected for? If not, why not? AI should not result in classes of people being excluded from fair and respectful treatment.
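
Purely as a thought experiment, here is a minimal Python sketch of how the six elements above might be encoded as an audit scorecard. The element names come from the list; the equal weighting, the 0-to-1 scores and the pass mark are invented for illustration and are not ForHumanity's actual methodology.

```python
# Hypothetical encoding of the SIX ELEMENTS OF SAFEAI as an audit scorecard.
# Element names come from the article; the equal weighting and the pass
# mark are illustrative assumptions, not ForHumanity's real methodology.

SAFEAI_ELEMENTS = ["Control", "Safety", "Ethics/Standards",
                   "Privacy", "Cybersecurity", "Bias"]

def audit_rating(scores, pass_mark=0.75):
    """scores: dict mapping each element to a 0.0-1.0 assessment."""
    missing = [e for e in SAFEAI_ELEMENTS if e not in scores]
    if missing:
        raise ValueError(f"audit incomplete, missing: {missing}")
    overall = sum(scores[e] for e in SAFEAI_ELEMENTS) / len(SAFEAI_ELEMENTS)
    return overall, ("PASS" if overall >= pass_mark else "FAIL")

# Example: a product strong on security but weak on bias testing.
scores = {"Control": 0.9, "Safety": 0.8, "Ethics/Standards": 0.7,
          "Privacy": 0.9, "Cybersecurity": 0.9, "Bias": 0.4}
print(audit_rating(scores))  # (0.766..., 'PASS') under these assumed numbers
```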

The criteria that are analysed are important, but they are not the most important aspect of independent audit. Market acceptance and market demand are the key to making independent audit work. Here’s the business case.

We have a well-established public and private debt market, well over $75 trillion globally. One of the key driving forces behind the success of that debt market is independent audit. Ratings agencies like Moody’s, Fitch and Standard & Poor’s have for decades provided the marketplace with debt ratings. Regardless of how you feel about ratings agencies or their mandate, one thing is certain: they have provided a reasonable sense of the riskiness of debt. They have allowed a marketplace to be liquid and to thrive. Companies (issuers) are willing to be rated, and investors (buyers) rely upon the ratings for a portion of their investment decision. It is a system with a long track record of success. Here are some of the features of the ratings market model:

  1. Issuers of debt find it very difficult to issue bonds without a rating
  2. It is a for-profit business
  3. There are few suppliers of ratings, which is driven by market acceptance. Many providers of ratings would dilute their value and create a least-common-denominator approach: issuers would seek the easiest way to get the highest rating.
  4. Investors rely upon those ratings for a portion of their decision making process
  5. Companies provide either legally mandated or requested transparency into their financials for a “proper” assessment of risk
  6. There is an appeals process for companies who feel they are not given a fair rating
  7. The revenue stream from creating ratings allows the ratings agencies to grow and rate more and more debt

Now I would like to extrapolate the credit rating model into an AI safety ratings model to highlight how I believe this can work. However, before I do that, there is one key feature of the ratings agency model that MUST exist for it to work: the marketplace MUST demand it. For example, if an independent audit were conducted on the Amazon Alexa (this has NOT happened to date) and it failed to pass the audit, or was given a low rating because Amazon had failed some or all of the SIX ELEMENTS OF SAFEAI, then you, the consumer, would have to stop buying it. When the marketplace decides that these safety elements of AI are important, that is when we will begin to see AI safety implemented by companies.

That is not to say that these companies and products are NOT making AI safety a priority today. We simply do not know. From my work, there are many cases where they are not; however, without independent audit, we cannot know where the deficiencies lie. We also can’t highlight the successes. For all I know, Amazon Alexa would perfectly pass the SIX ELEMENTS OF SAFEAI today. But until we have that transparency, the world will not know.

That is why independent audit is so important. Creating good products safely is a lot harder than just creating good products. When companies know they will be scrutinized, they behave better — that is a fact. No company will want a bad rating published about it or its product. It is bad publicity. It could ruin their business, and that is the key for humanity. AI safety MUST become a priority in the buying decisions of consumers and business implementations alike.

Now, a fair criticism of independent audit is the “how” part, and I concur wholeheartedly, but it shouldn’t stop us from starting the process. The first credit rating would have been a train-wreck of a process compared with the analysis conducted by today’s analysts. So it will be for us now, but the most important part of the process is the “intent” to be audited and the “intent” to provide SAFEAI. We won’t get it perfectly right the first time, nor will we get it right every time, but we will make the whole process a lot better with transparency and effective oversight.

Some will argue for government regulation (see Elon Musk), but AI and the work being done by global corporations have already outstripped national boundaries. It would be far easier for an AI developer to avoid scrutiny that is nationally focused than to avoid a process that is transnational and market-driven. Below I have listed the reasons that market-driven regulation, which this amounts to, is far superior to government-based regulation:

  1. Avoids the tax haven problem associated with different rules in different jurisdictions, which simply shifts development to the easiest location
  2. Avoids government involvement, which frequently has been sub-optimal
  3. Allows for government-based AI projects to be rated — a huge benefit regarding autonomous weaponry
  4. Tackles the problem from a global perspective, which is how many of the AI developers already operate
  5. Market-driven standards can be applied, rather than bureaucracies and politicians using the rules as tools to maintain their power
  6. Neutrality

Now, how will this all work? It’s a huge endeavor and it won’t happen overnight, but I suspect that anyone taking the time to properly consider this will realize the merits of the approach and the importance of the general issue. It is something we MUST do. So here’s the process I suggest:

  1. Funded - We have to decide that this is important and begin to fund the audit capabilities of ForHumanity or a like-minded organization
  2. Willingness - Some smart company or companies must submit to independent audit, recognizing that today it is a sausage-making endeavor, but one that they and ForHumanity are undertaking together to build this process and to benefit both humanity and their product by achieving SAFEAI
  3. Market acceptance - This is a brand-recognition exercise. We must grow awareness of the issues around AI safety and have consumers begin to demand SAFEAI
  4. Revenue positive, licensing - Once the first audit is conducted, the company and product will be issued the SAFEAI logo. Associated with this logo is a small per-product licensing fee (cents) payable to ForHumanity or a like organization. This fee allows us to expand our efforts and to audit more and more organizations. It amounts to the corporate fee payable for the benefits to humanity. (Yes, I know it is likely to be passed on to the consumer)
  5. Expansionary - This process will have to be repeated over and over again until the transparency and audit process becomes pervasive. Then we will know when products are rogue and out of compliance by choice, not from neglect or the newness of the process
  6. Refining and iterative - This MUST be a dynamic process that is constantly scouring the world for best practices, making them known to the marketplace and allowing for implementation and upgrade. This process should be collaborative with the company being audited in order to accomplish the end goal of AI safety
  7. Transparent - Companies must be transparent in their dealings
  8. Opaque - The rating cannot be transparent about the shortcoming, in order to protect the company and the buyers of the product who still choose to purchase and use it. It is sufficient to know that there is a deficiency somewhere; there is no need to make that deficiency public. It will be between ForHumanity and the company itself. Ideally the company learns of its deficiency and immediately aims to remedy the issue
  9. Dynamic - This team cannot be a bureaucratic team; it must be composed of some of the most motivated and intelligent people from around the world, determined to deliver AI safety to the world at large. It must be a passion, it must be a dedication. It will require great knowledge and integrity
  10. Action-oriented - I am a doer; this organization should be about auditing, not about discussing. Where appropriate and obvious, as in the case of IEEE’s EAD, adoption should supersede new discussions. Take advantage of the great work being done already by the AI safety crowd
  11. And it has to be more than me - As I have written these words and considered these thoughts, it is always with the idea that it is impossible for one person, or even a small group, to have a handle on “best practices”. This will require larger teams of people from many nationalities, many industries and many belief systems to create the balance and collaborative environment that can achieve something vitally important to all of humanity.

Please consider helping me. You don’t have to agree with every thought. The process will change daily, the goalposts will change daily, “best practices” will change daily. You only have to agree on two things:

  1. That this is vitally important
  2. That independent audit is the best way to achieve our cumulative goals

If you can agree on these two simple premises, then join with me to make these things happen. That requires that you like this blog post. That you share it with friends. That you tell them that it is important to read and consider. It requires you to reconsider how you review and buy AI-associated products. It requires you to take some action. But if you do these things, humanity will benefit in the long run, and I believe that is worth it.

Drawing A Line - The Difference Between Medicine and Transhumanism

I think I am writing this for myself. I hear about advancements in science that can stimulate and re-activate neurons and I think, Amazing! I listen to Tom Gruber talk about memory technology linked directly to your head and I think, no way, Big Brother to the nth degree. Are these very different? Should I have such extreme views? Or am I just nuts? (N-V-T-S for the Mel Brooks fans out there).

I seem to be drawing a line between augmentation and stimulation, between technology and medicine. Maybe that’s not a fair line, but I draw it. I recognize that my line is not the line for everyone, so please don’t take this as some edict or law that I suggest. I know that transhumanism will prevail and people will put tech in their bodies.

Scientists May Have Reactivated The Gene That Causes Neurons To Stop Growing
In Brief Scientists have found a way of reactivating genes in mice to continue neuron growth. The development could be… (futurism.com)

So why do I draw a line between drug, chemical and external stimulus versus internal, permanent implantation? And where does that line start and stop?

One differentiation that may be key is that medicine is an isolated interaction versus ongoing connectivity. Let me try to explain.

When I take a medication or medical treatment that is not implanted, it is a finite decision. Once it enters my body, it either works or it doesn’t, but regardless, the medication is done interacting with the outside world. It is me and the aftereffects, positive or negative. With augmentation/implantation, the device is permanent and it is permanently connected to the outside world. Those are vastly different ongoing relationships. One is distinctly private, the other is inherently public.

This has a big impact on your value judgement, I suspect. When you take a drug, medication or course of treatment, it’s a one-time interaction with the dosage, a one-time interaction with the outside world essentially invading your body. There is no need for ongoing communications (outside of results-based measurements). It’s between you and your body. Any information obtained by the outside world would have to be granted.

The introduction of tech, whether implanted or nanotechnology designed to treat disease, results in a diagnostic relationship between you and the outside world that is no longer controlled by your brain and five senses; in fact, that communication is completely out of your control. I believe that is the key distinction and exactly where my discomfort lies: control.

Now the purveyors of these technologies will argue that you have control, but it can never be guaranteed. The only way to guarantee it would be to 100% eliminate the communication protocol between the inside of your body and the outside world. Nothing is perfectly secure when it comes to technology.

But I also said that it affects your value judgement. This is not a black-or-white issue. I agree with Tom Gruber and Hugh Herr: if I had a disability and the best way to be restored to full human capability was via augmentation or implantation, I suspect I would choose the tech, accepting the risk that my tech could be compromised to my detriment, simply because my immediate need had been met. I have never had a debilitating disability, but I believe that is the choice I would make, and that many people would agree. In the end this is about choice, and I believe that all should have the right to choose in this matter. But this particular choice is a value judgment focused on rehabilitation. I think the equation changes when we are talking about creating super-human traits.

To be clear, super-human traits are those senses or bodily functions that surpass human norms of that sense or function. So in the pursuit of super-human traits, I am happy to accept the introduction of artificial medicines into my body, because they do not create an ongoing communications vehicle with the outside world that may be hijacked. If that medicine can increase my longevity, clear up cloudiness in my brain, make my blood vessels more flexible, then I will take the risk of side effects in order to be super-human in that respect.

But pacemakers present an interesting gray area. Traditionally, a pacemaker would be returning me to normal human-level function. But what if we could each receive a pacemaker at birth that perfectly handled cholesterol, high blood pressure and heart function, completely eliminating heart-related death? Would that be a tech I would accept? I would be making my cardiac functions super-human, but exposing myself to the risk that the device could be hijacked or damaged in its communication with the outside world.

A few distinctions. First, such a device probably doesn’t tie directly into the communication control between my body and the outside world. Second, the risk of nefarious interaction with the device probably only limits me to natural human performance, taking away my super-human heart or movement. So it would appear that the risk may be worth the potential reward. However, the risk of hijacking is already increased. Could my heart be stopped? That’s a serious consequence.

Now, let’s put that tech in our brains, or any of our five senses, or our mobility. The rewards have been espoused by many already. Personally I can see the benefits, but now the risk is just too high. The constant communication between my senses or my brain and the outside world opens me up to hacking. It has the potential to change me, to change how I perceive the world. I can be manipulated, I can be controlled. It places my sense of self at risk. In fact, the positive operations of the augmentations themselves change who I am at my core. And thus, I draw my line…

By drawing this line, I create consequences for myself. Transhumans will pass me by, maybe not in my lifetime, but these consequences apply to all who share my line in the sand. I will struggle to compete for jobs. I will not have a perfect memory. Fortunately, those things don’t define who I am, nor where I derive happiness, and I hope that others will remember that when the immense peer pressure to implant technology enters their lives.

In the end, I find humanity, with our flaws and failings, to be perfect as is, beautifully chaotic. So this concept of weakness and fear being played upon by advancing technology feels sad and contrived, sort of like the carnival huckster playing on your fear that you aren’t smart enough, that your memory has some blank spots, that you struggle to remember people’s names when you meet them. It’s through our personal chaos, our personal challenges and our personal foibles that we find our greatest victories and our most important lessons. I see a path for technology that would aim for Utopia and leave us dull, dreary and bored, automatons searching for the next source of pleasure. I see implanted brain and sense technology controlling us, not delivering great joy and happiness in our lives.

I guess that is the problem with all of the “intelligence chasing”. The implication is that none of us is smart enough. Professor Hugh Herr thinks that we all should have bionic limbs: why not be able to leap tall buildings in a single bound? Elon Musk sees it as obvious that we won’t be able to compete with machines because our analog senses are too slow, so he aims to increase the data-extraction capabilities of our brains.

What is all this crap for? If we have a single goal in life, isn’t it something akin to happiness, or joy, or peace, or love? Please explain to me how being smarter gives you peace. Or how being able to jump higher or run faster gives you joy.

Some will try to answer those questions as follows: if you are smarter and have a better memory, you can get a better job and make more money. If you have stronger legs you can run faster, jump higher, be better at sports or manual labor. But let’s take the sports example: if you have “bionic” legs and win a long-jumping contest, will that be gratifying at all? Especially if the competitors are not augmented? My reaction is “who cares”, you should have won… boring.

Regardless of my view, there will be people who find happiness in beating others. There is a distinct difference between the joy of victory in a friendly competition and the joy you feel when you are taking the last job available because you have an augmented brain and the other applicants do not. Quite honestly, I would find the latter shameful, but we live in a society that rewards and cheers the person willing to “invest” in their future. This last point is the reason that transhumanism advances mostly unfettered. Humans are competitive, ruthless and self-centered enough that some people will choose augmentation in order to be superior. Society will laud people who do so because we are convinced that more technology, more brains, more assets and more skills will lead to happiness. I support your right to augment your brain, to make yourself superior to others, and I support your right to make bad choices; I believe this will be one of them.

Privacy - GDPR for the US

You do not have the right to privacy, at least not guaranteed by your constitution or by law. Years and years of application of the 4th Amendment (unreasonable search and seizure) have created a world of “reasonable expectation of privacy”. To give you an idea of how narrowly this is defined: in the Smith v. Maryland (1979) Supreme Court case, the dialing of digits on your telephone constituted consent to share that information with the telephone company, surrendering your privacy. This decision is being applied over and over again in the digital age by technology companies who use your data and information to enormous profit ($24 billion in Q2 2017 for Google alone). Recently an amici curiae brief was prepared by 42 legal scholars to suggest that the times have changed and continued use of the Smith decision is problematic. I agree times 10. I have argued that we need to rebuild our right to privacy from the ground up with a constitutional amendment protecting privacy. The problem is two-fold: 1) that’s a hard thing to do; 2) James Comey, speaking at a Boston security conference on March 7th, 2017, said “there is no such thing as privacy anymore in America”, which indicates that the authorities (and the largest companies in the world) like their very long reach into your lives and your data.

Recently Europe has actually led the way on privacy with GDPR (the General Data Protection Regulation), which goes into effect in Europe on 25 May 2018. This new law affects any company around the world looking to do business with a European person. It is an excellent start to the rights of individuals in the digital age. I would advocate for something similar here in the US, but I expect the resistance to be substantial from the largest companies in the big-data business: Amazon, Facebook, Google and now the ISPs. They have zero interest in your privacy; in fact, if everyone went dark on these businesses, they would be hard-pressed to exist. Here’s a summary of the key points that GDPR provides for the citizens of Europe:

Consent
The conditions for consent have been strengthened, and companies will no longer be able to use long illegible terms and conditions full of legalese, as the request for consent must be given in an intelligible and easily accessible form, with the purpose for data processing attached to that consent. Consent must be clear and distinguishable from other matters and provided in an intelligible and easily accessible form, using clear and plain language. It must be as easy to withdraw consent as it is to give it.

Data Subject Rights

Breach Notification

Under the GDPR, breach notification will become mandatory in all member states where a data breach is likely to “result in a risk for the rights and freedoms of individuals”. This must be done within 72 hours of first having become aware of the breach. Data processors will also be required to notify their customers, the controllers, “without undue delay” after first becoming aware of a data breach.

Right to Access
Part of the expanded rights of data subjects outlined by the GDPR is the right for data subjects to obtain from the data controller confirmation as to whether or not personal data concerning them is being processed, where and for what purpose. Further, the controller shall provide a copy of the personal data, free of charge, in an electronic format. This change is a dramatic shift to data transparency and empowerment of data subjects.

Right to be Forgotten
Also known as Data Erasure, the right to be forgotten entitles the data subject to have the data controller erase his/her personal data, cease further dissemination of the data, and potentially have third parties halt processing of the data. It should also be noted that this right requires controllers to weigh the subjects’ rights against “the public interest in the availability of the data” when considering such requests. (Personally, I think this remains a weakness of the law, as there is too much subjectivity.)

Data Portability
GDPR introduces data portability — the right for a data subject to receive the personal data concerning them, which they have previously provided, in a ‘commonly used and machine readable format’, and the right to transmit that data to another controller.
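
In engineering terms, portability is usually read as: give the data subject their data in a structured, machine-readable file. Here is a minimal sketch, assuming a simple in-memory user record; the field names and JSON layout are my illustration, not anything the Regulation prescribes.

```python
# Minimal sketch of a data-portability export: hand the data subject their
# personal data in a machine-readable format (JSON here). The record fields
# are hypothetical; GDPR does not mandate any particular schema.

import json
from datetime import datetime, timezone

def export_user_data(user_record):
    """Bundle one user's personal data into a portable JSON document."""
    return json.dumps({
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "format": "application/json",  # a 'commonly used, machine readable format'
        "personal_data": user_record,
    }, indent=2)

user = {"name": "Jane Doe", "email": "jane@example.com",
        "purchase_history": [{"item": "book", "date": "2018-03-01"}]}
print(export_user_data(user))
```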

Privacy by Design
Privacy by design as a concept has existed for years, but it is only now becoming part of a legal requirement with the GDPR. At its core, privacy by design calls for the inclusion of data protection from the onset of the designing of systems, rather than as an addition. More specifically — ‘The controller shall..implement appropriate technical and organisational measures..in an effective way.. in order to meet the requirements of this Regulation and protect the rights of data subjects’. Article 23 calls for controllers to hold and process only the data absolutely necessary for the completion of their duties (data minimisation), as well as limiting access to personal data to those needing to carry out the processing.
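
One way privacy by design and data minimisation show up in code is at the point of collection: keep only the fields a declared purpose requires and discard the rest. A toy sketch follows; the purposes and field lists are invented for illustration, not definitions from the Regulation.

```python
# Toy illustration of data minimisation: retain only the fields needed
# for a declared processing purpose. Purposes and field lists are
# hypothetical examples.

REQUIRED_FIELDS = {
    "shipping": {"name", "address", "postcode"},
    "newsletter": {"email"},
}

def minimise(raw_record, purpose):
    """Strip a submitted record down to the fields the purpose needs."""
    allowed = REQUIRED_FIELDS[purpose]
    return {k: v for k, v in raw_record.items() if k in allowed}

raw = {"name": "Jane", "address": "1 Main St", "postcode": "12345",
       "email": "jane@example.com", "birthday": "1980-01-01"}
print(minimise(raw, "newsletter"))  # {'email': 'jane@example.com'}
```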

Data Protection Officers
Currently, controllers are required to notify their data processing activities with local DPAs, which, for multinationals, can be a bureaucratic nightmare with most Member States having different notification requirements. Under GDPR it will not be necessary to submit notifications / registrations to each local DPA of data processing activities, nor will it be a requirement to notify / obtain approval for transfers based on the Model Contract Clauses (MCCs). Instead, there will be internal record keeping requirements, as further explained below, and DPO appointment will be mandatory only for those controllers and processors whose core activities consist of processing operations which require regular and systematic monitoring of data subjects on a large scale or of special categories of data or data relating to criminal convictions and offences. Importantly, the DPO:

  • Must be appointed on the basis of professional qualities and, in particular, expert knowledge on data protection law and practices
  • May be a staff member or an external service provider
  • Contact details must be provided to the relevant DPA
  • Must be provided with appropriate resources to carry out their tasks and maintain their expert knowledge
  • Must report directly to the highest level of management
  • Must not carry out any other tasks that could result in a conflict of interest.

As you can see, this is a game-changer as it relates to your existing privacy relationships in the United States and many other countries. Since privacy is not explicitly guaranteed by the Constitution, the people have fought to claw back that privacy time and again. The proverbial slippery slope has been tilted against the individual from the outset, which is why I suggest we change the slope completely and ask for the constitutional amendment. Failing that, I would like to see lawmakers begin to implement the features above from GDPR, with the added change that it would be law enforcement, and not controllers, who would weigh the public interest in the availability of the data against the right to be forgotten.

Lastly, I call on the technology world to build a series of privacy tools:

  1. Privacy App — designed to track your own data: who has it, what it is, where it goes. This app ought to plug into any aspect of your life and change your privacy settings in an instant.
  2. Right to be Forgotten tool kit — an internet eraser that allows the individual to quickly and completely clean up their online life and their data footprint (a toy sketch of this follows below).

I suspect that both apps would be wildly successful, and thus are already in development. Personally, I can’t wait to use them.
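
As a hint of what the second tool might look like, here is a toy sketch that generates an Article 17 erasure request for each service holding your data. The service names, contact addresses and letter text are invented placeholders; a real tool would need each company's actual erasure process.

```python
# Toy sketch of a "Right to be Forgotten" tool kit: generate an erasure
# request for each service holding your data. Service names, addresses
# and the letter text are invented placeholders for illustration.

TEMPLATE = (
    "To: {contact}\n"
    "Subject: Erasure request under GDPR Article 17\n\n"
    "I request the erasure of all personal data you hold about me\n"
    "({email}), and confirmation once it is complete.\n"
)

SERVICES = {
    "ExampleSocial": "privacy@example-social.test",
    "ExampleShop": "dpo@example-shop.test",
}

def erasure_requests(email):
    """Yield one pre-filled request per known data holder."""
    for name, contact in SERVICES.items():
        yield name, TEMPLATE.format(contact=contact, email=email)

for service, letter in erasure_requests("jane@example.com"):
    print(f"--- {service} ---\n{letter}")
```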

Losing a Piece of our Humanity - A Cost of Technological Self-Sufficiency

I’ll admit it, I grew up on Little House on the Prairie. Back in the day when we had 3 to 5 channels to choose from and one decent TV, occasionally my mom got to watch what she wanted to watch, and Little House was one of those things. If you are unfamiliar with this TV show, which ran from 1974–1982, you missed the adaptation of the identically titled books by Laura Ingalls Wilder. It was her autobiographical tale of life in the 1870s in the Upper Midwest of America. So certainly some of what I will discuss here is romanticized and tainted by TV, but I think much of the premise holds.

In the late 1800s, there was no such thing as self-sufficiency. It was simply impossible. If you were able to grow enough food for your family to survive, then you were in no position to produce the materials for their clothing. And the food you grew would have been very limited: one, maybe two crops, with a small vegetable garden on the side. You still had to purchase additional goods to help cook meals and maintain a semblance of variety. You weren’t able to produce the nails and horseshoes for your farm and farm animals. If your wagon wheel or plow was damaged, the materials for repair were not readily available. How about the lumber for your home? The point is that you had no self-sufficiency. Time and again, towns were started where the rural folk could gather and build enough custom for the non-farming community to provide much-needed specialty services, such as the general store, mill and bank. Schools and churches also provided a sense of community, where many if not all gathered periodically to meet and socialize.

There was a natural by-product of community life back then. It was a necessary form of social cohesion that I think is a cornerstone of our humanity, well developed over thousands of years of social behavior. People genuinely needed each other. You were needed by other people to survive. You were needed to create volume for businesses and schools. You were needed, in person, for the community to be social. Technology has robbed us of that need.

In today’s society, we have been taught that needing help from others is a weakness. We strive for self-sufficiency, although it is an illusion. We believe that with our internet connections and electronic wallets, every manner of good can be delivered to us. There is no need to “meet in town”. “Why would I leave the comfort of my couch?” In fact, we continue to develop our technology seemingly to create more and better ways to avoid direct human contact altogether. We text instead of speaking on the phone. We FaceTime and Skype instead of visiting. We order FreshDirect or Peapod to avoid the grocery store. All of this creates an illusion that we do NOT need others. But these barriers have put us terribly out of practice at the thing the “need” for community taught us best: coping with other people, especially around arguments and debate.

We see it all the time now. If someone disagrees with someone else, they can blast them in the comments section or on Facebook. Our media is so busy entertaining us that it insists on having opposing views for every discussion. Those opposing views aren’t even permitted to reach consensus or sort out their issues; clearly that would not be entertaining enough. Empathy is becoming a lost art. When was the last time you heard a news or political pundit, split-screened with his or her opponent, stop and opine on the merits of the opponent’s argument without the proverbial “but” at the end of the sentence?

This isn’t how we were meant to deal with conflict, but the difference is we don’t think we “need” any of these people — we are self-sufficient. In the late 1800s, if we got in a fight with the owner of the mill, we had to figure it out. We’d be doing business again real soon. We would see them at the church picnic. Our need for each other reminded us constantly of our humanity. We are real people, with real feelings and real opinions, and to co-exist we had to work through our problems. We had to empathize. We had to sympathize, and we had to sort it out. If we didn’t, life became worse. There was a COST associated with our failure to compromise and to come to terms. None of this even accounts for the occasional personal or family catastrophe, which certainly couldn’t be handled by the individual or even the family. There was no insurance policy. The closest thing to an insurance policy was to be a successful and respected member of your community, so that they would band together and help you in your time of crisis. Personally, I believe that we were built to help each other.

Now I have no illusions that everything was as bucolic as Little House portrayed life. I am certain that people fought and genuinely didn’t like each other, but it was not with the same vitriol and strident tones. Without the need to face our adversaries today, we say terrible, hurtful things; we never compromise and we never empathize anymore.

So, in conclusion, I submit that we have sacrificed the very fabric of our society for technology and the illusion of self-sufficiency. Until we learn to need each other again, until we choose to empathize, until we remember to put others’ needs before our own, we will be stuck in a polarized downward spiral of discord, without the skills to right the ship. That is a cost of technology that few have spoken about and fewer have measured and counted. But it is real and immensely valuable.

Distilling Nick Bostrom’s Machine SuperIntelligence (MSI) Desiderata

In the process of answering Bostrom’s desiderata on Machine SuperIntelligence (MSI), I’ve gotten a series of requests from friends and followers such as “Desiderata?” or “What the heck did he just say?”. I also got a number of comments like “I couldn’t get through it”, “fell asleep” and “too dense”. All of this reminded me that part of the mission at ForHumanity is to make sure that all people can understand the challenges that face us from AI and Machine SuperIntelligence. So, with no disrespect meant towards Professor Bostrom, I would like to distill his desiderata into a quick read and bullet points. Professor Bostrom does some nice work in creating this document. ForHumanity doesn’t agree with all of it and will be prescribing some alternative thoughts, but unless the masses are more aware of the issues, how can we expect them to respond at all?

  1. Desiderata — a fancy word for “the essentials”. Nick Bostrom is the author of the book SuperIntelligence, which shook Gates, Hawking and Musk into believing that “AI is the single greatest challenge in human history”. Bostrom is trying to lay out for us the things that we need for a successful discovery and implementation of Machine SuperIntelligence
  2. Machine SuperIntelligence — There are two kinds of artificial intelligence: narrow and general. We do not have general AI today. We have many, many pockets of narrow AI. The simplest example of narrow AI is a calculator. It does computations faster than we can. Narrow AI is everywhere today: in your Amazon, your Facebook, your smartphone translators and your car. General AI is defined as a machine intelligence that thinks about all the things that humans think about, oftentimes better than we do. Lots of movies have been made about this; do not discount the visions that you see in these movies, as they are based in aspects of reality
  3. When will we have General AI? There is no agreement on this point. Some believe as early as 2025 (10% of researchers), others argue that it is more like 30 years away (50% of researchers).
  4. Why should I care about General AI? — Many in AI research agree with Bostrom that the discovery and implementation of Machine SuperIntelligence might be the last thing that human beings EVER discover. That means a VERY different world from today, no matter how you think about it
  5. So again, why should I care NOW? — The best answer I can give you is that AI is being developed and enhanced EVERY DAY, and we don’t know how long it will take us to create the right safeguards, the right guidelines, the right oversight, the right laws and the right changes in society to deal with the implementation of Machine SuperIntelligence. If Machine SuperIntelligence arrives in 2030, but the solutions needed to make it SAFE arrive in 2033, we are already 3 years TOO LATE.
  6. So back to Bostrom’s Essentials — I am going to list them, explain briefly and then leave a note of agreement or disagreement from ForHumanity’s perspective
  7. Efficiency — Bostrom would like Machine SuperIntelligence (MSI) to arrive as quickly and as easily as possible. Eliminate as many obstacles as possible, because it would reap such great rewards. Bostrom wants everyone who can benefit to benefit as soon as possible. (DISAGREE)
  8. MSI risk — Also known to Terminator movie watchers as SKYNET. The idea here is that the introduction of MSI could create a disaster scenario. Bostrom argues for considerable work in this space. (AGREE)
  9. Stability and global coordination — Since it has never existed, I hope this isn’t an essential. Bostrom argues that rogue application by nations or organizations for their exclusive benefit would be a bad thing and thus we should all come together in harmony. Good luck with that… (AGREE — but really?)
  10. Reducing the negative impact of MSI — Bostrom is concerned that the move to MSI will have some revolutionary impacts, and some of those will be negative. He argues we should deal with them, but makes little or no attempt to figure out what they might be. Here’s one: how about the possibility of 60–80% unemployment in the traditional sense? That might be a challenge to deal with. So here I agree with Bostrom that we want to reduce the negative impacts; ForHumanity just plans to try to deal with them. (AGREE)
  11. Universal benefit — Bostrom argues that all humankind should benefit from MSI. Of course we agree; the How? seems to be the challenge. (AGREE)
  12. Total human welfare — Here Bostrom relies upon Robin Hanson’s spectacular estimates of future wealth as a result of MSI. For example, Hanson takes current global GDP of 77.6 trillion USD and expands it to 71 quadrillion USD (roughly a 900-fold increase). Yeah… if we have it, great, but let’s not plan on it. (AGREE — lovely idea)
  13. Continuity — It is argued that these changes should be met in an orderly, law-abiding fashion, where everyone is treated fairly and taken care of properly. (AGREE)
  14. Mind crime prevention — Here Bostrom started to lose me. Arguing already for the rights and protections of a machine mind seems premature; however, European Union authorities have already recommended it for actual policy. This issue is very tricky and I believe will create a schism in society. For now, ForHumanity will (DISAGREE, but this needs a much longer explanation)
  15. Population control — Bostrom argues that because people will be living forever, we can’t have too many people, and that births should be controlled and coordinated. First, living forever, really? Digitally downloading ourselves into machines, really? These are essentials? This issue is filled with difficult theoretical, philosophical and religious questions, so I will simply say that ForHumanity believes in the right to NOT participate in that world as well. (DISAGREE)
  16. Group control, ownership and governance — Bostrom argues that MSI should be governed by an enlightened, selfless and generous group. This group should be thoughtful and creative in their solutions to the new problems created by the MSI — but isn’t that what we created the MSI for? I am being snarky, of course. Human beings make mistakes; we have flaws, and they are beautiful. I hope that we have a chance to be in charge and that we respond with grace, self-sacrifice and enlightened governance for all people. (AGREE)

So that was Bostrom’s desiderata. I hope this summary was helpful for everyone. I of course will continue to delve deeply into the work, as it deserves a thoughtful and thorough response. ForHumanity agrees with the challenge and importance of the discussion; it is my challenge to bring practical solutions to many of these challenges and to aid all of us in achieving Bostrom’s magnificent Machine SuperIntelligence world.