
Technology is an Autocracy - and the Risk from Externalities is Growing

I suspect this is not an idea that many have considered. To date, as a society, we have not really appreciated how technological change occurs, and we have rarely considered how new change should be governed. A discovery or an advancement of science, like all other inventions, is not accomplished based upon the will of the majority. There is no vote, and there is no consideration for society at-large (externalities). Rarely is the downside risk considered. One person sees a problem, through their own personal view of that problem, and aims to fix it. That is entrepreneurial, the foundation upon which much of Western Capitalism is built. It is also Authoritarian: one person, one rule, and little or no accountability. Scary when you think about it. When you combine this “process” and lack of control with our species’ other great skill, “problem solving”, you create technological innovation and advancement with a momentum that feels unstoppable. In fact, if you even “suggest” halting technological progress (which I am not), you get one of two responses: “Luddite” or “You can’t”. That is how inexorable society considers technological change to be.

So let me explain this in more detail. All technological advancement is about someone, somewhere, seeing a problem, a weakness, a difficulty, a challenge, and deciding to overcome it. That is admirable and impressive. It is also creating problems. As a species, we weigh the aspects of a decision poorly, all the pros and all the cons. “Will we make money from this?”, “Is this advancement cool?” and “Does this make my life easier?” are often the only inputs to our production and purchase decisions. There is a broad societal acceptance that “easier”, “freer” and “convenient” are universally beneficial. A simple counter-argument, of course, can be found in the gym. Your muscles would insist that “easier”, “freer” and “convenient” are not the best way to build strength or stamina. They require “challenge”, “difficulty” and “strain” in order to grow and improve.

So when a new advance comes along, if it makes our life easier, even in the smallest way, we snatch it up instantly. Take, for example, Alexa and Google Home. They are hugely successful products already, but was it really that difficult to type out our search queries? Defenders will say things like “now I can search while I am doing something else” or “this frees up my hands to be more productive”. And of course supporters point to the obvious assistance for a disabled person who is unable to type. But let’s examine the other side of the coin. What are the downside risks of such a product? They are usually not part of the sales process, I’m afraid, so you have to think carefully to compile a list. For example, has verbal search caused us to lose a different skill, like the unchallenged muscle, where finding the solution was as important as the actual answer? And beyond that specific loss (the lost process and associated effort that may have strengthened the muscle, which in this case is the mind), what are some of the associated externalities of voice assistants? Let’s take a look at a few.

Is that search query worth having Amazon or Google record EVERY SINGLE WORD your family speaks within hearing distance? How about the fact that Amazon and Google now build personal profiles about your preferences based upon this information? Do you realize that this then limits your search results accordingly? Companies are taking CHOICE away from you, and surprisingly, people don’t seem to care; in fact, some like the idea. Other externalities exist as well. Recently, an Alexa recorded a family’s conversation and sent it to random contacts.

An Amazon Echo recorded a family’s conversation, then sent it to a random person in their contacts (The Washington Post, www.washingtonpost.com)

Or this externality?

Alexa and Siri Can Hear This Hidden Command. You Can’t. (The New York Times, www.nytimes.com)

Without getting too paranoid, this last one is downright creepy and dystopian, and its potential ramifications are catastrophic if carried to the extreme. I am certain that when you decided to purchase your voice assistant, none of these externalities were factored into your buying decision.

I use this as a real-life example because our evaluation of new technology is based upon bad math: Is it cool? Is it easier? Is it profitable? Can I afford it / afford to be without it? Nowhere in that equation are the following:

1) Does it further eliminate privacy?

2) Does it make me lazy or more dependent upon a machine?

3) Does it keep me from using all aspects of my brain as much?

4) Does it allow me to interact with actual humans less?

5) What new and different risks are associated with this product?

6) If someone wants to do me harm, does this enable them to do so in an unforeseen way?

One of the chief arguments for technological advancement is that it frees us from mundane, routine tasks. To assume that those tasks have no value is ludicrous on its face, but more importantly, if we are “freed” up, what are we freed up to do? Usually, we are told it is high-minded things… be entrepreneurial… be poetic… be deep thinkers… solve bigger problems… spend more time with loved ones. To be honest, I don’t see an explosion of those endeavors. A further example of our bad math…

We adopt these technological advancements often without thought about the impact they may have on our psyche, our self-worth, our ambitions, or our safety. We make these choices because they are cool, or because they make something easier. With purchase decisions this simple and this “upside-only”, developers of technology have an easy time making products attractive.

This blog post has only lightly touched on malice, but all of society should be concerned about malicious intent and technology’s impact on our susceptibility to it. The more connected we are, and the more dependent upon technology we are, the easier it is to cause mass harm. Perfect examples are recent virus attacks that spread to over 47 countries in a matter of a few hours. Sometimes the consequences are minor, such as locked-up computers or hassles like corrupted programs. Other times the hacker/criminal steals money or spies on you. Regardless of the magnitude of the impact, the ability of a criminal to “reach you” and “reach many” has increased almost infinitely.

Here’s a final externality: how would you function without the Internet? Not just for an hour or two, but permanently? How about without power? These modern-day conveniences are assumed to be permanent, but how permanent are they? Do you need to consider how to operate when they are gone? Our connectivity and reliance on power make us deeply dependent and woefully unprepared for these alternatives, even if the odds of occurrence are small. Hollywood frequently paints a grim picture of the dystopian existence that follows when these conveniences are taken away; however, our ancestors existed quite nicely without them. Would you be prepared to survive or even thrive? The chances of these calamities are greater than zero…

Awareness of externalities is important, consideration of downside risk is crucial, and a willingness to realize that everything we do or even purchase has pros AND cons is essential. The more awareness of the “cons” you have, the better chance you have to mitigate those risks and reap a greater benefit from the upside of our technology choices. Most importantly, as a society, we will make better collective decisions on our technological progress and thwart the dangers of Technological Autocracy.

New C-Suite Position - AI and Automation Risk Management

Many of you will be familiar with the challenges that Facebook is facing.

The Cambridge Analytica saga is a scandal of Facebook’s own making | John Harris (The Guardian, www.theguardian.com)

There have been internal disagreements about how data has been used. Was it sold? Was it manipulated? Was it breached? The affair has put the company itself at risk and highlighted the need for a new position at the C-Suite level, one that most companies have avoided up until now: AI and Automation Risk Management.

Data is the new currency; data is the new oil. It is the lifeblood of all Artificial Intelligence algorithms and machine and deep learning models. It is the way our machines are being trained. It is the basis for the best and brightest to begin to uncover new tools for healthcare, new ways to protect security, and new ways to sell products. In 2017, Google and Facebook alone had revenue close to $60 billion from advertising, all due to data. However, usage of that data is at risk because of perceived abuses by Facebook and others.

Data is also, and more importantly, about PEOPLE. Data is PERSONAL, even if there have been attempts to anonymize it. People need an advocate inside the company defending their legal, implied and human rights. This is a dynamic marketplace, with new rules, regulations and laws being considered and implemented all of the time. AI and Automation face some substantial challenges in their development. Here is a short list, and NONE of these are a priority for engineers and programmers, despite the occasional altruistic commentary to the contrary. As you will see, the advancement of AI and Automation requires full-time attention:

  1. Ethical machines and algorithms: Millions and millions of decisions are being made by machines and algorithms. Many of these decisions are meant to be based upon our value system as human beings. That means our Ethics need to be coded into the machines, and this is no easy task even with a single, agreed set of Ethics. Deriving profit from Ethics is tenuous at best, and there is certain to be a cost.
  2. Data and decision bias: Our society is filled with bias, our perceptions are filled with bias, and thus our data is filled with bias. If we do not take steps to correct bias in the data, then our algorithmic decisions will be biased (see the short sketch after this list). However, correcting for bias may not be as profitable, which is why it needs to be debated in the C-suite.
  3. Privacy: Significant pushback is forming over what privacy means online. GDPR in Europe is a substantial set of laws providing people with increased transparency, privacy and, in some cases, the right to be forgotten. Compliance with GDPR is one responsibility of the AI Risk Manager.
  4. Cybersecurity and Usage Security (a Know-Your-Customer process for data usage): Companies already engage in cybersecurity, but the responsibility is higher when you are protecting customer data. Furthermore, companies should adopt the finance industry standard of “Know Your Customer (KYC)”. Companies must know and understand how data is being used by clients to prevent abuses or illegal behavior.
  5. Safety (can the machines that are being built be controlled, and can unintentional consequences to human life be avoided?): This idea is a little farther afield for most; however, now that an Uber autonomous vehicle has been involved in a fatality, it is front and center. The AI Risk Manager’s job is to consider the possibilities of how a machine may injure a human being, whether through a hack, negligence, or system failure.
  6. Future of Work (as jobs are destroyed, how does the company treat existing employees and the displaced workers/communities?): This is the PR challenge of the role, depending on how the company chooses to engage its community. But imagine for a moment taking a factory with 1,000 employees and automating the whole thing. That’s 1,000 people directly affected, and a community potentially devastated, if all 1,000 employees were laid off.
  7. Legal implications of cutting-edge technology (in partnership with the Legal team or outside counsel): GDPR compliance, the legal liability of machines, and new regulations and their implementation. These are the domain of the AI Risk Manager, in conjunction with in-house and outside counsel.
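To make the bias point in item 2 concrete, here is a minimal sketch of the kind of check an AI and Automation Risk Manager might ask for before a model ships. The loan-approval numbers and the 0.8 “four-fifths” threshold are illustrative assumptions on my part, a sketch rather than a full fairness audit:

```python
# Minimal bias check: compare favorable-outcome rates across two groups.
# All data and the 0.8 threshold (the "four-fifths" rule of thumb) are
# illustrative assumptions, not a production fairness audit.

def positive_rate(outcomes):
    """Share of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower group's favorable rate to the higher group's."""
    rate_a, rate_b = positive_rate(group_a), positive_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical loan-approval decisions for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0]  # 37.5% approved

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:
    print("Below the four-fifths rule of thumb; escalate for review.")
```

Even a crude ratio like this gives the C-suite something concrete to debate, because the most profitable model and the least biased model are not always the same model.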

This voice is a C-suite job and must have authority equal to the sources of revenue in order to stand a chance of protecting the company and protecting the data, i.e. the people who create the data.

I am not here to tell you to stop using data. However, believing that each of these companies, whose primary purpose is not compliance but profit, will always use this data prudently is naive at best. Engineers and programmers solve problems; they have not been trained to consider risks beyond the model, such as feelings, privacy rights, and bias. They see data of all kinds as “food” and “input” to their models. To be fair, I don’t see anything wrong with their approach. However, the company cannot let that approach exist unfettered. It is risking its entire reputation on using data, which is actually using PEOPLE’S PERSONAL AND OFTEN PRIVATE INFORMATION, to get results.

For many new companies and new initiatives, data is the lifeblood of the effort. It deserves to be protected and safeguarded. Beyond that, since data is about people, it deserves to be treated with respect, consideration, and fairness. Everyone is paying more and more attention to issues like these, and companies must react. People and their data need an advocate in the C-Suite. I recommend the Chief of AI and Automation Risk Management.

AI and Automation - Managing the Risk to Reap the Reward

Personally, I have been hung up on Transhumanism. For those of you not familiar, Transhumanism is the practice of embedding technology directly into your body, merging man and machine, a subset of AI and Automation. It seems obvious to me that, when given the chance, some people with resources will enhance/augment themselves in order to gain an edge at the expense of other human beings around them. It’s really the same concept as trying to get into a better university, or training harder on the athletic field. As a species we are competitive, and if winning is important enough, one will do what it takes to win. People will always use this competitive fire to gain an upper hand on others. Whether it is performance-enhancing drugs in sports or paid lobbyists in politics, money, wealth and power create an uneven playing field. Technology is another such tool, and some, if not many, will use it for nefarious purposes, I am certain.

But nefarious uses of AI are only part of the story. When it comes to Transhumanism, there are many upsides: the chance to overcome Alzheimer’s, to return the use of lost limbs, to restore eyesight and hearing. There are lots of places where technology in the body can return the disabled to normal human function, which is a beautiful story.

So in that spirit, I want to shine a light on all of the unquestionable good that we may choose to do with advances in AI and Automation. When we have a clear understanding of both the good outcomes and the bad, we are in a position to understand our risk/reward profile with respect to the technology. And when we understand our risk/reward profile, we can determine whether the risks can be managed and whether the rewards are worth it.

Let’s try some examples, starting with the good:

  1. AI and Automation may enhance the world’s economic growth to the point where there is enough wealth to eradicate poverty, hunger and homelessness.
  2. AI & Automation will allow researchers to test and discovery more drugs and medical treatments to eliminate debilitating disease, especially diseases that are essentially a form of living torture, like Alzheimers or Cerebral Palsy.
  3. Advancement of technology will allow us to explore our galaxy and beyond
  4. Advancement in computing technology and measurement techniques will allow us to understand the physical world around us at even the smallest, building block level
  5. Increase our efficiency and reduce wasteful resource consumption, especially carbon-based energy.

At least those five things are unambiguously good. There is hardly a rational thinker around who will dispute those major advancements and their inherent goodness. It is also likely that the sum of that goodness outweighs the sum of any badness that we come across with the advancement of AI and Automation. Why am I confident in saying that? Because it appears to have been true for the entirety of our existence. The reason technology marches forward is that it is viewed as inherently good, inherently beneficial and, in most cases, as problem solving.

But here is where the problem lies and my concern creeps in. AI has enormous potential to benefit society. As a result, it has significant power. Significant power comes with significant downside risk if that power is abused.

Here are some concerns:

  1. Complete loss of privacy (Privacy)
  2. Embedded bias in our data, systems and algorithms that perpetuates inequality (Bias)
  3. Susceptibility to hacking (Security)
  4. Systems that operate in, and sometimes take over, our lives without sharing our moral code or goals (Ethics)
  5. Annihilation and machine takeover, if at some point the species is deemed useless or, worse, a competitor (Control and Safety)

There are plenty of other downside risks, just as there are plenty of additional upside rewards. The mission of this blog post is not to calculate the risk/reward scenarios today, but rather to highlight for the reader that there are both Risks and Rewards. AI and Automation isn’t just awesome; there are costs, and they need to be weighed.

My aim in the 12+ months that I have considered the AI Risk space has been very simple, but perhaps poorly expressed. So let me try to set the record straight with a few bullet points:

  1. ForHumanity supports and applauds the advancement of AI and Automation. ForHumanity also expects technology to advance further and faster than the majority of humanity expects.
  2. ForHumanity believes that AI & Automation have many advocates and will not require an additional voice to champion their expected successes. There are plenty of people poised to advance AI and Automation.
  3. ForHumanity believes that there are risks associated with the advancement of AI and Automation, both to society at large and to individual freedoms, rights and well-being.
  4. ForHumanity believes that very few people examine these risks, whether among those who develop AI and Automation systems or in the general population.
  5. Therefore, to improve the likelihood of the broadest, most inclusive positive outcomes from the advancement of technology, ForHumanity focuses on identifying and solving the downside risks associated with AI and Automation.

When we consider our measures of reward and the commensurate risk, are we accurately measuring either? My friend and colleague John Havens at IEEE has been arguing for some time now that our measures of well-being are too economic (profit, capital appreciation, GDP) and neglect harder-to-measure concepts such as joy, peace and harmony. The result is that we may evaluate the risk/reward profile incorrectly, reach the wrong conclusions, and charge ahead with our technological advances anyway. It is precisely at that moment that my risk management senses kick in. If we can eliminate or mitigate the downside risks, then our chance to be successful, even if poorly measured, increases dramatically. That is why ForHumanity focuses on the downside risk of technology. What can we lose? What sacrifices are being made? What does a bad outcome look like?
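To see why mitigation has so much leverage, consider a toy expected-value sketch. Every probability and dollar figure below is a hypothetical assumption of mine, chosen only to illustrate the arithmetic rather than to model any real technology:

```python
# Toy risk/reward model: probability-weighted benefits minus
# probability-weighted costs. All numbers are hypothetical.

def expected_value(rewards, risks):
    """Expected benefit minus expected cost for (probability, amount) pairs."""
    upside = sum(p * benefit for p, benefit in rewards)
    downside = sum(p * cost for p, cost in risks)
    return upside - downside

rewards = [(0.60, 100), (0.30, 50)]    # (probability, benefit)
risks = [(0.20, 300), (0.05, 1000)]    # (probability, cost)
print(expected_value(rewards, risks))  # 75 - 110 = -35: not worth it

# Suppose a safeguard cuts the catastrophic risk from 5% to 1%.
mitigated_risks = [(0.20, 300), (0.01, 1000)]
print(expected_value(rewards, mitigated_risks))  # 75 - 70 = 5: now positive
```

Notice that the rewards never changed; trimming a single tail risk flipped the profile from negative to positive. That is the leverage of focusing on the downside.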

With that guiding principle, ForHumanity presses forward to examine the key risks associated with AI and Automation: Safety & Control, Ethics, Bias, Privacy and Security. We will try to identify the risks, and solutions to those risks, with the aim of providing each member of society the best possible outcome along the way. There may be times when ForHumanity comes across as negative or opposed to technology, but in the vast majority of cases that is not true. We are simply focused on the risks associated with technology. The only time we will be negative on a technology is when the risks overwhelm the reward.

We welcome company on this journey and could certainly use your support as we go. Follow our Facebook feed ForHumanity or Twitter @forhumanity_org, and of course the website at https://www.forhumanity.center/