New C-Suite Position - AI and Automation Risk Management

Many of you will be familiar with the challenges that Facebook is facing.

See John Harris, "The Cambridge Analytica saga is a scandal of Facebook's own making" (www.theguardian.com).

There are internal disagreements about how data has been used. Was it sold? Was it manipulated? Was it breached? The episode has put the company itself at risk and highlighted the need for a new position at the C-suite level, one that most companies have avoided until now: AI and Automation Risk Management.

Data is the new currency; data is the new oil. It is the lifeblood of all Artificial Intelligence algorithms and machine and deep learning models. It is how our machines are being trained. It is the basis for the best and brightest to uncover new tools for healthcare, new ways to protect security, and new ways to sell products. In 2017, Google and Facebook alone took in advertising revenue close to $60 billion, all built on data. However, the use of that data is now at risk because of perceived abuses by Facebook and others.

Data is also, and more importantly, about PEOPLE. Data is PERSONAL, even when there have been attempts to anonymize it. People need an advocate inside the company defending their legal, implied, and human rights. This is a dynamic marketplace, with new rules, regulations, and laws being considered and implemented all the time. AI and Automation face substantial challenges in their development. Here is a short list, and NONE of these are a priority for engineers and programmers, despite the occasional altruistic commentary to the contrary. As you will see, the advancement of AI and Automation requires full-time attention:

  1. Ethical machines and algorithms — Millions upon millions of decisions are being made by machines and algorithms. Many of these decisions are meant to be based upon our value system as human beings. That means our Ethics need to be coded into the machines, and this is no easy task even with a single, agreed-upon set of Ethics. Deriving profit from Ethics is tenuous at best, and there is certain to be a cost.
  2. Data and decision bias — Our society is filled with bias, our perceptions are filled with bias, and thus our data is filled with bias. If we do not take steps to correct bias in the data, then our algorithmic decisions will be biased (a minimal sketch of such a check appears after this list). However, correcting for bias may not be as profitable, which is why it needs to be debated in the C-suite.
  3. Privacy — There is significant pushback forming over what privacy means online. GDPR in Europe is a substantial set of laws providing people with increased transparency, privacy, and in some cases the right to be forgotten. Compliance with GDPR is one responsibility of the AI Risk Manager.
  4. Cybersecurity and usage security (a Know-Your-Customer process for data usage) — Companies already engage in cybersecurity, but the responsibility is higher when you are protecting customer data. Furthermore, companies should adopt the finance industry standard of "Know Your Customer (KYC)": they must know and understand how data is being used by clients in order to prevent abuses or illegal behavior.
  5. Safety (can the machines that are being built be controlled, and can unintentional consequences to human life be avoided?) — This idea is a little farther afield for most, but now that an Uber autonomous vehicle has been involved in a fatality, it is front and center. The AI Risk Manager's job is to consider the possibilities of how a machine may injure a human being, whether through hacking, negligence, or system failure.
  6. Future of work (as jobs are destroyed, how does the company treat existing employees and the displaced workers and communities?) — This is the PR challenge of the role, depending on how the company chooses to engage its community. Imagine for a moment taking a factory with 1,000 employees and automating the whole thing. That is 1,000 people directly affected, and a community potentially devastated if all 1,000 are laid off.
  7. Legal implications of cutting-edge technology (in partnership with the legal team or outside counsel) — GDPR compliance, legal liability of machines, and new regulations and their implementation are the domain of the AI Risk Manager, working with in-house and outside counsel.
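To make the bias point in item 2 concrete, here is a minimal, hypothetical sketch of the kind of check an AI Risk Manager might ask for before a model ships: comparing the approval rate of an automated decision across demographic groups. The group names, records, and the 80% threshold are illustrative assumptions, not drawn from any real company's data or from this article.

```python
# Hypothetical audit: check an automated decision for group-level disparity.
# All records and thresholds below are illustrative assumptions, not real data.
from collections import defaultdict

# Each record: (demographic_group, model_decision), where 1 = approved, 0 = denied.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

# Tally approvals and totals per group.
totals = defaultdict(int)
approvals = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

rates = {group: approvals[group] / totals[group] for group in totals}
print("Approval rates by group:", rates)

# A common, simplified fairness screen: flag any group whose approval rate
# falls below 80% of the highest group's rate (the "four-fifths rule").
best = max(rates.values())
flagged = [group for group, rate in rates.items() if rate < 0.8 * best]
if flagged:
    print("Potential disparate impact; review before deployment:", flagged)
```

A screen like this does not fix bias on its own; it surfaces a disparity so that the trade-off between correcting it and shipping the model can be debated at the C-suite level, which is exactly the point of item 2.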

This voice is a C-suite job and must have authority equal to the sources of revenue in order to stand a chance of protecting the company and protecting the data, i.e. the people who create the data.

I am not here to tell you to stop using data. However, believing that each of these companies, whose primary purpose is not compliance but profit, will always use this data prudently is naive at best. Engineers and programmers solve problems; they have not been trained to consider risks such as feelings, privacy rights, and bias. They see data of all kinds as "food" and "input" to their models. To be fair, I don't see anything wrong with their approach. However, the company cannot let it exist unfettered. It is risking its entire reputation on using data, which means using PEOPLE'S PERSONAL AND OFTEN PRIVATE INFORMATION, to get results.

For many new companies and new initiatives, data is the lifeblood of the effort. It deserves to be protected and safeguarded. Beyond that, since data is about people, it deserves to be treated with respect, consideration, and fairness. Everyone is paying more attention to issues like these, and companies must react. People and their data need an advocate in the C-Suite. I recommend a Chief of AI and Automation Risk Management.