
Independent Audit of AI Systems - FAQ

This article was a portion of ForHumanity’s response to the NYC Economic Development Corp’s Request for Expressions of Interest regarding their proposed Center for Responsible AI. The submission was a joint submission with The Future Society and Michelle Calabro. Independent Audit is offered as the solution for the framework, guidelines, and governance functions of the Center. The proposal had been neither accepted nor rejected at the time of this publication.

1.0 Overview

Independent Audit of AI Systems is a framework for auditing AI systems (products, services, and corporations) by examining and analyzing the downside risks associated with the ubiquitous advance of AI & Automation. It looks at 5 key areas: Privacy, Bias, Ethics, Trust and Cybersecurity. If we can make safe and responsible Artificial Intelligence & Automation profitable whilst making dangerous and irresponsible AI & Automation costly, then all of humanity wins.

The market demand for a comprehensive, implementable, third-party oversight and governance solution is enormous. There are regular and consistent calls for this type of comprehensive oversight, on a global scale. The words ‘Trust, Assurance and Audit’ are frequently used to give humanity the comfort we need with these currently opaque and ungoverned systems. It will impact the world of AI not only in New York City but around the world, thrusting New York into the center of all debate on Responsible AI.

Creation of the framework will be an industry-wide effort from the Audit/Assurance/Trust industry, and they will conduct an iterative, open-source, crowd-sourced dialogue with anyone who wants to engage in the process. The result will be a fully-implementable and dynamic set of rules by which companies can not only know they are being Responsible with their AI but prove it with an independent audit.

1.1 What is it?

A framework for auditing AI systems by examining and analyzing the downside risks associated with the ubiquitous advance of AI & Automation. It looks at 5 key areas: Privacy, Bias, Ethics, Trust and Cybersecurity.

1.2 What can get audited?

Products, services and corporations.

1.3 Why should it exist?

If we can make safe and responsible Artificial Intelligence & Automation profitable whilst making dangerous and irresponsible AI & Automation costly, then all of humanity wins.

1.4 Does the market want it?

The market demand for a comprehensive, implementable, third-party oversight and governance solution is enormous. There are regular and consistent calls for this type of comprehensive oversight, on a global scale. The words ‘Trust, Assurance and Audit’ are frequently used to give humanity the comfort we need with these currently opaque and ungoverned systems.

1.5 Why must this happen in New York City?

Independent Audit of AI Systems is the audit/assurance/trust industry’s answer to AI governance and oversight. It is logical that it should reside in New York City, which is home to each of the Big 4’s global headquarters. Furthermore, it is crucial that an industry-wide initiative be implemented to maximize the impact of the audit rules.

1.6 How will New York City benefit from this?

We expect these rules to be adapted to local jurisdictional law in NYC and globally. This will thrust New York into the center of all debate on Responsible AI. Jobs are likely to be created as the audit process is deepened. New York City will be viewed as the leader of Responsible AI; the place to go for the most “cutting-edge” discussions. The place to go for answers.

1.7 How will humanity benefit from this?

Currently AI is generated to benefit companies, through increased profit. We are not criticizing this point-of-view, but we are suggesting that there needs to be a more balanced approach — one which considers the overall welfare of humanity. Through oversight and third-party independent governance, via Independent Audit of AI Systems, humanity will have a greater assurance that their interests are being considered. People can be assured that ‘best practices’ are being followed, which will increase people’s trust and confidence in AI systems wherever they impact people’s lives. Markets and systems will be fairer — not just in one country, as the audit rules will extend globally as well.

1.8 How will individuals benefit from this?

Individuals will benefit from better, safer products and services. They will benefit through increased education and awareness on what SAFEAI and Responsible AI mean. Individuals will be able to identify SAFEAI goods and services and increasingly have choice and control over how they manage their interactions with AIs in all facets of their lives.

1.9 Will Independent Audit of AI Systems be a self-sustainable system?
Independent Audit of AI Systems is designed to be self-sustaining over time. The SAFEAI logo will be licensed as a “seal-of-approval” to any and all companies that wish to license it. The choice is theirs, as there is no fee paid to ForHumanity for the audit framework and the audits are not conducted by us.

As we develop the Independent Audit of AI Systems framework, we will spend considerable effort to build the brand as well. We want to facilitate people using the logo to inform their buying decisions for products and services. If successful, this will drive demand for the logo from those who pass audits. We will charge a licensing fee in return, justified by both the promotion of the brand and the ongoing maintenance of the system. The primary goal, and thus the driver of licensing pricing, will be the sustainability of the Independent Audit of AI Systems process. It is expected that our initial donors will backstop the organization until it is self-sustaining.

2.0 Who will create it?

Creation of the framework will be an industry-wide effort from the Audit/Assurance/Trust industry, and they will conduct an iterative, open-source, crowd-sourced dialogue with anyone who wants to engage in the process. In order to be successful, it will be global and inclusive of many interested parties.

There will be a dedicated full-time staff of Core Audit Silo Heads whose responsibilities will be the following:

  • Research new ideas/concepts/contributors

  • Suggest possible rules

  • Solicit specific feedback on proposed rules

  • Track and manage dissent, striving for broad consensus

  • Manage discussion for decorum

  • Present proposed rules to Board of Directors

  • Hold public office hours for 2 hours daily, rotating across global time zones

  • Facilitate dialogue on Slack with all interested participants

The Executive Director will have the following responsibilities:

  • Create partnerships with many like-minded stakeholders

  • Facilitate the quarterly update process and Board of Directors votes

  • Release the board votes and new rules

The Board of Directors (maximum 21 members) will consist of a diverse collection of industry professionals and academics who are experts in determining if a rule passes audit criteria.

2.1 How will the audit framework be developed?

Independent Audit of AI Systems will be a comprehensive set of best-practices covering the ‘Core Audit Silos’ of Ethics, Bias, Privacy, Trust and Cybersecurity.

Core Audit Silo Heads will host global public office hours during pre-announced, varying time slots to meet the needs of experts all around the world. They will also conduct an iterative, open-source, crowd-sourced dialogue on an ongoing basis (on a platform such as Slack) with anyone who wants to engage in the process. Corporations (which will be subject to the audit) have the opportunity to raise concerns and objections with the proposed rules. Independent Audit of AI Systems will walk the tightrope between maximum benefit for humanity and practical implementability.

Audit rules will be:

  • Implementable

  • Binary (compliant/non-compliant)

  • Open-Sourced and Iterated

  • Unambiguous

  • Consensus-driven

  • Measurable

The cohort of experts in each ‘Core Audit Silo’ will conduct micro-experiments, use-case experiments and applications to iterate and eventually achieve these standards and best practices. We will improve Responsible AI incrementally, and eventually arrive at consensus-driven rules by examining word choice, definitions, and boundaries.
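
To make these criteria concrete, here is a minimal, purely illustrative sketch of how a binary, measurable audit rule might be represented in code. Every name, field and threshold in it is a hypothetical assumption for illustration, not part of the actual framework:

    from dataclasses import dataclass
    from typing import Callable

    # Hypothetical illustration: one way a binary, measurable, unambiguous
    # audit rule could be encoded. Field names and the example rule are
    # assumptions, not the framework's actual schema.
    @dataclass(frozen=True)
    class AuditRule:
        rule_id: str       # unique, unambiguous identifier, e.g. "PRIVACY-014"
        silo: str          # one of: Ethics, Bias, Privacy, Trust, Cybersecurity
        description: str   # plain-language statement of the best practice
        check: Callable[[dict], bool]  # measurable test: evidence -> compliant?

        def evaluate(self, evidence: dict) -> bool:
            """Binary outcome: True = compliant, False = non-compliant."""
            return self.check(evidence)

    # Example: a hypothetical privacy rule with a measurable, binary check.
    retention_rule = AuditRule(
        rule_id="PRIVACY-014",
        silo="Privacy",
        description="Personal data is retained no longer than 365 days.",
        check=lambda ev: ev.get("max_retention_days", float("inf")) <= 365,
    )

    print(retention_rule.evaluate({"max_retention_days": 120}))  # True (compliant)

The point of the sketch is that a rule phrased this way is implementable, measurable and binary by construction; anything that cannot be expressed as such a check is a candidate for further iteration.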

The result will be a fully-implementable and dynamic set of rules by which companies can not only know they are being Responsible with their AI but prove it with an independent audit.

2.2 Hasn’t this been done before?

Yes. In fact, there are many precedents to learn from. The Independent Audit of AI Systems is a collaborative process designed to enhance the magnificent work already being done by experts and thought-leaders around the world.

We are seeking an outcome similar to that of the FASB and IFRS efforts in the early ’70s: the financial services industry came together and created standardized rules that were adopted by the SEC et al. within a few short years. The growth comes from a level playing field, as we saw with financial accounting in the late ’70s and early ’80s.

Many groups are doing Ethical Guidelines work; the IEEE’s ‘Ethically Aligned Design’ remains the gold standard. With a singular mission to leave the world a little better today than it was yesterday, we will translate high-minded ideals into smaller, more practical baby steps toward implementing new rules. Our goal can be achieved with extensive collaboration from IEEE, NIST and others to adapt their work into an immediately implementable, auditable framework.

2.3 Once an AI System achieves SAFEAI status, what happens?

  • All participants will know how to be responsible with their AI.

  • Passing the Audit is good for 12 months.

  • Even as the rules change quarterly, companies have time to catch up as new ‘best-practices’ are implemented.

  • Compliant companies may choose to license the SAFEAI brand.

  • Rocker tags include Auditor, Services, Corporate, and Product.

  • Annotated with the audit year, e.g. “SAFEAI 2019”.

This post is a follow-on to the previous posts on Independent Audit of AI Systems, both on Medium:

The Merits of Independent Audit of AI Systems (medium.com)
https://medium.com/all-technology-feeds/ai-safety-the-concept-of-independent-audit-370bb45c01d

SAFEAI — A Tool for Concerned Parents (becominghuman.ai)
https://becominghuman.ai/safeai-a-tool-for-concerned-parents-cb9a8218998d

and on the CACM blog:

Governance and Oversight Coming to AI and Automation: Independent Audit of AI Systems (cacm.acm.org)

We welcome all of your support, as we continue to push these concepts forward and work towards bringing oversight and governance with true accountability to the world of AI.

AlphaGoZero - A Reflection and a Concern


AlphaGoZero from DeepMind

The AI community remains “abuzz”, as DeepMind continues to announce AlphaGo Zero successes. For the uninitiated, DeepMind created a new AI/machine learning program designed to improve upon its first iteration, AlphaGo, which had successfully bested the human world champions at the ancient game of Go. Why the buzz? Well, AlphaGo Zero was built “tabula rasa”, which is the philosopher Locke’s term for the human mind at birth, meaning “blank slate”. The original AlphaGo studied human games of Go over and over and over, millions of iterations, in order to learn how to win at the game. Zero (that’s how we will refer to AlphaGo Zero, so that it is clearer) learned tabula rasa, meaning it was simply given the rules and objectives and taught itself. 2.5 days later it surpassed the best players in the world. 21 days later it surpassed the best AI player in AlphaGo. 40 days later it was winning 100 times out of 100. That is the speed at which “mastery of thought or narrow superintelligence” (in the game of Go) was achieved. 40 days, and consider that the test was done without trying to maximize processing power and speed. Recently, the same AlphaGo Zero program also mastered chess and Shogi.
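
For readers who want a mechanical feel for what learning “tabula rasa” means, here is a deliberately tiny, illustrative self-play sketch in Python. It is not DeepMind’s actual method, which pairs deep neural networks with Monte Carlo tree search; the game (Nim), the hyperparameters and every name in it are assumptions chosen only to show the idea: rules in, no human games, strategy out.

    import random
    from collections import defaultdict

    # Tabular Q-learning via self-play on Nim (take 1-3 stones; whoever
    # takes the last stone wins). Purely illustrative of tabula-rasa
    # learning: the agent is given only the rules and the win/lose signal.
    Q = defaultdict(float)        # Q[(stones_left, move)] -> learned value
    ALPHA, EPSILON = 0.5, 0.1     # learning rate, exploration rate

    def legal_moves(stones):
        return [m for m in (1, 2, 3) if m <= stones]

    def pick_move(stones):
        if random.random() < EPSILON:                    # explore
            return random.choice(legal_moves(stones))
        return max(legal_moves(stones), key=lambda m: Q[(stones, m)])

    for _ in range(20000):        # self-play: the agent plays both sides
        stones, history = 15, []
        while stones > 0:
            move = pick_move(stones)
            history.append((stones, move))
            stones -= move
        reward = 1.0              # the player who took the last stone won
        for state, move in reversed(history):
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
            reward = -reward      # alternate sides moving backwards

    print(max(legal_moves(15), key=lambda m: Q[(15, m)]))  # typically 3

After enough episodes the agent discovers Nim’s classic winning strategy (leave your opponent a multiple of four stones, so take 3 from 15) without ever seeing a human play: the same idea in kind, if nowhere near in degree, as what Zero did for Go.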

Okay, that all sounds impressive, but what does it really mean? For a start, here are a few disruptive things it may mean.

  1. The designers at DeepMind have quickly convinced themselves that learning from humans is suboptimal

  2. In this narrow space of intelligence (the games of Go, Chess and Shogi) — this is proof that learning from humans IS suboptimal

  3. Also, in this narrow space, SuperIntelligence has been achieved

  4. Since some believe that learning is learning (a big assumption) — why wouldn’t all learning by machines be accomplished without human input?

  5. We now have an argument for WHY some people will want to implement Artificial General Intelligence. Or said differently, “don’t we want SuperIntelligence in as many areas as possible”?

Let me make sure that is clear for the reader. Unless someone can explain to the world how and why certain “learning” is DIFFERENT from learning the game of Go, Chess or Shogi, the brilliant minds at DeepMind have just proven to you that “learning” from humans is inefficient. That machines, when given a task to learn, will achieve human level mastery and beyond, quickly. This information will be used to demonstrate flawed human thinking time and again — the argument for replacing human thinking.

Said another way — AlphaGo Zero is PROOF that human thinking/learning is suboptimal. Let that sink in… What are some implications of THAT concept:

  1. How fast can I get tech into my head? Some will reach this conclusion as they accept the idea that human thinking is suboptimal and that, to remain competitive, they must “advance” as well

  2. What is the point of human thought? Another conclusion many will reach. Just let the machine do it.

Now before we fall too far off the cliff, let me be clear to say that I am certain that machine intelligence will discover things, learn things and create things that were simply not possible using the human mind alone. Those developments will drive the next few centuries of economic growth and create substantial wealth and opportunity (who that wealth accrues to is an entirely different debate, but to believe it would be equally shared is naive). As a Capitalist and non-Luddite, I believe in the advancement of these technologies and I believe they will succeed faster than most people have anticipated. The future for intelligence, discovery, productivity and exploration is vast and exciting. I am a fan.

But is that everything? Is it even optimal? Seems to me that we are leaving out a lot of key issues when we measure ourselves in the “better off” category. I am not sure if these advancements are for the betterment of society. That’s the key question for me. Why are we advancing society? What about society is advancing? I know the word advancing is “loaded”, as in… of course we want to “advance” society, we should always “advance” things. We always assume that being smarter, acquiring more wealth and making everything easier is better. I think this is a falsehood, missing the mark on our total well-being.

Here are some things that we have “advanced” because of technology:

  1. Income inequality

  2. Loneliness/Depression

  3. Lack of individual survivability

  4. Polarization of society

  5. Breakdown of community

  6. De-Valuing of people and human life

  7. Worship of Intelligence

  8. General drop in our physical fitness

  9. Loss of Faith

Of course those are only some of the negatives and yet we won’t even be able to agree that those are, in fact, all negatives. I am highlighting the point that we continue to use technology to “advance” but our measures of well-being are not being maintained and tested against the technological advancements to determine if we are ACTUALLY better off. We feel more productive - our companies certainly are. But do we have deeper, stronger relationships? Do we experience more love, contentment and joy? I am not sure that we do, so when measured properly, maybe we have stepped back decades because of technology and didn’t even realize it. That is difficult for most people to even contemplate and their knee-jerk reaction is “of course things are better” or “those negatives don’t apply to me”. I am not sure that a fair, considered, and comprehensive assessment of well-being reaches the same conclusion. John Havens, a friend at IEEE, is leading the charge on this discussion and I think it is fruitful. This YouTube link is a good overview of his thoughts on well-being.

On top of all of this — the abdication of thought, learning and intellectual growth to machines, combined with utter reliance upon machines for physical work, can be scary. I do agree that we may triumph over many of these challenges and continue to master technology for the betterment of society as a whole. However, it is important to raise these concerns and to consider whether we are measuring progress and advancement correctly.

I think the author Frank Herbert, who created the Dune books and franchise, was on to something as he considered his future view of where technological progress and our culture were taking us.

In his classic sci-fi series Dune, specifically the book God Emperor of Dune (1981), Leto II Atreides indicates that the Butlerian Jihad had been a semi-religious social upheaval initiated by humans who felt repulsed by how guided and controlled they had become by machines: “The target of the Jihad was a machine-attitude as much as the machines,” Leto said. “Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments. Naturally, the machines were destroyed.”

My greatest concern about the progress of artificial intelligence and automation is that the pendulum will have to swing too far before we figure out there is a problem. The reason that quote above exists, albeit from a story of fiction, is that Frank Herbert could envision a machine-intelligence dominated society. A Jihad (which is a violent clash often associated with the passion of religious fervor), not a revival, not a renaissance, not a shifting-of-gears, was required to stop the juggernaut that was technology and machine intelligence in Herbert’s fictional world. Works of fiction are not evidence, they aren’t even an argument, but they may inform the thoughtful on a possible outcome from excessive adoption of machine intelligence.

Moderation and comprehensive evaluation of progress have never been traits of our species; therefore it is likely that our adoption of AI and Automation will be excessive, and, in the end, detrimental to humanity. The prescription for avoiding these issues is challenging. It is against our general nature and it may require us to break our obsession with technological “advancement” as it is currently measured. It requires a broader consensus on well-being to measure our prosperity and to decide how and when to replace ourselves with machine intelligence. AlphaGoZero, and other AIs like it, will convince many that they should replace human intelligence. Personally, I can’t imagine replacing the vast majority of human intelligence with machine intelligence, but it is the path we are on and it leaves me concerned. Are you?

The Merits of Independent Audit of AI Systems

For those who have followed ForHumanity, you know that we have been developing and promoting the concept of Independent Audit of AI systems. If you are new to the concept of Independent Audit, the basics can be found here:

AI Safety — The Concept of Independent Audit (medium.com)

and here:

SAFEAI — A Tool for Concerned Parents (becominghuman.ai)

For this post, I wanted to cover the benefits of Independent Audit, which are maximized when four things occur:

  1. Corporations widely adopt Independent Audit

  2. Governments make Independent Audit mandatory, akin to the requirements related to GAAP or IFRS accounting principles

  3. AI Safety professionals widely participate in the open-source, crowd-sourced search for best-practices

  4. Consumers of products/services use the SAFEAI logo to inform their buying/usage decisions

It is under these assumptions that I will talk about the features and subsequent benefits which accrue to humanity. Below is a bullet point list:

  1. Transparency — consumers of both products and services will receive an unprecedented level of transparency from companies. The transparency/disclosure we refer to here is in a few key areas, such as ethical decision-making, data usage, safety, control, explainability of algorithms, accountability of both algorithms and corporations, and bias avoidance. These areas of increased Transparency result in…

  2. Fairer markets — when markets are transparent, decisions made in the marketplace will be better, rewarding responsible companies and punishing irresponsible companies. To be clear, it is the market that rewards and punishes, based on transparency and choice. And…

  3. Trust — this is the foundation of fairer markets. The marketplace has become increasingly adversarial between companies and consumers, especially in the capture and exploitation of data, but this can be reversed. Trust is engendered when consumers feel that they are being provided with a valuable service AND when the price for that service is considered fair. Price, in this case, includes not only monetary compensation for services, but also impacts to privacy and personal well-being.

  4. Opacity — This benefit accrues to corporations, but has an over-arching, related benefit to society. Opacity is the opposite of Transparency (referenced above), so we must explain why both are listed as features. Transparency is discussed above, but does not extend to the intellectual property of the company — specifically, the code and machines that are employing artificial intelligence. Opacity to corporations allows them to protect their intellectual property (IP). When IP can be protected, then companies will invest in products and service development, knowing that they can recoup their costs and earn a fair profit. When transparency is EXCESSIVE (often due to regulation), then investment is discouraged and in fact, cheating, copying and outright theft become commonplace. Choosing Independent Audit allows a company to protect its IP without excessive disclosure. Choosing Independent Audit might offset the increasing call for comprehensive disclosure that legal authorities might mandate. Essentially, Independent Audit strikes the best balance between opacity and transparency, giving corporations the protection they need to justify investment while giving consumers the transparency they need to trust the companies which provide them products/services.

  5. Third-party verification — The value of third-party verification is an under-valued element in our world today. There are many places where third-party verification has dramatically raised the bar of quality, starting with financial audits and ranging to product-testers such as Consumer Reports. These behind-the-scenes services act as a watchdog on your behalf. There are costs associated with all independent third-party reviews and those costs are passed through to you, the consumer. The result is that companies know that they cannot cheat, they cannot cut corners, they cannot act unethically or they will be called out. These systems are not perfect for various reasons, not the least of which is a belief by humans that they can “get away with it”, but in the end, the truth will always come out sooner or later as long as there is a watchdog in the room.

  6. Extends beyond national boundaries — Independent Audit is a market-based mechanism. The SAFEAI seal of approval will build brand recognition. Over time, it will affect consumer decision-making, causing consumers to choose products and services which have been audited for SAFEAI. The impacts of this “seal-of-approval” are not subject to national boundaries the way that laws are hindered in their effectiveness. We are not suggesting that countries should not pass laws and regulations in AI Safety; quite to the contrary, new laws and regulations will have great value. However, it is a corporation’s responsibility to avoid regulations and laws when it is legal to do so and will result in greater profitability for the company. Therefore, it is necessary to have a market-based mechanism, one that impacts profitability directly and globally, such as the Independent Audit of AI Systems and the SAFEAI seal-of-approval. When consumers use the SAFEAI logo to inform their purchase and usage decisions, then we know that safe and responsible AI will be profitable while dangerous and irresponsible AI will effectively make a company’s products worthless. When this happens, humanity wins, but it requires humanity to participate and to pay attention to the SAFEAI logo.

  7. Ongoing and dynamic process — unlike standard financial accounting principles, which may go unchanged for years at a time, “best-practices” in ethics, bias, privacy, trust and cybersecurity are likely to change regularly. ForHumanity will maintain a dynamic, transparent and constant review process with our global open-source, crowd-sourced network to continually uncover “best-practices” which will update our audit process quarterly.

  8. Transparent process — The global, open-source, crowd-sourced process is open to all. Anyone may join the conversation. All will be heard, all input will be considered, all votes count and most importantly, all reasonable dissent will be tracked and addressed. As the audit process is created, the results will be transparent to all. Further comment and critique is encouraged so that we may refine our process to achieve the best possible results. There is no fee or membership required to participate in the input process. There are only rules of proper decorum, so that we conduct our business in a civilized manner that protects the rights and dignity of all involved.

  9. Dedicated professionals — ForHumanity maintains a full-time staff dedicated to each of the core audit silos of ethics, bias, privacy, trust and cybersecurity. As a result, these professionals are constantly sourcing new ideas and new contacts to uncover the “best-practices” for our audit process. They maintain daily office hours and are continually reviewing input from all over the world. Our professionals are able to bridge the gap between academic thought and practical application to deliver the dynamic audit process.

  10. Audit process may be tailored for local jurisdictions — not dissimilar to the way accounting principles matured over time, ForHumanity expects that the audit process will become tailored. Laws, customs and regulations may dictate that the audit process be adjusted slightly on a country-by-country (or regional) basis. This will allow the process to achieve an over-arching comment on “best-practices” while being locally compliant.

  11. Audit is good for one year — Independent Audit is conducted annually using the current audit process for “best-practices”. As these “best-practices” are likely to change fairly regularly as technology advances, this could create an onerous process of compliance for companies. The most recent audit will be good for one year, regardless of changes in “best-practices”. This standard allows the company time to comply with the ever-changing “best-practices” while permitting the audit process to remain dynamic.

  12. Results are binary — When a company submits to a SAFEAI audit, it is a pass/fail endeavor on each of the five audit silos. The process will be structured so that companies are either compliant or non-compliant, so that the consumer may know if a company is using “best-practices” for ethics, bias, privacy, trust and cybersecurity. A company may pass any number of the silos, but is not SAFEAI compliant unless all five silos are passed successfully (see the sketch after this list).

  13. Goal-aligned with Humanity — ForHumanity exists for one purpose: to mitigate the downside risks associated with AI and Automation in order for Humanity to receive the highest possible benefits from these technological advancements. Our client base is you. ForHumanity is a non-profit organization designed to serve those who would buy products or use services which rely upon artificial intelligence. ForHumanity has a few sources of revenue. First, we receive funding from individuals through gracious donations. Second, we receive licensing revenue from corporations who have already completed SAFEAI audits when they choose to license the SAFEAI logo to show the world. The company does NOT pay a fee to ForHumanity for the audit and never will. We will not have the audit results tainted by a “pay-for-play” process which incentivizes successful audits. Only once a company has successfully complied with the audit may it pay to license the SAFEAI logo to demonstrate successfully passing the audit. Other members pay membership fees if they want to serve humanity by fixing problems at companies that want to be SAFEAI compliant. Finally, our core members are those which conduct the audits on behalf of ForHumanity and thus for you. In the end, we strive to ensure that the goals of ForHumanity are perfectly aligned to serve… humanity.
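
As a minimal illustration of point 12’s all-or-nothing logic, here is a short Python sketch. The silo names come from this post; the function itself is hypothetical and not part of any real audit tooling:

    # Hypothetical sketch: SAFEAI compliance is all-or-nothing across silos.
    SILOS = ("Ethics", "Bias", "Privacy", "Trust", "Cybersecurity")

    def safeai_compliant(results: dict) -> bool:
        """Binary outcome: SAFEAI compliant only if every silo passes."""
        return all(results.get(silo, False) for silo in SILOS)

    # A company passing four of five silos is still non-compliant overall.
    partial = {s: True for s in SILOS}
    partial["Bias"] = False
    print(safeai_compliant({s: True for s in SILOS}))  # True
    print(safeai_compliant(partial))                   # False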

ForHumanity believes that Independent Audit of AI Systems is crucial to mitigating the downside risks associated with the proliferation of AI and automation. But it doesn’t exist in a vacuum and it doesn’t succeed without the help of everyone. Please consider how you can become involved and reach out.

Technology is an Autocracy - and the Risk from Externalities is Growing

I suspect this is not an idea that many have considered. To date, as a society we really have not appreciated how technological change occurs and we certainly have rarely considered the governance of new change. When we have a discovery or an advancement of science, like all other inventions, it isn’t accomplished based upon the will of the majority. There is no vote, there is no consideration for society at-large (externalities). Rarely is the downside risk considered. One person sees a problem, through their own personal view of that problem, and aims to fix it. That is entrepreneurial, the foundation upon which much of Western Capitalism is built. It is also Authoritarian. One person, one rule and little or no accountability. Scary when you think about it. When you combine this “process” and lack of control with our species’ other great skill, “problem solving”, you create technological innovation and advancement which has a momentum that feels unstoppable. In fact, if you even “suggest” (which I am not) halting technological progress, you get one of two responses: “Luddite” or “You can’t”. That is how inexorable technological change seems to society.

So let me explain this in more detail: all technological advancement is about someone, somewhere seeing a problem, a weakness, a difficulty, a challenge and deciding to overcome that challenge. It’s admirable and impressive. It’s also creating problems. As a species we poorly weigh all aspects of a decision, all the pros and all the cons. “Will we make money from this?”, “Is this advancement cool?”, and “Does this make my life easier?” are often the only inputs to our production/purchase decisions. There is a broad societal acceptance that “easier”, “freer” and “convenient” are universally beneficial. A simple counter-argument, of course, can be found in the gym. Your muscles would insist that “easier”, “freer” and “convenient” are not the best way for them to be strengthened or stamina to be built. They require “challenge”, “difficulty” and “strain” in order to grow and improve.

So when a new advance comes along, if it makes our life easier, even in the smallest way, we snatch it up instantly. Take, for example, Alexa and Google Home. Hugely successful products already, but was it really that difficult to type out our search query? Defenders will say things like, “now I can search while I am doing something else” or “this frees up my hands to be more productive”. And of course supporters point to the disabled for the obvious assistance to someone who is unable to type. But let’s examine the other side of the coin. What are the downside risks to such a product? Usually they are not part of the sales process, I’m afraid, so you have to think carefully to compile a list. For example, has verbal search caused us to lose a different skill, like the unchallenged muscle, whereby finding the solution was equally as important as the actual answer? But on top of that specific challenge (the lost process and associated effort that may have strengthened the muscle, which in this case is the mind), what are some of the associated externalities to voice assistants? Let’s take a look at a few.

Is that search query worth having Amazon or Google record EVERY SINGLE WORD your family speaks within hearing distance? How about considering the fact that Amazon and Google now build personal profiles about your preferences based upon this information. Do you realize that this then limits your search results accordingly? Companies are taking CHOICE away from you and surprisingly, people don’t seem to care; in fact some like the idea. Other externalities exist as well. Recently, an Alexa recorded the conversation of a family and sent it to random contacts.

An Amazon Echo recorded a family’s conversation, then sent it to a random person in their contacts (washingtonpost.com)

Or this externality?

Alexa and Siri Can Hear This Hidden Command. You Can’t. (nytimes.com)

Without getting too paranoid, this last one is downright creepy and dystopian, but its potential ramifications are catastrophic if carried to the extreme. I am certain that when you decided to purchase your voice assistant, none of these externalities were factored into your buying decision.

I use this as a real life example, because our evaluation of new technology is based upon bad math. Is it cool? Is it easier? Is it profitable? And can I afford it/afford to be without it? Nowhere in that equation are the following:

1) Does it further eliminate privacy?

2) Does it make me lazy or more dependent upon a machine?

3) Does it keep me from using all aspects of my brain as much?

4) Does it allow me to interact with actual humans less?

5) What new and different risks are associated with this product?

6) If someone wants to do me harm, does this enable them to do so in an unforeseen way?

One of the chief arguments of technological advancement is that it frees us up from the mundane, routine tasks. To assume that those tasks do not have value is ludicrous on its face, but more importantly, if we are “freed” up, what are we freed up to do? Usually, we are told it is high-minded things… be entrepreneurial… be poetic… be deep thinkers… solve bigger problems… spend more time with loved ones. To be honest, I don’t see an explosion of those endeavors. A further example of our bad math…

We adopt these technological advancements often without thought about the impact they may have on our psyche, our self-worth, our ambitions, or our safety. We make these choices because they are cool, or they make something easier. With purchase decisions this simple and “upside-only”, developers of technology have an easy time making products attractive.

This blog post has only lightly touched on malice, but all of society should be concerned about malicious intent and technology’s impact on our susceptibility. The more connected we are, the more dependent upon technology we are, the easier it is to cause mass harm. Perfect examples are recent virus attacks that spread to over 47 countries in a matter of a few hours. Sometimes the consequences are minor, such as locked-up computers or minor hassles we deal with like corrupted programs. Other times the hacker/criminal steals money or spies on you. Regardless of the magnitude of the impact, the ability of a criminal to “reach you” and “reach many” has been increased almost infinitely.

Here’s a final externality — how would you function without Internet? Not just for an hour or two, but permanently? How about without power? These are modern day conveniences that are assumed to be permanent, but how permanent are they? Do you need to consider how to operate when they are gone? Our connectivity and reliance on power make us deeply dependent and woefully unprepared for these alternatives, even if the odds of occurrence are small. Hollywood frequently paints a grim picture of the dystopian existence when these conveniences are taken away; however, our ancestors existed quite nicely. Would you be prepared to survive or even thrive? The chances of these calamities are greater than zero…

Awareness of externalities is important. Consideration of downside risk is crucial, as is a willingness to realize that everything we do or even purchase has pros AND cons… The more awareness of the “cons” that you have, the better chance you have to mitigate those risks and reap a greater benefit from the upside of our technology choices. Most importantly, as a society, we will make better collective decisions on our technological progress and thwart the dangers of Technological Autocracy.

New C-Suite Position - AI and Automation Risk Management

Many of you will be familiar with the challenges that Facebook is facing.

The Cambridge Analytica saga is a scandal of Facebook’s own making | John Harris (theguardian.com)

Internal disagreements about how data has been used. Was it sold? Was it manipulated? Was it breached? It has put the company itself at risk and highlighted the need for a new position at the C-Suite level, one that most companies have avoided up until now: AI and Automation Risk Management.

Data is the new currency; data is the new oil. It is the lifeblood of all Artificial Intelligence algorithms, machine and deep learning models. It is the way that our machines are being trained. It is the basis for the best and brightest to begin to uncover new tools for healthcare, new ways to protect security, new ways to sell products. In 2017, Google and Facebook alone had advertising revenue close to $60 billion, all due to data. However, usage of that data is at risk, because of perceived abuses by Facebook and others.

Data is also, and more importantly, about PEOPLE. Data is PERSONAL, even if there have been attempts to anonymize it. People need an advocate, inside the company, defending their legal, implied and human rights. This is a dynamic marketplace, with new rules, regulations and laws being considered and implemented all of the time. AI and Automation face some substantial challenges in their development. Here is a short list, and NONE of these are a priority for engineers and programmers, despite the occasional altruistic commentary to the contrary. As you will see, the advancement of AI and Automation requires full-time attention:

  1. Ethical machines and algorithms — There are millions and millions of decisions being made by machines and algorithms. Many of these decisions are meant to be based upon our value system as human beings. That means our ethics need to be coded into the machines, and this is no easy task even with a single set of ethics. Deriving profit from ethics is tenuous at best and there is certain to be a cost.
  2. Data and decision bias — Our society is filled with bias, our perceptions are filled with bias, thus our data is filled with bias. If we do not take steps to correct bias in the data, then our algorithmic decisions will be biased. However, correcting for bias may not be as profitable, which is why it needs to be debated in the C-suite.
  3. Privacy — there is a significant pushback forming around what privacy means online. GDPR in Europe is a substantial set of laws providing people with increased transparency, privacy and, in some cases, the right to be forgotten. Compliance with GDPR is one responsibility of the AI Risk Manager.
  4. Cybersecurity and Usage Security (a Know-Your-Customer process for data usage) — Companies already engage in cybersecurity, but the responsibility is higher when you are protecting customer data. Furthermore, companies should adopt the finance industry standard of “Know Your Customer (KYC)”. Companies must know and understand how data is being used by clients to prevent abuses or illegal behavior.
  5. Safety (can the machines that are being built be controlled and avoid unintentional consequences to human life) — This idea is a little farther afield for most; however, now that an Uber autonomous vehicle has been involved in a fatality, it is front and center. The AI Risk Manager’s job is to consider the possibilities of how a machine may injure a human being, whether through hack, negligence, or system failure.
  6. Future of Work (as jobs are destroyed, how does the company treat existing employees and the displaced workers/communities) — This is the PR challenge of the role, depending on how the company chooses to engage its community. But imagine for a moment taking a factory with 1,000 employees and automating the whole thing. That’s 1,000 people directly affected. That’s a community potentially devastated, if all 1,000 employees were laid off.
  7. Legal implications of cutting edge technology (in partnership with Legal team or outside counsel) — GDPR compliance, legal liability of machines, new regulations and their implementation. These are the domain of the AI Risk Manager in conjunction with counsel and outside counsel.

This voice is a C-suite job and must have equal authority to the sources of revenue, in order to stand a chance of protecting the company and protecting the data, i.e. the people who create the data.

I am not here to tell you to stop using data. However, to believe that each of these companies, whose primary purpose is not compliance but profit, will always use this data prudently is naive at best. Engineers and programmers solve problems; they have not been trained to consider risks such as feelings, privacy rights, and bias. They see data of all kinds as “food” and “input” to their models. To be fair, I don’t see anything wrong with their approach. However, the company cannot let that exist unfettered. It is risking its entire reputation on using data, which is actually using PEOPLE’S PERSONAL AND OFTEN PRIVATE INFORMATION, to get results.

For many new companies and new initiatives, data is the lifeblood of the effort. It deserves to be protected and safeguarded. Beyond that, since data is about people, it deserves to be treated with respect, consideration, and fairness. Everyone is paying more and more attention to issues like this and companies must react. People and their data need an advocate in the C-Suite. I recommend a Chief of AI and Automation Risk Management.

ForHumanity AI and Automation Awards 2017

Top AI breakthrough

AlphaGo Zero — game changing machine learning, by removing the human from the equation

https://www.technologyreview.com/s/609141/alphago-zero-shows-machines-can-become-superhuman-without-any-help/

This is an incredibly important result for a series of reasons. First, AlphaGo Zero learned to master the game with ZERO input from human players and previous experience. It trained knowing only the rules of the game, demonstrating that it can learn better and faster with NO human involvement. Second, the implication for future AI advancement is likely that humans are “in the way” of optimal learning. Third, AlphaGo Zero went on to become a chess master in 4 hours, demonstrating an adaptability that has dramatically increased the speed of machine learning. DeepMind now has over 400 PhDs working on Artificial GENERAL Intelligence.

Dumbest/Scariest thing in AI in 2017

Sophia, AI machine made by Hanson Robotics, granted citizenship in Saudi Arabia — VERY negative domino effect

http://www.arabnews.com/node/1183166/saudi-arabia#.WfEXlGXM2oI.twitter

The implications associated with “citizenship” for machines are far reaching. Citizenship in most locations includes the right to vote, to receive social services from the government and the right to equal treatment. The world is not ready and may never be ready to have machines that vote, or machines that are treated as equals to human beings. While this was an AI stunt, its impact could be devastating if played out.

Most immediately Impactful development — Autonomous Drive vehicles

Waymo reaches 4 million actual autonomous test miles

https://medium.com/waymo/waymos-fleet-reaches-4-million-self-driven-miles-b28f32de495a

Autonomous cars progressing to real world applications more quickly than most anticipated

https://arstechnica.com/cars/2017/10/report-waymo-aiming-to-launch-commercial-driverless-service-this-year/

The impact of autonomous drive vehicles cannot be overstated. From the devastation of jobs in the trucking and taxi industry to the freedom of mobility for many who are unable to drive. Also, autonomous driving is likely to result in significantly fewer deaths than human driving. This impact carries through to the auto insurance industry, where KPMG reckons that 71% of the auto insurance industry will disappear in the coming years. Central authority control on the movement of people is another second-order consideration that not many have concerned themselves with.

Most impactful in the future — Machine dexterity

Here are a few amazing examples of the advancement of machine dexterity. As machines are able to move and function similarly to humans, then their ability to replicate our functions increases dramatically. While this dexterity is being developed, work is also being done to allow machines to learn “how” to move like a human being nearly overnight, instead of through miles and miles of code — machines teach themselves from video and virtual reality.

Boston Dynamics’ Atlas robot approaches human-level agility, including backflips — short video, easy watch

https://www.youtube.com/watch?v=fRj34o4hN4I&feature=youtu.be

Moley introduces the first robotic kitchen

https://www.youtube.com/watch?v=QDprrrEdomM&feature=share

Machine movement being taught human level dexterity with simulated learning algorithms — accelerating learning and approximating human process

https://www.facebook.com/futurismscience/videos/816173055250697/?fref=mentions

Most Scary — Science Fiction-esque development

North Dakota State students develop self-replicating robot

https://www.manufacturingtomorrow.com/news/2017/07/19/ndsu-students-develop-3d-printing-self-replicating-robot/10034/

The impact of machines being able to replicate themselves is a concept from most dystopian sci-fi movies; however, there are many practical reasons for machines to replicate themselves, which is what the researchers at North Dakota State were focused on. Still, with my risk management hat on for AI Safety, it raises a whole other set of rules and ethics that need to be considered, which the industry and the world are not prepared for.

Most Scary AI real-world issues NOW

Natural Language processing finds millions of faked comments on FCC proposal regarding Net Neutrality

https://futurism.com/over-million-comments-backing-net-neutrality-repeal-likely-faked/

AI being used to duplicate voices in a matter of seconds

https://www.digitaltrends.com/cool-tech/ai-lyrebird-duplicate-anyones-voice/

Using AI, videos can be made to make real individuals appear to say things that they did not…

https://futurism.com/videos/this-ai-can-create-fake-videos-that-look-real/

Top AI Safety Initiatives

Two initiatives share the top spot, mostly because of their practical applications in a world that remains too heavily devoted to talk and inaction. These efforts are all about action!

AutonomousWeapons.org produces Slaughterbots video

This video is an excellent storytelling device to highlight the risk of autonomous weaponry. Furthermore, there is an open letter being written that all humans should support. Please do so. Over 2.2 million views so far.

https://youtu.be/9CO6M2HsoIA

Ethically Aligned Design v 2.0

John Havens and his team at IEEE, the technology standards organization, have collaborated extremely successfully with over 250 AI safety specialists from around the world to develop Ethically Aligned Design (EAD) v 2.0. It is the beacon on which AI and Automation ethics should build their implementation. More importantly, anyone who is rejecting the work of EAD should immediately be viewed with great skepticism and concern as to their motives.

http://standards.ieee.org/develop/indconn/ec/ead_v2.pdf

AI and Automation - Managing the Risk to Reap the Reward

Personally, I have been hung up on Transhumanism. For those of you not familiar, Transhumanism is the practice of embedding technology directly into your body — merging man and machine, a subset of AI and Automation. It seems obvious to me that when given the chance, some people with resources will enhance/augment themselves in order to gain an edge at the expense of other human beings around them. It’s really the same concept as trying to get into a better university, or training harder on the athletic field. As a species we are competitive and if it is important enough to you, you will do what it takes to win. People will always use this competitive fire to gain an upper hand on others. Whether it is performance enhancing drugs in sports, or paid lobbyists in politics, money, wealth and power create an uneven playing field. Technology is another tool and some, if not many, will use it for nefarious purposes, I am certain.

But nefarious uses of AI are only part of the story. When it comes to transhumanism, there are many upsides: the chance to overcome Alzheimer’s, to return the use of lost limbs, to restore eyesight and hearing. There are many cases where technology in the body can return the disabled to normal human function, which is a beautiful story.

So in that spirit, I want to shine a light on all of the unquestionable good that we may choose to do with advances in AI and automation. Because when we have a clear understanding of both good outcomes and bad outcomes, then we are in a position to understand our risk/reward profile with respect to the technology. When we understand our risk/reward profile, we can determine if the risks can be managed and if the rewards are worth it.

Let’s try some examples, starting with the good:

  1. AI and Automation may enhance the world’s economic growth to the point where there is enough wealth to eradicate poverty, hunger and homelessness.
  2. AI & Automation will allow researchers to test and discover more drugs and medical treatments to eliminate debilitating disease, especially diseases that are essentially a form of living torture, like Alzheimer’s or cerebral palsy.
  3. Advancement of technology will allow us to explore our galaxy and beyond
  4. Advancement in computing technology and measurement techniques will allow us to understand the physical world around us at even the smallest, building block level
  5. Increase our efficiency and reduce wasteful resource consumption, especially carbon-based energy.

At least those five things are unambiguously good. There is hardly a rational thinker around who will dispute those major advancements and their inherent goodness. It is also likely that the sum of that goodness outweighs the sum of any badness that we come across with the advancement of AI and Automation. Why am I confident in saying that? Because it appears to have been true for the entirety of our existence. The reason technology marches forward is because it is viewed as inherently good, inherently beneficial and, in most cases, as problem solving.

But here is where the problem lies and my concern creeps in. AI has enormous potential to benefit society. As a result it has significant power. Significant power comes with significant downside risk if that power is abused.

Here are some concerns:

  1. Complete loss of privacy (Privacy)
  2. Imbedded bias in our data, systems and algorithms that perpetuate inequality (Bias)
  3. Susceptibility to hacking (Security)
  4. Systems that operate in, and sometimes take over, our lives without sharing our moral code or goals (Ethics)
  5. Annihilation and machine takeover, if at some point the species is deemed useless or worse, a competitor (Control and Safety)

There are plenty of other downside risks, just like there are plenty of additional upside rewards. The mission of this blog post was not to calculate the risk/reward scenarios today, but rather to highlight for the reader that there are both Risks and Rewards. AI and Automation aren’t just awesome; there are costs and they need to be weighed.

My aim in the 12+ months that I have considered the AI Risk space has been very simple, but maybe poorly expressed. So let me try to set the record straight with a couple of bullet points:

  1. ForHumanity supports and applauds the advancement of AI and Automation. ForHumanity also expects technology to advance further and faster than the expectation of the majority of humanity.
  2. ForHumanity believes that AI & Automation have many advocates and will not require an additional voice to champion its expected successes. There are plenty of people poised to advance AI and Automation
  3. ForHumanity believes that there are risks associated with the advancement of AI and Automation, both to society at-large as well as to individual freedoms, rights and well-being
  4. ForHumanity believes that there are very few people who examine these risks, especially amongst those who develop AI and Automation systems and in the general population
  5. Therefore, to improve the likelihood of the broadest, most inclusive positive outcomes from the advancement of technology, ForHumanity focuses on identifying and solving the downside risks associated with AI and Automation

When we consider our measures of reward and the commensurate risk, are we accurately measuring the rewards and risks? My friend and colleague John Havens at IEEE has been arguing for some time now that our measures of well-being are too economic (profit, capital appreciation, GDP) and neglect more difficult-to-measure concepts such as joy, peace and harmony. The result is that we may not correctly evaluate the risk/reward profile, reach incorrect conclusions and charge ahead with our technological advances. It is precisely at that moment that my risk management senses kick in. If we can eliminate or mitigate the downside risks, then our chance to be successful, even if measured poorly, increases dramatically. That is why ForHumanity focuses on the downside risk of technology. What can we lose? What sacrifices are being made? What does a bad outcome look like?

With that guiding principle, ForHumanity presses forward to examine the key risks associated with AI and Automation: Safety & Control, Ethics, Bias, Privacy and Security. We will try to identify the risks and solutions to those risks with the aim to provide each member of society the best possible outcome along the way. There may be times that ForHumanity comes across as negative or opposed to technology, but in the vast majority of cases that is not true. We are just focused on the risks associated with technology. The only time we will be negative on a technology is when the risks overwhelm the reward.

We welcome company on this journey and certainly could use your support as we go. Follow our Facebook feed ForHumanity or Twitter @forhumanity_org and of course the website at https://www.forhumanity.center/

Drawing A Line - The Difference Between Medicine and Transhumanism

I think I am writing this for myself. I hear about advancements in science that can stimulate and re-activate neurons and I think, Amazing! I listen to Tom Gruber talk about memory technology linked directly to your head and I think, no way, Big Brother to the nth degree. Are these very different? Should I have such extreme views? Or am I just nuts? (N-V-T-S for the Mel Brooks fans out there.)

I seem to be drawing a line between augmentation and stimulation. Between technology and medicine. Maybe that's not a fair line, but I draw it anyway. I recognize that my line is not the line for everyone, so please don't take this as some edict or law that I am suggesting. I know that transhumanism will prevail and people will put tech in their bodies.

Scientists May Have Reactivated The Gene That Causes Neurons To Stop Growing
In Brief Scientists have found a way of reactivating genes in mice to continue neuron growth. The development could be…futurism.com


So why do I draw a line between drug, chemical and external stimulus on one side, and internal, permanent implantation on the other? And where does that line start and stop?

One differentiation that may be key: medicine is an isolated interaction, while implanted technology is ongoing connectivity. Let me try to explain.

When I take a medication or a medical treatment that is not implanted, it is a finite decision. Once it enters my body, it either works or it doesn't, but regardless, the medication is done interacting with the outside world. It is just me and the aftereffects, positive or negative. With augmentation/implantation, that device is permanent and it is permanently connected to the outside world. Those are vastly different ongoing relationships. One is distinctly private, the other is inherently public.

This makes a big impact on your value judgement, I suspect. When you take a drug, medication or course of treatment, it is a one-time interaction with the dosage: a single moment when the outside world, essentially, invades your body. There is no need for ongoing communications (outside of results-based measurements). It's between you and your body. Any information obtained by the outside world would have to be granted.

The introduction of tech, whether implanted devices or nanotechnology designed to treat disease, results in a diagnostic relationship between you and the outside world that is no longer mediated by your brain and five senses; in fact, that communication is completely out of your control. I believe that is the key distinction and exactly where my discomfort lies: control.

Now the purveyors of these technologies will argue that you retain control, but that can never be guaranteed. The only way to guarantee it would be to eliminate, 100%, the communication channel between the device in your body and the outside world. Nothing is perfectly secure when it comes to technology.

But I also said that it affects your value judgement. This is not a black or white issue. I agree with Tom Gruber and Hugh Herr: if I had a disability and the best way to be restored to full human capability was via augmentation or implantation, I suspect I would choose the tech, accepting the risk that my tech could be compromised to my detriment, simply because my immediate need had been met. I have never had a debilitating disability, but I believe that is the choice I would make, and that many people would agree. In the end this is about choice, and I believe that all should have the right to choose in this matter. But this particular choice is a value judgment focused on rehabilitation. I think the equation changes when we are talking about creating super-human traits.

To be clear, super-human traits are those senses or bodily functions that surpass human norms for that sense or function. In the pursuit of super-human traits, I am happy to accept the introduction of artificial medicines into my body, because they do not create an ongoing communications vehicle with the outside world that may be hijacked. If a medicine can increase my longevity, clear up cloudiness in my brain, or make my blood vessels more flexible, then I will take the risk of side effects in order to be super-human in that respect.

But pacemakers present an interesting gray area. Traditionally, a pacemaker returns me to normal human-level function. But what if we could each receive a pacemaker at birth that perfectly handled cholesterol, high blood pressure and heart function, completely eliminating heart-related death? Would that be a tech I would accept? I would be making my cardiac functions super-human, but exposing myself to the risk that the device could be hijacked or damaged in its communication with the outside world.

A few distinctions. First, such a device probably does not tie directly into the communication between my mind and the outside world. Second, the risk of nefarious interaction with the device probably only reduces me to natural human performance, taking away my super-human heart or movement. So it would appear that the risk may be worth the potential reward. However, the risk of hijacking has already increased. Could my heart be stopped? That is a serious consequence.

Now, let's put that tech in our brains, or in any of our five senses, or in our mobility. The rewards have been espoused by many already. Personally I can see the benefits, but now the risk is just too high. The constant communication between my senses or my brain and the outside world opens me up to hacking. It has the potential to change me, to change how I perceive the world. I can be manipulated; I can be controlled. It places my sense of self at risk. In fact, the positive operation of the augmentations themselves changes who I am at my core. And thus, I draw my line…

By drawing this line, I create consequences for myself. Transhumans will pass me by, maybe not in my lifetime, but these consequences apply to all who share my line in the sand. I will struggle to compete for jobs. I will not have a perfect memory. Fortunately, those things don't define who I am, nor where I derive happiness, and I hope that others will remember that when the immense peer pressure to implant technology enters their lives.

In the end, I find humanity, with our flaws and failings, to be perfect as is: beautifully chaotic. So this concept of weakness and fear being played upon by advancing technology feels sad and contrived, sort of like the carnival huckster playing on your fear that you aren't smart enough, that your memory has some blank spots, that you struggle to remember people's names when you meet them. It's through our personal chaos, our personal challenges and our personal foibles that we find our greatest victories and our most important lessons. I see a path for technology that would aim for Utopia and leave us dull, dreary and bored, automatons searching for the next source of pleasure. I see implanted brain and sense technology controlling us, not delivering great joy and happiness in our lives.

I guess that is the problem with all of this "intelligence chasing". The implication is that none of us are smart enough. Professor Hugh Herr thinks that we all should have bionic limbs: why not be able to leap tall buildings in a single bound? Elon Musk sees it as obvious that we won't be able to compete with machines because our analog senses are too slow, so he aims to increase the data extraction capabilities of our brains.

What is all this crap for? If we have a single goal in life, isn't it something akin to happiness, or joy, or peace, or love? Please explain to me how being smarter gives you peace, or how being able to jump higher or run faster gives you joy.

Some will try to answer those questions as follows: if you are smarter and have a better memory, you can get a better job and make more money. If you have stronger legs you can run faster, jump higher, be better at sports or manual labor. But take the sports example: if you have "bionic" legs and win a long jump contest, will that be gratifying at all, especially if your competitors are not augmented? My reaction is "who cares"; you should have won… boring.

Regardless of my view, there will be people who find happiness in beating others. There is a distinct difference between the joy of victory in a friendly competition and the joy you feel when you take the last available job because you have an augmented brain and the other applicants do not. Quite honestly, I would find the latter shameful, but we live in a society that rewards and cheers the person willing to "invest" in their future. This last point is the reason that transhumanism will advance mostly unfettered. Humans are competitive, ruthless and self-centered enough that some people will choose augmentation in order to be superior. Society will laud people who do so, because we are convinced that more technology, more brains, more assets and more skills will lead to happiness. I support your right to augment your brain, to make yourself superior to others, and I support your right to make bad choices; I believe this will be one of them.

The Process of Technological Unemployment - How Will it Happen?

There is a lot of question, concern and doubt surrounding the idea of technological unemployment. So much discussion, in fact, that there is now a backlash of writing on how the issue is overhyped. The arguments are pretty standard: robots and AI aren't THAT far along yet; there is still plenty to do; and, most commonly, people have been afraid of technology before and it has always created new jobs and new work. People like taxi drivers and truck drivers will simply have to get retrained. But the problem with the discussion is not that one side is wrong; rather, it is that we are having two different conversations. The miscommunication lies in timeframe. All of the arguments for jobs NOT disappearing are short run economic arguments. My argument is a Long Run argument. I have argued here:

#Future of Work — why are we struggling to understand the disruption?
I have this discussion with my good friends all the time now. I predict the destruction of most work and they think I’m…medium.com

The summary is that in the Long Run, there are precious few skills that AI and Automation will not perform better than a human. As a result, the only jobs for humans that will remain are those where the human controls the capital/labor decision or where the job values a human for their actual humanness. In that scenario, unemployment is well over 80% and society is dramatically changed. So, if you accept my long run argument, you would be right to wonder how we get from here to there. That is what this post is about. I believe that highlighting this long run process will help calm some short term fear and help people better understand technological unemployment.

Summary points:

  1. Companies will put off hiring to absorb workers displaced by automation.
  2. We will NOT see significant layoffs related to Automation, and this will fuel the technological unemployment naysayers, who inaccurately use ATMs as their argument for how technology creates jobs.
  3. Job growth and hiring will continue to slow over the coming decade (the rate of job creation has already fallen by more than 50%, while population growth has remained fairly steady).
  4. New graduates will continue to see a good job market, as they are the most likely to have current skills.
  5. 40- and 50-somethings will become the unemployed, as the technological skills pass them by.
  6. 40- and 50-somethings will drive the rise of the "part-time" worker, the consultant and the jack-of-all-trades, counted by the BLS as marginally attached (U6) unemployed or not measured at all.
  7. Economic shocks will result in higher levels of layoffs, similar to the 2009 financial crisis and in contrast to the previous 40 years of market shocks.
  8. U6 (BLS — total unemployed and marginally attached workers) numbers become the key to understanding true unemployment. This number currently stands at 8.6% and hit a high of 17.1% during the 2008-09 financial crisis. (A sketch of how U6 is assembled appears after this list.)
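
For readers who want to see concretely how U6 is assembled, here is a minimal sketch based on the published BLS definition (total unemployed, plus all marginally attached workers, plus those employed part time for economic reasons, divided by the civilian labor force plus the marginally attached). The input figures are hypothetical placeholders chosen only to land near the 8.6% quoted above; they are not actual BLS data.

```python
# Sketch of the BLS U6 measure. All input figures below are hypothetical
# placeholders, not real BLS data.

def u6_rate(unemployed, marginally_attached, part_time_economic, labor_force):
    """U6 = (unemployed + marginally attached + involuntary part-time)
    / (civilian labor force + marginally attached), as a percent."""
    numerator = unemployed + marginally_attached + part_time_economic
    denominator = labor_force + marginally_attached
    return 100.0 * numerator / denominator

# A hypothetical economy with a labor force of 160 million:
print(round(u6_rate(
    unemployed=7_000_000,
    marginally_attached=1_500_000,
    part_time_economic=5_000_000,
    labor_force=160_000_000,
), 1))  # -> 8.4, in the neighborhood of the 8.6% quoted above
```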

Let's use Walmart as our example to understand a few of these points. Ignoring the innovation that may occur in the near future, let's start the clock with Walmart moving to autonomous trucks. Here's the likely process.

Step 1 — autonomous driving, but still with a driver, in a few test cases. This is standard practice for any company, especially when big changes are made: small scale tests for a fixed period of time to determine scalability, feasibility and roll-out procedures. No one loses a job.

Step 2 — Step 1 was a perfect success, and the plan is to replace all truck drivers with fully autonomous tractor-trailers. More than 7,000 people out of work, right? No, you'd be wrong. Companies hate to fire people. It is bad press, and it can damage their brand. It makes them look profit-centric instead of customer-oriented. So instead, Walmart will announce the move with big publicity to highlight the savings, and at the same time they will also announce that all 7,000 employees will remain and be retrained.

How does this work? Essentially the company will go on a semi-hiring freeze. They are now overstaffed by 7k people, but a company like Walmart is turning over about 50k employees per year. So if they only hire 43k in a given year, that is something people will hardly notice. In fact, they can put out a press release highlighting that they hired 43k people that year. The problem, though, is that they hired 7k FEWER than they would have or should have.
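
A toy model makes the arithmetic concrete. The figures below are the illustrative round numbers from this post (7k displaced drivers, roughly 50k annual hires to cover turnover), not actual company data:

```python
# Toy model of a "semi-hiring freeze": displaced workers are absorbed by
# hiring fewer replacements, so headcount never visibly drops.
# Figures are the illustrative numbers from this post, not company data.

displaced = 7_000       # drivers made redundant by autonomous trucks
normal_hiring = 50_000  # typical annual hires, just to cover turnover

actual_hiring = normal_hiring - displaced  # the freeze absorbs the 7k
foregone_jobs = normal_hiring - actual_hiring

print(f"Headline: we hired {actual_hiring:,} people this year!")
print(f"Hidden:   {foregone_jobs:,} hires that would otherwise have happened")
```

The headline number is real, which is exactly why the missing 7k hires never show up as a layoff statistic.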

So that's how it starts. Technological unemployment begins with a reduction in hiring, and we've been experiencing it for a decade already. When that happens at company after company, hiring will stagnate first. However, it will be difficult to see. New graduates will continue to be hired, especially if the education system is able to tailor its programs to match the demands of a dynamic marketplace. The Bureau of Labor Statistics data backs this up.

The rate of job creation has slowed pretty dramatically. From 1970–2000, the US consistently created about 2 million jobs per year. Since 2000, that rate has fallen to less than 1 million per year, while population growth has held steady at about 2.55 million per year. This is the hidden acceleration of automation: to grow profits and grow businesses, companies need fewer and fewer employees to be successful.
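
As a rough back-of-envelope check (and nothing more), here is the cumulative shortfall implied by the round numbers quoted above. Note that comparing raw population growth to job creation is this post's framing, not a labor-force-adjusted statistic:

```python
# Back-of-envelope cumulative jobs gap, using only the round figures
# quoted in the text above; rough approximations, not BLS series.

jobs_pre_2000 = 2_000_000        # approx. annual job creation, 1970-2000
jobs_post_2000 = 1_000_000       # approx. annual job creation since 2000
pop_growth_per_year = 2_550_000  # approx. annual US population growth

years = 18  # e.g., 2000 through 2017

vs_old_trend = (jobs_pre_2000 - jobs_post_2000) * years
vs_population = (pop_growth_per_year - jobs_post_2000) * years

print(f"Jobs missing vs. the pre-2000 trend: {vs_old_trend:,}")   # 18,000,000
print(f"Jobs missing vs. population growth:  {vs_population:,}")  # 27,900,000
```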

Consider the unemployment rate for new college graduates. Even during the financial crisis of 2009, when we experienced peak near-term unemployment, the unemployment rate for college graduates was never over 5%. As long as universities continue to provide current, in-demand skills, graduates will find work. This also makes intuitive sense, because students, by definition, have time to learn, whereas, under our current structure, education is not built into our day-to-day jobs. This makes workforce obsolescence a real issue. (See my blog post on Lifetime Learning.)

Lifetime Learning — a Necessity Already
I started ForHumanity, partly as a response to thoughts I had about my children who are 8 and 6 today as I write this…medium.com

So the effect becomes generational. It will be the 40-somethings and 50-somethings who increasingly find it difficult to find new work as the technological skills pass them by. More and more people in the middle of their careers will be forced into entrepreneurial endeavors and part-time work. Often these jobs, such as self-employment and many IRS 1099 arrangements, are not eligible for unemployment benefits. These people, unemployed or underemployed, are not counted in any of the surveys; essentially, they fall out of the work force. This number will increase over the coming years. I actually expect the Bureau of Labor Statistics to refine its labor participation measurement to try to capture these displaced workers.

This plays out in another way as well. Over the past 40 years we have experienced market meltdowns and crises of various magnitudes, but the financial crisis of 2009 was probably the most poignant. The U6 rate jumped from 7.5% to 17.1% as the effects of the crisis enabled companies to actually eliminate people, the PR be damned. That rise is unprecedented in the post-Depression employment era. I believe that future crises will be used the same way, as cover for significant layoffs, and I expect this kind of jump in U6 to be the norm rather than an outlier. It has taken the US economy nearly a decade of growth just to return to the mean U6 level of the previous 20 years. You can expect this U6 number to be extremely sensitive to market shocks, and I'd expect a general upward trend in U6 over the coming two decades. This is where unemployment shows itself. I expect that by the 2030s, unless they change the calculation, U6 unemployment will be consistently around 20%.

In the end, over the coming generation, there will be fewer jobs, less full-time work and increasing replacement of human labor with automation, but it will be an insidious creep toward higher functional unemployment. Hopefully it will not be so subtle that it is neglected by the appropriate policy response, or misunderstood by the technological unemployment naysayers.

Tech In Your Body - Let The Seduction Begin

https://www.ted.com/talks/tom_gruber_how_ai_can_enhance_our_memory_work_and_social_lives?utm_campaign=&utm_medium=on.ted.com-static&utm_content=awesm-publisher&awesm=on.ted.com_IndustrialInternet&utm_source=t.co

Please watch that video as an introduction to this blog post. It's 9+ minutes, which I recognize will instantly turn most of you off from reading the rest of this, thanks to the TL;DR (Too Long; Didn't Read, for those not familiar with the term) problem. But for those of you who choose to stay, here's what I took away from it:

  1. This is some smooth presenting. He introduces technology that so clearly helps people who are disabled live better lives.
  2. He does a great job of highlighting the fallibility of human memory and its associated downsides.
  3. He makes you disappointed in yourself for your failures, namely forgetfulness of all kinds.

So what are my concerns with having tech in your body, or most importantly in your brain?

  1. You will be hackable. Seriously. Someone, at some point, will be able to control your mind, how you think and how you act. You may doubt this, but there are those who believe that the voting population of the US was manipulated in the last election, and that didn't even involve embedded technology.
  2. Your every thought will now exist somewhere on a hard drive. Imperfect memory is a gift, not a liability. As much as we wish we remembered the good memories, it's the forgotten bad memories that are probably more important to our mental health.
  3. Talk about privacy — this is the definition of NO privacy. Not only will we be surrounded by thousands of instruments listening and recording everything we do with the IoT, but now it's in our heads and our memories.
  4. Imagine if you are opposed to the government on ANY level. James Comey recently said you have NO privacy. Could the government snoop around in your head? Is your head now admissible evidence against you?
  5. The wealthiest companies and the most skillful hackers will ALWAYS have access to you, your brain, your memories. It is impossible for security to stay ahead of countries, companies and nefarious actors.
  6. Apple wants you to buy their tech. Apple is a company, driven by profit. They aren't trying to fix the world. If some customers feel better off, they enjoy those stories and make sure that the world knows about them, but in the end, Apple is a company, and companies are beholden to their shareholders. This is about profit. Interestingly, sometimes these companies fool themselves, convincing themselves of their altruism, but it won't last. The bottom line eventually drives all decisions.
  7. This will dramatically increase income inequality. Only the most wealthy will find this technology available to them initially, and most likely it will make them wealthier.
  8. Those uncomfortable with putting tech in their bodies will become a persecuted minority.

Transhumanism and augmentation are not new. Work on prostheses connected to the brain has been improving dramatically. Take Professor Hugh Herr of MIT: stranded on Mt. Washington, he lost the lower half of both legs and worked for decades to develop prosthetic legs for himself that he can control with his mind. He has received numerous awards and accolades for his work. And it is amazing. Anytime a human being loses some of their abilities, whether through birth or accident, and is able to have them restored to human-level function, it is a glorious story. If only it stopped there. Read this excerpt from a recent article on Prof. Herr written by Sally Helgesen, published in Strategy + Business…

But the scope of Herr’s interests and ambitions takes him beyond the desire to simply redress lost function. He’s become an evangelist for the notion that augmentation can also be used to expand capacity for those with intact bodies and functioning minds: wearables that enable human eyes to see infrared waves; tools that permit individuals to design and sculpt their own bodies, either for aesthetic reasons or to enhance athletic or professional performance. — Sally Helgesen, on Prof Hugh Herr

And this is where I get uncomfortable. Likely this is my own discomfort, as I believe that many will choose to augment themselves in the future. Many people want to be superhuman, even if it is only in a narrow function. It's a seductive offer. People like to be better than others. Oh, there is one catch: Prof. Hugh Herr's prostheses cost many millions of dollars, but I'm sure everyone has that lying around.

I am not opposed to augmentation or transhumanism. I can't stop it. Maybe I should want to stop it based on these concerns, but I recognize that people have a right to make these decisions for themselves. The disabled can't be prevented from being returned to standard human function levels. If Prof. Herr wants to make super climbing legs for himself, he has that right. And just in case you think brain tech is a generation away, please note that brain tech already exists and is being enhanced daily.

Spinal Injury Patients Could Regain Mobility Through Brain-Computer Interfaces
In Brief Researchers are developing an implantable brain chip to bypass problems in the spinal cord and using a brain…futurism.com

I am concerned that as technology moves inside our bodies, we won't be able to distinguish natural humans from cyborgs when we need to legally. Whether it be for athletic competitions, like Oscar Pistorius at the London Olympics, or Jeopardy! in the future, a person with a perfect memory (or a bionic thumb) surely can't be allowed to compete with a natural human; the competition would be a farce. These examples are fairly whimsical, but I am using them rather than the more disturbing versions, where natural humans and augmented humans will want to know who is who.

I am further concerned about increasing income inequality. Augmentation today and in the near future will only be for the wealthiest. The newest class of brain enhancements will be expensive; it is the nature of R&D and the capitalist model that makers will price them to recoup their costs. So will we be helping humanity, or will the rich be extending their lead? I suspect you know the answer to that question. Imagine the parent about to send their child off to university, already committed to spending hundreds of thousands of dollars on that education. Won't you feel compelled to purchase that "perfect memory" device from Apple to ensure that investment is maximized? What about the competition from other students, who are likely getting their "perfect memories"? Are you the cheap parent who can't afford to get your child the very best? Are you trying to let him/her fail? It's a beautiful sales model and one that will certainly work. I think the risks to your humanity are too great.

Lastly, I am concerned that transhumanism will create two classes of humans: the augmented and the natural. The augmented will be superior to the natural, and we have numerous examples in history of how a population that believes itself superior treats those it deems inferior. This is the genuine problem, because it will divide humanity. It has to. If you can't compete at work, you probably won't have work. The lives of the super-human will be better, and they will be in control. Could a natural human be forced to augment? Told it is for your own good, that it is the next phase in human evolution? Eventually, for failure to augment, you might be deemed worthless and unemployable. Could you be outcast? I know this sounds like science fiction, but I disagree. This is an extension of human relationships since time immemorial. Superior groups have a long track record of overwhelming weaker ones, whether based on military might (the Greeks), organization (the Romans), the Enlightenment/colonization/Age of Discovery (Western Europe), nationalism (Nazi Germany) or economic strength (the US): superior forces impose their will on the weak, embarrassingly often "for their own good". It is not difficult to see it play out. Science and technology already feel superior; you can hear it in Mr. Gruber's talk. Certainly I am describing nothing different or unique here, except that it is a forecast. I argue this is an obvious course of events; time is the only variable. For further thoughts on this division of humanity, you can read my previous blog post:

Technology + The Human Body = Insurmountable Societal Challenge?
Picture this world for a moment. You are out of work. Technology is moving so rapidly that it is hard to keep up. In…medium.com

ForHumanity believes in the right to augment. But our most important efforts will be to create a world in which natural humans can thrive, and I suspect that we will spend the rest of my life fighting an uphill battle for that. Each of you has the right to augment, especially if you are augmenting to return to standard human levels. But once augmentation goes beyond the natural human, as science and technology are planning, then we have a problem. I have enumerated a series of concerns without highlighting the positives. I felt Mr. Gruber did an excellent job on the positives. But his sales pitch was so smooth that it made me distinctly uncomfortable, and I needed to explain why. I hope this helps you consider tech in your body and make an informed decision when the time comes.