
Independent Audit of AI Systems - FAQ

This article was a portion of ForHumanity’s response to the NYC Economic Development Corp’s Request for Expressions of Interest regarding their proposed Center for Responsible AI. The submission was made jointly with The Future Society and Michelle Calabro. Independent Audit is offered as the solution for the framework, guidelines, and governance functions of the Center. The proposal has been neither accepted nor rejected at the time of this publication.

1.0 Overview

Independent Audit of AI Systems is a framework for auditing AI systems (products, services, and corporations) by examining and analyzing the downside risks associated with the ubiquitous advance of AI & Automation. It looks at 5 key areas: Privacy, Bias, Ethics, Trust and Cybersecurity. If we can make safe and responsible Artificial Intelligence & Automation profitable whilst making dangerous and irresponsible AI & Automation costly, then all of humanity wins.

The market demand for a comprehensive, implementable, third-party oversight and governance solution is enormous. There are regular and consistent calls for this type of comprehensive oversight on a global scale. The words ‘trust,’ ‘assurance,’ and ‘audit’ are frequently used to describe the comfort humanity needs with these currently opaque and ungoverned systems. The framework will impact the world of AI not only in New York City but around the world, thrusting New York into the center of all debate on Responsible AI.

Creation of the framework will be an industry-wide effort led by the audit/assurance/trust industry, which will conduct an iterative, open-source, crowd-sourced dialogue with anyone who wants to engage in the process. The result will be a fully implementable and dynamic set of rules by which companies can not only know they are being responsible with their AI but also prove it with an independent audit.

1.1 What is it?

A framework for auditing AI systems by examining and analyzing the downside risks associated with the ubiquitous advance of AI & Automation. It looks at 5 key areas: Privacy, Bias, Ethics, Trust and Cybersecurity.

1.2 What can get audited?

Products, services and corporations.

1.3 Why should it exist?

If we can make safe and responsible Artificial Intelligence & Automation profitable whilst making dangerous and irresponsible AI & Automation costly, then all of humanity wins.

1.4 Does the market want it?

The market demand for a comprehensive, implementable, third-party oversight and governance solution is enormous. There are regular and consistent calls for this type of comprehensive oversight on a global scale. The words ‘trust,’ ‘assurance,’ and ‘audit’ are frequently used to describe the comfort humanity needs with these currently opaque and ungoverned systems.

1.5 Why must this happen in New York City?

Independent Audit of AI Systems is the audit/assurance/trust industry’s answer to AI governance and oversight. It is logical that it should reside in New York City, which is home to each of the Big 4’s global headquarters. Furthermore, it is crucial that an industry-wide initiative is implemented to maximize the impact of the audit rules.

1.6 How will New York City benefit from this?

We expect these rules to be adapted to local jurisdictional law in NYC and globally. This will thrust New York into the center of all debate on Responsible AI. Jobs are likely to be created as the audit process deepens. New York City will be viewed as the leader of Responsible AI; the place to go for the most “cutting-edge” discussions. The place to go for answers.

1.7 How will humanity benefit from this?

Currently, AI is developed to benefit companies through increased profit. We are not criticizing this point of view, but we are suggesting that there needs to be a more balanced approach, one which considers the overall welfare of humanity. Through oversight and third-party independent governance, via Independent Audit of AI Systems, humanity will have greater assurance that its interests are being considered. People can be assured that ‘best practices’ are being followed, which will increase their trust and confidence in AI systems wherever those systems touch their lives. Markets and systems will be fairer, and not just in one country, as the audit rules will extend globally as well.

1.8 How will individuals benefit from this?

Individuals will benefit from better, safer products and services. They will benefit through increased education and awareness on what SAFEAI and Responsible AI mean. Individuals will be able to identify SAFEAI goods and services and increasingly have choice and control over how they manage their interactions with AIs in all facets of their lives.

1.9 Will Independent Audit of AI Systems be a self-sustainable system?

Independent Audit of AI Systems is designed to be self-sustaining over time. The SAFEAI logo will be licensed as a “seal of approval” to any and all companies that wish to license it. The choice is theirs: there is no fee to ForHumanity for the audit framework, and the audits are not conducted by us.

As we develop the Independent Audit of AI Systems framework, we will spend considerable effort building the brand as well. We want people to use the logo to inform their buying decisions for products and services. If successful, this will drive demand for the logo from those who pass audits. We will charge a licensing fee in return, justified by both the promotion of the brand and the ongoing maintenance of the system. The primary goal, and thus the driver of licensing pricing, will be the sustainability of the Independent Audit of AI Systems process. It is expected that our initial donors will backstop the organization until it is self-sustaining.

2.0 Who will create it?

Creation of the framework will be an industry-wide effort led by the audit/assurance/trust industry, which will conduct an iterative, open-source, crowd-sourced dialogue with anyone who wants to engage in the process. To be successful, the effort must be global and inclusive of many interested parties.

There will be a dedicated full-time staff of Core Audit Silo Heads whose responsibilities will be the following:

  • Research new ideas/concepts/contributors

  • Suggest possible rules

  • Solicit specific feedback on proposed rules

  • Track and manage dissent, striving for broad consensus

  • Manage discussion for decorum

  • Present proposed rules to Board of Directors

  • Hold public office hours for 2 hours daily, across global time zones

  • Facilitate dialogue on Slack with all interested participants

The Executive Director will have the following responsibilities:

  • Create partnerships with many like-minded stakeholders

  • Facilitate the quarterly update process and Board of Directors votes

  • Release the board votes and new rules

The Board of Directors (maximum 21 members) will consist of a diverse collection of industry professionals and academics who are experts in determining whether a proposed rule meets the audit criteria.

2.1 How will the audit framework be developed?

Independent Audit of AI Systems will be a comprehensive set of best-practices covering the ‘Core Audit Silos’ of Ethics, Bias, Privacy, Trust and Cybersecurity.

Core Audit Silo Heads will host global public office hours during pre-announced, varying time slots to meet the needs of experts all around the world. They will also conduct an iterative, open-source, crowd-sourced dialogue on an ongoing basis (on a platform such as Slack) with anyone who wants to engage in the process. Corporations (which will be subject to the audit) will have the opportunity to raise concerns and objections to the proposed rules. Independent Audit of AI Systems will walk the tightrope between maximum benefit for humanity and practical implementability.

Audit rules will be (see the sketch after this list):

  • Implementable

  • Binary (compliant/non-compliant)

  • Open-Sourced and Iterated

  • Unambiguous

  • Consensus-driven

  • Measurable
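
To make these criteria concrete, here is a minimal, hypothetical sketch of how a binary, measurable audit rule could be represented in code. The rule ID, fields, and threshold are illustrative assumptions, not actual rules from the framework:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass(frozen=True)
class AuditRule:
    # One rule in a Core Audit Silo; a hypothetical schema for illustration.
    rule_id: str                   # e.g. "PRIVACY-004" (illustrative, not a real rule)
    silo: str                      # Ethics, Bias, Privacy, Trust, or Cybersecurity
    description: str               # unambiguous statement of the requirement
    check: Callable[[dict], bool]  # measurable test: evidence -> compliant?

def evaluate(rules: List[AuditRule], evidence: dict) -> Dict[str, bool]:
    # Binary outcome per rule: compliant (True) or non-compliant (False).
    return {r.rule_id: r.check(evidence) for r in rules}

# Illustrative example: a privacy rule with a measurable, binary check.
retention_rule = AuditRule(
    rule_id="PRIVACY-004",
    silo="Privacy",
    description="Personal data is retained no longer than 365 days.",
    check=lambda ev: ev.get("max_retention_days", float("inf")) <= 365,
)

print(evaluate([retention_rule], {"max_retention_days": 180}))
# -> {'PRIVACY-004': True}
```

Because each check returns only True or False against stated evidence, a rule expressed this way is implementable, unambiguous, and measurable by construction; the open-sourced, consensus-driven iteration would then refine the descriptions and thresholds themselves.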

The cohort of experts in each ‘Core Audit Silo’ will conduct micro-experiments, use-case experiments, and pilot applications to iterate toward these standards and best practices. We will improve Responsible AI incrementally, eventually arriving at consensus-driven rules by examining word choice, definitions, and boundaries.

The result will be a fully implementable and dynamic set of rules by which companies can not only know they are being responsible with their AI but also prove it with an independent audit.

2.2 Hasn’t this been done before?

Yes. In fact, there are many precedents to learn from. The Independent Audit of AI Systems is a collaborative process designed to enhance the magnificent work already being done by experts and thought-leaders around the world.

We are seeking an outcome similar to that of the FASB and IFRS efforts of the early 1970s: the financial services industry came together and created standardized rules that were adopted by the SEC and others within a few short years. The growth comes from a level playing field, as we saw with financial accounting in the late 1970s and early 1980s.

Many groups are doing Ethical Guidelines work; the IEEE’s ‘Ethically Aligned Design’ remains the gold standard. With a singular mission to leave the world a little better today than it was yesterday, we will translate high-minded ideals into smaller, more practical baby steps toward implementing new rules. Our goal can be achieved with extensive collaboration from IEEE, NIST and others to adapt their work into an immediately implementable, auditable framework.

2.3 Once an AI System achieves SAFEAI status, what happens?

  • All participants will know how to be responsible with their AI.

  • Passing the Audit is good for 12 months.

  • Even as the rules change quarterly, companies will have time to catch up as new ‘best practices’ are implemented.

  • Compliant companies may choose to license the SAFEAI brand.

  • Rocker tags on the seal include Auditor, Services, Corporate, and Product.

  • Certifications are annotated with the audit year, e.g. “SAFEAI 2019” (a sketch of such a record follows this list).
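
As a complement to the list above, here is a minimal, hypothetical sketch of a SAFEAI certification record with the 12-month validity window and the year annotation; the Certification class and its fields are illustrative assumptions, not part of the actual framework:

```python
from datetime import date, timedelta

# Rocker tags named in the list above; the record structure itself is hypothetical.
ROCKER_TAGS = {"Auditor", "Services", "Corporate", "Product"}

class Certification:
    def __init__(self, holder: str, tag: str, passed_on: date):
        if tag not in ROCKER_TAGS:
            raise ValueError(f"unknown rocker tag: {tag}")
        self.holder = holder
        self.tag = tag
        self.passed_on = passed_on

    def is_valid(self, today: date) -> bool:
        # Passing the audit is good for 12 months.
        return today <= self.passed_on + timedelta(days=365)

    def annotation(self) -> str:
        # Certifications are annotated with the audit year, e.g. "SAFEAI 2019".
        return f"SAFEAI {self.passed_on.year}"

cert = Certification("Example Corp", "Product", date(2019, 6, 1))
print(cert.annotation(), cert.is_valid(date(2020, 3, 1)))  # SAFEAI 2019 True
```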

This post is a follow-on to two previous posts on Independent Audit of AI Systems on Medium:

The Merits of Independent Audit of AI Systems
https://medium.com/all-technology-feeds/ai-safety-the-concept-of-independent-audit-370bb45c01d

SAFEAI: A Tool for Concerned Parents
https://becominghuman.ai/safeai-a-tool-for-concerned-parents-cb9a8218998d

and to a post on the CACM blog:

Governance and Oversight Coming to AI and Automation: Independent Audit of AI Systems
cacm.acm.org

We welcome all of your support as we continue to push this concept forward and work toward bringing oversight and governance, with true accountability, to the world of AI.