AI safety

Ban Autonomous AI Currency/Capital Usage

The current, and rightful, AI video rage is the Slaughterbots video, ingeniously produced by Stuart Russell of Berkeley and the good people at autonomousweapons.org. If you have not watched it AND reacted to support their open letter, you are making TWO gigantic mistakes. Here is the link to the video.

https://youtu.be/9CO6M2HsoIA

As mentioned, there are good people fighting the fight to keep weapons systems from becoming fully autonomous, so I won't delve into that further here. Instead, it reminded me that I had not yet written a long-conceived blog post about autonomous currency/capital usage. Unfortunately, I don't have a Hollywood-produced YouTube video to support this argument, but I wish that I did, because the risk is probably just as great and far more insidious.

The suggested law is that no machine should be allowed to use capital or currency based upon its own decision. As in the case of autonomous weaponry, I am arguing that all capital/currency should 100% of the time be tied to a specific human being, to a human decision maker. To be clear, I am not opposed to automated Billpay and other forms of electronic payment. In all of those cases, a human has decided to dedicate capital/currency to meet a payment and to serve a specific purpose. There is human intent. Further, I recognize the merits of humans being able to automate payments and capital flows. This ban is VERY specific. It bans a machine, using autonomous logic that reaches its own conclusions, from controlling a pool of capital which it can deploy without explicit consent from a human being.

What I am opposed to is machine intent. Artificial Intelligence is frequently goal-oriented. Rarer is an AI that is goal-oriented and ethically bound. Rarer still is an AI that is goal-oriented and counts societal goals as part of its calculus. Since an AI cannot be "imprisoned", at least not yet, and it remains to be seen whether an AI can actually be turned "off" (see the YouTube video below to better understand the AI "off-button problem"), this ban is necessary.

https://youtu.be/3TYT1QfdfsM

So, without the Hollywood fanfare of Slaughterbots, I would like to suggest that the downside risk associated with an autonomous entity controlling a significant pool of capital is just as devastating as the potential of Slaughterbots. Some examples of the risk:

  1. Create monopolies and subsequently gouge prices on wants and needs, even on things as basic as water rights
  2. Consume excess power
  3. Manipulate markets and asset prices
  4. Dominate the legal system, fighting restrictions and legal process
  5. Influence policy, elections, behavior and thought

I understand that many readers have trouble assigning downside risk to AI and autonomous systems. They see the good that is being done. They appreciate the benefits that AI and automation can bring to society, as do I. My point of view is a simple one: if we can eliminate and control the downside risk, even if the probability of that risk is low, then our societal outcomes will be much better.

The argument here is fairly simple. Humans can be put in jail. Humans can have their access to funds cut off by freezing bank accounts or limiting access to the internet and communications devices. We cannot jail an AI, and we may be unable to disconnect it from the network when necessary. So the best method to limit growth and rogue behavior is to keep the machine from controlling capital at the outset. I suggest the outright banning of direct and sole ownership of currency and tradeable assets by machines ONLY; all capital must have a beneficial human owner. Even today, implementing this policy would have an immediate impact on the way that AI and automated systems are implemented. These changes would make us safer and reduce the risk that an AI might abuse currency/capital as described above.

The suggested policy already faces challenges today that must be addressed. First, cryptocurrencies may already be beyond an edict such as this, as there is no ability to freeze cryptos. Second, most global legal systems allow for corporate entities, such as corporations, trusts, foundations and partnerships, which HIDE or at least obfuscate the beneficial owners. See the Guidance on Transparency and Beneficial Ownership from the Financial Action Task Force in 2014, which highlights the issue:

  1. Corporate vehicles — such as companies, trusts, foundations, partnerships, and other types of legal persons and arrangements — conduct a wide variety of commercial and entrepreneurial activities. However, despite the essential and legitimate role that corporate vehicles play in the global economy, under certain conditions, they have been misused for illicit purposes, including money laundering (ML), bribery and corruption, insider dealings, tax fraud, terrorist financing (TF), and other illegal activities. This is because, for criminals trying to circumvent anti-money laundering (AML) and counter-terrorist financing (CFT) measures, corporate vehicles are an attractive way to disguise and convert the proceeds of crime before introducing them into the financial system.

The law that I suggest implementing is not a far leap from the existing AML and Know Your Customer (KYC) compliance that most developed nations already adhere to. In fact, that would be the mechanism of implementation: creating a legal requirement for human beneficial ownership while excluding machine ownership. Customer Due Diligence (CDD) and KYC rules would include disclosures on human beneficial ownership and on the application of AI/automation.
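To make the mechanism concrete, here is a minimal sketch, in Python, of how such a beneficial-ownership check might be expressed inside a payments system. Everything here is hypothetical: the Account and PaymentInstruction types, their fields, and the is_permitted rule are invented for illustration and are not an actual KYC/AML implementation.

```python
# Hypothetical sketch only: illustrates how a rule requiring a human beneficial
# owner (and explicit human consent for machine-initiated payments) might look.
# All names and fields are invented for illustration.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Account:
    account_id: str
    beneficial_owner_id: Optional[str]  # verified human owner, None if unknown
    owner_is_human: bool                # result of KYC/CDD verification


@dataclass
class PaymentInstruction:
    account: Account
    amount: float
    initiated_by_machine: bool             # True if an autonomous system created it
    human_authorization_id: Optional[str]  # reference to an explicit human approval


def is_permitted(payment: PaymentInstruction) -> bool:
    """Apply the suggested ban: capital may only move when it is tied to a
    verified human beneficial owner, and machine-initiated payments require
    explicit human consent (e.g. a standing Billpay mandate)."""
    acct = payment.account
    # Rule 1: every pool of capital must have a verified human beneficial owner.
    if acct.beneficial_owner_id is None or not acct.owner_is_human:
        return False
    # Rule 2: a machine may not deploy capital on its own conclusions;
    # it needs a specific human authorization (human intent).
    if payment.initiated_by_machine and payment.human_authorization_id is None:
        return False
    return True


if __name__ == "__main__":
    acct = Account("ACC-1", beneficial_owner_id="PERSON-42", owner_is_human=True)
    billpay = PaymentInstruction(acct, 120.0, initiated_by_machine=True,
                                 human_authorization_id="MANDATE-7")   # allowed
    rogue = PaymentInstruction(acct, 1_000_000.0, initiated_by_machine=True,
                               human_authorization_id=None)            # blocked
    print(is_permitted(billpay), is_permitted(rogue))
```

In this framing, automated Billpay passes because it carries a standing human authorization, while a machine-initiated transfer with no human consent is blocked, which is exactly the distinction between human intent and machine intent drawn above.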

Unfortunately, creating this ban is only a legal measure. There will come a time when criminals will use AI and automation to deploy currency/capital nefariously. This rule will need to be in place to allow law enforcement to make an attempt to shut such a system down before it becomes harmful. Think about the Flash Crash of 2010, which eliminated $1 trillion of stock market value in little more than a few minutes. This is an example of the potential damage that could be inflicted on a market or on prices, and believe it or not, the Flash Crash of 2010 is probably a small, gentle example of the downside risk.

One might ask who would be opposed to such a ban. There is a short and easy answer to that. It's Sophia, the robot created by Hanson Robotics (and the small faction of people already advocating for machine rights), which was recently granted citizenship in Saudi Arabia and now "claims" to want to have a child. If a machine is allowed to control currency, with itself as the beneficial owner, then robots should be paid a wage for labor. Are we prepared to hand over all human rights to machines? Why are we building these machines at all if we don't benefit from them and they exist on their own accord? Why make the investment?

World's first robot 'citizen' says she wants to start a family (www.news.com.au)

Implementing this rule into KYC and AML regimes is a good start to controlling the downside risk from autonomous systems. It is virtually harmless to implement today. If and when machines are sentient, and we are discussing granting them rights, we can revisit the discussion. For now, however, we are talking about systems built by humanity to serve humanity. Part of serving humanity is to control and eliminate downside risk. Machines do not need the ability to control their own capital/currency. This is easy, so let's get it done.

AI Safety - The Concept of Independent Audit

For months I have regularly tweeted to my colleagues in the AI Safety community that I am a supporter of independent audit, but recently it has become clear to me that I have insufficiently explained how this works and why it is so powerful. This blog will attempt to do that. Unfortunately, it is likely to be longer than my typical blog, so apologies in advance.

Independent audit, defined: ForHumanity, or a similar entity that exists not for profit but for the benefit of humanity, would conduct detailed, transparent and iterative audits of all developers of AI. Our audit would review the following SIX ELEMENTS OF SAFEAI on behalf of humanity (a rough sketch of how such a rubric might be recorded follows the list):

  1. Control — this is an analysis of the on/off switch problem that plagues AI. Can AI be controlled by its human operators? Today it is easier than it will be tomorrow as AI is given more access and more autonomy.
  2. Safety — Can the AI system harm humanity? This is a broader analytic designed to examine the protocols by which the AI will manage its behavior. Has it been programmed to avoid human loss at all costs? Will it minimize human loss if no other choice? Has this concept even been considered?
  3. Ethics/Standards — IEEE's Ethically Aligned Design is laying out a framework that may be adopted by AI developers to represent best practices on ethics and standards of operation. There are 11 subgroups designing practical standards for their specific areas of expertise (the P7000 groups). ForHumanity's audit would look to "enforce" these standards.
  4. Privacy — Are global best practices being followed by the company's AI? Today, to the best of my knowledge, GDPR in Europe is the gold standard of privacy regulation and would be the model that we would audit against.
  5. Cybersecurity — Regarding all human data and interactions with the company's AI, are the security protocols consistent with industry best practices? Are users safe? If something fails, what can be done about it?
  6. Bias — Have the data sets and algorithms been tested to identify bias? Is the bias being corrected for? If not, why not? AI should not result in classes of people being excluded from fair and respectful treatment.
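
To make the rubric concrete, here is a minimal sketch, assuming an invented 0-to-5 scoring scale and pass threshold, of how findings against the six elements might be recorded and rolled up. The element names come from the list above; everything else (the AuditReport type, the scale, the threshold) is hypothetical and does not represent an actual ForHumanity methodology.

```python
# Hypothetical sketch only: one way an auditor might record findings against the
# SIX ELEMENTS OF SAFEAI and roll them up into a pass/fail view. The element
# names come from the list above; the scoring scale and threshold are invented.
from dataclasses import dataclass, field
from typing import Dict, List

SAFEAI_ELEMENTS = [
    "control", "safety", "ethics_standards",
    "privacy", "cybersecurity", "bias",
]


@dataclass
class AuditReport:
    product: str
    # Each element scored 0 (no evidence) to 5 (best practice); invented scale.
    scores: Dict[str, int] = field(default_factory=dict)

    def overall_pass(self, minimum: int = 3) -> bool:
        """A product 'passes' only if every element meets the minimum score;
        a single weak element (e.g. untested bias) fails the whole audit."""
        return all(self.scores.get(e, 0) >= minimum for e in SAFEAI_ELEMENTS)

    def deficiencies(self, minimum: int = 3) -> List[str]:
        """Deficient elements would stay between the auditor and the company
        (see the 'Opaque' point later on); only the overall result is public."""
        return [e for e in SAFEAI_ELEMENTS if self.scores.get(e, 0) < minimum]


if __name__ == "__main__":
    report = AuditReport("ExampleAssistant", scores={
        "control": 4, "safety": 4, "ethics_standards": 3,
        "privacy": 5, "cybersecurity": 4, "bias": 2,
    })
    print(report.overall_pass())   # False: the bias work falls short
    print(report.deficiencies())   # ['bias']
```

The deliberately strict roll-up, where a single deficient element fails the product, reflects the view that weakness in any one of the six elements undermines the whole claim of SAFEAI.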

The criteria that are analyzed are important, but they are not the most important aspect of independent audit. Market acceptance and market demand are the key to making independent audit work. Here's the business case.

We have a well-established public and private debt market. It is well over $75 trillion US dollars globally. One of the key driving forces behind the success of that debt market is independent audit. Ratings agencies like Moody's, Fitch and Standard & Poor's have for decades provided the marketplace with debt ratings. Regardless of how you feel about ratings agencies or their mandate, one thing is certain: they have provided a reasonable sense of the riskiness of debt. They have allowed a marketplace to be liquid and to thrive. Companies (issuers) are willing to be rated, and investors (buyers) rely upon the ratings for a portion of their investment decisions. It is a system with a long track record of success. Here are some of the features of the ratings market model:

  1. Issuers of debt find it very difficult to issue bonds without a rating
  2. It is a for-profit business
  3. There are few suppliers of ratings which is driven by market acceptance. Many providers of ratings would dilute their value and create a least common denominator approach. Issuers would seek the easiest way to get the highest rating.
  4. Investors rely upon those ratings for a portion of their decision making process
  5. Companies provide either legally mandated or requested transparency into their financials for a "proper" assessment of risk
  6. There is an appeals process for companies who feel they are not given a fair rating
  7. The revenue stream from creating ratings allows the ratings agencies to grow and rate more and more debt

Now I would like to extrapolate the credit rating model into an AI safety ratings model to highlight how I believe this can work. However, before I do that, there is one key feature of the ratings agency model that MUST exist for it to work: the marketplace MUST demand it. For example, if an independent audit were conducted on the Amazon Alexa (this has NOT happened to date) and it failed to pass the audit, or was subsequently given a low rating because Amazon had failed some or all of the SIX ELEMENTS OF SAFEAI, then you, the consumer, would have to stop buying it. When the marketplace decides that these safety elements of AI are important, that is when we will begin to see AI safety implemented by companies.

That is not to say that these companies and products are NOT making AI safety a priority today. We simply do not know. From my work, there are many cases where they are not; however, without independent audit, we cannot know where the deficiencies lie. We also can't highlight the successes. For all I know, Amazon Alexa would perfectly pass the SIX ELEMENTS OF SAFEAI today. But until we have that transparency, the world will not know.

That is why independent audit is so important. Creating good products safely is a lot harder than just creating good products. When companies know they will be scrutinized, they behave better; that is a fact. No company will want to have a bad rating published about it or its product. It is bad publicity. It could ruin their business, and that is the key for humanity. AI safety MUST become a priority in the buying decision for consumers and business implementers alike.

Now, a fair criticism of independent audit is the "how" part, and I concur wholeheartedly that it is hard, but that shouldn't stop us from starting the process. The first credit rating would have been a train-wreck of a process compared with the analysis that is conducted by today's analysts. So it will be now, but the most important part of the process is the "intent" to be audited and the "intent" to provide SAFEAI. We won't get it perfectly right the first time, nor will we get it right every time, but we will make the whole process a lot better with transparency and effective oversight.

Some will argue for government regulation (see Elon Musk), but AI and the work being done by global corporations have already outstripped national boundaries. It would be far easier for an AI developer to avoid scrutiny that is nationally focused than to avoid a process that is transnational and market-driven. Below I have created a list of reasons that market-driven regulation, which this amounts to, is far superior to government-based regulation:

  1. Avoids the tax haven problem associated with different rules in different jurisdictions, which simply shifts development to the easiest location
  2. Avoids government involvement, which frequently has been sub-optimal
  3. Allows for government-based AI projects to be rated — a huge benefit regarding autonomous weaponry
  4. Tackles the problem from a global perspective, which is how many of the AI developers already operate
  5. Market driven standards can be applied rather than bureaucracy and politics using the rules as tools to maintain their power
  6. Neutrality

Now, how will this all work? It's a huge endeavor and it won't happen overnight, but I suspect that anyone taking the time to properly consider this will realize the merits of the approach and the importance of the general issue. It is something we MUST do. So here's the process I suggest:

  1. Funded - we have to decide that this is important and begin to fund the audit capabilities of ForHumanity or a like-minded organization
  2. Willingness - some smart company or companies must submit to independent audit, recognizing that today, it is a sausage making endeavor, but one that they and ForHumanity are undertaking together to build this process and to benefit both humanity and their product for achieving SAFEAI
  3. Market acceptance - this is a brand recognition exercise. We must grow awareness of the issues around AI safety and have consumers begin to demand SAFEAI
  4. Revenue positive, Licensing - once the first audit is conducted, the company and product will be issued the SAFEAI logo. Associated with this logo is a small per-product licensing fee (cents) payable to ForHumanity or a like organization. This fee allows us to expand our efforts and to audit more and more organizations. It amounts to the corporate fee payable for the benefits to humanity. (Yes, I know it is likely to be passed on to the consumer)
  5. Expansionary - this process will have to be repeated over and over and over again until the transparency and audit process become pervasive. Then we will know when products are rogue and out of compliance by choice, not from neglect or the newness of the process
  6. Refining and iterative- this MUST be a dynamic process that is constantly scouring the world for best-practices, making them known to the marketplace and allowing for implementation and upgrade. This process should be collaborative with the company being audited in order to accomplish the end goal of AI safety
  7. Transparent - companies must be transparent in their dealings
  8. Opaque - the rating cannot be transparent about the shortcomings, in order to protect the company and the buyers of the product who still choose to purchase and use it. It is sufficient to know that there is a deficiency somewhere, but there is no need to make that deficiency public. It will remain between ForHumanity and the company itself. Ideally, the company learns of its deficiency and immediately aims to remedy the issue
  9. Dynamic - this team cannot be a bureaucratic team, it must be comprised of some of the most motivated and intelligent people from around the world determined to deliver AI safety to the world at-large. It must be a passion, it must be a dedication. It will require great knowledge and integrity
  10. Action-oriented - I am a do-er, this organization should be about auditing not about discussing. Where appropriate and obvious, like in the case of IEEE’s EAD, adoption should supersede new discussions. Take advantage of the great work being done already by the AI safety crowd
  11. And it has to be more than me - As I have written these words and considered these thoughts, it has always been with the idea that it is impossible for one person, or even a small group, to have a handle on "best-practices". This will require larger teams of people from many nationalities, from many industries, and from many belief systems to create the balance and the collaborative environment that can achieve something vitally important to all of humanity.

Please consider helping me. You don’t have to agree with every thought. The process will change daily, the goalposts will change daily, “best-practices” will change daily. You have to agree on two things:

  1. That this is vitally important
  2. That independent audit is the best way to achieve our cumulative goals

If you can agree on these two simple premises, then join with me to make these things happen. That requires that you like this blog post. That you share it with friends. That you tell them that it is important to read and consider. It requires you to reconsider how you review and buy AI-associated products. It requires you to take some action. But if you do these things, humanity will benefit in the long run, and I believe that is worth it.