Ban Autonomous AI Currency/Capital Usage

The current AI video rage, and deservedly so, is the Slaughterbots video, ingeniously produced by Stuart Russell of Berkeley and the good people at autonomousweapons.org. If you have not watched it AND reacted to support their open letter, you are making TWO gigantic mistakes. Here is the link to the video.

https://youtu.be/9CO6M2HsoIA

As mentioned, there are good people fighting the fight to keep weapons systems from becoming fully autonomous, so I won’t delve into that further here. Instead, it reminded me that I had not yet written a long-conceived blog post about autonomous currency/capital usage. Unfortunately, I don’t have a Hollywood-produced YouTube video to support this argument, but I wish that I did, because the risk is probably just as great and far more insidious.

The suggested law is that no machine should be allowed to use capital or currency based upon its own decision. As in the case of autonomous weaponry, I am arguing that all capital/currency should, 100% of the time, be tied to a specific human being, to a human decision maker. To be clear, I am not opposed to automated Billpay and other forms of electronic payment. In all of those cases, a human has decided to dedicate capital/currency to meet a payment and to serve a specific purpose. There is human intent. Further, I recognize the merits of humans being able to automate payments and capital flows. This ban is VERY specific. It bans a machine using autonomous logic, one that reaches its own conclusions, from controlling a pool of capital that it can deploy without explicit consent from a human being.
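To make that boundary concrete, here is a minimal sketch in Python of the authorization rule the ban implies. Every name and field here is hypothetical, invented purely for illustration; no real payments system is being described:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PaymentInstruction:
    # Hypothetical fields, for illustration only.
    amount: float
    initiated_by: str                  # "human" or "machine"
    beneficial_owner: Optional[str]    # ID of the human owner, or None
    human_preauthorized: bool = False  # e.g. a human-scheduled Billpay

def authorize(p: PaymentInstruction) -> bool:
    """Reject any payment that cannot be traced to human intent."""
    if p.beneficial_owner is None:
        # Capital must always be tied to a specific human being.
        return False
    if p.initiated_by == "machine" and not p.human_preauthorized:
        # Machine intent: autonomous logic, no explicit human consent.
        return False
    return True
```

Under this sketch, a human-scheduled Billpay executed by software still clears, because a human pre-authorized it and remains the beneficial owner; a machine acting on its own conclusions, with no human behind the funds, does not.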

What I am opposed to is machine intent. Artificial Intelligence is frequently goal-oriented. Rarer is an AI that is goal-oriented and ethically bound. Rarer still is an AI that is goal-oriented and makes societal goals part of its calculus. Since an AI cannot be “imprisoned”, at least not yet, and it remains to be seen whether an AI can actually be turned “off” (see the YouTube video below to better understand the AI “off button problem”), this ban is necessary.

https://youtu.be/3TYT1QfdfsM

So, without the Hollywood fanfare of Slaughterbots, I would like to suggest that the downside risk associated with an autonomous entity controlling a significant pool of capital is just as devastating as the potential of Slaughterbots. Some examples of risk:

  1. Creating monopolies and subsequently price gouging on wants and needs, even on things as basic as water rights
  2. Consuming excess power
  3. Manipulating markets and asset prices
  4. Dominating the legal system, fighting restrictions and legal process
  5. Influencing policy, elections, behavior and thought

I understand that many readers have trouble assigning downside risk to AI and autonomous systems. They see the good that is being done. They appreciate the benefits that AI and automation can bring to society, as do I. My point-of-view is a simple one. If we can eliminate and control the downside risk, even if the probability of that risk is low, then our societal outcomes are much better off.

The argument here is fairly simple. Humans can be put in jail. Humans can have their access to funds cut off by freezing bank accounts or limiting access to the internet and communications devices. We cannot jail an AI, and we may be unable to disconnect it from the network when necessary. So the best method to limit growth and rogue behavior is to keep the machine from controlling capital at the outset. I suggest the outright banning of direct and singular ownership of currency and tradeable assets by machines. All capital must have a beneficial human owner. Even today, implementing this policy would have an immediate impact on the way that AI and automated systems are implemented. These changes would make us safer and reduce the risk that an AI might abuse currency/capital as described above.

The suggested policy already faces challenges today that must be addressed. First, cryptocurrencies may already be beyond an edict such as this, as there is no ability to freeze cryptos. Second, most global legal systems allow for corporate entities, such as corporations, trusts, foundations and partnerships, which HIDE or at least obfuscate the beneficial owners. See the Guidance on Transparency and Beneficial Ownership from the Financial Action Task Force in 2014, which highlights the issue:

  1. Corporate vehicles — such as companies, trusts, foundations, partnerships, and other types of legal persons and arrangements — conduct a wide variety of commercial and entrepreneurial activities. However, despite the essential and legitimate role that corporate vehicles play in the global economy, under certain conditions, they have been misused for illicit purposes, including money laundering (ML), bribery and corruption, insider dealings, tax fraud, terrorist financing (TF), and other illegal activities. This is because, for criminals trying to circumvent anti-money laundering (AML) and counter-terrorist financing (CFT) measures, corporate vehicles are an attractive way to disguise and convert the proceeds of crime before introducing them into the financial system.

The law that I suggest implementing is not a far leap from the existing AML and Know Your Customer (KYC) compliance regimes that most developed nations already follow. In fact, that would be the mechanism of implementation: creating a legal requirement for human beneficial ownership while excluding machine ownership. Customer Due Diligence and KYC rules would include disclosures on human beneficial ownership and on applications of AI/automation.
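As a sketch of how those two disclosures might sit inside a Customer Due Diligence check, consider the following. The schema and field names are hypothetical, not drawn from any actual KYC system:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class KYCRecord:
    # Hypothetical CDD/KYC file, extended with the two proposed disclosures.
    entity_name: str
    human_beneficial_owners: List[str]  # named human owners of the capital
    automation_in_use: bool             # does software move funds on the account?
    automation_disclosed: bool          # was that automation declared in CDD?

def passes_cdd(r: KYCRecord) -> bool:
    """Customer Due Diligence check under the proposed rule."""
    if not r.human_beneficial_owners:
        # No human behind the capital: machine or shell ownership fails.
        return False
    if r.automation_in_use and not r.automation_disclosed:
        # Declared automation is allowed; undeclared automation is not.
        return False
    return True
```

In this sketch, a trust with a named human owner and declared automation is onboarded; an entity with no human owner, or one concealing its automation, is refused.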

Unfortunately, creating this ban is only a legal measure. There will come a time when criminals will use AI and automation in ways that deploy currency/capital nefariously. This rule will need to be in place to allow law enforcement to make an attempt to shut it down before it becomes harmful. Think about the Flash Crash of 2010, which eliminated $1 trillion of stock market value in little more than a few minutes. This is an example of the potential damage that could be inflicted on a market or prices, and, believe it or not, the Flash Crash of 2010 is probably a small, gentle example of the downside risk.

One might ask who would be opposed to such a ban. There is a short and easy answer to that. It’s Sophia, the robot created by Hanson Robotics (and the small faction of people already advocating for machine rights), which was recently granted citizenship in Saudi Arabia and now “claims” to want to have a child. If a machine is allowed to control currency, with itself as a beneficial owner, then robots should be paid a wage for labor. Are we prepared to hand over all human rights to machines? Why are we building these machines at all if we don’t benefit from them and they exist on their own accord? Why make the investment?

World's first robot 'citizen' says she wants to start a family
“JUST one month after she became the world's first robot to be granted citizenship of a country, Sophia has said that…” (news.com.au)

Implementing this rule within KYC and AML regimes is a good start to controlling the downside risk from autonomous systems. It would be virtually harmless to implement today. If and when machines are sentient, and we are discussing granting them rights, we can revisit the discussion. For now, however, we are talking about systems built by humanity to serve humanity. Part of serving humanity is to control and eliminate downside risk. Machines do not need the ability to control their own capital/currency. This is easy, so let’s get it done.