
Facebook and Cambridge Analytica - "Know Your Customer", A Higher Standard

Know Your Customer (KYC) is a required practice in finance, defined as the process by which a business identifies and verifies the identity of its clients. The term also refers to the banking and anti-money-laundering regulations which govern these activities. Many of you will not be familiar with this rule of law. It exists primarily in the financial industry and is a cousin to laws such as the Anti-Money Laundering (AML) rules, the Patriot Act of 2001 and the USA Freedom Act of 2015. These laws were designed to require companies to examine who their clients are. Are they involved in illegal activities? Do they finance terrorism? What is the source of their money? The idea was to prevent our financial industry from supporting or furthering the ability of wrong-doers to cause harm. So how does this apply to Facebook and the Cambridge Analytica issues?

I am suggesting that the Data Industry, which includes any company that sells or provides access to individuals’ information, should be held to this same standard. Facebook should have to Know Your Customer. Google should have to Know Your Customer. Doesn’t this seem reasonable? The nice part about this proposal is that it isn’t new. We don’t have to draft brand-new laws to cover it, just modify some existing language. KYC exists for banks; now let’s expand it to social media, search engines and the sellers of big data.

Everywhere in the news today, we have questions about “who is buying ads on social media”. Was it Russians trying to influence an election? Was it neo-Nazis, ANTIFA or other radical ideologues? Is it a purveyor of “fake news”? If social media outlets were required to KYC their potential clients, then they would be able to weed out many of these organizations well before their propaganda reaches the eyes of their subscribers. Facebook has already stated that it wants to prevent groups such as these from influencing its users via its platform. So it is highly reasonable to ask them to do it, or be penalized for failure to do so. Accountability is a powerful thing. Accountability means that it actually gets done.

Speaking of “getting it done”, some of you may have seen Facebook’s complete about-face on its compliance with GDPR, moving 1.5 billion users out of Irish jurisdiction and to California, where there are very limited legal restrictions. https://arstechnica.com/tech-policy/2018/04/facebook-removes-1-5-billion-users-from-protection-of-eu-privacy-law/

If you aren’t familiar with GDPR, it is Europe’s powerful new privacy law. For months, Facebook publicly stated how it intended to comply with the law. But when push came to shove, its most recent move was to avoid the law and avoid compliance as much as possible. Flowery language is something we often hear from corporate executives on these matters, but in the end, they will still serve shareholders and profits first and foremost. So unless these companies are forced to comply, don’t expect them to do it out of moral conviction; that’s rarely how companies operate.

Returning to the practical application of KYC: for a financial firm, this means that a salesperson has to have a reasonable relationship with their client in order to ensure compliance with KYC. They need to know the client personally and be familiar with the source and usage of funds. If a financial firm fails to execute KYC and it turns out that the organization it is doing business with is charged with a crime, then the financial firm and the individuals involved face swift ramifications, including substantive fines and potential jail time. This should apply to social media and the data industry.

Let me give you a nasty example. Have you looked at the amazing detail Facebook or Google have compiled about you? It is fairly terrifying and there are some out there (Gartner, for example) who have even predicted that your devices will know you better than your family knows you by 2022.

https://www.gartner.com/smarterwithgartner/emotion-ai-will-personalize-interactions/

Now, assuming this is even close to true for many of us, imagine that information being sold to a PhD candidate at MIT, or another reputable AI program, except that the PhD student, beyond doing his AI research, is funnelling that data on to hackers on the dark web, or worse, to a nation-state engaged in cyberwarfare. How easy would it be for that group to cripple a large portion of the country? Or maybe it has already happened, with examples like Equifax and its 143-million-client breach. Can you be sure that the world’s largest hacks aren’t getting their start by accessing your data from a data reseller?

To be fair, in finance, oftentimes you are taking in the funds and controlling the activities after the fact. You know what is going on. With data, oftentimes you are selling access or actual data to the customer and no longer have control over their activities, it might seem. But this argument simply strengthens the case for Know Your Customer, because these firms may have little idea how their data is being used or misused. Better to slow down the gravy train than to ride it into oblivion.

Obviously the details would need to be drafted and hammered out by Congress, but I am seeking support for the broader concept and encouraging supporters to add it to the legislative agenda. ForHumanity has a fairly comprehensive set of legislative proposals at this point, which we hope will be considered in the broad context of AI policy. Questions, thoughts and comments are always welcome. This field of AI safety remains so new that we really should have a crowd-sourced approach to identifying best practices. We welcome you to join us in the process.

New C-Suite Position - AI and Automation Risk Management

Many of you will be familiar with the challenges that Facebook is facing.

The Cambridge Analytica saga is a scandal of Facebook’s own making | John Harris (www.theguardian.com)

Internal disagreements about how data has been used. Was it sold? Was it manipulated? Was it breached? It has put the company itself at risk and highlighted the need for a new position at the C-Suite level, one that most companies have avoided up until now: AI and Automation Risk Management.

Data is the new currency; data is the new oil. It is the lifeblood of all Artificial Intelligence algorithms and machine and deep learning models. It is the way that our machines are being trained. It is the basis for the best and brightest to begin to uncover new tools for healthcare, new ways to protect security, new ways to sell products. In 2017, Google and Facebook alone had revenue close to $60 billion from advertising, all due to data. However, usage of that data is at risk because of perceived abuses by Facebook and others.

Data is also, and more importantly, about PEOPLE. Data is PERSONAL, even if there have been attempts to anonymize it. People need an advocate, inside the company, defending their legal, implied and human rights. This is a dynamic marketplace, with new rules, regulations and laws being considered and implemented all the time. AI and Automation face some substantial challenges in their development. Here is a short list, and NONE of these are a priority for engineers and programmers, despite the occasional altruistic commentary to the contrary. As you will see, the advancement of AI and Automation requires full-time attention:

  1. Ethical machines and algorithms — There are millions and millions of decisions being made by machines and algorithms. Many of these decisions are meant to be based upon our value system as human beings. That means our ethics need to be coded into the machines, and this is no easy task even with a single agreed set of ethics. Deriving profit from ethics is tenuous at best, and there is certain to be a cost.
  2. Data and decision bias — Our society is filled with bias, our perceptions are filled with bias, thus our data is filled with bias. If we do not take steps to correct bias in the data, then our algorithmic decisions will be biased. However, correcting for bias may not be as profitable, which is why it needs to be debated in the C-suite.
  3. Privacy — There is a significant pushback forming over what privacy means online. GDPR in Europe is a substantial set of laws providing people with increased transparency, privacy and, in some cases, the right to be forgotten. Compliance with GDPR is one responsibility of the AI Risk Manager.
  4. Cybersecurity and Usage Security (a Know Your Customer process for data usage) — Companies already engage in cybersecurity, but the responsibility is higher when you are protecting customer data. Furthermore, companies should adopt the finance industry standard of “Know Your Customer (KYC)”. Companies must know and understand how data is being used by clients to prevent abuses or illegal behavior.
  5. Safety (can the machines that are being built be controlled and avoid unintentional consequences to human life) — This idea is a little farther afield for most; however, now that an Uber autonomous vehicle has been involved in a fatality, it is front and center. The AI Risk Manager’s job is to consider the possibilities of how a machine may injure a human being, whether through a hack, negligence, or system failure.
  6. Future of Work (as jobs are destroyed, how does the company treat existing employees and the displaced workers/communities) — This is the PR challenge of the role, depending on how the company chooses to engage its community. But imagine for a moment taking a factory with 1000 employees and automating the whole thing. That’s 1000 people directly affected. That’s a community potentially devastated, if all 1000 employees were laid off.
  7. Legal implications of cutting-edge technology (in partnership with the Legal team or outside counsel) — GDPR compliance, legal liability of machines, new regulations and their implementation. These are the domain of the AI Risk Manager in conjunction with in-house and outside counsel.

This voice is a C-suite job and must have authority equal to the sources of revenue, in order to stand a chance of protecting the company and protecting the data, i.e. the people who create the data.

I am not here to tell you to stop using data. However, believing that each of these companies, whose primary purpose is not compliance but profit, will always use this data prudently is naive at best. Engineers and programmers solve problems; they have not been trained to consider risks such as feelings, privacy rights, and bias. They see data of all kinds as “food” and “input” to their models. To be fair, I don’t see anything wrong with their approach. However, the company cannot let that exist unfettered. It is risking its entire reputation on using data, which is actually using PEOPLE’S PERSONAL AND OFTEN PRIVATE INFORMATION, to get results.

For many new companies and new initiatives, data is the lifeblood of the effort. It deserves to be protected and safeguarded. Beyond that, since data is about people, it deserves to be treated with respect, consideration, and fairness. Everyone is paying more and more attention to issues like this, and companies must react. People and their data need an advocate in the C-Suite. I recommend a Chief of AI and Automation Risk Management.

Personal Data - How to Control Info and Get Paid For Being You

Here’s a quick and easy way to break some of the current monopolies that exist in the personal data market (looking at Google, Facebook and Amazon). Let people own their own data, disseminate it as they see fit and, shockingly, get paid for what is rightfully theirs.

The concept is simple; the implementation is challenging and requires a good-sized investment from somewhere at the outset. I’ll explain the concept first and then circle back around to the implementation at the end.

A personal data exchange. Simply explained, each person 18 and over has the right to list themselves on a “data” exchange, just like companies list their equity on a stock exchange. Each individual would get a listing of their own on the exchange, so ideally the exchange would have hundreds of millions of listings, one for each unique person. This exchange would function as a clearinghouse for YOUR data. It would be the marketplace for companies, of all kinds, to retrieve the data that they require from the sellers of data — YOU. If you list John Q Public on the exchange and want to severely limit the data you provide the world, that is your choice; however, because your data is used less frequently, you will get paid very little for being John Q Public. Privacy is your right, and you should be in control of your own data.

However, if John Q Public lists his data, answers questions submitted by the marketplace, responds in a timely manner to requests and is generally open about preferences, opinions and many other pieces of information in demand, then John Q Public will find a nice revenue stream associated with this very valuable data.

The marketplace is interactive. It starts with many of the key data items that are commonly available from a simple internet search. However, once you file your “listing” on the exchange, companies will seek your information, maybe through GPS location, maybe via your search and query results. They may also send out questionnaires. If you, the listed entity, find yourself uncomfortable answering certain data questions, then stop. You are in control. You release your data how you want, when you want. Do you want your shopping tracked? If yes, then allow it. Maybe only certain shopping is included. Furthermore, this data does not have to be “public”. It can be provided directly to a client company anonymously (anonymous data is cheaper to the company and pays you less as well). As the listing entity providing the data, you will be in control. Companies can come to the marketplace and create datasets, asking questions of the personal listings in an attempt to grow a business, design their algorithms or pivot a marketing strategy. They can pay for high-quality information which will be crucial to their business. As a “listed entity” you would have a responsibility to respond, if you want to get paid. You must answer truthfully, completely and in a timely manner. If the data proves to be useless over time because people lie and obscure relevant information, then the marketplace breaks down and people will lose CONTROL over their data, the way that it exists today.
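
To make the mechanics concrete, here is a minimal sketch of how a single listing’s permission and payout controls might look. It is illustrative only: the field names and the pricing rule are hypothetical assumptions, not a specification of any real exchange.

```python
from dataclasses import dataclass, field

# Hypothetical permission switches a listed person would control.
@dataclass
class DataPermissions:
    gps_location: bool = False     # share location trails?
    search_history: bool = False   # share search and query results?
    shopping: bool = False         # share purchase history?
    questionnaires: bool = True    # willing to answer company questions?

@dataclass
class Listing:
    listing_id: str
    permissions: DataPermissions = field(default_factory=DataPermissions)
    anonymous_only: bool = True    # anonymous data pays less than identified data

    def quote(self, base_price: float) -> float:
        """Illustrative pricing: more shared categories means a higher payout,
        discounted when the data is provided anonymously."""
        shared = sum(vars(self.permissions).values())
        price = base_price * shared
        return price * 0.5 if self.anonymous_only else price

# A cautious John Q Public shares little and earns little...
john = Listing("john-q-public")
print(john.quote(base_price=10.0))   # 5.0 (questionnaires only, anonymous)

# ...while an open, identified listing earns more.
john.permissions.search_history = True
john.permissions.shopping = True
john.anonymous_only = False
print(john.quote(base_price=10.0))   # 30.0
```

The point of the sketch is the control surface: every category of data is an explicit switch the listed person flips, and the payout scales with openness.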

It is hard to know how valuable this data is. Today, it is priced at zero by the owners of the data — you. Google and Facebook make nearly $100 billion per year selling your data — selling you. I am certain that, given the choice between no Google search engine and no Facebook, or $1,000 in your pocket, most people would choose the $1,000. Furthermore, it is an excellent way to democratize wealth, as anyone would be eligible to participate.

Now, implementation is a challenge, but not an insurmountable one. The absolute key is that you must launch the exchange with many, many listings. Therefore, to motivate individuals to list on the exchange, they will need to be compensated to do so. I find that money has a way of getting people’s attention. However, we aren’t talking about a small amount of money, as the minimum number of listings probably needs to be 20–30 million people from the outset to make the dataset meaningful. Furthermore, an incentive system to encourage people to help others list will be valuable. So the capital required to incentivize the listing is significant, well over $2 billion just to launch (see the rough arithmetic below). Fortunately, there are a few large pools of investors, unbiased and correctly aligned with individuals, who could fund the endeavor.
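
As a rough sanity check on that launch figure (my own back-of-the-envelope arithmetic, using only the numbers above), $2 billion spread across 20–30 million initial listings implies a signing incentive of roughly $67–$100 per person:

```python
# Implied signing incentive per person, using only the figures in the text:
# roughly $2 billion of launch capital spread over 20-30 million listings.
launch_capital = 2_000_000_000
for listings in (20_000_000, 30_000_000):
    print(f"{listings:,} listings -> ~${launch_capital / listings:,.0f} each")
# 20,000,000 listings -> ~$100 each
# 30,000,000 listings -> ~$67 each
```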

Secondly, the technological challenge of this exchange is not trivial. Handling over 100 million listings of John and Sarah Q Publics is a sizable endeavor. Furthermore, the front-end will need to be extremely flexible and user friendly to ensure that all listings can easily and efficiently manage their data.

The education process is also monumental, as many people do NOT value their data; furthermore, they do not value their privacy. They are content to give away this asset for free, or at least in exchange for a forum to fight about politics, post some pictures, search the web, have a laugh or order some goods to be delivered in less than 2 days. However, success here breeds success, and with the right incentive structure, I believe that this process will go viral. If the money is sufficient, then data will become an important revenue stream for individuals and households.

Finally, this has to be done at a truly institutional level. It requires the cachet of the New York Stock Exchange or Chicago Mercantile Exchange to demonstrate the “seriousness” of the concept, combined with the institutional fortitude to handle the sheer magnitude required to make this successful, and finally the technology chops and delivery expertise to pull it all together and provide an exceptional experience for the listed individuals. This idea has been tried or discussed on a smaller scale, but small scale doesn’t work for the companies in the data business. This idea is go big or go home, but in terms of privacy and control of personal data, there is no better way than to put you in charge and let the market pay you directly.

The disintermediation of Google, Facebook and Amazon in the “control your data business” is just a perk…

ForHumanity AI and Automation Awards 2017

Top AI breakthrough

AlphaGo Zero — game changing machine learning, by removing the human from the equation

https://www.technologyreview.com/s/609141/alphago-zero-shows-machines-can-become-superhuman-without-any-help/

This is an incredibly important result for a series of reasons. First, AlphaGo Zero learned to master the game with ZERO input from human players and previous experience. It trained knowing only the rules of the game, demonstrating that it can learn better and faster with NO human involvement. Second, the implication for future AI advancement is likely that humans are “in the way” of optimal learning. Third, the follow-on system AlphaZero went on to master chess in four hours, demonstrating an adaptability that has dramatically increased the speed of machine learning. DeepMind now has over 400 PhDs working on Artificial GENERAL Intelligence.

Dumbest/Scariest thing in AI in 2017

Sophia, an AI machine made by Hanson Robotics, granted citizenship in Saudi Arabia — VERY negative domino effect

http://www.arabnews.com/node/1183166/saudi-arabia#.WfEXlGXM2oI.twitter

The implications associated with “citizenship” for machines are far-reaching. Citizenship in most locations includes the right to vote, to receive social services from the government and the right to equal treatment. The world is not ready, and may never be ready, to have machines that vote, or machines that are treated as equals to human beings. While this was an AI stunt, its impact could be devastating if played out.

Most immediately Impactful development — Autonomous Drive vehicles

Waymo reaches 4 million actual autonomous test miles

https://medium.com/waymo/waymos-fleet-reaches-4-million-self-driven-miles-b28f32de495a

Autonomous cars progressing to real world applications more quickly than most anticipated

https://arstechnica.com/cars/2017/10/report-waymo-aiming-to-launch-commercial-driverless-service-this-year/

The impact of autonomous drive vehicles cannot be overstated: from the devastation of jobs in the trucking and taxi industries to the freedom of mobility for many who are unable to drive. Autonomous driving is also likely to result in significantly fewer deaths than human driving. This impact carries through to the auto insurance industry, where KPMG reckons that 71% of the auto insurance industry will disappear in the coming years. Central-authority control over the movement of people is another second-order consideration that not many have concerned themselves with.

Most impactful in the future — Machine dexterity

Here are a few amazing examples of the advancement of machine dexterity. As machines are able to move and function similarly to humans, then their ability to replicate our functions increases dramatically. While this dexterity is being developed, work is also being done to allow machines to learn “how” to move like a human being nearly overnight, instead of through miles and miles of code — machines teach themselves from video and virtual reality.

Boston Dynamics’ Atlas robot is approaching human-level agility, including backflips — short video, easy watch

https://www.youtube.com/watch?v=fRj34o4hN4I&feature=youtu.be

Moley introduces the first robotic kitchen –

https://www.youtube.com/watch?v=QDprrrEdomM&feature=share

Machine movement being taught human level dexterity with simulated learning algorithms — accelerating learning and approximating human process

https://www.facebook.com/futurismscience/videos/816173055250697/?fref=mentions

Most Scary — Science Fiction-esque development

North Dakota State students develop self-replicating robot

https://www.manufacturingtomorrow.com/news/2017/07/19/ndsu-students-develop-3d-printing-self-replicating-robot/10034/

The impact of machines being able to replicate themselves is a staple of dystopian sci-fi movies; however, there are many practical reasons for machines to replicate themselves, which is what the researchers at North Dakota State were focused on. Still, with my risk management hat on for AI Safety, it raises a whole other set of rules and ethics that need to be considered, which the industry and the world are not prepared for.

Most Scary AI real-world issues NOW

Natural Language processing finds millions of faked comments on FCC proposal regarding Net Neutrality

https://futurism.com/over-million-comments-backing-net-neutrality-repeal-likely-faked/

AI being used to duplicate voices in a matter of seconds

https://www.digitaltrends.com/cool-tech/ai-lyrebird-duplicate-anyones-voice/

Using AI, videos can be made to make real individuals appear to say things that they did not…

https://futurism.com/videos/this-ai-can-create-fake-videos-that-look-real/

Top AI Safety Initiatives

Two initiatives share the top spot, mostly because of their practical applications in a world that remains too heavily devoted to talk and inaction. These efforts are all about action!

AutonomousWeapons.org produces Slaughterbots video

This video is an excellent storytelling device to highlight the risk of autonomous weaponry. Furthermore, there is an open letter that all humans should support. Please do so. Over 2.2 million views so far.

https://youtu.be/9CO6M2HsoIA

Ethically Aligned Design v 2.0

John Havens and his team at IEEE, the technology standards organization, have collaborated extremely successfully with over 250 AI safety specialists from around the world to develop Ethically Aligned Design (EAD) v 2.0. It is the beacon on which AI and Automation ethics should build their implementation. More importantly, anyone who rejects the work of EAD should immediately be viewed with great skepticism and concern as to their motives.

http://standards.ieee.org/develop/indconn/ec/ead_v2.pdf

AI and Automation - Managing the Risk to Reap the Reward

Personally, I have been hung up on Transhumanism. For those of you not familiar, Transhumanism is the practice of embedding technology directly into your body — merging man and machine, a subset of AI and Automation. It seems obvious to me that, when given the chance, some people with resources will enhance/augment themselves in order to gain an edge at the expense of other human beings around them. It’s really the same concept as trying to get into a better university, or training harder on the athletic field. As a species we are competitive, and if winning is important enough, people will do what it takes to win. People will always use this competitive fire to gain an upper hand on others. Whether it is performance-enhancing drugs in sports or paid lobbyists in politics, money, wealth and power create an uneven playing field. Technology is another tool, and some, if not many, will use it for nefarious purposes, I am certain.

But nefarious uses of AI are only part of the story. When it comes to transhumanism, there are many upsides: the chance to overcome Alzheimer’s, to return the use of lost limbs, to restore eyesight and hearing. There are lots of places where technology in the body can return the disabled to normal human function, which is a beautiful story.

So in that spirit, I want to shine a light on all of the unquestionable good that we may choose to do with advances in AI and automation. Because when we have a clear understanding of both good outcomes and bad outcomes, then we are in a position to understand our risk/reward profile with respect to the technology. When we understand our risk/reward profile, we can determine if the risks can be managed and if the rewards are worth it.

Let’s try some examples, starting with the good:

  1. AI and Automation may enhance the world’s economic growth to the point where there is enough wealth to eradicate poverty, hunger and homelessness.
  2. AI & Automation will allow researchers to test and discover more drugs and medical treatments to eliminate debilitating disease, especially diseases that are essentially a form of living torture, like Alzheimer’s or cerebral palsy.
  3. Advancement of technology will allow us to explore our galaxy and beyond.
  4. Advancement in computing technology and measurement techniques will allow us to understand the physical world around us at even the smallest, building-block level.
  5. Increase our efficiency and reduce wasteful resource consumption, especially of carbon-based energy.

At least those five things are unambiguously good. There is hardly a rational thinker around who will dispute those major advancements and their inherent goodness. It is also likely that the sum of that goodness outweighs the sum of any badness that we come across with the advancement of AI and Automation. Why am I confident in saying that? Because it appears to have been true for the entirety of our existence. The reason technology marches forward is that it is viewed as inherently good, inherently beneficial and, in most cases, as problem-solving.

But here is where the problem lies and my concern creeps in. AI has enormous potential to benefit society. As a result it has significant power. Significant power comes with significant downside risk if that power is abused.

Here are some concerns:

  1. Complete loss of privacy (Privacy)
  2. Embedded bias in our data, systems and algorithms that perpetuates inequality (Bias)
  3. Susceptibility to hacking (Security)
  4. Systems that operate in, and sometimes take over, our lives without sharing our moral code or goals (Ethics)
  5. Annihilation and machine takeover, if at some point our species is deemed useless or, worse, a competitor (Control and Safety)

There are plenty of other downside risks, just as there are plenty of additional upside rewards. The mission of this blog post was not to calculate the risk/reward scenarios today, but rather to highlight for the reader that there are both risks and rewards. AI and Automation aren’t just awesome; there are costs, and they need to be weighed.

My aim in the 12+ months that I have considered the AI Risk space has been very simple, but maybe poorly expressed. So let me try to set the record straight with a couple of bullet points:

  1. ForHumanity supports and applauds the advancement of AI and Automation. ForHumanity also expects technology to advance further and faster than the expectation of the majority of humanity.
  2. ForHumanity believes that AI & Automation have many advocates and will not require an additional voice to champion their expected successes. There are plenty of people poised to advance AI and Automation
  3. ForHumanity believes that there are risks associated with the advancement of AI and Automation, both to society at-large as well as to individual freedoms, rights and well-being
  4. ForHumanity believes that there are very few people who examine these risks, especially amongst those who develop AI and Automation systems and in the general population
  5. Therefore, to improve the likelihood of the broadest, most inclusive positive outcomes from the advancement of technology, ForHumanity focuses on identifying and solving the downside risks associated with AI and Automation

When we consider our measures of reward and the commensurate risk, are we accurately measuring them? My friend and colleague John Havens at IEEE has been arguing for some time now that our measures of well-being are too economic (profit, capital appreciation, GDP) and neglect more difficult-to-measure concepts such as joy, peace and harmony. The result is that we may not correctly evaluate the risk/reward profile, may reach incorrect conclusions and may charge ahead with our technological advances. It is precisely at that moment that my risk management senses kick in. If we can eliminate or mitigate the downside risks, then our chance to be successful, even if poorly measured, increases dramatically. That is why ForHumanity focuses on the downside risk of technology. What can we lose? What sacrifices are being made? What does a bad outcome look like?

With that guiding principle, ForHumanity presses forward to examine the key risks associated with AI and Automation: Safety & Control, Ethics, Bias, Privacy and Security. We will try to identify the risks and solutions to those risks, with the aim of providing each member of society the best possible outcome along the way. There may be times when ForHumanity comes across as negative or opposed to technology, but in the vast majority of cases that is not true. We are just focused on the risks associated with technology. The only time we will be negative on a technology is when the risks overwhelm the reward.

We welcome company on this journey and certainly could use your support as we go. Follow our Facebook feed ForHumanity or Twitter @forhumanity_org and of course the website at https://www.forhumanity.center/

Ban Autonomous AI Currency/Capital Usage

The current, and justified, AI video rage is the Slaughterbots video, ingeniously produced by Stuart Russell of Berkeley and the good people at autonomousweapons.org. If you have not watched AND reacted to support their open letter, you are making TWO gigantic mistakes. Here is the link to the video.

https://youtu.be/9CO6M2HsoIA

As mentioned, there are good people fighting the fight to keep weapons systems from being fully autonomous, so I won’t delve into that more here. Instead, it reminded me that I had not written a long-conceived blog post about autonomous currency/capital usage. Unfortunately, I don’t have a Hollywood-produced YouTube video to support this argument, but I wish that I did, because the risk is probably just as great and far more insidious.

The suggested law is that no machine should be allowed to use capital or currency based upon its own decision. As in the case of autonomous weaponry, I am arguing that all capital/currency should 100% of the time be tied to a specific human being, to a human decision maker. To be clear, I am not opposed to automated bill pay and other forms of electronic payment. In all of those cases, a human has decided to dedicate capital/currency to meet a payment and to serve a specific purpose. There is human intent. Further, I recognize the merits of humans being able to automate payments and capital flows. This ban is VERY specific. It bans a machine, using autonomous logic which reaches its own conclusions, from controlling a pool of capital which it can deploy without explicit consent from a human being.

What I am opposed to is machine intent. Artificial Intelligence is frequently goal-oriented. Rarer is an AI that is goal-oriented and ethically bound. Rarer still is an AI that is goal-oriented with societal goals as part of its calculus. Since an AI cannot be “imprisoned”, at least not yet, and it remains to be seen whether an AI can actually be turned “off” (see the YouTube video below to better understand the AI “off button problem”), this ban is necessary.

https://youtu.be/3TYT1QfdfsM

So, without the Hollywood fanfare of Slaughterbots, I would like to suggest that the downside risk associated with an autonomous entity controlling a significant pool of capital is just as devastating as the potential of Slaughterbots. Some examples of the risk:

  1. Creation of monopolies and subsequent price gouging on wants and needs, even on things as basic as water rights
  2. Consumption of excess power
  3. Manipulation of markets and asset prices
  4. Domination of the legal system, fighting restrictions and legal process
  5. Influence over policy, elections, behavior and thought

I understand that many readers have trouble assigning downside risk to AI and autonomous systems. They see the good that is being done. They appreciate the benefits that AI and automation can bring to society, as do I. My point of view is a simple one: if we can eliminate and control the downside risk, even if the probability of that risk is low, then our societal outcomes are much better off.

The argument here is fairly simple. Humans can be put in jail. Humans can have their access to funds cut off by freezing bank accounts or limiting access to the internet and communications devices. We cannot jail an AI, and we may be unable to disconnect it from the network when necessary. So the best method to limit growth and rogue behavior is to keep the machine from controlling capital at the outset. I suggest the outright banning of direct and singular ownership of currency and tradeable assets by machines. All capital must have a beneficial human owner. Even today, implementing this policy would have an immediate impact on the way that AI and Automated systems are implemented. These changes would make us safer and reduce the risk that an AI might abuse currency/capital as described above.

The suggested policy already faces challenges today that must be addressed. First, cryptocurrencies may already be beyond an edict such as this, as there is no ability to freeze cryptos. Second, most global legal systems allow for corporate entities, such as corporations, trusts, foundations and partnerships, which HIDE or at least obfuscate the beneficial owners. See the Guidance on Transparency and Beneficial Ownership from the Financial Action Task Force in 2014, which highlights the issue:

  1. Corporate vehicles — such as companies, trusts, foundations, partnerships, and other types of legal persons and arrangements — conduct a wide variety of commercial and entrepreneurial activities. However, despite the essential and legitimate role that corporate vehicles play in the global economy, under certain conditions, they have been misused for illicit purposes, including money laundering (ML), bribery and corruption, insider dealings, tax fraud, terrorist financing (TF), and other illegal activities. This is because, for criminals trying to circumvent anti-money laundering (AML) and counter-terrorist financing (CFT) measures, corporate vehicles are an attractive way to disguise and convert the proceeds of crime before introducing them into the financial system.

The law that I suggest implementing is not a far leap from the existing AML and Know Your Customer (KYC) compliance that most developed nations enforce. In fact, that would be the mechanism of implementation: creating a legal requirement for human beneficial ownership while excluding machine ownership. Customer Due Diligence and KYC rules would include disclosures on human beneficial ownership and applications of AI/Automation.
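
Here is a sketch of how an onboarding system might enforce that requirement. The structure and names are hypothetical illustrations, not drawn from any existing KYC platform:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BeneficialOwner:
    name: str
    is_human: bool        # natural person, as opposed to a machine/autonomous system
    ownership_pct: float  # share of the account's capital

def beneficial_ownership_check(owners: List[BeneficialOwner]) -> bool:
    """Pass only if 100% of the capital traces to human beneficial owners.

    Encodes the proposed rule: every pool of capital must be tied to a
    human decision maker; machine ownership of any share fails the check.
    """
    total = sum(o.ownership_pct for o in owners)
    if abs(total - 100.0) > 1e-6:
        return False      # ownership must be fully accounted for
    return all(o.is_human for o in owners)

# An account partly owned by an autonomous trading agent is rejected.
owners = [BeneficialOwner("Jane Doe", True, 60.0),
          BeneficialOwner("TradingBot-7", False, 40.0)]
print(beneficial_ownership_check(owners))  # False
```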

Unfortunately, creating this ban is only a legal measure. There will come a time when criminals use AI and Automation to deploy currency/capital nefariously. This rule will need to be in place to allow law enforcement to attempt to shut it down before it becomes harmful. Think about the Flash Crash of 2010, which eliminated $1 trillion of stock market value in little more than a few minutes. This is an example of the potential damage that could be inflicted on a market or prices, and, believe it or not, the Flash Crash of 2010 is probably a small, gentle example of the downside risk.

One might ask who would be opposed to such a ban. There is a short and easy answer to that. It’s Sophia, the robot created by Hanson Robotics (and the small faction of people already advocating for machine rights), which was recently granted citizenship in Saudi Arabia and now “claims” to want to have a child. If a machine is allowed to control currency, with itself as a beneficial owner, then robots should be paid a wage for labor. Are we prepared to hand over all human rights to machines? Why are we building these machines at all if we don’t benefit from them and they exist on their own accord? Why make the investment?

World’s first robot ‘citizen’ says she wants to start a family (www.news.com.au)

Implementing this rule into KYC and AML rules is a good start to controlling the downside risk from autonomous systems. It is virtually harmless to implement today. If and when machines are sentient, and we are discussing granting them rights, we can revisit the discussion. For now, however, we are talking about systems built by humanity to serve humanity. Part of serving humanity is to control and eliminate downside risk. Machines do not need the ability to control their own capital/currency. This is easy, so let’s get it done.

Robot Tax (No), Sovereign Wealth Fund (Yes)


Let’s talk for a minute about a Robot Tax. Recently, it’s been a quick-fix concept espoused by Bill Gates and others. In San Francisco and South Korea, there is already talk of implementation. It’s the wrong idea, unless the United States truly wants to hinder robot implementation, in which case let’s hope the rest of the world agrees to the exact same policy at the exact same moment.

The idea of a “robot tax” comes from the fear of Technological Unemployment: machines replacing human workers at many jobs, without an offsetting number of replacement jobs. I find this fear to be very reasonable, as I have argued here: https://medium.com/@ForHumanity_Org/future-of-work-why-are-we-struggling-to-understand-the-disruption-e7661a2a0a81.

I also recognize that not everyone thinks that Technological Unemployment is a problem. However, there is one thing I do know. If Tech Unemployment is not a problem, then we will end up in a good place with minimal risk and an economy that has grown more efficient, creating many high-paying jobs. If, however, Tech Unemployment does come to pass, then we will be faced with some major societal challenges. Upside gain = POSITIVE, downside risk = VERY NEGATIVE. So, with my risk management hat on, this blog post attempts to deal with the downside risk of Technological Unemployment by tackling the issue of a “robot tax” and introducing an alternative solution - a United States Sovereign Wealth Fund (USSWF).

To begin, let’s examine some key assumptions for how Technological Unemployment may occur, recognizing that this is not a certain outcome. There are those who argue that it won’t happen at all.

  1. Technological unemployment will be a process that takes years if not decades.
  2. We don’t know how and when people will become displaced. This is true at a company level, a sector level and across the economy as a whole, which I discussed here https://medium.com/@ForHumanity_Org/the-process-of-technological-unemployment-how-will-it-happen-489e2f8b037c
  3. We don’t know what new jobs will be created
  4. There is an implicit concern about rising income inequality as the owners of capital reap the rewards of automation, while labor receives a decreasing share — approaching zero
  5. There is a belief that technological unemployment will be a byproduct of significant growth in economic production and subsequent wealth attributable to the implementation of AI and Automation.

Technological Unemployment, as discussed in detail by Oxford (2013), PwC (2016) and McKinsey (2017), and referenced directly by the White House report on AI and Automation (October 2016), is estimated to result in as much as 30–40% unemployment. Many people believe that this possibility is real, and as a result they are concerned about supporting the unemployed. This is the genesis of the “robot tax” concept. Here are some concerns about the idea:

  1. It picks winners and losers based on how easily things are automated
  2. It creates a huge competitive advantage for countries/jurisdictions that do not tax robots. Most companies competing in AI and Automation are global in nature. Machine implementation will simply be shifted to other jurisdictions where the tax impact is minimized, and so will the wealth creation associated with that machine implementation. Apple is a perfect example, with nearly $250 billion in cash overseas. Nearly all of that offshore cash is a function of tax issues. This point is not debatable; it is a corporate responsibility to minimize taxes legally, just as Apple has done. There is significant tangible history of corporations shifting production to jurisdictions where the tax is favorable. A robot tax punishes companies for innovating and will drive wealth creation out of the United States.
  3. Bureaucracy: companies rarely “flip the switch to automation” on a Friday and ask a set of employees to stop coming in on Monday. Therefore, identifying human replacement by machines will be difficult, and policing a “robot tax” implementation would require significant manpower. If you are like me, when you see something is difficult to police, you probably also imagine a large bureaucracy trying to police it. We already have the IRS; do we really need an IRTS (Internal Robot Tax Service) too? I suspect that is something most people would like to avoid. Taxes are difficult to implement on all companies, public and private.
  4. Timing and measurement mismatch — Another problem with the “robot tax” concept is timing. For example, Company A automates Person A out of a job and therefore is subject to a tax. So Company A has its profit and capital reduced to pay the tax; but if this happens tomorrow, it is reasonably likely that Person A finds another job. Now the tax has reduced the capital allocation capability of the growing company and wasn’t put to good use supporting Person A, who didn’t require the funds. Instead, it sits with the government, which is surely an inefficient allocation of capital.

On top of all of this, no one has laid out the entire structure of a “robot tax”. Some have suggested banning the replacement of jobs by machines. Others have suggested a tax without much detail. In South Korea, where reports came out that the first “robot tax” had been enacted, it turned out to be all hype. The actual implementation was a decrease in tax INCENTIVES for the makers of automation.

In the end, I suspect a Robot Tax is simply a reflexive action suggested by people who are rightfully concerned but have no other alternatives to address their concern. Ideally this paper and the USSWF address those issues and prove to be a better tool to tackle the challenges.

Look, I get the theory: if you want to grow your company and make more profits for yourself by automating, let’s tax those profits because you are removing jobs from the workforce. However, a tax is punitive and redistributional. Why not participate with the company in its wealth creation, on behalf of that displaced worker? Practically speaking, if a worker were laid off by Google, Amazon or Apple but compensated in stock for their termination, there is a decent chance that the capital gains on the stock would offset the lost wages.

If you believe the concerns about Technological Unemployment and that a significant amount of work will be replaced by automation, we do need to prepare for higher, permanent unemployment. So what should be done? The answer, from my perspective, is the series of solutions listed below (of course, this is no small challenge and requires many policies to try to mitigate the risks). This list amounts to a comprehensive AI, Automation and Tech policy for the United States (where I have covered them previously, I will show the links), but for the rest of the piece, we will concentrate on point #5:

  1. Lifetime learning to ensure we have the most nimble and current workforce available https://medium.com/@ForHumanity_Org/lifetime-learning-a-necessity-already-2e96807251db
  2. A laissez-faire, regulatory environment (as discussed by Andrea O’Sullivan and others) that encourages leadership in the area of AI and Automation, (one could argue, this exists today, but is unlikely to remain so lenient, which may eventually hinder the expansion) https://www.technologyreview.com/s/609132/dont-let-regulators-ruin-ai/
  3. Independent, market-driven audit oversight on key inputs to AI and Automation, such as ethics, standards, safety, security, bias and privacy. A model not dissimilar to credit ratings https://medium.com/all-technology-feeds/ai-safety-the-concept-of-independent-audit-370bb45c01d
  4. A plan for shifting current welfare and Social Security into a Universal Basic Income https://medium.com/@ForHumanity_Org/universal-basic-income-a-working-road-map-66d4ccd8a817, coupled with an increase in taxes on the wealthy to make the UBI funding budget neutral. In the future, NOT TODAY.
  5. Finally, a Sovereign Wealth Fund for the United States (USSWF) designed to maximize the benefits of the growth and expansion on behalf of all US citizens equally.

Since this post is introducing the Sovereign Wealth Fund as a means toward maximizing the benefit of the comprehensive Technology Policy listed above, we will remain focused on it; the other elements of the policy remain open for debate elsewhere.

Returning to the idea of Technological Unemployment, we can all agree it would be a process and would take some time to occur. During that process, there are a few things that the United States should prefer to have happen. Here is a list of those positive outcomes:

  1. We want the United States to capture/create the maximum amount of the wealth creation from this process
  2. We want to be in a position to protect displaced workers immediately when they are no longer relevant to the workforce
  3. With regard to cutting-edge skills, we want citizens of the United States to be in a position to possess the requisite skills for new-age work.
  4. We want to begin to build a warchest of resources for the time when significant unemployment becomes more permanent (Sovereign Wealth Fund)
  5. As a country, we want to be rewarded for creating a favorable climate for automation and artificial intelligence

So, with those goals in mind, we can examine how a sovereign wealth fund can help to achieve them and maximize benefits to all citizens from the expected increase in AI and automation.

To be clear, the funding for a US Sovereign Wealth Fund (USSWF) is effectively a tax, so this is somewhat of a semantic argument, but the asset-based funding and goal-orientation of the USSWF are what will differentiate it from the current discussions about a “robot tax”.

Sovereign Wealth Fund implementation, some of the highlights:

  1. The USSWF immediately receives a 1% common stock issuance from every company in the S&P 500, followed by a 0.25% issuance each year for 16 years (see the sketch after this list).
  2. Every new company admitted into the S&P 500 must immediately issue this 1% common stock into the fund and begin the annual contributions
  3. All shares included in the fund are standard voting shares with liquidity available from the traditional markets trading the underlying company common stock
  4. The USSWF is overseen by the US Congress
  5. The USSWF is controlled by a panel of 50 commissioners, one from each state, appointed by the governor of each state. The commissioner must be from a political party that the governor is NOT a member of. These are attempts to depoliticize the body and to identify citizens to serve the country.
  6. The USSWF exists for the benefit of all Americans equally, with a primary concentration of providing welfare/basic income support to US citizens
  7. The USSWF is a lockbox and not part of the US Federal Government budget. The US Federal Government may appeal to the USSWF for funding associated with the mission of the USSWF
  8. Simplicity — implementation is easy, automatic and systematic
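
Here is a minimal sketch of that issuance schedule, using only the numbers in points 1 and 2; it confirms the 5% maximum stake referenced later in this piece:

```python
# Cumulative USSWF stake in each S&P 500 company under the proposed schedule:
# 1% of common stock at inception, plus 0.25% per year for 16 years.
initial_issuance = 1.0    # percent at inception
annual_issuance = 0.25    # percent per year
years = 16

stake = initial_issuance + annual_issuance * years
print(f"Stake after {years} years: {stake:.2f}%")    # 5.00%

# At Norway-like scale (see below), a 5% income stream on $192k of fund
# assets per person is about $9,600 per person per year.
print(f"${192_000 * 0.05:,.0f}")                     # $9,600
```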

Now, if this concept seems foreign to you, then you are clearly an American. Here is a list of the world’s largest Sovereign Wealth Funds, each over $100 billion.

[Table: the world’s largest sovereign wealth funds, each with over $100 billion in assets]

Notice that much of the oil-rich world has already created a Sovereign Wealth Fund. Norway’s fund is the gold standard, providing a capital asset value of $192k per person. At a 5% income stream, that is nearly a $10k payment per person with capital preservation. That payment can go a long way toward supporting a population. That is the concept we are trying to achieve with the USSWF: a bulwark against future challenges, such as technological unemployment. But instead of oil, or a single resource, the citizens of the US should benefit from the entirety of the economy. A genuine justification of our consumer-oriented mentality and laissez-faire regulatory environment.

As we try to capture the entirety of the US economy, we must find a liquid, tradeable proxy. I have suggested the S&P 500 as the right proxy for the US market, to avoid picking winners and losers directly. Automation and technological advances will happen in all industries and all markets. The best way to be comprehensive is to use a benchmark of the whole economy. The S&P 500 is widely regarded as the best proxy for the US large-capitalization economy. Constituents of the S&P 500 are the most liquid stocks in the world, which will minimize the market impact of USSWF ownership. By having participation in all companies, the USSWF is in a position to reap the benefits of US economic growth, as represented by the stock market, on behalf of the citizens of the US.

There is also the opportunity to fund the USSWF in additional ways, whether from oil royalties, carbon credits or other mutually beneficial societal resource dividends. These discussions are merited, but are beyond the scope of this paper today.

Some features and benefits of a USSWF:

  1. The initial funding is a tax on the wealthy (holders of stocks), which is consistent with a goal of redistribution and tempering the rising income inequality in the US. In some ways, it becomes a tax on foreign investors as well, which is an ancillary benefit.
  2. Keep capital in the hands of capital allocators and away from the government until needed
  3. It increases participation in the US economy as measured by the stock markets. All US citizens would have a share of the stock market, up from the estimated 52% who do today.
  4. It creates a dynamic pool of assets which has an opportunity to track the expected wealth creation resulting from AI and Automation. Therefore the USSWF is well aligned with the goal of rewarding citizens for the possibility of having their jobs displaced
  5. A Sovereign Wealth Fund is far from new and has many precedents both foreign and domestic (public pension funds) upon which to build a robust program designed to maximize the benefit to US Citizens
  6. Alignment instead of antagonistic taxation. If the company wins, the country wins and even the displaced workers win.
  7. Compound return. Because the USSWF will not have a current cashflow demand, it will be allowed to grow for a pre-determined period, maybe 20 years. This will also maximize the benefits available to all current and future US Citizens

Some areas of concern (these points will be discussed further below):

  1. Dilution. This is immediate dilution of the value of existing shareholders, and the equity markets are not likely to be happy about it.
  2. Commingling of corporate and government interests
  3. Reliance upon capitalism and equity market growth to fund future challenges. This is both a risk/reward area of concern and an ideological area of concern for some.
  4. Bureaucracy/control of the USSWF
  5. Increase in income inequality

Let’s tackle a few of these. Dilution will be a concern, and in a vacuum one could expect the stock market to make a significant move backwards on this announcement. However, one could argue that the establishment of a long-term, pro-automation policy and the avoidance of a “robot tax” might be beneficial enough to offset the expected dilution. Additionally, there is the benefit to the market as a whole from proactively addressing a pressing concern.

Commingling of government and corporate interests. These interests have been commingled since the formation of the country; however, the US, unlike most other countries, has had an active policy of not becoming a shareholder of corporations, and this would change. At the 5% max level, I don’t see government being excessively influential in a shareholders’ meeting. But it may be the implied result that is concerning. The implied result is that the government is now “pro-business” and thus must be anti-citizen or anti-worker. I recognize the concern, and this blends into point #3, reliance upon capitalism and equity markets. Unless we are prepared to change the nature of this country immediately, we are already firmly entrenched in capitalism. We ought to maximize its effect in this endeavor. Capitalism has an excellent track record of wealth creation, but sometimes at the expense of some who are left behind, whether through income inequality or lax worker safeguards. In the case of the USSWF, at least the worker is better positioned to profit from capitalism than she would be otherwise. It is my belief that any other solution is too far afield from the current wishes of the population.

Finally, bureaucracy and control. The USSWF involves a good deal of assets, and that means it is powerful. That will attract bureaucracy and politicians who will look to control the USSWF. A 50-person panel is more than sufficient expertise on hand to make quality, non-political decisions in the interest of the American people, especially if they are deemed fiduciaries by the legal definition. Any attempt to associate a large staff with the USSWF should be met with great skepticism, as the mandate is the operation of an index fund tied directly to the S&P 500. It is a pool of assets designed to represent the US economy, not to create a target return. Any return associated with the assets is a function of economic growth or broad stock market appreciation only.

Increase in income inequality. As a result of trying to maximize the United States’ share of growth due to automation, many of those rewards will accrue to the wealthy. This is a result of the reliance on capitalism and maximum capital allocation efficiency at the corporate level. This of course benefits the USSWF, but it benefits the owners of capital more (95% versus 5%) in the short run. To mitigate this problem, a crucial piece of the Tech, AI and Automation policy is a significant increase in taxes on the wealthy. Universal Basic Income, which in that scenario would be the only means by which the massive number of unemployed are able to survive, must be funded from somewhere. The USSWF would be in a position to provide a solid foundation for this payment, but it will be insufficient to meet the full demand. The owners of capital MUST recognize that the windfalls they reap from a laissez-faire regulatory and tax regime will be called upon to benefit displaced workers.

No plan is perfect, but the USSWF is better than a “robot tax”, whatever that might turn out to be. The key to success with the USSWF or any other solution aimed at protecting US citizens from technological unemployment is to participate in the wealth creation. Other ideas along those lines would be welcome additions to the discussion. I hope that you will think deeply on these points, challenge the theories, make suggestions and further this debate.

Universal Basic Income Isn't Perfect - It's a Necessity Because of the Future of Work

The Roosevelt Institute recently did a study, which is being quoted everywhere, claiming that a $12,000 per annum Universal Basic Income (UBI) would increase GDP by 12.56% above baseline over the course of 8 years, after which the economy would return to baseline. Of course, the part that no one mentions is that this transfer payment is DEBT-FINANCED by the US Government. Using the same model, when taxes were used to pay for the UBI, there was no effect. The authors then adapted the model to account for distributional effects, but they made no similar adjustment on the investment side, because the only impact they examined was the marginal impact on consumption, not the impact on investment.

So let me get this straight: if the US Government borrows a ton of money and gives it to people to spend, then the economy will be stimulated… I’m shocked! ANY program where the government borrows that much money will project to stimulate the economy, whether it is a tax cut for the rich (because these days only about 49% of the population pays income taxes… so by definition), a huge infrastructure program, a huge energy program or a huge defense program. Money into the system creates GDP. One can debate its effectiveness, but I am not interested in that. The point here is that proponents of UBI need to stop citing this study. There is no way that a UBI program will be done in this country with debt financing… zero chance.
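
To see why the borrowing, and not UBI itself, drives the projected stimulus, here is a toy first-round calculation. It is a sketch under invented assumptions (the recipient count and the propensities to consume are mine), not a reproduction of the study’s macroeconomic model.

    # Toy first-round GDP effect of a $12,000 UBI under two financing
    # schemes. All parameters are illustrative assumptions, not the
    # Roosevelt Institute study's inputs.

    ubi_per_person = 12_000          # dollars per year
    recipients = 200_000_000         # assumed number of adult recipients
    transfer = ubi_per_person * recipients   # total dollars moved

    mpc_recipients = 0.9   # assumed share of each UBI dollar that is spent
    mpc_taxpayers = 0.9    # assumed spending/investment cut per tax dollar

    # Debt-financed: new money enters the economy and nothing is removed.
    debt_financed = transfer * mpc_recipients

    # Tax-financed: recipients' new spending is offset by taxpayers'
    # reduced spending and investment, so the effect roughly nets to zero.
    tax_financed = transfer * (mpc_recipients - mpc_taxpayers)

    print(f"Transfer size:       ${transfer / 1e12:.1f} trillion")
    print(f"Debt-financed boost: ${debt_financed / 1e12:.2f} trillion")
    print(f"Tax-financed boost:  ${tax_financed / 1e12:.2f} trillion")

The borrowing term does all the work; swap in any other debt-financed program of the same size and the first-round arithmetic is identical.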

UBI will need to figure out how to represent itself. Here are some key questions for UBI; if we can all embrace the suggested answers, it will go a long way toward keeping the story focused and making it palatable for people who oppose UBI.

  1. Is it inherently stimulative (no)?
  2. Is it redistribution (yes)?
  3. Is it a possible answer for a world where the future of work is in doubt (yes)?
  4. Does it attempt to achieve some basic societal goals of food, clothing and shelter for all people (yes)?
  5. Is it possible that UBI has a negative impact on GDP (yes)?
  6. Does it matter if UBI may negatively impact GDP if it achieves the societal goals and we have growth due to technology (no)?
  7. Should we restrict UBI, like food stamps, so it can only be used for the intended purposes (no)?
  8. Does UBI have knock-on psychological positive effects (unknown)?

Let’s tackle a few of these points, starting with #1. In the Roosevelt Institute study, the stimulative effect was due to the borrowing, not the concept of UBI, so we should stop quoting GDP stimulus ASAP. Now, if UBI through redistribution is considered, there are merits to both sides of the argument, and I doubt that our models are sensitive enough to discern a positive or negative effect. First, if the wealthy are saving their money and NOT deploying it productively via capital investments, then taking wealth from them in order to give it to the poor makes brilliant sense: the poor will use it to buy goods, which would increase GDP through consumption. However, the evidence is probably the opposite, meaning that this capital has had a knock-on effect of stimulating more GDP growth than the likely effect of purchases of food, clothing and shelter. On top of that, there is no certainty that money for the poor will always be used productively. We have an opioid problem, gambling and other non-productive uses of UBI that should be concerning. If a portion of the UBI flows into unproductive spending, then GDP takes a second hit: one from the negative redistribution of capital and a second from unproductive expenditures. All in, I think UBI would be fortunate to break even on GDP impact. Thus we should avoid suggesting it is stimulative, since that likely only hurts credibility.

#2 Is it redistribution? Of course it is. UBI should embrace this concept, especially in light of the huge increases in income inequality around the globe.

[Graph omitted: growth in global income inequality]

This is a disgusting graph. The world should be ashamed to allow this to happen. But, but, but, we have three choices when it comes to redistribution: don’t do it, do it partially or do it completely. There is an argument for don’t do it. It is as follows. The overall wealth of the world has increased dramatically, and thus the percentage of people living in poverty has fallen dramatically. So there is a robust argument that we should continue to allow the capital to flow to the wealthy, allow them to grow global GDP and allow trickle-down improvements to the poor. It has worked, as you can see in the chart below showing the number of people living in poverty (using a few different definitions of poverty).

[Chart omitted: number of people living in poverty, under several definitions of poverty]

But it has not worked fast enough for many, which is what humanitarians argue. They may be right; I can’t be certain. Which leads us to our second choice, partial redistribution. Allow capital to flow and the rich to get richer (because it creates growth that we ALL can participate in), but take a portion, enough to buy the poor up to the poverty line for starters, covering food, clothing and shelter. Essentially, trying to have our cake and eat it too: leave the majority of the capital where it has proven to be productive, but earn the societal good of eradicating poverty. Sounds good, but it is not likely to be that simple. Taking a meaningful portion of capital from the wealthy allocators will have a leveraged downside effect on the economy, and we simply cannot know how negative it will be. We have already discussed the challenge of maximizing the UBI towards 100% productive usage on food, clothing and shelter; failure to achieve 100% productive usage reduces the value to GDP commensurately. Our final choice is complete redistribution, which is both impractical and ineffective and not worth writing more than a sentence about. Humans crave power and will always strive to achieve, thus we will have income inequality. The point of this entire discussion is that we need to try to find the optimal amount of redistribution.

#3 The reason that ForHumanity supports UBI is that we anticipate machines replacing most human input into work. In a world where the available jobs are substantially fewer than the number of people, we must create a society that supports all of its members with the basic necessities; otherwise anarchy and revolution become the only choice. As much as humans crave power, they crave survival more. They will do what it takes to survive, even if that includes setting aside civil society. Is UBI the only answer to a world with far fewer jobs than people? ForHumanity remains open to other suggestions.

#4 UBI can target transfer payments to individuals to cover food, clothing and shelter. To be honest, that should be good enough. It is a wonderful goal for a society to ensure that all citizens have the basic necessities of life. Eradicating hunger and homelessness should be priorities; that is hardly debatable. It only becomes a debate when two things happen: when government gets involved to “administer” the goal and when those with the wealth are taxed for it. One way to begin the discussion is with the latter. Silicon Valley billionaires can advocate for the power of UBI, but they ought to volunteer their own wealth to pay for it in the same breath. Until they do, it will remain an elusive goal. With respect to government, it becomes a political tool: “how to implement”, “fund for administration”, “who gets what and where and when”. When you examine Social Security, it had a similar mission and altruistic goal, and now it is a political nightmare. Attempting to pass the most logical of adjustments, like raising the qualification age (associated with the unanticipated rise in life expectancy), is impossible simply because politics gets in the way. So, to attempt to deal with the political ramifications, I tried to tackle some political reform as well; see the link below.

Universal Basic Income — A Working Road Map
“As I recently discussed, there are a number of challenges to making Universal Basic Income (UBI) a real solution…” (medium.com)

I approached UBI as a transfer payment offered in exchange for a 5-year commitment to government service that would be required of all citizens between the ages of 18 and 30. The brief version of the theory is that this would “justify” the payment for life to the UBI detractors who worry that money for “free” is a bad idea for a variety of reasons. Second, it substantially helps to pay for UBI by lowering the cost of running the government at all levels. In the end, government is needed to administer the policy, and politics will make it difficult to implement well.

#5 Could UBI have a negative impact on GDP? I believe that it would, based on the reduction-of-poverty argument already presented. While I am certain that a large percentage of the UBI payment would be consumed specifically on food, clothing and shelter, those are all consumption items and provide a one-for-one impact on GDP. Whereas capital, treated as investment capital, can be leveraged into investments that result in the expansion of profit and wealth. I’ll give you an example. Suppose I receive $1000, taken from a wealthy individual’s $1000 of savings, and I buy $1000 of food. I am fed, and the food companies and grocery stores have an additional $1000; the wealthy individual has $1000 less of savings and therefore makes $1000 less of investment in something. Now suppose instead the wealthy individual invests that $1000 in a widget company, which pays an employee $800 and invests $200 in equipment to sell more goods and services, such that the company produces $2000 worth of product. The investment has grown wealth far more than our transfer payment is capable of. This is the argument of capitalism, and the numbers have borne out the argument.
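
The same example, reduced to code. This is just the text’s own arithmetic restated, not a rigorous model of GDP accounting.

    # The $1000 example above, restated. Figures are the illustrative
    # numbers from the text, not estimates from any economic model.

    transfer = 1000  # dollars moved from a wealthy saver to a UBI recipient

    # Path 1: the recipient consumes it all (food, clothing, shelter).
    consumption_impact = transfer  # one-for-one impact on GDP

    # Path 2: the saver keeps the $1000 and invests it in a widget company.
    wages = 800       # paid to an employee
    equipment = 200   # invested in productive capacity
    output = 2000     # value of product the company then produces

    print(f"Consumption path adds ${consumption_impact} to GDP")
    print(f"Investment path supports ${output} of production "
          f"(${wages} wages + ${equipment} equipment -> ${output} output)")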

#6 Does it matter if the economy is negatively impacted? Well, that depends: have automation and AI increased GDP by more? How negative is the impact? Remember, we have earned a series of societal goods by implementing our UBI payment, especially if the future of work leaves us with many who require their UBI to survive. We also may create enough wealth to achieve our goals. This is more a question of timing than anything else. But a small hit to GDP is worth it for the good that UBI can do.

#7 Should UBI be portioned out, similar to Food Stamps (SNAP), in order to prevent misuse? As covered in the article below, misuse of Food Stamps has generally been shown to be a fallacy. The vast majority of the program works effectively (less than a 2% fraud rate) and has been used for food. While it is not unreasonable to believe that misuse might increase when a UBI payment is larger, the concern may not be great enough to merit forcibly restricting its use. Avoiding a large bureaucracy designed to police the implementation of the policy is probably more valuable.

The Very Short History of Food Stamp Fraud in America
“The reform of the United States’ approximately $1 trillion in welfare programs is a perpetual subject for lawmakers to…” (time.com)

#8 Does UBI have knock-on positive psychological effects? Here I think there is some possibility. There are many studies being done now, in microcosm, to try to identify the effects. But I am afraid that the “in microcosm” and non-permanent basis of all of these studies is highly distortive; most of the participants are working while receiving the payment. When UBI is the only choice, people will react very differently, and when the future of work means limited job opportunities, reactions will differ again. I suggest that we temper the “UBI makes everyone feel great” feedback. UBI is a necessity for a future where work is scarce and people may struggle to meet the basic necessities of life.

UBI may be a great next step in our system of basic welfare. It may represent the best way for the wealthy to support the world around them and to ensure that the basic necessities of life are available to all. But I caution proponents of UBI to avoid overreaching on the “merits” of UBI; in the end, I think it damages the credibility of a potentially necessary program.

AI Safety - The Concept of Independent Audit

For months I have regularly tweeted a response to my colleagues in the AI Safety community about being a supporter of independent audit, but recently it has become clear to me that I have insufficiently explained how this works and why it is so powerful. This blog will attempt to do that. Unfortunately, it is likely to be longer than my typical blog, so apologies in advance.

Independent audit, defined: ForHumanity, or a similar entity that exists not for profit but for the benefit of humanity, would conduct detailed, transparent and iterative audits of all developers of AI. Our audit would review the following SIX ELEMENTS OF SAFEAI on behalf of humanity:

  1. Control — this is an analysis of the on/off-switch problem that plagues AI. Can the AI be controlled by its human operators? Today it is easier than it will be tomorrow, as AI is given more access and more autonomy.
  2. Safety — can the AI system harm humanity? This is a broader analytic designed to examine the protocols by which the AI will manage its behavior. Has it been programmed to avoid human loss at all costs? Will it minimize human loss if there is no other choice? Has this concept even been considered?
  3. Ethics/Standards — IEEE’s Ethically Aligned Design is laying out a framework that may be adopted by AI developers to represent best practices on ethics and standards of operation. There are 11 subgroups designing practical standards for their specific areas of expertise (the P7000 groups). ForHumanity’s audit would look to “enforce” these standards.
  4. Privacy — are global best practices being followed by the company’s AI? Today, to the best of my knowledge, GDPR in Europe is the gold standard of privacy regulation and would be the model that we would audit on.
  5. Cybersecurity — regarding all human data and interactions with the company’s AI, are the security protocols consistent with industry best practices? Are users safe? If something fails, what can be done about it?
  6. Bias — have the data sets and algorithms been tested to identify bias? Is the bias being corrected for? If not, why not? AI should not result in classes of people being excluded from fair and respectful treatment.

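As a thought experiment, the six elements could be recorded in a simple scorecard like the Python sketch below. The 0-to-5 scale, the pass threshold and the product name are hypothetical; ForHumanity has not published a scoring methodology here.

    # Hypothetical SAFEAI audit scorecard. The six criteria come from the
    # list above; the 0-5 scale and pass threshold are invented for
    # illustration and are not an actual audit methodology.

    from dataclasses import dataclass

    CRITERIA = ["control", "safety", "ethics", "privacy", "cybersecurity", "bias"]

    @dataclass
    class AuditResult:
        product: str
        scores: dict  # criterion -> score from 0 (absent) to 5 (best practice)

        def passes(self, threshold: int = 3) -> bool:
            # Every element must meet the threshold; a single weak element
            # (e.g. untested bias) fails the whole audit.
            return all(self.scores.get(c, 0) >= threshold for c in CRITERIA)

    # A fictional product that is strong everywhere except bias testing.
    result = AuditResult(
        product="ExampleAssistant",
        scores={"control": 4, "safety": 4, "ethics": 3,
                "privacy": 5, "cybersecurity": 4, "bias": 2},
    )
    print(result.passes())  # False: the bias element falls short
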
The criteria that are analyzed are important, but they are not the most important aspect of independent audit. Market acceptance and market demand are the key to making independent audit work. Here’s the business case.

We have a well-established public and private debt market, well over $75 trillion US dollars globally. One of the key driving forces behind the success of that debt market is independent audit. Ratings agencies like Moody’s, Fitch and Standard & Poor’s have for decades provided the marketplace with debt ratings. Regardless of how you feel about ratings agencies or their mandate, one thing is certain: they have provided a reasonable sense of the riskiness of debt. They have allowed a marketplace to be liquid and to thrive. Companies (issuers) are willing to be rated, and investors (buyers) rely upon the ratings for a portion of their investment decision. It is a system with a long track record of success. Here are some of the features of the ratings-market model:

  1. Issuers of debt find it very difficult to issue bonds without a rating
  2. It is a for-profit business
  3. There are few suppliers of ratings, which is driven by market acceptance. Many providers of ratings would dilute their value and create a least-common-denominator approach; issuers would seek the easiest way to get the highest rating.
  4. Investors rely upon those ratings for a portion of their decision making process
  5. Companies provide either legally mandated or requested transparency into their financials for a “proper” assessment of risk
  6. There is an appeals process for companies who feel they are not given a fair rating
  7. The revenue stream from creating ratings allows the ratings agencies to grow and rate more and more debt

Now I would like to extrapolate the credit-rating model into an AI safety ratings model to highlight how I believe this can work. However, before I do that, there is one key feature of the ratings-agency model that MUST exist for it to work: the marketplace MUST demand it. For example, if an independent audit were conducted on the Amazon Alexa (this has NOT happened to date) and it failed to pass the audit, or was given a low rating because Amazon had failed some or all of the SIX ELEMENTS OF SAFEAI, then you, the consumer, would have to stop buying it. When the marketplace decides that these safety elements of AI are important, that is when we will begin to see AI safety implemented by companies.

That is not to say that these companies and products are NOT making AI safety a priority today. We simply do not know. From my work, there are many cases where they are not; however, without independent audit, we cannot know where the deficiencies lie. We also can’t highlight the successes. For all I know, Amazon Alexa would perfectly pass the SIX ELEMENTS OF SAFEAI today. But until we have that transparency, the world will not know.

That is why independent audit is so important. Creating good products safely is a lot harder than just creating good products. When companies know they will be scrutinized, they behave better — that is a fact. No company will want to have a bad rating published about it or its product. It is bad publicity. It could ruin their business, and that is the key for humanity. AI safety MUST become a priority in the buying decision for consumers and business implementations alike.

Now, a fair criticism of independent audit is the “how” part, and I concur wholeheartedly, but it shouldn’t stop us from starting the process. The first credit rating would have been a train wreck of a process compared with the analysis that is conducted by today’s analysts. So it will be now, but the most important part of the process is the “intent” to be audited and the “intent” to provide SAFEAI. We won’t get it perfectly right the first time, nor will we get it right every time, but we will make the whole process a lot better with transparency and effective oversight.

Some will argue for government regulation (see Elon Musk), but AI and the work being done by global corporations have already outstripped national boundaries. It would be far easier for an AI developer to avoid scrutiny that is nationally focused than a process that is transnational and market-driven. Below I have created a list of reasons that market-driven regulation, which this amounts to, is far superior to government-based regulation:

  1. Avoids the tax haven problem associated with different rules in different jurisdictions, which simply shifts development to the easiest location
  2. Avoids government involvement, which frequently has been sub-optimal
  3. Allows for government-based AI projects to be rated — a huge benefit regarding autonomous weaponry
  4. Tackles the problem from a global perspective, which is how many of the AI developers already operate
  5. Market driven standards can be applied rather than bureaucracy and politics using the rules as tools to maintain their power
  6. Neutrality

Now, how will this all work? It’s a huge endeavor and it won’t happen overnight, but I suspect that anyone taking the time to properly consider this will realize the merits of the approach and the importance of the general issue. It is something we MUST do. So here’s the process I suggest:

  1. Funded - we have to decide that this is important and begin to fund the audit capabilities of ForHumanity or a like-minded organization
  2. Willingness - some smart company or companies must submit to independent audit, recognizing that today, it is a sausage making endeavor, but one that they and ForHumanity are undertaking together to build this process and to benefit both humanity and their product for achieving SAFEAI
  3. Market acceptance - this is a brand-recognition exercise. We must grow awareness of the issues around AI safety and have consumers begin to demand SAFEAI
  4. Revenue positive, Licensing - once the first audit is conducted, the company and product will be issued the SAFEAI logo. Associated with this logo is a small per-product licensing fee (cents) payable to ForHumanity or a like organization. This fee allows us to expand our efforts and to audit more and more organizations. It amounts to the corporate fee payable for the benefits to humanity. (Yes, I know it is likely to be passed on to the consumer.) A rough illustration of the arithmetic follows this list.
  5. Expansionary - this process will have to be repeated over and over and over again until the transparency and audit process become pervasive. Then we will know when products are rogue and out of compliance by choice, not from neglect or the newness of the process
  6. Refining and iterative - this MUST be a dynamic process that is constantly scouring the world for best practices, making them known to the marketplace and allowing for implementation and upgrade. This process should be collaborative with the company being audited in order to accomplish the end goal of AI safety
  7. Transparent - companies must be transparent in their dealings
  8. Opaque - the rating cannot be transparent about the shortcomings, in order to protect the company and the buyers of the product who still choose to purchase and use it. It is sufficient to know that there is a deficiency somewhere; there is no need to make that deficiency public. It will remain between ForHumanity and the company itself. Ideally, the company learns of its deficiency and immediately aims to remedy the issue
  9. Dynamic - this team cannot be a bureaucratic team; it must be composed of some of the most motivated and intelligent people from around the world, determined to deliver AI safety to the world at large. It must be a passion, it must be a dedication. It will require great knowledge and integrity
  10. Action-oriented - I am a doer; this organization should be about auditing, not about discussing. Where appropriate and obvious, as in the case of IEEE’s EAD, adoption should supersede new discussions. Take advantage of the great work being done already by the AI safety crowd
  11. And it has to be more than me - As I have written these words and considered these thoughts, it is always with the idea that it is impossible for one person, or even a small group to have a handle on “best-practices”. This will require larger teams of people from many nationalities, from many industries, from many belief systems to create the balance and collaborative environment that can achieve something vitally important to all of humanity.
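
Here is a rough feel for the licensing arithmetic in point 4. The fee and unit count are invented assumptions; the point is only that a few cents per unit scales with adoption.

    # Back-of-envelope licensing revenue from point 4 above. The fee and
    # sales figures are invented assumptions, purely for scale.

    fee_per_unit = 0.05        # dollars; "cents" per certified product sold
    units_sold = 50_000_000    # assumed annual unit sales of one product

    annual_revenue = fee_per_unit * units_sold
    print(f"One certified product: ${annual_revenue:,.0f} per year")
    # At roughly $2.5 million per popular product, a handful of SAFEAI
    # certifications could fund a growing audit effort.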

Please consider helping me. You don’t have to agree with every thought. The process will change daily, the goalposts will change daily, “best practices” will change daily. You only have to agree on two things:

  1. That this is vitally important
  2. That independent audit is the best way to achieve our cumulative goals

If you can agree on these two simple premises, then join with me to make these things happen. That requires that you like this blog post, that you share it with friends and that you tell them it is important to read and consider. It requires you to reconsider how you review and buy AI-associated products. It requires you to take some action. But if you do these things, humanity will benefit in the long run, and I believe that is worth it.