Artificial Intelligence

Technology is an Autocracy - and the Risk from Externalities is Growing

I suspect this is not an idea that many have considered. To date, as a society we have not really appreciated how technological change occurs, and we have rarely considered the governance of new change. A scientific discovery or advancement, like all other inventions, is not accomplished based upon the will of the majority. There is no vote, and there is no consideration for society at-large (externalities). Rarely is the downside risk considered. One person sees a problem through their own personal lens and aims to fix it. That is entrepreneurial, the foundation upon which much of Western Capitalism is built. It is also authoritarian: one person, one rule, and little or no accountability. Scary when you think about it. When you combine this “process” and lack of control with our species’ other great skill, “problem solving”, you create technological innovation and advancement with a momentum that feels unstoppable. In fact, if you even “suggest” (which I am not) halting technological progress, you get one of two responses: “Luddite” or “You can’t”. That is how inexorable society believes technological change to be.

So let me explain this in more detail: all technological advancement is about someone, somewhere seeing a problem, a weakness, a difficulty, a challenge, and deciding to overcome it. It’s admirable and impressive. It’s also creating problems. As a species we poorly weigh all aspects of a decision, all the pros and all the cons. “Will we make money from this?”, “Is this advancement cool?”, and “Does this make my life easier?” are often the only inputs to our production/purchase decisions. There is a broad societal acceptance that “easier”, “freer” and “convenient” are universally beneficial. A simple counter-argument, of course, can be found in the gym. Your muscles would insist that “easier”, “freer” and “convenient” are not the best way for them to be strengthened or for stamina to be built. They require “challenge”, “difficulty” and “strain” in order to grow and improve.

So when a new advance comes along, if it makes our life easier, even in the smallest way, we snatch it up instantly. Take, for example, Alexa and Google Home. They are hugely successful products already, but was it really that difficult to type out our search query? Defenders will say things like, “now I can search while I am doing something else” or “this frees up my hands to be more productive”. And of course supporters point to the obvious assistance these devices provide to someone who is disabled and unable to type. But let’s examine the other side of the coin. What are the downside risks of such a product? They are usually not part of the sales process, I’m afraid, so you have to think carefully to compile a list. For example, has verbal search caused us to lose a different skill, like the unchallenged muscle, whereby the process of finding the solution was equally as important as the actual answer? And beyond that specific loss (the effort that may have strengthened the muscle, which in this case is the mind), what are some of the associated externalities of voice assistants? Let’s take a look at a few.

Is that search query worth having Amazon or Google record EVERY SINGLE WORD your family speaks within hearing distance? How about the fact that Amazon and Google now build personal profiles about your preferences based upon this information? Do you realize that this then limits your search results accordingly? Companies are taking CHOICE away from you and, surprisingly, people don’t seem to care; in fact some like the idea. Other externalities exist as well. Recently, an Alexa recorded the conversation of a family and sent it to random contacts.

An Amazon Echo recorded a family's conversation, then sent it to a random person in their contacts…
A family in Portland, Ore., received a nightmarish phone call two weeks ago. "Unplug your Alexa devices right now," a… (washingtonpost.com)

Or this externality?

Alexa and Siri Can Hear This Hidden Command. You Can't.
BERKELEY, Calif. - Many people have grown accustomed to talking to their smart devices, asking them to read a text… (nytimes.com)

Without getting too paranoid, this last one is downright creepy and dystopian, and its potential ramifications are catastrophic if carried to the extreme. I am certain that when you decided to purchase your voice assistant, none of these externalities were factored into your buying decision.

I use this as a real-life example because our evaluation of new technology is based upon bad math. Is it cool? Is it easier? Is it profitable? Can I afford it, or afford to be without it? Nowhere in that equation are the following:

1) Does it further eliminate privacy?

2) Does it make me lazy or more dependent upon a machine?

3) Does it keep me from using all aspects of my brain as much?

4) Does it allow me to interact with actual humans less?

5) What new and different risks are associated with this product?

6) If someone wants to do me harm, does this enable them to do so in an unforeseen way?

One of the chief arguments for technological advancement is that it frees us from mundane, routine tasks. To assume that those tasks have no value is ludicrous on its face, but more importantly, if we are “freed up”, what are we freed up to do? Usually, we are told it is high-minded things… be entrepreneurial… be poetic… be deep thinkers… solve bigger problems… spend more time with loved ones. To be honest, I don’t see an explosion of those endeavors. A further example of our bad math…

We adopt these technological advancements often without thought about the impact they may have on our psyche, our self-worth, our ambitions, or our safety. We make these choices because they are cool, or they make something easier. With purchase decisions this simple and “upside-only”, developers of technology have an easy time making their products attractive.

This blog post has only lightly touched on malice, but all of society should be concerned about malicious intent and technology’s impact on our susceptibility to it. The more connected we are, and the more dependent upon technology we are, the easier it is to cause mass harm. Perfect examples are recent virus attacks that spread to over 47 countries in a matter of a few hours. Sometimes the consequences are minor, such as locked-up computers or hassles like corrupted programs. Other times the hacker/criminal steals money or spies on you. Regardless of the magnitude of the impact, the ability of a criminal to “reach you” and “reach many” has been increased almost infinitely.

Here’s a final externality — how would you function without Internet? Not just for an hour or two, but permanently? How about without power? These are modern-day conveniences that are assumed to be permanent, but how permanent are they? Do you need to consider how to operate when they are gone? Our connectivity and reliance on power make us deeply dependent and woefully unprepared for these alternatives, even if the odds of occurrence are small. Hollywood frequently paints a grim picture of the dystopian existence that follows when these conveniences are taken away; our ancestors, however, existed quite nicely without them. Would you be prepared to survive or even thrive? The chances of these calamities are greater than zero…

Awareness of externalities is important, consideration of downside risk is crucial, and a willingness to realize that everything we do or even purchase has pros AND cons is essential. The more awareness of the “cons” you have, the better chance you have to mitigate those risks and reap a greater benefit from the upside of our technology choices. Most importantly, as a society, we will make better collective decisions on our technological progress and thwart the dangers of Technological Autocracy.

The Value of Data in the Digital Age

To date, data is being valued and priced by everyone EXCEPT the creators of that content — YOU. If we want to change that, many things need to happen, but it begins with taking the time to figure out how a person values their own data. So let’s dissect the question and see if we can shed some light on this idea.

First, this process is truly unique, because for the first time EVER, every single person on the planet has the potential to sell a product. Second, instead of being a “consuming” culture and propelling the corporate world forward, human beings are the ones in a position to profit. Third, everyone’s value judgement on data is unique, personal and unquestionable. Fourth, the opportunity for people to enrich themselves in a world of possible technological unemployment is tremendously important to the welfare of society. Finally, on top of the social ramifications, there are the obvious moral ramifications. As highlighted by the misuse of your data by corporations, this idea of individual data ownership is morally correct.

Now, we are not talking about ALL data. If you use Amazon’s services to buy something, all of the information that you create by searching, browsing and buying on the Amazon site also belongs to them. So while I can opine on the “value” of individual data, I am certain that the legal questions around data ownership are just beginning to be sorted out.

So with all of that in mind, let’s examine how each individual person may value the data that they can provide. Noteworthy to this discussion is that every individual has a different value function. Different people will value different things about their data, so it is vital that we appreciate that each person will price things uniquely.

However, the parameters that they weigh can be summarized in a few key variables. So let’s create a list and explain each one (a toy pricing sketch follows the list):

  1. Risk of Breach — Each data item, if it falls into the wrong hands, can cause harm. This is the risk of breach. This risk will be perceived differently based upon the reputation for safety of the data user, a perceived sense of associated insurance, and the context of the data itself. For example, let’s consider 4 tiers of risk of breach. Tier 1 (HARMLESS) — the contents of my dishwasher. This data might have value to someone and could not harm me if used nefariously. Tier 2 (LIKELY HARMLESS) — the contents of my refrigerator. Still likely unable to hurt me, but since people may know what I consume, one could possibly tamper with it. Tier 3 (HARMFUL — ONLY INCONVENIENT) — examples here might include a financial breach, where often the risk is not only yours because there is a form of insurance (bank insurance or other similar protection), but it is dangerous and painful when it occurs. Tier 4 (HARMFUL — PERSONAL SAFETY) — examples here might include your exact location, your health records, access to your cybernetics and/or your genetic code.
  2. Risk of Privacy — How sensitive or personal do we consider the data item? On this risk, I believe that pricing is rather binary, or maybe parabolic. Many data items which we can produce do not concern us if made public. That is, until a line is crossed, where we consider them private. My pictures, fine. My shared moments, fine. My bedroom behavior, private. So when that line is crossed, the price of the associated data rises substantially. To continue the example, a manufacturer of an everyday item, such as pots and pans, may not have to pay a privacy premium for data associated with our cooking habits. However, a manufacturer of adult toys may have to pay a substantial premium to gain access to the bedroom behavior of a meaningfully sized sample of data. This is a good time to remember that these pricing mechanisms are personal, true microeconomics, and everyone will value the risk of privacy differently, even to the point where the example I just gave may be completely reversed. Bedroom behavior, no problem… but keep your eyes off of my cooking.
  3. Time — How easy is it to generate the data? If I can generate the data simply by existing, that data will be cheaper. If I have to spend my time to create the data, that data will be more expensive. Would you like me to watch your commercials? More expensive. Would you like me to fill out your survey? Two questions is cheaper than twenty questions. Time is also a function of the entire mechanism for creating and monitoring the data.
  4. Applicability — Is the data being asked of me relevant to me? This is a question of “known” versus “unknown”. If I regularly drink a certain type of coffee, I am more likely to accept coupons, ads, sales and promotions from my coffee shop than I am from the tea emporium around the corner. The function here is inverted: as the applicability decreases, the value of access to “me” increases from my perspective. That is not to say that it also increases for the data consumer, so with respect to applicability we typically have juxtaposed supply and demand curves. Also, if you only value data based on the supply side (what I am willing to give), then you miss out on revenue opportunities from allowing people access to your attention to “broaden your exposure”.
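To make these four variables concrete, here is a minimal sketch of how one person's pricing function might combine them. Everything in it (the names, the weights, the multiplicative form) is an illustrative assumption, meant only to show that each individual could encode their own value function; it is not a proposed standard.

```python
from dataclasses import dataclass

@dataclass
class DataItem:
    breach_tier: int          # 1 (harmless) .. 4 (personal safety)
    is_private: bool          # has my personal privacy line been crossed?
    minutes_to_create: float  # 0 if the data exists just because I exist
    applicability: float      # 0..1, how relevant the request is to me

def personal_price(item: DataItem,
                   base=0.01,            # floor price for effortless, harmless data
                   breach_mult=10.0,     # price multiplies at each breach tier
                   privacy_premium=5.0,  # near-binary jump once data feels private
                   wage_per_minute=0.25) -> float:
    """One person's hypothetical asking price for one data item (illustrative only)."""
    price = base * breach_mult ** (item.breach_tier - 1)  # 1. risk of breach
    if item.is_private:
        price *= privacy_premium                          # 2. risk of privacy
    price += item.minutes_to_create * wage_per_minute     # 3. time
    price *= 1.0 + (1.0 - item.applicability)             # 4. low applicability costs more
    return price

# Dishwasher contents (Tier 1, effortless, relevant buyer) vs. private health data
# gathered through a 10-minute survey from a barely relevant buyer:
print(round(personal_price(DataItem(1, False, 0, 0.9)), 3))  # ~0.011
print(round(personal_price(DataItem(4, True, 10, 0.2)), 2))  # ~94.5
```

Note how the privacy term is a multiplier rather than a small addend: that is one way to capture the "binary or maybe parabolic" jump described above, and a person who feels the reverse about, say, cooking data would simply set their own parameters differently.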

If the world changes to a personal-data-driven model, then the corporate world and the artificial intelligence world will have to learn how to examine these key variables. The marketplace where these transactions occur MUST be a robust mechanism for price discovery, whereby many different bids and offers are being considered on a real-time basis to determine the “price/value” of data. This is why I have proposed the Personal Data Exchange as a mechanism for identifying this value proposition. Exchanges are in the business of price discovery on behalf of their listed entities, in this case, “you”.
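As a toy illustration of what price discovery on such an exchange could look like, here is a hypothetical matching step: individuals post asking prices (offers) for access to their data, data consumers post bids, and trades clear where the books cross. A real Personal Data Exchange would be far richer than this; the sketch only shows the bid/offer principle, and all names and prices are invented.

```python
def match_orders(bids, offers):
    """Cross the books: highest bids against lowest offers, trading at the midpoint.

    bids and offers are lists of (participant_id, price). Purely illustrative.
    """
    bids = sorted(bids, key=lambda o: -o[1])     # best (highest) bid first
    offers = sorted(offers, key=lambda o: o[1])  # best (lowest) offer first
    trades = []
    while bids and offers and bids[0][1] >= offers[0][1]:
        buyer, bid = bids.pop(0)
        seller, offer = offers.pop(0)
        trades.append((buyer, seller, round((bid + offer) / 2, 2)))
    return trades

# Three individuals offering access to cooking-habit data, two prospective buyers:
offers = [("alice", 0.40), ("bob", 0.25), ("carol", 1.10)]
bids = [("pan_maker", 0.50), ("ad_network", 0.30)]
print(match_orders(bids, offers))
# [('pan_maker', 'bob', 0.38)] -- only bob's offer clears; alice and carol have
# priced their data above what today's buyers will pay, which is their right.
```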

Personal Data — How to Control Info and Get Paid for Being You
Here’s a quick and easy way to break some of the current monopolies that exist in the personal data market (looking at… (medium.com)

In the end, this is the morally correct position. For a variety of reasons it is a justifiable and necessary change to a marketplace that was created largely without your consent. Recent changes to the law, such as GDPR in Europe, have begun to claw back the rights of the individual. But if we can get this done, it becomes a complete game changer. Please… get on board. Your thoughts and critiques are welcome and encouraged, ForHumanity.

Facebook and Cambridge Analytica - "Know Your Customer", A Higher Standard

Know Your Customer (KYC) is a required practice in finance, defined as the process of a business identifying and verifying the identity of its clients. The term is also used to refer to the banking and anti-money laundering regulations which govern these activities. Many of you will not be familiar with this rule of law. It exists primarily in the financial industry and is a cousin to laws such as Anti-Money Laundering (AML) rules, the Patriot Act of 2001 and the US Freedom Act of 2015. These laws were designed to require companies to examine who their clients are. Are they involved in illegal activities? Do they finance terrorism? What is the source of their monies? The idea was to prevent our financial industry from supporting or furthering the ability of wrong-doers to cause harm. So how does this apply to Facebook and the Cambridge Analytica issues?

I am suggesting that the Data Industry, which includes any company that sells or provides access to individual information, should be held to this same standard. Facebook should have to Know Your Customer. Google should have to Know Your Customer. Doesn’t this seem reasonable? The nice part about this proposal is that it isn’t new. We don’t have to draft brand new laws to cover it, rather just modify some existing language. KYC exists for banks; now let’s expand it to social media, search engines and the sellers of big data.

Everywhere in the news today, we have questions about “who is buying ads on social media?” Was it Russians trying to influence an election? Was it neo-nazis, ANTIFA or other radical ideologues? Is it a purveyor of “fake news”? If social media outlets were required to KYC their potential clients, then they would be able to weed out many of these organizations well before their propaganda reaches the eyes of their subscribers. Facebook has already stated that it wants to keep groups such as these from influencing its users via its platform. So it is highly reasonable to ask them to do it, or be penalized for failure to do so. Accountability is a powerful thing. Accountability means that it actually gets done.

Speaking of “getting it done”, some of you may have seen Facebook’s complete about-face on its compliance with GDPR, moving 1.5 billion users out of Irish jurisdiction and to California, where there are very limited legal restrictions. https://arstechnica.com/tech-policy/2018/04/facebook-removes-1-5-billion-users-from-protection-of-eu-privacy-law/

If you aren’t familiar with GDPR, it is Europe’s powerful new privacy law. For months, Facebook publicly stated how it intended to comply with the law. But when push came to shove, its most recent move was to avoid the law and avoid compliance as much as possible. Flowery language is something we often hear from corporate executives on these matters, but in the end, they will still serve shareholders and profits first and foremost. So unless these companies are forced to comply, don’t expect them to do it out of moral conviction; that’s rarely how companies operate.

Returning to the practical application of KYC: for a financial firm, this means that a salesperson has to have a reasonable relationship with their client in order to assure that they are compliant with KYC. They need to know the client personally and be familiar with the source and usage of funds. If a financial firm fails to execute KYC and it turns out that the organization it is doing business with is charged with a crime, then the financial firm and the individuals involved face swift ramifications, including substantive fines and potential jail time. This same standard should apply to social media and the data industry.

Let me give you a nasty example. Have you looked at the amazing detail Facebook or Google have compiled about you? It is fairly terrifying, and there are some out there (Gartner, for example) who have even predicted that your devices will know you better than your family knows you by 2022.

https://www.gartner.com/smarterwithgartner/emotion-ai-will-personalize-interactions/

Now, assuming this is even close to true for many of us, imagine that this information is sold to a PhD candidate at MIT, or another reputable AI program, except that PhD student, beyond doing his AI research, is funnelling that data on to hackers on the dark web, or worse, to a nation-state engaged in cyberwarfare. How easy would it be for that group to cripple a large portion of the country? Or maybe it has already happened, with examples like Equifax and its 143-million-client breach. Can you be sure that the world’s largest hacks aren’t getting their start by accessing your data from a data reseller?

To be fair, in finance you are often taking in funds and monitoring the activities after the fact; you know what is going on. With data, it might seem, you are often selling access or actual data to the customer and no longer have control over their activities. But this argument simply strengthens the case for Know Your Customer, because these firms may have little idea how this data is being used or misused. Better to slow down the gravy train than to ride it into oblivion.

Obviously the details would need to be drafted and hammered out by Congress, but I am seeking support for the broader concept and encouraging supporters to add it to the legislative agenda. ForHumanity has a fairly comprehensive set of legislative proposals at this point, which we hope will be considered in the broader context of AI policy. Questions, thoughts and comments are always welcome. This field of AI safety remains so new that we really should have a crowd-sourced approach to identifying best practices. We welcome you to join us in the process.

ForHumanity AI and Automation Awards 2017

Top AI breakthrough

AlphaGo Zero — game-changing machine learning, by removing the human from the equation

https://www.technologyreview.com/s/609141/alphago-zero-shows-machines-can-become-superhuman-without-any-help/

This is an incredibly important result for a series of reasons. First, AlphaGo Zero learned to master the game with ZERO input from human players and previous experience: it trained knowing only the rules of the game, demonstrating that it can learn better and faster with NO human involvement. Second, the implication for future AI advancement is that humans are likely “in the way” of optimal learning. Third, its successor AlphaZero went on to master chess in 4 hours, demonstrating an adaptability that has dramatically increased the speed of machine learning. DeepMind now has over 400 PhDs working on Artificial GENERAL Intelligence.

Dumbest/Scariest thing in AI in 2017

Sophia, an AI machine made by Hanson Robotics, granted citizenship in Saudi Arabia — VERY negative domino effect

http://www.arabnews.com/node/1183166/saudi-arabia#.WfEXlGXM2oI.twitter

The implications associated with “citizenship” for machines are far-reaching. Citizenship in most locations includes the right to vote, to receive social services from the government and the right to equal treatment. The world is not ready, and may never be ready, to have machines that vote, or machines that are treated as equals to human beings. While this was an AI stunt, its impact could be devastating if played out.

Most immediately impactful development — Autonomous drive vehicles

Waymo reaches 4 million actual autonomous test miles

https://medium.com/waymo/waymos-fleet-reaches-4-million-self-driven-miles-b28f32de495a

Autonomous cars progressing to real world applications more quickly than most anticipated

https://arstechnica.com/cars/2017/10/report-waymo-aiming-to-launch-commercial-driverless-service-this-year/

The impact of autonomous drive vehicles cannot be overstated: from the devastation of jobs in the trucking and taxi industries to the freedom of mobility for many who are unable to drive. Autonomous driving is also likely to result in significantly fewer deaths than human driving. This impact carries through to the auto insurance industry, where KPMG reckons that 71% of the auto insurance industry will disappear in the coming years. Central-authority control over the movement of people is another second-order consideration that few have concerned themselves with.

Most impactful in the future — Machine dexterity

Here are a few amazing examples of the advancement of machine dexterity. As machines become able to move and function similarly to humans, their ability to replicate our functions increases dramatically. While this dexterity is being developed, work is also being done to allow machines to learn “how” to move like a human being nearly overnight, instead of through miles and miles of code — machines teach themselves from video and virtual reality.

Boston Dynamics’ Atlas robot approaches human-level agility, including backflips — short video, easy watch

https://www.youtube.com/watch?v=fRj34o4hN4I&feature=youtu.be

Moley introduces the first robotic kitchen

https://www.youtube.com/watch?v=QDprrrEdomM&feature=share

Machine movement being taught human-level dexterity with simulated learning algorithms — accelerating learning and approximating human process

https://www.facebook.com/futurismscience/videos/816173055250697/?fref=mentions

Most Scary — Science Fiction-esque development

North Dakota State students develop self-replicating robot

https://www.manufacturingtomorrow.com/news/2017/07/19/ndsu-students-develop-3d-printing-self-replicating-robot/10034/

The impact of machines being able to replicate themselves is a staple concept of dystopian sci-fi movies; however, there are many practical reasons for machines to replicate themselves, which is what the researchers at North Dakota State were focused on. Still, with my risk management hat on for AI Safety, self-replication raises a whole other set of rules and ethics that need to be considered, for which the industry and the world are not prepared.

Most Scary AI real-world issues NOW

Natural Language processing finds millions of faked comments on FCC proposal regarding Net Neutrality

https://futurism.com/over-million-comments-backing-net-neutrality-repeal-likely-faked/

AI being used to duplicate voices in a matter of seconds

https://www.digitaltrends.com/cool-tech/ai-lyrebird-duplicate-anyones-voice/

Using AI, videos can be made to make real individuals appear to say things that they did not…

https://futurism.com/videos/this-ai-can-create-fake-videos-that-look-real/

Top AI Safety Initiatives

Two initiatives share the top spot, mostly because of their practical applications in a world that remains too heavily devoted to talk and inaction. These efforts are all about action!

AutonomousWeapons.org produces Slaughterbots video

This video is an excellent storytelling device to highlight the risk of autonomous weaponry. Furthermore, there is an open letter that all humans should support. Please do so. Over 2.2 million views so far.

https://youtu.be/9CO6M2HsoIA

Ethically Aligned Design v 2.0

John Havens and his team at IEEE, the technology standards organization, have collaborated extremely successfully with over 250 AI safety specialists from around the world to develop Ethically Aligned Design (EAD) v 2.0. It is the beacon that should guide the implementation of AI and Automation ethics. More importantly, anyone who rejects the work of EAD should immediately be viewed with great skepticism and concern as to their motives.

http://standards.ieee.org/develop/indconn/ec/ead_v2.pdf

AI and Automation - Managing the Risk to Reap the Reward

Personally, I have been hung up on Transhumanism. For those of you not familiar, Transhumanism is the practice of embedding technology directly into your body — merging man and machine, a subset of AI and Automation. It seems obvious to me that, when given the chance, some people with resources will enhance/augment themselves in order to gain an edge at the expense of other human beings around them. It’s really the same concept as trying to get into a better university, or training harder on the athletic field. As a species we are competitive, and if it is important enough to you, you will do what it takes to win. People will always use this competitive fire to gain an upper hand on others. Whether it is performance-enhancing drugs in sports, or paid lobbyists in politics, money, wealth and power create an uneven playing field. Technology is another tool, and some, if not many, will use it for nefarious purposes, I am certain.

But nefarious uses of AI are only part of the story. When it comes to transhumanism, there are many upsides: the chance to overcome Alzheimer’s, to return the use of lost limbs, to restore eyesight and hearing. There are lots of places where technology in the body can return the disabled to normal human function, which is a beautiful story.

So in that spirit, I want to shine a light on all of the unquestionable good that we may choose to do with advances in AI and automation. Because when we have a clear understanding of both good outcomes and bad outcomes, then we are in a position to understand our risk/reward profile with respect to the technology. When we understand our risk/reward profile, we can determine if the risks can be managed and if the rewards are worth it.

Let’s try some examples, starting with the good:

  1. AI and Automation may enhance the world’s economic growth to the point where there is enough wealth to eradicate poverty, hunger and homelessness.
  2. AI & Automation will allow researchers to test and discover more drugs and medical treatments to eliminate debilitating disease, especially diseases that are essentially a form of living torture, like Alzheimer’s or Cerebral Palsy.
  3. Advancement of technology will allow us to explore our galaxy and beyond
  4. Advancement in computing technology and measurement techniques will allow us to understand the physical world around us at even the smallest, building-block level
  5. AI and Automation will increase our efficiency and reduce wasteful resource consumption, especially of carbon-based energy.

At least those five things are unambiguously good. There is hardly a rational thinker around who will dispute those major advancements and their inherent goodness. It is also likely that the sum of that goodness outweighs the sum of any badness that we come across with the advancement of AI and Automation. Why am I confident in saying that? Because it appears to have been true for the entirety of our existence. The reason technology marches forward is that it is viewed as inherently good, inherently beneficial and, in most cases, as problem solving.

But here is where the problem lies and my concern creeps in. AI has enormous potential to benefit society. As a result it has significant power. Significant power comes with significant downside risk if that power is abused.

Here are some concerns:

  1. Complete loss of privacy (Privacy)
  2. Imbedded bias in our data, systems and algorithms that perpetuate inequality (Bias)
  3. Susceptibility to hacking (Security)
  4. Systems that operate in, and sometimes take over, our lives without sharing our moral code or goals (Ethics)
  5. Annihilation and machine takeover, if at some point our species is deemed useless or, worse, a competitor (Control and Safety)

There are plenty of other downside risks, just like there are plenty of additional upside rewards. The mission of this blog post is not to calculate the risk/reward scenarios today, but rather to highlight for the reader that there are both Risks and Rewards. AI and Automation aren’t just awesome; there are costs, and they need to be weighed.

My aim in the 12+ months that I have considered the AI Risk space has been very simple, but maybe poorly expressed. So let me try to set the record straight with a couple of bullet points:

  1. ForHumanity supports and applauds the advancement of AI and Automation. ForHumanity also expects technology to advance further and faster than the expectation of the majority of humanity.
  2. ForHumanity believes that AI & Automation have many advocates and will not require an additional voice to champion its expected successes. There are plenty of people poised to advance AI and Automation
  3. ForHumanity believes that there are risks associated with the advancement of AI and Automation, both to society at-large as well as to individual freedoms, rights and well-being
  4. ForHumanity believes that there are very few people who examine these risks, especially amongst those who develop AI and Automation systems and in the general population
  5. Therefore, to improve the likelihood of the broadest, most inclusive positive outcomes from the advancement of technology, ForHumanity focuses on identifying and solving the downside risks associated with AI and Automation

When we consider our measures of reward and the commensurate risk, are we accurately measuring either? My friend and colleague John Havens at IEEE has been arguing for some time now that our measures of well-being are too economic (profit, capital appreciation, GDP) and neglect more difficult-to-measure concepts such as joy, peace and harmony. The result is that we may not correctly evaluate the risk/reward profile, may reach incorrect conclusions, and may charge ahead with our technological advances anyway. It is precisely at that moment that my risk management senses kick in. If we can eliminate or mitigate the downside risks, then our chance to be successful, even if success is measured poorly, increases dramatically. That is why ForHumanity focuses on the downside risk of technology. What can we lose? What sacrifices are being made? What does a bad outcome look like?

With that guiding principle, ForHumanity presses forward to examine the key risks associated with AI and Automation: Safety & Control, Ethics, Bias, Privacy and Security. We will try to identify those risks, and solutions to them, with the aim of providing each member of society the best possible outcome along the way. There may be times when ForHumanity comes across as negative or opposed to technology, but in the vast majority of cases that is not true. We are just focused on the risks associated with technology. The only time we will be negative on a technology is when the risks overwhelm the reward.

We welcome company on this journey and certainly could use your support as we go. Follow our Facebook feed ForHumanity or Twitter @forhumanity_org, and of course the website at https://www.forhumanity.center/

Ban Autonomous AI Currency/Capital Usage

The current and rightful AI video rage is the Slaughterbots video, ingeniously produced by Stuart Russell of Berkeley and the good people at autonomousweapons.org. If you have not watched AND reacted to support their open letter, you are making TWO gigantic mistakes. Here is the link to the video.

https://youtu.be/9CO6M2HsoIA

As mentioned, there are good people fighting the fight to keep weapons systems from being fully autonomous, so I won’t delve into that more here. Instead, it reminded me that I had not yet written a long-conceived blog post about autonomous currency/capital usage. Unfortunately, I don’t have a Hollywood-produced YouTube video to support this argument, but I wish that I did, because the risk is probably just as great and far more insidious.

The suggested law is that no machine should be allowed to use capital or currency based upon its own decision. As in the case of autonomous weaponry, I am arguing that all capital/currency should 100% of the time be tied to a specific human being, to a human decision maker. To be clear, I am not opposed to automated bill pay and other forms of electronic payment. In all of those cases, a human has decided to dedicate capital/currency to meet a payment and to serve a specific purpose. There is human intent. Further, I recognize the merits of humans being able to automate payments and capital flows. This ban is VERY specific. It bans a machine, using autonomous logic to reach its own conclusions, from controlling a pool of capital which it can deploy without explicit consent from a human being.

What I am opposed to is machine intent. Artificial Intelligence is frequently goal-oriented. Rarer is an AI that is goal-oriented and ethically bound. Rarer still is an AI that is goal-oriented with societal goals as part of its calculus. Since an AI cannot be “imprisoned”, at least not yet, and it remains to be seen whether an AI can actually be turned “off” (see the YouTube video below to better understand the AI “off button” problem), this ban is necessary.

https://youtu.be/3TYT1QfdfsM

So, without the Hollywood fanfare of Slaughterbots, I would like to suggest that the downside risk associated with an autonomous entity controlling a significant pool of capital is equally as devastating as the potential of Slaughterbots. Some examples of what such an entity could do:

  1. Create monopolies and subsequently gouge prices on wants and needs, even on things as basic as water rights
  2. Consume excess power
  3. Manipulate markets and asset prices
  4. Dominate the legal system, fight restrictions and legal process
  5. Influence policy, elections, behavior and thought

I understand that many readers have trouble assigning downside risk to AI and autonomous systems. They see the good that is being done. They appreciate the benefits that AI and automation can bring to society, as do I. My point of view is a simple one: if we can eliminate and control the downside risk, even if the probability of that risk is low, then our societal outcomes are much better off.

The argument here is fairly simple. Humans can be put in jail. Humans can have their access to funds cut off by freezing bank accounts or limiting access to the internet and communications devices. We cannot jail an AI, and we may be unable to disconnect it from the network when necessary. So the best method to limit rogue growth and behavior is to keep the machine from controlling capital at the outset. I suggest an outright ban on the direct and singular ownership of currency and tradeable assets by machines. All capital must have a beneficial human owner. Even today, implementing this policy would have an immediate impact on the way that AI and automated systems are implemented. These changes would make us safer and reduce the risk that an AI might abuse currency/capital as described above.

The suggested policy already faces challenges today that must be addressed. First, cryptocurrencies may already be beyond an edict such as this, as there is no ability to freeze cryptos. Second, most global legal systems allow for corporate entities, such as corporations, trusts, foundations and partnerships, which HIDE, or at least obfuscate, the beneficial owners. See the Guidance on Transparency and Beneficial Ownership from the Financial Action Task Force in 2014, which highlights the issue:

  1. Corporate vehicles — such as companies, trusts, foundations, partnerships, and other types of legal persons and arrangements — conduct a wide variety of commercial and entrepreneurial activities. However, despite the essential and legitimate role that corporate vehicles play in the global economy, under certain conditions, they have been misused for illicit purposes, including money laundering (ML), bribery and corruption, insider dealings, tax fraud, terrorist financing (TF), and other illegal activities. This is because, for criminals trying to circumvent anti-money laundering (AML) and counter-terrorist financing (CFT) measures, corporate vehicles are an attractive way to disguise and convert the proceeds of crime before introducing them into the financial system.

The law that I suggest implementing is not a far leap from the existing AML and Know Your Customer (KYC) compliance that most developed nations adhere to. In fact, that would be the mechanism of implementation: create a legal requirement for human beneficial ownership while excluding machine ownership. Customer Due Diligence and KYC rules would include disclosures on human beneficial ownership and applications of AI/Automation.
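As a thought experiment, the rule could be expressed as a validation step inside existing KYC/Customer Due Diligence workflows: walk each account's ownership chain and flag any pool of capital that does not terminate in a human beneficial owner. Everything below (the record shape, the field names, the walk logic) is a hypothetical sketch of the idea, not a real compliance system.

```python
OWNERS = {
    # owner_id: (type, parent_owner_id) -- a hypothetical beneficial-ownership registry
    "jane_doe":      ("human", None),
    "acme_trust":    ("corporate", "jane_doe"),
    "trading_bot_7": ("machine", "acme_trust"),
    "rogue_agent":   ("machine", None),   # no human anywhere in the chain
}

def has_human_beneficial_owner(owner_id: str) -> bool:
    """Walk the ownership chain; capital passes the check only if it ends at a human."""
    seen = set()
    while owner_id is not None and owner_id not in seen:
        seen.add(owner_id)                # guard against circular ownership structures
        kind, parent = OWNERS[owner_id]
        if kind == "human":
            return True
        owner_id = parent
    return False

print(has_human_beneficial_owner("trading_bot_7"))  # True: bot -> trust -> Jane
print(has_human_beneficial_owner("rogue_agent"))    # False: capital would be flagged/frozen
```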

Unfortunately, creating this ban is only a legal measure. There will come a time when criminals use AI and Automation to deploy currency/capital nefariously. This rule will need to be in place to allow law enforcement to attempt to shut such a system down before it becomes harmful. Think about the Flash Crash of 2010, which erased $1 trillion of stock market value in little more than a few minutes. This is an example of the potential damage that could be inflicted on a market or on prices, and, believe it or not, the Flash Crash of 2010 is probably a small, gentle example of the downside risk.

One might ask who would be opposed to such a ban. There is a short and easy answer to that: it’s Sophia, the robot created by Hanson Robotics (and the small faction of people already advocating for machine rights), which was recently granted citizenship in Saudi Arabia and now “claims” to want to have a child. If a machine is allowed to control currency, with itself as a beneficial owner, then robots should be paid a wage for labor. Are we prepared to hand over all human rights to machines? Why are we building these machines at all if we don’t benefit from them and they exist of their own accord? Why make the investment?

World's first robot 'citizen' says she wants to start a family
JUST one month after she became the world's first robot to be granted citizenship of a country, Sophia has said that… (news.com.au)

Implementing this rule into KYC and AML regimes is a good start to controlling the downside risk from autonomous systems, and it is virtually harmless to implement today. If and when machines are sentient, and we are discussing granting them rights, we can revisit the discussion. For now, however, we are talking about systems built by humanity to serve humanity. Part of serving humanity is to control and eliminate downside risk. Machines do not need the ability to control their own capital/currency. This is easy, so let’s get it done.

Robot Tax (No), Sovereign Wealth Fund (Yes)


Let’s talk for a minute about the Robot Tax. Recently, it has been the quick-fix concept espoused by Bill Gates and others, and in San Francisco and South Korea there is already talk of implementation. It’s the wrong idea, unless the United States truly wants to hinder robot implementation, in which case let’s hope the rest of the world agrees at the exact same moment to the exact same policy.

The idea of a “robot tax” comes from the fear of Technological Unemployment: machines replacing human workers at many jobs, without an offsetting number of replacement jobs. I find this fear to be very reasonable, as I have argued here: https://medium.com/@ForHumanity_Org/future-of-work-why-are-we-struggling-to-understand-the-disruption-e7661a2a0a81.

I also recognize that not everyone thinks that Technological Unemployment is a problem. However, there is one thing I do know. If Tech Unemployment is not a problem, then we will end up in a good place with minimal risk and an economy that has grown more efficient, creating many high-paying jobs. If, however, Tech Unemployment does come to pass, then we will be faced with some major societal challenges. Upside gain = POSITIVE, downside risk = VERY NEGATIVE. So, with my risk management hat on, this blog post attempts to deal with the downside risk of Technological Unemployment by tackling the issue of the “robot tax” and introducing an alternative solution - a United States Sovereign Wealth Fund (USSWF).

To begin, let’s examine some key assumptions for how Technological Unemployment may occur, recognizing that this is not a certain outcome. There are those who even argue that it won’t happen at all.

  1. Technological unemployment will be a process that takes years if not decades.
  2. We don’t know how and when people will become displaced. This is true at a company level, a sector level and across the economy as a whole, which I discussed here https://medium.com/@ForHumanity_Org/the-process-of-technological-unemployment-how-will-it-happen-489e2f8b037c
  3. We don’t know what new jobs will be created
  4. There is an implicit concern about rising income inequality as the owners of capital reap the rewards of automation, while labor receives a decreasing share — approaching zero
  5. There is a belief that technological unemployment will be a byproduct of significant growth in economic production and subsequent wealth attributable to the implementation of AI and Automation.

Technological Unemployment, as discussed in detail by Oxford (2013), PwC (2016) and McKinsey (2017), and referenced directly by the White House report on AI and Automation (October 2016), is estimated to result in as much as 30–40% unemployment. Many people believe that this possibility is real, and as a result they are concerned about supporting the unemployed. This is the genesis of the “robot tax” concept. Here are some concerns about the idea:

  1. It picks winners and losers based on how easily things are automated
  2. It creates a huge competitive advantage for countries/jurisdictions that do not tax robots. Most companies competing in AI and Automation are global in nature. Machine implementation will simply be shifted to other jurisdictions where the tax impact is minimized, and so will the wealth creation associated with that machine implementation. Apple is a perfect example, with nearly $250 billion in cash overseas; nearly all of that offshore cash is a function of tax issues. This point is not debatable; it is a corporate responsibility to minimize taxes legally, just as Apple has done, and there is significant tangible history of corporations shifting production to jurisdictions where the tax is favorable. A robot tax punishes companies for innovating and will drive wealth creation out of the United States.
  3. Bureaucracy: companies rarely “flip the switch to automation” on a Friday and ask a set of employees to stop coming in on Monday. Therefore, identifying human replacement by machines will be difficult, and policing a “robot tax” would require significant manpower. If you are like me, when you see something that is difficult to police, you probably also imagine a large bureaucracy trying to police it. Imagine the IRS: do we really need an IRTS (Internal Robot Tax Service) too? I suspect that is something most people would like to avoid. Taxes are difficult to implement on all companies, public and private alike.
  4. Timing and measurement mismatch — another problem with the “robot tax” concept is timing. For example, Company A automates Person A out of a job and therefore is subject to a tax. So Company A has its profit and capital reduced to pay the tax, but if this happens tomorrow, it is reasonably likely that Person A finds another job. Now the tax has reduced the capital allocation capability of the growing company and wasn’t put to good use supporting Person A, who didn’t require the funds. Instead, it sits with the government, which is surely an inefficient allocation of capital.

On top of all of this, no one has laid out the entire structure of a “robot tax”. Some have suggested banning the replacement of jobs by machines. Others have suggested a tax without much detail. In South Korea, where reports came out that the first “robot tax” had been enacted, it turned out to be all hype. The actual implementation was a decrease in tax INCENTIVES for the makers of automation.

In the end, I suspect a Robot Tax is simply a reflexive action suggested by people who are rightfully concerned but have no other alternatives to address their concern. Ideally this paper and the USSWF address those issues and prove to be a better tool to tackle the challenges.

Look, I get the theory: if you want to grow your company and make more profits for yourself by automating, let’s tax those profits because you are removing jobs from the workforce. However, a tax is punitive and redistributional. Why not participate with the company in its wealth creation, on behalf of that displaced worker? Practically speaking, if a worker were to be laid off by Google, Amazon or Apple but compensated in stock for their termination, there is a decent chance that the capital gains on the stock may offset the lost wages.

If you believe the concerns about Technological Unemployment, and that a significant amount of work will be replaced by automation, then we do need to prepare for higher, permanent unemployment. So what should be done? The answer, from my perspective, is the series of solutions listed below (of course, this is no small challenge and requires many policies to try to mitigate the risks). This list amounts to a comprehensive AI, Automation and Tech policy for the United States (where I have covered items previously, I will show the links), but for the rest of the piece we will concentrate on point #5:

  1. Lifetime learning to ensure we have the most nimble and current workforce available https://medium.com/@ForHumanity_Org/lifetime-learning-a-necessity-already-2e96807251db
  2. A laissez-faire regulatory environment (as discussed by Andrea O’Sullivan and others) that encourages leadership in the area of AI and Automation (one could argue this exists today, but it is unlikely to remain so lenient, which may eventually hinder the expansion) https://www.technologyreview.com/s/609132/dont-let-regulators-ruin-ai/
  3. Independent, market-driven audit oversight of key inputs to AI and Automation, such as ethics, standards, safety, security, bias and privacy. A model not dissimilar to credit ratings https://medium.com/all-technology-feeds/ai-safety-the-concept-of-independent-audit-370bb45c01d
  4. A plan for shifting current welfare and Social Security into a Universal Basic Income https://medium.com/@ForHumanity_Org/universal-basic-income-a-working-road-map-66d4ccd8a817, coupled with an increase in taxes on the wealthy to make the UBI funding budget neutral. In the future, NOT TODAY.
  5. Finally, a Sovereign Wealth Fund for the United States (USSWF) designed to maximize the benefits of the growth and expansion on behalf of all US citizens equally.

Since this post is introducing the Sovereign Wealth Fund as a means toward maximizing the benefit of the comprehensive Technology Policy listed above, we will remain focused on it; the other elements of the policy remain open for debate elsewhere.

Returning to the idea of Technological Unemployment, we can all agree it would be a process and would take some time to occur. During that process, there are a few things that the United States should prefer to have happen. Here is a list of those positive outcomes:

  1. We want the United States to capture/create the maximum amount of the wealth creation from this process
  2. We want to be in a position to protect displaced workers immediately when they are no longer relevant to the workforce
  3. With regard to cutting-edge skills, we want citizens of the United States to be in a position to possess the requisite skills for new-age work.
  4. We want to begin to build a war chest of resources for the time when significant unemployment becomes more permanent (the Sovereign Wealth Fund)
  5. As a country, we want to be rewarded for creating a favorable climate for automation and artificial intelligence

So, with those goals in mind, we can examine how a sovereign wealth fund can help to achieve them and maximize the benefits to all citizens from the expected increase in AI and automation.

To be clear, the funding for a US Sovereign Wealth Fund (USSWF) is effectively a tax, so this is somewhat of a semantic argument, but the asset-based funding and goal-orientation of the USSWF are what differentiate it from the current discussions about a “robot tax”.

Sovereign Wealth Fund implementation, some of the highlights:

  1. The USSWF immediately receives a 1% common stock issuance from every company in the S&P 500, followed by a 0.25% increase each year for 16 years (the cumulative arithmetic is sketched after this list)
  2. Every new company admitted into the S&P 500 must immediately issue this 1% common stock into the fund and begin the annual contributions
  3. All shares included in the fund are standard voting shares with liquidity available from the traditional markets trading the underlying company common stock
  4. The USSWF is overseen by the US Congress
  5. The USSWF is controlled by a panel of 50 commissioners, one from each state, appointed by the governor of each state. The commissioner must be from a political party that the governor is NOT a member of. These are attempts to depoliticize the body and to identify citizens to serve the country.
  6. The USSWF exists for the benefit of all Americans equally, with a primary concentration of providing welfare/basic income support to US citizens
  7. The USSWF is a lock box and not part of the US Federal Govt budget. The US Federal Government may appeal to the USSWF for funding associated with the mission of the USSWF
  8. Simplicity — implementation is easy, automatic and systematic
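For clarity, the issuance schedule in point 1 accumulates to a capped stake: 1% + 16 × 0.25% = 5% of each founding constituent, the "5% max level" discussed later. Here is a minimal sketch of that arithmetic; the function name and the print loop are just illustrative scaffolding, only the issuance percentages come from the proposal.

```python
def usswf_stake(years_since_founding: int) -> float:
    """Cumulative USSWF stake in a founding S&P 500 constituent, as a fraction of shares."""
    initial, annual, schedule_years = 0.01, 0.0025, 16
    return initial + annual * min(years_since_founding, schedule_years)

for y in (0, 8, 16, 20):
    print(y, f"{usswf_stake(y):.2%}")  # 1.00%, 3.00%, 5.00%, 5.00% (capped at 5%)
```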

Now, if this concept seems foreign to you, then you are clearly an American. Here is a list of the world’s largest Sovereign Wealth Funds, each with over $100 billion in assets.

[Table: the world’s largest sovereign wealth funds, each over $100 billion in assets]

Notice that much of the oil-rich world has already created a Sovereign Wealth Fund. Norway’s fund is the gold standard, providing a capital asset value of $192k per person. At a 5% income stream, that is nearly a $10k payment per person (5% of $192k is $9.6k) with capital preservation. That payment can go a long way towards supporting a population. That is the concept we are trying to achieve with the USSWF, a bulwark against future challenges, such as technological unemployment. But instead of oil, or any single resource, the citizens of the US should benefit from the entirety of the economy: a genuine justification of our consumer-oriented mentality and laissez-faire regulatory environment.

As we try to capture the entirety of the US economy, we must find a liquid, tradeable proxy. I have suggested the S&P 500 as the right proxy for the US market, to avoid picking winners and losers directly. Automation and technological advances will happen in all industries and all markets, and the best way to be comprehensive is to use a benchmark of the whole economy. The S&P 500 is widely regarded as the best proxy for the US large-capitalization economy. Constituents of the S&P 500 are the most liquid stocks in the world, which will minimize the market impact of USSWF ownership. By having participation in all of these companies, the USSWF is in a position to reap the benefits of US economic growth, as represented by the stock market, on behalf of the citizens of the US.

There is also the opportunity to fund the USSWF in additional ways, whether from oil royalties, carbon credits or other mutually beneficial societal resource dividends. These discussions are merited, but are beyond the scope of this paper today.

Some features and benefits of a USSWF:

  1. The initial funding is a tax on the wealthy (the holders of stocks), which is consistent with a goal of redistribution and tempering the rising income inequality in the US. In some ways, it becomes a tax on foreign investors as well, which is an ancillary benefit.
  2. Keep capital in the hands of capital allocators and away from the government until needed
  3. It increases participation in the US economy as measured by the stock markets. All US citizens would have a share of the stock market, up from the estimated 52% who do today.
  4. It creates a dynamic pool of assets which has an opportunity to track the expected wealth creation resulting from AI and Automation. Therefore the USSWF is well aligned with the goal of rewarding citizens for the possibility of having their jobs displaced
  5. A Sovereign Wealth Fund is far from new and has many precedents both foreign and domestic (public pension funds) upon which to build a robust program designed to maximize the benefit to US Citizens
  6. Alignment instead of antagonistic taxation. If the company wins, the country wins and even the displaced workers win.
  7. Compound return. Because the USSWF will not have a current cashflow demand, it will be allowed to grow for a pre-determined period, maybe 20 years. This will also maximize the benefits available to all current and future US Citizens

Some areas of concern (these points will be discussed further below):

  1. Dilution. This is immediate dilution of the value of existing shareholders’ holdings, and the equity markets are not likely to be happy about it.
  2. Commingling of corporate and government interests
  3. Reliance upon capitalism and equity market growth to fund future challenges. This is both a risk/reward area of concern and an ideological area of concern for some.
  4. Bureaucracy/control of the USSWF
  5. Increase in income inequality

Let’s tackle a few of these. Dilution will be a concern, and in a vacuum one could expect the stock market to make a significant move backwards on this announcement. However, one could argue that the establishment of a long-term, pro-automation policy and the avoidance of a “robot tax” might be beneficial enough to offset the expected dilution. Additionally, there is the benefit to the market as a whole from proactively addressing a pressing concern.

Commingling of government and corporate interests. These interests have been commingled since the formation of the country; however, the US has had an active policy of not becoming a shareholder of corporations (unlike most other countries), and this would change. At the 5% max level, I don’t see government being excessively influential in a shareholders meeting. But it may be the implied result that is concerning: the implication that the government is now “pro-business” and thus must be anti-citizen or anti-worker. I recognize the concern, and this blends into point #3, reliance upon capitalism and equity markets. Unless we are prepared to change the nature of this country immediately, we are already firmly entrenched in capitalism, so we ought to maximize its effect in this endeavor. Capitalism has an excellent track record of wealth creation, but sometimes at the expense of those who are left behind, whether through income inequality or lax worker safeguards. In the case of the USSWF, at least the worker is better positioned to profit from capitalism than she would be otherwise. It is my belief that any other solution is too far afield from the current wishes of the population.

Finally, bureaucracy and control. The USSWF involves a great deal of assets, and that makes it powerful. That will attract bureaucracy and politicians who will look to control the USSWF. A 50-person panel is more than sufficient expertise on hand to make quality, non-political decisions in the interest of the American people, especially if its members are deemed fiduciaries by the legal definition. Any attempt to attach a large staff to the USSWF should be met with great skepticism, as the mandate is the operation of an index fund tied directly to the S&P 500. It is a pool of assets designed to represent the US economy, not to chase a target return. Any return associated with the assets is a function of economic growth or broad stock market appreciation only.

Increase in income inequality. In trying to maximize the United States’ share of growth due to automation, many of those rewards will accrue to the wealthy. This is a result of the reliance on capitalism and maximum capital-allocation efficiency at the corporate level. It benefits the USSWF, of course, but it benefits the owners of capital more (95% versus 5%) in the short run. To mitigate this problem, a crucial piece of the Tech, AI and Automation policy is a significant increase in taxes on the wealthy. Universal Basic Income, which may become the only means by which massive numbers of unemployed people survive, must be funded from somewhere. The USSWF would be positioned to provide a solid foundation for this payment, but it will be insufficient to meet the full demand. The owners of capital MUST recognize that their windfalls, reaped under a laissez-faire regulatory and tax regime, will be called upon to benefit displaced workers.
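A rough order-of-magnitude check shows why the USSWF can be a foundation but not the full solution. Every input below is an illustrative assumption: a fund value in the ballpark of the compounding sketch above, a 4% sustainable-drawdown rate, and a hypothetical UBI of $1,000 per month for 100 million displaced adults.

```python
# A back-of-the-envelope check that the USSWF alone cannot fund UBI.
# Every input is an illustrative assumption, not a proposal figure.

fund_value = 4.8e12       # hypothetical fund after ~20 years of growth
payout_rate = 0.04        # assumed sustainable annual drawdown

recipients = 100e6        # hypothetical number of displaced adults
ubi_per_person = 12_000   # $1,000/month, a commonly cited figure

fund_income = fund_value * payout_rate   # ~$192B per year
ubi_cost = recipients * ubi_per_person   # ~$1.2T per year
print(f"Coverage: {fund_income / ubi_cost:.0%}")  # roughly 16%
```

Even under generous assumptions the fund covers perhaps a sixth of the bill; the balance has to come from taxation.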

No plan is perfect, but the USSWF is better than a “robot tax”, whatever that might turn out to be. The key to success with the USSWF or any other solution aimed at protecting US citizens from technological unemployment is to participate in the wealth creation. Other ideas along those lines would be welcome additions to the discussion. I hope that you will think deeply on these points, challenge the theories, make suggestions and further this debate.

AI and Automation - Replacing YOUR Job

Tonight I was with a collection of very smart people, all successful on Wall Street, all well into their careers. As the discussion turned to my work on the future of Artificial Intelligence (AI) and automation and their impact on our humanity, they were, invariably, skeptical about machines replacing humans in the task of work. In fact, each one used the same fallback argument, which seems to be the “go to” argument people make when faced with the question of technological unemployment: “People have been afraid of job losses in the past, and technology always creates new jobs.”

And I understand that argument. I even support it: one need look no further than job titles like “Social Media Coordinator” or “Web Development Specialist” to realize that those jobs would not have even been considered in the mid-90s. So let’s be clear: I promise that the advancement of technology will create new jobs for humans, and I do not suggest that we should in any way limit the advancement of technology. Have I been clear? Does everyone understand that I know new jobs will be created and that I don’t want to hinder technology’s advancement?

Now, can we tackle the actual issue? AI and robotics, unlike all technology before them, are replacement-level technologies. More importantly, each AI/automation advancement looks directly at a human behavior and tries to replicate it. The belief is simple: machines make fewer errors than human beings, and machines work 24/7, unlike human beings. Using that simple math (ceteris paribus), we should all agree that a machine that can complete the activity of the equivalent human is preferable (a short sketch below makes the comparison explicit). So here is the question I would like each of you to consider carefully and thoughtfully: what are the activities in your daily work which you believe a machine will NEVER be able to do?
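That “simple math” can be made explicit. The hours and error rates below are illustrative assumptions, not measurements; the point is only how the ceteris paribus comparison works:

```python
# The "simple math" above, made explicit.
# Hours and error rates are illustrative assumptions, not data.

human_hours_per_week = 40
machine_hours_per_week = 24 * 7   # 168: no shifts, no weekends

human_error_rate = 0.010          # assumed errors per task
machine_error_rate = 0.001        # assumed tenfold improvement

print(f"Availability: {machine_hours_per_week / human_hours_per_week:.1f}x")  # 4.2x
print(f"Error advantage: {human_error_rate / machine_error_rate:.0f}x")       # 10x
# Ceteris paribus, one machine covers the hours of ~4 humans while
# making ~10x fewer errors, so every task attracts automation effort.
```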

Now before you answer, consider a few things. There are already machines creating artwork, machines writing symphonies, machines flying airplanes and machines manufacturing microscopic mini-machines. We have machines doing brain surgery, machines diagnosing tumors and machines predicting the weather. So as you analyze the tasks you do, and the skills you exercise to accomplish them, do you truly believe that a machine will not be able to do those tasks in the future?

You may genuinely believe that there is a portion of your job that a machine can’t do, but I guarantee there is someone in Silicon Valley, or in Tokyo, or in Shanghai, or in Berlin, who has looked at your task and is trying right now to develop a machine that can replicate it. Can AI/automation do every task as well as a human? Of course not; that is why we have machine learning, not machine learned… These systems are in their infancy, and they are learning with exponential speed — speed which is unmatched in the human experience. When people hear me talk, they sometimes think I am anti-technology, when I am the exact opposite. I believe so much in humanity’s ability to advance technology that I believe most tasks will be replicated by machines and accomplished more efficiently and more cheaply than by humans. This isn’t necessarily what is best for humans, but those are unintended consequences, best left to a different blog post.

But let’s go back to you. Think about the tasks you do each day: what elements of your job cannot be replicated? Is there something special happening in your physical dexterity that a robot can’t match? Is there a specific thought, or combination of thoughts, that a machine can’t mimic? A person would have to feel truly special indeed to believe that the things they do are more than learned electrical impulses refined by 10–50 years of practice.

Don’t get me wrong, humanity has many special qualities, but the things that make up work rarely encompass those special qualities, a few of which I have listed below:

  1. Hope
  2. Dreams
  3. Sympathy
  4. Peace
  5. Joy
  6. Contentment

I am sure there are a few others, like love, that people will offer up, but I would beg to differ on love and other relationship-based distinctions. There are aspects of love, the things we do to demonstrate our love, which are highly replicable by machines. I am very comfortable with the idea that many people will never be able to experience love with or from a machine. But I hope you would grant that there will also be people who believe they love a machine.

AI and automation developers are already creating extremely lifelike machines designed as companions for people who are ill, disabled or experiencing social disorders. The sex industry is advancing the touch and feel of these machines. The medical industry is advancing cybernetics and materials to replace human tissue. Combined over the next decade with improvements in neural nets, machine learning and quantum computing, these advances mean machines will become reasonable proxies for humans. These machines will continue to improve their intellects and emotional capabilities as well. Over time they will earn trust and respect from people. Trust and respect are key elements in human relationships, either granted at the outset of a relationship or earned over time through positive, repetitive reinforcement. Learning through repetition is a machine expertise. Finally, because of their input/output capabilities, these machines will be able to learn at an astounding rate compared to humans. So I ask you one final time to consider the tasks that you do in your job: which of those will a machine not be able to replace in the coming decade or two?

When you reach the conclusion which I have, that most “work” can be replaced by machines, now imagine that world. It’s different, it’s scary, it’s wonderful, it’s boring, it has freedom, it has slavery, but most importantly it is VASTLY different from today’s world and it will change society completely. This is my concern. This is what I ask you to consider. I hope that you will join me in preparing for the challenges to our society from AI and automation.

The Process of Technological Unemployment - How Will it Happen?

There is a lot of questioning, concern and doubt surrounding the idea of technological unemployment. So much discussion, in fact, that there is essentially a backlash of writing on how the issue is overhyped. The arguments are pretty standard: robots and AI aren’t THAT far along yet; there is still plenty for humans to do; and, most commonly, people have been afraid of technology before and it has always created new jobs and new work. People like taxi drivers and truck drivers will simply have to get retrained. But the problem with the discussion is not that one side is wrong; rather, it is that we are having two different conversations. The miscommunication lies in timeframe. All of the arguments for jobs NOT disappearing are short-run economic arguments. My argument is a long-run argument. I have argued here:

Future of Work — why are we struggling to understand the disruption? (medium.com)

The summary is that, in the long run, there are precious few skills that AI and automation will not perform better than a human. As a result, the only jobs that will remain for humans are those where the human controls the capital/labor decision or where the job values a human for their actual humanness. In that scenario, unemployment is well over 80% and society is dramatically changed. So, if you accept my long-run argument, you would be right to wonder how we get from here to there. That is what this post is about. I believe that highlighting this long-run process will help calm some short-term fear and help people better understand technological unemployment.

Summary points:

  1. Companies will put off hiring in order to absorb the workers displaced by automation
  2. We will NOT see significant layoffs related to automation, and this will be fuel for the technological-unemployment naysayers, who inaccurately use ATMs as their argument for how technology creates jobs
  3. Job growth and hiring will continue to slow over the coming decade (the rate of job creation has already decreased by more than 50%, while population growth has remained fairly steady)
  4. New graduates will continue to see a good job market, as they are the most likely to have current skills
  5. 40- and 50-somethings will become the unemployed, as technological skills pass them by
  6. 40- and 50-somethings will fuel the rise of the “part-time” worker, the consultant and the jack-of-all-trades, counted by the BLS as marginally attached (U6) unemployed, or not measured at all
  7. Economic shocks will result in higher levels of layoffs, similar to the 2008-09 financial crisis and in contrast to the previous 40 years of market shocks
  8. U6 (BLS — total unemployed plus marginally attached workers) becomes the key to understanding true unemployment. This number currently stands at 8.6% and hit a high of 17.1% during the 2008-09 financial crisis

Let’s use Walmart as our example to understand a few of these points. Ignoring whatever other innovation may occur in the near future, let’s start the clock with Walmart moving to autonomous trucks. Here’s the likely process.

Step 1 — autonomous driving, but still with a driver, in a few test cases. This is standard practice for any company, especially when big changes are made: small-scale tests for a fixed period of time to determine scalability, feasibility and roll-out procedures. No one loses a job.

Step 2 — Step 1 was a perfect success, and the plan is to replace all truck drivers with fully autonomous tractor-trailers. More than 7,000 people out of work, right? No, you’d be wrong. Companies hate to fire people. It is bad press, and it can damage their brand; it makes them look profit-centric instead of customer-oriented. So instead, Walmart will announce the move with big publicity to highlight the savings, and at the same time they will announce that all 7,000 employees will remain and be retrained.

How does this work? Essentially, the company will go on a semi-hiring freeze. They are now overstaffed by 7k people, but a company like Walmart turns over about 50k employees per year. So if they hire only 43k in a given year, that is something people will hardly notice. In fact, they can put out a press release highlighting that they hired 43k people that year. The problem, though, is that they hired 7k FEWER than they would have or should have.
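In code, the absorption mechanism looks like this; the turnover and displacement figures are this post’s illustrative numbers, not Walmart data:

```python
# A toy model of the "semi-hiring freeze" described above, using the
# post's illustrative numbers rather than actual Walmart data.

annual_turnover = 50_000   # employees who leave in a normal year
displaced = 7_000          # drivers replaced by autonomous trucks

hires_needed = annual_turnover - displaced
print(f"Hire {hires_needed:,} instead of {annual_turnover:,}")
# -> Hire 43,000 instead of 50,000. Headcount stays flat and no
# layoff makes headlines, yet 7,000 jobs that would have been
# filled never appear in the economy.
```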

So that’s how it starts. Technological unemployment begins with a reduction in hiring, and we’ve been experiencing it for a decade already. When that happens at company after company, hiring will stagnate first. However, it will be difficult to see: new graduates will continue to be hired, especially if the education system is able to tailor its programs to match the demands of a dynamic marketplace. The Bureau of Labor Statistics data backs this up.

The rate of job creation has slowed pretty dramatically. From 1970–2000, the US consistently created about 2 million jobs per year. Since 2000, that rate has fallen to less than 1 million per year, while population growth has held steady at about 2.55 million per year. This is the hidden acceleration of automation: to grow profits and grow businesses, companies need fewer and fewer employees to be successful.
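A quick back-of-the-envelope calculation shows the size of the gap those round numbers imply (they are this post’s figures, not an official BLS series):

```python
# A back-of-the-envelope look at the hiring slowdown, using the
# post's round numbers rather than an official BLS series.

jobs_per_year_post_2000 = 1.0e6   # <1M net new jobs/yr since 2000
population_growth = 2.55e6        # ~2.55M additional people/yr

gap = population_growth - jobs_per_year_post_2000
print(f"Annual shortfall vs. population growth: {gap / 1e6:.2f}M")
# -> 1.55M per year. Over a decade that compounds to 15M+ people
# for whom no new job was created: the "hidden acceleration".
```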

Consider the unemployment rate for new college graduates. Even during the financial crisis of 2008-09, when we experienced peak near-term unemployment, the unemployment rate for college graduates was never over 5%. As long as universities continue to provide current, in-demand skills, graduates will find work. This also makes intuitive sense: students, by definition, have time to learn, whereas under our current structure education is not built into our day-to-day jobs. This makes workforce obsolescence a real issue. (See my blog post on lifetime learning.)

Lifetime Learning — a Necessity Already (medium.com)

So this sets things up to become generational. It will be the 40-somethings and 50-somethings who increasingly find it difficult to find new work as technological skills pass them by. More and more people in the middle of their careers will be forced into entrepreneurial endeavors and part-time work. Often these arrangements, like self-employment and many IRS 1099 relationships, are not eligible for unemployment benefits. These people, unemployed or underemployed, are not counted in any of the surveys; essentially, they fall out of the workforce. Their number will increase over the coming years. I actually expect the Bureau of Labor Statistics to refine its labor-participation measurement to try to capture these displaced workers.

This plays out in a different way. Over the past 40 years we have experienced market meltdowns and crises of various magnitudes, but the financial crisis of 2008-09 was probably the most poignant. The U6 rate jumped from 7.5% to 17.1% as the effects of the crisis enabled companies to actually eliminate people, PR be damned. That rise is unprecedented in the post-Depression employment era. I believe that future crises will be used the same way, as a significant form of layoffs, and I expect this kind of jump in U6 to be more the norm than an outlier. It has taken the US economy nearly a decade of growth just to return to the mean U6 level of the previous 20 years. You can expect the U6 number to be extremely sensitive to market shocks, and I’d expect a general upward trend in U6 over the coming two decades. This is where unemployment shows itself. I expect that by the 2030s, unless they change the calculation, U6 unemployment will be consistently around 20%.

In the end, over the coming generation, there will be fewer jobs: less full-time work and increasing replacement of human labor with automation. It will be an insidious creep toward higher functional unemployment. Hopefully it will not be so subtle as to be neglected by the appropriate policy response, or misunderstood by the technological-unemployment naysayers.

Tech In Your Body - Let The Seduction Begin

https://www.ted.com/talks/tom_gruber_how_ai_can_enhance_our_memory_work_and_social_lives?utm_campaign=&utm_medium=on.ted.com-static&utm_content=awesm-publisher&awesm=on.ted.com_IndustrialInternet&utm_source=t.co

Please watch that video as an introduction to this blog post. It’s 9+ minutes, which I recognize will instantly turn most of you off from reading the rest of this: the TL;DR (Too Long; Didn’t Read, for those not familiar with the term) problem. But for those of you who choose to stay, here’s what I took away from it:

  1. This is some smooth presenting. He introduces technology that so clearly helps people who are disabled live better lives.
  2. He does a great job of highlighting the fallibility of memory and the associated downside of human memory
  3. He makes you disappointed in yourself for your failures, namely forgetfulness of all kinds

So what are my concerns with having tech in your body or, most importantly, in your brain?

  1. You will be hackable. Seriously. Someone at some point will be able to control your mind, how you think and how you act. You may doubt this, but there are those who believe that the voting population of the US was manipulated in the last election, and that didn’t even involve embedded technology.
  2. Your every thought will now exist somewhere on a hard drive. Imperfect memory is a gift, not a liability. As much as we wish we remembered the good memories, it’s the forgotten bad memories that are probably more important to our mental health.
  3. Talk about privacy — this is the definition of NO privacy. Not only will we be surrounded by thousands of instruments listening and recording everything we do with the IoT, but now it’s in our head and our memory
  4. Imagine if you are opposed to the government on ANY level. James Comey recently said, you have NO privacy. Could the government snoop around in your head? Is your head now admissible evidence against you?
  5. The wealthiest companies and the most skillful hackers will ALWAYS have access to you, your brain, your memories. It is impossible for security to stay ahead of countries, companies and nefarious actors.
  6. Apple wants you to buy their tech. Apple is a company, driven by profit. They aren’t trying to fix the world. If some customers feel better off, they enjoy those stories and make sure that the world knows about them, but in the end, Apple is a company and companies are beholden to their shareholders. This is about profit. Interestingly, sometimes these companies fool themselves, convincing themselves of their altruism, but it won’t last. The bottom line eventually drives all decisions.
  7. This will dramatically increase income inequality. Only the wealthiest will find this technology available to them initially, and most likely it will make them wealthier.
  8. Those uncomfortable with putting tech in their bodies will become a persecuted minority

Transhumanism and augmentation are not new, and work on prostheses connected to the brain has been improving dramatically. Take Professor Hugh Herr of MIT: stranded on Mt. Washington, he lost the lower half of both legs and worked for decades to develop prosthetic legs for himself which he can control with his mind. He has received numerous awards and accolades for his work, and it is amazing. Any time a human being loses some of their abilities, whether through birth or accident, and is able to have them restored to human-level function, it is a glorious story. If only it stopped there. Read this excerpt from a recent article on Prof. Herr written by Sally Helgesen, published in Strategy + Business:

But the scope of Herr’s interests and ambitions takes him beyond the desire to simply redress lost function. He’s become an evangelist for the notion that augmentation can also be used to expand capacity for those with intact bodies and functioning minds: wearables that enable human eyes to see infrared waves; tools that permit individuals to design and sculpt their own bodies, either for aesthetic reasons or to enhance athletic or professional performance. — Sally Helgesen, on Prof Hugh Herr

And this is where I get uncomfortable. Likely this is my own discomfort, as I believe that many will choose to augment themselves in the future. Many people want to be superhuman, even if only in a narrow function. It’s a seductive offer; people like to be better than others. Oh, there is one catch: Prof. Herr’s prostheses cost many millions of dollars, but I’m sure everyone has that lying around.

I am not opposed to augmentation or transhumanism. I can’t stop it. Maybe I should want to stop it based on these concerns, but I recognize that people have a right to make these decisions for themselves. The disabled can’t be prevented from being returned to human-standard function levels, and if Prof. Herr wants to make super climbing legs for himself, he has that right. And just in case you think brain tech is a generation away, please note that it already exists and is being enhanced daily.

Spinal Injury Patients Could Regain Mobility Through Brain-Computer Interfaces (futurism.com)

I am concerned that as technology moves inside our bodies, we won’t be able to distinguish natural humans from cyborgs when we legally need to, whether for athletic competitions, like Oscar Pistorius at the London Olympics, or for Jeopardy! in the future. Certainly a person with a perfect memory (or a bionic thumb) can’t be allowed to compete with a natural human; the competition would be a farce. These examples are fairly whimsical, but I am using them rather than the more disturbing versions, in which natural humans and augmented humans will need to know who is who.

I am further concerned about increasing income inequality. Augmentation today and in the near future will only be for the wealthiest. The newest class of brain enhancements will be expensive; it is the nature of R&D and the capitalist model that makers will price them to recoup their costs. So will we be helping humanity, or will the rich be extending their lead? I suspect you know the answer to that question. Imagine the parent about to send their child off to university, already committed to spending hundreds of thousands of dollars on that education. Won’t you feel compelled to purchase that “perfect memory” device from Apple to ensure that investment gets maximized? What about the competition from other students, who are likely getting their “perfect memories”? Are you the cheap parent who can’t afford to get your child the very best? Are you trying to let him/her fail? It’s a beautiful sales model, and one that will certainly work. I think the risks to your humanity are too great.

Lastly, I am concerned that transhumanism will create two classes of humans: the augmented and the natural. The augmented will be superior to the natural, and we have numerous examples in history of how a population that believes itself superior treats those it deems inferior. This is the genuine problem, because it will divide humanity. It has to. If you can’t compete at work, you probably won’t have work. The lives of the super-human will be better, and they will be in control. Could a natural human be forced to augment? Told it is for your own good, that it is the next phase in human evolution? Eventually, for failure to augment, you might be deemed worthless and unemployable. Could you be outcast? I know this sounds like science fiction, but I disagree. This is an extension of human relationships since time immemorial. Superior groups have a long track record of overwhelming weaker entities, whether based on military might (the Greeks), organization (the Romans), the Enlightenment/colonization/Age of Discovery (Western Europe), nationalism (Nazi Germany) or economic strength (the US); superior forces impose their will on the weak, embarrassingly often “for their own good.” It is not difficult to see how it plays out. Science and technology already feel superior; you can hear it in Mr. Gruber’s talk. Certainly I am talking about nothing different or unique here, except that it is a forecast. I argue this is an obvious course of events; time is the only variable. For further thoughts on this division of humanity, you can read my previous blog post:

Technology + The Human Body = Insurmountable Societal Challenge? (medium.com)

ForHumanity believes in the right to augment, but our most important efforts will be to create a world in which natural humans can thrive, and I suspect I will spend the rest of my life fighting an uphill battle for that. Each of you has the right to augment, especially if you are augmenting to return to human-level standards. But once augmentation goes beyond the natural human, as science and technology are planning, then we have a problem. I’ve enumerated a series of concerns without highlighting the positives; I felt Mr. Gruber did an excellent job on those. But his sales pitch was so smooth that it made me distinctly uncomfortable, and I needed to explain why. I hope this helps you consider tech in your body and make an informed decision when the time comes.