automation

ForHumanity AI and Automation Awards 2017

Top AI breakthrough

AlphaGo Zero — game-changing machine learning, achieved by removing the human from the equation

https://www.technologyreview.com/s/609141/alphago-zero-shows-machines-can-become-superhuman-without-any-help/

This is an incredibly important result for a series of reasons. First, AlphaGo Zero learned to master the game with ZERO input from human players or prior games; it trained knowing only the rules, demonstrating that it can learn better and faster with NO human involvement. Second, the implication for future AI advancement is that humans are likely "in the way" of optimal learning. Third, its successor, AlphaZero, went on to master chess in about four hours, demonstrating an adaptability that has dramatically increased the speed of machine learning. DeepMind now has over 400 PhDs working on Artificial GENERAL Intelligence.

Dumbest/Scariest thing in AI in 2017

Sophia, an AI robot made by Hanson Robotics, granted citizenship in Saudi Arabia — a VERY negative domino effect

http://www.arabnews.com/node/1183166/saudi-arabia#.WfEXlGXM2oI.twitter

The implications associated with "citizenship" for machines are far-reaching. Citizenship in most places includes the right to vote, the right to receive social services from the government and the right to equal treatment. The world is not ready, and may never be ready, for machines that vote or machines that are treated as equals to human beings. While this was an AI stunt, its impact could be devastating if played out.

Most immediately Impactful development — Autonomous Drive vehicles

Waymo reaches 4 million actual autonomous test miles

https://medium.com/waymo/waymos-fleet-reaches-4-million-self-driven-miles-b28f32de495a

Autonomous cars progressing to real world applications more quickly than most anticipated

https://arstechnica.com/cars/2017/10/report-waymo-aiming-to-launch-commercial-driverless-service-this-year/

The impact of autonomous drive vehicles cannot be overstated, from the devastation of jobs in the trucking and taxi industries to the freedom of mobility for many who are unable to drive. Autonomous driving is also likely to result in significantly fewer deaths than human driving. This impact carries through to the auto insurance industry, where KPMG reckons that 71% of the business will disappear in the coming years. Centralized control over the movement of people is another second-order consideration that few have concerned themselves with.

Most impactful in the future — Machine dexterity

Here are a few amazing examples of the advancement of machine dexterity. As machines become able to move and function the way humans do, their ability to replicate our functions increases dramatically. While this dexterity is being developed, work is also being done to let machines learn "how" to move like a human being nearly overnight, instead of through miles and miles of code — machines teach themselves from video and virtual reality.

Boston Dynamics' Atlas robot approaches human-level agility, including backflips — short video, easy watch

https://www.youtube.com/watch?v=fRj34o4hN4I&feature=youtu.be

Moley introduces the first robotic kitchen –

https://www.youtube.com/watch?v=QDprrrEdomM&feature=share

Machines are being taught human-level dexterity with simulated learning algorithms — accelerating learning and approximating human processes

https://www.facebook.com/futurismscience/videos/816173055250697/?fref=mentions

Most Scary — Science Fiction-esque development

North Dakota State students develop self-replicating robot

https://www.manufacturingtomorrow.com/news/2017/07/19/ndsu-students-develop-3d-printing-self-replicating-robot/10034/

The idea of machines being able to replicate themselves is a staple of dystopian sci-fi movies; however, there are many practical reasons for machines to replicate themselves, which is what the researchers at North Dakota State were focused on. With my risk management hat on for AI safety, though, it raises a whole new set of rules and ethics that need to be considered, and neither the industry nor the world is prepared for them.

Most Scary AI real-world issues NOW

Natural Language processing finds millions of faked comments on FCC proposal regarding Net Neutrality

https://futurism.com/over-million-comments-backing-net-neutrality-repeal-likely-faked/

AI being used to duplicate voices in a matter of seconds

https://www.digitaltrends.com/cool-tech/ai-lyrebird-duplicate-anyones-voice/

Using AI, videos can be made to make real individuals appear to say things that they did not…

https://futurism.com/videos/this-ai-can-create-fake-videos-that-look-real/

Top AI Safety Initiatives

Two initiatives share the top spot, mostly because of their practical applications in a world that remains too heavily devoted to talk and inaction. These efforts are all about action!

AutonomousWeapons.org produces Slaughterbots video

This video is an excellent storytelling device for highlighting the risk of autonomous weaponry. Furthermore, there is an open letter that all humans should support; please do so. Over 2.2 million views so far.

https://youtu.be/9CO6M2HsoIA

Ethically Aligned Design v 2.0

John Havens and his team at IEEE, the technology standards organization, have collaborated extremely successfully with over 250 AI safety specialists from around the world to develop Ethically Aligned Design (EAD) v 2.0. It is the beacon on which AI and Automation ethics implementations should be built. More importantly, anyone who rejects the work of EAD should immediately be viewed with great skepticism and concern as to their motives.

http://standards.ieee.org/develop/indconn/ec/ead_v2.pdf

AI and Automation - Managing the Risk to Reap the Reward

Personally, I have been hung up on Transhumanism. For those of you not familiar, Transhumanism is the practice of embedding technology directly into your body — merging man and machine, a subset of AI and Automation. It seems obvious to me that, when given the chance, some people with resources will enhance or augment themselves in order to gain an edge at the expense of the human beings around them. It's really the same concept as trying to get into a better university, or training harder on the athletic field. As a species we are competitive, and if something is important enough, people will do what it takes to win. People will always use this competitive fire to gain an upper hand on others. Whether it is performance-enhancing drugs in sports or paid lobbyists in politics, money, wealth and power create an uneven playing field. Technology is another tool, and some, if not many, will use it for nefarious purposes, I am certain.

But nefarious uses of AI are only part of the story. When it comes to transhumanism, there are many upsides: the chance to overcome Alzheimer's, to return the use of lost limbs, to restore eyesight and hearing. There are many places where technology in the body can return the disabled to normal human function, which is a beautiful story.

So in that spirit, I want to shine a light on all of the unquestionable good that we may choose to do with advances in AI and automation. Because when we have a clear understanding of both good outcomes and bad outcomes, then we are in a position to understand our risk/reward profile with respect to the technology. When we understand our risk/reward profile, we can determine if the risks can be managed and if the rewards are worth it.

Let’s try some examples, starting with the good:

  1. AI and Automation may enhance the world’s economic growth to the point where there is enough wealth to eradicate poverty, hunger and homelessness.
  2. AI & Automation will allow researchers to test and discover more drugs and medical treatments to eliminate debilitating diseases, especially diseases that are essentially a form of living torture, like Alzheimer's or cerebral palsy.
  3. Advancement of technology will allow us to explore our galaxy and beyond
  4. Advancement in computing technology and measurement techniques will allow us to understand the physical world around us at even the smallest, building block level
  5. Increase our efficiency and reduce wasteful resource consumption, especially carbon-based energy.

At least those five things are unambiguously good. There is hardly a rational thinker around who will dispute those major advancements and their inherent goodness. It is also likely that the sum of that goodness outweighs the sum of any badness that we come across with the advancement of AI and Automation. Why am I confident in saying that? Because it appears to have been true for the entirety of our existence. The reason technology marches forward is that it is viewed as inherently good, inherently beneficial and, in most cases, as problem solving.

But here is where the problem lies and my concern creeps in. AI has enormous potential to benefit society, and as a result it has significant power. Significant power comes with significant downside risk if that power is abused.

Here are some concerns:

  1. Complete loss of privacy (Privacy)
  2. Embedded bias in our data, systems and algorithms that perpetuates inequality (Bias)
  3. Susceptibility to hacking (Security)
  4. Systems that operate in, and sometimes take over, our lives without sharing our moral code or goals (Ethics)
  5. Annihilation and machine takeover, if at some point our species is deemed useless or, worse, a competitor (Control and Safety)

There are plenty of other downside risks, just as there are plenty of additional upside rewards. The mission of this blog post is not to calculate the risk/reward scenarios today, but rather to highlight for the reader that there are both Risks and Rewards. AI and Automation aren't just awesome; there are costs, and they need to be weighed.

My aim over the 12+ months that I have considered the AI risk space has been very simple, but maybe poorly expressed. So let me try to set the record straight with a few bullet points:

  1. ForHumanity supports and applauds the advancement of AI and Automation. ForHumanity also expects technology to advance further and faster than the expectation of the majority of humanity.
  2. ForHumanity believes that AI & Automation have many advocates and will not require an additional voice to champion their expected successes. There are plenty of people poised to advance AI and Automation
  3. ForHumanity believes that there are risks associated with the advancement of AI and Automation, both to society at-large as well as to individual freedoms, rights and well-being
  4. ForHumanity believes that there are very few people who examine these risks, especially amongst those who develop AI and Automation systems and in the general population
  5. Therefore, to improve the likelihood of the broadest, most inclusive positive outcomes from the advancement of technology, ForHumanity focuses on identifying and solving the downside risks associated with AI and Automation

When we consider our measures of reward and the commensurate risk, are we accurately measuring both? My friend and colleague John Havens at IEEE has been arguing for some time now that our measures of well-being are too economic (profit, capital appreciation, GDP) and neglect harder-to-measure concepts such as joy, peace and harmony. The result is that we may evaluate the risk/reward profile incorrectly, reach the wrong conclusions and charge ahead with our technological advances. It is precisely at that moment that my risk management senses kick in. If we can eliminate or mitigate the downside risks, then our chance to be successful, even if poorly measured, increases dramatically. That is why ForHumanity focuses on the downside risk of technology. What can we lose? What sacrifices are being made? What does a bad outcome look like?

With that guiding principle, ForHumanity presses forward to examine the key risks associated with AI and Automation: Safety & Control, Ethics, Bias, Privacy and Security. We will try to identify the risks and the solutions to those risks, with the aim of providing each member of society the best possible outcome along the way. There may be times when ForHumanity comes across as negative or opposed to technology, but in the vast majority of cases that is not true; we are simply focused on the risks associated with technology. The only time we will be negative on a technology is when the risks overwhelm the reward.

We welcome company on this journey and certainly could use your support as we go. Follow our Facebook feed ForHumanity, Twitter @forhumanity_org and, of course, the website at https://www.forhumanity.center/

Ban Autonomous AI Currency/Capital Usage

The current and rightful AI video rage is the Slaughterbots video, ingeniously produced by Stuart Russell of Berkeley and the good people at autonomousweapons.org. If you have not watched it AND reacted to support their open letter, you are making TWO gigantic mistakes. Here is the link to the video.

https://youtu.be/9CO6M2HsoIA

As mentioned, there are good people fighting the fight to keep weapons systems from being fully autonomous, so I won't delve into that further here. Instead, it reminded me that I had not yet written a long-conceived blog post about autonomous currency/capital usage. Unfortunately, I don't have a Hollywood-produced YouTube video to support this argument, but I wish that I did, because the risk is probably just as great and far more insidious.

The suggested law is that no machine should be allowed to use capital or currency based upon its own decision. As in the case of autonomous weaponry, I am arguing that all capital/currency should, 100% of the time, be tied to a specific human being, to a human decision maker. To be clear, I am not opposed to automated bill pay and other forms of electronic payment. In all of those cases, a human has decided to dedicate capital/currency to meet a payment and to serve a specific purpose. There is human intent. Further, I recognize the merits of humans being able to automate payments and capital flows. This ban is VERY specific: it bans a machine that uses autonomous logic to reach its own conclusions from controlling a pool of capital it can deploy without explicit consent from a human being.

What I am opposed to is machine intent. Artificial Intelligence is frequently goal-oriented. Rarer is an AI that is goal-oriented and ethically bound. Rarer still is an AI that is goal-oriented with societal goals as part of its calculus. Since an AI cannot be "imprisoned", at least not yet, and it remains to be seen whether an AI can actually be turned "off" (see the YouTube video below to better understand the AI "off button problem"), this ban is necessary.

https://youtu.be/3TYT1QfdfsM

So, without the Hollywood fanfare of Slaughterbots, I would like to suggest that the downside risk associated with an autonomous entity controlling a significant pool of capital is just as devastating as the potential of Slaughterbots. Some examples of the risk:

  1. Creation of monopolies and subsequent price gouging on wants and needs, even on things as basic as water rights
  2. Consume excess power
  3. Manipulate markets and asset prices
  4. Dominate the legal system, fight restrictions and legal process
  5. Influence policy, elections, behavior and thought

I understand that many readers have trouble assigning downside risk to AI and autonomous systems. They see the good that is being done. They appreciate the benefits that AI and automation can bring to society, as do I. My point of view is a simple one: if we can eliminate and control the downside risk, even if the probability of that risk is low, then our societal outcomes are much better.

The argument here is fairly simple. Humans can be put in jail. Humans can have their access to funds cut off by freezing bank accounts or limiting access to the internet and communications devices. We cannot jail an AI, and we may be unable to disconnect it from the network when necessary. So the best method to limit growth and rogue behavior is to keep the machine from controlling capital at the outset. I suggest the outright banning of direct and singular ownership of currency and tradeable assets by machines; all capital must have a beneficial human owner. Even today, implementing this policy would have an immediate impact on the way AI and automated systems are implemented. These changes would make us safer and reduce the risk that an AI might abuse currency/capital as described above.

The suggested policy already faces challenges that must be addressed. First, cryptocurrencies may already be beyond an edict such as this, since there is no ability to freeze cryptos. Second, most global legal systems allow for corporate entities, such as corporations, trusts, foundations and partnerships, which HIDE or at least obfuscate the beneficial owners. See the Guidance on Transparency and Beneficial Ownership from the Financial Action Task Force (2014), which highlights the issue:

  1. Corporate vehicles — such as companies, trusts, foundations, partnerships, and other types of legal persons and arrangements — conduct a wide variety of commercial and entrepreneurial activities. However, despite the essential and legitimate role that corporate vehicles play in the global economy, under certain conditions, they have been misused for illicit purposes, including money laundering (ML), bribery and corruption, insider dealings, tax fraud, terrorist financing (TF), and other illegal activities. This is because, for criminals trying to circumvent anti-money laundering (AML) and counter-terrorist financing (CFT) measures, corporate vehicles are an attractive way to disguise and convert the proceeds of crime before introducing them into the financial system.

The law that I suggest implementing is not a far leap from the existing AML and Know Your Customer (KYC) compliance that most developed nations already enforce. In fact, that would be the mechanism of implementation: creating a legal requirement for human beneficial ownership while excluding machine ownership. Customer Due Diligence and KYC rules would include disclosures on human beneficial ownership and on applications of AI/Automation.
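
To make that mechanism concrete, here is a minimal, purely hypothetical sketch (in Python) of the kind of check a KYC/beneficial-ownership system could run under such a rule. None of the names or fields below come from any real compliance system; they are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class BeneficialOwner:
    name: str
    is_human: bool        # the heart of the proposed rule
    ownership_pct: float

@dataclass
class Account:
    account_id: str
    owners: list = field(default_factory=list)
    uses_autonomous_ai: bool = False   # disclosed use of AI/Automation to deploy the capital

def passes_human_ownership_rule(account: Account) -> bool:
    """Hypothetical check: every pool of capital must trace back to at least
    one human beneficial owner, regardless of how automated its deployment is."""
    return any(owner.is_human for owner in account.owners)

# An account whose only "owner" is an autonomous agent would fail the check.
bot_account = Account(
    account_id="EXAMPLE-001",
    owners=[BeneficialOwner(name="AutonomousTrader", is_human=False, ownership_pct=1.0)],
    uses_autonomous_ai=True,
)
print(passes_human_ownership_rule(bot_account))  # False -> flagged under the proposed rule
```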

Unfortunately, creating this ban is only a legal measure. There will come a time when criminals use AI and Automation to deploy currency/capital nefariously. This rule needs to be in place to allow law enforcement to attempt to shut such a system down before it becomes harmful. Think about the Flash Crash of 2010, which erased roughly $1 trillion of stock market value in little more than a few minutes. That is an example of the potential damage that could be inflicted on a market or on prices, and, believe it or not, the Flash Crash of 2010 is probably a small, gentle example of the downside risk.

One might ask who would be opposed to such a ban. There is a short and easy answer to that: Sophia, the robot created by Hanson Robotics (and the small faction of people already advocating for machine rights), which was recently granted citizenship in Saudi Arabia and now "claims" to want to have a child. If a machine is allowed to control currency, with itself as a beneficial owner, then robots should be paid a wage for labor. Are we prepared to hand over all human rights to machines? Why are we building these machines at all if we don't benefit from them and they exist of their own accord? Why make the investment?

World's first robot 'citizen' says she wants to start a family
JUST one month after she became the world's first robot to be granted citizenship of a country, Sophia has said that…www.news.com.au

Implementing this rule within KYC and AML regulations is a good start to controlling the downside risk from autonomous systems, and it is virtually harmless to implement today. If and when machines are sentient and we are discussing granting them rights, we can revisit the discussion. For now, however, we are talking about systems built by humanity to serve humanity. Part of serving humanity is controlling and eliminating downside risk. Machines do not need the ability to control their own capital/currency. This is easy, so let's get it done.

Robot Tax (No), Sovereign Wealth Fund (Yes)


Let's talk for a minute about the Robot Tax. Recently it has become a quick-fix concept, espoused by Bill Gates and others, and in San Francisco and South Korea there is already talk of implementation. It's the wrong idea, unless the United States truly wants to hinder robot implementation, in which case let's hope the rest of the world agrees to the exact same policy at the exact same moment.

The idea of a "robot tax" comes from the fear of Technological Unemployment: machines replacing human workers at many jobs without an offsetting number of replacement jobs. I find this fear to be very reasonable, as I have argued here: https://medium.com/@ForHumanity_Org/future-of-work-why-are-we-struggling-to-understand-the-disruption-e7661a2a0a81.

I also recognize that not everyone thinks Technological Unemployment is a problem. However, there is one thing I do know. If Tech Unemployment is not a problem, then we will end up in a good place, with minimal risk and an economy that has grown more efficient, creating many high-paying jobs. If, however, Tech Unemployment does come to pass, then we will be faced with some major societal challenges. Upside gain = POSITIVE, downside risk = VERY NEGATIVE. So, with my risk management hat on, this blog post attempts to deal with the downside risk of Technological Unemployment by tackling the issue of a "robot tax" and introducing an alternative solution: a United States Sovereign Wealth Fund (USSWF).

To begin, let's examine some key assumptions about how Technological Unemployment may occur, recognizing that this is not a certain outcome. There are those who even argue that it won't happen at all.

  1. Technological unemployment will be a process that takes years if not decades.
  2. We don’t know how and when people will become displaced. This is true at a company level, a sector level and across the economy as a whole, which I discussed here https://medium.com/@ForHumanity_Org/the-process-of-technological-unemployment-how-will-it-happen-489e2f8b037c
  3. We don’t know what new jobs will be created
  4. There is an implicit concern about rising income inequality as the owners of capital reap the rewards of automation, while labor receives a decreasing share — approaching zero
  5. There is a belief that technological unemployment will be a byproduct of significant growth in economic production and subsequent wealth attributable to the implementation of AI and Automation.

Technological Unemployment, as discussed in detail by Oxford (2013), PwC (2016) and McKinsey (2017), and referenced directly by the White House report on AI and Automation (October 2016), is estimated to result in as much as 30–40% unemployment. Many people believe that this possibility is real, and as a result they are concerned about supporting the unemployed. This is the genesis of the "robot tax" concept. Here are some concerns about the idea:

  1. It picks winners and losers based on how easily things are automated
  2. It creates a huge competitive advantage for countries/jurisdictions that do not tax robots. Most companies competing in AI and Automation are global in nature. Machine implementation will simply be shifted to other jurisdictions where the tax impact is minimized, and so will the wealth creation associated with that machine implementation. Apple is a perfect example, with nearly $250 billion in cash overseas; nearly all of that offshore cash is a function of tax issues. This point is not debatable; it is a corporate responsibility to minimize taxes legally, just as Apple has done. There is a significant, tangible history of corporations shifting production to jurisdictions where the tax treatment is favorable. A robot tax punishes companies for innovating and will drive wealth creation out of the United States.
  3. Bureaucracy. Companies rarely "flip the switch to automation" on a Friday and ask a set of employees to stop coming in on Monday. Therefore, identifying human replacement by machines will be difficult, and policing a "robot tax" would require significant manpower. If you are like me, when you see something that is difficult to police, you probably also imagine a large bureaucracy trying to police it. We already have the IRS; do we really need an IRTS (Internal Robot Tax Service) too? I suspect that is something most people would like to avoid. Taxes are difficult to implement on all companies, public and private.
  4. Timing and measurement mismatch — another problem with the "robot tax" concept is timing. For example, Company A automates Person A out of a job and is therefore subject to a tax. Company A has its profit and capital reduced to pay the tax, but if this happens tomorrow, it is reasonably likely that Person A finds another job. Now the tax has reduced the capital allocation capability of the growing company and wasn't put to good use supporting Person A, who didn't require the funds. Instead, it sits with the government, which is surely an inefficient allocation of capital.

On top of all of this, no one has laid out the entire structure of a "robot tax". Some have suggested banning the replacement of jobs by machines. Others have suggested a tax without much detail. In South Korea, where reports came out that the first "robot tax" had been enacted, it turned out to be all hype: the actual implementation was a decrease in tax INCENTIVES for the makers of automation.

In the end, I suspect a Robot Tax is simply a reflexive action suggested by people who are rightfully concerned but have no other alternatives to address their concern. Ideally, this paper and the USSWF address those issues and prove to be a better tool for tackling the challenges.

Look, I get the theory: if you want to grow your company and make more profits for yourself by automating, let's tax those profits, because you are removing jobs from the workforce. However, a tax is punitive and redistributional. Why not participate with the company in its wealth creation on behalf of the displaced worker? Practically speaking, if a worker were laid off by Google, Amazon or Apple but compensated in stock for their termination, there is a decent chance that the capital gains on the stock would offset the lost wages.

If you believe the concerns about Technological Unemployment and that a significant amount of work will be replaced by automation, then we do need to prepare for higher, permanent unemployment. So what should be done? The answer, from my perspective, is the series of solutions listed below (of course, this is no small challenge and requires many policies to try to mitigate the risks). This list amounts to a comprehensive AI, Automation and Tech policy for the United States (where I have covered an item previously, I include the link), but for the rest of the piece we will concentrate on point #5:

  1. Lifetime learning to ensure we have the most nimble and current workforce available https://medium.com/@ForHumanity_Org/lifetime-learning-a-necessity-already-2e96807251db
  2. A laissez-faire regulatory environment (as discussed by Andrea O'Sullivan and others) that encourages leadership in the area of AI and Automation (one could argue this exists today, but it is unlikely to remain so lenient, which may eventually hinder the expansion) https://www.technologyreview.com/s/609132/dont-let-regulators-ruin-ai/
  3. Independent, market-driven audit oversight of key inputs to AI and Automation, such as ethics, standards, safety, security, bias and privacy. A model not dissimilar to credit ratings https://medium.com/all-technology-feeds/ai-safety-the-concept-of-independent-audit-370bb45c01d
  4. A plan for shifting current welfare and Social Security into a Universal Basic Income https://medium.com/@ForHumanity_Org/universal-basic-income-a-working-road-map-66d4ccd8a817, coupled with an increase in taxes on the wealthy to make the UBI funding budget neutral. In the future, NOT TODAY.
  5. Finally, a Sovereign Wealth Fund for the United States (USSWF), designed to maximize the benefits of the growth and expansion on behalf of all US citizens equally.

Since this post introduces the Sovereign Wealth Fund as a means toward maximizing the benefit of the comprehensive technology policy listed above, we will remain focused on it; the other elements of the policy remain open for debate elsewhere.

Returning to the idea of Technological Unemployment, we can all agree it would be a process and would take some time to occur. During that process, there are a few things that the United States should prefer to have happen. Here is a list of those positive outcomes:

  1. We want the United States to capture/create the maximum amount of the wealth creation from this process
  2. We want to be in a position to protect displaced workers immediately when they are no longer relevant to the workforce
  3. With regard to cutting-edge skills, we want citizens of the United States to be in a position to possess the requisite skills for new-age work.
  4. We want to begin to build a war chest of resources for the time when significant unemployment becomes more permanent (Sovereign Wealth Fund)
  5. As a country, we want to be rewarded for creating a favorable climate for automation and artificial intelligence

So, with those goals in mind, we can examine how a sovereign wealth fund can help achieve them and maximize the benefits to all citizens from the expected increase in AI and automation.

To be clear, the funding for a US Sovereign Wealth Fund (USSWF) is effectively a tax, so this is somewhat of a semantic argument, but the asset-based funding and goal orientation of the USSWF are what differentiate it from the current discussions about a "robot tax".

Sovereign Wealth Fund implementation, some of the highlights

  1. The USSWF immediately receives a 1% common stock issuance from every company in the S&P 500, followed by a 0.25% issuance each year for 16 years (reaching the 5% maximum referenced below; see the sketch after this list).
  2. Every new company admitted into the S&P 500 must immediately issue this 1% common stock into the fund and begin the annual contributions
  3. All shares included in the fund are standard voting shares with liquidity available from the traditional markets trading the underlying company common stock
  4. The USSWF is overseen by the US Congress
  5. The USSWF is controlled by a panel of 50 commissioners, one from each state, appointed by the governor of each state. The commissioner must be from a political party that the governor is NOT a member of. These are attempts to depoliticize the body and to identify citizens to serve the country.
  6. The USSWF exists for the benefit of all Americans equally, with a primary concentration of providing welfare/basic income support to US citizens
  7. The USSWF is a lock box and not part of the US Federal Government budget. The US Federal Government may appeal to the USSWF for funding associated with the mission of the USSWF
  8. Simplicity — implementation is easy, automatic and systematic
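
To make the funding schedule concrete, here is a minimal sketch (Python, illustrative only) of the cumulative USSWF stake in each S&P 500 constituent under point 1, treating the issuances as simply additive, as this proposal does when it cites a 5% maximum level later on.

```python
# Illustrative only: cumulative USSWF stake per S&P 500 constituent under the
# proposed schedule (1% up front, then 0.25% per year for 16 years), treating
# the issuances as simply additive.
initial_issuance = 0.01    # 1% of common stock at inception
annual_issuance = 0.0025   # 0.25% of common stock each year
years = 16

stake = initial_issuance
for year in range(1, years + 1):
    stake += annual_issuance
print(f"Cumulative stake after {years} years: {stake:.2%}")  # 1% + 16 * 0.25% = 5.00%
```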

Now, if this concept seems foreign to you, then you are clearly an American. Here is a list of the world's largest Sovereign Wealth Funds over $100 billion.


Notice that much of the oil-rich world has already created a Sovereign Wealth Fund. Norway's fund is the gold standard, providing a capital asset value of roughly $192k per person. At a 5% income stream, that is nearly a $10k payment per person with capital preservation, and that payment can go a long way toward supporting a population. That is the concept we are trying to achieve with the USSWF: a bulwark against future challenges such as technological unemployment. But instead of oil, or any single resource, the citizens of the US should benefit from the entirety of the economy, a genuine justification of our consumer-oriented mentality and laissez-faire regulatory environment.
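
As a quick back-of-the-envelope check of the Norway figures quoted above (a sketch using only the numbers in the text):

```python
# Back-of-the-envelope check of the Norway figures cited above.
fund_assets_per_person = 192_000   # ~$192k of fund capital per citizen
income_stream_rate = 0.05          # the 5% income stream cited in the text

annual_payment = fund_assets_per_person * income_stream_rate
print(f"Annual payment per person: ${annual_payment:,.0f}")  # $9,600, i.e. "nearly $10k"
```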

As we try to capture the entirety of the US economy, we must find a liquid, tradeable proxy. I have suggested the S&P 500 as the right proxy for the US market, to avoid picking winners and losers directly. Automation and technological advances will happen in all industries and all markets, so the best way to be comprehensive is to use a benchmark of the whole economy. The S&P 500 is widely regarded as the best proxy for the US large-capitalization economy, and its constituents are among the most liquid stocks in the world, which will minimize the market impact of USSWF ownership. By having participation in all of these companies, the USSWF is in a position to reap the benefits of US economic growth, as represented by the stock market, on behalf of the citizens of the US.

There is also the opportunity to fund the USSWF in additional ways, whether from oil royalties, carbon credits or other mutually beneficial societal resource dividends. These discussions are merited, but they are beyond the scope of this paper.

Some features and benefits of a USSWF:

  1. The initial funding is a tax on the wealthy (holders of stocks), which is consistent with the goal of redistribution and of tempering the rising income inequality in the US. In some ways it becomes a tax on foreign investors as well, which is an ancillary benefit.
  2. Keep capital in the hands of capital allocators and away from the government until needed
  3. It increases participation in the US economy as measured by the stock markets. All US citizens would have a share of the stock market, up from the estimated 52% who do today.
  4. It creates a dynamic pool of assets which has an opportunity to track the expected wealth creation resulting from AI and Automation. Therefore the USSWF is well aligned with the goal of rewarding citizens for the possibility of having their jobs displaced
  5. A Sovereign Wealth Fund is far from new and has many precedents both foreign and domestic (public pension funds) upon which to build a robust program designed to maximize the benefit to US Citizens
  6. Alignment instead of antagonistic taxation. If the company wins, the country wins and even the displaced workers win.
  7. Compound return. Because the USSWF will not have a current cashflow demand, it will be allowed to grow for a pre-determined period, maybe 20 years (a rough illustration follows this list). This will also maximize the benefits available to all current and future US citizens
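
On point 7, the benefit of letting the fund compound untouched is easy to illustrate. The growth rate below is my assumption for illustration only; the proposal does not specify one.

```python
# Hypothetical illustration of the compounding benefit in point 7.
# The 6% annual return is an assumed figure, not part of the proposal.
assumed_annual_return = 0.06
years_untouched = 20

growth_multiple = (1 + assumed_annual_return) ** years_untouched
print(f"Untouched for {years_untouched} years, the fund grows roughly {growth_multiple:.1f}x")  # ~3.2x
```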

Some areas of concern (these points will be discussed further below):

  1. Dilution. This is immediate dilution of the value of existing shareholders' holdings, and the equity markets are not likely to be happy about it.
  2. Commingling of corporate and government interests
  3. Reliance upon capitalism and equity market growth to fund future challenges. This is both a risk/reward area of concern and an ideological area of concern for some.
  4. Bureaucracy/control of the USSWF
  5. Increase in income inequality

Let's tackle a few of these. Dilution will be a concern, and in a vacuum one could expect the stock market to move significantly backwards on this announcement. However, one could argue that the establishment of a long-term, pro-automation policy and the avoidance of a "robot tax" might be beneficial enough to offset the expected dilution. Additionally, there is the benefit to the market as a whole from proactively addressing a pressing concern.

Commingling of government and corporate interests. These interests have been commingled since the formation of the country; however, the US has had an active policy of not becoming a shareholder of corporations (unlike most other countries), and this would change. At the 5% maximum level, I don't see the government being excessively influential in a shareholders' meeting, but it may be the implied result that is concerning: the implication that the government is now "pro-business" and thus must be anti-citizen or anti-worker. I recognize the concern, and this blends into point #3, reliance upon capitalism and equity markets. Unless we are prepared to change the nature of this country immediately, we are already firmly entrenched in capitalism, and we ought to maximize its effect in this endeavor. Capitalism has an excellent track record of wealth creation, but sometimes at the expense of those who are left behind, whether through income inequality or lax worker safeguards. In the case of the USSWF, at least the worker is better positioned to profit from capitalism than she would be otherwise. It is my belief that any other solution is too far afield from the current wishes of the population.

Finally, bureaucracy and control. The USSWF involves a great deal of assets, and that makes it powerful. That will attract bureaucracy and politicians who will look to control the USSWF. A 50-person panel is more than sufficient expertise to make quality, non-political decisions in the interest of the American people, especially if the commissioners are deemed fiduciaries by the legal definition. Any attempt to attach a large staff to the USSWF should be met with great skepticism, as the mandate is the operation of an index fund tied directly to the S&P 500. It is a pool of assets designed to represent the US economy, not to create a target return; any return associated with the assets is a function of economic growth or broad stock market appreciation only.

Increase in income inequality. As a result of trying to maximize the United States' share of growth due to automation, much of the reward will accrue to the wealthy. This is a consequence of the reliance on capitalism and on maximum capital allocation efficiency at the corporate level. It benefits the USSWF, of course, but it benefits the owners of capital more (95% versus 5%) in the short run. To mitigate this problem, a crucial piece of the Tech, AI and Automation policy is a significant increase in taxes on the wealthy. Universal Basic Income, which in that scenario would be the only means by which a massive number of unemployed people could survive, must be funded from somewhere. The USSWF would be in a position to provide a solid foundation for this payment, but it will be insufficient to meet the full demand. The owners of capital MUST recognize that their windfalls from a laissez-faire regulatory and tax regime will be called upon to benefit displaced workers.

No plan is perfect, but the USSWF is better than a “robot tax”, whatever that might turn out to be. The key to success with the USSWF or any other solution aimed at protecting US citizens from technological unemployment is to participate in the wealth creation. Other ideas along those lines would be welcome additions to the discussion. I hope that you will think deeply on these points, challenge the theories, make suggestions and further this debate.

Replacing Your Doctors with AI and Automation

The field of medicine seems to be a touchy one for people with regard to the automation of how it is practiced. We struggle to get comfortable with the idea that machines could somehow replace the human doctors with whom we trust our lives and the lives of our children. It's that fear that prevents us from taking the intellectual steps to realize that doctors, nurses and most medical services can and will be automated. If it helps, picture your favorite sci-fi movie where we all get dropped into a pod that both analyzes and fixes us in a second. If that seems nice and convenient and reliable, then you'll be in a better place to realize that there are steps to get from here to there. In this blog post, I aim to show you the cumulative steps that the medical industry is taking to automate. I suspect you'll be amazed to see them put together, all in one place…

Let’s start with some amazing videos


Look at the amazing precision that robotic surgery is capable of. There is still a human involved in this particular operation, but the tasks of the movement can be mimicked in the same way that the Moley kitchen mimics the movements of a five-star chef. Code will be drafted to make the technique of each surgery precise and perfect. Humans will remain involved to "oversee" operations until the machines prove that they have the adaptive capability to react to unforeseen difficulties, but each time that happens, machine learning adds it to the repertoire.

Now we use doctors for a lot more than surgery. Diagnosis is a critical part of what a GP or specialist needs to provide their patient. Here are some examples of AI at work in diagnosis.

This Artificial Intelligence was 92% Accurate in Breast Cancer Detection Contest
In Brief Scientists trained an AI machine to detect breast cancer in images of lymph nodes. A group of researchers from…futurism.com

Patients' illnesses could soon be diagnosed by AI, NHS leaders say
Computers could start diagnosing patients' illnesses within the next few years as artificial intelligence increasingly…www.theguardian.com

Again, many of these tools are currently designed and marketed as assistants to doctors, mostly because the providers of these products realize that patients aren't ready to give themselves over completely to machines. But in reality, the practitioner is taking the machine's diagnosis and telling you what it is; I'm certain we have a calming AI voice that can read out the results for you. A further argument companies currently make regarding these systems is that the doctor PLUS the AI is more reliable than either alone. A solution for that, of course, is two different AI processes, but again, is the patient ready to trust a machine-only process? Lastly, if you don't believe that these diagnostic procedures will improve, then maybe you haven't been paying attention to ANY technological advancement.

How else do we interact with doctors? Regular checkups often include blood tests, which of course are easily automated, simpler and more thorough.

A New Test Uses a Single Drop of Blood to Screen for 13 Types of Cancer
Cancer is often difficult to treat because it's not detected in time. That there are more than 100 types of cancer…futurism.com

Physical evaluation — your smartphone and personal health trackers can provide enormous amounts of data about our day-to-day health, sleep, nutrition and physical activity.


23andMe and other services allow you to examine your DNA. Combined with the technology of CRISPR and advances in gene therapy, this means we are already having discussions about treating disease before it manifests.

DNA Genetic Testing & Analysis - 23andMe
23andMe is the first and only genetic service available directly to you that includes reports that meet FDA standards.www.23andme.com

Maybe our doctors are just there to provide advice, because of course they will know 100% of the latest technological advancements, the newest drug treatments, how to avoid negative drug interactions, the latest physical therapy techniques, nutritional impacts and treatments, and all the relevant info on you from your various specialists — oh wait… yeah, that's a machine, not a human.

It would seem we are left with the doctor-patient relationship. For many doctors, this is a genuine strength, and in the future it will become a key differentiator for them. But I have also heard of plenty of disappointed patients who probably can't wait to trade their doctor in for comprehensive machine interaction. In all of my work on job replacement by automation, I always leave room for the human element. There will be people who choose to acquire services from humans because they are human (at least as long as we can tell the difference). So certain doctors will find work because of "who they are and how they relate to others", which of course is a great thing. However, most doctors will not survive automation. I personally expect to see a phase-out beginning with the generation of today's youth, so roughly 20–30 years.

AI In Medicine: Rise Of The Machines
Could a robot do my job as a radiologist? If you asked me 10 years ago, I would have said, "No way!" But if you ask me…www.forbes.com

Lastly, there are cost and convenience factors. Machines work 24/7; doctors do not. Machines are capital investments; doctors have ongoing salaries that rise nearly every year. In a world that is reaching crisis levels of health care costs, AI and Automation will continue to be solutions for rising costs and will likely force our acceptance of automated medicine much more quickly than anticipated. The replacement process will not be linear, nor will it happen all at once. Rather, you will see increased use of technology around you during visits. Certain services and tests will be introduced as fully automated, but with doctors and practitioners nearby to supervise, until enough time goes by without problems that patients are comfortable. But eventually, machines will replace most of the functions of a doctor. If our doctors, the people we trust with our most valuable asset, our lives, can be replaced, then what won't automation be able to replace?

AI and Automation - Replacing YOUR Job

Tonight I was with a collection of very smart people, all successful on Wall Street, all well into their careers. As the discussion turned to my work on the future of Artificial Intelligence (AI) and automation and its impact on our humanity, they were, invariably, skeptical about machines replacing humans in the task of work. In fact, each one used the same fallback argument, which seems to be the "go to" argument people make when faced with the question of technological unemployment: "People have been afraid of job losses in the past and technology always creates new jobs."

And I understand that argument. I even support it; one need look no further than the job title of "Social Media Coordinator" or "Web Development Specialist" to realize that those jobs would not have even been considered in the mid-90s. So let's be clear: I promise that the advancement of technology will create new jobs for humans, and I do not suggest that we should in any way limit the advancement of technology. Have I been clear? Does everyone understand that I know new jobs will be created and that I don't want to hinder technology's advancement?

Now, can we tackle the actual issue? AI and Robotics, unlike all technology before, are replacement-level stuff. More importantly, each AI/Automation advancement looks directly at human behavior and tries to replicate it. The belief is simple: machines make fewer errors than human beings do, and machines work 24/7, unlike human beings. Using that simple math (ceteris paribus), we should all agree that a machine that can complete the activity of the equivalent human is preferable. So here is the question I would like each of you to consider carefully and thoughtfully: what are the activities in your daily work that you believe a machine will NEVER be able to do?

Now, before you answer, consider a few things. There are already machines creating artwork, machines writing symphonies, machines flying airplanes and machines manufacturing microscopic mini machines. We have machines doing brain surgery, machines diagnosing tumors and machines predicting the weather. So, as you analyze the tasks that you do and the skills that you exercise to accomplish those tasks, do you truly believe that a machine will not be able to do them in the future?

You may genuinely believe that there is a portion of your job that a machine can't do, but I guarantee there is someone in Silicon Valley, or Tokyo, or Shanghai, or Berlin, who has looked at your task and is trying right now to develop a machine that can replicate it. Can AI/Automation do every task as well as a human? Of course not; that is why we have machine learning, not machine learned… These systems are in their infancy, and they are learning with exponential speed — speed that is unmatched in the human experience. When people hear me talk, sometimes they think I am anti-technology, when I am the exact opposite. I believe so much in humanity's ability to advance technology that I believe most tasks will be replicated by machines and accomplished more efficiently and more cheaply than by humans. This isn't necessarily what is best for humans, but those are unintended consequences, best left to a different blog post.

But let's go back to you. Think about the tasks you do each day: what elements of your job cannot be replicated? Is there something special happening with your physical dexterity that a robot can't match? Is there a specific set of thoughts, or combination of thoughts, that a machine can't mimic? A person would have to feel truly unique to believe that the things they do are anything more than learned electrical impulses refined by 10–50 years of practice.

Don't get me wrong, humanity has many special qualities, but the things that make up work rarely encompass those special qualities, a few of which I have listed below:

  1. Hope
  2. Dreams
  3. Sympathy
  4. Peace
  5. Joy
  6. Contentment

I am sure there are a few others, like love, that people will offer up, but I would beg to differ on love and relationship-type differentiation. There are aspects of love, the things we do to demonstrate our love, which are highly replicable by machines. I am very comfortable with the idea that many people will not be able to experience love with or from a machine. But I hope that you would grant that there will be people who also believe they love a machine.

AI and Automation developers are already creating extremely lifelike machines designed as companions for people who are ill, disabled or experiencing social disorders. The sex industry advances the touch and feel of these machines. The medical industry advances cybernetics and materials to replace human tissue. When combined over the next decade with improvements in neural nets, machine learning and quantum computing, we will find that machines become reasonable proxies for humans. These machines will continue to improve their intellects and emotional capabilities as well. Over time they will earn trust and respect from people. Trust and respect are key elements in human relationships, which are either granted at the outset of a relationship or earned over time through positive, repetitive reinforcement, and learning through repetition is a machine expertise. Finally, because of their input/output capabilities, these machines will be able to learn at an astounding rate compared to humans. So I ask you one final time to consider the tasks that you do in your job: which of those will a machine not be able to replace in the coming decade or two?

When you reach the conclusion that I have, that most "work" can be replaced by machines, now imagine that world. It's different, it's scary, it's wonderful, it's boring, it has freedom, it has slavery, but most importantly it is VASTLY different from today's world and it will change society completely. This is my concern. This is what I ask you to consider. I hope that you will join me in preparing for the challenges to our society from AI and Automation.

The Amazon Impact on Retail Jobs - Defining Technological Unemployment

I'd like to try to help people understand the impact that technology has on a specific sector: retail. Amazon is clearly the dominant player in the space and has become a behemoth since 2007. But without carefully looking at retail employment numbers, we can be fooled. Many authors and researchers have made the mistake of not dissecting Amazon as a member of the space, and as a result they miss the picture, often asserting things such as Tim O'Reilly's "What Amazon teaches us about AI and the 'jobless future'." People who think that Amazon is not directly tied to the destruction of jobs in the retail sector are fooling themselves. Yes, Amazon will hire more people, but for every job it adds, it will likely destroy three or four at a minimum.

So I figured I'd try to put the math behind what everyone feels: Amazon grows and adds jobs, but it destroys more jobs than it creates because of its efficiency and profitability. THAT is the definition of Technological Unemployment.

Since 2007, using Bureau of Labor Statistics (BLS) data:

Amazon is classified as nonstore retailing (NAICS 454) and further classified as Electronic Shopping (NAICS 4541)


So, since 2007, Amazon has added approximately 324,000 jobs, while the sector it belongs to has grown by only 126,000. Amazon may therefore have destroyed 198,000 jobs just within the non-store sector.
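
Putting that arithmetic in one place, a simple sketch of the figures quoted above:

```python
# Non-store retail (NAICS 454) since 2007, using the figures quoted above.
amazon_jobs_added = 324_000        # approximate Amazon hiring since 2007
nonstore_sector_growth = 126_000   # total growth of the non-store retail sector

other_nonstore_change = nonstore_sector_growth - amazon_jobs_added
print(f"Implied change at non-Amazon non-store retailers: {other_nonstore_change:+,}")
# -> -198,000: the roughly 198k jobs the post argues were displaced within Amazon's own sector
```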

OK, that's one take. But since that sector (non-store, NAICS 454) includes things like door-to-door sales, which by anyone's definition are arcane, let's dive deeper into the whole retail sector and see if the argument holds.

The BLS estimates a 7% growth rate in retail employment from 2014–2024 (7% is the growth in jobs SINCE the financial crisis). It may not be a fair assumption, but let's drag that rate back to 2007. If we do, the entire retail sector should have grown by roughly 1.1 million jobs since 2007. Employment was reduced during the financial crisis, but the sector has still added 395k jobs since 2007, including those reductions. Don't forget the 324k jobs that Amazon added into that number. That means the retail sector added only 71k jobs over 10 years EX-Amazon; even the financial crisis can't explain that. What does explain Amazon's roughly 82% share of the sector's job creation is that Amazon is the most profitable, most efficient retailer on the planet, and it is forcing the rest of the industry to adapt or die, and most are dying.
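
The same reasoning for the whole retail sector, laid out step by step with the figures used above:

```python
# Whole retail sector since 2007, using the figures in the text above.
projected_growth = 1_100_000   # jobs implied by extending the BLS 7% growth rate back to 2007
actual_growth = 395_000        # actual retail job growth since 2007, per the text
amazon_jobs_added = 324_000    # Amazon's hiring over the same period

ex_amazon_growth = actual_growth - amazon_jobs_added          # ~71,000 jobs
amazon_share_of_growth = amazon_jobs_added / actual_growth    # ~82% of the sector's job creation
shortfall_ex_amazon = projected_growth - ex_amazon_growth     # ~1.03 million, the "as much as 1 million"

print(f"Retail job growth excluding Amazon: {ex_amazon_growth:+,}")
print(f"Amazon's share of retail job creation: {amazon_share_of_growth:.0%}")
print(f"Shortfall versus projection, excluding Amazon: {shortfall_ex_amazon:,}")
```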

When we examine where that 10-year growth came from on a subsector basis, the story becomes even clearer. In this examination, we allow for some subsectors to be directly influenced by Amazon and some to be completely untouched. For example, Amazon has likely influenced these subsectors of retail (all subsectors of retail are included below):

While not influencing these sectors at all:

While some of these might be debatable, I’ve tried to include as many sectors as possible to give Amazon a fair review.

Here are the results for the sectors directly influenced: -5k jobs (including the +324k at Amazon itself), or, said differently, a loss of 329k jobs directly related to Amazon in the sectors it has affected since 2007 (job gains or losses after each sector):

Meanwhile, the sectors it has not influenced at all grew by +394k jobs:

And none of this analysis includes the gigantic number of layoffs expected in 2017 in the retail sector. So, to summarize, depending on the analysis you'd like to use, Amazon has destroyed 198k jobs in the non-store retail sector, hindered BLS-expected job growth by as much as 1 million jobs, or explicitly caused a loss of 329k jobs in competing sectors, all since 2007.

Take that a step further: Jeff Bezos (now the richest man in the world) and Amazon shareholders have been greatly enriched over that time, while thousands upon thousands of retail sector employees have lost or will lose their jobs, creating serious income inequality as a result of technological unemployment. But income inequality is a story for another post.

Look, I'm a capitalist. I'm a fan of efficiency, I'm a fan of service, and sometimes I'm a fan of Amazon itself. Amazon has changed the way we shop; no one in their right mind would deny that. But let's not kid ourselves: it will come at the cost of thousands upon thousands of jobs. These job losses and store closures will have serious repercussions in their local communities, many of which will not benefit from Amazon's job growth. There are meaningful consequences to the technological unemployment directly caused by Amazon and by our desire for Amazon's service. Amazon is a leader in AI and Automation in the retail space. Those efforts cost humans jobs and always will. AI and Automation combined are replacement-level machines for human labor. The companies that replace humans the fastest will reap the largest profits and be the most successful. They will continue to hire humans because they can expand their businesses, and the humans who work at these companies will continue to become more efficient and profitable on a per-head basis. However, the result will be net job losses for society. That is TECHNOLOGICAL UNEMPLOYMENT, and it is coming to all industries, not just retail.

13 Things That UBI Can't Replace - A Reflection on Universal Basic Income

This is not a knock on Universal Basic Income (UBI). Instead, this post is intended to be supportive of UBI, at least in terms of making sure that we create a proper replacement for work, or at the very least that we recognize what UBI can provide and where its shortcomings may lie, so that we can develop other, supplemental solutions. The interest in UBI lies in the belief, held by a growing number of people, that the advancement of AI and Automation will significantly reduce the number of full-time jobs available to people. If 40–80% of all jobs disappear over the next 20–50 years, it follows that there must be a replacement income for people to survive on and potentially even thrive on.

Recently, Finland has been conducting a two-year experiment with UBI. As part of the ongoing research, a recent article by Chris Weller, published by the World Economic Forum,

https://www.weforum.org/agenda/2017/05/finlands-basic-income-experiment-is-already-making-people-feel-better-after-just-4-months?utm_content=buffer46f38&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer

recounts one participant's experience: "There was this one woman who said: 'I was afraid every time the phone would ring, that unemployment services are calling to offer me a job,'" Turunen recalled of a woman who needed to care for her parents, and so couldn't work.

Look, if we remove someone's greatest fear, the feedback will ALWAYS be positive; I promise, and my fingers aren't crossed behind my back. UBI is easy to get positive reviews for. Here's a random quote, made up by me to highlight the point: "Wait, you're giving me a bunch of money for free? No responsibility? No commitment? No waiting in line and filling out paperwork? Yeah, that sounds AWESOME!" said anyone receiving a UBI payment. So it is important to look a lot deeper.

We need to realize that money and a job do not have a small and simple impact on our lives. Instead, they contribute many, many different things to our sense of self and well-being. And those contributions are weighted differently for each person.

Let’s try a few feelings that work gives us:

  1. a sense of purpose
  2. a place for social interaction
  3. a sense of accomplishment (which is different from purpose)
  4. a sense of providing/participating in survival
  5. a place to learn
  6. a place to be away from our families/the people we cohabit with
  7. an ability for upward mobility
  8. our self-esteem
  9. a place for positive reinforcement
  10. the resources that allow us to entertain ourselves
  11. security
  12. intellectual challenges
  13. a positive use of time

That's a lot of things that work may provide you, and those things are often pretty important. Ask anyone who has been unemployed for an extended period of time about the feelings of "being lost", "feeling useless", having "no reason to wake up in the morning". These are legitimate feelings that weigh negatively on a person. Moreover, we have seen the devastation of a negative feedback loop on a society when unemployment is high. We have all seen the example of the Midwest ghost town when the local plant closed, or the rioting in Greece when unemployment threatened 40%. A substantial loss of jobs creates a terrible strain on society at large.

UBI doesn't solve ANY of these 13 problems. It provides money. Money becomes food and clothing and shelter, and if there is a little left over it might provide for some small entertainment value. But that money will NOT overcome the emotional losses of work. So advocates for UBI should quickly realize that UBI is NOT a panacea. It may be a part of a solution, but it is not THE solution. I am not discounting the value of a social safety net; the money I just referenced starts with food, clothing and shelter, and that's a good start. If we failed to achieve that goal while the unemployment level was too high, we would have violent physical clashes in the street as people's survival instinct kicked in. But that is where UBI stops: it covers the need to survive. Let's not forget that most people are doing FAR more than surviving, and if our society suffers substantial job losses, those positive feelings we enjoy as a thriving economy will vanish.

Over the coming months, I hope to develop some additional ideas and thoughts on what a UBI world might also need in order to replace these 13 positive feelings that would go missing in a world without work. Feedback and suggestions are always encouraged and welcome.

Also, this list is by no means exhaustive; I welcome further input on the feelings we earn from our paid labor. The more comprehensive that list is, the better equipped we will be to meet the needs of the people.

Should significant technological unemployment occur, consistent with the predictions of an increasingly large number of analysts, it will take our collective action to find suitable replacements for a world without work.

Strategies to Keep Your Job with AI & Automation

As the world moves swiftly towards significant amounts of artificial intelligence (AI) and Automation, there are numerous articles and studies predicting that many of our jobs will disappear. Some argue 40% (including the White House's 2016 study), while others argue as many as 80% of jobs will disappear over time. Still others argue that, like past technological advances, these new technologies will spur huge growth in new jobs that we can't even imagine yet. Regardless of which outcome prevails, here are a few ideas to best protect your employment power in the near term:

  1. Look at your processes, the things that you and your colleagues do. Examine them for the things that make your job easier and harder, and identify areas that can be improved upon. IDENTIFY SHORTCOMINGS
  2. Do your own research into new technologies that can enhance your workplace and find a way to introduce them. FIND SOLUTIONS
  3. Ask regularly for education on cutting-edge technologies. BE THE PROBLEM SOLVER
  4. Become the OWNER of the technology if your work situation allows for it. BECOME CAPITAL

#1 IDENTIFY SHORTCOMINGS — In the workplace, be the person who is seeking better ways to get things done. Do it constructively, not negatively. Ask key questions of any process. Is this fast and efficient? Are the results accurate enough? Are there ways to automate manual procedures? Is there a way to reduce the cost of this process? If you can identify the shortcomings of any process, then you understand where to look for improvements. The employee who is able to identify these challenges is always valuable to the owners of the company and less likely to be replaced.

#2 FIND SOLUTIONS — Once a problem has been identified by you or others, actively seek to solve it. Spend some extra time scouring the web for solutions. Talk with people outside of your business and industry about the problem you are looking to solve. Many times similar problems exist elsewhere and may have already been solved, or a process from another industry can be adapted to yours. The point, however, is to actively engage and FIX the problem.

#3 BE THE PROBLEM SOLVER — If you find a solution, look to implement it. If you need education to do so, ask to get it. Become the advocate for the fix, convince people to implement it, and become the expert on the "fix". Any employee who seeks to expand their knowledge and become an expert in new technologies will always be relevant to a dynamic organization. If you combine that with the successful implementation of fixes for problems, then you will be a valuable employee in any organization.

#4 BECOME CAPITAL — One final, longer-term answer is to become Capital: become the owner of the AI & Automation. Here is what I mean. As I was thinking about job replacement by AI & Automation, it struck me that independent contractors, like plumbers and electricians, may take some time to replace "themselves" with automation. In fact it may be a generational thing: as the plumber ages, he sees his retirement nearing and begins to save for the "new PLUMBER 2.0 from General Electric Robotics" that appears able to replicate his work, allowing him to become the "owner" of the business. Since the plumber is the one investing in the robot, he is Capital. Capital will be one of the last roles in the capitalist economy to be replaced by AI & Automation, because somebody has to buy and implement the automation. At least until we turn over the keys to the global economic model to Artificial General Intelligence.

Other than #4, which requires saving money, the first three ideas are about energy, time and effort. These are resources that we all have access to, even if they are in limited supply. Only you can determine how much energy, time and effort to put into this process. But I am certain that if you do invest them, you will find yourself successful on the job and certainly more difficult to replace with AI & Automation.

Answering Nick Bostrom's Desiderata (Part 2 of 4)

*italicized sections are direct excerpts from Bostrom’s paper

Universal benefit. All humans who are alive at the transition get some share of the benefit, in compensation for the risk externality to which they were exposed.

Bostrom's point about existential risk is extremely concerning to ForHumanity, and highlights our raison d'être. These developments in Machine SuperIntelligence (MSI) are likely to occur with much of the public blind to them. Then MSI will arrive on the scene and change everything instantly, like the atomic bomb at Hiroshima. Awareness, and also a certain amount of non-proprietary transparency from AI developers, will reap great benefits for the public and facilitate universal benefits. Certainly, a substantial expansion of the economy will benefit the world broadly; it is the distribution of that wealth that is the problem. Many will NOT participate in the MSI race, and thus their reliance on the generosity of spirit of the creators of this intelligence is extreme.

ForHumanity is supportive of an MSI development process that is driven by corporations, instead of governments, insofar as they are more likely to make the optimal decisions to achieve SuperIntelligence. Corporations today are more global than any nation, often having offices in many of the countries where machine learning is being advanced. Furthermore, those corporations are often publicly traded and thus are already creating a broader, albeit non-inclusive, distribution of wealth via their shareholders. We also recognize that corporate control has plenty of challenges. Corporations need not be moral, and they are profit-maximizing, which is where the problem lies. Once Machine SuperIntelligence is achieved, it should instantly cease to be owned by the corporation or collaborative that achieved the goal. This, of course, flies in the face of 250 years of capitalist thought.

Ideally, when that MSI switch is flipped, the group that created it would hand it off to humanity at-large and collect its substantial bonus, but NOT continue to reap the ongoing rewards nor "control it", whatever that may mean. Unlike the Manhattan Project, which was a naturally militaristic endeavor at the time, weaponizing machine SuperIntelligence or making it federally owned would both polarize the process and introduce natural bureaucracy, without reasonable benefit. We believe that this would quickly turn into a race to the bottom and would likely lead to increased risk, which is why we prefer corporate control for the time being. However, this raises the question of independent oversight. Here ForHumanity advocates for a powerful NGO-esque watchdog group, which can interface freely with corporate AI developers, endorse safe practices, and then prepare the masses, governments and the world for AI as we approach general intelligence. Also, this group, if properly governed and guided, might become the steward of a general MSI.

Magnanimity. A wide range of resource-satiable values (ones to which there is little objection aside from cost-based considerations), are realized if and when it becomes possible to do so using a minute fraction of total resources. This may encompass basic welfare provisions and income guarantees to all human individuals. It may also encompass many community goods, ethical ideals, aesthetic or sentimental projects, and various natural expressions of generosity, kindness, and compassion.

The economic explosion that some predict will accompany machine SuperIntelligence seems lofty to me. The projections assume a level of resource availability that simply may not exist on Earth. Furthermore, scalability remains a function of physical construction (whether that is chips and motherboards or actual automation) and cannot be achieved instantaneously. However, the basic concept of an equitable distribution of the wealth created by an MSI-driven wealth explosion is an easy idea to agree with, so we support Bostrom on this point.

However, that outcome appears more likely to be an outlier. It further assumes that humanity does not choose another pathway: constrained machine SuperIntelligence. I have heard some discussion in the community, where people refer to it as the "control problem", but maybe it is our optimum outcome. Instead of "turning on a switch and watching the SuperIntelligence go", why not have natural circuit breakers built in, where humanity can take a pause, re-assess potential impacts and then continue the process? For instance, if machine SuperIntelligence were NEVER allowed to control currency/capital and instead always relied upon a beneficial human owner or owners, then you have a natural human circuit breaker (think of it like an appropriations committee). Or if MSI were unable to self-replicate, you have strong bounds under which the intelligence can operate. Finally, the power required for MSI will likely be enormous at the outset; strong limits on the MSI's ability to access power may provide the control measures that humanity requires to at least assess next steps and create a process that is more manageable and more easily understood by humans. This approach probably leads to a dramatically more gradual expansion of both knowledge and wealth creation.
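To make the circuit-breaker idea concrete, here is a toy sketch, entirely my own and hypothetical, of how a human-approval gate might sit between an automated system and the classes of action discussed above (capital transfers, self-replication, increased power draw). Every name in it is illustrative, not a real system:

```python
# Hypothetical illustration only: a human-in-the-loop "circuit breaker"
# that forces an explicit human decision before certain classes of action.
from enum import Enum, auto

class ActionClass(Enum):
    ROUTINE = auto()           # e.g. answering a query; not gated
    CAPITAL_TRANSFER = auto()  # controlling currency/capital; always gated
    SELF_REPLICATION = auto()  # creating copies of itself; always gated
    POWER_INCREASE = auto()    # requesting more power/compute; always gated

GATED = {ActionClass.CAPITAL_TRANSFER,
         ActionClass.SELF_REPLICATION,
         ActionClass.POWER_INCREASE}

def request_action(action: ActionClass, description: str, human_approves) -> bool:
    """Return True only if the action may proceed.

    Routine actions pass straight through; gated actions require an
    explicit human decision, the 'appropriations committee' in the text.
    """
    if action not in GATED:
        return True
    return bool(human_approves(description))

if __name__ == "__main__":
    # The human reviewer (here, a console prompt) is the circuit breaker.
    approve = lambda desc: input(f"Approve gated action? {desc} [y/N] ").strip().lower() == "y"
    print(request_action(ActionClass.ROUTINE, "summarize a report", approve))
    print(request_action(ActionClass.CAPITAL_TRANSFER, "move funds to a supplier", approve))
```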

However, should MSI lead to a significant explosion of resources and subsequent wealth, there is no question that it ought to be shared as widely and as readily as possible. Currently, the world does not operate like this. The income inequality gap is at an all-time high and likely to accelerate, despite significant humanitarian and charitable efforts to shift wealth from a wealthy class to less privileged groups. The mind shift needed to properly redistribute this wealth seems to run counter to the desires of our individual humanity. Many of us have a natural desire to compete, to win and to expect to be rewarded for our victories. I have more concern about our ability to succeed in this endeavor than about actually achieving MSI. The result of a failure to achieve an equitable redistribution of wealth is that many will remain impoverished and oppressed. We support the idea of "Magnanimity", but remain skeptical of humanity's ability to achieve it successfully. That skepticism will not prevent us from trying.

Continuity. The path affords a reasonable degree of continuity such as to (i) maintain order and provide the institutional stability needed for actors to benefit from opportunities for trade behind the current veil of ignorance, including social safety nets; and (ii) prevent concentration and permutation from radically exceeding the levels implicit in the current social contract.

The current growth in wealth inequality globally can largely be attributed to the growth and application of technology. Technology reduces costs and increases efficiency, which is how wealth is created, and that wealth accrues to shareholders and capital owners. In addition, automation increases the return to capital and decreases the return to employees as their labor input shrinks. This accelerates the inequality.
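As a toy illustration of that dynamic (my own hypothetical numbers, not data from any study), consider a firm whose revenue is unchanged while automation shrinks its payroll; the share of income flowing to capital owners rises accordingly:

```python
# Toy arithmetic with hypothetical numbers: how automation can shift
# the split of a firm's income from labor (wages) toward capital (profit).

def income_shares(revenue: float, workers: int, wage: float, automation_cost: float):
    """Return (labor_share, capital_share) of the income paid out to people."""
    labor_income = workers * wage
    capital_income = revenue - labor_income - automation_cost  # residual profit to owners
    total_paid_out = labor_income + capital_income
    return labor_income / total_paid_out, capital_income / total_paid_out

# Before automation: 100 workers at $50k against $10M of revenue.
before = income_shares(10_000_000, workers=100, wage=50_000, automation_cost=0)
# After automation: 40 workers remain; machines cost $1M/yr and do the rest.
after = income_shares(10_000_000, workers=40, wage=50_000, automation_cost=1_000_000)

print(f"labor / capital share before automation: {before[0]:.0%} / {before[1]:.0%}")  # 50% / 50%
print(f"labor / capital share after automation:  {after[0]:.0%} / {after[1]:.0%}")    # ~22% / ~78%
```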

Taking MSI to an extreme, where it is the "last invention humans ever need create", you then have all of the wealth concentrated in the owner of the MSI, or potentially the MSI itself. This represents a massive problem and must be solved. If one subscribes to the notion that an MSI discovered and unleashed at the corporate level will be turned over to humanity at-large, then you've dealt with the issue. However, that is a big "if". At least in the United States, contract law provides no precedent for ownership by humanity at-large; instead, the only semblance of large-scale universal control is at the nation-state level, where we have the powers of eminent domain. This may actually be WORSE. Imagine a corporation is about to turn on an MSI and the United States or some other country steps in and says, "I'll take that". The fear it would engender in other nations might lead to poor decisions by other nation-states, decisions made out of fear. So the ownership question flies in the face of "continuity".

Job destruction and the future of work is another substantial "discontinuity". I won't go into the argument here, as I've made it before, as have many others, but it is sufficient to say that today's society is ill-prepared for 50–80% unemployment. The problems it will create, through wealth inequity and potential damage to individual self-worth, are massive upheavals to society as it stands today. So I don't see how "continuity" is even remotely achievable.

On a separate thought on continuity, I see the advent of MSI creating a substantial divide in society. As machines become more integrated into society, and especially when they begin to achieve "personhood" and the associated rights and privileges (lest we think this is sci-fi talk, the EU parliament has already recommended some steps toward "personhood" for machines), I can see a schism forming. I suspect this schism will result in a large number of communities and people "opting out" of a machine-intensive society. The "opt-out" option is a fundamental (constitutional, in the US?) human right that I know will be challenged time and again by government and by society at-large in a machine-driven world. It is vital we protect this right now to maximize continuity for all. "Luddite" is already a pejorative word today. It, like all other monikers designed to demean, is prejudicial, and our society should support the rights of those who choose NOT to participate in a machine-driven/controlled society. I dare say that society should protect and guarantee those rights, as I suspect this group will be a minority. Continuity ought to mean continuity from today, for all, based upon their right to life, liberty and the pursuit of happiness, even if that means a life without MSI and automation.