Here’s a quick and easy way to break some of the current monopolies that exist in the personal data market (looking at Google, Facebook and Amazon). Let people own their own data, disseminate it as they see fit and, shockingly, get paid for what is rightfully theirs.
The concept is simple; the implementation is challenging and requires a good-sized investment from somewhere at the outset. I’ll explain the concept first and then circle back to the implementation at the end.
A personal data exchange. Simply explained, each person 18 and over has the right to list themselves on a “data” exchange, just as companies list their equity on a stock exchange. Each individual would get a listing of their own, so ideally the exchange would have hundreds of millions of listings, one for each unique person. This exchange would function as a clearinghouse for YOUR data: the marketplace where companies of all kinds retrieve the data they require from the sellers of data — YOU. If you list John Q Public on the exchange and want to severely limit the data you provide the world, that is your choice; however, because your data is used less frequently, you will be paid very little for being John Q Public. Privacy is your right, and you should be in control of your own data.
However, if John Q Public lists his data, answers questions submitted by the marketplace, responds in a timely manner to requests and is generally open about preferences, opinions and many other pieces of information in demand, then John Q Public will find a nice revenue stream associated with this very valuable data.
The marketplace is interactive. It starts with many of the key data items that are commonly available through a simple internet search. However, once you file your “listing” on the exchange, companies will seek your information, maybe through GPS location, maybe via your search and query results. They may also send out questionnaires. If you, the listed entity, find yourself uncomfortable answering certain data questions, then stop. You are in control. You release your data how you want, when you want. Do you want your shopping tracked? If yes, then allow it. Maybe only certain shopping is included. Furthermore, this data does not have to be “public”. It can be provided directly to a client company anonymously (anonymous data is cheaper to the company and pays you less as well). As the listing entity providing the data, you will be in control. Companies can come to the marketplace and create datasets, asking questions of the personal listings in an attempt to grow a business, design their algorithms or pivot a marketing strategy. They can pay for high-quality information, which will be crucial to their business. As a “listed entity” you would have a responsibility to respond, if you want to get paid. You must answer truthfully, completely and in a timely manner. If the data proves to be useless over time because people lie and obscure relevant information, then the marketplace breaks down and people will lose CONTROL over their data, the way things exist today.
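To make the consent mechanics above concrete, here is a minimal sketch of how a single listing might be modeled. Everything here (the `DataListing` class, the three consent levels and the 60% anonymity discount) is a hypothetical illustration, not a description of any existing exchange.

```python
from dataclasses import dataclass, field

# Hypothetical model of one personal data listing with per-category consent.
# Consent levels: "identified" (full price), "anonymous" (discounted, since
# anonymised data is worth less to the buyer), "denied" (no sale, no payment).

@dataclass
class DataListing:
    owner: str
    consents: dict = field(default_factory=dict)  # category -> consent level

    def set_consent(self, category: str, level: str) -> None:
        if level not in ("identified", "anonymous", "denied"):
            raise ValueError(f"unknown consent level: {level}")
        self.consents[category] = level

    def price_request(self, category: str, base_price: float) -> float:
        """Price one data request; undeclared categories default to denied."""
        level = self.consents.get(category, "denied")
        if level == "identified":
            return base_price
        if level == "anonymous":
            return base_price * 0.4  # illustrative 60% anonymity discount
        return 0.0

listing = DataListing(owner="John Q Public")
listing.set_consent("shopping", "anonymous")
listing.set_consent("location", "denied")
print(listing.price_request("shopping", 1.00))  # 0.4
print(listing.price_request("location", 1.00))  # 0.0
```

The design choice worth noting is the default: any category the lister has not explicitly opened is treated as denied, which keeps the individual in control exactly as described above.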
It is hard to know how valuable this data is. Today, it is priced at zero by the owners of the data — you. Google and Facebook make nearly $100 billion per year selling your data — selling you. I am certain that, given the choice between no Google search engine, no Facebook and $1,000 in your pocket, most people would choose the $1,000. Furthermore, it is an excellent way to democratize wealth, as anyone would be eligible to participate.
Now, implementation is a challenge, but not insurmountable. The absolute key is that the exchange must launch with many, many listings. Therefore, to motivate individuals to list on the exchange, they will need to be compensated to do so. I find that money has a way of getting people’s attention. However, we aren’t talking about a small amount of money, as the minimum number of listings probably needs to be 20–30 million people from the outset to make the dataset meaningful. Furthermore, an incentive system to encourage people to help others list will be valuable. So the capital required to incentivize listing is significant, well over $2 billion just to launch. Fortunately, there are a few large pools of investors, unbiased and correctly aligned with individuals, that could fund the endeavor.
Secondly, the technological challenge of this exchange is not trivial. Handling over 100 million listings of John and Sarah Q Publics is a sizable endeavor. Furthermore, the front-end will need to be extremely flexible and user friendly to ensure that all listings can easily and efficiently manage their data.
The education process is also monumental, as many people do NOT value their data; furthermore, they do not value their privacy. They are content to give away this asset for free, or at least in exchange for a forum to fight about politics, post some pictures, search the web, have a laugh or order some goods to be delivered in less than two days. However, success breeds success, and with the right incentive structure, I believe that this process will go viral. If the money is sufficient, then data will become an important revenue stream for individuals and households.
Finally, this has to be done at a truly institutional level. It requires the cachet of the New York Stock Exchange or Chicago Mercantile Exchange to demonstrate the “seriousness” of the concept, combined with the institutional fortitude to handle the sheer magnitude required to make this successful, and the technology chops and delivery expertise to pull it all together and provide an exceptional experience for the listed individuals. This idea has been tried or discussed on a smaller scale, but small scale doesn’t work for the companies in the data business. This idea is go big or go home, but in terms of privacy and control of personal data, there is no better way than to put you in charge and let the market pay you directly.
The disintermediation of Google, Facebook and Amazon in the “control your data business” is just a perk…
Top AI breakthrough
AlphaGo Zero — game changing machine learning, by removing the human from the equation
This is an incredibly important result for a series of reasons. First, AlphaGo Zero learned to master the game with ZERO input from human players and previous experience; it trained knowing only the rules of the game, demonstrating that it can learn better and faster with NO human involvement. Second, the implication for future AI advancement is that humans are likely “in the way” of optimal learning. Third, its successor AlphaZero went on to master chess in 4 hours, demonstrating an adaptability that has dramatically increased the speed of machine learning. DeepMind now has over 400 PhDs working on Artificial GENERAL Intelligence.
Dumbest/Scariest thing in AI in 2017
Sophia, AI machine made by Hanson Robotics, granted citizenship in Saudi Arabia — VERY negative domino effect
The implications associated with “citizenship” for machines are far-reaching. Citizenship in most locations includes the right to vote, to receive social services from the government and the right to equal treatment. The world is not ready, and may never be ready, to have machines that vote, or machines that are treated as equals to human beings. While this was an AI stunt, its impact could be devastating if played out.
Most immediately Impactful development — Autonomous Drive vehicles
Waymo reaches 4 million actual autonomous test miles
Autonomous cars progressing to real world applications more quickly than most anticipated
The impact of autonomous drive vehicles cannot be overstated, from the devastation of jobs in the trucking and taxi industries to the freedom of mobility for many who are unable to drive. Also, autonomous driving is likely to result in significantly fewer deaths than human driving. This impact carries through to the auto insurance industry, where KPMG reckons that 71% of the auto insurance industry will disappear in the coming years. Central authority control over the movement of people is another second-order consideration that few have concerned themselves with.
Most impactful in the future — Machine dexterity
Here are a few amazing examples of the advancement of machine dexterity. As machines are able to move and function similarly to humans, then their ability to replicate our functions increases dramatically. While this dexterity is being developed, work is also being done to allow machines to learn “how” to move like a human being nearly overnight, instead of through miles and miles of code — machines teach themselves from video and virtual reality.
Boston Dynamics’ Atlas robot approaches human-level agility, including backflips — short video, easy watch
Moley introduces the first robotic kitchen
Machine movement being taught human level dexterity with simulated learning algorithms — accelerating learning and approximating human process
Most Scary — Science Fiction-esque development
North Dakota State students develop self-replicating robot
The impact of machines being able to replicate themselves is a staple concept of dystopian sci-fi movies; however, there are many practical reasons for machines to replicate themselves, which is what the researchers at North Dakota State were focused on. Still, with my risk management hat on for AI safety, it raises a whole other set of rules and ethics that need to be considered, for which the industry and the world are not prepared.
Most Scary AI real-world issues NOW
Natural Language processing finds millions of faked comments on FCC proposal regarding Net Neutrality
AI being used to duplicate voices in a matter of seconds
Using AI, videos can be made to make real individuals appear to say things that they did not…
Top AI Safety Initiatives
Two initiatives share the top spot, mostly because of their practical applications in a world that remains too heavily devoted to talk and inaction. These efforts are all about action!
AutonomousWeapons.org produces Slaughterbots video
This video is an excellent storytelling device to highlight the risk of autonomous weaponry. Furthermore, there is an open letter that all humans should support. Please do so. Over 2.2 million views so far.
Ethically Aligned Design v 2.0
John Havens and his team at IEEE, the technology standards organization, have collaborated extremely successfully with over 250 AI safety specialists from around the world to develop Ethically Aligned Design (EAD) v2.0. It is the beacon by which AI and automation ethics should guide their implementation. More importantly, anyone who rejects the work of EAD should immediately be viewed with great skepticism and concern as to their motives.
Personally, I have been hung up on Transhumanism. For those of you not familiar, Transhumanism is the practice of embedding technology directly into your body — merging man and machine, a subset of AI and Automation. It seems obvious to me that, when given the chance, some people with resources will enhance/augment themselves in order to gain an edge at the expense of other human beings around them. It’s really the same concept as trying to get into a better university, or training harder on the athletic field. As a species we are competitive, and if it is important enough, people will do what it takes to win. People will always use this competitive fire to gain an upper hand on others. Whether it is performance-enhancing drugs in sports or paid lobbyists in politics, money, wealth and power create an uneven playing field. Technology is another tool, and some, if not many, will use it for nefarious purposes, I am certain.
But nefarious uses of AI are only part of the story. When it comes to transhumanism, there are many upsides: the chance to overcome Alzheimer’s, to return the use of lost limbs, to restore eyesight and hearing. There are many places where technology in the body can return the disabled to normal human function, which is a beautiful story.
So in that spirit, I want to shine a light on all of the unquestionable good that we may choose to do with advances in AI and automation. Because when we have a clear understanding of both good outcomes and bad outcomes, then we are in a position to understand our risk/reward profile with respect to the technology. When we understand our risk/reward profile, we can determine if the risks can be managed and if the rewards are worth it.
Let’s try some examples, starting with the good:
- AI and Automation may enhance the world’s economic growth to the point where there is enough wealth to eradicate poverty, hunger and homelessness.
- AI & Automation will allow researchers to test and discover more drugs and medical treatments to eliminate debilitating disease, especially diseases that are essentially a form of living torture, like Alzheimer’s or Cerebral Palsy.
- Advancement of technology will allow us to explore our galaxy and beyond
- Advancement in computing technology and measurement techniques will allow us to understand the physical world around us at even the smallest, building block level
- Increase our efficiency and reduce wasteful resource consumption, especially carbon-based energy.
At least those five things are unambiguously good. There is hardly a rational thinker around who will dispute those major advancements and their inherent goodness. It is also likely that the sum of that goodness outweighs the sum of any badness that we come across with the advancement of AI and Automation. Why am I confident in saying that? Because it appears to be true for the entirety of our existence. The reason technology marches forward is because it is viewed as inherently good, inherently beneficial and, in most cases, as problem solving.
But here is where the problem lies and my concern creeps in. AI has enormous potential to benefit society, and as a result it has significant power. Significant power comes with significant downside risk if that power is abused.
Here are some concerns:
- Complete loss of privacy (Privacy)
- Embedded bias in our data, systems and algorithms that perpetuates inequality (Bias)
- Susceptibility to hacking (Security)
- Systems that operate in, and sometimes take over, our lives without sharing our moral code or goals (Ethics)
- Annihilation and machine takeover, if at some point the species is deemed useless or worse, a competitor (Control and Safety)
There are plenty of other downside risks, just like there are plenty of additional upside rewards. The mission of this blog post was not to calculate the risk/reward scenarios today, but rather to highlight for the reader that there are both Risks and Rewards. AI and Automation isn’t just awesome, there are costs and they need to be weighed.
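The weighing of risks and rewards described above can be made concrete with a toy expected-value calculation. Every probability and payoff below is invented purely for illustration; the point is only that a low-probability, high-severity risk can dominate the expected outcome until it is mitigated.

```python
# Toy expected-value comparison: a steady reward against a rare but severe
# downside. All numbers are invented for illustration.

def expected_outcome(reward: float, risk_loss: float, risk_prob: float) -> float:
    """Probability-weighted outcome of pursuing the technology."""
    return (1 - risk_prob) * reward + risk_prob * risk_loss

# A 2% chance of a catastrophic loss swamps the reward...
unmitigated = expected_outcome(reward=100.0, risk_loss=-10_000.0, risk_prob=0.02)
# ...but mitigating that risk down to 0.1% restores a positive expectation.
mitigated = expected_outcome(reward=100.0, risk_loss=-10_000.0, risk_prob=0.001)

print(round(unmitigated, 2))  # -102.0
print(round(mitigated, 2))    # 89.9
```

This is exactly the risk-management intuition of the post: the reward term barely changes, but shrinking the downside probability flips the overall calculus.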
My aim in the 12+ months that I have considered the AI Risk space has been very simple, but maybe poorly expressed. So let me try to set the record straight with a couple of bullet points:
- ForHumanity supports and applauds the advancement of AI and Automation. ForHumanity also expects technology to advance further and faster than the expectation of the majority of humanity.
- ForHumanity believes that AI & Automation have many advocates and will not require an additional voice to champion its expected successes. There are plenty of people poised to advance AI and Automation
- ForHumanity believes that there are risks associated with the advancement of AI and Automation, both to society at-large as well as to individual freedoms, rights and well-being
- ForHumanity believes that there are very few people who examine these risks, especially amongst those who develop AI and Automation systems and in the general population
- Therefore, to improve the likelihood of the broadest, most inclusive positive outcomes from the advancement of technology, ForHumanity focuses on identifying and solving the downside risks associated with AI and Automation
When we consider our measures of reward and the commensurate risk, are we accurately measuring both? My friend and colleague John Havens at IEEE has been arguing for some time now that our measures of well-being are too economic (profit, capital appreciation, GDP) and neglect more difficult-to-measure concepts such as joy, peace and harmony. The result is that we may not correctly evaluate the risk/reward profile, reach incorrect conclusions and charge ahead with our technological advances. It is precisely at that moment that my risk management senses kick in. If we can eliminate or mitigate the downside risks, then our chance to be successful, even if poorly measured, increases dramatically. That is why ForHumanity focuses on the downside risk of technology. What can we lose? What sacrifices are being made? What does a bad outcome look like?
With that guiding principle, ForHumanity presses forward to examine the key risks associated with AI and Automation: Safety & Control, Ethics, Bias, Privacy and Security. We will try to identify those risks and solutions to them, with the aim of providing each member of society the best possible outcome along the way. There may be times when ForHumanity comes across as negative or opposed to technology, but in the vast majority of cases that is not true. We are simply focused on the risks associated with technology. The only time we will be negative on a technology is when the risks overwhelm the reward.
We welcome company on this journey and certainly could use your support as we go. Follow our Facebook feed ForHumanity or Twitter @forhumanity_org, and of course the website at https://www.forhumanity.center/
The current and rightful AI video rage is the SlaughterBots video, ingeniously produced by Stuart Russell of Berkeley and the good people at autonomousweapons.org. If you have not watched AND reacted to support their open letter, you are making TWO gigantic mistakes. Here is the link to the video.
As mentioned, there are good people fighting the fight to keep weapons systems from being fully autonomous, so I won’t delve into that more here. Instead, it reminded me that I had not written a long conceived blog post about autonomous currency/capital usage. Unfortunately, I don’t have a Hollywood produced YouTube video to support this argument, but I wish that I did because the risk is probably just as great and far more insidious.
The suggested law is that no machine should be allowed to use capital or currency based upon its own decision. Like in the case of autonomous weaponry, I am arguing that all capital/currency should 100% of the time be tied to a specific human being, to a human decision maker. To be more clear, I am not opposed to automated Billpay and other forms of electronic payment. In all of those cases, a human has decided to dedicate capital/currency to meet a payment and to serve a specific purpose. There is human intent. Further, I recognize the merits of humans being able to automate payments and capital flows. This ban is VERY specific. It bans a machine, using autonomous logic which reaches its own conclusions from controlling a pool of capital which it can deploy without explicit consent from a human being.
What I am opposed to is machine intent. Artificial Intelligence is frequently goal-oriented. Rarer is an AI that is goal-oriented and ethically bound. Rarer still is an AI that is goal-oriented with societal goals as part of its calculus. Since an AI cannot be “imprisoned”, at least not yet, and it remains to be seen whether an AI can actually be turned “off” (see the YouTube video below to better understand the AI “off-button problem”), this ban is necessary.
So without the Hollywood fanfare of Slaughterbots, I would like to suggest that the downside risk associated with an autonomous entity controlling a significant pool of capital is just as devastating as the potential of Slaughterbots. Some examples of risk:
- Creation of monopolies and subsequent price gouging on wants and needs, even on things as basic as water rights
- Consume excess power
- Manipulate markets and asset prices
- Dominate the legal system, fight restrictions and legal process
- Influence policy, elections, behavior and thought
I understand that many readers have trouble assigning downside risk to AI and autonomous systems. They see the good that is being done. They appreciate the benefits that AI and automation can bring to society, as do I. My point-of-view is a simple one. If we can eliminate and control the downside risk, even if the probability of that risk is low, then our societal outcomes are much better off.
The argument here is fairly simple. Humans can be put in jail. Humans can have their access to funds cut off by freezing bank accounts or limiting access to the internet and communications devices. We cannot jail an AI, and we may be unable to disconnect it from the network when necessary. So the best method to limit growth and rogue behavior is to keep the machine from controlling capital at the outset. I suggest the outright banning of direct and sole ownership of currency and tradeable assets by machines. All capital must have a beneficial human owner. Even today, implementing this policy would have an immediate impact on the way that AI and automated systems are implemented. These changes would make us safer and reduce the risk that an AI might abuse currency/capital as described above.
The suggested policy already faces challenges today that must be addressed. First, cryptocurrencies may already be beyond an edict such as this, as there is no ability to freeze cryptos. Second, most global legal systems allow for corporate entities, such as corporations, trusts, foundations and partnerships, which HIDE or at least obfuscate the beneficial owners. See the Guidance on Transparency and Beneficial Ownership from the Financial Action Task Force (2014), which highlights the issue:
- Corporate vehicles — such as companies, trusts, foundations, partnerships, and other types of legal persons and arrangements — conduct a wide variety of commercial and entrepreneurial activities. However, despite the essential and legitimate role that corporate vehicles play in the global economy, under certain conditions, they have been misused for illicit purposes, including money laundering (ML), bribery and corruption, insider dealings, tax fraud, terrorist financing (TF), and other illegal activities. This is because, for criminals trying to circumvent anti-money laundering (AML) and counter-terrorist financing (CFT) measures, corporate vehicles are an attractive way to disguise and convert the proceeds of crime before introducing them into the financial system.
The law that I suggest implementing is not a far leap from the existing AML and Know Your Customer (KYC) compliance that most developed nations already enforce. In fact, that would be the mechanism of implementation: creating a legal requirement for human beneficial ownership while excluding machine ownership. Customer Due Diligence and KYC rules would include disclosures on human beneficial ownership and applications of AI/Automation.
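As a sketch of how such a disclosure rule might sit inside a KYC/CDD workflow, consider the following. The account structure, field names and the human-owner check are all hypothetical illustrations; real KYC systems are far more involved.

```python
from dataclasses import dataclass

# Hypothetical CDD/KYC-style gate: an account is approved only if at least
# one disclosed beneficial owner is a human being, per the proposed rule.
# The data model here is invented for illustration.

@dataclass
class Owner:
    name: str
    is_human: bool

@dataclass
class Account:
    holder: str
    beneficial_owners: list

def passes_beneficial_owner_check(account: Account) -> bool:
    """Reject any account whose only controllers are autonomous systems."""
    return any(o.is_human for o in account.beneficial_owners)

human_fund = Account("Acme Family Trust", [Owner("Jane Roe", True)])
machine_fund = Account("AutoTrader-7", [Owner("AutoTrader-7", False)])

print(passes_beneficial_owner_check(human_fund))    # True
print(passes_beneficial_owner_check(machine_fund))  # False
```

The point of the sketch is how small the change is: the check slots into existing beneficial-ownership disclosure, rather than requiring a new enforcement apparatus.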
Unfortunately, creating this ban is only a legal measure. There will come a time when criminals will use AI and Automation to deploy currency/capital nefariously. This rule will need to be in place to allow law enforcement to attempt to shut such a system down before it becomes harmful. Think about the Flash Crash of 2010, which eliminated $1 trillion of stock market value in little more than a few minutes. This is an example of the potential damage that could be inflicted on a market or prices, and, believe it or not, the Flash Crash of 2010 is probably a small, gentle example of downside risk.
One might ask who would be opposed to such a ban. There is a short and easy answer to that. It’s Sophia, the robot created by Hanson Robotics (and the small faction of people already advocating for machine rights), which was recently granted citizenship in Saudi Arabia and now “claims” to want to have a child. If a machine is allowed to control currency, with itself as a beneficial owner, then robots should be paid a wage for labor. Are we prepared to hand over all human rights to machines? Why are we building these machines at all if we don’t benefit from them and they exist on their own accord? Why make the investment?
Implementing this rule into KYC and AML rules is a good start to controlling the downside risk from autonomous systems. It is virtually harmless today to implement. If and when machines are sentient, and we are discussing granting them rights, we can revisit the discussion. For now however, we are talking about systems, built by humanity to serve humanity. Part of serving humanity is to control and eliminate downside risk. Machines do not need to have the ability to control their own capital/currency. This is easy so let’s get it done.
Let’s talk for a minute about the Robot Tax. Recently, it’s been a quick-fix concept espoused by Bill Gates and others, and in San Francisco and South Korea there is already talk of implementation. It’s the wrong idea, unless the United States truly wants to hinder robot implementation, in which case let’s hope the rest of the world agrees at the exact same moment to the exact same policy.
The idea of a “robot tax” comes from the fear of Technological Unemployment. Machines replacing human workers at many jobs, without an offsetting number of replacement jobs. I find this fear to be very reasonable as I have argued here: https://medium.com/@ForHumanity_Org/future-of-work-why-are-we-struggling-to-understand-the-disruption-e7661a2a0a81.
I also recognize that not everyone thinks that Technological Unemployment is a problem. However, there is one thing I do know. If Tech Unemployment is not a problem, then we will end up in a good place with minimal risk and an economy that has grown more efficient, creating many high-paying jobs. If, however, Tech Unemployment does come to pass, then we will be faced with some major societal challenges. Upside gain = POSITIVE, downside risk = VERY NEGATIVE. So with my risk management hat on, this blog post attempts to deal with the downside risk of Technological Unemployment by tackling the issue of “robot tax” and introducing an alternative solution - a United States Sovereign Wealth Fund (USSWF).
To begin, let’s examine some key assumptions for how Technological Unemployment may occur, recognizing that this is not a certain outcome. There are those who even argue that it won’t happen at all.
- Technological unemployment will be a process that takes years if not decades.
- We don’t know how and when people will become displaced. This is true at a company level, a sector level and across the economy as a whole, which I discussed here https://medium.com/@ForHumanity_Org/the-process-of-technological-unemployment-how-will-it-happen-489e2f8b037c
- We don’t know what new jobs will be created
- There is an implicit concern about rising income inequality as the owners of capital reap the rewards of automation, while labor receives a decreasing share — approaching zero
- There is a belief that technological unemployment will be a byproduct of significant growth in economic production and subsequent wealth attributable to the implementation of AI and Automation.
Technological Unemployment, as discussed in detail by Oxford (2013), PwC (2016), McKinsey (2017) and referenced directly by the White House report on AI and Automation (October 2016), is estimated to result in as much as 30–40% unemployment. Many people believe that this possibility is real, and as a result they are concerned about supporting the unemployed. This is the genesis of the “robot tax” concept. Here are some concerns about the idea:
- It picks winners and losers based on how easily things are automated
- It creates a huge competitive advantage for countries/jurisdictions that do not tax robots. Most companies competing in AI and Automation are global in nature. Machine implementation will simply be shifted to other jurisdictions where the tax impact is minimized, and so will the wealth creation associated with that machine implementation. Apple is a perfect example, with nearly $250 billion in cash overseas; nearly all of that offshore cash is a function of tax issues. This point is not debatable: it is a corporate responsibility to minimize taxes legally, just as Apple has done, and there is significant tangible history of corporations shifting production to jurisdictions where the tax treatment is favorable. A robot tax punishes companies for innovating and will drive wealth creation out of the United States.
- Bureaucracy: companies rarely “flip the switch” to automation on a Friday and ask a set of employees to stop coming in on Monday. Therefore, identifying human replacement by machines will be difficult, and policing a “robot tax” would require significant manpower. If you are like me, when you see something that is difficult to police, you probably also imagine a large bureaucracy trying to police it. Imagine the IRS: do we really need an IRTS (Internal Robot Tax Service) too? I suspect that is something most people would like to avoid. Taxes are difficult to implement on all companies, public and private.
- Timing and measurement mismatch — Another problem with the “robot tax” concept is timing. For example, Company A automates Person A out of a job and is therefore subject to a tax. Company A has its profit and capital reduced to pay the tax, but if this happens tomorrow, it is reasonably likely that Person A finds another job. Now the tax has reduced the capital allocation capability of the growing company and wasn’t put to good use supporting Person A, who didn’t require the funds. Instead, it sits with the government, which is surely an inefficient allocation of capital.
On top of all of this, no one has laid out the entire structure of a “robot tax”. Some have suggested banning the replacement of jobs by machines. Others have suggested a tax without much detail. In South Korea, where reports claimed the first “robot tax” had been enacted, it turned out to be all hype: the actual implementation was a decrease in tax INCENTIVES for the makers of automation.
In the end, I suspect a robot tax is simply a reflexive action suggested by people who are rightfully concerned but have no other alternatives to address their concern. Ideally this paper and the USSWF address those issues and prove to be a better tool to tackle the challenges.
Look, I get the theory: if you want to grow your company and make more profits for yourself by automating, let’s tax those profits because you are removing jobs from the workforce. However, a tax is punitive and redistributional. Why not participate with the company in its wealth creation, on behalf of that displaced worker? Practically speaking, if a worker were laid off by Google, Amazon or Apple but compensated in stock for the termination, there is a decent chance that the capital gains on the stock may offset the lost wages.
If you believe the concerns about Technological Unemployment and that a significant amount of work will be replaced by automation, we do need to prepare for higher, permanent unemployment. So what should be done? The answer, from my perspective, is the series of solutions listed below (of course, this is no small challenge and requires many policies to try to mitigate the risks). This list amounts to a comprehensive AI, Automation and Tech policy for the United States (where I have covered an item previously, I will show the link), but for the rest of the piece, we will concentrate on point #5:
- Lifetime learning to ensure we have the most nimble and current workforce available https://medium.com/@ForHumanity_Org/lifetime-learning-a-necessity-already-2e96807251db
- A laissez-faire, regulatory environment (as discussed by Andrea O’Sullivan and others) that encourages leadership in the area of AI and Automation, (one could argue, this exists today, but is unlikely to remain so lenient, which may eventually hinder the expansion) https://www.technologyreview.com/s/609132/dont-let-regulators-ruin-ai/
- Independent, market-driven audit oversight on key inputs to AI and Automation, such as ethics, standards, safety, security, bias and privacy. A model not dissimilar to credit ratings https://medium.com/all-technology-feeds/ai-safety-the-concept-of-independent-audit-370bb45c01d
- A plan for shifting current welfare and Social Security into a Universal Basic Income https://medium.com/@ForHumanity_Org/universal-basic-income-a-working-road-map-66d4ccd8a817, coupled with an increase in taxes on the wealthy to make the UBI funding budget neutral. In the future, NOT TODAY.
- Finally, a Sovereign Wealth Fund for the United States (USSWF) designed to maximize the benefits of the growth and expansion on behalf of all US citizens equally.
Since this post is introducing the Sovereign Wealth Fund as a means of maximizing the benefit of the comprehensive Technology Policy listed above, we will remain focused on it; the other elements of the policy remain open for debate elsewhere.
Returning to the idea of Technological Unemployment, we can all agree it would be a process and would take some time to occur. During that process, there are a few things that the United States should prefer to have happen. Here is a list of those positive outcomes:
- We want the United States to capture/create the maximum amount of the wealth creation from this process
- We want to be in a position to protect displaced workers immediately when they are no longer relevant to the workforce
- With regard to cutting-edge skills, we want citizens of the United States to be in a position to possess the requisite skills for new-age work.
- We want to begin to build a warchest of resources for the time when significant unemployment becomes more permanent (Sovereign Wealth Fund)
- As a country, we want to be rewarded for creating a favorable climate for automation and artificial intelligence
So with those goals in mind, we can examine how a sovereign wealth fund can help to achieve them and maximize the benefits to all citizens from the expected increase in AI and automation.
To be clear, the funding for a US Sovereign Wealth Fund (USSWF) is effectively a tax, so this is somewhat of a semantic argument, but the asset-based funding and goal-orientation of the USSWF are what differentiate it from the current discussions about a “robot tax”.
Sovereign Wealth Fund implementation: some of the highlights
- The USSWF immediately receives a 1% common stock issuance from every company in the S&P 500, followed by an additional 0.25% issuance each year for 16 years (a 5% total stake).
- Every new company admitted into the S&P 500 must immediately issue this 1% common stock into the fund and begin the annual contributions
- All shares included in the fund are standard voting shares with liquidity available from the traditional markets trading the underlying company common stock
- The USSWF is overseen by the US Congress
- The USSWF is controlled by a panel of 50 commissioners, one from each state, appointed by the governor of each state. The commissioner must be from a political party that the governor is NOT a member of. These are attempts to depoliticize the body and to identify citizens to serve the country.
- The USSWF exists for the benefit of all Americans equally, with a primary concentration of providing welfare/basic income support to US citizens
- The USSWF is a lock box and not part of the US Federal Government budget. The US Federal Government may appeal to the USSWF for funding associated with the mission of the USSWF
- Simplicity — implementation is easy, automatic and systematic
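To make the issuance schedule above concrete, here is a minimal Python sketch. The 1% initial issuance, the 0.25% annual follow-on and the 16-year horizon are the figures proposed above; the function name is just for illustration:

```python
def usswf_stake(years_elapsed: int) -> float:
    """Cumulative USSWF stake (in percent) after a given number of years,
    per the proposed schedule: 1% up front, plus 0.25% per year for 16 years."""
    initial = 1.0     # immediate 1% common stock issuance
    annual = 0.25     # additional issuance per year
    cap_years = 16    # annual contributions stop after 16 years
    return initial + annual * min(years_elapsed, cap_years)

for year in (0, 4, 8, 16):
    print(f"Year {year:2d}: {usswf_stake(year):.2f}% stake")
```

Run as written, the stake grows from 1.00% at inception to the 5.00% maximum after year 16, which matches the “5% max level” discussed later in the piece.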
Now, if this concept seems foreign to you, then you are clearly an American. Here is a list of the world’s largest Sovereign Wealth Funds over $100 billion.
Notice that much of the oil-rich world has already created a Sovereign Wealth Fund. Norway’s fund is the gold standard, providing a capital asset value of $192k per person. At a 5% income stream, that is nearly a $10k payment per person with capital preservation. That payment can go a long way towards supporting a population. That is the concept we are trying to achieve with the USSWF: a bulwark against future challenges, such as technological unemployment. But instead of oil, or any single resource, the citizens of the US should benefit from the entirety of the economy, a genuine justification of our consumer-oriented mentality and laissez-faire regulatory environment.
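The Norway arithmetic can be checked directly; the $192k per-person capital value and the 5% draw rate are the figures cited above:

```python
# Per-capita value of Norway's fund and the income it could throw off
# at a 5% annual draw rate, using the figures quoted above.
capital_per_person = 192_000   # USD of fund assets per citizen
draw_rate = 0.05               # 5% income stream with capital preservation

annual_payment = capital_per_person * draw_rate
print(f"${annual_payment:,.0f} per person per year")  # $9,600, i.e. "nearly $10k"
```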
As we try to capture the entirety of the US economy, we must find a liquid, tradeable proxy. I have suggested the S&P 500 as the right proxy for the US market, to avoid picking winners and losers directly. Automation and technological advances will happen in all industries and all markets, so the best way to be comprehensive is to use a benchmark of the whole economy. The S&P 500 is widely regarded as the best proxy for the US large-capitalization economy. Constituents of the S&P 500 are the most liquid stocks in the world, which will minimize the market impact of USSWF ownership. By having participation in all companies, the USSWF is in a position to reap the benefits of US economic growth, as represented by the stock market, on behalf of the citizens of the US.
There is also the opportunity to have the USSWF funded in additional ways, whether that be from oil royalties, carbon credits or other mutually beneficial societal resource dividends. These discussions are merited, but are beyond the scope of this paper today.
Some features and benefits of a USSWF:
- The initial funding is a tax on the wealthy (holders of stocks), which is consistent with a goal of redistribution and tempering the rising income inequality in the US. In some ways, it becomes a tax on foreign investors as well, which is an ancillary benefit.
- It keeps capital in the hands of capital allocators and away from the government until needed
- It increases participation in the US economy as measured by the stock markets. All US citizens would have a share of the stock market, up from the estimated 52% who hold one today.
- It creates a dynamic pool of assets with the opportunity to track the expected wealth creation resulting from AI and Automation. The USSWF is therefore well aligned with the goal of rewarding citizens for the possibility of having their jobs displaced
- A Sovereign Wealth Fund is far from new and has many precedents both foreign and domestic (public pension funds) upon which to build a robust program designed to maximize the benefit to US Citizens
- Alignment instead of antagonistic taxation. If the company wins, the country wins and even the displaced workers win.
- Compound return. Because the USSWF will not have a current cashflow demand, it will be allowed to grow for a pre-determined period, maybe 20 years. This will also maximize the benefits available to all current and future US Citizens
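The compound-return point can be illustrated with a short sketch. The 20-year growth window comes from the proposal above; the 7% annual return is purely an assumed figure for illustration, not part of the proposal:

```python
# Illustrative only: how a pool with no current cashflow demand compounds
# over a 20-year pre-determined growth period.
value = 1.0            # starting pool, normalized to 1.0
annual_return = 0.07   # ASSUMED average annual return, for illustration
years = 20             # growth period suggested in the text

for _ in range(years):
    value *= 1 + annual_return

print(f"{value:.2f}x the initial pool after {years} years")
```

Under that assumption the untouched pool grows to roughly 3.9x its starting value, which is the sense in which deferring any cashflow demand maximizes the benefit available to current and future citizens.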
Some areas of concern (these points will be discussed further below):
- Dilution. This is immediate dilution of the value of existing shareholders’ stakes, and the equity markets are not likely to be happy about it.
- Commingling of corporate and government interests
- Reliance upon capitalism and equity market growth to fund future challenges. This is both a risk/reward concern and, for some, an ideological one.
- Bureaucracy/control of the USSWF
- Increase in income inequality
Let’s tackle a few of these. Dilution will be a concern, and in a vacuum one could expect the stock market to make a significant move backwards on this announcement. However, one could argue that the establishment of a long-term, pro-automation policy and the avoidance of a “robot tax” might be beneficial enough to offset the expected dilution. Additionally, there is the benefit to the market as a whole from proactively addressing a pressing concern.
Commingling of government and corporate interests. These interests have been commingled since the formation of the country; however, the US has had an active policy of not becoming a shareholder of corporations (unlike most other countries), and this would change. At the 5% maximum level, I don’t see the government being excessively influential in a shareholders’ meeting. But it may be the implied result that is concerning: the implication that the government is now “pro-business” and thus must be anti-citizen or anti-worker. I recognize the concern, and this blends into point #3, reliance upon capitalism and equity markets. Unless we are prepared to change the nature of this country immediately, we are already firmly entrenched in capitalism, and we ought to maximize its effect in this endeavor. Capitalism has an excellent track record of wealth creation, but sometimes at the expense of those left behind, whether through income inequality or lax worker safeguards. In the case of the USSWF, at least the worker is better positioned to profit from capitalism than she would be otherwise. It is my belief that any other solution is too far afield from the current wishes of the population.
Finally, bureaucracy and control. The USSWF involves a great deal of assets, and that makes it powerful. That will attract bureaucrats and politicians who will look to control the USSWF. A 50-person panel provides more than sufficient expertise to make quality, non-political decisions in the interest of the American people, especially if the commissioners are deemed fiduciaries in the legal sense. Any attempt to attach a large staff to the USSWF should be met with great skepticism, as the mandate is the operation of an index fund tied directly to the S&P 500. It is a pool of assets designed to represent the US economy, not to create a target return. Any return associated with the assets is a function of economic growth or broad stock market appreciation only.
Increase in income inequality. As a result of trying to maximize the United States’ share of growth due to automation, much of the reward will accrue to the wealthy. This is a result of the reliance on capitalism and maximum capital allocation efficiency at the corporate level. This of course benefits the USSWF, but it benefits the owners of capital more (95% versus 5%) in the short run. To mitigate this problem, a crucial piece of the Tech, AI and Automation policy is a significant increase in taxes on the wealthy. Universal Basic Income, which in that future would be the only means by which the massive number of unemployed could survive, must be funded from somewhere. The USSWF would be in a position to provide a solid foundation for this payment, but it will be insufficient to meet the full demand. The owners of capital MUST recognize that their windfalls from a laissez-faire regulatory and tax regime will be called upon to benefit displaced workers.
No plan is perfect, but the USSWF is better than a “robot tax”, whatever that might turn out to be. The key to success with the USSWF or any other solution aimed at protecting US citizens from technological unemployment is to participate in the wealth creation. Other ideas along those lines would be welcome additions to the discussion. I hope that you will think deeply on these points, challenge the theories, make suggestions and further this debate.
The Roosevelt Institute recently did a study, which is being quoted everywhere, claiming that a $12,000 per annum Universal Basic Income (UBI) would increase GDP by 12.56% above baseline over the course of 8 years, after which the economy would return to baseline. Of course, the part that no one mentions is that this transfer payment is DEBT-FINANCED by the US Government. Using the same model, when taxes were used to pay for the UBI, there was no effect. So they adapted the model to account for distributional effects, but they made no similar adjustment for the impact on investment, because the only impact they examined was the marginal impact on consumption, not the impact on investment.
So let me get this straight: if the US Government borrows a ton of money and gives it to people to spend, then the economy will be stimulated… I’m shocked! ANY program where the government borrows that much money will project to stimulate the economy, whether it is a tax cut for the rich (because these days only about 49% of the population pays income taxes… so by definition), a huge infrastructure program, a huge energy program or a huge defense program. Money into the system creates GDP. One can debate its effectiveness, but I am not interested in that. The point here is that proponents of UBI need to stop citing this study. There is no way that a UBI program will be done in this country with debt financing… zero chance.
UBI will need to figure out how to represent itself. Here are some key questions for UBI; if we can all embrace the suggested answers, it will go a long way toward keeping the story focused and making it palatable for people who oppose UBI.
- Is it inherently stimulative (no)?
- Is it redistribution (yes)?
- Is it a possible answer for a world where the future of work is in doubt (yes)?
- Does it attempt to achieve some basic societal goals of food, clothing and shelter for all people (yes)?
- Is it possible that UBI has a negative impact on GDP (yes)?
- Does it matter if UBI may negatively impact the GDP if it achieves the societal goals and we have growth due to technology (no)?
- Should we consider UBI, like food stamps, which can only be used for the intended purposes? (no)
- Does UBI have knock on psychological positive effects (unknown)?
Let’s tackle a few of these points, starting with #1. In the Roosevelt Institute study, the stimulative effect was due to the borrowing, not the concept of UBI, so we should stop quoting GDP stimulus ASAP. Now, if UBI through redistribution is considered, there are merits to both sides of the argument, and I doubt that our models are sensitive enough to discern a positive or negative effect. First, if the wealthy are saving their money and NOT deploying it productively via capital investments, then taking wealth from them in order to give it to the poor makes brilliant sense: the poor will use it to buy goods, which would increase GDP through consumption. However, the evidence is probably the opposite, which means that this capital has had a knock-on effect of stimulating more GDP growth than the likely effect of purchases of food, clothing and shelter. On top of that, there is no certainty that money for the poor will always be used productively. We have an opioid problem, gambling and other non-productive uses of UBI that should be concerning. If a portion of the UBI floats into unproductive spending, then GDP takes a second hit: one hit from the negative redistribution of capital and a second from unproductive expenditures. All in, I think UBI would be fortunate to break even on GDP impact. Thus we should avoid suggesting it is stimulative, since that likely only hurts credibility.
#2 Is it redistribution? Of course it is. UBI should embrace this concept, especially in light of the huge increases in income inequality around the globe.
This is a disgusting graph. The world should be ashamed to allow this to happen. But, but, but, we have three choices when it comes to redistribution: don’t do it, do it partially or do it completely. There is an argument for not doing it. It is as follows: the overall wealth of the world has increased dramatically, and thus the percentage of people living in poverty has fallen dramatically. So there is a robust argument that we should continue to allow capital to flow to the wealthy, allow them to grow global GDP and allow trickle-down improvements to the poor. It has worked, as you can see in the chart below showing the number of people living in poverty (using a few different definitions of poverty).
But it has not worked fast enough for many, which is what humanitarians argue. They may be right; I can’t be certain. That leads us to our second choice, partial redistribution. Allow capital to flow and the rich to get richer (because it creates growth that we ALL can participate in), but take a portion, enough to buy the poor up to the poverty line for starters, covering food, clothing and shelter. Essentially, trying to have our cake and eat it too: leave the majority of the capital where it has proven to be productive, but earn the societal good of eradicating poverty. Sounds good, but it is not likely to be that simple. Taking a meaningful portion of capital from the wealthy allocators will have a leveraged downside effect on the economy, and we simply cannot know how negative it will be. We have already discussed the challenge of steering UBI toward 100% productive usage on food, clothing and shelter; failure to achieve that reduces the value to GDP commensurately. Our final choice is complete redistribution, which is both impractical and ineffective and not worth writing more than a sentence about: humans crave power and will always try to achieve, thus we will have income inequality. The point of this entire discussion is that we need to try to find the optimal amount of redistribution.
#3 The reason that ForHumanity supports UBI is because we anticipate machines replacing most human input into work. In a world where the available jobs are substantially fewer than the number of people, we must create a society that supports all of its members with the basic necessities; otherwise anarchy and revolution become the only choices. As much as humans crave power, they crave survival more. They will do what it takes to survive, even if that includes setting aside civil society. Is UBI the only answer to a world with far fewer jobs than people? ForHumanity remains open to other suggestions.
#4 UBI can target transfer payments to individuals to cover food, clothing and shelter. To be honest, that should be good enough. It is a wonderful goal for a society to ensure that all citizens have the basic necessities of life. Eradicating hunger and homelessness should be priorities; that is hardly debatable. It only becomes a debate when two things happen: government gets involved to “administer” the goal, and those with the wealth are taxed for it. One way to begin the discussion is with the latter. Silicon Valley billionaires can advocate for the power of UBI, but they ought to volunteer their own wealth to pay for it in the same breath. Until they do, it will remain an elusive goal. With respect to government, it becomes a political tool: “how to implement”, “funding for administration”, “who gets what and where and when”. When you examine Social Security, it had a similar mission and altruistic goal, and now it is a political nightmare. Attempting to pass the most logical of adjustments, like raising the qualification age (associated with the unanticipated rise in life expectancy), is impossible simply because politics gets in the way. So, to attempt to deal with the political ramifications, I tried to tackle some political reform as well; see the link.
I approached UBI as a transfer payment offered in exchange for a 5-year commitment to government service that would be required of all citizens between the ages of 18 and 30. The brief version of the theory is that it would “justify” the payment for life to the UBI detractors who worry that money for “free” is a bad idea for a variety of reasons. Second, it substantially helps to pay for UBI by lowering the cost of running government at all levels. In the end, government is needed to administer the policy, and politics will make it difficult to implement well.
#5 Could UBI have a negative impact on GDP? I believe that it would, based on the reduction-of-poverty argument already presented. While I am certain that a large percentage of the UBI payment would be consumed specifically on food, clothing and shelter, those are all consumption items and provide a one-for-one impact on GDP, whereas capital treated as investment can be leveraged into investments that result in the expansion of profit and wealth. I’ll give you an example. Suppose $1,000 is taken from a wealthy individual’s savings and transferred to me, and I buy $1,000 of food. I am fed, and the food companies and grocery stores have an additional $1,000, but the wealthy individual has $1,000 less in savings and therefore makes $1,000 less investment in something. If instead that wealthy individual invests the $1,000 in a widget company, which pays an employee $800 and invests $200 in equipment to sell more goods and services, such that the company produces $2,000 worth of product, then the investment has grown wealth far more than the transfer payment is capable of. This is the argument of capitalism, and the numbers have borne out the argument.
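The worked example can be sketched in a few lines of Python. The $1,000 transfer, the $800 wage / $200 equipment split and the $2,000 of output are the illustrative figures used above, not data:

```python
# First-round GDP impact of two uses of the same $1,000, per the example above.
transfer = 1_000

# Case 1: the transfer payment is consumed entirely on food.
consumption_gdp = transfer          # one-for-one impact: $1,000 of consumption

# Case 2: the same $1,000 is instead invested in a widget company.
wages = 800                         # paid to an employee
equipment = 200                     # invested in equipment
output = 2_000                      # value of product the company then produces
investment_gdp = output             # the capital is leveraged into $2,000 of output

print(f"Consumption path: ${consumption_gdp:,}  Investment path: ${investment_gdp:,}")
```

The point of the sketch is only the ratio: under these assumed figures, the invested dollar produces twice the first-round output of the consumed dollar.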
#6 Does it matter if the economy is negatively impacted? Well, that depends: have automation and AI increased GDP by more? How negative is the impact? Remember, we have earned a series of societal goods by implementing our UBI payment, especially if the future of work leaves us with many who require their UBI to survive. We also may create enough wealth to achieve our goals. This is more a question of timing than anything else, but a small hit to GDP is worth it for the good that UBI can do.
#7 Should UBI be portioned out, similar to Food Stamps (SNAP), in order to prevent misuse? As covered in the article below, misuse of Food Stamps has generally been considered a fallacy. The vast majority of the program works effectively (less than a 2% fraud rate) and has been used for food. While it is not unreasonable to believe that misuse might increase when a UBI payment is larger, the concern may not be great enough to merit forced participation. This also helps to avoid a large bureaucracy designed to police the implementation of the policy, which is probably more valuable.
#8 Does UBI have knock-on positive psychological effects? Here I think there is some possibility. There are many studies being done now, in microcosm, to try to identify the effects. But I am afraid that the “in microcosm” and non-permanent basis of all of these studies is highly distortive: most of these people are working while receiving the payment. When UBI is the only choice, people will react very differently. When the future of work means limited job opportunities, reactions will be different. I suggest that we temper the “UBI makes everyone feel great” feedback. UBI is a necessity for a future where work is scarce and people may struggle to meet the basic necessities of life.
UBI may be a great next step in our system of basic welfare. It may represent the best way for the wealthy to support the world around them and ensure that the basic necessities of life are available to all. But I caution proponents of UBI to avoid overreaching on the “merits” of UBI; in the end, I think that damages the credibility of a potentially necessary program.
For months I have regularly tweeted a response to my colleagues in the AI Safety community about being a supporter of independent audit, but recently it has become clear to me that I have insufficiently explained how this works and why it is so powerful. This blog will attempt to do that. Unfortunately, it is likely to be longer than my typical blog, so apologies in advance.
Independent audit defined: ForHumanity, or a similar entity that exists not for profit but for the benefit of humanity, would conduct detailed, transparent and iterative audits on all developers of AI. Our audit would review the following SIX ELEMENTS OF SAFEAI on behalf of humanity:
- Control — this is an analysis of the on/off switch problem that plagues AI. Can AI be controlled by its human operators? Today it is easier than it will be tomorrow as AI is given more access and more autonomy.
- Safety — Can the AI system harm humanity? This is a broader analytic designed to examine the protocols by which the AI will manage its behavior. Has it been programmed to avoid human loss at all costs? Will it minimize human loss if no other choice? Has this concept even been considered?
- Ethics/Standards — IEEE’s Ethically Aligned Design is laying out a framework that may be adopted by AI developers to represent best practices on ethics and standards of operation.
There are 11 subgroups designing practical standards for their specific areas of expertise (P7000 groups). ForHumanity’s audit would look to “enforce” these standards.
- Privacy — Are global best practices being followed by the company’s AI? Today, to the best of my knowledge, GDPR in Europe is the gold standard of privacy regulation and would be the model that we would audit against.
- Cyber security — regarding all human data and interactions with the company's AI, are your security protocols consistent with industry best practices? Are users safe? If something fails, what can be done about it?
- Bias — Have your data sets and algorithms been tested to identify bias? Is the bias being corrected for, and if not, why not? AI should not result in classes of people being excluded from fair and respectful treatment
The criteria that are analyzed are important, but they are not the most important aspect of independent audit. Market acceptance and market demand are the keys to making independent audit work. Here’s the business case.
We have a well-established public and private debt market, well over $75 trillion US dollars globally. One of the key driving forces behind the success of that debt market is independent audit. Ratings agencies like Moody’s, Fitch and Standard & Poor’s have for decades provided the marketplace with debt ratings. Regardless of how you feel about ratings agencies or their mandate, one thing is certain: they have provided a reasonable sense of the riskiness of debt. They have allowed a marketplace to be liquid and to thrive. Companies (issuers) are willing to be rated, and investors (buyers) rely upon the ratings for a portion of their investment decisions. It is a system with a long track record of success. Here are some of the features of the ratings market model:
- Issuers of debt find it very difficult to issue bonds without a rating
- It is a for-profit business
- There are few suppliers of ratings, which is driven by market acceptance. Many providers of ratings would dilute their value and create a least-common-denominator approach: issuers would seek the easiest way to get the highest rating.
- Investors rely upon those ratings for a portion of their decision making process
- Companies provide either legally mandated or requested transparency into their financials for a “proper” assessment of risk
- There is an appeals process for companies who feel they are not given a fair rating
- The revenue stream from creating ratings allows the ratings agencies to grow and rate more and more debt
Now I would like to extrapolate the credit rating model into an AI safety ratings model to highlight how I believe this can work. However, before I do that there is one key feature of the ratings agency model that MUST exist for it to work. The marketplace MUST demand it. For example, if an independent audit was conducted on the Amazon Alexa (this has NOT happened to date) and it failed to pass the audit or was given a subsequent low rating because Amazon had failed some or all of the SIX ELEMENTS OF SAFEAI, then you, the consumer, have to stop buying it. When the marketplace decides that these safety elements of AI are important, that is when we will begin to see AI safety implemented by companies.
That is not to say that these companies and products are NOT making AI safety a priority today. We simply do not know. From my work, there are many cases where they are not; however, without independent audit, we cannot know where the deficiencies lie. We also can’t highlight the successes. For all I know, Amazon Alexa would pass the SIX ELEMENTS OF SAFEAI perfectly today. But until we have that transparency, the world will not know.
That is why independent audit is so important. Creating good products safely is a lot harder than just creating good products. When companies know they will be scrutinized, they behave better — that is a fact. No company will want a bad rating published about it or its product. It is bad publicity. It could ruin their business, and that is the key for humanity. AI safety MUST become a priority in the buying decision for consumers and business implementation alike.
Now, a fair criticism of independent audit is the “how” part, and I concur wholeheartedly, but it shouldn’t stop us from starting the process. The first credit rating would have been a train wreck of a process compared with the analysis conducted by today’s analysts. So it will be now, but the most important part of the process is the “intent” to be audited and the “intent” to provide SAFEAI. We won’t get it perfectly right the first time, nor will we get it right every time, but we will make the whole process a lot better with transparency and effective oversight.
Some will argue for government regulation (see Elon Musk), but AI and the work being done by global corporations have already outstripped national boundaries. It would be far easier for an AI developer to avoid scrutiny that is nationally focused than a process that is transnational and market-driven. Below is a list of reasons that market-driven regulation, which this amounts to, is far superior to government-based regulation:
- Avoids the tax haven problem associated with different rules in different jurisdictions, which simply shifts development to the easiest location
- Avoids government involvement, which has frequently been sub-optimal
- Allows for government based AI projects to be rated — a huge benefit regarding autonomous weaponry
- Tackles the problem from a global perspective, which is how many of the AI developers already operate
- Market driven standards can be applied, rather than bureaucrats and politicians using the rules as tools to maintain their power
Now, how will this all work? It’s a huge endeavor and it won’t happen overnight, but I suspect that anyone taking the time to properly consider this will realize the merits of the approach and the importance of the general issue. It is something we MUST do. So here’s the process I suggest:
- Funded - we have to decide that this is important and begin to fund the audit capabilities of ForHumanity or a like-minded organization
- Willingness - some smart company or companies must submit to independent audit, recognizing that today it is a sausage-making endeavor, but one that they and ForHumanity are undertaking together to build this process and to benefit both humanity and their product by achieving SAFEAI
- Market acceptance - this is a brand recognition exercise. We must grow awareness of the issues around AI safety and have consumers begin to demand SAFEAI
- Revenue positive, Licensing - once the first audit is conducted, the company and product will be issued the SAFEAI logo. Associated with this logo is a small per-product licensing fee (cents) payable to ForHumanity or a like organization. This fee allows us to expand our efforts and to audit more and more organizations. It amounts to the corporate fee payable for the benefits to humanity. (Yes, I know it is likely to be passed on to the consumer)
- Expansionary - this process will have to be repeated over and over and over again until the transparency and audit process become pervasive. Then we will know when products are rogue and out of compliance by choice, not from neglect or the newness of the process
- Refining and iterative - this MUST be a dynamic process that is constantly scouring the world for best-practices, making them known to the marketplace and allowing for implementation and upgrade. This process should be collaborative with the company being audited in order to accomplish the end goal of AI safety
- Transparent - companies must be transparent in their dealings
- Opaque - the rating cannot be transparent about the shortcomings, in order to protect the company and the buyers of the product who still choose to purchase and use it. It is sufficient to know that there is a deficiency somewhere; there is no need to make that deficiency public. It will stay between ForHumanity and the company itself. Ideally the company learns of its deficiency and immediately aims to remedy the issue
- Dynamic - this cannot be a bureaucratic team; it must be composed of some of the most motivated and intelligent people from around the world, determined to deliver AI safety to the world at-large. It must be a passion, it must be a dedication. It will require great knowledge and integrity
- Action-oriented - I am a doer; this organization should be about auditing, not about discussing. Where appropriate and obvious, as in the case of IEEE’s EAD, adoption should supersede new discussions. Take advantage of the great work already being done by the AI safety crowd
- And it has to be more than me - As I have written these words and considered these thoughts, it is always with the idea that it is impossible for one person, or even a small group to have a handle on “best-practices”. This will require larger teams of people from many nationalities, from many industries, from many belief systems to create the balance and collaborative environment that can achieve something vitally important to all of humanity.
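To make the mechanics above concrete, here is a minimal sketch of how an audit record, the opaque public rating, and the per-product licensing fee might fit together. Everything here is hypothetical: the element names are placeholders standing in for the actual SIX ELEMENTS OF SAFEAI, and the company, product, and fee figures are invented for illustration.

```python
from dataclasses import dataclass

# Placeholder names only -- the real SIX ELEMENTS OF SAFEAI are defined
# in the ForHumanity framework, not here.
SAFEAI_ELEMENTS = [
    "ethics", "standards", "safety",
    "accountability", "transparency", "control",
]

@dataclass
class AuditResult:
    company: str
    product: str
    results: dict          # element name -> True if the element passed
    per_unit_fee_cents: int  # hypothetical cents-per-unit licensing fee

    def passed(self) -> bool:
        """A product earns the SAFEAI logo only if every element passes."""
        return all(self.results.get(e, False) for e in SAFEAI_ELEMENTS)

    def public_rating(self) -> str:
        """Opaque by design: the market sees pass/fail, not the specifics."""
        return "SAFEAI certified" if self.passed() else "Not certified"

    def private_report(self) -> list:
        """Deficiencies stay between the auditor and the company."""
        return [e for e in SAFEAI_ELEMENTS if not self.results.get(e, False)]

    def licensing_fee_dollars(self, units_sold: int) -> float:
        """The small per-product fee that funds further audits."""
        return self.per_unit_fee_cents * units_sold / 100
```

Note how the design separates `public_rating` (what the marketplace sees) from `private_report` (what only the auditor and the company see), mirroring the Transparent/Opaque pairing in the list above; a 2-cent fee on a million units would fund roughly $20,000 of further audit work.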
Please consider helping me. You don’t have to agree with every thought. The process will change daily, the goalposts will change daily, “best-practices” will change daily. You only have to agree on two things:
- That this is vitally important
- That independent audit is the best way to achieve our cumulative goals
If you can agree on these two simple premises, then join with me to make these things happen. That requires that you like this blog post. That you share it with friends. That you tell them that it is important to read and consider. It requires you to reconsider how you review and buy AI-associated products. It requires you to take some action. But if you do these things, humanity will benefit in the long run, and I believe that is worth it.
I think I am writing this for myself. I hear about advancements in science that can stimulate and re-activate neurons and I think, Amazing! I listen to Tom Gruber talk about memory technology linked directly to your head and I think, no way, Big Brother to the nth degree. Are these very different? Should I have such extreme views? Or am I just nuts? (N-V-T-S for the Mel Brooks fans out there).
I seem to be drawing a line between augmentation and stimulation. Between technology and medicine. Maybe that’s not a fair line, but I draw it anyway. I recognize that my line is not the line for everyone, so please don’t take this as some edict or law that I suggest. I know that transhumanism will prevail and people will put tech in their bodies.
Scientists May Have Reactivated The Gene That Causes Neurons To Stop Growing
In Brief: Scientists have found a way of reactivating genes in mice to continue neuron growth. The development could be… (futurism.com)
So why do I draw a line between drug, chemical and external stimulus versus internal, permanent implantation? And where does that line start and stop?
One differentiation that may be key is that medicine is an isolated interaction versus ongoing connectivity. Let me try to explain.
When I take medication or a medical treatment that is not implanted, it is a finite decision. Once it enters my body, it either works or it doesn’t, but regardless, the medication is done interacting with the outside world. It is me and the aftereffects, positive or negative. With augmentation/implantation, that device is permanent and it is permanently connected to the outside world. Those are vastly different ongoing relationships. One is distinctly private, the other is inherently public.
This makes a big impact on your value judgment, I suspect. When you take a drug, medication or course of treatment, it’s a one-time interaction with the dosage. One-time interaction with the outside world, essentially invading your body. There is no need for ongoing communications (outside of results-based measurements). It’s between you and your body. Any information obtained by the outside world would have to be granted.
The introduction of tech, whether implanted devices or nanotechnology designed to treat disease, results in a diagnostic relationship between you and the outside world that is no longer controlled by your brain and five senses; in fact, that communication is completely out of your control. I believe that is the key distinction and exactly where my discomfort lies — control.
Now, the purveyors of these technologies will argue that you have control, but it can never be guaranteed. The only way to guarantee it would be to eliminate 100% of the communication between your body and the outside world. Nothing is perfectly secure when it comes to technology.
But I also said that it affects your value judgment. This is not a black or white issue. I agree with Tom Gruber and Hugh Herr: if I had a disability and the best way to be restored to full human capability was via augmentation or implantation, I suspect I would choose the tech, accepting the risk that my tech could be compromised to my detriment, simply because my immediate need has been met. I have never had a debilitating disability, but I believe that is the choice I would make and that many people would agree. In the end this is about choice, and I believe that all should have the right to choose in this matter. But this particular choice is a value judgment focused on rehabilitation. I think the equation changes when we are talking about creating super-human traits.
To be clear, super-human traits are those senses or bodily functions that surpass human norms of that sense or function. So in the pursuit of super-human traits, I am happy to accept the introduction of artificial medicines into my body, because they do not create an ongoing communications vehicle with the outside world that may be hijacked. If that medicine can increase my longevity, clear up cloudiness in my brain, make my blood vessels more flexible, then I will take the risk of side effects in order to be super-human in that respect.
But pacemakers present an interesting gray area. Traditionally, a pacemaker would be returning me to normal human-level function. But what if we could each receive a pacemaker at birth that perfectly handled cholesterol, high blood pressure and heart function, completely eliminating heart-related death? Would that be a tech I would accept? I would be making my cardiac functions super-human, but exposing myself to the risk that the device could be hijacked or damaged in its communication with the outside world.
A few distinctions: first, such a device probably doesn’t directly tie into the communication between my body and the outside world. Second, the risk of nefarious interaction with the device probably only limits me to natural human performance, taking away my super-human heart or movement. So it would appear that the risk may be worth the potential reward. However, the risk of hijacking is already increased. Could my heart be stopped? That’s a serious consequence.
Now, let’s put that tech in our brains, or any of our five senses, or our mobility. The rewards have been espoused by many already. Personally, I can see the benefits, but now the risk is just too high. The constant communication between my senses or my brain and the outside world opens me up to hacking. It has the potential to change me, to change how I perceive the world. I can be manipulated, I can be controlled. It places my sense of self at risk. In fact, the positive operations of the augmentations themselves change who I am at my core. And thus, I draw my line…
By drawing this line, I create consequences for myself. Transhumans will pass me by, maybe not in my lifetime, but these consequences apply to all who share my line in the sand. I will struggle to compete for jobs. I will not have a perfect memory. Fortunately, those things don’t define who I am, nor where I derive happiness and I hope that others will remember that when the immense peer pressure to implant technology enters your life.
In the end, I find humanity, with our flaws and failings, to be perfect as is, beautifully chaotic. So this concept of weakness and fear being played upon by advancing technology feels sad and contrived, sort of like the carnival huckster playing on your fear that you aren’t smart enough, that your memory has some blank spots, that you struggle to remember people’s names when you meet them. It’s through our personal chaos, our personal challenges and our personal foibles that we find our greatest victories and our most important lessons learned. I see a path for technology that would aim for Utopia and leave us dull, dreary and bored, automatons searching for the next source of pleasure. I see implanted brain and sense technology controlling us, not delivering great joy and happiness in our lives.
I guess that is the problem with all of the “intelligence chasing”. The implication is that none of us are smart enough. Professor Hugh Herr thinks that we all should have bionic limbs; why not be able to leap tall buildings in a single bound? Elon Musk sees it as obvious that we won’t be able to compete with machines because our analog senses are too slow. So he aims to increase the data extraction capabilities of our brains.
What is all this crap for? If we have a single goal in life, isn’t it something akin to happiness, or joy, or peace, or love? Please explain to me how being smarter gives you peace. Or how being able to jump higher or run faster gives you joy.
Some will try to answer those questions as follows: If you are smarter and have a better memory, you can get a better job and make more money. If you have stronger legs you can run faster, jump higher, be better at sports or manual labor. But let’s take the sports example: if you have “bionic” legs and win a long jump contest, will that be gratifying at all? Especially if the competitors are not augmented? My reaction is, “who cares”; you should have won… boring.
Regardless of my view, there will be people who find happiness in beating others. There is a distinct difference between the joy of victory in a friendly competition and the joy you feel when you are taking the last job available because you have an augmented brain and the other applicants do not. Quite honestly, I would find the latter shameful, but we live in a society that rewards and cheers the person willing to “invest” in their future. This last point is the reason that transhumanism will advance mostly unfettered. Humans are competitive, ruthless and self-centered enough that some people will choose augmentation in order to be superior. Society will laud people who do so because we are convinced that more technology, more brains, more assets and more skills will lead to happiness. I support your right to augment your brain, to make yourself superior to others, and I support your right to make bad choices; I believe this will be one of them.
The field of medicine seems to be a touchy one for people with regards to the automation of how it is practiced. We struggle to get comfortable with the idea that machines could somehow replace the human doctors with whom we trust our lives and the lives of our children. It’s that fear that prevents us from taking the intellectual steps to realize that doctors, nurses and most medical services can and will be automated. If it helps, picture your favorite sci-fi movie where we all get dropped into a pod, which both analyzes and fixes us in a second. If that seems nice, convenient and reliable, then you’ll be in a better place to realize that there are steps to get from here to there. In this blog post, I will aim to show you the cumulative steps that the medical industry is taking to automate. I suspect you’ll be amazed to see them put together, all in one place…
Let’s start with some amazing videos
Look at the amazing precision that robotic surgery is capable of. There is still a human involved in this particular operation, but the movements can be mimicked in the same way that the Moley kitchen mimics the movements of a 5-star chef. Code will be drafted to make the technique of each surgery precise and perfect. Humans will remain involved to “oversee” operations until the machines prove that they have the adaptive capability to react to unforeseen difficulties, but each time that happens, machine learning adds it to the repertoire.
Now we use doctors for a lot more than surgery. Diagnosis is a critical part of what a GP or specialist needs to provide their patients. Here are some examples of AI at work in diagnosis.
This Artificial Intelligence was 92% Accurate in Breast Cancer Detection Contest
In Brief: Scientists trained an AI machine to detect breast cancer in images of lymph nodes. A group of researchers from… (futurism.com)
Patients' illnesses could soon be diagnosed by AI, NHS leaders say
Computers could start diagnosing patients' illnesses within the next few years as artificial intelligence increasingly… (www.theguardian.com)
Again, many of these tools are currently designed and marketed as assistants to doctors, mostly because the providers of these products realize that patients aren’t ready to give themselves over completely to machines. But in reality, the practitioner is taking the machine’s diagnosis and telling you what it is. I’m certain we can have a calming AI voice read out the results for you. A further argument companies make for these systems is that the doctor PLUS the AI is more reliable than either alone. A solution for that, of course, is two different AI processes, but again, is the patient ready to trust a machine-only process? Lastly, if you don’t believe that these diagnostic procedures will improve, then maybe you haven’t been paying attention to ANY technological advancements.
How else do we interact with doctors? Regular checkups often include blood tests, which of course are easily automated, making them simpler and more thorough.
Physical evaluation — your smartphone and personal health trackers can provide enormous amounts of data about your day-to-day health, sleep, nutrition and physical activity.
23andMe and other services allow you to examine your DNA. Combined with the technology of CRISPR and advances in gene therapy, we are already having discussions about treating disease before it manifests.
Maybe our doctors are just there to provide advice, because of course, they will know 100% of the latest technological advancements, newest drug treatments, how to avoid negative drug interactions, the latest physical therapy techniques, nutritional impacts and treatments and all the relevant info on you from your various specialists — oh wait… yeah that’s a machine, not a human.
It would seem we are left with the doctor-patient relationship. For many doctors, this is a genuine strength and in the future will become a key differentiation for them. But I have also heard of plenty of disappointed patients, who probably can’t wait to trade their doctor in for comprehensive machine interaction. In all of my work on job replacement by automation, I always leave room for the human element. There will be people who choose to acquire services from humans, because they are human (at least as long as we can know the difference). So certain doctors will find work because of “who they are and how they relate to others”, which of course is a great thing. However, most doctors will not survive automation. I personally expect to see a phase out beginning with the generation of our youth today, so roughly 20–30 years.
Lastly, there are cost and convenience factors. Machines work 24/7, doctors do not. Machines are capital investments; doctors have ongoing salaries that rise nearly every year. In a world that is reaching crisis levels with health care costs, AI and Automation will continue to be solutions for rising costs and will likely force our acceptance of automated medicine much more quickly than anticipated. The replacement process will not be linear, nor will it happen all at once. Rather, you will see an increased use of technology around you during visits. Certain services and tests will be introduced as fully automated, but with doctors and practitioners nearby to supervise, until enough time goes by without problems that patients are comfortable. But eventually, machines will replace most of the functions of a doctor. If our doctors, the people we trust with our most valuable asset - our lives - can be replaced, then what won’t automation be able to replace?