
Universal Basic Income, Capitalism and Christianity - Can We Reconcile the Three

I was born and raised in the West, steeped in Capitalism, market economies and the power of supply and demand. As I began to consider the concept of Technological Unemployment, I wrote about Capitalism having an “end-game”.

https://medium.com/@ForHumanity_Org/capitalism-aritifical-intelligence-robotics-socialism-universal-basic-income-740cc3f1c41e

I believe that remains true. I believe that if left to its own devices, with technological advancement, Capital (as in Capital v Labor) would choose to eliminate labor from its cost equation, resulting in 100% of profit left for Capital. You might argue that 100% capital and 0% labor is too extreme, and I agree. There will always be roles/work for humans to do, based upon the skills that humanity retains which machines cannot replicate, even if that is limited to their “humanness”. In this piece, I am using the extreme example to highlight a risk, not predict an exact future. Capital is incentivized to eliminate labor from its cost structure. AI and Automation are capital investments that can replace labor; therefore, I expect Capital to increase investment in AI and Automation, which will likely result in significant unemployment, at least as it relates to jobs that pay a salary.

To complete the ideological triumvirate, I was raised and subsequently chose to be a Christian, which defines the core of my morality. I am not asking you to agree with my morality, just to understand that my moral choices come from this background as I try to reconcile these concepts. With that as a foundation, I decided to host a backyard BBQ, where the pre-announced topic was Universal Basic Income, Christianity and Capitalism, reconciling the three ideologies. I invited good friends and was not attempting to make this a “comprehensive and statistically significant focus group”; instead I wanted to just talk and debate and see if we could learn a few things and achieve some level of consensus. It was a lovely dinner, the talk and questions were challenging, and while we wandered a little bit into the weeds, as all good conversations tend to, we actually did find some key points upon which we generally agreed, even if the details remained a little debatable or ambiguously defined.

So I present for your consideration the results of this discussion. It should be noted that the crux of the discussion was about UBI, and thus what follows is a discussion about UBI, influenced by our similar capitalistic (Western) backgrounds and by our shared Christian faith. I believe this can be a useful guide for others as to how we considered some of the challenges presented by these three ideologies and where we landed. I do not expect that all will exactly share these beliefs, but rather take this as one version of the discussion for you to consider.

A few bullet points:

  1. Belief that we have a moral responsibility, as a community, to care for the poor and those who cannot take care of themselves. This is absolute and a core principle based on our Christian faith.
  2. Belief that “risk and reward are linked, greater risk should equate to greater potential reward and vice versa” is a biblical concept. It need not apply only to money and capital; in the Parable of the Talents, failure to “invest” Talents is considered sinful. This was discussed in the context of all behavior. Taking risk deserves reward, but may also lead to failures, which is okay. Our understanding of the parable is that we should take risks with the assets that we have - we should invest. The group voiced a concern that UBI may lead to risk-averse behavior of all kinds, notably a lack of investment. Many UBI proponents talk out of both sides of their mouth on this subject, which is why we spent time on it. On one hand, they criticize those who have taken great risk, sometimes with time, effort, work/life balance sacrifice, capital or even reputation, frequently attributing their success instead to inheritance or unfair exploitation. Then they suggest that a UBI will lead all people to be more entrepreneurial because their downside risk is floored with the UBI; in other words, they will take risks. Either risk and reward are linked at all levels or they are not. You can’t reward “UBI entrepreneurs” with profits and begrudge the wealthy who may have already earned their profits. Not to mention those middle- to upper-class members who just plain worked ridiculously hard. Something that used to be called the “American Way”. The group felt that a UBI, on a mass scale, would reduce the appetite for risk amongst the mass population, even if a few were emboldened. They did not accept the premise that UBI would lead to greater entrepreneurialism.
  3. Belief that work and participation in your own survival is a human responsibility, both to yourself and to your community. The group did not believe in a “right to survive”. They support the “right to participate in your own survival”. The group believes that the community is responsible for caring for those who are “unable to participate in their own survival”. This might be a semantic argument, but the point for us was clear. Survival is not guaranteed; it must be worked for, and that is the nature of life. In fact, the idea that anyone had a guaranteed right to survive was generally considered illogical.
  4. The group did not require Universal to mean that 100% of people must receive the benefit fully. They were supportive of the idea that high income earners could have their basic income effectively fully taxed, which of course reduces the cost of implementation. The group felt that it should be “means-tested” on both ends. The wealthy should be taxed on their UBI to lower the cost of the program. But on the receiving end, all should work who are able. This is a moral decision based in the belief that providing for ourselves, our family and our communities is our responsibility. They further felt it was appropriate to determine “who is able” as a community. Implicit in this point is the “ability to work”. If work disappears, then that reduces one’s ability to work. The group flat-out rejects the notion of a “right NOT to work”. That of course is not the same as “you must have a job and be receiving pay”. The group roundly supports the value of “unpaid jobs” such as stay-at-home caretakers or volunteers.
  5. In the context of substantial technological unemployment, the group understood and accepted the idea that Universal Basic Income might be the only option. No other alternative has yet been offered.
  6. There was genuine concern about UBI and unintended consequences, such as laziness, forced re-location and subsequent low-income housing concentration and negative feedback loops. Some of the group were familiar with UBI studies and their “smallness” and “terminal value”. They recognized that behavior associated with these tests is not likely to compare to behavior in a world that MUST rely on UBI, such as the conditions that might come to pass under technological unemployment. Therefore, they reject the notion that we “know” how people would react under a comprehensive and necessary UBI program, reverting to concerns that it would not encourage work of all kinds.
  7. Following on from that point, one who is able must work, whether they like the work or not. Here work is defined as “putting in effort” to participate in one’s survival, or to execute the will of the community if the community is providing the support. This is different from a “job”, which is associated with pay or a salary. Stay-at-home parenting is work, and provides great benefit to the community without pay. They also reject the notion that a worker should enjoy their work. In fact, the group laughed at the idea that someone shouldn’t have to do work they don’t enjoy. They all wondered who the lucky ones were who always enjoyed their work.
  8. The group pointed to Capitalism’s excellent success in wealth creation and accepted the principle that “investment” from the wealthy creates growth and new opportunities. They also felt that the profit/return motive has made the allocation of capital generally efficient and thus generally productive. Further, the group accepts that the benefit from new opportunities may accrue to a diminishing number of participants and that a consequence has been an increase in income inequality. One of the supporting arguments for higher taxes and potentially a UBI was the concern about rising income inequality. They did not, however, reject the notion that Capitalism may have an end-game — technological unemployment.
  9. There was considerable concern about the misuse of cash designed to provide food, clothing and shelter. One member who has had significant dealings with the poverty-stricken noted that frequently those in need needed far more than monetary support, as mental illness and drugs were often associated with their situation. It was suggested that a UBI payment might be provided directly as food, clothing and shelter, instead of as cash, to avoid misuse. This prompted varied debate, which I tabled (another version of “off into the weeds”). There were doubts about the government’s ability to provide the “right” solutions for those needs and the externalities associated with that process. There was no conclusion on the best approach, cash or vouchers for services.

To wrap up our take on Universal Basic Income and trying to tie it together with Capitalism and Christianity, I would say the group was happy to consider the concept, unwilling to toss out capitalism, unwilling to accept some of the primary arguments of UBI advocates and generally unmotivated to run out and support a Universal Basic Income. They were happy to understand it better and to consider the pros and cons more than they ever had, and I know that awareness of the issues was raised. Notably, I think everyone in the group is now comfortable having an opinion on the subject and how it fits into their views on life, poverty, public policy and technological unemployment. Maybe you, the reader, are a little more comfortable too. Whether you agree or disagree with the thoughts presented here, I suspect that the group’s thoughts are fairly mainstream. If you are vehemently opposed to UBI or zealously advocating for UBI, this ought to help you understand how one group thinks. Maybe it will make for a more fruitful dialogue as these challenges are considered in the future.

Technology is an Autocracy - and the Risk from Externalities is Growing

I suspect this is not an idea that many have considered. To date, as a society we really have not appreciated how technological change occurs, and we certainly have rarely considered the governance of new change. When we have a discovery or an advancement of science, like all other inventions, it isn’t accomplished based upon the will of the majority. There is no vote; there is no consideration for society at large (externalities). Rarely is the downside risk considered. One person sees a problem, through their own personal lens, and aims to fix it. That is entrepreneurial, the foundation upon which much of Western Capitalism is built. It is also Authoritarian. One person, one rule and little or no accountability. Scary when you think about it. When you combine this “process” and lack of control with our species’ other great skill, “problem solving”, you create technological innovation and advancement which has a momentum that feels unstoppable. In fact, if you even “suggest” (which I am not) halting technological progress, you get one of two responses: “Luddite” or “You can’t”. That is how inexorable society considers technological change to be.

So let me explain this in more detail: all technological advancement is about someone, somewhere seeing a problem, a weakness, a difficulty, a challenge and deciding to overcome that challenge. It’s admirable and impressive. It’s also creating problems. As a species we poorly weigh all aspects of a decision, all the pros and all the cons. Will we make money from this? Is this advancement “cool”? Does this make my life “easier”? These are often the only inputs to our production/purchase decisions. There is a broad societal acceptance that “easier”, “freer” and “convenient” are universally beneficial. A simple counter-argument, of course, can be found in the gym. Your muscles would insist that “easier”, “freer” and “convenient” are not the best way for them to be strengthened or for stamina to be built. They require “challenge”, “difficulty” and “strain” in order to grow and improve.

So when a new advance comes along, if it makes our life easier, even in the smallest way, we snatch it up instantly. Take, for example, Alexa and Google Home. Hugely successful products already, but was it really that difficult to type out our search query? Defenders will say things like, “now I can search while I am doing something else” or “this frees up my hands to be more productive”. And of course supporters point to the disabled, for the obvious assistance to someone who is unable to type. But let’s examine the other side of the coin. What are the downside risks to such a product? Usually, they are not part of the sales process, I’m afraid, so you have to think carefully to compile a list. For example, has our verbal search caused us to lose a different skill, like the unchallenged muscle, where the process of finding the solution was just as important as the actual answer? But on top of that specific challenge (the lost process and associated effort that may have strengthened the muscle, which in this case is the mind), what are some of the associated externalities to voice assistants? Let’s take a look at a few.

Is that search query worth having Amazon or Google record EVERY SINGLE WORD your family speaks within hearing distance? How about considering the fact that Amazon and Google now build personal profiles about your preferences based upon this information? Do you realize that this then limits your search results accordingly? Companies are taking CHOICE away from you and, surprisingly, people don’t seem to care; in fact some like the idea. Other externalities exist as well. Recently, an Alexa recorded the conversation of a family and sent it to random contacts.

An Amazon Echo recorded a family's conversation, then sent it to a random person in their contacts…
A family in Portland, Ore., received a nightmarish phone call two weeks ago. "Unplug your Alexa devices right now," a… (washingtonpost.com)

Or this externality?

Alexa and Siri Can Hear This Hidden Command. You Can't.
BERKELEY, Calif. - Many people have grown accustomed to talking to their smart devices, asking them to read a text… (nytimes.com)

Without getting too paranoid, this last one is downright creepy and dystopian, but its potential ramifications are catastrophic if carried to the extreme. I am certain that when you decided to purchase your voice assistant, none of these externalities were factored into your buying decision.

I use this as a real-life example, because our evaluation of new technology is based upon bad math. Is it cool? Is it easier? Is it profitable? And can I afford it/afford to be without it? Nowhere in that equation are the following:

1) Does it further eliminate privacy?

2) Does it make me lazy or more dependent upon a machine?

3) Does it keep me from using all aspects of my brain as much?

4) Does it allow me to interact with actual humans less?

5) What new and different risks are associated with this product?

6) If someone wants to do me harm, does this enable them to do so in an unforeseen way?

One of the chief arguments of technological advancement is that it frees us up from the mundane, routine tasks. To assume that those tasks do not have value is ludicrous on its face, but more importantly, if we are “freed” up, what are we freed up to do? Usually, we are told it is high-minded things… be entrepreneurial… be poetic… be deep thinkers… solve bigger problems… spend more time with loved ones. To be honest, I don’t see an explosion of those endeavors. A further example of our bad math…

We adopt these technological advancements often without thought about the impact they may have on our psyche, our self-worth, our ambitions, or our safety. We make these choices because they are cool, or they make something easier. With purchase decisions this simple, and only the upside considered, developers of technology find it easy to make products attractive.

This blog post has only lightly touched on malice, but all of society should be concerned about malicious intent and technology’s impact on our susceptibility. The more connected we are, and the more dependent upon technology we are, the easier it is to cause mass harm. Perfect examples are recent virus attacks that spread to over 47 countries in a matter of a few hours. Sometimes the consequences are minor, such as locked-up computers or hassles like corrupted programs. Other times the hacker/criminal steals money or spies on you. Regardless of the magnitude of the impact, the ability of a criminal to “reach you” and “reach many” has been increased almost infinitely.

Here’s a final externality — how would you function without Internet? Not just for an hour or two, but permanently? How about without power? These are modern-day conveniences that are assumed to be permanent, but how permanent are they? Do you need to consider how to operate when they are gone? Our connectivity and reliance on power make us deeply dependent and woefully unprepared for these alternatives, even if the odds of occurrence are small. Hollywood frequently paints a grim picture of the dystopian existence when these conveniences are taken away; however, our ancestors existed quite nicely without them. Would you be prepared to survive or even thrive? The chances of these calamities are greater than zero…

Awareness of externalities is important. Consideration of downside risk is crucial. And a willingness to realize that everything we do or even purchase has pros AND cons matters… The more awareness of the “cons” that you have, the better chance you have to mitigate those risks and reap a greater benefit from the upside of our technology choices. Most importantly, as a society, we will make better collective decisions on our technological progress and thwart the dangers of Technological Autocracy.

The Value of Data in the Digital Age

To date, data is being valued and priced by everyone EXCEPT the creators of that content — YOU. If we want to change that, many things need to happen, but it begins with taking the time to figure out how a person values their own data. So let’s dissect and see if we can shed some light on this idea.

First, this process is unique, because for the first time EVER, every single person on the planet has the potential to sell a product. Second, instead of being a “consuming” culture and propelling the corporate world forward, human beings are the ones in a position to profit. Third, everyone’s value judgement on data is unique, personal and unquestionable. Fourth, the opportunity for people to enrich themselves in a world of possible technological unemployment is tremendously important to the welfare of society. Finally, on top of the social ramifications, there are the obvious moral ramifications. As highlighted by the misuse of your data by corporations, this idea of individual data ownership is morally correct.

Now we are not talking about ALL data. If you use Amazon’s services to buy something, all of the information that you create by searching, browsing and buying on the Amazon site also belongs to them. So while I can opine on the “value” of individual data, I am certain that the legal questions around data are just beginning to be sorted out.

So with all of that in mind, let’s examine how each individual person may value the data that they can provide. Noteworthy to this discussion is that every individual has a different value function. Different people will value different things about their data. So it is vital that we appreciate that each person will price things uniquely.

However, the parameters that people weigh can be summarized in a few key variables, which are covered below. So let’s create a list and explain each one (a toy pricing sketch follows the list):

  1. Risk of Breach — Each data item, if it falls into the wrong hands, can cause harm. This is the risk of breach. This risk will be perceived differently based upon the reputation for safety of the data user, a perceived sense of associated insurance and the context of the data itself. For example, let’s consider 4 tiers of risk of breach. Tier 1 (HARMLESS) — the contents of my dishwasher. This data might have value to someone and could not harm me if used nefariously. Tier 2 (LIKELY HARMLESS) — the contents of my refrigerator. Still likely to be unable to hurt me, but since people would know what I consume, someone could possibly tamper with it. Tier 3 (HARMFUL — ONLY INCONVENIENT) — Examples here might include a financial breach, where often the risk is not only yours, as there is a form of insurance (bank insurance or other similar example), but it is dangerous and painful when it occurs. Tier 4 (HARMFUL — PERSONAL SAFETY) — Examples here might include your exact location, your health records, access to your cybernetics and/or your genetic code.
  2. Risk of Privacy — How sensitive or personal we consider the data items. On this risk, I believe that pricing is rather binary or maybe parabolic. Many data items which we can produce do not make us concerned if made public. That is, until a line is crossed, where we consider them private. My pictures, fine. My shared moments, fine. My bedroom behavior, private. So when that line is crossed, the price of the associated data rises substantially. To continue the example, a manufacturer of an everyday item, such as pots and pans, may not have to pay a privacy premium for data associated with our cooking habits. However, a manufacturer of adult toys may have to pay a substantial premium to gain access to the bedroom behavior of a meaningful-sized sample of data. This is a good time to remember that these pricing mechanisms are personal, true microeconomics, and everyone will value the risk of privacy differently. Even to the point where the example I just gave may be completely reversed. Bedroom behavior, no problem… but keep your eyes off of my cooking.
  3. Time — How easy is it to generate the data? If I can generate the data simply by existing, that data will be cheaper. If I have to spend my time to create the data, that data will be more expensive. Would you like me to watch your commercials? More expensive. Would you like me to fill out your survey? 2 questions is cheaper than 20 questions. Time is also a function of the entire mechanism for creating and monitoring the data.
  4. Applicability — Is the data being asked of me relevant to me? This is a question of “known” versus “unknown”. If I regularly drink a certain type of coffee, I am more likely to accept coupons, ads, sales and promotions from my coffee shop than I am from the tea emporium around the corner. The function here is inverted: as the applicability decreases, the value of access to “me” increases from my perspective. That is not to say that it also increases for the data consumer, so with respect to applicability we have typically juxtaposed supply and demand curves. Also, if you only value data based on the supply side (what I am willing to give), then you miss out on revenue opportunities that come from allowing people access to your attention to “broaden your exposure”.
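
To make these four variables concrete, here is a minimal toy sketch of how one person might turn them into an ask price. Every field name, tier weight and dollar figure below is an illustrative assumption of mine, not a real pricing model:

```python
from dataclasses import dataclass

@dataclass
class DataItem:
    base_price: float          # seller's starting ask, in dollars
    breach_tier: int           # 1 = harmless ... 4 = personal safety (tiers above)
    is_private: bool           # has this person's privacy line been crossed?
    minutes_to_produce: float  # 0 if the data is generated "simply by existing"
    applicability: float       # 0..1, how relevant the request is to the seller

def personal_ask_price(item: DataItem,
                       privacy_premium: float = 5.0,
                       wage_per_minute: float = 0.25) -> float:
    """Toy ask price combining the four variables discussed above."""
    price = item.base_price
    price *= item.breach_tier                # riskier data costs more
    if item.is_private:
        price *= privacy_premium             # the near-binary privacy jump
    price += wage_per_minute * item.minutes_to_produce  # time is money
    price *= 1.0 + (1.0 - item.applicability)  # inverted applicability curve
    return round(price, 2)

# Example: a 10-minute survey about private behavior, barely relevant to me
print(personal_ask_price(DataItem(1.0, 2, True, 10.0, 0.2)))  # 22.5
```

Because everyone weighs these variables differently, the same request made to two different people should produce two different ask prices, which is exactly why a marketplace is needed.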

If the world changes to a personal-data-driven model, then the corporate world and the artificial intelligence world will have to learn how to examine these key variables. The marketplace where these transactions will occur MUST be a robust mechanism for price discovery, whereby many different bids and offers are being considered on a real-time basis to determine the “price/value” of data. This is why I have proposed the Personal Data Exchange as a mechanism for identifying this value proposition. Exchanges are in the business of price discovery on behalf of their listed entities — in this case, “you”.
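
As a sketch of that price-discovery step (assuming nothing about how a real Personal Data Exchange would be engineered), matching could be as simple as crossing the highest corporate bids against the lowest personal asks:

```python
def clearing_trades(bids: list[float], asks: list[float]) -> list[tuple[float, float]]:
    """Cross the best bids against the best asks; trade whenever bid >= ask."""
    trades = []
    for bid, ask in zip(sorted(bids, reverse=True), sorted(asks)):
        if bid >= ask:
            trades.append((bid, ask))  # price can settle anywhere in [ask, bid]
        else:
            break  # no more willing pairs
    return trades

# Three companies bidding for a data item, three individuals asking
print(clearing_trades(bids=[3.0, 2.0, 0.5], asks=[1.0, 1.5, 4.0]))
# -> [(3.0, 1.0), (2.0, 1.5)]; the 0.5 bid finds no willing seller
```

A real exchange would do far more (identity, settlement, anonymization), but the core function, discovering the price at which your data actually trades, is this simple.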

Personal Data — How to Control Info and Get Paid for Being You
Here’s a quick and easy way to break some of the current monopolies that exist in the personal data market (looking at… (medium.com)

In the end, this is the morally correct position. For a variety of reasons it is a justifiable and necessary change to a marketplace that was created largely without your consent. Recent changes to the law, such as GDPR in Europe, have begun to claw back the rights of the individual. But if we can get this done, it becomes a complete game-changer. Please… get on board. Your thoughts and critiques are welcome and encouraged, ForHumanity.

Facebook and Cambridge Analytica - "Know Your Customer", A Higher Standard

Know Your Customer (KYC) is a required practice in finance, defined as the process of a business identifying and verifying the identity of its clients. The term is also used to refer to the bank and anti-money laundering regulations which govern these activities. Many of you will not be familiar with this rule of law. It exists primarily in the financial industry and is a cousin to laws such as Anti-Money Laundering (AML), the Patriot Act of 2001 and the USA Freedom Act of 2015. These laws were designed to require companies to examine who their clients are. Are they involved in illegal activities? Do they finance terrorism? What is the source of their monies? The idea was to prevent our financial industry from supporting or furthering the ability of wrong-doers to cause harm. So how does this apply to Facebook and the Cambridge Analytica issues?

I am suggesting that the Data Industry, which includes any company that sells or provides access to individual information, should be held to this same standard. Facebook should have to Know Your Customer. Google should have to Know Your Customer. Doesn’t this seem reasonable? The nice part about this proposal is that it isn’t new. We don’t have to draft brand new laws to cover it, rather just modify some existing language. KYC exists for banks; now let’s expand it to social media, search engines and the sellers of big data.
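
Purely as an illustration of what a KYC gate for data buyers might look like, here is a minimal sketch; the watchlist, the verified-buyer registry and the purpose-matching rule are all hypothetical assumptions of mine, not any existing system:

```python
SANCTIONED = {"known-troll-farm.example"}  # hypothetical watchlist
VERIFIED_BUYERS = {"university.example": "academic research"}  # vetted identities

def kyc_check(buyer_domain: str, stated_purpose: str) -> bool:
    """Return True only if the data buyer passes basic vetting."""
    if buyer_domain in SANCTIONED:
        return False  # on a watchlist: refuse the sale
    registered_purpose = VERIFIED_BUYERS.get(buyer_domain)
    if registered_purpose is None:
        return False  # identity never verified: refuse the sale
    return registered_purpose == stated_purpose  # purpose must match what's on file

print(kyc_check("university.example", "academic research"))  # True
print(kyc_check("known-troll-farm.example", "advertising"))  # False
print(kyc_check("unknown-buyer.example", "advertising"))     # False
```

The point is not the code; it is that the burden shifts to the seller of data to establish who the buyer is before the transaction, exactly as banks must do today.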

Everywhere in the news today, we have questions about “who is buying ads on social media”? Was it Russians trying to influence an election? Was it neo-nazis, ANTIFA or other radical ideologues? Is it a purveyor of “fake news”? If social media outlets were required to KYC their potential clients, then they would be able to weed out many of these organizations well before their propaganda reaches the eyes of their subscribers. Facebook has already stated that they want to avoid allowing groups such as these to influence their users via their platform. So it is highly reasonable to ask them to do it, or be penalized for failure to do so. Accountability is a powerful thing. Accountability means that it actually gets done.

Speaking of “getting it done”, some of you may have seen Facebook’s complete about-face on its compliance with GDPR, moving 1.5 billion users out of Irish jurisdiction and to California, where there are very limited legal restrictions. https://arstechnica.com/tech-policy/2018/04/facebook-removes-1-5-billion-users-from-protection-of-eu-privacy-law/

If you aren’t familiar with GDPR, it is Europe’s powerful new privacy law. For months, Facebook has publicly stated how it intends to comply with the law. But when push came to shove, its most recent move was to avoid the law and avoid compliance as much as possible. Flowery language is something we often hear from corporate executives on these matters, but in the end, they will still serve shareholders and profits first and foremost. So unless these companies are forced to comply, don’t expect them to do it out of moral conviction; that’s rarely how companies operate.

Returning to the practical application of KYC: for a financial firm, this means that a salesperson has to have a reasonable relationship with their client in order to assure that they are compliant with KYC. They need to know the client personally and be familiar with the source and usage of funds. If a financial firm fails to execute KYC and it turns out that the organization they are doing business with is charged with a crime, then the financial firm and the individuals involved face swift ramifications, including substantive fines and potential jail time. This should apply to social media and the data industry.

Let me give you a nasty example. Have you looked at the amazing detail Facebook or Google have compiled about you? It is fairly terrifying and there are some out there (Gartner, for example) who have even predicted that your devices will know you better than your family knows you by 2022.

https://www.gartner.com/smarterwithgartner/emotion-ai-will-personalize-interactions/

Now, assuming this is even close to true for many of us, imagine that information being sold to a PhD candidate at MIT, or another reputable AI program, except that the PhD student, beyond doing his AI research, is funneling that data on to hackers on the dark web, or worse, to a nation-state engaged in cyberwarfare. How easy would it be for that group to cripple a large portion of the country? Or maybe it has already happened, with examples like Equifax and its 143 million client breach. Can you be sure that the world’s largest hacks aren’t getting their start by accessing your data from a data reseller?

To be fair, in finance, oftentimes you are taking in the funds and controlling the activities after the fact. You know what is going on. With data, oftentimes you are selling access or actual data to the customer and, it might seem, no longer have control over their activities. But this argument simply enhances my interest in Know Your Customer, because these firms may have little idea how this data is being used or misused. Better to slow down the gravy train than to ride it into oblivion.

Obviously the details would need to be drafted and hammered out by Congress, but I am seeking support for the broader concept and encouraging supporters to add it to the legislative agenda. ForHumanity has a fairly comprehensive set of legislative proposals at this point, which we hope will be considered in the broad context of AI policy. Questions, thoughts and comments are always welcome. This field of AI safety remains so new that we really should have a crowd-sourced approach to identify best practices. We welcome you to join us in the process.

New C-Suite Position - AI and Automation Risk Management

Many of you will be familiar with the challenges that Facebook is facing.

The Cambridge Analytica saga is a scandal of Facebook's own making | John Harris
Big corporate scandals tend not to come completely out of the blue. As with politicians, accident-prone companies… (theguardian.com)

There have been internal disagreements about how data has been used. Was it sold? Was it manipulated? Was it breached? It has put the company itself at risk and highlighted the need for a new position at the C-Suite level, one that most companies have avoided up until now: AI and Automation Risk Management.

Data is the new currency; data is the new oil. It is the lifeblood of all Artificial Intelligence algorithms, machine and deep learning models. It is the way that our machines are being trained. It is the basis for the best and brightest to begin to uncover new tools for healthcare, new ways to protect security, new ways to sell products. In 2017, Google and Facebook alone had revenue close to $60 billion in advertising, all due to data. However, usage of that data is at risk, because of perceived abuses by Facebook and others.

Data is also, and more importantly, about PEOPLE. Data is PERSONAL, even if there have been attempts to anonymize it. People need an advocate, inside the company, defending their legal, implied and human rights. This is a dynamic marketplace, with new rules, regulations and laws being considered and implemented all of the time. AI and Automation face some substantial challenges in their development. Here is a short list, and NONE of these are a priority for engineers and programmers, despite the occasional altruistic commentary to the contrary. As you will see, the advancement of AI and Automation requires full-time attention:

  1. Ethical machines and algorithms — There are millions and millions of decisions being made by machines and algorithms. Many of these decisions are meant to be based upon our value system as human beings. That means our Ethics need to be coded into the machines, and this is no easy task even with a single set of Ethics. Deriving profit from Ethics is tenuous at best, and there is certain to be a cost.
  2. Data and decision bias — Our society is filled with bias, our perceptions are filled with bias, thus our data is filled with bias. If we do not take steps to correct bias in the data, then our algorithmic decisions will be biased. However, correcting for bias may not be as profitable, which is why it needs to be debated in the C-suite.
  3. Privacy — There is a significant pushback forming on what privacy online means. GDPR in Europe is a substantial set of laws providing people with increased transparency, privacy and, in some cases, the right to be forgotten. Compliance with GDPR is one responsibility of the AI Risk Manager.
  4. Cybersecurity and Usage Security (a Know Your Customer process for data usage) — Companies already engage in cybersecurity, but the responsibility is higher when you are protecting customer data. Furthermore, companies should adopt the finance industry standard of “Know Your Customer (KYC)”. Companies must know and understand how data is being used by clients to prevent abuses or illegal behavior.
  5. Safety (can the machines that are being built be controlled and avoid unintentional consequences to human life?) — This idea is a little farther afield for most; however, now that an Uber autonomous vehicle has been involved in a fatality, it is front and center. The AI Risk Manager’s job is to consider the possibilities of how a machine may injure a human being, whether through hack, negligence, or system failure.
  6. Future of Work (as jobs are destroyed, how does the company treat existing employees and the displaced workers/communities?) — This is the PR challenge of the role, depending on how the company chooses to engage its community. But imagine for a moment taking a factory with 1000 employees and automating the whole thing. That’s 1000 people directly affected. That’s a community potentially devastated, if all 1000 employees were laid off.
  7. Legal implications of cutting edge technology (in partnership with Legal team or outside counsel) — GDPR compliance, legal liability of machines, new regulations and their implementation. These are the domain of the AI Risk Manager in conjunction with counsel and outside counsel.

This voice is a C-suite job and must have equal authority to the sources of revenue in order to stand a chance of protecting the company and protecting the data, i.e. the people who create the data.

I am not here to tell you to stop using data. However, believing that each of these companies, whose primary purpose is not compliance but making profits, will always use this data prudently is naive at best. Engineers and programmers solve problems; they have not been trained to consider risks such as feelings, privacy rights, and bias. They see data of all kinds as “food” and “input” to their models. To be fair, I don’t see anything wrong with their approach. However, the company cannot let that exist unfettered. It is risking its entire reputation on using data, which is actually using PEOPLE’S PERSONAL AND OFTEN PRIVATE INFORMATION, to get results.

For many new companies and new initiatives, data is the lifeblood of the effort. It deserves to be protected and safeguarded. Beyond that, since data is about people, it deserves to be treated with respect, consideration, and fairness. Everyone is paying more and more attention to issues like this and companies must react. People and their data need an advocate in the C-Suite. I recommend the Chief of AI and Automation Risk Manager.

Personal Data - How to Control Info and Get Paid For Being You

Here’s a quick and easy way to break some of the current monopolies that exist in the personal data market (looking at Google, Facebook and Amazon). Let people own their own data, disseminate it as they see fit and, shockingly, get paid for what is rightfully theirs.

The concept is simple, but the implementation is challenging and requires a good-sized investment from somewhere at the outset. I’ll explain the concept first and then circle back around to the implementation at the end.

A personal data exchange. Simply explained, each person 18 and over has the right to list themselves on a “data” exchange, just like companies list their equity on a stock exchange. Each individual would get a listing of their own on the exchange, so ideally the exchange would have hundreds of millions of listings — one for each unique person. This exchange would function as a clearinghouse for YOUR data. It would be the marketplace for companies of all kinds to retrieve the data that they require from the sellers of data — YOU. If you list John Q Public on the exchange and want to severely limit the data you provide the world, that is your choice; however, because your data is used less frequently, you will get paid very little for being John Q Public. Privacy is your right, and you should be in control of your own data.

However, if John Q Public lists his data, answers questions submitted by the marketplace, responds in a timely manner to requests and is generally open about preferences, opinions and many other pieces of information in demand, then John Q Public will find a nice revenue stream associated with this very valuable data.

The marketplace is interactive. It starts with many of the key data items that are commonly available on a simple internet search. However, as you file your “listing” on the exchange, companies will seek your information, maybe through GPS location, maybe via your search and query results. They may also send out questionnaires. If you, the “listed entity”, find yourself uncomfortable answering certain data questions, then stop. You are in control. You release your data how you want, when you want. Do you want your shopping tracked? If yes, then allow it. Maybe only certain shopping is included. Furthermore, this data does not have to be “public”. It can be provided directly to a client company anonymously (anonymous data is cheaper to the company and pays you less as well). As the listing entity providing the data, you will be in control. Companies can come to the marketplace and create datasets, asking questions of the personal listings in an attempt to grow a business, design their algorithms or pivot a marketing strategy. They can pay for high quality information which will be crucial to their business. As a “listed entity” you would have a responsibility to respond, if you want to get paid. You must answer truthfully, completely and in a timely manner. If the data proves to be useless over time because people lie and obscure relevant information, then the marketplace breaks down and people will lose CONTROL over the data, the way that it exists today.
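
Here is a minimal sketch of what one listing's permission controls might look like; the field names, categories and the anonymization discount are illustrative assumptions, not a specification:

```python
# One person's listing: they control which categories are released and
# whether each can only be sold anonymously (at a discount to the seller).
listing = {
    "listing_id": "JQP-000001",
    "categories": {
        "gps_location":   {"released": False, "anonymous_only": True},
        "shopping":       {"released": True,  "anonymous_only": False},
        "survey_answers": {"released": True,  "anonymous_only": True},
    },
    "anonymous_discount": 0.5,  # anonymized data pays the seller half as much
}

def can_fill(request_category: str, wants_identity: bool) -> bool:
    """Would this listing fill a company's data request?"""
    cat = listing["categories"].get(request_category)
    if cat is None or not cat["released"]:
        return False  # the person has opted out: no sale, full stop
    return not (wants_identity and cat["anonymous_only"])

print(can_fill("shopping", wants_identity=True))        # True
print(can_fill("survey_answers", wants_identity=True))  # False (anonymous only)
print(can_fill("gps_location", wants_identity=False))   # False (not released)
```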

It is hard to know how valuable this data is. Today, it is priced at zero by the owners of the data — you. Google and Facebook make nearly $100 billion per year selling your data — selling you. I am certain that, given the choice between no Google search engine, no Facebook, and $1000 in your pocket, most people will choose the $1000. Furthermore, it is an excellent way to democratize wealth, as anyone will be eligible to participate.

Now, implementation is a challenge, but not insurmountable. The absolute key is that you must launch the exchange with many, many listings. Therefore, to motivate individuals to list on the exchange, they will need to be compensated to do so. I find that money has a way of getting people’s attention. However, we aren’t talking about a small amount of money, as the minimum number of listings probably needs to be 20–30 million people from the outset to make the dataset meaningful. Even a modest incentive of $100 per person across 20–30 million listings is $2–3 billion. Furthermore, an incentive system to encourage people to help others list will be valuable. So the capital required to incentivize the listing is significant, well over $2 billion just to launch. Fortunately, there are a few large pools of investors, unbiased and correctly aligned with individuals, who could fund the endeavor.

Secondly, the technological challenge of this exchange is not trivial. Handling over 100 million listings of John and Sarah Q Publics is a sizable endeavor. Furthermore, the front-end will need to be extremely flexible and user-friendly to ensure that all listings can easily and efficiently manage their data.

The education process is also monumental, as many people do NOT value their data; furthermore, they do not value their privacy. They are content to give away this asset for free, or at least in exchange for a forum to fight about politics, post some pictures, search the web, have a laugh or order some goods to be delivered in less than 2 days. However, success here breeds success, and with the right incentive structure, I believe that this process will go viral. If the money is sufficient, then data will become an important revenue stream for individuals and households.

Finally, this has to be done at a truly institutional level. It requires the cachet of the New York Stock Exchange or Chicago Mercantile Exchange to demonstrate the “seriousness” of the concept, combined with the institutional fortitude to handle the sheer magnitude required to make this successful, and the technology chops and delivery expertise to pull it all together and provide an exceptional experience for the listed individuals. This idea has been tried or discussed on a smaller scale, but small scale doesn’t work for the companies in the data business. This idea is go big or go home, but in terms of privacy and control of personal data, there is no better way than to put you in charge and let the market pay you directly.

The disintermediation of Google, Facebook and Amazon in the “control your data business” is just a perk…

ForHumanity AI and Automation Awards 2017

Top AI breakthrough

AlphaGo Zero — game-changing machine learning, by removing the human from the equation

https://www.technologyreview.com/s/609141/alphago-zero-shows-machines-can-become-superhuman-without-any-help/

This is an incredibly important result for a series of reasons. First, AlphaGo Zero learned to master the game with ZERO input from human players and previous experience. It trained knowing only the rules of the game, demonstrating that it can learn better and faster with NO human involvement. Second, the implication for future AI advancement is that humans are likely “in the way” of optimal learning. Third, its successor AlphaZero went on to master chess in 4 hours, demonstrating an adaptability that has dramatically increased the speed of machine learning. DeepMind now has over 400 PhDs working on Artificial GENERAL Intelligence.

Dumbest/Scariest thing in AI in 2017

Sophia, an AI machine made by Hanson Robotics, granted citizenship in Saudi Arabia — VERY negative domino effect

http://www.arabnews.com/node/1183166/saudi-arabia#.WfEXlGXM2oI.twitter

The implications associated with “citizenship” for machines are far reaching. Citizenship in most locations includes the right to vote, to receive social services from the government and the right to equal treatment. The world is not ready, and may never be ready, to have machines that vote, or machines that are treated as equals to human beings. While this was an AI stunt, its impact could be devastating if played out.

Most immediately Impactful development — Autonomous Drive vehicles

Waymo reaches 4 million actual autonomous test miles

https://medium.com/waymo/waymos-fleet-reaches-4-million-self-driven-miles-b28f32de495a

Autonomous cars progressing to real world applications more quickly than most anticipated

https://arstechnica.com/cars/2017/10/report-waymo-aiming-to-launch-commercial-driverless-service-this-year/

The impact of autonomous drive vehicles cannot be overstated. From the devastation of jobs in the trucking and taxi industry to the freedom of mobility for many who are unable to drive. Also, autonomous driving is likely to result in significantly fewer deaths than human driving. This impact carries through to the auto insurance industry, where KPMG reckons that 71% of the auto insurance industry will disappear in the coming years. Central-authority control over the movement of people is another second-order consideration that few have concerned themselves with.

Most impactful in the future — Machine dexterity

Here are a few amazing examples of the advancement of machine dexterity. As machines are able to move and function similarly to humans, then their ability to replicate our functions increases dramatically. While this dexterity is being developed, work is also being done to allow machines to learn “how” to move like a human being nearly overnight, instead of through miles and miles of code — machines teach themselves from video and virtual reality.

Boston Dynamics’ Atlas robot approaching human-level agility, including backflips — short video, easy watch

https://www.youtube.com/watch?v=fRj34o4hN4I&feature=youtu.be

Moley introduces the first robotic kitchen –

https://www.youtube.com/watch?v=QDprrrEdomM&feature=share

Machine movement being taught human-level dexterity with simulated learning algorithms — accelerating learning and approximating the human process

https://www.facebook.com/futurismscience/videos/816173055250697/?fref=mentions

Most Scary — Science Fiction-esque development

North Dakota State students develop self-replicating robot

https://www.manufacturingtomorrow.com/news/2017/07/19/ndsu-students-develop-3d-printing-self-replicating-robot/10034/

The impact of machines being able to replicate themselves is a staple of dystopian sci-fi movies; however, there are many practical reasons for machines to replicate themselves, which is what the researchers at North Dakota State were focused on. With my risk management hat on for AI Safety, it raises a whole other set of rules and ethics that need to be considered, for which the industry and the world are not prepared.

Most Scary AI real-world issues NOW

Natural Language processing finds millions of faked comments on FCC proposal regarding Net Neutrality

https://futurism.com/over-million-comments-backing-net-neutrality-repeal-likely-faked/

AI being used to duplicate voices in a matter of seconds

https://www.digitaltrends.com/cool-tech/ai-lyrebird-duplicate-anyones-voice/

Using AI, videos can be made to make real individuals appear to say things that they did not…

https://futurism.com/videos/this-ai-can-create-fake-videos-that-look-real/

Top AI Safety Initiatives

Two initiatives share the top spot, mostly because of their practical applications in a world that remains too heavily devoted to talk and inaction. These efforts are all about action!

AutonomousWeapons.org produces Slaughterbots video

This video is an excellent storytelling device to highlight the risk of autonomous weaponry. Furthermore, there is an open letter being written that all humans should support. Please do so. Over 2.2 million views so far.

https://youtu.be/9CO6M2HsoIA

Ethically Aligned Design v 2.0

John Havens and his team at IEEE, the technology standards organization, have collaborated extremely successfully with over 250 AI safety specialists from around the world to develop Ethically Aligned Design (EAD) v 2.0. It is the beacon on which AI and Automation ethics should build their implementation. More importantly, anyone who is rejecting the work of EAD should immediately be viewed with great skepticism and concern as to their motives.

http://standards.ieee.org/develop/indconn/ec/ead_v2.pdf

AI and Automation - Managing the Risk to Reap the Reward

Personally, I have been hung up on Transhumanism. For those of you not familiar, Transhumanism is the practice of embedding technology directly into your body — merging man and machine, a subset of AI and Automation. It seems obvious to me that when given the chance, some people with resources will enhance/augment themselves in order to gain an edge at the expense of other human beings around them. It’s really the same concept as trying to get into a better university, or training harder on the athletic field. As a species we are competitive, and if it is important enough to you, one will do what it takes to win. People will always use this competitive fire to gain an upper hand on others. Whether it is performance enhancing drugs in sports, or paid lobbyists in politics, money, wealth and power create an uneven playing field. Technology is another tool, and some, if not many, will use it for nefarious purposes, I am certain.

But nefarious uses of AI are only part of the story. When it comes to transhumanism, there are many upsides. The chance to overcome Alzheimer’s, to return the use of lost limbs, to restore eyesight and hearing. There are lots of places where technology in the body can return the disabled to normal human function, which is a beautiful story.

So in that spirit, I want to shine a light on all of the unquestionable good that we may choose to do with advances in AI and automation. Because when we have a clear understanding of both good outcomes and bad outcomes, then we are in a position to understand our risk/reward profile with respect to the technology. When we understand our risk/reward profile, we can determine if the risks can be managed and if the rewards are worth it.

Let’s try some examples, starting with the good:

  1. AI and Automation may enhance the world’s economic growth to the point where there is enough wealth to eradicate poverty, hunger and homelessness.
  2. AI & Automation will allow researchers to test and discover more drugs and medical treatments to eliminate debilitating disease, especially diseases that are essentially a form of living torture, like Alzheimer’s or cerebral palsy.
  3. Advancement of technology will allow us to explore our galaxy and beyond
  4. Advancement in computing technology and measurement techniques will allow us to understand the physical world around us at even the smallest, building block level
  5. Increase our efficiency and reduce wasteful resource consumption, especially carbon-based energy.

At least those five things are unambiguously good. There is hardly a rational thinker around who will dispute those major advancements and their inherent goodness. It is also likely that the sum of that goodness outweighs the sum of any badness that we come across with the advancement of AI and Automation. Why am I confident in saying that? Because it appears to be true for the entirety of our existence. The reason technology marches forward is because it is viewed as inherently good, inherently beneficial and, in most cases, as problem solving.

But here is where the problem lies and my concern creeps in. AI has enormous potential to benefit society. As a result it has significant power. Significant power comes with significant downside risk if that power is abused.

Here are some concerns:

  1. Complete loss of privacy (Privacy)
  2. Embedded bias in our data, systems and algorithms that perpetuates inequality (Bias)
  3. Susceptibility to hacking (Security)
  4. Systems that operate in, and sometimes take over, our lives, without sharing our moral code or goals (Ethics)
  5. Annihilation and machine takeover, if at some point our species is deemed useless or worse, a competitor (Control and Safety)

There are plenty of other downside risks, just like there are plenty of additional upside rewards. The mission of this blog post was not to calculate the risk/reward scenarios today, but rather to highlight for the reader that there are both Risks and Rewards. AI and Automation aren’t just awesome; there are costs, and they need to be weighed.

My aim in the 12+ months that I have considered the AI Risk space has been very simple, but maybe poorly expressed. So let me try to set the record straight with a couple of bullet points:

  1. ForHumanity supports and applauds the advancement of AI and Automation. ForHumanity also expects technology to advance further and faster than the expectation of the majority of humanity.
  2. ForHumanity believes that AI & Automation have many advocates and will not require an additional voice to champion its expected successes. There are plenty of people poised to advance AI and Automation
  3. ForHumanity believes that there are risks associated with the advancement of AI and Automation, both to society at-large as well as to individual freedoms, rights and well-being
  4. ForHumanity believes that there are very few people who examine these risks, especially amongst those who develop AI and Automation systems and in the general population
  5. Therefore, to improve the likelihood of the broadest, most inclusive positive outcomes from the advancement of technology, ForHumanity focuses on identifying and solving the downside risks associated with AI and Automation

When we consider our measures of reward and the commensurate risk, are we accurately measuring the rewards and risks? My friend and colleague John Havens at IEEE has been arguing for some time now that our measures of well-being are too economic (profit, capital appreciation, GDP) and neglect more difficult-to-measure concepts such as joy, peace and harmony. The result is that we may not correctly evaluate the risk/reward profile, may reach incorrect conclusions and may charge ahead with our technological advances. It is precisely at that moment that my risk management senses kick in. If we can eliminate or mitigate the downside risks, then our chance to be successful, even if measured poorly, increases dramatically. That is why ForHumanity focuses on the downside risk of technology. What can we lose? What sacrifices are being made? What does a bad outcome look like?

With that guiding principle, ForHumanity presses forward to examine the key risks associated with AI and Automation: Safety & Control, Ethics, Bias, Privacy and Security. We will try to identify the risks and the solutions to those risks, with the aim of providing each member of society the best possible outcome along the way. There may be times that ForHumanity comes across as negative or opposed to technology, but in the vast majority of cases that is not true. We are just focused on the risks associated with technology. The only time we will be negative on a technology is when the risks overwhelm the reward.

We welcome company on this journey and certainly could use your support as we go. Follow our Facebook feed ForHumanity or Twitter @forhumanity_org and of course the website at https://www.forhumanity.center/

Ban Autonomous AI Currency/Capital Usage

The current and rightful AI video rage is the SlaughterBots video, ingeniously produced by Stuart Russell of Berkeley and the good people at autonomousweapons.org. If you have not watched AND reacted to support their open letter, you are making TWO gigantic mistakes. Here is the link to the video.

https://youtu.be/9CO6M2HsoIA

As mentioned, there are good people fighting the fight to keep weapons systems from being fully autonomous, so I won’t delve into that more here. Instead, it reminded me that I had not written a long-conceived blog post about autonomous currency/capital usage. Unfortunately, I don’t have a Hollywood-produced YouTube video to support this argument, but I wish that I did, because the risk is probably just as great and far more insidious.

The suggested law is that no machine should be allowed to use capital or currency based upon its own decision. As in the case of autonomous weaponry, I am arguing that all capital/currency should 100% of the time be tied to a specific human being, to a human decision maker. To be clear, I am not opposed to automated Billpay and other forms of electronic payment. In all of those cases, a human has decided to dedicate capital/currency to meet a payment and to serve a specific purpose. There is human intent. Further, I recognize the merits of humans being able to automate payments and capital flows. This ban is VERY specific. It bans a machine, using autonomous logic which reaches its own conclusions, from controlling a pool of capital which it can deploy without explicit consent from a human being.

What I am opposed to is machine intent. Artificial Intelligence is frequently goal-oriented. Rarer is an AI that is goal-oriented and ethically bound. Rarer still is an AI that is goal-oriented, where societal goals are part of its calculus. Since an AI cannot be “imprisoned”, at least not yet, and it remains to be seen if an AI can actually be turned “off” (see the YouTube video below to better understand the AI “off button problem”), this ban is necessary.

https://youtu.be/3TYT1QfdfsM

So without the Hollywood fanfare of Slaughterbots, I would like to suggest that the downside risk associated with an autonomous entity controlling a significant pool of capital is equally as devastating as the potential of Slaughterbots. Some examples of risk:

  1. Creating monopolies and subsequently price gouging on wants and needs, even on things as basic as water rights
  2. Consuming excess power
  3. Manipulating markets and asset prices
  4. Dominating the legal system, fighting restrictions and legal process
  5. Influencing policy, elections, behavior and thought

I understand that many readers have trouble assigning downside risk to AI and autonomous systems. They see the good that is being done. They appreciate the benefits that AI and automation can bring to society, as do I. My point-of-view is a simple one. If we can eliminate and control the downside risk, even if the probability of that risk is low, then our societal outcomes are much better off.

The argument here is fairly simple. Humans can be put in jail. Humans can have their access to funds cut off by freezing bank accounts or limiting access to the internet and communications devices. We cannot jail an AI and we may be unable to disconnect it from the network when necessary. So the best method to limit growth and rogue behavior is to keep the machine from controlling capital at the outset. I suggest the outright banning of direct and singular ownership of currency and tradeable assets by machines ONLY. All capital must have a beneficial human owner. Even today, implementing this policy would have an immediate impact in the way that AI and Automated systems are implemented. These changes would make us safer and reduce the risk that an AI might abuse currency/capital as described above.

The suggested policy already faces challenges today that must be addressed. First, cryptocurrencies may already be beyond an edict such as this, as there is an inability to freeze cryptos. Second, most global legal systems allow for corporate entities, such as corporations, trusts, foundations and partnerships, which HIDE or at least obfuscate the beneficial owners. See the Guidance on Transparency and Beneficial Ownership from the Financial Action Task Force in 2014, which highlights the issue:

  1. Corporate vehicles — such as companies, trusts, foundations, partnerships, and other types of legal persons and arrangements — conduct a wide variety of commercial and entrepreneurial activities. However, despite the essential and legitimate role that corporate vehicles play in the global economy, under certain conditions, they have been misused for illicit purposes, including money laundering (ML), bribery and corruption, insider dealings, tax fraud, terrorist financing (TF), and other illegal activities. This is because, for criminals trying to circumvent anti-money laundering (AML) and counter-terrorist financing (CFT) measures, corporate vehicles are an attractive way to disguise and convert the proceeds of crime before introducing them into the financial system.

The law that I suggest implementing is not a far leap from the existing AML and Know Your Customer (KYC) compliance that most developed nations already enforce. In fact, that would be the mechanism of implementation: creating a legal requirement for human beneficial ownership, while excluding machine ownership. Customer Due Diligence and KYC rules would include disclosures on human beneficial ownership and applications of AI/Automation.
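
As a toy illustration of that mechanism, imagine a compliance check that walks an ownership chain until it finds a human beneficial owner; the entities and the ownership graph here are entirely hypothetical:

```python
OWNERS = {  # entity -> its listed owners (other entities or humans)
    "trading_bot_7": ["shell_llc"],
    "shell_llc":     ["jane_doe"],
    "rogue_ai_fund": ["rogue_ai_fund"],  # owns itself: no human in the chain
}
HUMANS = {"jane_doe"}

def has_human_owner(entity: str, seen: set | None = None) -> bool:
    """Walk the ownership chain looking for a human beneficial owner."""
    if seen is None:
        seen = set()
    if entity in HUMANS:
        return True
    if entity in seen:  # cycle detected, e.g. a machine owning itself
        return False
    seen.add(entity)
    return any(has_human_owner(owner, seen) for owner in OWNERS.get(entity, []))

print(has_human_owner("trading_bot_7"))  # True  (bot -> LLC -> Jane Doe)
print(has_human_owner("rogue_ai_fund"))  # False (no human beneficial owner)
```

Under the proposed rule, the second account simply could not be opened or funded.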

Unfortunately, creating this ban is only a legal measure. There will come a time when criminals will use AI and Automation in a way that deploys currency/capital nefariously. This rule will need to be in place to allow law enforcement to make an attempt to shut it down before it becomes harmful. Think about the Flash Crash of 2010, which eliminated $1 trillion of stock market value in little more than a few minutes. This is an example of the potential damage that could be inflicted on a market or prices, and, believe it or not, the Flash Crash of 2010 is probably a small, gentle example of the downside risk.

One might ask who would be opposed to such a ban. There is a short and easy answer to that. It’s Sophia, the robot created by Hanson Robotics (and the small faction of people already advocating for machine rights), which was recently granted citizenship in Saudi Arabia and now “claims” to want to have a child. If a machine is allowed to control currency, with itself as a beneficial owner, then robots should be paid a wage for labor. Are we prepared to hand over all human rights to machines? Why are we building these machines at all if we don’t benefit from them and they exist on their own accord? Why make the investment?

World's first robot 'citizen' says she wants to start a family
JUST one month after she became the world's first robot to be granted citizenship of a country, Sophia has said that… (news.com.au)

Implementing this rule into KYC and AML rules is a good start to controlling the downside risk from autonomous systems. It would be virtually harmless to implement today. If and when machines are sentient, and we are discussing granting them rights, we can revisit the discussion. For now, however, we are talking about systems built by humanity to serve humanity. Part of serving humanity is to control and eliminate downside risk. Machines do not need the ability to control their own capital/currency. This is easy, so let’s get it done.