
The Value of Data in the Digital Age

To date, data is being valued and priced by everyone EXCEPT the creators of that content — YOU. If we want to change that, many things need to happen, but it begins with taking the time to figure out how a person values their own data. So let’s dissect the idea and see if we can shed some light on it.

First, this process is truly unique, because for the first time EVER, every single person on the planet has the potential to sell a product. Second, instead of being a “consuming” culture and propelling the corporate world forward, human beings are the ones in a position to profit. Third, everyone’s value judgement on data is unique, personal and unquestionable. Fourth, the opportunity for people to enrich themselves in a world of possible technological unemployment is tremendously important to the welfare of society. Finally, on top of the social ramifications, there are the obvious moral ramifications. As highlighted by the misuse of your data by corporations, this idea of individual data ownership is morally correct.

Now we are not talking about ALL data. If you use Amazon’s services to buy something, all of the information that you create by searching, browsing and buying on the Amazon site also belongs to them. So while I can opine on the “value” of individual data, I am certain that the legal questions around data are just beginning to be sorted out.

So with all of that in mind, let’s examine how each individual person may value the data that they can provide. Noteworthy to this discussion is that every individual has a different value function. Different people will value different things about their data. So it is vital that we appreciate that each person will price things uniquely.

However, the parameters that they weigh can be summarized in a few key variables, which are covered below. So let’s create a list and explain each one (a toy pricing sketch follows the list):

  1. Risk of Breach — Each data item, if it falls into the wrong hands, can cause harm. This is the risk of breach. This risk will be perceived differently based upon the data user’s reputation for safety, a perceived sense of associated insurance and the context of the data itself. For example, let’s consider 4 tiers of risk of breach. Tier 1 (HARMLESS) — the contents of my dishwasher. This data might have value to someone and could not harm me if used nefariously. Tier 2 (LIKELY HARMLESS) — the contents of my refrigerator. Still likely unable to hurt me, but since people may know what I consume, one could possibly tamper with it. Tier 3 (HARMFUL — ONLY INCONVENIENT) — examples here might include a financial breach, where the risk is often not only yours because there is a form of insurance (bank insurance or another similar example), but it is dangerous and painful when it occurs. Tier 4 (HARMFUL — PERSONAL SAFETY) — examples here might include your exact location, your health records, access to your cybernetics and/or your genetic code.
  2. Risk of Privacy — How sensitive or personal we consider the data items to be. On this risk, I believe that pricing is rather binary or maybe parabolic. Many data items which we can produce do not make us concerned if made public. That is, until a line is crossed, where we consider them private. My pictures, fine. My shared moments, fine. My bedroom behavior, private. So when that line is crossed, the price of the associated data rises substantially. To continue the example, a manufacturer of an everyday item, such as pots and pans, may not have to pay a privacy premium for data associated with our cooking habits. However, a manufacturer of adult toys may have to pay a substantial premium to gain access to the bedroom behavior of a meaningfully sized sample of data. This is a good time to remember that these pricing mechanisms are personal, true microeconomics, and everyone will value the risk of privacy differently, even to the point where the example I just gave may be completely reversed. Bedroom behavior, no problem… but keep your eyes off of my cooking.
  3. Time — How easy is it to generate the data? Can I generate the data simply by existing? That data will be cheaper. Do I have to spend my time to create the data? That data will be more expensive. Would you like me to watch your commercials? More expensive. Would you like me to fill out your survey? 2 questions is cheaper than 20 questions. Time is also a function of the entire mechanism for creating and monitoring the data.
  4. Applicability — Is the data being asked of me relevant to me? This is a question of “known” versus “unknown”. If I regularly drink a certain type of coffee, I am more likely to accept coupons, ads, sales and promotions from my coffee shop than I am from the Tea Emporium around the corner. The function here is inverted: as the applicability decreases, the value of access to “me” increases from my perspective. That is not to say that it also increases for the data consumer, so with respect to applicability we typically have juxtaposed supply and demand curves. Also, if you only value data based on the supply side (what I am willing to give), then you miss out on the revenue opportunities that come from allowing people access to your attention to “broaden your exposure”.
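
To make these four variables concrete, here is a minimal sketch of what one person’s pricing function might look like. It is purely illustrative: the coefficients, the quadratic breach-risk term and the privacy multiplier are all my assumptions, stand-ins for the unique value function each individual would bring.

```python
from dataclasses import dataclass

@dataclass
class DataItem:
    base_price: float          # asking price for a "harmless, instant" item
    breach_tier: int           # 1 (harmless) .. 4 (personal safety), per the tiers above
    is_private: bool           # has the personal privacy line been crossed?
    minutes_to_produce: float  # 0 if the data is generated just by existing
    applicability: float       # 0..1, how relevant the request is to this person

def personal_price(item: DataItem,
                   privacy_premium: float = 10.0,
                   hourly_rate: float = 20.0) -> float:
    """Toy valuation combining the four variables discussed above.

    Every coefficient here is personal: another individual would weight
    breach risk, privacy, time and applicability completely differently.
    """
    price = item.base_price
    price *= item.breach_tier ** 2         # breach risk scales the price sharply
    if item.is_private:
        price *= privacy_premium           # near-binary jump once the line is crossed
    price += hourly_rate * item.minutes_to_produce / 60.0  # compensate time spent
    price *= 2.0 - item.applicability      # lower applicability -> pricier access
    return price

# Passively shared dishwasher contents vs. a 10-minute survey on private habits:
print(personal_price(DataItem(0.01, 1, False, 0, 1.0)))   # cheap
print(personal_price(DataItem(0.01, 2, True, 10, 0.2)))   # far more expensive
```

Change any coefficient and you have a different person’s valuation, which is exactly the point: pricing is personal.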

If the world changes to a personal-data-driven model, then the corporate world and the artificial intelligence world will have to learn how to examine these key variables. The marketplace where these transactions will occur MUST be a robust mechanism for price discovery, whereby many different bids and offers are being considered on a real-time basis to determine the “price/value” of data. This is why I have proposed the Personal Data Exchange as a mechanism for identifying this value proposition. Exchanges are in the business of price discovery on behalf of their listed entities, in this case, “you”.
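
As a sketch of what that price discovery could look like, here is a toy matching engine for a hypothetical Personal Data Exchange. The class name, the midpoint clearing rule and the single-item order book are illustrative assumptions of mine, not a description of any real exchange.

```python
import heapq

class PersonalDataExchange:
    """Minimal continuous double auction: highest bid meets lowest offer."""

    def __init__(self):
        self.bids = []    # max-heap via negated prices (data buyers)
        self.offers = []  # min-heap (individuals offering a data item)

    def place_bid(self, buyer: str, price: float):
        heapq.heappush(self.bids, (-price, buyer))
        return self._match()

    def place_offer(self, seller: str, price: float):
        heapq.heappush(self.offers, (price, seller))
        return self._match()

    def _match(self):
        trades = []
        while self.bids and self.offers and -self.bids[0][0] >= self.offers[0][0]:
            neg_bid, buyer = heapq.heappop(self.bids)
            ask, seller = heapq.heappop(self.offers)
            # Clearing price: midpoint of bid and offer (one simple convention)
            trades.append((buyer, seller, (-neg_bid + ask) / 2))
        return trades

ex = PersonalDataExchange()
ex.place_offer("alice", 4.00)            # Alice offers her grocery data at $4
ex.place_bid("pan_maker", 3.50)          # no trade yet: bid below offer
print(ex.place_bid("ad_network", 4.50))  # crosses: trade at $4.25
```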

Related post: “Personal Data — How to Control Info and Get Paid for Being You” (medium.com)

In the end, this is the morally correct position. For a variety of reasons it is a justifiable and necessary change to a marketplace that was created largely without your consent. Recent changes to the law, such as GDPR in Europe, have begun to claw back the rights of the individual. But if we can get this done, it becomes a complete game changer. Please… get on board. Your thoughts and critiques are welcome and encouraged, ForHumanity.

Facebook and Cambridge Analytica - "Know Your Customer", A Higher Standard

Know Your Customer (KYC) is a required practice in finance, defined as the process of a business identifying and verifying the identity of its clients. The term is also used to refer to the banking and anti-money-laundering regulations which govern these activities. Many of you will not be familiar with this rule of law. It exists primarily in the financial industry and is a cousin to laws such as Anti-Money Laundering (AML), the Patriot Act of 2001 and the USA Freedom Act of 2015. These laws were designed to require companies to examine who their clients are. Are they involved in illegal activities? Do they finance terrorism? What is the source of these monies? The idea was to prevent our financial industry from supporting or furthering the ability of wrong-doers to cause harm. So how does this apply to Facebook and the Cambridge Analytica issues?

I am suggesting that the Data Industry, which includes any company that sells or provides access to individual information, should be held to this same standard. Facebook should have to Know Your Customer. Google should have to Know Your Customer. Doesn’t this seem reasonable? The nice part about this proposal is that it isn’t new. We don’t have to draft brand-new laws to cover it, rather just modify some existing language. KYC exists for banks; now let’s expand it to social media, search engines and the sellers of big data.
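
To illustrate what KYC might look like when ported from banking to the data industry, here is a hypothetical screening sketch. The checks, field names and prohibited-use list are invented for illustration; real KYC obligations are far more extensive.

```python
from dataclasses import dataclass

@dataclass
class DataBuyer:
    name: str
    verified_identity: bool   # corporate registration / identity confirmed
    stated_use: str           # declared purpose for the purchased data
    sanctioned: bool = False  # appears on a sanctions or watch list
    prior_violations: int = 0 # known past misuse of purchased data

# Hypothetical list of uses a data seller would refuse to serve
PROHIBITED_USES = {"voter manipulation", "resale to unverified third parties"}

def kyc_screen(buyer: DataBuyer) -> tuple[bool, list[str]]:
    """Return (approved, reasons). All thresholds here are illustrative."""
    reasons = []
    if not buyer.verified_identity:
        reasons.append("identity not verified")
    if buyer.sanctioned:
        reasons.append("on sanctions/watch list")
    if buyer.stated_use.lower() in PROHIBITED_USES:
        reasons.append(f"prohibited use: {buyer.stated_use}")
    if buyer.prior_violations > 0:
        reasons.append("history of data misuse")
    return (len(reasons) == 0, reasons)

ok, why = kyc_screen(DataBuyer("AcmeAds", True, "ad targeting"))
print(ok, why)  # True, []
```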

Everywhere in the news today, we have questions about “who is buying ads on social media”? Was it Russians trying to influence an election? Was it neo-nazis, ANTIFA or other radical ideologues? Is it a purveyor of “fake news”? If social media outlets were required to KYC their potential clients, then they would be able to weed out many of these organizations well before their propaganda reaches the eyes of their subscribers. Facebook has already stated that they want to avoid allowing groups such as these to influence their users via their platform. So it is highly reasonable to ask them to do it, or be penalized for failure to do so. Accountability is a powerful thing. Accountability means that it actually gets done.

Speaking of “getting it done”, some of you may have seen Facebook’s complete about-face on its compliance with GDPR, moving 1.5 billion users out of Irish jurisdiction and to California, where there are very limited legal restrictions. https://arstechnica.com/tech-policy/2018/04/facebook-removes-1-5-billion-users-from-protection-of-eu-privacy-law/

If you aren’t familiar with GDPR, it is Europe’s powerful new privacy law. For months, Facebook has publicly stated how it would intend to comply with the law. But when push came to shove, its most recent move was to avoid the law and avoid compliance as much as possible. Flowery language is something we often hear from corporate executives on these matters, but in the end, they will still serve shareholders and profits first and foremost. So unless these companies are forced to comply, don’t expect them to do it out of moral conviction; that’s rarely how companies operate.

Returning to the practical application of KYC: for a financial firm, this means that a salesperson has to have a reasonable relationship with their client in order to assure that they are compliant with KYC. They need to know the client personally and be familiar with the source and usage of funds. If a financial firm fails to execute KYC and it turns out that the organization it is doing business with is charged with a crime, then the financial firm and the individuals involved face swift ramifications, including substantive fines and potential jail time. This should apply to social media and the data industry.

Let me give you a nasty example. Have you looked at the amazing detail Facebook or Google have compiled about you? It is fairly terrifying and there are some out there (Gartner, for example) who have even predicted that your devices will know you better than your family knows you by 2022.

https://www.gartner.com/smarterwithgartner/emotion-ai-will-personalize-interactions/

Now, assuming this is even close to true for many of us, imagine that this information is sold to a PhD candidate at MIT, or another reputable AI program, except that the PhD student, beyond doing his AI research, is funnelling that data on to hackers on the dark web, or worse, to a nation-state engaged in cyberwarfare. How easy would it be for that group to cripple a large portion of the country? Or maybe it has already happened, with examples like Equifax and its 143-million-client breach. Can you be sure that the world’s largest hacks aren’t getting their start by accessing your data from a data reseller?

To be fair, in finance you are often taking in the funds and controlling the activities after the fact. You know what is going on. With data, it might seem, you are often selling access or actual data to the customer and no longer have control over their activities. But this argument simply enhances my interest in Know Your Customer, because these firms may have little idea how this data is being used or misused. Better to slow down the gravy train than to ride it into oblivion.

Obviously the details would need to be drafted and hammered out by Congress, but I am seeking support for the broader concept and encouraging supporters to add it to the legislative agenda. ForHumanity has a fairly comprehensive set of legislative proposals at this point, which we would hope would be considered in the broad context of AI policy. Questions, thoughts and comments are always welcome. This field of AI safety remains so new that we really should have a crowd-sourced approach to identify best practices. We welcome you to join us in the process.

AI Safety - The Concept of Independent Audit

For months I have regularly tweeted a response to my colleagues in the AI Safety community about being a supporter of independent audit, but recently it has become clear to me that I have insufficiently explained how this works and why it is so powerful. This blog will attempt to do that. Unfortunately it is likely to be longer than my typical blog, so apologies in advance.

Independent audit, defined: ForHumanity, or a similar entity that exists not for profit but for the benefit of humanity, would conduct detailed, transparent and iterative audits on all developers of AI. Our audit would review the following SIX ELEMENTS OF SAFEAI on behalf of humanity (a toy scoring sketch follows the list):

  1. Control — this is an analysis of the on/off switch problem that plagues AI. Can AI be controlled by its human operators? Today it is easier than it will be tomorrow as AI is given more access and more autonomy.
  2. Safety — Can the AI system harm humanity? This is a broader analytic designed to examine the protocols by which the AI will manage its behavior. Has it been programmed to avoid human loss at all costs? Will it minimize human loss if no other choice? Has this concept even been considered?
  3. Ethics/Standards — IEEE’s Ethically Aligned Design is laying out a framework that may be adopted by AI developers to represent best practices on ethics and standards of operation. There are 11 subgroups designing practical standards for their specific areas of expertise (the P7000 groups). ForHumanity’s audit would look to “enforce” these standards.
  4. Privacy — Are global best practices being followed by the company’s AI? Today, to the best of my knowledge, GDPR in Europe is the gold standard of privacy regulation and would be the model that we would audit on.
  5. Cyber security — Regarding all human data and interactions with the company’s AI, are the security protocols consistent with industry best practices? Are users safe? If something fails, what can be done about it?
  6. Bias — Have the data sets and algorithms been tested to identify bias? Is the bias being corrected for? If not, why not? AI should not result in classes of people being excluded from fair and respectful treatment.
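
As a rough illustration of how such an audit could collapse into a published rating, here is a toy scoring sketch over the six elements. The 0-to-1 scores, the passing threshold and the pass/deficient output are all my assumptions; a real audit methodology would need to be far richer.

```python
SAFEAI_ELEMENTS = ["control", "safety", "ethics", "privacy", "cybersecurity", "bias"]

def safeai_rating(scores: dict[str, float], passing: float = 0.7) -> str:
    """Collapse per-element audit scores (0..1) into a published rating.

    Per the 'Opaque' principle later in this post, only the overall result
    is public; which element fell short stays between auditor and company.
    """
    missing = [e for e in SAFEAI_ELEMENTS if e not in scores]
    if missing:
        raise ValueError(f"audit incomplete, unscored elements: {missing}")
    if all(scores[e] >= passing for e in SAFEAI_ELEMENTS):
        return "SAFEAI: PASS"
    return "SAFEAI: DEFICIENT (details private to company and auditor)"

print(safeai_rating({e: 0.9 for e in SAFEAI_ELEMENTS}))  # SAFEAI: PASS
```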

The criteria that are analysed are important, but they are not the most important aspect of independent audit. Market acceptance and market demand are the key to making independent audit work. Here’s the business case.

We have a well-established public and private debt market. It is well over $75 trillion US dollars globally. One of the key driving forces behind the success of that debt market is independent audit. Ratings agencies like Moody’s, Fitch and Standard & Poor’s have provided the marketplace with debt ratings for decades. Regardless of how you feel about ratings agencies or their mandate, one thing is certain: they have provided a reasonable sense of the riskiness of debt. They have allowed a marketplace to be liquid and to thrive. Companies (issuers) are willing to be rated, and investors (buyers) rely upon the ratings for a portion of their investment decisions. It is a system with a long track record of success. Here are some of the features of the ratings market model:

  1. Issuers of debt find it very difficult to issue bonds without a rating
  2. It is a for-profit business
  3. There are few suppliers of ratings which is driven by market acceptance. Many providers of ratings would dilute their value and create a least common denominator approach. Issuers would seek the easiest way to get the highest rating.
  4. Investors rely upon those ratings for a portion of their decision making process
  5. Companies provide either legally mandated or requested transparency into their financials for a “proper” assessment of risk
  6. There is an appeals process for companies who feel they are not given a fair rating
  7. The revenue stream from creating ratings allows the ratings agencies to grow and rate more and more debt

Now I would like to extrapolate the credit rating model into an AI safety ratings model to highlight how I believe this can work. However, before I do that, there is one key feature of the ratings agency model that MUST exist for it to work: the marketplace MUST demand it. For example, if an independent audit were conducted on the Amazon Alexa (this has NOT happened to date) and it failed to pass the audit, or was given a subsequent low rating because Amazon had failed some or all of the SIX ELEMENTS OF SAFEAI, then you, the consumer, would have to stop buying it. When the marketplace decides that these safety elements of AI are important, that is when we will begin to see AI safety implemented by companies.

That is not to say that these companies and products are NOT making AI safety a priority today. We simply do not know. From my work, there are many cases where they are not; however, without independent audit, we cannot know where the deficiencies lie. We also can’t highlight the successes. For all I know, Amazon Alexa would pass the SIX ELEMENTS OF SAFEAI perfectly today. But until we have that transparency, the world will not know.

That is why independent audit is so important. Creating good products safely is a lot harder than just creating good products. When companies know they will be scrutinized, they behave better — that is a fact. No company wants a bad rating published about it or its product. It is bad publicity. It could ruin their business, and that is the key for humanity. AI safety MUST become a priority in the buying decision for consumers and business implementers alike.

Now, a fair criticism of independent audit is the “how” part, and I concur wholeheartedly, but it shouldn’t stop us from starting the process. The first credit rating would have been a train-wreck of a process compared with the analysis that is conducted by today’s analysts. So it will be now, but the most important part of the process is the “intent” to be audited and the “intent” to provide SAFEAI. We won’t get it perfectly right the first time, nor will we get it right every time, but we will make the whole process a lot better with transparency and effective oversight.

Some will argue for government regulation (see Elon Musk), but AI and the work being done by global corporations have already outstripped national boundaries. It would be far easier for an AI developer to avoid scrutiny that is nationally focused than to avoid a process that is transnational and market driven. Below I have created a list of reasons that market-driven regulation, which this amounts to, is far superior to government-based regulation:

  1. Avoids the tax haven problem associated with different rules in different jurisdictions, which simply shifts development to the easiest location
  2. Avoids government involvement, which has frequently been sub-optimal
  3. Allows for government-based AI projects to be rated — a huge benefit regarding autonomous weaponry
  4. Tackles the problem from a global perspective, which is how many of the AI developers already operate
  5. Market driven standards can be applied rather than bureaucracy and politics using the rules as tools to maintain their power
  6. Neutrality

Now, how will this all work? It’s a huge endeavor and it won’t happen overnight, but I suspect that anyone taking the time to properly consider this will realize the merits of the approach and the importance of the general issue. It is something we MUST do. So here’s the process I suggest:

  1. Funded - we have to decide that this is important and begin to fund the audit capabilities of ForHumanity or a like-minded organization
  2. Willingness - some smart company or companies must submit to independent audit, recognizing that today, it is a sausage making endeavor, but one that they and ForHumanity are undertaking together to build this process and to benefit both humanity and their product for achieving SAFEAI
  3. Market acceptance- this is a brand recognition exercise. We must grow the awareness of the issues around AI safety and have consumers begin to demand SAFEAI
  4. Revenue positive, Licensing - once the first audit is conducted, the company and product will be issued the SAFEAI logo. Associated with this logo is a small per-product licensing fee (cents) payable to ForHumanity or a like organization. This fee allows us to expand our efforts and to audit more and more organizations. It amounts to the corporate fee payable for the benefits to humanity. (Yes, I know it is likely to be passed on to the consumer)
  5. Expansionary - this process will have to be repeated over and over and over again until the transparency and audit process become pervasive. Then we will know when products are rogue and out of compliance by choice, not from neglect or the newness of the process
  6. Refining and iterative- this MUST be a dynamic process that is constantly scouring the world for best-practices, making them known to the marketplace and allowing for implementation and upgrade. This process should be collaborative with the company being audited in order to accomplish the end goal of AI safety
  7. Transparent - companies must be transparent in their dealings
  8. Opaque - the rating cannot be transparent about the shortcoming, in order to protect the company and the buyers of the product who still choose to purchase and use it. It is sufficient to know that there is a deficiency somewhere; there is no need to make that deficiency public. It will be between ForHumanity and the company itself. Ideally the company learns of its deficiency and immediately aims to remedy the issue
  9. Dynamic - this team cannot be a bureaucratic team; it must be composed of some of the most motivated and intelligent people from around the world, determined to deliver AI safety to the world at-large. It must be a passion, it must be a dedication. It will require great knowledge and integrity
  10. Action-oriented - I am a doer; this organization should be about auditing, not about discussing. Where appropriate and obvious, as in the case of IEEE’s EAD, adoption should supersede new discussions. Take advantage of the great work already being done by the AI safety crowd
  11. And it has to be more than me - As I have written these words and considered these thoughts, it is always with the idea that it is impossible for one person, or even a small group to have a handle on “best-practices”. This will require larger teams of people from many nationalities, from many industries, from many belief systems to create the balance and collaborative environment that can achieve something vitally important to all of humanity.

Please consider helping me. You don’t have to agree with every thought. The process will change daily, the goalposts will change daily, “best-practices” will change daily. You have to agree on two things:

  1. That this is vitally important
  2. That independent audit is the best way to achieve our cumulative goals

If you can agree on these two simple premises, then join with me to make these things happen. That requires that you like this blog post. That you share it with friends. That you tell them that it is important to read and consider. It requires you to reconsider how you review and buy AI-associated products. It requires you to take some action. But if you do these things, humanity will benefit in the long run, and I believe that is worth it.

Privacy - GDPR for the US

You do not have the right to privacy, at least not guaranteed by your constitution or by law. Years and years of application of the 4th Amendment (unreasonable search and seizure) have created a world of “reasonable expectation of privacy”. To give you an idea of how narrowly this is defined: in the Smith v. Maryland (1979) Supreme Court case, the dialing of digits on your telephone constituted consent for the telephone company to release that information, surrendering your privacy. This decision is being applied over and over again in the digital age by technology companies who use your data and information to enormous profit ($24 billion in Q2 2017 for Google alone). Recently an amici curiae brief was prepared by 42 legal scholars to suggest that the times have changed and continued use of the Smith decision is problematic. I agree times 10. I have argued that we need to rebuild our right to privacy from the ground up with a constitutional amendment protecting privacy. The problem is two-fold: 1) that’s a hard thing to do, and 2) James Comey, speaking at a Boston security conference on March 7th, 2017, said “There is no such thing as privacy anymore in America”, which indicates that the authorities (and the largest companies in the world) like their very long reach into your lives and your data.

Recently, Europe has actually led the way on privacy with GDPR (the General Data Protection Regulation), which goes into effect in Europe on 25 May 2018. This new law affects any company around the world looking to do business with a European person. It is an excellent start to the rights of individuals in the digital age. I would advocate for something similar here in the US, but I expect the resistance to be substantial from the largest companies in the big data business: Amazon, Facebook, Google and now the ISPs. They have zero interest in your privacy; in fact, if everyone went dark on these businesses, they would be hard pressed to exist. Here’s a summary of the key points that GDPR provides for the citizens of Europe:

Consent
The conditions for consent have been strengthened, and companies will no longer be able to use long illegible terms and conditions full of legalese, as the request for consent must be given in an intelligible and easily accessible form, with the purpose for data processing attached to that consent. Consent must be clear and distinguishable from other matters and provided in an intelligible and easily accessible form, using clear and plain language. It must be as easy to withdraw consent as it is to give it.

Data Subject Rights

Breach Notification

Under the GDPR, breach notification will become mandatory in all member states where a data breach is likely to “result in a risk for the rights and freedoms of individuals”. This must be done within 72 hours of first having become aware of the breach. Data processors will also be required to notify their customers, the controllers, “without undue delay” after first becoming aware of a data breach.
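
The 72-hour window is concrete enough to compute. A minimal sketch, assuming the clock starts the moment the controller becomes aware of the breach:

```python
from datetime import datetime, timedelta

NOTIFICATION_WINDOW = timedelta(hours=72)  # GDPR breach-notification window

def notification_deadline(aware_at: datetime) -> datetime:
    """Latest moment the supervisory authority must be notified."""
    return aware_at + NOTIFICATION_WINDOW

aware = datetime(2018, 5, 28, 9, 30)
print(notification_deadline(aware))  # 2018-05-31 09:30:00
```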

Right to Access
Part of the expanded rights of data subjects outlined by the GDPR is the right for data subjects to obtain from the data controller confirmation as to whether or not personal data concerning them is being processed, where and for what purpose. Further, the controller shall provide a copy of the personal data, free of charge, in an electronic format. This change is a dramatic shift to data transparency and empowerment of data subjects.

Right to be Forgotten
Also known as Data Erasure, the right to be forgotten entitles the data subject to have the data controller erase his/her personal data, cease further dissemination of the data, and potentially have third parties halt processing of the data. It should also be noted that this right requires controllers to compare the subjects’ rights to “the public interest in the availability of the data” when considering such requests. (Personally, I think this remains a weakness of the law, as there is too much subjectivity.)

Data Portability
GDPR introduces data portability — the right for a data subject to receive the personal data concerning them, which they have previously provided, in a ‘commonly used and machine readable format’, and the right to transmit that data to another controller.
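
A minimal sketch of what a portability export might look like, using JSON as one ‘commonly used and machine readable format’ (the record fields are invented for illustration):

```python
import json
from datetime import date

def export_subject_data(records: list[dict]) -> str:
    """Serialize a data subject's records in a machine-readable format (JSON),
    suitable for handing to the subject or transmitting to another controller."""
    return json.dumps(
        {"exported_on": date.today().isoformat(), "records": records},
        indent=2,
    )

print(export_subject_data([
    {"type": "purchase", "item": "coffee", "date": "2018-04-01"},
]))
```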

Privacy by Design
Privacy by design as a concept has existed for years now, but it is only just becoming part of a legal requirement with the GDPR. At its core, privacy by design calls for the inclusion of data protection from the onset of the designing of systems, rather than as an addition. More specifically — ‘The controller shall..implement appropriate technical and organisational measures..in an effective way.. in order to meet the requirements of this Regulation and protect the rights of data subjects’. Article 23 calls for controllers to hold and process only the data absolutely necessary for the completion of their duties (data minimisation), as well as limiting access to personal data to those needing to carry out the processing.

Data Protection Officers
Currently, controllers are required to notify their data processing activities with local DPAs, which, for multinationals, can be a bureaucratic nightmare with most Member States having different notification requirements. Under GDPR it will not be necessary to submit notifications / registrations to each local DPA of data processing activities, nor will it be a requirement to notify / obtain approval for transfers based on the Model Contract Clauses (MCCs). Instead, there will be internal record keeping requirements, as further explained below, and DPO appointment will be mandatory only for those controllers and processors whose core activities consist of processing operations which require regular and systematic monitoring of data subjects on a large scale or of special categories of data or data relating to criminal convictions and offences. Importantly, the DPO:

  • Must be appointed on the basis of professional qualities and, in particular, expert knowledge on data protection law and practices
  • May be a staff member or an external service provider
  • Contact details must be provided to the relevant DPA
  • Must be provided with appropriate resources to carry out their tasks and maintain their expert knowledge
  • Must report directly to the highest level of management
  • Must not carry out any other tasks that could result in a conflict of interest.

As you can see, this is a game changer as it relates to your existing privacy relationships in the United States and many other countries. Since privacy is not explicitly guaranteed by the Constitution, the people have fought to claw back that privacy time and again. The proverbial slippery slope has been tilted against the individual from the outset, which is why I suggest we change the slope completely and ask for the constitutional amendment. Failing that, I would like to see lawmakers begin to implement the features above from GDPR, with the added change that it would be law enforcement, and not controllers, who could weigh the public record against the appropriate right to be forgotten.

Lastly, I call on the technology world to build a series of privacy tools.

  1. Privacy App — designed to track your own data: who has it, what it is, where it goes. This app ought to plug into any aspect of your life and change your privacy settings in an instant.
  2. Right to be Forgotten tool kit. An internet eraser that allows the individual to quickly and completely clean up their online life and their data footprint.

I suspect that both apps would be wildly successful, and thus are already in development. Personally, I can’t wait to use them.
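
As a sketch of the Privacy App idea, here is a toy consent ledger that tracks who holds which category of your data and lets you withdraw consent in one action. The class and method names are hypothetical; a real app would have to integrate with each service’s own privacy controls.

```python
from dataclasses import dataclass

@dataclass
class Grant:
    service: str   # who holds the data
    category: str  # what kind of data
    allowed: bool  # current consent state

class PrivacyLedger:
    """Track who has which of your data and flip consent in one place."""

    def __init__(self):
        self.grants: list[Grant] = []

    def record(self, service: str, category: str, allowed: bool = True):
        self.grants.append(Grant(service, category, allowed))

    def revoke_all(self, category: str):
        """The 'eraser' action: withdraw consent for a whole data category.
        A real app would also call each service's deletion/opt-out API."""
        for g in self.grants:
            if g.category == category:
                g.allowed = False

ledger = PrivacyLedger()
ledger.record("PhotoApp", "location")
ledger.record("MapService", "location")
ledger.revoke_all("location")
print(ledger.grants)  # both grants now show allowed=False
```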

What Should the Partnership on AI become?

The Partnership on AI (https://www.partnershiponai.org/), for those that aren’t familiar with the group, brings together some of the most influential corporations in the Artificial Intelligence (AI) development business. They are wise enough to foresee that the growth of this technology will have a profound influence on humanity in a variety of ways. They have recently created this non-profit organization to foster the successful and beneficial implementation of AI into society at large. Their Board of Trustees just published a series of thematic pillars and is currently looking to hire the leadership team for the Partnership.

Using the Partnership’s thematic pillars, I take the liberty of creating an action plan on their behalf, as their goals and ForHumanity’s goals have many like-minded tenets. In the sections below are my action items in italics, which follow each of the Partnership on AI’s thematic pillars (which are copied verbatim from the website).

1. SAFETY-CRITICAL AI

Advances in AI have the potential to improve outcomes, enhance quality, and reduce costs in such safety-critical areas as healthcare and transportation. Effective and careful applications of pattern recognition, automated decision making, and robotic systems show promise for enhancing the quality of life and preventing thousands of needless deaths.

However, where AI tools are used to supplement or replace human decision-making, we must be sure that they are safe, trustworthy, and aligned with the ethics and preferences of people who are influenced by their actions.

We will pursue studies and best practices around the fielding of AI in safety-critical application areas.

This needs to be a primary function of the Partnership. The leadership ought to quickly scour the academic literature and operating AI businesses for best practices on safety and control. A team should be built to work directly with industry and request the transparency needed to identify these best practices, and quickly produce a Partnership guide on AI safety. This would be published widely and carried directly back to the users of AI for compliance.

In addition, the Partnership should create a SAFEAI “Good Housekeeping Seal of Approval”. This seal of approval would be licensed to consumer products once the product has met the Partnership’s standard for safety, cyber security, privacy, data bias, control and ethics. This model would allow for a revenue stream into the Partnership that is its own. The SAFEAI seal would enhance its service to the masses as well as the Partnership’s brand. The SAFEAI seal would become a green light to purchasers of AI consumer products because it puts consumers’ minds at ease. A strong brand and a source of revenue will enhance the Partnership’s stature by giving it independence.

Finally, the Partnership should begin to convene user groups, such as parents, to identify their concerns regarding safety and AI. This will enable the SAFEAI seal to be used specifically on toys, during the early stages of its introduction. SAFEAI then becomes a natural talking point on mass media, such as the Today Show and GMA. This is another important step towards the goal of building the brand and reaching the mass market, educating them on AI, especially SAFEAI.

2. FAIR, TRANSPARENT, AND ACCOUNTABLE AI

AI has the potential to provide societal value by recognizing patterns and drawing inferences from large amounts of data. Data can be harnessed to develop useful diagnostic systems and recommendation engines, and to support people in making breakthroughs in such areas as biomedicine, public health, safety, criminal justice, education, and sustainability.

While such results promise to provide great value, we need to be sensitive to the possibility that there are hidden assumptions and biases in data, and therefore in the systems built from that data. This can lead to actions and recommendations that replicate those biases, and suffer from serious blindspots.

Researchers, officials, and the public should be sensitive to these possibilities and we should seek to develop methods that detect and correct those errors and biases, not replicate them. We also need to work to develop systems that can explain the rationale for inferences.

We will pursue opportunities to develop best practices around the development and fielding of fair, explainable, and accountable AI systems.

The Partnership should adopt IEEE’s Ethically Aligned Design standards immediately. Where there are areas of doubt or questions, the Partnership should immediately engage John Havens and the IEEE team to voice the concerns and make changes as appropriate; however, EAD is an excellent jumping-off point, and the faster we achieve consensus, the easier it will be to create standards in the industry and subsequent compliance.

The Partnership should then create a business team that will seek audit-level transparency from AI firms across their businesses and look to assign the Partnership’s SAFEAI seal of approval at the firm level. This audit function would remain confidential. Failures would not be published; however, the SAFEAI seal may not be adopted until the firm is compliant. This business model for the non-profit is similar to the credit ratings model of corporate bond issuance. The credit ratings business model has proven successful for decades, increasing confidence among bond buyers and thus increasing the acceptance of debt sold by corporations. AI firms today can only imagine the power of this seal of approval on their ability to transact business, especially in the public realms.

Systems and processes will have to be developed inside of the Partnership in order to continually improve its ability to evaluate data bias, cyber security, ethics, privacy, control and standards compliance. Over time, this team would become the de facto experts in AI compliance.

3. COLLABORATIONS BETWEEN PEOPLE AND AI SYSTEMS

A promising area of AI is the design of systems that augment the perception, cognition, and problem-solving abilities of people. Examples include the use of AI technologies to help physicians make more timely and accurate diagnoses and assistance provided to drivers of cars to help them to avoid dangerous situations and crashes.

Opportunities for R&D and for the development of best practices on AI-human collaboration include methods that provide people with clarity about the understandings and confidence that AI systems have about situations, means for coordinating human and AI contributions to problem solving, and enabling AI systems to work with people to resolve uncertainties about human goals.

I think this tenet from the Partnership creates the opportunity for sponsor firms and SAFEAI participants to demonstrate their good faith to humanity broadly as they look to commercialize their developments of AI.

The Partnership can take a leadership role to provide smart, safe and responsible AI applications that benefit all of humanity. It is uniquely positioned to be an advocate for further adoption of AI because of the transparency and independence which member firms should provide. Furthermore, it should be a resource for humans and businesses who have concerns or questions about safety, ethics, cyber security, data bias, privacy and control issues.

The Partnership should be highly visible, speaking regularly at industry events as well as in the mass media to enhance its brand and become known as the independent resource on AI safety with the primary goal of advancing AI for the benefit of humanity.

In the implementation of AI, the Partnership is more like the referee, ensuring that the game is being played by the rules — rules which are well known to both AI developers and consumers. Games that are played with referees, where the rules are well known to all, are always regarded as the most fair by all participants and onlookers. They also become the games that are most widely watched and participated in, so it is truly in everyone’s best interest.

4. AI, LABOR, AND THE ECONOMY

AI advances will undoubtedly have multiple influences on the distribution of jobs and nature of work. While advances promise to inject great value into the economy, they can also be the source of disruptions as new kinds of work are created and other types of work become less needed due to automation.

Discussions are rising on the best approaches to minimizing potential disruptions, making sure that the fruits of AI advances are widely shared and competition and innovation is encouraged and not stifled. We seek to study and understand best paths forward, and play a role in this discussion.

I view this as a vital role for the Partnership both directly to companies as well as in policy formation. Personally I am convinced that significant technological unemployment is an unavoidable consequence of the advancement of AI and automation. The Partnership is uniquely positioned to understand the sources of these job dislocations as well as the companies involved in their impetus.

I can envision a company-elected program, designed by the Partnership, that helps any company with a multi-step process for managing technological unemployment. One step could be job retraining, both internally and externally. The Partnership would be uniquely positioned to see the “new” jobs being created by AI and could use that perch to devise training programs that fulfill those “new” job descriptions.

Retraining will not solve all of the problems and there will likely be room for the creation of Universal Basic Income (or some other facsimile), both at the local level and the Federal level. This could take the form of an insurance program that corporations could subscribe to, whereby the pool becomes the broader corporate community at-large. Should companies choose to forego a strict bottom-line only approach they may choose to protect employees and communities that they directly influence. The insurance pool would exist for payouts to companies that experience technological unemployment, resulting in a “benefit” being paid to the employee.

Finally, much research will need to be done on the concept of Universal Basic Income (or facsimile) and the Partnership should sponsor research to understand this topic, its impact on society and possible forms of implementation. Then the Partnership will be well positioned to advise corporations and legislators as they consider the possible solutions to technological unemployment.

5. SOCIAL AND SOCIETAL INFLUENCES OF AI

AI advances will touch people and society in numerous ways, including potential influences on privacy, democracy, criminal justice, and human rights. For example, while technologies that personalize information and that assist people with recommendations can provide people with valuable assistance, they could also inadvertently or deliberately manipulate people and influence opinions.

We seek to promote thoughtful collaboration and open dialogue about the potential subtle and salient influences of AI on people and society.

Here I see the Partnership having to strike a delicate balance between data-driven business and the humanity it serves. The Partnership ought to be a thought leader and independent auditor of AI systems. Violations of human rights and the law cannot be tolerated. They must be swiftly addressed and rooted out at their core. This can be done in a myriad of ways, such as shining a light on the problem and/or providing a source for independent and unbiased corrections.

One solution might take the form of an audit, or application of standards and practice. Or another might take the form of media coverage and a comprehensive forum for open dialogue designed to reach not only consensus but practical application.

The SAFEAI format should provide a basis for privacy and how privacy is applied. I feel that it might involve choice, or “opting-in”, but once the standard is determined I view privacy as one of the applications of SAFEAI.

6. AI AND SOCIAL GOOD

AI offers great potential for promoting the public good, for example in the realms of education, housing, public health, and sustainability. We see great value in collaborating with public and private organizations, including academia, scientific societies, NGOs, social entrepreneurs, and interested private citizens to promote discussions and catalyze efforts to address society’s most pressing challenges.

Some of these projects may address deep societal challenges and will be moonshots — ambitious big bets that could have far-reaching impacts. Others may be creative ideas that could quickly produce positive results by harnessing AI advances.

The spirit of this tenet highlights the altruistic nature of the Board of Trustees at the Partnership on AI. The Partnership can provide a robust, independent verification of safety, ethics, standards, cyber security, privacy, bias and control that would enable large public users of AI (who have a diverse constituency) to have the confidence they need to make large-scale decisions to implement AI into their processes.

Where AI impacts public policy, I do believe that the Partnership should have a view and a voice in the way AI is adopted into legislation. I could see a dedicated advocacy role form around the principles and research which the Partnership sponsors.

One example might eventually be in the realm of General AI. Prof. Nick Bostrom, of Oxford, has suggested that for the benefit of the common good, if/when General AI is achieved and unleashed, then it should no longer be held by an individual or single corporation but instead operated on behalf of all, for the common good. I could envisage a future where the Partnership becomes wholly representative of that common good and might even become entrusted with the guidance over a General AI system.

This last point highlights the most important skill that the Partnership should have, nimbleness. This world is dynamic and requires an agile organization to respond to the ever-changing challenges that AI will present. Nimbleness is a culture that must be cultivated. The Partnership should avoid being rigid in its action plans and instead use data driven research and active public feedback loops to look for new challenges. The Partnership should then use its considerable resources to swiftly tackle the challenge and always strive for practical, applicable solutions.

7. SPECIAL INITIATIVES

Beyond the specified thematic pillars, we also seek to convene and support projects that resonate with the tenets of our organization. We are particularly interested in supporting people and organizations that can benefit from the Partnership’s diverse range of stakeholders.

We are open-minded about the forms that these efforts will take.

I believe there are four additional topics that the Partnership should also be prepared to speak about: 1) income inequality, 2) transhumanism, 3) AI and faith, and 4) life extension. Each of these four topics, centered around AI, has an enormous impact on the way that society functions and on our current version of humanity. The implications of these four topics, and certainly of others yet unknown, will be vital in facilitating the integration of AI throughout humanity in a way that is most beneficial to all, especially since some of these topics may foster deep division in humanity and create new minorities that need to be protected.

All of this speaks to an organization that is nimble, data driven and action-oriented in the exploration of the boundaries of the Partnership’s mandate. It also requires a continual feedback loop between the Partnership staff and stakeholders (both on the Board and for the common good) to explore if the Partnership should expand its mandate. This means that communication is paramount. The organization needs to create a robust culture of open communication and dialogue with a diverse array of constituents. This type of culture is not created overnight, but instead will only happen when the leadership team leads by example and rewards open and positive dialogue.

I hope that the Partnership on AI gets the right leadership and becomes everything that it could be. The future promises some significant upheaval, and it needs strong organizations to provide guidance and be aligned with the best interests of humanity.

Transhumanism, Life Extension and my Concerns

At ForHumanity, our primary mission is to defend the rights of humanity in the age of AI and automation: trying to preserve the things that make us human in a world where technological change and technological choice are unstoppable. To be clear, I have no interest in trying to stop technological progress; this is not a Luddite movement. Trying to stop technological progress is a fruitless endeavor. Rather, I want to maximize the freedom to choose how and when technology is embraced or eschewed.

So when it comes to Transhumanism (Trans) and Life Extension (LE) technologies, such as the digital download of the mind, I see some substantial challenges, and I wanted to get them down on paper and begin/join the dialogue. A few bullet points to start:

  1. People should have the choice to participate or not participate in Trans and LE technologies. I can imagine a few instances where participation in Trans technology might become mandatory. Two examples are implant identification and implant medical records/health monitoring. Both arguments are easily made by a central authority. Implant identification is for “our safety”. Medical health records and monitoring would be argued on a cost basis by the centralized provider of health care: if they know what is wrong with you, they can “fix” it and keep their costs down. I argue that neither technology should be forced upon you, but I believe there will come a day when it will be.
  2. It is likely that when people do choose Trans, then their ability to do certain jobs will be dramatically enhanced, while non-Trans people will not be able to compete. This will create a new “inequality”.
  3. If and when the mind becomes downloadable, essentially allowing that mind to “live forever”, it introduces a series of challenges, especially around wealth creation and wealth retention for the downloaded entity. In the US, the estate tax was created to minimize the ability of a family to pass more and more assets down through the generations. The estate tax was firmly rooted in the American wish to avoid creating an aristocracy like the British. So, unchecked, a digitally downloaded mind could acquire assets over hundreds of years, exacerbating an already intolerable income inequality problem.
  4. Differentiation, like it or not, today there is already “technology prejudice”. If you don’t use Facebook, you are labelled “disconnected”. If you aren’t tethered to your phone, you are “unreachable and out-of-touch”. The older generations struggle to grasp the latest technologies in many cases, which isn’t a new phenomenon. However, the difference this time is that the changes are exponential. Already, middle-aged workers with 20 years of experience are finding themselves without work and without the qualifications for the majority of jobs suited to their experience and pay level. I believe that the divide between enhanced humans and regular humans will create a class system, if not an outright schism.
  5. Hackability: as personal privacy evaporates, the one place where I am certain I still have privacy is in my mind. If I engage in Elon Musk’s neural lace or some other potential computer-embedded brain apparatus, then the likelihood that I could be hacked goes from zero to something greater than zero. There is already concern about the hackability of pacemakers and similar devices; this is a real phenomenon.
  6. Simple loss of humanity — our quest for the elimination of errors and greater knowledge destroys something beautiful about humanity: our chaos. Our errors are personal; it’s a rare person who wouldn’t tell you that they learned more from a mistake than from anything else in their life. Our choices, sub-optimal and all, give life color and movement. Our lives have ups and downs, and those roller-coaster rides are part of the joy of life. Each step we take towards greater knowledge and fewer mistakes comes at a cost — our humanity. We stand on the edge of a precipice. Over the edge is the worship of intelligence, and I fear we are teetering on the brink, our center-of-gravity already over the edge. Intelligence is FAR from the thing that makes life worth living, yet every business and every individual is chasing it as if our very lives depended upon it. Chaos avoidance is good for risk management and insurance company spreadsheets. It does not help us achieve “joie de vivre”, the joy of life.
  7. The concept of living forever has a deeply rooted challenger in faith, especially the Christian and Muslim faiths. Efforts and successes on life extension will be met not only with skepticism but with cynicism by the faith-based communities. I believe this will be an additional source of divide between humans, a divide that is rife with risk and ripe for conflict, oppression and subjugation by enhanced humans.

These are simply a few of my concerns and I wanted to share them. Please note that I intend for this to be a dynamic and working list. Challenges and additions are welcome, as feedback always is on my writings. As I explore Transhumanism, I will probably amend and adapt this post with the things that I am able to learn along the way. As always, ForHumanity can use your support, visit the website at https://www.forhumanity.center/ for more information.

Living in the Age of Machines - Do we need a second Bill of Rights?

The initial Bill of Rights, written by James Madison in 1789, served to facilitate the passage, acceptance and ratification of the Constitution by the 13 colonies as the United States was being formed. These individual rights were identified specifically to protect freedoms that the people of the United States already had. Citizens were concerned that the new Federal Government might infringe upon them. In other words, the Bill of Rights was devised to AVOID possible future infringement by the federal government. The Bill of Rights was PRE-EMPTIVE. It has served as the basis for considerable thought, legal scrutiny and precedent for nearly 230 years. The rights guaranteed in the Bill of Rights are VITAL to the lives we live today and to the legal framework of society in the United States.

Since then, the world has changed and expanded considerably. We live in a global world, and even the farthest points of that world can be reached in a matter of hours. Information flows abundantly and instantly via the internet. Today people can pick up and move across the country in a matter of hours. Alternatively, they can travel thousands of miles with minimal danger and get to their destination on their own path and at their own pace. Robust currencies, guaranteed by central governments, have become electronic, exchangeable and pervasive. Everything is for sale and can be purchased with the click of a button. Today, with a single video recording, the entire world may know what we are doing, where we are and how we are behaving. We choose how our families will be structured. Some take a pass on children; others have a dozen. No longer is family size based on need or on abysmal survival rates, but on freedom of choice.

Technology gives us great power and improves our lives 9 times out of 10. But there is something unsettling about the way that technology seduces us as well, and I think that many of us feel it in our souls. We have slowly become accustomed to sacrificing our privacy in the name of convenience, sacrificing freedoms and rights in the name of safety. Just as technological growth has accelerated, I am concerned that those sacrifices will accelerate more quickly as well. That is what ForHumanity wants to safeguard against, because as technology becomes more and more pervasive, we expect these sacrifices to increase and become second nature to many. ForHumanity believes that it is vital now to protect a few of the things that make us human, because it is likely that technological progress will infringe upon those rights sooner rather than later.

With the advent of Artificial Intelligence (AI) and Robotics, ForHumanity has become concerned that some of these freedoms that are not explicitly covered by the Constitution may be usurped in the name of technology. Furthermore, as I have begun to engage in the dialogue that AI experts are having around future society, I see ideas and concepts that are petrifying. Many casual followers of the expansion of AI might be astonished at the ideas some leading academics and practitioners in AI are putting forward. It is my hope that a robust dialogue on this subject may serve to safeguard humanity and the rights we enjoy today. ForHumanity believes it is smart to protect these rights NOW, before AI, both narrow and general, dramatically changes the world we are living in. Before AI can threaten these rights that we hold dear. Rights that we have held so sacred and sacrosanct that our founders didn’t even think to write them down.

One might argue: do we really need to protect these rights now? I would counter that it can no longer wait. The concern is great enough that there are regular conferences and discussions happening where ethics, standards, practices and policy are being crafted already. It’s as if we are ordering the pizza NOW: if you speak up, you have a decent chance of getting the toppings that you would like. If you do not, you will eat what arrives, like it or not.

Here is my list of a few of the rights that we enjoy today but that I would like us to consider guaranteeing. This list is not exhaustive, and each of these ideas will need discussion and refining; however, I am not willing to wait, and I believe that the discussion must be started and the work needs to be done now:

  1. Right to mobility (Right to drive)
  2. Right to use Currency (prevent machines from controlling currency)
  3. Right to Privacy (right to unplug)
  4. Right to Procreate

I was about to explore each one of these individually and then I realized… that’s going to take some time, so I will do it in four individual blog posts. So instead I want to leave you with 4 questions to ponder…

  1. Would you be happy if the government told you when and where you could go?
  2. Would you be happy if a 100% automated company could buy and sell vital resources, unchecked by humans?
  3. Would you be happy to be told that you MUST have electronic health records for government provided health care?
  4. Would you be happy to be told that in your life you could only have 1 child?

If you answer “No” to any or all of those questions, then you are aligned with the thinking of ForHumanity and would probably like to safeguard your rights on those issues. If you answered “Yes”, I’m not sure why, to be honest, as I am seeking a serious path of least resistance here, but I would love to know and understand your thinking as well. I am keen for dialogue on each of these subjects, so please respond. I will aim to post four more blog posts, one for each of these points, over the coming week.
