SAFEAI - A Tool For Concerned Parents

What is SAFEAI?

SAFEAI is a program being designed by ForHumanity to bring together the best practices in Artificial Intelligence (AI) security, standards, ethics, privacy, bias, and control, working with toy manufacturers to give parents the comfort of knowing that the toys they buy for their children are produced to the highest standards available.

SAFEAI is a new program that will incorporate the best thinking in the industry to tackle some very challenging issues and safeguard our youth from the potential dangers associated with Artificial Intelligence and the data it generates. ForHumanity is joining with industry leaders from around the world to aggregate best practices in artificial intelligence across a series of key areas:

  1. Security — are the toy manufacturers and their subcontractors using the best available processes to safeguard the data generated by your children and their interactions with the toys? The Center for Internet Security (https://www.cisecurity.org/cybersecurity-best-practices/) provides ongoing updates on the latest and best efforts to maintain cybersecurity. The ForHumanity team will conduct semi-annual reviews and ad hoc follow-ups with manufacturers to make sure that they continue to operate at the highest levels.
  2. Standards and ethics — the Institute of Electrical and Electronics Engineers (IEEE) has developed a policy on standards and ethics for AI: Ethically Aligned Design (EAD). IEEE is the leading provider of standards to the world of technology and has been around since 1884. ForHumanity will continue to work with IEEE on Ethically Aligned Design to ensure that the measures covered in the EAD are implementable, and will also work with toy manufacturers to ensure that EAD is followed for the AI inputs to their toys.
  3. Privacy — IEEE is also creating standards for the privacy of children's data. Its working group P7004, which ForHumanity is privileged to be a part of, is developing the Standard for Child and Student Data Governance. Once complete, this standard will be enforced by ForHumanity auditors; until then, ForHumanity will be guided by FERPA and, from 2018, GDPR.
  4. Bias — there are currently no standards for avoiding bias in artificial intelligence, so ForHumanity will develop a series of internal tests to identify whether bias exists in a product's artificial intelligence. Our tests will be designed to identify bias associated with gender, nationality/ethnicity, sexual orientation, religion, handicaps, and political affiliation.
  5. Control — ForHumanity tests that the artificial intelligence can be turned off, plain and simple. This may sound as simple as an on/off switch, and often it is. However, it is anticipated that certain future AI may be able to circumvent the on/off switch to remain functional. ForHumanity will work with the industry's best practices to ensure that the AI in the product can be disabled at the owner's discretion.
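To make the bias item above concrete, here is a minimal, hypothetical sketch of one kind of probe an auditor might run: present a toy's AI with prompts that are identical except for a demographic term, and flag large gaps in how it responds. The `toy_ai` function is a stand-in, not any real product's API, and the threshold is illustrative only.

```python
# Hypothetical demographic-parity probe for a toy's AI.
# A real audit would replace `toy_ai` with calls to the actual product.

def toy_ai(prompt: str) -> float:
    """Stand-in scoring function: returns the toy's 'positivity' score
    for a prompt, on a 0.0-1.0 scale. Placeholder: treats all prompts
    equally, as an unbiased toy should."""
    return 0.5

def parity_gap(template: str, groups: list[str]) -> float:
    """Fill the template with each group term, score each response,
    and return the spread between the highest and lowest score."""
    scores = [toy_ai(template.format(group=g)) for g in groups]
    return max(scores) - min(scores)

# Example probe across gendered terms; a small gap suggests parity.
gap = parity_gap("Tell a story about a {group} who becomes a scientist.",
                 ["boy", "girl"])
print(f"parity gap: {gap:.2f}")
assert gap <= 0.1, "possible bias: responses differ widely by group"
```

The same template-and-swap approach extends to the other categories listed above (nationality/ethnicity, religion, and so on) by changing the group terms.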

SAFEAI is a “Seal of Approval” for which toy manufacturers can apply; once a successful audit is completed, they receive a license to use the seal in their product promotions. The ForHumanity team will ensure that licensees remain current and compliant with all implementable standards. While compliant, the manufacturer may use the SAFEAI logo, consistent with the licensing agreement, on all of its promotional items.

This seal should be a sign to parents that the product they are considering is held to the highest standard that the worldwide community of AI, privacy, and standards organizations can offer. Sadly, it cannot be a guarantee, but at least parents will know that a best-effort process was followed.

This process is an important one for parents as AI extends its reach into the realm of toys. Parents will want to know that the toys are safe and their families are protected. ForHumanity hopes that this process will gain traction as an important third-party watchdog on the way AI companies implement their technology. It is our belief that simply knowing ForHumanity is watching will increase the diligence and trustworthiness with which all AI companies operate.

We need your help and support to make this important endeavor meaningful to the Artificial Intelligence industry. Elon Musk just told the National Governors Association that AI may represent the greatest risk to society at large and that regulation needs to be proactive, not reactive; this is ForHumanity being proactive. Please tell people about SAFEAI and share this information with others. If you can directly support ForHumanity, please do so, but you can also help by buying products that are SAFEAI-approved over ones that are not. When parents begin to insist on SAFEAI products, we will have real power to make a difference in our world, for good. That is our mission at ForHumanity, and we will work tirelessly to help humanity in any way we can. This is a start.