
The Explainable AI Imperative Amid Global AI Regulation

Expert.ai Team - 3 June 2022

 

The General Data Protection Regulation (GDPR) was a big first step toward giving consumers control of their data. As powerful as this privacy initiative is, a new personal data challenge has emerged: privacy concerns are now focused on what companies do with data once they have it. This shift is driven by the rise of artificial intelligence (AI), as neural networks accelerate the exploitation of personal data and raise new questions about the need for further regulation and safeguards for privacy rights.

Core to the concern about data privacy are the algorithms used to develop AI models. Consumers want to understand the logic behind an AI model’s decision-making process, but algorithms are often unable to offer that level of insight. This lack of explainability breeds consumer distrust in AI programs.

Today, only about a quarter (28%) of citizens are willing to trust AI systems in general. This distrust is leading to calls for increased AI regulation in both the European Union (EU) and the United States. These calls appear to be making an impact, as regulatory bodies have begun to move toward legislation that requires models to meet certain explainability thresholds, including the ability to interpret and explain AI outcomes.

Given some of the very public issues companies have had regarding explainability (see: Apple and Amazon), regulation may not be such a bad thing. Let’s see why:

 

The Role of AI Explainability

AI is not an infallible technology. Models are prone to error and bias, just as their human counterparts are. The degree to which these issues go undetected and are magnified depends on how explainable the model is. Consider the impact a biased algorithm trained on data from medical devices could have on healthcare-related decisions. For these reasons, explainability is critical to AI models, especially those that directly influence consequential decisions.

In short, explainability is an attempt to infuse trust into AI. It provides a basis for understanding both the inputs and outputs of a model. With unsupervised machine learning, there is very little insight into the inputs machines use to train their models and algorithms. Adding a human in the loop for training and validation provides insight into inputs, but it can be biased and difficult to scale.

 

What Is AI Explainability?

Explainable AI (XAI) describes models whose results humans can understand. This stands in contrast to black-box models built through machine-to-machine learning, in which there is no insight into what machines learn or how that information impacts results.

How is AI explainability measured? Measuring AI explainability requires a human subject matter expert to understand how the machines arrived at the outcome. The ability to re-enact and retrace the results and check them for plausibility is also required, as is an AI impact assessment.

How are AI explainability and interpretability different? Interpretability refers to how accurately a machine learning model aligns cause with effect. Explainability refers to what is known about a model's input parameters, so that its performance can be justified or explained.

 

AI Explainability: The Problem It Solves

The core advantage of advancing AI explainability is reaching the goal of trustworthy AI. This requires models to be explainable so that an AI program's actions and decisions can be understood and justified. Explainability reveals the reasoning behind a prediction, so the information is more trusted. You can know what the model is learning and why it is learning it. You can see how the model is performing, which increases the transparency, trustworthiness, fairness and accountability of the AI system.

It’s easier for stakeholders to trust and act upon the predictions made by explainable AI. Here are some examples of explainable AI to better understand its value:

  • Healthcare: Explainable AI can help clinicians explain their treatment recommendations to patients and provide insight into the variables that went into the decisions.
  • Finance: Explainable AI can explain why a loan was approved or denied, increasing trust in those outcomes.
  • Human Resources: Explainable AI can tell us why a candidate’s resume was selected or not selected for further review, helping to mitigate bias or unfairness.
  • Fraud: Explainable AI can tell us why a transaction was flagged as suspicious or possibly fraudulent while avoiding ethical risks associated with bias and discrimination.

 

What Are AI Explainability Challenges?

As calls for more AI regulation grow, trust in AI models must be established. This includes AI governance to track models and their results. The European Commission has already proposed the creation of the first-ever legal framework on AI for its member states, one that addresses the risks of AI and positions Europe to play a leading role globally.

More trust can be placed in model predictions and results when explainable AI is achieved. If problems are identified, you can see where they exist and address them directly by changing a rule. This transparency, enabled by symbolic AI, helps to minimize model bias because rules can be monitored and managed.

This stands in stark contrast with black-box ML models that use autonomous machine-to-machine learning. Explainability for these models is inherently difficult and costly to manage, because machine learning relies on large quantities of training data and complex models, with no clear understanding of the rules the models derive from that data. Understanding what such a model is doing requires a great deal of work.

 

Mitigate AI Risk with Explainability

Many companies are moving to a risk-based approach to AI to counter this challenge. This involves categorizing AI systems based on risk level and creating guidelines and regulations to mitigate risks based on the level of impact.

The EU has introduced a risk-based proposal for regulating AI systems that pose a high or moderate risk to the rights and freedoms of European citizens (e.g., China’s credit scoring system). Part of a risk-based approach to AI involves deciding whether or not AI should be used to make a specific decision, because those decisions can have a damaging effect on society.

Core to a risk-based AI system would be an impact assessment to ensure minimal risk. The EU's risk-based approach also includes transparency requirements for AI in use (like chatbots). The goals of these risk-based AI rules are to:

  • foster trust in AI implementation;
  • align AI use with European values;
  • aid in the development of a secure, human-centered AI; and
  • mitigate the societal and human risks associated with AI use.
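
To make the idea concrete, here is a simplified, hypothetical sketch of how a risk tier might map to governance obligations. The tier names and controls are illustrative assumptions loosely inspired by the EU proposal, not its actual legal text.

```python
# Simplified, illustrative sketch of a risk-based AI governance check.
# The tiers and obligations below are hypothetical, loosely inspired by the
# EU's risk-based proposal; they are not the actual legal requirements.

RISK_TIERS = {
    "unacceptable": {"allowed": False, "obligations": []},
    "high": {"allowed": True, "obligations": [
        "impact_assessment", "human_oversight", "explainability_report"]},
    "limited": {"allowed": True, "obligations": ["transparency_notice"]},  # e.g., chatbots
    "minimal": {"allowed": True, "obligations": []},
}

def governance_checklist(system_name: str, risk_tier: str) -> dict:
    """Return what a system must satisfy before deployment under this scheme."""
    tier = RISK_TIERS[risk_tier]
    return {
        "system": system_name,
        "deployable": tier["allowed"],
        "required_controls": tier["obligations"],
    }

print(governance_checklist("loan_scoring_model", "high"))
print(governance_checklist("customer_chatbot", "limited"))
```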

Hindering the widespread use of explainable AI is the difficulty of creating such models. Explainable AI models can be challenging to train, tune and deploy. Relying on subject matter experts or data scientists to train models is time consuming and doesn't scale. Another challenge is the inability of machine learning models to track how pieces of data relate to each other once they are fed into the algorithms.

For AI to be used at scale, a combination of machine learning and human oversight is needed, and that's where symbolic AI can help. It combines the best of autonomous learning with human subject matter expertise to scale explainable AI.

 

Advance Trust with Symbolic AI

As regulatory actions toward AI advance, companies must take measures to build more trust into their AI initiatives. The most reliable and direct way to do this is by using a rule-based approach (symbolic AI) to develop your model.

With symbolic AI, relationships between concepts (logic) are pre-established within knowledge bases rather than inferred autonomously by machines. As a result, any undesirable or biased result can be traced back to its root cause and quickly fixed, without the cost and time of retraining an entire model.
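
As a minimal sketch of why this matters, consider a hypothetical rule-based decision step. The rule names, thresholds and labels below are invented for illustration; they do not represent expert.ai's knowledge bases or any real lending policy.

```python
# Hypothetical sketch of a rule-based (symbolic) decision step.
# Every rule is an explicit, named condition, so a decision can be traced
# back to the exact rule that produced it, and that rule edited directly.

from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Rule:
    name: str                          # identifier used in audit trails
    condition: Callable[[dict], bool]  # when the rule fires
    label: str                         # decision the rule produces

RULES = [
    Rule("low_income_flag", lambda app: app["income"] < 20_000, "review"),
    Rule("high_debt_ratio", lambda app: app["debt_ratio"] > 0.6, "deny"),
]

def classify(application: dict) -> Tuple[str, str]:
    """Return (decision, rule_name) so every outcome is explainable."""
    for rule in RULES:
        if rule.condition(application):
            return rule.label, rule.name
    return "approve", "default"

decision, fired_rule = classify({"income": 18_000, "debt_ratio": 0.3})
print(decision, fired_rule)  # -> review low_income_flag
# If "low_income_flag" turns out to encode bias, it can be changed or removed
# here directly, with no model retraining required.
```

Because each decision carries the name of the rule that produced it, an audit can point to the exact piece of logic to change.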

A rule-based approach enables you to develop fully explainable models. Full explainability is essential for models that make mission-critical decisions (e.g., healthcare- or finance-related actions). Anything less can leave companies at risk for serious legal and/or financial repercussions should an issue arise.

While full explainability is ideal, it is neither necessary nor feasible for every AI model and use case. In many cases, making certain aspects of a model explainable is more than sufficient. This is still enabled by symbolic AI, but as part of a hybrid AI approach.

In a hybrid approach, explainability can be imparted in several valuable ways. For example:

  • You can make the feature engineering process explainable by using symbolic AI to extract the most relevant entities and using them to train your ML model (see the sketch after this list).
  • You can annotate large amounts of unstructured data and use a human in the loop to validate the annotations before training your ML model on that data.
  • You can use ML to automatically curate a set of symbolic rules by taking annotations as input.
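
As one hypothetical sketch of the first pattern above, symbolic rules can extract named features from text that then feed a conventional ML classifier. The rules, data and library choice (scikit-learn) are illustrative assumptions, not a description of the expert.ai Platform.

```python
# Hypothetical sketch of hybrid feature engineering: symbolic rules extract
# named, auditable features from text, and a standard ML classifier is
# trained on those features rather than on an opaque raw-text representation.

import re
from sklearn.linear_model import LogisticRegression

# Symbolic layer: simple extraction rules (illustrative only).
FEATURE_RULES = {
    "mentions_amount":  lambda text: bool(re.search(r"\$\d+", text)),
    "mentions_urgency": lambda text: "urgent" in text.lower(),
    "mentions_account": lambda text: "account" in text.lower(),
}

def extract_features(text: str) -> list:
    """Each feature maps to a named rule, so model inputs stay explainable."""
    return [int(rule(text)) for rule in FEATURE_RULES.values()]

# Tiny illustrative training set (labels: 1 = suspicious, 0 = normal).
texts = [
    "URGENT: verify your account to release $5000",
    "Monthly statement for your account is ready",
    "Lunch on Friday?",
    "Please wire $900 urgently",
]
labels = [1, 0, 0, 1]

X = [extract_features(t) for t in texts]
model = LogisticRegression().fit(X, labels)

# Predictions can be explained in terms of the named symbolic features.
print(model.predict([extract_features("urgent transfer of $250")]))
```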

Companies have not placed enough of a priority on model explainability and, in turn, customer trust. That is going to change, one way or the other. The question is: will you take proactive measures to impart explainability now, or will you wait for regulation to dictate it for you?

Hybrid Offers More Than Explainability

Learn more about the expert.ai Platform and some of the key capabilities it unlocks for you and your AI system.
