
Do You Trust Your AI Algorithm?

Expert.ai Team - 23 November 2020

The growth of artificial intelligence is incredible to watch. We have seen adoption in nearly every industry vertical, with applications and systems we never thought possible. But while the possibilities of AI grow by the day, our trust in the technology has stagnated.

Given how pervasive AI has become, many consumers simply take it for granted. As long as it provides immediate value to their lives (i.e., completes their task faster), they ask no questions. However, as privacy laws expand and consumers become more protective of their personal information, the question of transparency looms large.

This newfound prioritization of privacy has quickly led businesses to question and attempt to quantify their algorithmic trust. After all, if your organization cannot unequivocally trust its own algorithm, why should consumers?

Transparency Builds Trust in AI

On its surface, AI is an extremely powerful technology. We know it streamlines workflows and helps us make quicker, more informed decisions. However, how much do we really understand about each application of AI? The truth is, we know very little, especially when it comes to what happens between every input and output.

Machine learning and deep learning models are notoriously difficult to explain, as most operate as black boxes. Even a high-level overview of the AI process can be hard to come by, given how complex and data-intensive these systems are. This challenge has thrust explainable AI into the spotlight.

Explainable vs. Interpretable AI

While many are quick to use explainable AI and interpretable AI interchangeably, there is a clear distinction to be made. This distinction is determined by the level of understanding required by the end user.

Interpretable AI is essentially your college-level class. It provides a foundational understanding of a topic, but far from enough to speak fluently about it. Explainable AI is your master's-level class. Here, you gain a thorough understanding of a topic, down to the minute details.

Not every AI system needs to be explainable, but every system needs to reveal enough for a layperson to understand it. For example, a chatbot that follows a standard logic tree is very explainable: you can show a user or stakeholder exactly how an input leads to an output. On the other hand, a system that operates an autonomous vehicle is far more complex and, thus, requires an experienced professional to work through the mechanical details.
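To make the chatbot example concrete, here is a minimal sketch in Python; the topics, questions, and replies are invented purely for illustration. Because every answer is reached by walking an explicit branch of the tree, the full path from input to output can be shown to any user or stakeholder.

    # Minimal sketch of a logic-tree chatbot: every reply is reached by
    # walking explicit, human-readable branches, so the input-to-output
    # path can be shown to anyone who asks.
    # (The topics and responses below are hypothetical examples.)

    LOGIC_TREE = {
        "billing": {
            "refund": "Refunds are processed within 5 business days.",
            "invoice": "You can download invoices from your account page.",
        },
        "technical": {
            "login": "Try resetting your password from the sign-in screen.",
            "outage": "Check the status page for current incidents.",
        },
    }

    def answer(topic: str, detail: str) -> str:
        """Return a reply and print the exact branch that produced it."""
        branch = LOGIC_TREE.get(topic, {})
        reply = branch.get(detail, "Sorry, I can't help with that yet.")
        # The "explanation" is simply the path taken through the tree.
        print(f"path taken: {topic} -> {detail}")
        return reply

    if __name__ == "__main__":
        print(answer("billing", "refund"))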

Trust is the Foundation of Responsible AI

Mistakes are bound to be made by AI platforms, just as they are made by humans. You will never completely eradicate bias from your platform, but you must be conscious of where it exists and how it came to be.

Responsible AI is the layer beyond interpretable and explainable AI. It is a framework built upon four pillars, according to Accenture:

  1. Govern
  2. Design
  3. Monitor
  4. Train

Each of these pillars helps to establish trust in your AI initiatives. If you take any shortcuts along the way, you immediately put your organization in a vulnerable position.

A Cautionary Tale

Consider the situation Apple encountered when launching the Apple Card in August 2019. As applications came flooding in, consumers quickly noticed a discrepancy between the credit limits offered to women and those offered to men.

When asked what caused the apparent bias, Apple had no explanation. As it turns out, gender was not even an input to the algorithm (a design flaw), nor did Apple monitor the algorithm for bias itself (a failure to monitor). Relying on outside partners to build an algorithm is perfectly fine, but Apple is still responsible for understanding and responding to the issues it encounters (a lack of training).

Situations like these can become less frequent and less harmful with a commitment to responsible AI. That’s not to say it’s easy, but it can be made easier with the right technologies.

The Role of Symbolic and Natural Language Understanding in Responsible AI

Machine learning is a great foundation for training your AI system, but on its own it faces consistent challenges in establishing responsible AI. All machine learning models rely on large amounts of data to train themselves and learn on a continual basis. While this allows the system to operate quickly and autonomously, the entire mechanism exists in a black box. Thus, any resulting errors cannot be pinpointed, and you must retrain the entire system.

On the other hand, a symbolic AI system is founded on a rules-based approach. This provides complete explainability to your organization as the rules are clearly established and visible for all to see. Thus, any resulting errors can be easily located and you can quickly establish new rules to eliminate the issue.
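As a rough illustration of that visibility (the rule names, keywords, and labels below are hypothetical), each rule in a symbolic classifier is an explicit, named object. When a document is misclassified, you can see exactly which rule fired and fix that one rule, with no retraining.

    # Sketch of a rules-based (symbolic) text classifier. Each rule is
    # explicit and named, so a misclassification can be traced to the
    # rule that fired and corrected by editing that single rule.
    # (Rule names, keywords, and labels are hypothetical.)

    RULES = [
        ("refund_request", ["refund", "money back"], "BILLING"),
        ("password_issue", ["password", "log in"], "SUPPORT"),
        ("cancellation", ["cancel my account"], "RETENTION"),
    ]

    def classify(text: str) -> tuple[str, str]:
        """Return (label, rule_name) so every decision is explainable."""
        lowered = text.lower()
        for rule_name, keywords, label in RULES:
            if any(keyword in lowered for keyword in keywords):
                return label, rule_name
        return "UNKNOWN", "no_rule_matched"

    if __name__ == "__main__":
        label, why = classify("I want my money back")
        print(f"label={label}, fired_rule={why}")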

In a hybrid AI model, you combine the best of both approaches: a human-like understanding of language that extracts value from unstructured data with the data processing capabilities of machine learning.
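Below is a hedged sketch of what such a hybrid pipeline might look like; the specific rules, the toy training data, and the use of scikit-learn are assumptions made for illustration, not a description of expert.ai's actual architecture. Transparent rules decide first and explain themselves, and a statistical model only handles the inputs the rules do not cover.

    # Sketch of a hybrid pipeline: symbolic rules decide first (and
    # explain themselves); a machine learning model only handles inputs
    # the rules don't cover. The rules, training data, and scikit-learn
    # choice are illustrative assumptions, not any vendor's architecture.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    RULES = [(["refund", "money back"], "BILLING"),
             (["password", "log in"], "SUPPORT")]

    # A tiny, hypothetical training set for the statistical fallback.
    texts = ["great product, thanks", "this is broken again",
             "love the new release", "it crashes on startup"]
    labels = ["PRAISE", "SUPPORT", "PRAISE", "SUPPORT"]

    ml_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    ml_model.fit(texts, labels)

    def hybrid_classify(text: str) -> tuple[str, str]:
        """Return (label, source) so you know which path produced it."""
        lowered = text.lower()
        for keywords, label in RULES:
            if any(keyword in lowered for keyword in keywords):
                return label, "symbolic rule"  # fully explainable path
        return ml_model.predict([text])[0], "ml model"  # statistical fallback

    if __name__ == "__main__":
        print(hybrid_classify("I need a refund"))
        print(hybrid_classify("everything works perfectly"))

In a setup like this, the explainable rules cover the cases you must be able to defend, while the statistical model provides coverage where explicit rules have not yet been written.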

In every case, you are responsible for the personal data leveraged by your system. That means you are responsible for knowing exactly how it is used and expressing that to your customer or user base. The only way to truly ensure this measure of responsibility is by implementing a symbolic approach to AI.

While much of your AI success will depend on the technology you choose to adopt, natural language understanding and symbolic AI will put you one step closer to establishing a responsible AI system that your organization can rely upon. So stop taking algorithmic trust for granted. It’s time to earn it.

Grow Your Explainable Knowledge

See how expert.ai Studio and expert.ai Edge NL API can build responsibility into your AI endeavors.
