
Responsible AI: You Know It When You Don’t See It 

2 May 2023

The news cycle surrounding ChatGPT has literally brought AI into my family dinner table conversations. Recently, a family member told me, “AI is so bad, I heard it on NPR the other day; we need to be concerned.” She was referring to the concept of responsible AI. And I don’t mean to pick on ChatGPT, but since it is the first public-facing use of a large language model (GPT 3.5 was the LLM initially behind ChatGPT), it serves as the example of how responsible AI has become a mainstream issue.

So, while it will seem like I’m throwing ChatGPT under the bus here, the truth is that large language models (LLMs) have their place in the AI world, and expert.ai is using them in multiple use cases in our enterprise go-to-market approach. But because ChatGPT raised the public consciousness of AI in general, it has sparked a much broader conversation about responsible AI.

While ChatGPT has sparked much of the current debate, responsible AI has been an issue not just since the inception, but the conception, of AI. Perhaps the best-known example (although not the first) of AI running amok is the line “I’m sorry, Dave. I’m afraid I can’t do that” from the 1968 movie 2001: A Space Odyssey. When ChatGPT first came out, most of the initial conversation centered on students cheating on school papers and the end of the college essay as we know it. Soon after, deeper issues became part of a more serious discussion.

That’s because LLMs, which is what ChatGPT is based on, are trained on publicly available data like website content, Reddit posts, social media and other open sources. As a result, the content they return can be toxic.

OpenAI, the company that developed ChatGPT, had humans review the model to minimize toxicity, but, as you can imagine, with 175 billion parameters, toxicity cannot be completely eliminated via manual human review. In addition, the generative output of ChatGPT is so plausible sounding that you can’t immediately tell whether what it’s writing is true or false. It reminds me of times my then 4-year-old daughter would spin an engaging story that gradually trailed off into her still-developing imagination, and we had to ask: “Is this a real story or a made-up story?”

This is akin to what ChatGPT does and why the term “hallucinations” has been used to describe what it produces. Unlike a child’s innocent and usually detectable make-believe world, ChatGPT presents its output with authority; there are examples where it creates seemingly credible references and citations that do not exist. This recent article from The New York Times cites several examples of these types of credible-sounding fabrications. We have seen these same types of results in response to medical inquiries (prompts), and, if you think of medical professionals using ChatGPT for research, this is a dangerous outcome.

Naturally, this brings up the question of responsible AI. Where does it come into play? What does it mean? And who is responsible for implementing it? Even with the cautionary tales, there is something that isn’t discussed much as part of the responsible AI dialogue, and herein lies the issue: there is no single definition of responsible AI.

What is Responsible AI?

Responsible AI is like the famous U.S. Supreme Court ruling from the 1960s on the definition of obscenity, in which Justice Potter Stewart said, “I know it when I see it.” It’s good to see that sometimes practicality wins the day even in a legal setting. In the case of responsible AI, perhaps it’s the other way around: “you know it when you don’t see it.”

Early in 2022, expert.ai spent considerable time developing and offering our own version of the concept (I deliberately did not use the term “definition” here). Our experience with over 250 customers and 350+ engagements has taught us that, by using hybrid AI, which can incorporate a rules-based approach, enterprises can move from “black box” AI to what we call the “green glass approach to AI.”

A hybrid AI approach uses different techniques to solve a problem. For example, in scenarios that involve natural language, you could address a use case through a combination of an LLM, a machine learning model or a rules-based (or semantic) approach. A hybrid approach uses more than one of those techniques to deliver a solution. You can learn more about hybrid AI here.
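To make the idea concrete, here is a minimal sketch of what a hybrid text classifier might look like. It is illustrative only, not expert.ai’s implementation: the rule patterns, labels and the ml_classify fallback are hypothetical placeholders standing in for a real rules engine and a trained model.

```python
# Illustrative hybrid classifier sketch (hypothetical, not expert.ai's product):
# an explainable rules layer runs first, and a statistical model only steps in
# when no rule fires.

import re
from typing import Optional

# Hypothetical rule set: each rule is a (pattern, label) pair that a domain
# expert can read, audit and adjust -- this is the transparent layer.
RULES = [
    (re.compile(r"\b(invoice|purchase order|net 30)\b", re.I), "billing"),
    (re.compile(r"\b(side effects?|dosage|contraindication)\b", re.I), "medical"),
]

def rule_based_classify(text: str) -> Optional[str]:
    """Return a label when a rule matches, with a human-readable reason."""
    for pattern, label in RULES:
        match = pattern.search(text)
        if match:
            print(f"matched rule '{pattern.pattern}' on '{match.group(0)}'")
            return label
    return None

def ml_classify(text: str) -> str:
    """Placeholder for a machine learning / LLM fallback.

    In a real system this would call a trained model; here it simply
    returns a default label so the sketch runs end to end.
    """
    return "general"

def hybrid_classify(text: str) -> str:
    # Try the cheap, explainable path first; fall back to the model only
    # when the rules are silent, which also reduces compute per request.
    label = rule_based_classify(text)
    return label if label is not None else ml_classify(text)

if __name__ == "__main__":
    print(hybrid_classify("What is the recommended dosage for this drug?"))
    print(hybrid_classify("Tell me about the weather."))
```

The design point the sketch is meant to show: every prediction handled by the rules layer comes with a reason a person can inspect, and the more opaque model is invoked only when that layer has nothing to say.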

As an added bonus, hybrid AI applied to text via natural language processing (NLP) can also improve accuracy and reduce the carbon footprint for users. This is particularly relevant as enterprises on all continents face required disclosure on environmental, social and governance (ESG) issues. Here, transparency (explainability) and sustainability become critically important.

Using hybrid NLP and incorporating a knowledge-based approach provides transparency and explainability, and that’s how we get to our green glass approach. In our view, an AI solution must be:

  • Transparent: Outcomes need to be easily explainable, so that humans can understand and track how the system arrived at the results.
  • Sustainable: Using a hybrid AI approach is less compute-intensive than most pure machine learning or LLM approaches. Less computing = less energy = lower carbon footprint. 
  • Practical: Use the most efficient and reliable way to solve AI tasks. Don’t deploy technology just because it can be done; use objectives to shape solutions. In other words, right-size a solution to a problem so that it provides more value than it costs.
  • Human-centered: Don’t just include a “human in the loop,” where data and inputs can be monitored and refined by users, but make sure the solution up-levels the work humans do from menial and redundant to engaged and valued. 

Many of the independent organizations we work with, consult and participate in have their own similar concepts of what constitutes responsible AI. Some of those entities include: The Institute for Experiential AI at Northeastern University, NYU’s Center for Responsible AI and the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) Trustworthy and Responsible AI Center. Microsoft recently outlined its own responsible AI approach in a blog post following a summit on generative AI hosted by the World Economic Forum. The common thread that unites these approaches is the need to take an intentional and thoughtful approach to the business challenges we’re solving with AI and the technology choices we’re making along the way.