We all know that sometimes in business we need to step out of our own safe worlds to make decisions and do whatever needs to be done; however, it’s not always easy to change the way you work. We want to embrace change and innovate, but we also want to make sure the investment provides more value than it costs. This is what happens when we explore new technologies and deploy software to improve our companies (and our lives). And AI is no exception.
How to determine if AI is worth the investment
AI is software. Despite the marketing hype and romantic idealization, AI is still just that: software. As with any other software decision, the main driver in determining whether the investment is worth it is whether it can help you make or save money. In the AI domain of unstructured data and natural language understanding and processing (NLU/NLP), there is a vast range of processes and functional tasks that can be automated, from taxonomy creation to use cases like contract analytics, email management and customer sentiment. AI can certainly bring capabilities, but it is up to you to look at the challenge or problem you want to solve and define the value of your solution.
Business performance vs AI model performance
In a business world where 60-80% of AI projects fail, asking yourself whether AI is worth more than it costs is legitimate; in fact, it is necessary. But it is also reasonable to ask how some organizations set themselves apart from the failures, and how they have led the remaining 20-40% of AI projects to success.
Software projects succeed when there is a clear focus on the right expectations. Forget AI model performance; rather, define the right expectations and focus on them. It is not data scientists who can determine how (or whether) AI is worth it, since they look at model performance to see how well algorithms do with a given data set. It is the line of business that owns the business problem or opportunity and benefits from what a successful AI implementation could realize.
Scaling does not mean just delivering
Today, we’re seeing major investments by some of the world’s largest tech companies in machine learning and deep learning (ML, DL) and in large language models (LLMs). These approaches are based on statistics and pattern recognition and require an immense amount of data to train. When applied to language, this means they predict based on the presence, position and frequency of a keyword or pattern in a text. But enterprise language data is typically insufficient to train the models (or there is not enough time or resources available to train them), and the main challenge is language itself: its nuances do not provide enough consistency and predictability. This is one of the most relevant reasons why symbolic AI is being integrated with ML and DL (creating the so-called “composite AI” or “hybrid AI” approach) to provide the best of both AI worlds: full machine intelligence with logic-based brains that improve with each application.
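To make the statistical mechanism concrete, here is a minimal sketch in Python of a classifier that “understands” nothing beyond the presence and frequency of keywords. The toy dataset and labels are hypothetical, invented purely for illustration:

```python
# A bag-of-words classifier: it predicts from token counts alone,
# with no notion of meaning or context. Toy data is hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "Please cancel my subscription immediately",
    "Thank you, the support team was wonderful",
    "This is the worst service I have ever used",
    "Great product, I will recommend it to colleagues",
]
train_labels = ["negative", "positive", "negative", "positive"]

# The model sees only which words occur and how often.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)
model = LogisticRegression().fit(X, train_labels)

# With so little data, anything outside the training vocabulary is a guess.
print(model.predict(vectorizer.transform(["cancel everything, awful service"])))
```

The limitation the paragraph describes shows up immediately: with small or inconsistent enterprise data, the vocabulary the model learned rarely covers the language it will actually see.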
Through symbolic AI you can assign a meaning to each word based on embedded knowledge and context. It doesn’t try to predict something; it tries to emulate how we understand language and read content as human beings. The business advantage that sets symbolic AI apart from learning approaches is that you can work with much smaller data sets to develop and refine the AI’s rules. When combined, symbolic AI and learning approaches complement each other, delivering the best results for NLP applications.
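As an illustration, here is a minimal, hypothetical sketch of what a symbolic layer can look like: hand-written rules that assign an explicit meaning when words appear in a given context. The patterns and category names are invented for the example:

```python
# A rule-based symbolic layer: each rule encodes explicit knowledge
# about words in context rather than learned statistics.
# Rules and categories here are hypothetical illustrations.
import re

RULES = [
    # (pattern, meaning assigned when the pattern matches in context)
    (re.compile(r"\bterminat\w+\b.*\bcontract\b", re.I), "contract_termination"),
    (re.compile(r"\brefund\b|\bmoney back\b", re.I), "refund_request"),
    (re.compile(r"\bthank(s| you)\b", re.I), "gratitude"),
]

def rule_classify(text: str) -> list[str]:
    """Return every meaning whose rule matches, so the outcome is traceable."""
    return [meaning for pattern, meaning in RULES if pattern.search(text)]

print(rule_classify("We intend to terminate the contract and expect a refund."))
# -> ['contract_termination', 'refund_request']
```

Note that no training data was needed: a subject-matter expert can write, inspect and refine each rule directly, which is exactly the small-data advantage described above.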
AI tradeoffs and costs
Other benefits of leveraging the strengths of symbolic AI to complement ML/DL include explainability. Learning approaches are black boxes: once the model has been trained, it is impossible to see why it behaves in a certain way. There is no way to correct bias or remove questionable, controversial results, since we don’t know how the algorithm arrived at a result or why it produces a certain output. Because the model is not explainable and you cannot understand the logic behind it, you are forced into an inefficient cycle of going back to chase the expected results, typically by adding more data or labeling a lot more of it. This is a particularly onerous tradeoff to think through when you approach AI decisions in natural language. With a hybrid approach, however, you can target ML/DL at the pieces of a problem where explainability isn’t necessary, while symbolic AI, which reaches conclusions and makes decisions via a transparent process (explainable AI), doesn’t require the considerable amount of training data that ML does.
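One way to picture that split is a simple router that prefers the transparent path and falls back to the statistical one. This is only a sketch of the idea; `rule_classify`, `ml_model` and `vectorizer` stand in for the two illustrative components above and are hypothetical names:

```python
# A hybrid router: decisions that must be explainable go through
# transparent rules; the statistical model handles the rest.
def hybrid_classify(text, rule_classify, ml_model, vectorizer):
    meanings = rule_classify(text)
    if meanings:
        # Symbolic path: the matched rules *are* the explanation.
        return {"label": meanings[0], "explanation": f"matched rules: {meanings}"}
    # Statistical path: a prediction with no human-readable rationale.
    label = ml_model.predict(vectorizer.transform([text]))[0]
    return {"label": label, "explanation": None}
```

The design choice is the point: wherever a result must be justified, corrected or audited, the symbolic path keeps the reasoning visible instead of forcing another round of retraining.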
AI that is responsible and explainable
According to a recent survey, “As AI failures expose companies and their customers to risks, and regulatory attention grows, evidence points to the value of cultivating RAI (responsible AI) policies even before an AI system rollout.”
There is no one ironclad definition of responsible AI. But as the technology matures, and as customers become more attentive to issues related to the environment, social responsibility, equity and privacy, there is a growing acceptance that AI must include transparency (also expressed as explainability or accountability), sustainability (a low carbon footprint), efficiency (practical AI) and a “human in the loop” approach (AI must be understandable to humans, and it must not replace humans but humanize the work we perform, allowing us to be more productive, efficient and happier in our jobs and in our lives in general). The combination of these four aspects is a practical framework that helps you think through the full range of AI tradeoffs and costs, define the value you want to create, and determine the AI approach and the work necessary to scale and keep your AI system operational.