By now we know that AI is a proven ally for insurance processes. Language data is at the heart of these processes, and it requires technology that can deal with its complexity at scale. This is where AI provides real value, and it’s also where much of the interest has centered since ChatGPT was introduced more than a year ago. Fast forward to today, and we’re constantly hearing about the potential of generative AI and LLMs, the use cases being applied and the partnerships being activated. It’s clear: companies around the world are testing it out.
For example, Bain recently estimated that GenAI represents a $50 billion financial opportunity for insurance businesses worldwide, with revenue-generating potential of up to 20% and cost-reduction benefits of up to 15%. A BCG article from December cited the potential for GenAI to deliver a 3-4% reduction in claims payouts and a 20-30% reduction in loss-adjustment expenses. US insurer New York Life recently said that it is using GenAI in marketing, underwriting and predictive modeling.
Still, there are risks. And while this is true of any technology, generative AI and LLMs introduce certain risks that more traditional AI approaches do not, including a tendency to hallucinate, or make things up, as well as risks to data privacy and security. A Cisco Systems survey of privacy and security professionals cited by The Wall Street Journal found that 92% of respondents “believed generative AI was fundamentally different from other technologies and required new techniques to manage data and risks.”
At expert.ai, we’ve integrated GenAI and LLM capabilities into our own hybrid AI platform and into our Enterprise Language Model for Insurance, ELMI, which supports capabilities like generative summarization, zero-shot extraction and generative Q&A through the expert.ai Platform for Insurance. With hybrid AI, we can use semantic AI, which is explainable because it is rules-based and domain-specific, drawing on the years of knowledge we have acquired working with insurance companies, and combine it with GenAI, selecting the best-fit LLM for optimized results and costs.
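To make that division of labor concrete, here is a minimal sketch of how a hybrid pipeline of this kind can be organized: a rules-based layer handles explainable field extraction, and an LLM is invoked only for the generative step. The function names, the regex patterns and the call_llm helper are illustrative placeholders, not the expert.ai Platform’s actual API.

```python
# Illustrative sketch of a hybrid pipeline: rules-based extraction for
# explainable, structured fields, with an LLM used only where generative
# output is needed. Names, patterns and call_llm are hypothetical.
import re

POLICY_NUMBER = re.compile(r"\bPOL-\d{6,10}\b")
CLAIM_AMOUNT = re.compile(r"\$\s?[\d,]+(?:\.\d{2})?")

def extract_fields(document: str) -> dict:
    """Deterministic, rules-based extraction: every match is traceable to a pattern."""
    return {
        "policy_numbers": POLICY_NUMBER.findall(document),
        "claim_amounts": CLAIM_AMOUNT.findall(document),
    }

def call_llm(prompt: str, model: str) -> str:
    """Placeholder for whichever best-fit LLM the task and cost profile call for."""
    raise NotImplementedError("plug in your chosen model here")

def process_claim_document(document: str) -> dict:
    fields = extract_fields(document)          # explainable, auditable layer
    summary = call_llm(                        # generative layer
        f"Summarize this claim notice in three sentences:\n{document}",
        model="small-summarization-model",     # cheaper model for a routine task
    )
    return {**fields, "summary": summary}
```

The point of the split is that the deterministic layer stays auditable while the model behind the generative step can be swapped as cost and quality requirements change.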
So, despite the tested use cases and early gains, it’s important to remember that any use of AI requires a balanced approach, and this is especially the case for generative AI.
Six GenAI Adoption Challenges for Insurers
From infrastructure and security requirements to expertise, there are several factors that companies need to address prior to adoption:
Access to Skills and Expertise
Building and working with LLMs requires a broad range of expertise, from data science and machine learning to domain-specific knowledge in insurance processes and regulations. Data scientists and machine learning engineers play a critical role in developing the algorithms that underpin LLMs. This requires a deep understanding of natural language processing (NLP) techniques and the ability to manage and process large datasets.
Additionally, professionals with expertise in insurance industry practices are essential for tailoring LLM applications to address sector-specific challenges, such as claims processing or fraud detection. Given the high demand for these skills, partnering with vendors or integrators can support you through the learning curve and provide the specialized skills and experience necessary to help you take advantage of these advanced capabilities.
Data Governance
Effective data governance is critical for navigating the complexities of using AI within insurance processes. The evolution of the technology, as well as emerging industry guidance in response to GenAI and LLM capabilities, requires policies and procedures that ensure the integrity, security and proper usage of data. This includes measures to protect customer privacy, secure data against breaches and ensure that AI-generated insights are both accurate and free of bias.
As AI models are trained on company data, insurers must ensure that model output does not inadvertently expose sensitive information or perpetuate biases, highlighting the importance of continual monitoring and review of AI systems.
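One concrete control in this direction is masking obvious personal identifiers before any document text reaches an external model. The sketch below is a minimal illustration only; the patterns shown are nowhere near exhaustive, and a production redaction step would need far broader coverage plus human review.

```python
# Minimal sketch of a pre-processing control: mask obvious personal identifiers
# before a document is sent to an external LLM. Patterns are illustrative only.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

claim_note = "Contact John at john.doe@example.com or 555-123-4567 re: SSN 123-45-6789."
print(redact(claim_note))
# Contact John at [EMAIL] or [PHONE] re: SSN [US_SSN].
```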
Model Accuracy
A high level of model accuracy and quality is paramount for insurers to truly benefit from adding GenAI or LLM capabilities. This requires rigorous testing and validation processes to ensure that the language models perform up to, and beyond, human accuracy benchmarks and remain hallucination-free.
In a blog post, Allianz highlighted the need for industry-specific expertise in any language model being used: “We see LLMs working very well for general language tasks, but they are not yet designed for the specific technical language we use in insurance, such as legal analysis of policy wordings or claims handling,” says Meenesh Mistry, Business Model Transformation Executive at Allianz Commercial. “GenAI models need to be further trained and customized for the language, data, and processes of our industry. This is the focus of our use cases.”
In most cases, it’s not merely about adopting the most advanced AI technology; it’s about choosing the best approach for a specific challenge and then fine-tuning these models to align with the specific needs and operational realities of the insurance industry.
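In practice, that kind of testing is often operationalized as a regression check against a human-labeled gold set, rerun whenever the model, prompt or fine-tuning changes. The sketch below assumes a model_extract function and gold annotations of your own; both are placeholders rather than any specific product’s interface.

```python
# Minimal sketch of an accuracy regression check against a human-labeled gold set.
# model_extract and the gold data are placeholders for your own model and annotations.
from typing import Callable

def evaluate(model_extract: Callable[[str], dict], gold_set: list[dict]) -> dict:
    """Compare model output field by field with human-labeled answers."""
    correct, total = 0, 0
    for example in gold_set:
        prediction = model_extract(example["document"])
        for field, expected in example["labels"].items():
            total += 1
            if prediction.get(field) == expected:
                correct += 1
    return {"field_accuracy": correct / total if total else 0.0, "fields_scored": total}

# Gate a release on meeting (or exceeding) the human benchmark for the same task.
HUMAN_BENCHMARK = 0.95  # illustrative threshold, not an industry standard
# report = evaluate(my_model_extract, my_gold_set)
# assert report["field_accuracy"] >= HUMAN_BENCHMARK
```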
Initial and Ongoing Costs
The initial investment in GenAI covers not only the acquisition of technology but also the expense of training staff, ensuring data quality and security, and integrating systems. Ongoing expenses, such as software updates, maintenance and potential scaling, must also be accounted for upfront.
Despite these costs, the long-term benefits of enhanced operational efficiency, improved customer satisfaction, and more accurate risk assessments can outweigh the initial financial outlay if implementation is well planned.
Regulatory Compliance
Insurance companies must navigate a complex web of compliance regulations that govern data privacy, consumer protection and AI usage. The dynamic nature of these regulations, especially in different jurisdictions, requires companies to be agile and proactive in their compliance efforts. Furthermore, ethical use of AI demands transparency in how data is used and how decisions are made, especially in cases that involve sensitive personal information or those that have significant financial implications for customers.
The AI Approach
Selecting the most appropriate approach requires a nuanced understanding of the different AI technologies available and their potential applications within insurance operations.
Companies must weigh several factors simultaneously, such as the complexity of tasks, the volume of data and the level of transparency and interpretability required. The key lies in identifying which AI capabilities align best with the company’s objectives and customer needs, ensuring that the chosen approach delivers the desired outcomes in a practical, cost-efficient and ethical manner.
It’s a challenge for insurers, as for companies in any industry, to build the depth and breadth of expertise in-house needed to make these selections successfully without vendor and consultative support.
Conclusion
While generative AI and LLMs offer significant opportunities for innovation in the insurance sector, companies must carefully consider and address the challenges related to expertise, data governance, model accuracy, costs, compliance and approach selection. By doing so, insurers can harness the technology’s full potential to revolutionize their operations and offer superior value to their customers.