As environmental, social and governance (ESG) issues continue to grow in importance for organizations, the move to regulate reporting on ESG metrics has begun, and responsible AI will play a critical role in more ways than one.
Environmental, social and governance factors aren’t just important for investment ratings. They will become required disclosure for certain types of companies in the European Union starting in 2024.
While the regulations will standardize how companies report, the need to avoid penalties isn’t the only factor motivating companies to be mindful of ESG issues. A Forrester Consulting survey of decision-makers in risk and finance roles found that nearly all respondents (90%) expect ESG data to be very important to their company by 2025. The same could be said for AI, and this is where the two work together: AI creates business value, sharpens competitive advantage and drives innovation by making sense of all the types of data that power a business.
The Role of Responsible AI
It’s no secret that AI comes with its own risks. Algorithmic bias, energy-intensive processes and privacy concerns are just a few of the issues that could be at odds with an ESG strategy. The key to embracing a responsible AI approach is understanding the different approaches available, their capabilities and their impacts, and making the choice that best aligns with your business goals.
At expert.ai, our Green Glass Approach offers a concrete framework to advance responsible AI, while urging industry-wide emphasis be placed on ensuring that deployments are responsible across the pillars covered in the checklist below: algorithm integrity, privacy and legality, energy consumption and an explicit responsible AI policy.
The Value of Responsible AI for ESG
Environmental, social and governance factors are not just financial or numerical in nature. Depending on what is material for your business, they include a wide range of information on your activities, as well as on your suppliers and partners, and can come from a wide range of sources:
- Internal documentation
- Third-party reports
- Open-source web content
- Real-time sources like social media and online forums
We’re talking about data in the form of text and language (unstructured data), and this is where AI technologies like natural language processing (NLP) and natural language understanding (NLU) provide the most value: taking massive amounts of human language data from many sources and capturing what’s most important, so that companies have the knowledge and insight to build a clear picture of risks and opportunities and inform their decision making.
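To make that concrete, here is a minimal sketch (not the expert.ai Platform) of how unstructured text could be routed into ESG buckets using an off-the-shelf zero-shot classifier from the Hugging Face transformers library. The model choice, labels and snippets are assumptions for illustration:

```python
# A minimal sketch: routing unstructured text into ESG buckets with a
# general-purpose zero-shot classifier. The model and labels are
# illustrative assumptions, not the expert.ai Platform's approach.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")  # assumed model choice

labels = ["environmental", "social", "governance"]

snippets = [
    "The supplier reported a 12% reduction in greenhouse gas emissions.",
    "An internal audit flagged gaps in risk management controls.",
]

for text in snippets:
    result = classifier(text, candidate_labels=labels)
    # result["labels"] is sorted by score; take the top label
    print(f"{result['labels'][0]:>13}  ({result['scores'][0]:.2f})  {text}")
```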
Here’s just a sample of the type of information that ESG reporting will take into account:
- Environmental: climate change mitigation (including greenhouse gas emissions), climate change adaptation, resource use and biodiversity.
- Social and human rights: gender equality, working conditions and respect for human rights.
- Governance: information on how the company manages sustainability matters, risk management and internal controls over the sustainability reporting process, business ethics and lobbying activities.
Your Responsible AI Checklist
We use our Green Glass Approach to address the risks that AI poses. Use it as your starting point when evaluating solutions to manage your data, and for any other business challenge where you would consider using AI.
Algorithm Integrity
- Do you know where your data comes from?
- Do you know how your data has been labeled?
- Can you explain how your model reached its results?
- Does your model have safeguards for preventing algorithmic bias?
Decision makers need data they can trust. With AI, the output of algorithms depends on several factors, including the AI approach used for training and the data they are trained on.
Further, a system that performs tasks or reaches results that cannot be explained presents risks for enterprises at many levels, especially trust. While audits may not be required for every process or industry, some industries, such as finance and healthcare, absolutely rely on transparency into how a system behaves and why.
As our CEO Walt Mayo likes to say, “If the technology you’re using is helping you make a better decision, you should understand how it’s helping you.”
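What might such safeguards look like in practice? Here is a hedged sketch, using scikit-learn on invented data, of two checks from the list above: a permutation-importance report that shows which inputs actually drive the model’s results, and a simple selection-rate comparison across a sensitive group as a basic bias probe. The data and threshold are illustrative, not a regulatory standard:

```python
# A hedged sketch of two checklist items: explaining what drives a model's
# results and probing for group-level bias. Data and thresholds are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))            # synthetic features
group = rng.integers(0, 2, size=1000)     # synthetic sensitive attribute
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Explainability: which features actually move the model's output?
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(imp.importances_mean):
    print(f"feature {i}: importance {score:.3f}")

# Bias probe: demographic parity difference (selection-rate gap by group).
pred = model.predict(X)
rates = [pred[group == g].mean() for g in (0, 1)]
gap = abs(rates[0] - rates[1])
print(f"selection rates by group: {rates[0]:.2f} vs {rates[1]:.2f} (gap {gap:.2f})")
if gap > 0.1:  # illustrative threshold, not a regulatory standard
    print("warning: selection-rate gap exceeds the chosen threshold")
```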
Privacy and Legality
- Does your data include any personally identifiable information (PII)?
- Does your training data include any material that could be under copyright?
- Do you have practices in place to remove PII or copyrighted material from your data sets?
- Do you have humans in the loop during the training process and beyond?
The European data protection laws cited in the temporary ban of ChatGPT in Italy and in investigations in other EU countries highlight the risk of personal data (names, email addresses, physical addresses and more) being part of an application’s training data. This runs up against regulations like the EU’s General Data Protection Regulation (GDPR), under which individuals have certain rights as data subjects, including “the right to be informed about how their data is collected and used, and [the right] to have their data removed from systems.”
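A basic practice that follows from this is scrubbing obvious PII before text ever enters a training set. The sketch below uses deliberately simplified regular expressions for emails and phone numbers; a production pipeline would layer on NER-based detection and human review:

```python
# A simplified sketch of PII scrubbing before text enters a training set.
# The regex patterns are illustrative assumptions; production systems would
# combine them with NER-based detection and human review.
import re

PII_PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"(?<!\w)(?:\+?\d{1,3}[ .-]?)?\d{3}[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII spans with placeholders."""
    for placeholder, pattern in PII_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

sample = "Contact Jane at jane.doe@example.com or +1 555-123-4567."
print(redact(sample))
# -> "Contact Jane at [EMAIL] or [PHONE]."
# Note that the name "Jane" survives: regex alone is not sufficient.
```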
Earlier this year, Gartner highlighted that OpenAI’s existing ChatGPT privacy policy falls short of enterprise-grade requirements. It recommends keeping humans in the loop to verify output and putting policies in place that limit how the tool is used, to avoid disclosing confidential information.
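A human-in-the-loop safeguard can start as something as simple as a confidence gate: outputs the model is unsure about are parked in a review queue instead of flowing straight through. A minimal sketch, with the threshold and data structures invented for illustration:

```python
# A minimal human-in-the-loop sketch: low-confidence outputs are routed to
# a review queue rather than used directly. The threshold and structures
# are illustrative assumptions.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; tune per use case

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def submit(self, text: str, label: str, score: float) -> None:
        self.items.append({"text": text, "label": label, "score": score})

def route(text: str, label: str, score: float, queue: ReviewQueue) -> str | None:
    """Return the label if confident enough, else park it for human review."""
    if score >= CONFIDENCE_THRESHOLD:
        return label
    queue.submit(text, label, score)
    return None

queue = ReviewQueue()
print(route("Emissions fell 12% year over year.", "environmental", 0.97, queue))
print(route("Board approved the new charter.", "governance", 0.61, queue))
print(f"{len(queue.items)} item(s) awaiting human review")
```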
Energy Consumption
- Does your solution depend on large volumes of data?
- Which method(s) will you be using to train the model?
- How much energy will your solution require for training and operation?
The energy consumption and carbon footprint tied to training and maintaining AI models have become a primary concern for organizations, especially under an ESG lens. Large language models based on massive data sets with billions of parameters are notoriously expensive to operate, but so is any compute-intensive model that depends on large volumes of data to train and retrain.
As we’ve mentioned, not all AI approaches are equal, and this includes how they consume energy. Researchers at Accenture Labs found that models that use smaller data volumes can still achieve sufficient accuracy: “Training an AI model on 70% of the full dataset reduced its accuracy by less than 1%, but cut energy consumption by a staggering 47%.”
Tests at our own expert.ai R&D Labs have demonstrated that high levels of accuracy can be obtained using alternative methods while cutting energy consumption and the resulting emissions by a factor of 100 in the training phase and by about 25% in the prediction phase. The expert.ai Platform enables you to leverage these less energy-intensive techniques.
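The data-volume trade-off is easy to probe on your own workloads. The sketch below compares accuracy and wall-clock training time (a crude proxy for energy; tools such as CodeCarbon can measure emissions directly) when training on the full dataset versus a 70% slice, echoing the Accenture experiment. The dataset and model are invented for illustration, and your numbers will vary:

```python
# A rough sketch of the data-volume trade-off: accuracy vs. training time
# (a crude proxy for energy use) on the full training set versus a 70% slice.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for fraction in (1.0, 0.7):
    n = int(len(X_train) * fraction)
    start = time.perf_counter()
    model = GradientBoostingClassifier(random_state=0).fit(X_train[:n], y_train[:n])
    elapsed = time.perf_counter() - start
    acc = model.score(X_test, y_test)
    print(f"{fraction:.0%} of data: accuracy {acc:.3f}, training time {elapsed:.1f}s")
```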
Responsible AI Policy
A final question to consider: Does your AI approach align with your company values?
Stakeholders, shareholders and the wider public are all placing greater scrutiny on company activities, including the choice of data and AI algorithms in place. Organizations must be prepared to answer questions about these choices, or risk claims of greenwashing and, worse, a loss of trust.
Having a responsible technology policy that ties these choices to your company values and existing ESG commitments can guide your decisions for the future and maintain integrity within both the organization and the regulatory landscape.