  • Artificial Intelligence (AI) is rapidly changing risk management and compliance.
  • However, AI can create new types of risks for businesses, such as amplifying bias or leading to opaque decisions.
  • Integrated audit software solutions are needed to manage existing and potential risks.

Artificial Intelligence (AI) has become an imperative for companies across industries. Despite the hype, AI is creating business value and, as a result, is rapidly being adopted around the world. Last year, the McKinsey Global Survey reported “a nearly 25 percent year-over-year increase in the use of AI in standard business processes”. The transformative power of AI is already affecting a range of functions, including customer service, brand management, operations, people and culture, and more recently, risk management and compliance.

This latter development should not surprise anyone. At its core, risk management refers to a company’s ability to identify, monitor and mitigate potential risks, while compliance processes are meant to ensure that it operates within legal, internal and ethical boundaries. These are information-intensive activities – they require collecting, recording and especially processing a significant amount of data and as such are particularly suited for deep learning, the dominant paradigm in AI.

Indeed, this statistical technique for classifying patterns – using neural networks with multiple layers – can be effectively leveraged to improve analytical capabilities in risk management and compliance.
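
To make this concrete, here is a minimal sketch of our own – entirely synthetic data and hypothetical feature names, not drawn from any particular vendor's product – showing a small multi-layer neural network classifying transaction patterns as high or low risk, the same pattern-classification principle that production risk and compliance models apply at far greater scale.

```python
# Toy illustration only: a multi-layer neural network classifying synthetic
# transaction patterns as "high risk" (1) or "low risk" (0).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical features: transaction amount, counterparty risk score,
# number of prior alerts for the customer (all standardised).
X = rng.normal(size=(5_000, 3))
# Hypothetical ground truth: risk increases with all three features.
y = (X @ np.array([0.8, 1.2, 1.5]) + rng.normal(scale=0.5, size=5_000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Neural network with multiple layers": two hidden layers of 32 units each.
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```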

AI systems create new types of risks

However, early experience shows that AI can create new types of risks for businesses. In hiring and credit, AI may amplify historical bias against women and applicants from minority backgrounds, while in healthcare it may lead to opaque decisions because of its black-box problem, to name just a few examples. These risks are compounded by the inherent complexity of deep learning models, which may contain hundreds of millions of parameters. This complexity encourages companies to procure solutions from third-party vendors whose inner workings they know little about.


Consequently, executives face a fundamental challenge: how to maximise the benefits of AI for various business functions without creating intractable risk and compliance issues?

Previously, we called for the introduction of risk/benefit assessment frameworks to identify and mitigate risks in AI systems. Yet such frameworks are highly contextual and require deep interdisciplinary expertise and multistakeholder collaboration. Not every organisation can afford that talent or has the required processes in place. Further, it is reasonable to assume that a given company has deployed different AI solutions for various use cases, each requiring a distinct framework. Designing and keeping track of these frameworks could quickly become an impossible task even for the most experienced risk managers. In this situation, an intuitive response would be to proceed with caution and limit the use of AI to low-risk applications in order to avoid potential regulatory violations. But this can only be a temporary fix: in the long run, it would be a self-defeating strategy, considering the immense potential of AI for business growth.

So, what is a sensible alternative?

The need for Enterprise Audit Software for AI systems

We argue that maximising the benefits of AI solutions for businesses while mitigating their adverse risks could be partially achieved by using appropriate audit software. There is already a plethora of audit software for ensuring that companies’ processes meet legal and industry standards, in sectors from finance to healthcare.

What’s needed now is an integrated audit solution which includes the management of risks related to AI. Such a solution should have three core functions:

1. Documenting the behaviour of all AI solutions used by a company. This means monitoring AI solutions and analysing their feature distributions to investigate statistical dependencies. Consider the case of an AI solution for hiring: one should have clear insight into which features (e.g. university attended, years of experience, gender) have the most impact on its recommendations – an analysis sketched in the example after this list.

2. Assessing compliance with a set of defined requirements. Once one understands the outcome of a model (i.e. why a hiring model is making a particular recommendation), it’s important to assess compliance with specifications that could range from legislation (such as EU non-discrimination law) to internal organisational guidelines.

3. Enabling cross-department collaboration. The audit software should ease multistakeholder collaboration – especially between risk managers and the data scientists who oversee AI solutions – by providing each group with the appropriate information. For instance, risk managers need non-technical explanations of which requirements are or are not met, while data science teams may be more interested in the performance characteristics of the model. When a non-compliance issue is identified, the audit software should recommend appropriate interventions to the technical teams.
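
To illustrate what these three functions could look like in practice, here is a minimal sketch of our own – the dataset, the 0.8 demographic-parity threshold and all variable names are hypothetical assumptions, not a description of any existing audit product. It documents which features drive a hiring model’s recommendations, checks a simple fairness requirement, and prints one view for risk managers and another for the data science team.

```python
# Minimal sketch of the three audit functions for a hypothetical hiring model.
# All data, thresholds and names are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 4_000

# Hypothetical applicant data; historical hiring labels carry a gender bias.
data = pd.DataFrame({
    "years_experience": rng.integers(0, 20, n),
    "university_tier": rng.integers(1, 5, n),
    "gender": rng.integers(0, 2, n),
})
score = (0.15 * data["years_experience"] + 0.3 * data["university_tier"]
         + 0.6 * data["gender"])                      # biased historical signal
data["hired"] = (score + rng.normal(scale=0.8, size=n) > 2.5).astype(int)

features = ["years_experience", "university_tier", "gender"]
model = RandomForestClassifier(random_state=0).fit(data[features], data["hired"])
data["recommended"] = model.predict(data[features])

# 1. Document behaviour: which features drive the model's recommendations?
imp = permutation_importance(model, data[features], data["recommended"],
                             n_repeats=10, random_state=0)
feature_impact = dict(zip(features, imp.importances_mean.round(3)))

# 2. Assess compliance: demographic-parity ratio vs an illustrative 0.8 threshold.
rates = data.groupby("gender")["recommended"].mean()
parity_ratio = rates.min() / rates.max()
compliant = parity_ratio >= 0.8

# 3. Enable collaboration: a tailored view for each audience.
print("--- Risk manager view ---")
print(f"Demographic-parity requirement met: {'yes' if compliant else 'NO'} "
      f"(selection-rate ratio {parity_ratio:.2f}, required >= 0.80)")

print("--- Data science view ---")
print(f"Feature impact on recommendations: {feature_impact}")
if not compliant:
    print("Suggested intervention: review training labels and constrain or "
          "remove the 'gender' feature before retraining.")
```

In a production audit tool the requirement checks and recommended interventions would of course be far richer, but the division of labour is the same: document behaviour, assess it against defined requirements, and report to each audience in its own terms.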

Developing such audit software for AI systems would go a long way towards addressing the risks associated with AI. Yet responsible AI cannot be fully automated. There is no universal list of requirements that one must meet to mitigate all existing and potential risks, because the context and industry domain often determine what is needed. Consequently, risk managers and their ability to exercise judgment will remain essential. The rise of AI will only enable them to focus on what they do best: engaging with colleagues across departments to design and execute a sound risk-management policy.

This article was first published here.

Photo by Donald Giannatti on Unsplash.
