  • Artificial Intelligence (AI) is rapidly changing risk management and compliance.
  • However, AI can create new types of risks for businesses, such as amplifying bias or leading to opaque decisions.
  • Integrated audit software solutions are needed to manage existing and potential risks.

Artificial Intelligence (AI) has become an imperative for companies across industries. Despite the hype, AI is creating business value and, as a result, is rapidly being adopted around the world. Last year, the McKinsey Global Survey reported “a nearly 25 percent year-over-year increase in the use of AI in standard business processes”. The transformative power of AI is already affecting a range of functions, including customer service, brand management, operations, people and culture, and more recently, risk management and compliance.

This latter development should not surprise anyone. At its core, risk management refers to a company’s ability to identify, monitor and mitigate potential risks, while compliance processes are meant to ensure that it operates within legal, internal and ethical boundaries. These are information-intensive activities – they require collecting, recording and, above all, processing significant amounts of data – and as such are particularly well suited to deep learning, the dominant paradigm in AI.

Indeed, this statistical technique for classifying patterns – using neural networks with multiple layers – can be effectively leveraged for improving analytical capabilities in risk management and compliance.

AI systems create new types of risks

However, early experience shows that AI can create new types of risks for businesses. In hiring and credit, AI may amplify historical bias against women and applicants from minority backgrounds, while in healthcare it may lead to opaque decisions because of the black-box problem, to name just a few examples. These risks are amplified by the inherent complexity of deep learning models, which may contain hundreds of millions of parameters. This complexity also pushes companies to procure solutions from third-party vendors whose inner workings they know little about.


Consequently, executives face a fundamental challenge: how can they maximise the benefits of AI across business functions without creating intractable risk and compliance issues?

Previously, we called for the introduction of risk/benefit assessment frameworks to identify and mitigate risks in AI systems. Yet such frameworks are highly contextual and require deep interdisciplinary expertise and multistakeholder collaboration. Not every organisation can afford such talent or has the required processes in place. Further, it is perfectly reasonable to assume that a given company has deployed different AI solutions for various use cases, each requiring a distinct framework. Designing and keeping track of these frameworks could quickly become an impossible task even for the most experienced risk managers. In this situation, an intuitive response would be to proceed with caution and limit the use of AI to low-risk applications in order to avoid potential regulatory violations. But this can only be a temporary fix: in the long run, it would be a self-defeating strategy considering the immense potential of AI for business growth.

So, what is a sensible alternative?

The need for Enterprise Audit Software for AI systems

We argue that maximising the benefits of AI solutions for businesses while mitigating their adverse risks could be partially achieved by using appropriate audit software. There is already a plethora of audit software for ensuring that companies’ processes meet legal and industry standards, in sectors from finance to healthcare.

What’s needed now is an integrated audit solution which includes the management of risks related to AI. Such a solution should have three core functions:

1. Documenting the behaviour of all AI solutions used by a company. This implies monitoring AI solutions and analysing their feature distributions to investigate statistical dependencies. Consider the case of an AI solution for hiring: one should have clear insight into which features (e.g. attended university, years of experience, gender, etc.) have the most impact on its recommendations (see the sketch after this list).

2. Assessing compliance with a set of defined requirements. Once one understands a model’s outcomes (i.e. why a hiring model makes a particular recommendation), it is important to assess compliance with specifications that can range from legislation (such as EU non-discrimination law) to organisational guidelines.

3. Enabling cross-department collaboration. The audit software should ease multistakeholder collaboration – especially between risk managers and the data scientists who oversee AI solutions – by providing each group with the appropriate information. For instance, risk managers need non-technical explanations of which requirements are or are not met, while data science teams may be more interested in a model’s performance characteristics. When a non-compliance issue is identified, the audit software should recommend appropriate interventions to the technical teams.

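To make the first two functions concrete, here is a minimal Python sketch, assuming scikit-learn and pandas are available and using an entirely hypothetical hiring dataset, model and threshold. It documents which features drive a model’s recommendations via permutation importance, then runs a simple demographic-parity check against an assumed organisational guideline; it illustrates the idea rather than describing any particular audit product.

```python
# Minimal sketch of audit functions 1 and 2. All column names, the model
# and the 10% threshold are hypothetical illustrations, not real policy.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Hypothetical hiring data: two job-related features plus a protected attribute.
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "attended_university": rng.integers(0, 2, 500),
    "years_experience": rng.integers(0, 20, 500),
    "gender": rng.integers(0, 2, 500),           # protected attribute
})
data["hired"] = rng.integers(0, 2, 500)           # placeholder labels

X = data[["attended_university", "years_experience", "gender"]]
y = data["hired"]
model = LogisticRegression().fit(X, y)            # stand-in for the deployed model

# 1. Document behaviour: which features have the most impact on recommendations?
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(X.columns, imp.importances_mean):
    print(f"{name}: impact {score:.3f}")

# 2. Assess compliance: keep the demographic-parity gap below an assumed guideline.
preds = model.predict(X)
selection_rates = pd.Series(preds).groupby(data["gender"]).mean()
gap = abs(selection_rates.max() - selection_rates.min())
MAX_GAP = 0.10                                    # assumed organisational requirement
print("compliant" if gap <= MAX_GAP else f"flag for review (gap={gap:.2f})")
```

In a real deployment, the model and data would come from the vendor’s system, and the requirements would map to the relevant legislation and internal guidelines rather than a single numeric threshold.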
Developing such audit software for AI systems would go a long way in addressing the risks associated with AI. Yet, responsible AI cannot be fully automated. There is no universal list of requirements that one must meet to mitigate all existing and potential risks, because the context and industry domain will often determine what items are needed. As a consequence, risk managers and their ability to exercise judgment will remain essential. The rise of AI will only enable them to focus on what they do best: engage with other colleagues across departments to design and execute a sound risk-management policy.

This article was first published here.

Photo by Donald Giannatti on Unsplash.
