
Boards are responsible for how generative AI is used at the companies they oversee. Asking company leaders the right questions will help unlock the technology’s value while managing its risk.


Company executives are scrambling to understand and respond to generative AI. This technology is still nascent, but of those who have used it, few doubt its power to disrupt operating models in all industries.

We recently provided a view of how CEOs might start preparing for what lies ahead.1 But what is the role of the board? Many board members tell us they aren’t sure how to support their CEOs as they grapple with the changes that generative AI has unleashed, not least because the technology seems to be developing and getting adopted at lightning speed.

The early use cases are awe inspiring. A software developer can use generative AI to create entire lines of code. Law firms can answer complex questions from reams of documentation. Scientists can create novel protein sequences to accelerate drug discovery. But the technology still poses real risks, leaving companies caught between fear of getting left behind—which implies a need to rapidly integrate generative AI into their businesses—and an equal fear of getting things wrong. The question becomes how to unlock the value of generative AI while also managing its risks.

Board members can help their management teams move forward by asking the right questions. In this article, we provide four questions boards should consider asking company leaders, as well as a question for members to ask themselves.

Questions for Management

Generative AI models—deep learning models trained on extremely large sets of unstructured data—have the potential to increase efficiency and productivity, reduce costs, and generate new growth. The power of these “foundation” models lies in the fact that, unlike previous deep learning models, they can perform not just one function but several, such as classifying, editing, summarizing, answering questions, and drafting new content. This enables companies to use them to launch multiple applications with relative ease, even if users lack deep AI and data science know-how.
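
To make the "one model, many functions" point concrete, here is a minimal, illustrative sketch in Python. The `complete` helper is hypothetical; it stands in for whatever text-generation interface a company chooses and is not any vendor’s actual API.

```python
# Minimal sketch: one foundation model reused for several tasks by changing the instruction.
# `complete` is a hypothetical placeholder for the company's chosen model interface.

def complete(prompt: str) -> str:
    """Send a prompt to the chosen text-generation model and return its reply (stubbed here)."""
    raise NotImplementedError("Wire this to your model provider.")

def classify_ticket(ticket: str) -> str:
    # Same model, different instruction: classification.
    return complete(f"Classify this support ticket as billing, technical, or other:\n{ticket}")

def summarize_contract(contract: str) -> str:
    # Same model, different instruction: summarization.
    return complete(f"Summarize the key obligations in this contract in three bullet points:\n{contract}")

def draft_reply(message: str) -> str:
    # Same model, different instruction: drafting new content.
    return complete(f"Draft a polite, accurate reply to this customer message:\n{message}")
```

The pattern, rather than the code, is the point: a single underlying model can back many applications across the business.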

Board members can equip their C-suite to harness this potential power thoughtfully but decisively by asking the following four broad questions.

How Will Generative AI Affect Our Industry and Company in the Short and Longer Term?

Forming any sensible generative AI strategy will require an understanding of how the technology might affect an industry and the businesses within it in the short and longer term. Our research suggests that the first wave of applications will be in software engineering, marketing and sales, customer service, and product development.2 As a result, the early impact of generative AI will probably be in the industries that rely particularly heavily on these functions—for example, media and entertainment, banking, consumer goods, telecommunications, life sciences, and technology.

Even so, companies in other industries should not delay in assessing the potential value at stake for their business. The technology and its adoption are moving too fast. Recall that the public-facing version of ChatGPT reached 100 million users in just two months, making it the fastest-growing app ever. And our research finds that generative AI can increase worker productivity across industries, adding up to $7.9 trillion in value globally from adoption of specific use cases and the myriad ways workers can use the technology in everyday activities.3 Each company will want to explore immediate opportunities to improve efficiency and effectiveness. Those that don’t may quickly find themselves trailing behind competitors that answer customer queries faster and more accurately or launch new digital products more rapidly because generative AI is helping write the code. They risk falling behind on the learning curve, too.

Simultaneously, companies will want to begin looking further out. No one can predict the full implications of generative AI, but considering them is important. How might the competitive environment change? How might the business benefit, and where does it look vulnerable? And are there ways to future-proof the strategy and business model?

Are We Balancing Value Creation with Adequate Risk Management?

An assessment of the new frontiers opened by generative AI will rightly make management teams eager to begin innovating and capturing its value. But that eagerness will need to be accompanied by caution, as generative AI, if not well managed, has the potential to destroy value and reputations. It poses the same risks as traditional AI, and more.

Like traditional AI, generative AI raises privacy concerns and ethical risks, such as the potential to perpetuate bias hidden in training data. And it heightens the risk of a security breach by opening up more areas of attack and new forms of attack. For example, deepfakes simplify the impersonation of company leaders, raising reputational risks. There are also new risks, such as infringing on copyrighted, trademarked, patented, or otherwise legally protected materials by using data collected by a generative AI model.

Generative AI also has a propensity to hallucinate—that is, generate inaccurate information, expressing it in a manner that appears so natural and authoritative that the inaccuracies are difficult to detect. This could prove dangerous not just for companies but for society at large. There is widespread concern that generative AI could stoke misinformation, and some industry experts have said it could be as dangerous to society as pandemics or nuclear war if not properly regulated.4

Companies will therefore need to understand the value and the risks of each use case and determine how these align with the company’s risk tolerance and other objectives. For example, with regard to sustainability objectives, they might consider generative AI’s implications for the environment because it requires substantial computing capacity.

From there, boards need to be satisfied that the company has established legal and regulatory frameworks for the knowable generative AI risks assumed across the company and that AI activities within the company are continually reviewed, measured, and audited. They will also want to ensure mechanisms are in place to continually explore and assess risks and ethical concerns that are not yet well understood or even apparent. How, for example, will companies stand up processes to spot hallucination and mitigate the risk of wrong information eliciting incorrect or even harmful action? How will the technology affect employment? And what of the risks posed by third parties using the technology? A clear-eyed early view on where problems might lie is the key to addressing them.

The bottom line is that AI must always be subject to the effective oversight of those designing and using it. Support for the effort can come from government regulatory frameworks and guidance being developed on how to use and apply generative AI. It will be important for companies to keep abreast of these.

How Should We Organize for Generative AI?

Many companies took an experimental approach to implementing previous generations of AI technology, with those keenest to explore its possibilities launching pilots in pockets of the organization. But given the speed of developments in generative AI and the risks it raises, companies will need a more coordinated approach. Getting stuck in pilot mode really isn’t an option. Indeed, the CEO of one multinational went as far as to ask each of his 50 business leaders to fully implement two use cases without delay, such was his conviction that generative AI would rapidly confer competitive advantage.

Company leaders should consider appointing a single senior executive to take responsibility for the oversight and control of all generative AI activities. A smart second step is to establish a cross-functional group of senior people representing data science, engineering, legal, cybersecurity, marketing, design, and other business functions. Such a team can collaborate to formulate and implement a strategy quickly and widely.

Bear in mind too that a foundation model can underpin multiple use cases across an organization, so board members will want to ask the appointed generative AI leader to ensure that the organization takes a coordinated approach. This will promote the prioritization of use cases that deliver fast, high-impact results. More complex use cases can be developed thereafter. Importantly, a coordinated approach will also help ensure a full view of any risks assumed.

The board will also want to check that there’s a strategy for establishing what is likely to be a wide range of partnerships and alliances—with providers that customize models for a specific sector, for example, or with infrastructure providers that offer capabilities such as scalable cloud computing. The right partnerships with the right experts will help companies move quickly to create value from generative AI, though they will want to take care to prevent vendor lock-in and oversee possible third-party risks.

Do We Have the Necessary Capabilities?

To keep pace with generative AI, companies may need to review their organizational capabilities on three fronts.

Technology

The first front is technology. A modern data and tech stack will be the key to success in using generative AI. While foundation models can support a wide range of use cases, many of the most impactful models will be those fed with additional, often proprietary, data. Therefore, companies that have not yet found ways to harmonize and provide ready access to their data will be unable to unlock much of generative AI’s potentially transformative power. Equally important is the ability to design a scalable data architecture that includes data governance and security procedures. Depending on the use case, the existing computing and tooling infrastructure might also need upgrading. Is the management team clear about the computing resources, data systems, tools, and models required? And does it have a strategy for acquiring them?
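
As a rough illustration of why harmonized, accessible proprietary data matters, the sketch below pairs a foundation model with a simple retrieval step over internal documents. It assumes nothing about the company’s actual stack: `search_internal_docs` and `complete` are hypothetical placeholders, not references to any particular product.

```python
# Illustrative sketch: feeding a foundation model with proprietary context via retrieval.
# Both helpers below are hypothetical placeholders for the company's own systems.

def search_internal_docs(query: str, top_k: int = 3) -> list[str]:
    """Return the most relevant snippets from the company's governed data store (stubbed here)."""
    raise NotImplementedError("Back this with your document index or vector store.")

def complete(prompt: str) -> str:
    """Send a prompt to the chosen text-generation model and return its reply (stubbed here)."""
    raise NotImplementedError("Wire this to your model provider.")

def answer_with_company_context(question: str) -> str:
    # Retrieve proprietary context first, then ask the model to answer using only that context.
    snippets = search_internal_docs(question)
    context = "\n\n".join(snippets)
    prompt = (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return complete(prompt)
```

The design choice in this sketch is simply to keep proprietary data in the company’s own governed store and feed the model only what each query needs, which is one reason data governance and security procedures belong in the architecture from the start.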

Talent

The introduction of generative AI, like any change, also requires a reassessment of the organization’s talent. Companies are aware they need to reskill the workforce to compete in a world where data and AI play such a big role, though many are struggling to attract and retain the people they need. With generative AI, the challenge just got harder. Some roles will disappear, others will be radically different, and some will be new. Such changes will likely affect more people in more domains and faster than has been the case with AI to date.

The precise new skills required will vary by use case. For example, if the use case is relatively straightforward and can be supported by an off-the-shelf foundation model, a generalist may be able to lead the effort with the help of a data and software engineer. But with highly specialized data—as might be the case for drug development—the company may need to build a generative AI model from scratch, which in turn may require hiring PhD-level experts in machine learning.

The board will therefore want to ask leadership whether it has a dynamic understanding of its AI hiring needs and a plan for fulfilling them. The existing workforce will also need training to integrate generative AI into day-to-day work and to equip some workers to take on new roles. But tech skills are not the only consideration, as generative AI arguably puts a premium on more advanced analytical and creative skills to supplement the technology’s capabilities. The talent model may therefore need to change—but with consideration of a caution raised recently at the World Economic Forum: using AI as a substitute for the work of junior-level talent could endanger the development of the next generation of creators, leaders, and managers.5

Organizational Culture

Finally, a company’s culture shapes how well it will succeed with generative AI. Companies that struggle with innovation and change will likely struggle to keep pace. It’s a big question, but does the company have the learning culture that will be a key to success? And does the company have a shared sense of responsibility and accountability? Without this shared sense, it is more likely to run afoul of the ethical risks associated with the technology.

Both questions involve cultural issues that boards should consider prompting their management teams to examine. Depending on what they find, reformulating a company’s culture could prove to be an urgent task.

A Question for the Board

As boards try to support their CEOs in creating value from generative AI and managing its risks, they will also want to direct a preliminary, fundamental question to themselves: Are we equipped to provide that support?

Unless board members understand generative AI and its implications, they will be unable to judge the likely impact on the organization and its stakeholders of a company’s generative AI strategy and the related decisions regarding investments, risk, talent, technology, and more. Yet in our conversations, many board members admit they lack this understanding. When that is the case, boards can consider three ways to improve matters.

The first option is to review the board’s composition and adjust it as necessary to ensure sufficient technological expertise is available. In the past, when companies have struggled to find technology experts with the broader business expertise required of a board member, some have obtained additional support by setting up technology advisory boards that include generative AI experts. However, generative AI will likely have an impact on every aspect of a company’s operations—risk, remuneration, talent, cybersecurity, finance, and strategy, for example. Arguably, therefore, AI expertise needs to be widespread so that the full board and all its committees can properly consider its implications.

Second, the board can improve its members’ understanding of generative AI. Training sessions run by the company’s own experts and by external experts on the front line of developments can give board members an understanding of how generative AI works, how it might be applied in the business, the potential value at stake, the risks, and the evolution of the technology.

Third, the board can incorporate generative AI into its own work processes. Hands-on experience in the boardroom can build familiarity with the technology and appreciation of its value and risks. Moreover, because generative AI can improve decision making, it would be remiss of boards not to explore its potential to help them perform their duties to the best of their ability. For example, they might use it to surface additional critical questions on strategic issues or to deliver an additional point of view to consider when making a decision.

Generative AI is developing fast, and companies will have to balance pace and innovation with caution. The board’s role is to constructively challenge the management team to ensure this happens, keeping the organization at the forefront of this latest technological development yet intensely mindful of the risks. The questions posed here are not, of course, exhaustive, and more will arise as the technology progresses. But they are a good place to start. Ultimately, board members hold responsibility for how generative AI is used in the companies they oversee, and the answers they receive should help them meet that responsibility wisely.

The article was first published here.

