It’s time to move beyond quick efficiency gains to a cohesive AI strategy that is actionable and provides options in a fast-changing space.
- Generative artificial intelligence brings huge potential — but many are stymied by significant uncertainty and organizational constraints.
- Prioritize a small number of cross-cutting initiatives to bridge gaps and move up the AI maturity curve.
- Within each initiative, determine where to act now versus decide later — while identifying the criteria and thresholds used to trigger future activities.
Generative artificial intelligence (GenAI) poses a dilemma. On the one hand, its transformative potential and rapid acceleration are creating an imperative for business leaders to act — and move quickly. On the other hand, significant uncertainty and organizational constraints are slowing uptake and dissuading many from launching major initiatives.
While companies are investing in AI — 43% of CEOs have already begun, with another 45% planning to do so in the next year — many are pursuing quick efficiency gains rather than more fundamental changes to maximize AI’s growth potential. Ninety percent¹ of organizations are still in the earliest stages of AI maturity — running proofs-of-concept or developing capabilities in pockets. In this environment, how do you ensure your actions today are aligned with building an AI-ready enterprise for the future? How do you chart a course amid so much uncertainty?
EY teams have developed a process for creating an actionable, focused and adaptive strategy tailored to this environment of uncertainties and constraints. This approach identifies the most impactful strategic initiatives, distinguishes near-term priorities from longer-term issues, and provides optionality in a fast-changing space.
In this article, we refer to both artificial intelligence (AI) and generative AI (GenAI). This is deliberate and context driven.
AI is a broad term for a set of technologies that develop or simulate intelligence in machines, including by performing tasks that traditionally required human intelligence. GenAI is a subset of AI, referring to a specific category of models capable of generating new and original content, including text, images, video and music.
In recent months, the remarkable capabilities of ChatGPT and other GenAI models have captured the public imagination, creating an imperative to act and accentuating organizational challenges. We therefore use the term “GenAI” in the context of these near-term implications.
But AI is about more than the recent wave of GenAI models. It has been evolving for decades, and the future will bring more tech breakthroughs. Recognizing this, we use the term “AI” in the context of companies’ longer-term strategy, business models and organizational change.
Set Goals and Identify Challenges
Start by setting overarching goals, aligned to your organizational values and purpose. We believe an AI strategy should be guided, at minimum, by certain core objectives. AI’s unprecedented ability to enter the most human of domains — intelligence and creativity — makes augmenting human capabilities a key strategic focus. Growing concerns about the risks raised by AI mean that building confidence in your AI systems needs to be a fundamental principle. Finally, to drive exponential value, your strategy cannot be piecemeal or siloed — it needs an end-to-end approach.
To achieve these goals, you need to identify and address your biggest gaps. Think of this in two ways. First, what is the gap between your current state and your desired future state? To measure this, you need a maturity model, such as the EY.ai Maturity Model, to benchmark your current AI implementation relative to a mature, enterprise-wide deployment of AI.
Second, focus on the gaps — the uncertainties and organizational constraints — that are limiting your ability to quickly move up the maturity curve. Companies across sectors typically face multiple uncertainties and constraints. These include being inundated by large numbers of unprioritized use cases while lacking an overall vision for business transformation and value creation; uncertainty about AI regulation and the risks raised by new use cases; and talent and information technology (IT) infrastructure gaps.
Companies Face Critical Challenges in Developing and Implementing AI
Common challenges, risks and uncertainties in adopting generative AI
- Unclear how and when GenAI will shift business models and competitive dynamics
- Unclear how to get from a use case agenda to a value agenda
- Uncertain which use cases to prioritize and how they align with overall strategy
- Unclear how to measure the financial and non-financial value created by AI investments
- Constraints on establishing AI partnerships, including contractual, logistical and commercial complexity
- Unclear which data and parts of the tech stack will be commoditized vs. more valuable in the future
- Legacy issues around data availability, quality, bias, and fitness for consumption by AI
- Lack of experience and capabilities needed to make institutional knowledge and operating procedures ready for consumption by large language models (LLMs)
- Increased GDPR risk around data privacy, security and ethics
- Uncertainty over how AI regulations will evolve across jurisdictions, and what new compliance requirements they will bring
- Lack of experience building governance for probabilistic models such as LLMs, where a given input can generate a variety of different outputs instead of a consistent one
- Unclear how to manage cybersecurity risks — both from malicious agents using GenAI and from the complexity of working with multiple external GenAI partners
- Insufficient access to AI talent and capabilities, especially the skills to augment and customize third-party LLMs with firms’ proprietary data
- Unclear how and when GenAI will reshape the nature of work and skills required across the organization
- Lack of familiarity and buy-in from employees on GenAI, relative to leadership
- Wariness of B2C customers about integrating AI into their lives, limiting household uptake
- Skepticism of B2B customers who struggle to identify credible companies amid a sudden explosion in AI providers
- Risk of customer backlash and brand damage from poor customer-facing implementations of AI
- Uncertainty over how to shift from automating individual processes to reimagining the Finance, Tax and Legal functions
- Uncertainty around issues of intellectual property and data privacy in third-party LLMs, and the implications for companies using those models
- Limited auditability of AI models poses a barrier to future compliance and reporting
- Rigid supply chains that resist being disassembled and reassembled, limiting the ability to integrate AI
- Complex supplier networks whose participants differ in technological sophistication, making it difficult to create data flows and end-to-end integration
- Issues of AI interpretability, leading to questions around supply chain control and decision-making
Launch Strategic AI Initiatives
A chasm separates these goals and challenges. Bridging it requires prioritizing a small number of strategic initiatives that are both cross-cutting and aligned. This means addressing multiple uncertainties or constraints simultaneously while working together to achieve the core objectives listed above, further your company’s purpose and accomplish a shared vision.
Based on these criteria, as well as a series of interviews and workshops with EY AI and strategy specialists, we have identified five strategic initiatives addressing the gaps commonly faced by companies across sectors. Within each initiative, leaders should decide where to act now and what to decide later — while identifying the specific criteria and thresholds that will trigger those future activities.
Initiative 1: Establish an “AI Control Tower”
To reduce risk and align resources, direction must come from the top.
To develop a strategic vision and ensure alignment with it, your AI strategy needs a control tower. Unlike the “centers of excellence” that many companies are creating to centralize technical capabilities for use case execution, the control tower is the business unit charged with defining your organization’s strategy and ensuring that your resources and the other four initiatives are aligned to this vision. It needs to be led either by someone in the C-suite or by someone with a direct line to it. It should be empowered to allocate capital and command sufficient resources to work across business functions.
The benefits of this approach are exemplified by an Australian water utility that EY teams have worked with. The utility was concerned that its uncoordinated use of AI in business processes scattered across the organization was creating significant risk. It assessed its AI maturity and developed a clear roadmap for achieving its strategic ambition. A key component of the new strategy was establishing a control tower AI office, which in turn enabled systematic prioritization of use cases, the establishment of company-wide best practices and governance, and the upskilling of talent and tech capabilities. The result was not just reduced risk, but more value capture from its AI investments.
Where to act now
Appoint a leader with strong experience leading digital transformation. Empower them to build a team with the right size, seniority, budget and skills to coordinate across your organization. Establish relationships with the board and key committees around AI risk and governance. Begin identifying the metrics you’ll later use to measure progress and return on investment.
What to decide later
- Decide which use cases, business models and alliances to wind down, consolidate, or scale up. Do this on an ongoing basis, using the metrics established earlier, and in coordination with the initiatives responsible for business models and functions and ecosystem alliances.
- Determine how the control tower should evolve over time. Decide, for example, whether to become a dedicated function to maintain strong central governance or to transition to a federated model with authority delegated across functions to increase flexibility and speed of innovation.
Initiative 2: Reimagine Your Future Business Model and Functions
AI is an opportunity to transform from the ground up.
Preparing your organization for the era of AI requires anticipating and preparing for the wide-ranging disruptions it is likely to unleash. So far, businesses are mostly thinking incrementally: “How could GenAI make existing processes more efficient?” rather than “How could AI transform business functions and business models from the ground up?” According to EY research, 91% of organizations are using AI primarily to optimize operations, develop self-service tools like chatbots, or automate processes; only 8% are driving innovation, such as new or improved offerings.
Where to act now
In the near term, continue applying GenAI to specific use cases with the goal of improving efficiency and productivity. Prioritize use cases using two criteria.
First, focus on the greatest value creation opportunities by assessing how AI can drive impact to the organization’s bottom line. Use all available tools, such as the EY.ai Value Accelerator, to help identify and implement AI initiatives and solutions based on their contribution to metrics such as revenue, cost and EBITDA.
As EY teams have seen in recent months while helping several clients assess and implement such opportunities, value acceleration ranges from using generative content and automated workflows to boost the conversion rate of sales representatives (at a business information services company — a $100 million opportunity) to automating processes across engineering, customer services, knowledge management and other functions (at a telecommunications and media conglomerate — a $1 billion to $1.5 billion opportunity).
Second, in this early and evolving risk environment, focus on lower-risk use cases. For instance, some internal functions are lower risk than many public-facing ones that could invite consumer backlash and brand damage.
At the same time, move beyond use cases by laying the groundwork for a long-term vision and direction. If taking on the entire business model proves too challenging, given the uncertainties about AI’s evolution, consider instead edging toward it from both ends: a top-down and a bottom-up approach.
In the top-down approach, develop one or more scenarios envisioning how your sector might be reinvented in the future and how your value proposition would need to change to remain competitive. Identify metrics to track which scenarios are becoming more plausible and thresholds for when your organization needs to take additional action.
In the bottom-up approach, start by revisiting the roles and processes where you anticipate AI will have a significant impact. As AI takes over a portion of the work, what new roles will your workforce play? Use your growing understanding of how roles will change to build out a vision for the corresponding business functions.
What to decide later
- As AI becomes more prevalent in certain parts of the enterprise, reinvent these business functions based on the increasing capabilities of AI and the changing roles of people.
- As questions are resolved (e.g., around the evolution of particular scenarios, new market offerings or entrants), embark on a fuller exploration of business model disruption. Ask yourself: in this changing environment, how will you create, deliver and capture value in new ways?
Initiative 3: Ensure Confidence in AI
Robust governance frameworks are needed to address a broad range of risks.
As the use of AI increases across the enterprise, so will the risks and stakeholder expectations. These go well beyond legacy issues such as privacy and cybersecurity — or even widely known AI risks such as biased training data or “hallucinations” that produce fictitious information. The next wave of risks and expectations will include use case-specific issues, from the explainability of loan application denials to the accuracy of medical diagnoses to people’s ability to control autonomous vehicles.
It will also include broader risks, such as intellectual property issues related to large language model (LLM) training data and the implications for third-party users of these models; the risk that hallucinations prove harder to fix than many assume; and the possibility that AI fails to deliver on its potential in the immediate future.
Regulators are responding to these risks with new legislation, the most prominent of which is the EU’s proposed AI Act (for more, see our recent study). But AI is a fast-moving space, while legislating is, by design, consultative and slow.
“Despite the growing need for robust AI regulation, it’s going to be extremely hard to achieve,” says Gordon M. Goldstein, Adjunct Senior Fellow at the Council on Foreign Relations. “Television took five years to regulate, airlines took 20 years to regulate, and most estimates for AI think it will take a decade to regulate this technology.”
Therefore, much will depend on robust governance frameworks developed proactively by companies to build confidence in their AI applications.
EY recently helped a global biopharmaceutical company develop such a governance framework by deploying multidisciplinary teams of digital ethicists, IT risk practitioners, data scientists and subject-matter resources, who assessed how successfully the business was mitigating risks at every layer of the tech stack. The EY team found several gaps, which the client is addressing.
Unfortunately, such approaches are not yet the norm. While a recent EY survey found 77%² of executives agree GenAI will require significant changes to their governance to manage issues of accuracy, ethics and privacy, a 2022 EY study found that only 35% of organizations even have an enterprise-wide governance strategy for AI.
A robust governance approach should aim to build confidence in AI across a wide set of stakeholders — not just consumers and regulators, but also employees, C-suites and boards. To pull this off, it should cover the entire tech stack — data, model, process and output.
Critically, it must account for a unique characteristic of GenAI. “LLMs are probabilistic, not deterministic,” says Nicola Morini Bianzino, EY Global Chief Technology Officer and Co-Leader of EY.ai. “Unlike prior IT platforms, giving an LLM a particular input does not lead to the same output every time. GenAI models instead produce a range of outputs with an underlying probability distribution — and any approach to measuring confidence needs to similarly adopt a probabilistic approach.”
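To make this concrete, here is a minimal sketch of a probabilistic confidence check, in Python. Everything in it is illustrative: `ask_model` is a hypothetical stand-in for whatever GenAI API an enterprise uses, and the 90% agreement threshold is an assumed governance parameter, not a prescription.

```python
# A minimal sketch of probabilistic confidence measurement. Because a given
# prompt yields a distribution of outputs rather than one deterministic
# answer, confidence is estimated by sampling the model repeatedly.
import random
from collections import Counter

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real GenAI call; returns one sampled answer."""
    return random.choice(["approve", "approve", "approve", "refer"])

def estimate_confidence(prompt: str, n_samples: int = 50) -> tuple[str, float]:
    """Sample the model n_samples times; report the modal answer and its share."""
    answers = Counter(ask_model(prompt) for _ in range(n_samples))
    top_answer, count = answers.most_common(1)[0]
    return top_answer, count / n_samples

answer, agreement = estimate_confidence("Should this loan application be approved?")
if agreement < 0.9:  # assumed threshold, set by the AI governance body
    print(f"Low agreement ({agreement:.0%}): route '{answer}' to human review")
else:
    print(f"High agreement ({agreement:.0%}): '{answer}'")
```

A loop like this can run continuously against new data and new model versions, supporting the ongoing confidence monitoring described under “What to decide later” below.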
Regulation has long been a compliance exercise. With AI, governance becomes strategic: do a better job of building confidence in your AI, and you will gain market penetration and competitive advantage.
Where to act now
Establish bodies to oversee your AI governance such as an AI council or AI ethics committee. Consider establishing ethical principles for your AI, similar to those adopted by many non-governmental organizations and big tech companies. Use these principles to guide policies and procedures.
Ensure that any new use cases comply, at a minimum, with existing regulations (e.g., GDPR) with respect to issues such as privacy and data residency. At the same time, work with the initiative responsible for business models and functions to map the emerging risks created by new use cases and begin defining controls to address them.
Track evolving government regulations across the markets in which you operate. Factor these potential regulations into your long-term view of how AI might disrupt your industry, and ask potential ecosystem partners about their preparedness for them.
What to decide later
- Based on use case prioritization and timing of deployment, implement controls for risks associated with new use cases as they are rolled out.
- Implement a probabilistic approach to test the robustness of these controls and estimate the degree of confidence across the tech stack. Continue to monitor confidence over time to ensure it does not decline with the addition of new data or the release of new model versions.
- Prepare for newly passed legislation by understanding the changes your enterprise will need to implement for compliance. As new regulations are rolled out, implement updates to controls, policies and internal reporting systems.
Initiative 4: Address Talent and Technology Gaps
Almost two-thirds of companies are hampered by skills gaps and legacy IT.
Companies face their biggest gaps in two functions: Talent and IT. Almost two-thirds (62%) of companies² agree that their ability to maximize the value of GenAI is hampered by their data structures, legacy technology, or key skill gaps — a challenge that is consistent across sectors.
These gaps include capabilities that companies already possess but need to scale up — such as machine learning engineers, who may be in short supply. The bigger challenge, though, is not scaling up existing capabilities but sourcing or developing entirely new ones. Integrating LLMs, for instance, will require capabilities such as knowledge graphs and retrieval-augmented generation (RAG) systems, with which most companies are not familiar.
Companies will need new capabilities for integrating GenAI models. For example:
- Knowledge graphs. The data that companies have is typically an inventory of past transactions (e.g., sales orders, customer interactions). But to become a useful co-pilot across the enterprise, GenAI needs to learn from other kinds of data as well: the operating processes, sector knowledge and expertise embedded in the minds of employees. To be usable by GenAI, this knowledge needs to be converted into a structured and interconnected semantic network, also known as a knowledge graph.
- Retrieval-augmented generation. Large companies are looking to integrate LLMs by supplementing them with their own sector-specific and proprietary datasets. This is enabled by a process known as “retrieval-augmented generation” (RAG), in which a program supplements the initial user prompt with relevant information from multiple sources, giving the GenAI system a richer prompt and leading to better responses, as the sketch below illustrates.
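To ground both ideas, here is a minimal RAG sketch in Python. All names are illustrative assumptions: institutional knowledge is held as a tiny in-memory knowledge graph of (subject, relation, object) triples, and retrieval is naive keyword overlap, where a production system would typically use a graph database and embedding-based search.

```python
# A minimal RAG sketch over a toy knowledge graph (illustrative only).
# Institutional knowledge is stored as (subject, relation, object) triples;
# retrieve() pulls triples relevant to the user's question, and build_prompt()
# enriches the prompt before it is sent to a GenAI model.

KNOWLEDGE_GRAPH = [
    ("order-to-cash process", "owned_by", "finance shared services"),
    ("order-to-cash process", "sla", "invoices issued within 48 hours"),
    ("finance shared services", "escalation_contact", "regional controller"),
]

def retrieve(question: str) -> list[str]:
    """Return triples whose subject or object shares words with the question."""
    words = {w.strip("?.,").lower() for w in question.split()}
    return [
        f"{subj} | {rel} | {obj}"
        for subj, rel, obj in KNOWLEDGE_GRAPH
        if words & set(f"{subj} {obj}".split())
    ]

def build_prompt(question: str) -> str:
    """Supplement the user's question with retrieved enterprise facts."""
    facts = "\n".join(retrieve(question))
    return (
        "Answer the question using only these facts:\n"
        f"{facts}\n\nQuestion: {question}"
    )

# The enriched prompt would then be passed to any GenAI model.
print(build_prompt("What is the SLA for the order-to-cash process?"))
```

In practice, the retrieval step is where proprietary data and knowledge graphs create differentiation: the underlying model may become a commodity, but the facts you can feed it are not.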
With respect to skill gaps, GenAI itself can provide part of the answer. Already, GitHub’s Copilot is accelerating code writing; one study showed developers using it were 55% faster on a specific coding task. This not only helps alleviate skills shortages, but has other benefits as well — 74% of GitHub Copilot users say it allows them to focus on more satisfying work, while 60% report feeling more fulfilled in their jobs.
Indeed, AI could have a profound impact on human fulfilment and potential. “This could be the greatest democratizing force of our time,” says Beatriz Sanz Sáiz, EY Global Consulting Data and Analytics Leader and Co-Leader of EY.ai. “AI will augment work, create new jobs and increase human potential. It could expand access to education for millions, while allowing lower-skill workers to take on higher-paying opportunities.”
But realizing AI’s human potential requires human acceptance and adoption. Unfortunately, the EY Work Reimagined 2023 Survey highlights a different kind of talent gap: an emerging expectations disparity between leaders and employees. While both expect GenAI to improve work, leaders have significantly higher expectations than employees. Exposing employees to GenAI may help, since sectors with higher adoption also perceive more benefits from the technology. Yet, leaders rank training on GenAI ninth out of 11 possible employee development priorities.
In the longer term, the opportunity is to fill a different kind of gap: between today’s Talent and IT functions and the AI-empowered Talent and IT functions of the future. While AI will reshape functions across the enterprise, some of the biggest opportunities for a fundamentally different approach are in these two functions, which are simultaneously at the frontlines of deploying AI and most directly impacted by it.
Where to act now
Engage with GenAI platform providers. Develop or source the computational power, data fabric and algorithms required for your enterprise’s GenAI objectives. Develop or source the capabilities required for integrating enterprise models, such as RAG and knowledge graphs, or evaluate the feasibility of leveraging open-source customized models. Similarly, focus on preparing your proprietary data for use in integrating GenAI models by ensuring it is properly vetted, cleansed, secured and processed.
Fill key skills gaps. Use GenAI to augment or streamline repetitive tasks and elevate workers. Upskill workers to prepare them for future roles. Launch AI pilots for workers in selected roles to build proficiency in using GenAI, as well as to learn and refine your approach for the broader rollout. Coordinate with the ecosystem partnering initiative as appropriate to fill talent gaps.
Address the gap in employee buy-in with consistent messaging that AI is not here to take away jobs, but to empower your people and free up time for more fulfilling work. Leverage case studies from early successes and employee testimonials to make the case.
What to decide later
- As technologies and offerings mature, decide which capabilities and infrastructure you need to develop in-house versus source from external vendors or partners, based on an ongoing assessment of which parts of the tech stack are becoming commoditized and which remain critical for creating and capturing value.
- Track the progress of GenAI models to assess the pros and cons of open-source vs. proprietary models and determine where in the organization you should deploy each type of model.
- Based on the timing of rollout across the enterprise, retrain your broader workforce with the skills required for working alongside AI — from prompt engineering to interpretation and filtering of outputs.
Initiative 5: Develop an Ecosystem of Alliances
Alliances will be essential in this rapidly evolving space.
Ecosystems of external alliances unlock tremendous value. They can drive double-digit revenue growth and cost efficiencies, while increasing access to a wider pool of talent and capabilities. Unfortunately, in a 2021 EY study of 300 CEOs from the Forbes 2000, only 29% had a strategy that included an ecosystem of external alliances — meaning many companies are relatively inexperienced with this approach.
GenAI’s ability to work with unstructured data could overcome a key obstacle to external partnering: data interoperability. In a world of structured data, partnering with external entities often required data to be cleaned and reformatted to make it interoperable — a slow, labor-intensive task. With GenAI, the interoperability challenge is diminished. And as companies build out knowledge graphs to capture their best practices and business processes, it will become increasingly easy to combine not just data, but knowledge and processes across organizations — driving new offerings and business models.
All of this should open the floodgates to a world of faster and easier multiparty alliances. That’s good news because alliances will be essential in this rapidly evolving space. Developing an LLM is such a massive undertaking that partnering to integrate existing platforms will be vital. Similarly, alliances with GenAI solution providers will be useful to close talent and tech gaps and reengineer business functions.
The expanded use of ecosystem partnering, however, also increases risk and governance challenges. Combining data across organizations raises the specter of collective liability: you are as vulnerable as your weakest link. Our experience conducting AI strategic assessments for a multinational oil and gas firm and other clients shows that partnering with AI providers across multiple business functions makes third-party risk management a key component of strategy and governance. Given the growing landscape of AI vendors, companies need to ensure that ecosystem partnering is closely aligned with the strategic initiative responsible for ensuring confidence in AI.
Where to act now
If you’re new to ecosystems, get started — both because GenAI has lowered the barrier to entry, and because companies orchestrating ecosystems capture greater revenue share than those that just participate. Identify the strengths that make you an attractive partner, such as proprietary data, deep sector knowledge and robust cybersecurity. At the same time, define what you’re looking for in partners, including the ability to fill gaps and complement your proprietary data. Establish pilots with multiple entities. Coordinate with the AI control tower to regularly review the performance of these alliances.
What to decide later
- Decide which alliances to prioritize for further investment based on initial success and the evolving partner landscape. Winnow unsuccessful pilots and scale up successful ones.
- Identify new partners as new gaps and needs emerge.
- Move from a series of alliances to multiparty ecosystems in which various entities contribute unique competencies to achieve a shared vision.
Summary
If AI delivers on its potential, it could be every bit as transformative as the personal computer has been over the last five decades, supercharging productivity, unleashing innovation, and spawning new business models — while disrupting those that don’t adapt quickly enough.
The uncertainty and resource constraints confronting many companies are real, but don’t let them become an excuse for inaction and delay. The five initiatives described here provide a path through these challenges. It’s not too early to start transitioning from tactical to strategic, and to begin developing a long-term vision for your company.
That vision, and the strategy it informs, can be adjusted as uncertainty gets resolved. There’s much you can decide later.
And there’s much on which you can act now.
1. A pulse survey of 150+ executives from global companies, the majority of which have $5B or more in annual revenue. Collected at EY’s Innovation Realized events in May 2023.
2. Global survey of 800+ executives across business functions, including 50% from the C-suite. Respondents represent companies with $1 billion or more in annual revenue, across 15+ sectors, with headquarters in 20+ countries across the Americas, EMEIA and Asia Pacific. Data was collected from June to July 2023.