
Most executives know that artificial intelligence (AI) has the power to change almost everything about the way they do business—and could contribute up to $15.7 trillion to the global economy by 2030. But what many business leaders don’t know is how to deploy AI, not just in a pilot here or there, but throughout the organization, where it can create maximum value.

The “how” is the sticking point with any emerging technology, and AI is no exception. How do you define your AI strategy? How do you find AI-literate workers or train existing staff? What can you do to get your data AI-ready? How do you ensure your AI is trustworthy?

To complicate matters, the answers to these questions often vary from one company to the next—and the environment is continually evolving. But businesses can’t wait for the dust to settle. AI adoption, which has happened in fits and starts, will accelerate in 2019.

AI reality check

To get a read on where organizations currently stand, we surveyed more than 1,000 US business executives at companies that are already investigating or implementing AI. A full 20% said their organizations plan to implement AI enterprise-wide in 2019. If these ambitious plans pan out, many leading US companies will become AI-enhanced—not just in pockets of the organization, but throughout their operations.

How to scale AI

Last year, we offered eight predictions about how AI was likely to develop over the course of 2018, with implications for business, government and society. The trends we identified—including AI’s true workforce impact, a call for all companies to focus on responsible AI and emerging threats around cybersecurity and national competitiveness—are even more relevant today. But, as we head into 2019—with AI increasingly moving from the lab to offices, factories, hospitals, construction sites and consumers’ lives—a different approach is needed. We’re not just highlighting what is likely to happen; we’re telling business leaders what they must make happen with AI.

1. Structure: Organize for ROI and momentum

If you’re considering AI for your business, it’s time to scale up or give up. Leading companies are already starting to move their AI models into production, where they will run operations to enhance decision-making and provide forward-looking intelligence to people in every function. If you’re serious about AI, formalize your approach and develop company-wide capabilities so successful (and smaller) projects can be replicated and built into a greater whole.

Don’t shoot for the moon

AI is going to transform nearly everything about your business and markets. That’s a good reason to take action—but it’s not a good enough reason to do too much too quickly. If done right, developing an AI model for one specific task can enhance an existing process or solve a well-defined business problem, while simultaneously creating the potential to scale to other parts of the enterprise.

One fact about AI algorithms that may surprise business users: there aren’t that many of them. The same algorithms are capable of solving most business problems for which AI is relevant, so if you can successfully apply them in one area of your business, you can usually use them in others.

For example, every company has to process invoices. By automatically extracting information, even from invoices that aren’t fully standardized, AI tools can automate the process to reduce costs and processing time.

You can then modify and use the AI component to speed up other parts of the company—such as customer service, marketing, tax and supply chain management—that also consume huge amounts of unstructured and semi-structured data.
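To make the reuse idea concrete, here is a minimal sketch in Python using hypothetical invoice lines and support tickets. The scikit-learn stack, data and labels are illustrative assumptions, not a prescribed solution: one generic text-classification pipeline is trained for invoice routing, then the identical component is retrained for customer service.

```python
# A minimal sketch of a reusable AI building block. The data, labels and
# scikit-learn stack are illustrative assumptions, not a prescribed solution.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def build_text_classifier():
    """One generic short-text classifier, reusable across business areas."""
    return Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

# Invoice processing: route line items to expense categories.
invoice_lines = ["Office chairs x 4", "Cloud hosting, March", "Catering for offsite"]
invoice_labels = ["furniture", "it_services", "events"]
invoice_model = build_text_classifier().fit(invoice_lines, invoice_labels)

# Customer service: the identical building block, retrained on different labels.
tickets = ["Where is my refund?", "App crashes on login", "Please change my address"]
ticket_labels = ["billing", "technical", "account"]
ticket_model = build_text_classifier().fit(tickets, ticket_labels)

print(invoice_model.predict(["Desk lamps x 10"]))
print(ticket_model.predict(["I was charged twice this month"]))
```

The point is not the tiny example but the pattern: the same component, wrapped behind a common interface, can be dropped into invoice processing, customer service or supply chain workflows with only new training data.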

The goal is to build a portfolio of reusable building blocks, to create both quick ROI and momentum to scale. Executives in our survey are embracing this strategy: they ranked developing AI models and data sets that can be used across the organization as the most important capability they would focus on in 2019.

The right AI foundation

When AI initiatives begin with AI specialists, they sometimes struggle to gain broad traction. When they come from the business side, projects may have a limited focus and may not take full advantage of the technology. In both cases, isolated teams may create duplicate—or incompatible—efforts.

The answer is oversight from a diverse team that includes people who have business, IT and specialized AI skills and represents all parts of your organization. You need to be disciplined, creating an organizational structure that crosses functions and enables you to establish a clear AI strategy. A center of excellence (CoE) is often the best way to build this AI foundation—and the model we expect to become most prevalent. (We have a CoE at PwC.) Some companies may choose to add AI responsibilities to existing analytics or automation groups, or to other established CoEs.

Wherever this group resides, its responsibilities should cover business questions, such as how to identify use cases and how to develop accountability and governance. It should establish and oversee enterprise-wide data policies. And it should determine technology standards, including architecture, tools, techniques, vendor and intellectual property management and just how intelligent AI systems need to be.

Finally, the AI team should create and manage a digital platform for collaboration, support and resource management. Think of it as the one-stop shop for AI efforts: a virtual environment with pluggable tools, where business and tech professionals will share resources (such as data sets, methodologies and reusable components) and collaborate on initiatives.

2. Workforce: Teach AI citizens and specialists to work together

As we predicted last year, upskilling non-AI professionals to work with AI has become a crucial part of workforce strategy. A new class of tools, including AutoML, which streamlines and automates part of the process for creating AI models, is democratizing AI. In our survey, 38% of executives said they will focus efforts on AI tools for business people—the second-ranked capability they will cultivate, after reusable data sets and models.

But user-friendly AI is still complex. Even with basic training, business people may not fully understand different AI algorithms’ parameters and performance levels. They could accidentally apply the wrong algorithms, with unintended results.
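To illustrate what this kind of tooling automates, here is a rough sketch of the idea behind automated model selection, written with plain scikit-learn rather than any specific AutoML product: several candidate algorithms are scored the same way by cross-validation and the best one is chosen, reducing the chance that a business user picks a poorly suited algorithm by hand. The dataset and candidates are assumptions chosen only to make the example runnable.

```python
# An illustrative sketch of automated model selection (not any vendor's AutoML).
# The dataset and candidate models are assumptions chosen for a runnable example.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "random_forest": RandomForestClassifier(n_estimators=200),
}

# Score every candidate the same way, then pick the best performer.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("Selected model:", best)
```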

The answer is a workforce strategy that creates three levels of AI-savvy employees—and provides ways for all three to work together successfully.

Citizen users, citizen developers, data scientists

As AI spreads, most of a company’s employees will need training to become AI citizen users. They’ll learn how to use the company’s AI-enhanced applications, support good data governance and get expert help when needed.

A more specialized group, perhaps 5 to 10% of your workforce, should receive further training to become citizen developers: line-of-business professionals who are power users and can identify use cases and data sets, and work closely with AI specialists to develop new AI applications.

Finally, a small but crucial group of data engineers and data scientists will do the heavy lifting to create, deploy and manage AI applications.

To get these three groups up and running, you’ll need to systematically identify new job skills and roles. What work do you need citizen users or developers to handle? What applications require an experienced data scientist?

You’ll then need an equally systematic approach to filling those roles—both internally and externally—and encouraging the different groups to collaborate. Enterprise-wide upskilling should address both technical skills and digital ways of working. Performance and compensation frameworks will have to adapt.

Many employees will successfully upskill to fill new roles, but some won’t be able to make the transition. So you need to prepare for some turnover.

Meet the AI jobs challenge

For many leaders, trying to size AI’s impact on jobs has become a fool’s errand. They know it’s happening, but just how big the impact will be (and when the jobs market will feel it) remains open for debate. Estimates range widely, including those from PwC’s international jobs automation study, which put the short-term impact at less than 3% of jobs lost by 2020, but as high as 30% by the mid-2030s.

Executives in our survey agree that, for now, AI isn’t taking away jobs in their organizations. In fact, twice as many executives said AI will lead to an increased headcount (38%) as those who said AI will lead to job cuts (19%) in their organization.

Right now the challenge is to fill jobs: 31% of executives are worried about the inability to meet the demand for AI skills over the next five years. Upskilling can create citizen users and developers, but you’ll likely need to hire highly trained programmers and data scientists. Forging partnerships with colleges or launching apprenticeship schemes are good places to start.

Workplace culture is also a big factor here. Many AI specialists want to work for a company that is using AI for good. Many also value workplaces with the organizational setup, resources, definition of roles, exciting research and individual empowerment that will inspire them to do great work in collaboration with other talented people.

3. Trust: Make AI responsible in all its dimensions

As we predicted a year ago, concerns have grown over how AI could impact privacy, cybersecurity, employment, inequality and the environment. Customers, employees, boards, regulators and corporate partners are all asking the same question: can we trust AI? So it’s no surprise that executives say ensuring AI systems are trustworthy is their top challenge for 2019.

How they’ll overcome that challenge depends on whether they’re addressing all facets of responsible AI:

  1. Fairness: Are you minimizing bias in your data and AI models? Are you addressing bias when you use AI?
  2. Interpretability: Can you explain how an AI model makes decisions? Can you ensure those decisions are accurate?
  3. Robustness and security: Can you rely on an AI system’s performance? Are your AI systems vulnerable to attack?
  4. Governance: Who is accountable for AI systems? Do you have the proper controls in place?
  5. System ethics: Do your AI systems comply with regulations? How will they impact your employees and customers?

You should build in accountability for each area, whether inside your AI CoE or in an adjacent group that works closely with the CoE. An increasing number of companies are overseeing responsible AI through ethics boards or chief ethics officers for technology, with AI as part of their remit. It’s an encouraging trend, which we expect will accelerate. You may also need to create job roles that combine technical expertise with an understanding of regulatory, ethical and reputational concerns.

Set up controls and balance trade-offs

To establish controls over AI’s data, algorithms, processes and reporting frameworks, you’ll need blended teams of technical, business and internal audit specialists. As they continually test and monitor controls, these teams will have to consider appropriate trade-offs.

With interpretability, for example, you want to strike the right balance between performance, cost, a use case’s criticality and the extent of human expertise involved. A self-driving car, an AI healthcare diagnosis and an AI-led marketing campaign would all require different levels and kinds of interpretability and related controls.

Algorithms that explain themselves

Other ways to make AI more trustworthy are coming from advances in AI itself, particularly in the area of explainable AI, or XAI. The XAI program from the Defense Advanced Research Projects Agency (DARPA), for example, is working on more interpretable algorithms. The goal is an AI solution that can explain its rationale and its strengths and weaknesses, and convey how it will act in the future.

As we predicted last year, in 2019 a growing number of enterprises will want to open up AI’s black box and make AI’s decisions more transparent, interpretable and provable. They’ll also need to anticipate when algorithms will require auditing. In the future, we expect some governments to make some level of interpretability a regulatory requirement.
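As one concrete illustration of interpretability (a generic technique, not the DARPA program’s methods), permutation importance measures how much a trained model’s accuracy drops when each input feature is shuffled. The open dataset and model below are illustrative assumptions used only to make the sketch runnable.

```python
# A minimal interpretability sketch using permutation importance: shuffle each
# feature and see how much the model's score suffers. Dataset and model choice
# are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the features whose shuffling hurts accuracy the most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Explanations like these will not satisfy every regulator or use case, but they are a practical first step toward opening the black box.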

4. Data: Locate and label to teach the machines

Last year, we showed that AI answers the big question about data: how to create value. In our survey, the top AI-related data priority for 2019 is to integrate AI and analytics systems to gain business insights from data.

That’s a realistic goal. AI can be used with data and analytics to better manage risk, help employees make better decisions, automate customer operations and more.

But there’s a problem—a big one. Our survey indicates that businesses aren’t providing the foundation that AI needs to be successful. Less than one-third of executives say labeling data is a priority for their business in 2019.

How AI learns

For machine learning to detect significant patterns in the present and predict the future, it must be taught. Show it enough historical data on consumer behavior, for example, and it will eventually be able to predict how those consumers—and others who are like them—will behave going forward.

But to create the data sets needed for training, you have to label the data. A simplified example is determining whether a consumer is satisfied or not. For those data sets to help support AI across the enterprise (those consumers may interact with more than one business line), you’ll need standards for labeling them consistently.
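As a toy illustration of the point (the records, features and labels below are entirely hypothetical), a consistently applied satisfaction label is what lets a model learn the pattern at all.

```python
# A toy example of supervised learning from labeled data. Features and the
# 1 = satisfied / 0 = not satisfied labels are hypothetical.
from sklearn.linear_model import LogisticRegression

# Features per customer: [support_calls, average_rating, days_since_last_purchase]
X = [
    [0, 4.8, 12],
    [5, 2.1, 90],
    [1, 4.2, 30],
    [4, 1.9, 120],
    [0, 5.0, 7],
    [3, 2.5, 60],
]
y = [1, 0, 1, 0, 1, 0]  # labels applied with one enterprise-wide standard

model = LogisticRegression().fit(X, y)
print(model.predict([[2, 3.9, 20]]))         # new customer, likely satisfied
print(model.predict_proba([[6, 1.5, 150]]))  # new customer, likely not
```

If two business lines label the same behavior differently, a model trained on the combined data learns a muddled pattern, which is why consistent standards matter.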

An AI CoE can create and monitor data standards, as well as develop systems and processes that make it easier for employees to create usable, labeled data sets for future use.

New tools to fill the gaps

Even with better data governance, there will be challenges. Some business problems have AI solutions that would require training data that companies may not have available.

But new, lean and augmented machine learning techniques can enable AI to produce its own data based on a few samples. They can also transfer models from one task with lots of data to another one that lacks data. AI can sometimes synthesize its own training data by using techniques such as reinforcement learning, active learning, generative adversarial networks and digital twins. Simulations based on probabilities can also create “synthetic” data that can be used to train AI.
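The simulation route, in particular, can be sketched in a few lines: draw synthetic examples from a probabilistic model of the process and train on them. Everything below (the fraud scenario, the distributions, the parameters) is an illustrative assumption, not a recommended model.

```python
# A minimal sketch of simulation-based synthetic training data. The scenario,
# distributions and parameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 5000

# Simulate [amount, hour_of_day] for two hypothetical transaction classes.
legit = np.column_stack([rng.normal(50, 15, n), rng.uniform(8, 22, n)])
fraud = np.column_stack([rng.normal(400, 120, n // 20), rng.uniform(0, 6, n // 20)])

X = np.vstack([legit, fraud])
y = np.concatenate([np.zeros(len(legit)), np.ones(len(fraud))])

model = RandomForestClassifier(n_estimators=100).fit(X, y)
print(model.predict([[45.0, 14.0], [520.0, 3.0]]))  # expect roughly [0., 1.]
```

Synthetic data is only as good as the assumptions behind the simulation, so models trained this way still need validation against whatever real data exists.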

Pay attention to policy

The AI policy landscape is still in its infancy. Many policymakers see this moment as the beginning of an AI arms race in need of public funding and deregulation. Others are calling for comprehensive guidelines that address ethical algorithms, workforce retraining, public safety, antitrust and transparency. As we predicted last year, countries have begun to compete through national AI strategies. As of December 2018, more than two dozen countries had released strategies or were in the process of formulating them.

At the same time, emerging regulations around data privacy will also impact AI and may limit its growth, because they affect how companies operating globally can use data generated across territories. Europe’s General Data Protection Regulation went live in May 2018, and the California Consumer Privacy Act is coming in 2020. GDPR and CCPA have differences, but both give individuals the right to see and control how organizations collect and use their personal data—as well as recourse should they suffer damages due to bias or cybersecurity breaches.

Companies should take a global approach to regulatory issues: align the teams that are helping shape policy in different jurisdictions, and address compliance by applying best practices globally. Complying with GDPR, for example, even if your company has no European operations, will get you ready for CCPA and other future regulations.

5. Reinvention: Monetize AI through personalization and higher quality

Boosting the top and bottom lines with AI is not a distant dream. Many businesses are already using AI to improve operations and enhance the customer experience. But in 2019, a number of them will plan or develop new business models based on AI and investigate new revenue opportunities. Many will cultivate these new businesses in separate parts of their organization, distinct from CoEs that are more internally focused.

Right now, the greatest gains from AI are coming from productivity enhancements, as businesses use AI to automate processes and help employees make better decisions. But as our Global Artificial Intelligence Study found, the majority of AI’s economic impact will come from the consumption side, through higher-quality, more personalized and more data-driven products and services. Healthcare, retail and automotive could see the most immediate benefits, according to our analysis of more than 300 AI use cases.

AI in healthcare, for example, could enable new business models based on monitoring patient lifestyle data, provide quicker and more accurate diagnoses of cancer and other diseases and generate personalized and adaptive health insurance. Retailers are already using AI to anticipate trends and guide the business to meet them. Next up is hyper-personalized retail: AI and automation make it feasible for retailers to offer a growing number of products or services made specifically for one individual.

Your robot strategy consultant

AI is even being used to help guide some of these decisions by gamifying strategy. For instance, a leading auto manufacturer has been using AI to test more than 200,000 go-to-market scenarios for autonomous, ridesharing fleets. The model has helped identify key economic drivers and optimal levels for infrastructure and vehicles.

Investing in AI startups

Established businesses aren’t the only ones trying to monetize AI—start-ups are proliferating. As of Q3 2018, there were 940 AI companies identified by the PwC / CB Insights MoneyTree™ Report. US venture capital investment in those that are private—some 790 companies—is booming: $6.6 billion in the first three quarters of 2018, compared to $3.9 billion in the same period of 2017, according to the MoneyTree™ Report.

And not all of that money is flowing from Sand Hill Road and private equity firms: record amounts are coming from corporations, either through venture capital arms or direct investments. In 2018, some $983 million was invested by companies looking to stake a claim in outside AI development. They also acquired AI companies outright: 35 companies valued at $3.8 billion. Investing in—rather than developing—AI is a trend we expect to accelerate. Our 2018 Digital IQ Survey, for example, found that while just 8% of companies were making significant direct investments in AI, another 52% would be interested in pursuing acquisitions or alliances.

6. Convergence: Combine AI with analytics, the IoT and more

AI’s power grows even greater when it is integrated with other technologies, such as analytics, ERP, the Internet of Things (IoT), blockchain, and even—eventually—quantum computing. The benefits of this convergence trend aren’t limited to AI. It’s where the greatest gains from all the essential eight technologies will come from.

These technologies need AI

Managing the convergence of AI with other technologies is a top AI challenge for 2019, according to 36% of the executives surveyed—putting it on a par with retraining employees and just below ensuring trust in AI. Helping advanced, predictive and streaming analytics further evolve with AI is a common priority. This convergence can make new data-driven business models more powerful.

The IoT can also reap big benefits when combined with AI. A large enterprise may soon have millions of IoT sensors gathering information from business equipment and consumer devices. AI and analytics will play a critical role in finding patterns in this tidal wave of data to support everything from systems maintenance to marketing insights. Embodied AI, which embeds AI chipsets directly into IoT devices to create local intelligence, will help meet this challenge.
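One simple version of that pattern-finding, small enough to run on an edge device, is a rolling statistical check on a sensor stream. The readings, window size and threshold in the sketch below are hypothetical.

```python
# A simple sketch of spotting anomalies in an IoT sensor stream with a rolling
# z-score. Readings, window size and threshold are hypothetical.
import random
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        recent.append(value)

# Simulated vibration readings with one spike injected at position 150.
random.seed(1)
stream = [random.gauss(1.0, 0.05) for _ in range(300)]
stream[150] = 2.5

for index, value in detect_anomalies(stream):
    print(f"Anomalous reading {value:.2f} at position {index}")
```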

Data and DevOps

Successfully integrating AI with other technologies begins with data. Organizations that have invested in identifying, aggregating, standardizing and labeling data—with the data infrastructure and storage to back it up—will be well-placed to combine AI with analytics, the IoT and other technologies.

However, to integrate AI with other enterprise systems, human specialists will have to converge too. Instead of data scientists completing an algorithm, then handing it off to an IT specialist to code an application programming interface (API) or sending it to someone in the business who will then apply it, these teams should work together from the start.

One part of the answer involves DevOps techniques, which put development and operational teams in a feedback loop for constant collaboration and interactive changes to new products. Another part will involve creating new roles for employees to serve as translators for and liaisons among the various teams.

Another point to consider: as AI is integrated with technologies and advanced systems that work around the clock, its algorithms will need a continuous flow of new data from which to learn. Otherwise, AI models will be working with outdated data, which will degrade their performance. Models will also need regular testing, updating and replacement.
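A minimal monitoring loop captures that last point: score the deployed model on freshly labeled data and flag it for retraining when performance slips. The data, the drift pattern and the 90% threshold below are illustrative assumptions.

```python
# A minimal sketch of monitoring a deployed model for data drift. Data,
# drift pattern and the accuracy threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# "Historical" data the model was originally trained on.
X_old = rng.normal(0.0, 1.0, (1000, 3))
y_old = (X_old[:, 0] + X_old[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_old, y_old)

# Fresh production data whose distribution (and underlying relationship) has drifted.
X_new = rng.normal(1.5, 1.0, (200, 3))
y_new = (X_new[:, 0] - X_new[:, 2] > 1.5).astype(int)

accuracy = accuracy_score(y_new, model.predict(X_new))
print(f"Accuracy on latest batch: {accuracy:.2%}")
if accuracy < 0.90:  # illustrative threshold
    print("Performance has degraded: schedule retraining on recent data.")
```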

