CXOs Need to Get On Board with AI Risk Management
SPONSORED BY NUTANIX
AI offers a range of benefits, but CXOs must mitigate the risks of artificial intelligence adoption.
AI has emerged as a key driver of enterprise digital transformation across industries, delivering new insights and improving decision making. Nevertheless, it also poses major risks to the business, both now and in the near future. Board-level CXOs are central to ensuring their organizations mitigate the risks of adopting artificial intelligence (AI) technology.
The events of 2020 have made senior leadership teams better understand, and prioritize, how technology can transform an organization. Enterprise cloud computing, collaboration tools, and other technologies have enabled businesses to remain viable despite having to close offices and slow down production lines to protect staff and comply with health guidelines and social distancing requirements. According to the Harvey Nash CIO Survey, before the Covid-19 pandemic hit the global economy, over half of the world’s CIOs had seen their technology budgets expand. The study found that CIOs received extra budget to focus on operational efficiency, digital customer service, and digital products and services. All three are part and parcel of the digital transformation of business and society, which was already gathering force and was massively accelerated by the pandemic. “Technologies from AI to cryptocurrency and online shopping are changing how we live, and what it means to be human,” says leading analyst firm Gartner. “CIOs and IT leaders must help their organizations adapt in this changing world.”
Months after Harvey Nash and Gartner reported their findings, huge swathes of society digitally transformed and will remain in their new digital state. This change has led to exponential growth in the volume of data that organizations must manage to deliver insight, provide great service, and optimize their businesses. AI is, therefore, an essential tool for managing both the data and the digital economy organizations operate within.
“By 2024, AI identification of emotions will influence more than half of the online advertisements you see,” predicts Gartner. “With the increasing popularity of sensors that track biometrics and the evolution of artificial emotional intelligence, businesses will be able to detect consumer emotions and use this knowledge to increase sales. Along with environmental and behavioral indicators, biometrics enable a deeper level of hyper-personalization. Brands should be transparent and educate consumers about how their data is being collected and used.”
“AI is not magic; it works on the data you give it, so make sure that you give it the right data,” adds the Digital Commerce Chief Architect at a major global consumer goods manufacturer.
The good news for CXOs is that there is a wealth of guidance available to ensure that AI adoption is low risk and delivers business benefits. The bad news, according to Tim Gordon, founding partner of Best Practice AI, a research firm that helps organizations with their AI adoption, is that there are too many frameworks. Gordon told business research organization Leading Edge Forum (LEF) that there are “hundreds of frameworks to choose from.” He believes these frameworks are beginning to come together around a set of themes that will benefit CXOs and their boards: fairness and inclusivity, transparency, social benefit, responsibility, and accountability.
As CXOs begin to carry out proof-of-concept projects or implement AI, regulations are already surfacing that boards need to be aware of and ensure the business complies with. Europe’s General Data Protection Regulation (GDPR) is one of the most significant to enter the business environment, while similar regulations are coming into force in California (for example, the California Consumer Privacy Act) as well as in Asia Pacific and Latin American economies.
“Boards need to get to grips with the regulations on the horizon, not only within their own region but also the international cooperation and competition factors, to ensure they’re prepared to continue innovating,” says Leading Edge Forum. Gordon adds that organizations are already putting themselves at risk with AI implementations: “The data being used to train algorithms is based on existing biases that reflect societal bias, and finally, the new business models that are enabled by AI.”
“The compliance and reputational risks of artificial intelligence pose a challenge to traditional risk-management functions,” finds business research and advisory group McKinsey. “To remain competitive, firms in nearly every industry will need to adopt AI and the agile development approaches that enable building it efficiently to keep pace with existing peers and digitally native market entrants. But they must do so while managing the new and varied risks posed by AI and its rapid development.”
Research by McKinsey finds that organizations will need to significantly modernize their risk management practices. “Over the past two years, AI has increasingly affected a wide range of risk types, including model, compliance, operational, legal, reputational, and regulatory risks. Many of these risks are new and unfamiliar in industries without a history of widespread analytics use and established model management,” McKinsey says, adding that vertical markets with higher data and analytics maturity are not immune to struggling with the risks posed by AI. “AI makes the risks manifest in new and challenging ways. For example, financial services institutions have long worried about bias among individual employees when providing consumer advice. But when employees are delivering advice based on AI recommendations, the risk is not that one piece of individual advice is biased but that, if the AI recommendations are biased, the institution is actually systematizing bias into the decision-making process.”
CXOs and their senior leadership teams will need to broaden the range of risk models and tools they use to understand every aspect of their business, whether an internal process or external sales and marketing.
Experts believe that by 2023 most G7 economies will have regulations in place for AI.
The greatest risk organizations face is losing the trust of their customers and their own team members. As LEF states, trust is a key factor in competitive advantage for organizations, and in particular for AI-driven businesses. “Organizations which are highly trusted will find it easy to obtain the data they need to power automated decision-making systems, while low-trust businesses will come up against increasing barriers in the form of reputational costs, regulatory frameworks, or pure operational cost to obtain the data they need to function.”
“Proving a new technology takes time, so it is important to do a proof of concept to get the business and yourself familiar with the technology in a new context, and you can then get close to the compliance and business case of your organization with its own data,” says a CXO in the foods sector.
Organizational trust begins at the very top of the leadership team. Boards will need to be capable of handling the risk vectors of AI at the operational, regulatory, and reputational levels. CXOs will require additional support in this new environment, which is why search firms like Savannah Group see rising demand for digitally native non-executive directors.
Despite the risks, AI provides myriad business and societal benefits that will outweigh them. Technology analyst firm Gartner believes that by 2023 the number of people with disabilities in employment will triple, and AI will play a part in this as the technology reduces barriers to access.
Real benefits are visible everywhere. At a major European hospital, for example, oncologist Dr. Yvonne Rimmer uses an AI application to understand tumors, reporting faster processing times and more tailored treatments. She told a medical journal: “It’s important for patients to know that the AI is helping me do my job; it’s not replacing me in the process. I double-check everything the AI does, and can change it if I need to.”