Your business needs an A.I. watchdog. Here’s how to make sure it has teeth


Eight days. That’s how long Google’s Advanced Technology External Advisory Council (ATEAC), an eight-member committee set up in 2019 to guide the company’s development of A.I., survived before the company dissolved it.

The committee imploded for several reasons. Google wanted the ATEAC to meet just four times a year. It expected its members to work pro bono. And although the digital giant claimed that the ATEAC’s efforts would inform its A.I. use, it wasn’t clear which projects the committee would monitor, to whom it would report, or which executive(s) would act on its recommendations. In retrospect, the ATEAC was overwhelmed by rising organizational and societal skepticism about its role because it simply wasn’t set up for success.

As CEOs expand their organizations’ uses of A.I., they face complex challenges. They find that they have to manage tradeoffs among objectives such as profits, consumer safety, reputation, ethics, and values that are often in conflict with one another. These tradeoffs force them to choose between making decisions that will have tangible short-term impacts and those with mid- to long-term implications that are difficult to evaluate. 

These ever more complex tradeoffs are inevitable with A.I. First, A.I. allows companies to offer new services—such as personalized recommendations for each consumer and preventive maintenance of every machine—that they couldn’t offer before. Second, A.I.-generated tradeoffs often end up having a major impact because companies can scale A.I. rapidly. Third, A.I. learns and evolves over time, even without human supervision, so the risks are harder to predict; Microsoft’s chatbot Tay turned racist in 2016 less than 16 hours after it went online. Finally, in the absence of regulations and guidelines, business leaders find it tough to identify and manage the risks of using A.I.

That’s why several companies—digital giants, such as Facebook, Microsoft, and Google, as well as analog leaders, such as Merck and Orange—have set up A.I. watchdog boards or supervisory committees (A.I. review boards, for short) to oversee their A.I. efforts. There’s public support for the idea; a recent Brookings survey found that around 66% of people believed that every company should have an A.I. review board. Moreover, other sectors, such as health care and biotech, have found expert committees—in the U.S., they’re called institutional review boards—to be the best way to deal with ethical dilemmas in hospitals, stem cell research, and scientific laboratories.

But as Google’s ill-fated experience shows, it’s critical to figure out in advance how to constitute an A.I. review board, how it should function, and how it will relate to the parent organization. Our work shows that businesses should adhere to two guidelines if they want their A.I. review boards to do their jobs.

Provide a clear purpose

A company should give its A.I. review board clear mandates and distinct roles to play. Specifically, it should set the board two objectives. At the outset, CEOs should ask A.I. review boards to identify early—and if possible, to anticipate—A.I. risks; develop guidelines to prevent their companies from falling victim to them; and continuously monitor the risks, which keep growing once the algorithms are live and evolving.

Then CEOs should ask these committees to evaluate the tradeoffs they must make when the choice is between two bad options. That’s perhaps the toughest part of an A.I. board’s role. Consider, for instance, a drugstore chain we worked for that recently used A.I.-powered geospatial analysis and historical data to map the U.S.; rank neighborhoods by algorithmic predictions of the profits or losses its stores there were likely to make; and recommend where it should invest in opening more stores and identify locations where it ought to disinvest. 

Top management quickly realized that the A.I. would consistently recommend store closures (and no openings) in neighborhoods with large minority and low-income populations. Management immediately stepped in because although keeping stores open in those areas would result in losses, the retailer didn’t want to reduce the access of disadvantaged communities to drugs and pharmaceuticals, or implement decisions that weren’t aligned with its purpose and values. 

Award the power to act

Companies have to empower A.I. review boards to execute decisions. With a purely advisory role unlikely to resonate with stakeholders anymore, the A.I. review board has to be made core, not peripheral, to the business. Every CEO must therefore figure out how the A.I. review board will enforce decisions, and how its powers will dovetail with those of the company’s board and top management team. 

Drawing from the well-established practices of financial services firms that have to constantly deal with compliance issues and financial risks, and adapting them for an A.I.-driven world, we find several ways in which companies can ensure that their A.I. review boards don’t become echo chambers. 

* Accountability. Companies must ensure that top management teams, led by CEOs, are accountable for the consequences of A.I.-related decisions. This is the most critical step in managing A.I. risks. One member of the executive committee—ideally, the chief risk officer—should chair the A.I. committee, so they can routinely share expertise, communicate concerns, and present the committee’s conclusions at executive committee meetings. Like a commercial bank’s chief risk officer, the A.I. committee’s head must sign off on every A.I. project, and the committee should enjoy veto power over those projects.

* Training. Executives in most organizations are still learning how to use A.I., and about the risks from the unintended consequences of its deployment, which makes it tough for them to draw on experience when making decisions. Most companies’ executive committee members must therefore become more familiar with the technology through A.I. and data-literacy programs, custom workshops, and executive education programs.

* Diversity. An A.I. review board must include members drawn from within the organization as well as from outside it. Because its mandate will cover a variety of issues—from business and social ones to financial and ethical risks—the members must come from diverse backgrounds, and some should be known for their unconventional viewpoints.

Because A.I. regulations are still nascent, the A.I. review board must also include representatives of legal bodies, the global A.I. ecosystem, and civil society. The latter is critical to ensure that the company gains a social license for A.I. In addition to stakeholder representatives, the A.I. review board should have representatives from the organization’s legal, HR, and diversity functions as well as the A.I., digital, and data teams. 

* Communication. An A.I. board must foster links with the entire organization, not just the company’s board and top management team, and set up communication channels with its operational teams to detect and mitigate unanticipated A.I. risks. Most banks, for instance, have created joint decision-making processes that involve executives from the business and risk functions. They evaluate every risky business opportunity and “red flag” the decisions that must be escalated. 

Before a company launches an A.I. project, the review board must conduct an audit of the project and the underlying systems to identify the possible risks and tradeoffs; designate the risk owners on the team who will be accountable for different risks; and create real-time escalation procedures to follow when risks crop up. Then it must work with the project teams to lay down countermeasures; sign off on the risk-mitigation plans; and ensure that the plans are implemented in a timely manner.

Because A.I. can learn and evolve, and the environment where it operates can also change, the A.I. review board must put in place monitoring mechanisms all along the technology life cycle and conduct periodic reviews to understand how the A.I. and the environment are changing. To be more than a paper tiger, an A.I. review board must be able to make public its recommendations and declare whether the company followed them or not, so it also has to foster links with the outside world. 

That said, CEOs must ensure that their A.I. review boards don’t impede the technology’s innovation and deployment. Ironically, the stronger and more credible an A.I. review board is, the less likely it is to prove a bottleneck. After all, as Alexander Pope once wrote, it is only a little learning that is a dangerous thing.


François Candelon is a managing director and senior partner at BCG and global director of the BCG Henderson Institute.

Theodoros Evgeniou is a professor at Insead, a World Economic Forum Partner on A.I., a member of the OECD Network of Experts on A.I., a BCG Henderson Institute adviser, and cofounder and chief innovation officer of Tremau.

Maxime Courtaux is a project leader at BCG and ambassador at the BCG Henderson Institute. 

Some of the companies mentioned in this column are current or past clients of BCG.
