Digital transformation – the challenges of Artificial Intelligence (AI) for boards
By Peter Snowdon | 08/04/2021 in Blog posts
Digital transformation continues to be one of the greatest challenges facing companies. The risk of falling behind could be fatal for a business, but there are also real risks for firms as they develop the new technology capabilities involved.
Businesses are increasingly looking to artificial intelligence (AI) to gain competitive advantage. However, whilst AI may offer enormous prospective benefits to companies, there are considerable possible downsides for boards if the planning and implementation of AI capability is flawed.
Artificial intelligence and boards
AI systems are able to perform tasks and functions which have traditionally required human intelligence. Much of the technology is complex, but the challenges of AI are not just an issue for the IT department; they reach to the very top of firms and ultimately must be addressed by the board.
AI is potentially transformative for many businesses, and many boards are now having to develop and agree effective AI strategies, as well as oversee and challenge the management teams responsible for technology planning and the implementation of AI solutions.
In practice, most board members are likely to need technical support in performing their roles in relation to AI. As with any other area of decision making, developing and overseeing the implementation of an AI strategy is a collective exercise for the board as a whole, not just for those directors who happen to have relevant technical skills. Directors are not required to be experts on every subject, but they are expected to be fully engaged in board decision making, seeking appropriate support and expertise where needed.
In some areas of industry, boards and individual directors may already have developed effective approaches towards decision making where the challenge faced is technically complex. For example, many banks and financial firms already rely on complex algorithms and modelling when carrying out functions such as calculating pricing or assessing risk. Directors are not expected to be familiar with every detail of these tools, but they are expected to satisfy themselves that there is proven capability in the business that understands such tools, knows how to use them appropriately and can assess how they are performing in practice.
Machine learning (ML), where a machine learns to perform tasks from data rather than through explicit programming, can present particular challenges to businesses and boards. There is anecdotal evidence that, when machines are left to their own devices, they can make decisions which are unlawful or otherwise discriminatory. One often-mentioned example1 concerned a technology company which used ML to assess job applications. Unfortunately for the firm, the ML system assessed applications against historic data consisting mostly of applications from men, a reflection of male dominance in the technology sector. As a result, the machine taught itself that applications from men were preferable and actively discriminated against female applicants. The high-profile failings of driverless car technologies are another example of how things can go awry, and facial recognition failures of the bald-man-identified-as-a-ball variety are well known.
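The mechanism behind the recruitment example can be sketched in a few lines of Python. The data and the "model" below are entirely hypothetical and deliberately simplistic; the point is that a system which scores applicants against skewed historic outcomes reproduces that skew without anyone having programmed it to discriminate.

```python
# Hypothetical data: historic applications as (gender, hired?) pairs,
# skewed towards hired male applicants as in the example above.
history = [("male", True)] * 80 + [("male", False)] * 10 + \
          [("female", True)] * 3 + [("female", False)] * 7

def hire_rate(gender):
    """Fraction of past applicants of this gender who were hired."""
    outcomes = [hired for g, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

# A naive "model" simply adopts historic hire rates as scores,
# so the bias in the data becomes the bias of the system.
scores = {g: hire_rate(g) for g in ("male", "female")}
print(scores)  # male ~0.89, female 0.30
```

Real ML systems are far more sophisticated, but the underlying risk is the same: the model optimises against whatever patterns the historic data contains, including inappropriate ones.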
Whilst, on one view, it may be argued that errors of this type are part of the learning process for ML capability, it is possible to envisage serious consequences for businesses where machines learn from past inappropriate human decisions. Such risk could present ethical challenges for companies and boards: it may not always be appropriate for machines to make decisions just because they can do so more cheaply and more efficiently. Even where there is no obvious failure of technology, most companies and boards need to satisfy themselves that the business can demonstrate how and why a system has made a decision. Simply saying the ‘computer says no’ is not adequate; companies need to be able to explain why a decision was made. The balance between commercial benefit and the potential for detriment arising from ‘black box’ systems means that boards will need to develop a clear position on their risk appetite.
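On the explainability point, a minimal sketch (the rules and thresholds are illustrative only, not any firm's actual criteria) is an automated decision function that records the reasons for each outcome at the moment it is made, so the business can later explain "why" rather than simply reporting that the computer said no.

```python
# Hypothetical sketch: an automated decision that records its reasons,
# so the outcome can be explained rather than just asserted.

def assess_application(income, existing_debt, max_debt_ratio=0.4):
    """Return a decision together with the reasons that produced it."""
    reasons = []
    ratio = existing_debt / income
    if ratio > max_debt_ratio:
        reasons.append(
            f"debt-to-income ratio {ratio:.2f} exceeds limit {max_debt_ratio}"
        )
    return {"approved": not reasons, "reasons": reasons}

decision = assess_application(income=30000, existing_debt=18000)
print(decision["approved"])  # False
print(decision["reasons"])   # the specific rule that was breached
```

Genuinely 'black box' models cannot be unpacked this simply, which is precisely why boards need a clear position on how much unexplainable decision making they are willing to accept.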
As reliance on algorithms grows, boards are going to be increasingly at risk of these complex tools failing. Risk can never be excluded, but failure by boards to challenge and demand evidence as to an algorithm’s efficacy may be difficult to defend if it turns out to have material weaknesses. Boards should look to satisfy themselves that AI capability is effective based on a credible internal audit trail, and may also wish to seek an external assessment undertaken by one of the specialist firms offering algorithm audits.
Data and information
Big data and information generally are essential to the operation of sophisticated AI capability. However, the data provided to AI systems must be of sufficient quality and untainted by human bias. ‘Rubbish in, rubbish out’ failings could lead to significant negative outcomes for firms.
Paradoxically, much of the data used in building AI capability is sourced via low-tech manual human input. Thousands of workers are said to be involved in labelling and classifying data for input into AI systems. There have been criticisms that the workers performing these tasks, so-called ‘ghost workers’, are sometimes low paid and suffer poor working conditions; such criticisms may be an added risk for boards to consider.
However, poor working conditions are not the only issue that may present risk to businesses using AI capability. A recent BBC article2 cites a study relating to YouTube which found that an algorithm had banned some LGBTQ content. It turned out that this did not reflect a weakness in the algorithm but rather the influence of the workers behind the scenes, who were based in a country where LGBTQ content was subject to censorship.
Boards also need to consider the extent to which customer data can be used in AI capability. Have customers consented to their information being used in this way? Is there sufficient transparency as to how their data will be used? How do companies deal with customers who refuse to allow their data to be used in this way?
The use of AI capability for marketing is already well established. However, as businesses now hold so much information on their customers there can be a fine line between developing targeted, acceptable marketing and approaches that are intrusive, manipulative or just plain creepy.
Using customer data may also create unintended commercial risk. For example, enhancing a customer service so that it is better tailored to an individual customer's profile could risk the firm appearing to provide a personal recommendation to that customer, thereby taking on unintended liability.
Both the Prudential Regulation Authority and the Financial Conduct Authority have produced helpful material on AI risk and board issues. As with much regulatory comment, the material produced by both regulators is useful for non-financial institutions as well.
Bank of England: “Managing Machines: the governance of artificial intelligence”. Speech given by James Proudman, Executive Director of UK Deposit Takers Supervision
Bank of England: “Machine learning in UK financial services”
BBC article “AI: Ghost workers demand to be seen and heard”
1 “Managing Machines: the governance of artificial intelligence” – speech by James Proudman, Executive Director of UK Deposit Takers Supervision, FCA Conference on Governance in Banking, 4 June 2019
2 BBC article “AI: Ghost workers demand to be seen and heard”