
AI in Business: Accountability, responsibility, and informed decision-making in the boardroom

By | 24/06/2022 in Blog posts

AI is widely recognised as a game changer, offering opportunities and benefits for businesses of all sizes. But with opportunity comes risk, and boards need to be aware of these risks in order to make informed decisions about where and how to use AI in their organisations. In this article we explore some of the steps companies need to take to design and implement governance arrangements that mitigate those risks.

In February 2022 the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) published the Final Report of the Artificial Intelligence Public-Private Forum (the PPF Report). Although the primary audience for the PPF Report is the financial services industry, many of the issues discussed are relevant to a wider commercial audience. This article summarises the benefits and risks associated with AI and provides some guidance on how these risks might be managed within a governance framework.

Benefits of AI

There are many potential benefits that can be derived from the use of AI within businesses. These benefits include:

  • Increased efficiency and productivity through the automation of tasks which are currently carried out manually
  • Improved decision making through the use of data-driven analytics
  • Enhanced customer experience through the personalisation of services
  • Creation of new products and services through the development of new insights from data

Risks associated with AI

There are also a number of risks associated with the use of AI which businesses need to be aware of. These risks include:

  • The potential for job losses as a result of automation
  • The possibility of biased decision making if data used to train AI systems is not representative
  • The need for significant investment in order to develop and implement AI systems
  • The potential for increased cyber security risks as a result of the use of AI technology
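The bias risk above is easy to check for in practice. As a minimal sketch (the dataset, attribute, and reference shares below are hypothetical, for illustration only), a business could compare the make-up of its training data against a reference population and flag under-represented groups before a model is trained:

```python
from collections import Counter

def representation_gap(records, attribute, reference_shares):
    """Compare each group's share of the training set against a
    reference population; a large negative gap signals that the
    group is under-represented in the data."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in reference_shares.items()
    }

# Hypothetical loan-application training data: 120 of 500
# applicants are recorded as "F", against a 51% reference share.
training = [{"gender": "F"}] * 120 + [{"gender": "M"}] * 380
gaps = representation_gap(training, "gender", {"F": 0.51, "M": 0.49})
```

A check like this is only a starting point: representative data does not guarantee unbiased decisions, but unrepresentative data makes bias very likely.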

Governance of AI

Given the potential benefits and risks of AI, it is important that businesses have a clear governance framework in place in order to ensure that AI systems are developed and used responsibly. This governance framework should cover areas such as data protection, ethical considerations and risk management.

Data protection

It is important to ensure that data is used responsibly and in a way that respects the privacy of individuals. Personal data should only be collected if it is necessary for the purposes of the AI system, and individuals should be informed about how their data will be used. It should also be made clear to individuals how they can access and correct their personal data if necessary.

Ethical considerations

It is important to consider the ethical implications of the technology. For example, AI systems may be used to make decisions that could have a significant impact on people’s lives, such as whether or not they are eligible for a particular job or insurance policy. It is important to ensure that these decisions are fair and unbiased.

There are also concerns about the potential for AI systems to be used for malicious purposes, such as creating fake news or spreading disinformation. It is important to consider how to prevent misuse of AI systems.

The PPF Report suggests as a starting point that businesses need to see the risk of AI within the context of their ‘existing governance frameworks and structures’, because AI models are likely to interact with other risk and governance processes such as data governance and operational risk management. It may therefore be possible to ‘use and adapt existing governance frameworks to manage the novel challenges of AI’ and/or combine ‘existing governance frameworks into one overarching AI governance framework’. It may also ‘be beneficial to develop a set of AI risk principles and map them to existing risk frameworks’. This could, for example, help a company to focus on new risks and assist with staff training. Other options might include the establishment of ‘a cross-functional body’ made up of disciplines such as audit, research and data, as well as other areas such as compliance for a financial services business.

On the other hand, the PPF Report acknowledges that adopting AI into a business may mean a company will need ‘new risk management and governance frameworks’. For some businesses a two-level framework of governance may be appropriate. The first level should be strategic, developing principles and standards for the whole business, with a second ‘execution-level’ framework ‘[applying] the standards on a use-case basis’ across the business.

Governance frameworks should be ‘aligned with the risk and materiality of the use-case’. High risk and high impact systems, such as those which result in significant consumer outcomes, will need more time and resource to work through than lower risk systems such as simple chat-bots.
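One way to make this proportionality concrete is a simple risk-tiering table that maps each class of use-case to the governance review it requires. The tiers, examples, and review steps below are hypothetical illustrations, not recommendations from the PPF Report:

```python
# Hypothetical tiering of AI use-cases by risk and materiality,
# mapping each tier to the level of governance review required.
RISK_TIERS = {
    "high": {
        "examples": ["credit decisions", "insurance underwriting"],
        "review": ["ethics committee", "model validation", "board sign-off"],
    },
    "medium": {
        "examples": ["marketing personalisation"],
        "review": ["model validation"],
    },
    "low": {
        "examples": ["simple chatbot"],
        "review": ["standard change control"],
    },
}

def required_review(tier):
    """Return the review steps a use-case in this tier must pass."""
    return RISK_TIERS[tier]["review"]
```

The design choice here is that the tier, not the individual project team, determines the minimum review, so high-impact systems cannot quietly opt for the lighter process.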

The PPF Report concludes that ‘accountability, responsibility, and informed decision-making are central themes to any discussion on AI governance.

. . .

Whether firms adapt existing governance structures or establish new frameworks, they need to clearly define the relevant roles and lines of accountability at all stages of the AI governance hierarchy’.

Responsibilities should be allocated across the business for each element of a company’s approach to AI, including the design and development of systems and their use on the ‘front line’, as well as to board members, who have ultimate responsibility for agreeing the company’s strategy.

The PPF Report aims to inform and shape future debate on AI governance and to contribute to the development of policies and practices that ensure the responsible and ethical use of AI technologies.

Peter Snowdon is a legal and corporate governance expert, with a particular interest in issues affecting financial services firms, banks and investment firms. A former partner at Norton Rose, he also worked for the Financial Services Authority (FSA) prior to joining Bvalco.
