Requirements for AI corporate governance

How should companies set up their AI corporate governance to be accountable to a complex set of regulations?

ORGANISATION DESIGN · RESPONSIBLE AI

12/15/2023 · 3 min read

Regulation adds to the confusion

AI technology moves far faster than regulation ever will. The UK Online Safety Act took five years to become law; ChatGPT took a mere two months to reach over 100 million users from launch. To stay relevant without needing to be rewritten every few months, the EU has chosen to make its AI Act principles-based. It then expects the market to settle on standards that fulfil the act’s principles, so that by demonstrating compliance with the appropriate standards, companies can demonstrate compliance with the act.

The trouble is that, at the time of writing, online standards hubs list over 360 standards. There is horizontal regulation, such as the EU AI Act. There is vertical regulation, such as the UK FCA’s rules for the financial sector. And there may be use-case-specific regulation, such as rules on the use of AI in medical imaging.

Furthermore, it seems that every consultancy, big tech company and even non-governmental organisation publishes its own principles on AI governance and ethics. From Accenture to UNESCO, via the G7, IBM, Microsoft and the OECD, everyone has a view on how companies should govern their use of AI. Which standard or framework will win out? Companies are left to pick the framework they like best and hope for the best.

Governance across the supply chain

Few companies will have all of the following: access to training data, the capability to design large-scale foundation models, the resources to train them, and the market opportunities to apply them. Each company will instead take a position in the AI supply chain. Coherent governance, however, is required across the whole of that chain.

For example, for a decision made by a model to be unbiased, the data must be unbiased, the training of the foundation model must be unbiased, and the application of the model must be unbiased. These three steps may be performed by three different companies, and each company’s governance methodology must be able to describe how bias is identified and removed in a way that satisfies the rest of the chain, and regulators.
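To make that concrete, here is a minimal sketch of one check a company in the chain might run and publish at its own stage: a demographic-parity gap on model decisions. The group labels, field layout and 0/1 approval encoding are invented for illustration; a real audit would use whichever fairness metrics the chosen standard requires.

```python
# Hedged sketch: measuring a demographic-parity gap at one stage of
# the AI supply chain. Groups and data are illustrative only.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between groups.

    decisions: iterable of (group, approved) pairs, approved in {0, 1}.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    gap, rates = demographic_parity_gap(sample)
    print(f"approval rates: {rates}, parity gap: {gap:.2f}")
```

Each link in the chain could publish a figure like this so that the next company, and regulators, can verify the step they depend on.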

How to make corporate governance an enabler

The best corporate governance is set up as an enabler: it gives the organisation room to experiment and find exactly the right applications for AI. The right governance framework should be principles-based, matching the principles of regulators:

  • Transparency

  • Reliability

  • Justice

  • Accountability

  • Liability

The framework should be able to describe to regulators, customers and suppliers how these principles are assured.
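One way to make that description concrete, sketched below under assumed control and evidence names, is an assurance register mapping each principle to the controls that implement it and the evidence that proves them:

```python
# Hedged sketch: a register mapping the five principles above to
# controls and evidence. All control names and paths are invented.
ASSURANCE_REGISTER = {
    "transparency":   {"controls": ["model cards published", "decision logs retained"],
                       "evidence": "model_cards/v3/"},
    "reliability":    {"controls": ["pre-release sandbox tests", "drift monitoring"],
                       "evidence": "test_reports/2023-Q4/"},
    "justice":        {"controls": ["bias audit per release"],
                       "evidence": "bias_audits/"},
    "accountability": {"controls": ["named model owner", "sign-off workflow"],
                       "evidence": "raci_matrix.xlsx"},
    "liability":      {"controls": ["supplier contracts cover AI harms"],
                       "evidence": "contracts/ai_clauses/"},
}

def assurance_summary(register):
    """Render the register as a plain-text statement for an external party."""
    return "\n".join(
        f"{principle}: {'; '.join(entry['controls'])} (evidence: {entry['evidence']})"
        for principle, entry in register.items()
    )

print(assurance_summary(ASSURANCE_REGISTER))
```

The value of the structure is that the same register can be rendered for a regulator, a customer or a supplier without rewriting the underlying controls.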

AI governance should maintain surveillance over regulations, tracking changes so that the rest of the company doesn’t have to.

Governance can provide enablement services that accelerate development of AI solutions such as:

  • An ethically safe dataset

  • Sandbox testing service

  • Pre- and post-hoc anonymisation tools (see the sketch after this list)

  • Pre-built AI DevOps for compliance functions (the equivalent of Data Subject Access Requests under UK GDPR)
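As an example of the third item, here is a minimal sketch of a pre-hoc pseudonymisation tool that replaces direct identifiers with salted hashes before records reach a training pipeline. The identifier field list and salt handling are assumptions for illustration; a production service would add key management and re-identification risk checks.

```python
# Hedged sketch: pre-hoc pseudonymisation of direct identifiers.
# Field list and salt handling are illustrative assumptions.
import hashlib
import os

IDENTIFIER_FIELDS = {"name", "email", "phone"}  # assumed direct identifiers
SALT = os.urandom(16)  # in practice, managed and stored per dataset

def pseudonymise(record):
    """Return a copy of record with identifier fields replaced by hashed tokens."""
    out = {}
    for key, value in record.items():
        if key in IDENTIFIER_FIELDS:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            out[key] = digest[:12]  # short token, stable within this dataset
        else:
            out[key] = value
    return out

if __name__ == "__main__":
    record = {"name": "A. Person", "email": "a@example.com", "score": 0.87}
    print(pseudonymise(record))
```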

The corporate governance team should model diversity, both in the people on the team and in the skills it covers. The best governance teams will combine:

  • Data science

  • Behavioural science

  • Big data engineering

  • DevOps

  • Regulation, quality and ethics

AI governance that acts merely as a proxy for the regulator will be a brake on the business and a missed opportunity. The best governance enables the rest of the business through modelled principles, surveillance of regulations, development tools and a diverse team.