We cannot wait for Regulation to stop being ineffectual

Organisations and individual practitioners mustn’t wait for regulators to tell them to be responsible.

RESPONSIBLE AI

11/24/2023 · 2 min read


Standards are essential in any marketplace. Wherever goods and services flow in exchange for perceived value at a negotiated price, there must be a robust, detailed definition of what those goods and services actually are. Compliance assures quality, safety, and interoperability. Proof of compliance demonstrates accountability and transparency. Achieving best practice efficiently reduces cost. Rejecting solutions that cannot demonstrate compliance reduces the cost of rectifying future mistakes. All of this takes cost out of the marketplace, making goods and services affordable for more people. Standards are good.

The problem is that there are currently too many standards in the AI space. A quick search on https://aistandardshub.org/ returns 360 different standards (across all domains). Who has time to review them all, figure out which are relevant, and build a combined feature set that complies with just that relevant subset? Having too many standards is the same as having none.

The marketplace will eventually choose its preferred standards, the ones that are cheapest to adhere to, and goods and services that backed the wrong ones will eventually die out. Even standards that earn their place in the marketplace will eventually be replaced by something newer, better, or cheaper. RIP Betamax, MiniDisc, GSM. But whilst multiple standards coexist in the marketplace, there is structural cost from inefficiency.

Regulation is important in marketplaces that have the potential for societal harm. Regulatory frameworks provide a set of guidelines and requirements that should protect consumers from those harms. Regulation generally avoids mandating adherence to any specific standard, because governments do not want to distort marketplaces. For the same reason, regulations are often voluntary rather than mandatory. Both of these factors make regulation ineffectual when there are no settled standards.

In previous new technology sectors, the technology typically races ahead, regulation takes years to catch up, and standards settle down over decades. We do not have that much time to enforce responsible AI through regulation.

AI is already demonstrably causing societal harms. The technology is moving fast, is already cheap enough to be democratised, and is therefore already hugely impactful. Organisations and individual practitioners mustn't wait for regulators to tell them to be responsible. They mustn't use a confusing standards environment as an excuse for not knowing how to be responsible. Their purview must extend beyond the individual buyers of their goods to the society they are part of. The stakes are too high.