Are the opportunities of AI overstated?

AI is the most powerful tool we have built. It’s transformative, but maybe only for a fortunate few.

RESPONSIBLE AI

12/8/2023 · 3 min read

Digital Ethics Summit 2023 logo

AI is everywhere, in iPhones and Teslas and call centre chatbots. Nearly everything we touch bears the digital fingerprints of a machine learning model. But not everyone is benefiting. I was at the Digital Ethics Summit in London this week, and one of the conclusions that surprised me is that the opportunities harnessing AI could bring are not accessible to the majority of people on the planet.

Consider the impressive story of Osiris, developed by Addenbrooke's Hospital in Cambridge. This AI system has successfully cut the waiting time between referral and radiotherapy treatment, which will hopefully improve patient outcomes. The project was given £0.5M in grant funding, had access to patient data, and Microsoft made the foundation model open source. How many countries are able to bring these kinds of resources to bear? And even with these resources, it still took 10 years to deploy the system. How many countries can afford to invest for 10 years before seeing a return?


Consider the escalating cost of training the very largest models in use. OpenAI estimated that it cost them $4.6M to train ChatGPT. That money was spent on compute resources, the energy to power that compute, the effort to curate the training data, and months of effort to train the model. Air Street Capital predicts that the cost of training a next-generation large language model will exceed $1B next year. Who can afford that? Only the geopolitical systems with the capital, and the will, to create the education, utility and supply-chain infrastructure needed to train these very largest models: the US, the EU and China. The rest of us will have to make do with affordable pre-trained subsets of these LLMs for our needs.

Over-concentration of AI in a tripolar world is only one problem (some might say tripolar is better than the current Silicon Valley hegemony). The less visible problem is that the training data is over-concentrated in the same way. Roughly a third of the web pages on the internet are written by Americans, and no country in the Global South appears in the top 10 list of contributors of internet content. So it seems self-evident that current and next-generation LLMs are being trained on a dataset that underrepresents the majority of countries, and that their outputs are biased against all but the already wealthy countries.


Even in the wealthy countries, AI is starting to be used in situations that disadvantage already disadvantaged populations. A particularly attractive use case for cash-strapped governments is to build AI tools that assess applicants for means-tested benefits in an effort to root out fraud. These suspicion machines prompt human investigators with a list of the most likely fraudsters to investigate. Except that AI implemented badly, flagging false positives, causes actual harm to individuals. Just ask sub-postmasters in South Wales or benefit claimants in Rotterdam. Lives have been ruined by intrusive investigation and needless prosecution. The icing on this egregious cake is that, in both these examples, the cost savings of the implemented system were hugely overestimated by the consultancies implementing the technology.

Those who say that new AI tools will bring benefits to all are kidding the rest of us. If you are fortunate enough to be wealthy and living in a wealthy country, AI tools can give you opportunities to become even wealthier. But the rest of us will have to learn how to work around AI that doesn't represent us, doesn't understand us, and yet makes impactful decisions about our lives.