Are the risks of AI overstated?
AI is the most powerful tool we have built. It’s worth managing the risks to realise the transformative benefits.
RESPONSIBLE AI
12/10/2023 · 2 min read
There will be around 40 national elections in 2024. The ones that catch the eye are in the US, the UK, Germany, Poland, India and Taiwan. In the 2020 US presidential election, disinformation factories with hundreds of staffers are estimated to have spent tens of millions of dollars publishing false information. That disinformation caused real harm, eroding trust in the electoral process, and its ramifications are still working their way through the courts. Generative AI has cut the cost and effort of doing the same in the 2024 elections a thousand-fold.
GenAI can do all of the following: write the content; enrich it with audio and video; post it everywhere; add comments; link, upvote and otherwise fool the social algorithms into giving it prominence; and bring disinformation into your personal feed. It can do all of this with only a few prompts to guide it, for a cost of tens of thousands of dollars and a week or so of a software engineer's time. It's possible that everyone who votes next year will have seen as much disinformation as information before voting.
Some breakdown of trust and pollution of public discourse is sadly inevitable. But will it be enough for bad actors with global reach to change votes?
Recent studies in scientific journals suggest that the answer is yes, in three distinct ways: by polarising voters so that voting becomes an expression of identity rather than an evaluation of policy; by discouraging people from turning out at all; or by persuading people to believe lies about a candidate, thereby changing their vote.
Technical solutions are being proposed to watermark political content so that consumers can be sure it is genuine. But these need to be accompanied by education, so that people learn to treat unwatermarked content with suspicion, and by empathy and a willingness to listen to other points of view, so that people emerge from their own social media echo chambers.
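To make the idea concrete, here is a deliberately simplified sketch of content provenance in Python. It is an illustration only, not any real watermarking scheme: production proposals such as C2PA use public-key certificates and embed signed metadata in the media file itself, whereas this demo binds text to a hypothetical publisher key with an HMAC. The key and function names are my own assumptions for the example.

```python
# Illustrative sketch only: a simplified provenance check.
# PUBLISHER_KEY is a hypothetical secret for the demo; real schemes
# (e.g. C2PA) use public-key signatures, not a shared secret.
import hashlib
import hmac

PUBLISHER_KEY = b"demo-secret-key"

def sign_content(content: str) -> str:
    """Return a hex signature binding the content to the publisher's key."""
    return hmac.new(PUBLISHER_KEY, content.encode(), hashlib.sha256).hexdigest()

def verify_content(content: str, signature: str) -> bool:
    """True only if the content is unmodified and was signed with the key."""
    expected = sign_content(content)
    return hmac.compare_digest(expected, signature)

original = "Candidate X's official statement on policy Y."
sig = sign_content(original)

assert verify_content(original, sig)            # genuine content verifies
assert not verify_content(original + "!", sig)  # any tampering fails
```

The consumer-side lesson is in the second assertion: a valid signature proves the content is unchanged since signing, but the absence of a signature proves nothing, which is why the education piece matters.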
This one example illustrates what's at stake in our use of GenAI. When the risks of AI include inhibiting free and fair democratic processes, it's hard to argue that they are overstated. The solutions are in small part technical; in much larger part, they require us to fight our own human nature.
But the biggest risk, as I also argued in my last blog post, is that whole communities and countries are unable to access the transformative benefits of AI because of bad actors and the concentration of resources solely among the wealthy.