Don’t leave your AI vulnerable to hackers

One of the surest signs that AI development is just like any other software development is the fact that it can be hacked like any other software.

SOFTWARE ENGINEERING · RESPONSIBLE AI

10/31/2024 · 1 min read

Man with a Halloween pumpkin for a head writing computer software

The novelty of AI is the use of models to generate insight. A data scientist will engineer features, test hypotheses and create models whose efficacy can be described in quality metrics. They’ll consider the ethical impact of their model, whether it’s fair to cohorts and individuals. They’ll harden it against edge cases and against small perturbations in features producing large differences in decisions. They’ll protect it against decisions being maliciously skewed by poisoned training data. The hardened model is then deployed, insights are generated, job done.
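As a rough sketch of what that perturbation hardening might look like in practice, the check below estimates how often small random changes to the input features flip a model’s decisions. It assumes a scikit-learn-style classifier with a `predict` method and already-scaled numeric features; the model, data and threshold are illustrative, not a prescribed test.

```python
import numpy as np

def perturbation_flip_rate(model, X, epsilon=0.01, n_trials=100, seed=0):
    """Estimate the fraction of decisions that change under small random
    feature perturbations. Assumes a scikit-learn-style `predict` method
    and a 2D numpy array of scaled features (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    flipped = 0.0
    for _ in range(n_trials):
        noise = rng.normal(scale=epsilon, size=X.shape)
        flipped += np.mean(model.predict(X + noise) != baseline)
    return flipped / n_trials

# Usage (hypothetical): a high flip rate suggests the model is fragile
# flip_rate = perturbation_flip_rate(trained_model, X_validation)
```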

Except it’s not. Even if the model’s decisions are safe, there’s the entire supply chain to assure as well. Models are often derived from Foundation Models (for example Large Language Models). How do we know that these Foundation Models have been maintained securely? After all, the widely used pickle format for packaging models is insecure and can be tampered with easily and invisibly.
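To see why, here is a minimal sketch of the well-known pickle deserialization risk: unpickling can execute arbitrary code supplied by whoever produced the file. The payload below is a harmless `echo`, purely for illustration.

```python
import os
import pickle

class Payload:
    # pickle calls __reduce__ when serializing; on load, the returned
    # callable is executed, so an untrusted pickle can run arbitrary code.
    def __reduce__(self):
        return (os.system, ("echo 'arbitrary code ran during model load'",))

malicious_bytes = pickle.dumps(Payload())

# Anyone who "loads the model" runs the attacker's command:
pickle.loads(malicious_bytes)
```

Weight-only formats such as safetensors avoid this class of problem because loading them never executes code.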

The application that uses the model also depends on third party libraries installed through third party package managers, and may access cloud services through third party APIs. How do we assure that these were all built to the right level of security rigour? For example, typo-squatting could trick a developer into handing over access credentials or sensitive information.
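One practical mitigation is to refuse to use any downloaded artefact, model weights or package alike, unless it matches a known-good digest. The sketch below shows the idea; the file path and expected hash are placeholders, not real values.

```python
import hashlib

def verify_sha256(path: str, expected_hex: str) -> bool:
    """Return True only if the file at `path` has the expected SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex

# Usage (hypothetical values): refuse to load anything that fails the check
# assert verify_sha256("models/foundation.bin", "<expected sha256 hex digest>")
```

Package managers offer similar protection; pip, for instance, has a `--require-hashes` mode that pins every dependency to a known digest.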

The answer is to remember that, in the main, as far as hackers are concerned AI software is just software. So all the best security practices of code hardening, risk assessment of third parties and red teaming are still needed for AI software. New code, old dangers.