As data-driven decision-making becomes more common across industries, firms need to protect their algorithms from attacks and oversights that may result in failure. One specific type of attack, adversarial data injection, involves feeding subtly altered data to a model to change its output. Adversarial data can trick fraud detection systems at banks into accepting altered photos of cheques and fool facial recognition systems used for identity verification.
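To make the idea concrete, below is a minimal sketch of one standard way to craft such an input, the fast gradient sign method (FGSM), which nudges each feature of a legitimate input in the direction that most increases the model's error. The toy linear classifier, random input and perturbation budget are illustrative assumptions for this sketch only; nothing here describes any specific bank's system or Robust Intelligence's methods.

```python
import torch
import torch.nn.functional as F

# Toy stand-in for a deployed classifier (e.g. fraud vs. legitimate).
# The model, input and labels are all hypothetical.
torch.manual_seed(0)
model = torch.nn.Linear(4, 2)

x = torch.randn(1, 4, requires_grad=True)  # a legitimate input
true_label = torch.tensor([0])

# Compute the loss and its gradient with respect to the *input*,
# not the weights: the attacker perturbs the data, not the model.
loss = F.cross_entropy(model(x), true_label)
loss.backward()

# FGSM: take a small step on each feature in the sign of the gradient,
# i.e. the direction that increases the loss the fastest.
epsilon = 0.5  # perturbation budget; small relative to the input scale
x_adv = (x + epsilon * x.grad.sign()).detach()

print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The point of the sketch is that the perturbation can be tiny, often imperceptible to a human reviewing a cheque image or a face photo, yet it may still flip the model's decision.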
INSEAD Assistant Professors Georgina Hall and Pavel Kireyev talk with Yaron Singer and Kojin Oshiba, co-founders of Robust Intelligence, a Sequoia-backed company, about how algorithms can protect other algorithms and how firms can reduce the risk of failure in their decision-making systems.