While artificial intelligence has not yet turned out the way science fiction predicted, machine learning systems are definitely working all around us. Many of them aren’t especially complex, but they do adapt their output based on internal algorithms, learned parameters, and external stimuli. Most folks understand that a learning system can be corrupted by feeding it poor input, but actually doing so is not trivial. Some German researchers have developed a proof of concept for ‘poisoning’ a learning system given some knowledge of its learning vectors (seed data, algorithm, etc.). Their research revealed less about whether poisoning could be done and more about the magnitude by which a well-crafted poisoning attack can sway a system. It wouldn’t be far-fetched for a financial institution’s high-frequency trading system to become the target of such an attack: an attacker only needs to alter its behavior in the slightest of predictable ways to completely cash out on its actions.
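To make the idea concrete, here is a minimal sketch of a poisoning attack against a toy learner. Everything here is an illustrative assumption, not the researchers' actual method or target: the "learner" is a simple nearest-centroid classifier over one-dimensional feature values, and the attacker, knowing the seed data and algorithm, injects a few plausible-looking training points to drag one class centroid toward a target input.

```python
# Toy data-poisoning sketch (illustrative only; names, data, and the
# classifier are assumptions, not from the researchers' proof of concept).
# The learner predicts the class whose training-set mean is closest
# to the input value.

def train(data):
    """data: list of (value, label) pairs -> dict mapping label to class mean."""
    sums, counts = {}, {}
    for value, label in data:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(means, value):
    """Return the label whose class mean is nearest to value."""
    return min(means, key=lambda label: abs(means[label] - value))

# Clean training data: "low" clusters near 1.0, "high" near 5.0.
clean = [(0.8, "low"), (1.0, "low"), (1.2, "low"),
         (4.8, "high"), (5.0, "high"), (5.2, "high")]

target = 3.2  # input the attacker wants misclassified as "low"
print(predict(train(clean), target))           # -> high

# Poisoning: inject a few individually unremarkable "low" points that
# drag the "low" centroid upward without touching the "high" class.
poison = [(2.9, "low"), (3.0, "low"), (3.1, "low")]
print(predict(train(clean + poison), target))  # -> low
```

The point the sketch echoes from the article: each poisoned sample is a small, predictable nudge, yet together they flip the system's decision for the attacker's chosen input.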