When Bots Take Over the Stock Market: Evasion Attacks Against Algorithmic Traders

Elior Nehemya, Yael Mathov, Asaf Shabtai, Yuval Elovici

In recent years, machine learning has become prevalent in numerous tasks, including algorithmic trading. Stock market traders utilize learning models to predict the market's behavior and execute an investment strategy accordingly. However, learning models have been shown to be susceptible to input manipulations called adversarial examples. Yet, the trading domain remains largely unexplored in the context of adversarial learning, mainly because the rapid changes in the market impair an attacker's ability to mount a real-time attack. In this study, we present a realistic scenario in which an attacker influences algorithmic trading bots by manipulating the input data stream in real time. The attacker crafts a universal perturbation that is agnostic to the target model and the time of use, while also remaining imperceptible. We evaluate our attack on a real-world market data stream and target three different trading architectures. We show that our perturbation can fool the model on future, unseen data points, in both white-box and black-box settings. We believe these findings should serve as an alert to the finance community about the threats in this area and prompt further research on the risks associated with using automated learning models in the finance domain.

1 Introduction

In recent history, stock markets have been a major vehicle for personal and institutional investing. When buying or selling financial assets via the stock exchange, traders gain or lose money based on changes in the assets' value. To maximize profits, a trader needs to predict changes in the market as accurately as possible. However …
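To make the notion of a universal perturbation concrete, the following is a minimal toy sketch, not the paper's method: it assumes a simple linear "trading model" over a window of price returns, for which a single fixed vector `delta = -eps * sign(w)` (the closed-form universal direction for a linear classifier, in the spirit of FGSM) flips many predictions regardless of which time window it is added to. All names and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "trading model": predict BUY (1) if w @ x > 0, else SELL (0),
# where x is a window of 10 recent price returns.
w = rng.normal(size=10)

def predict(x):
    return int(w @ x > 0)

# Universal perturbation (toy): for a linear model, the same small vector
# -eps * sign(w) pushes EVERY input toward SELL, independent of the
# time window it is added to.
eps = 0.5
delta = -eps * np.sign(w)

# Simulated return windows standing in for a live market data stream.
windows = rng.normal(size=(1000, 10))
clean = np.array([predict(x) for x in windows])
attacked = np.array([predict(x + delta) for x in windows])

flipped = int(((clean == 1) & (attacked == 0)).sum())
print(f"BUY signals before attack: {clean.sum()}, after: {attacked.sum()}, flipped: {flipped}")
```

The key point the sketch illustrates is time-agnosticism: `delta` is computed once and reused on every window, unlike a per-input adversarial example that would have to be recomputed as the market moves.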