A brief review of algorithmic trading
What is algorithmic trading?
Algorithmic trading, also referred to as algo trading, is the electronic execution of trading orders following a set of predefined instructions covering variables such as time, price and volume. The aim is to leverage speed and computational resources, to make trading more systematic by removing the effect of human emotion from trading decisions, and to ensure efficient execution of trades.
Automation in financial markets started in the early 1970s, when the New York Stock Exchange (NYSE) introduced the Designated Order Turnaround (DOT) system; an upgraded version, known as SuperDOT, was introduced in 1984. These systems allowed orders to be placed electronically. In the 1980s, as fully electronic markets gained popularity, program trading became widely used. Program trades carried predefined or pre-programmed instructions to enter or exit positions based on factors such as time and price. The following timeline shows the evolution of the types of trading algorithms used by financial institutions.
- Early 1970s — Designated Order Turnaround (DOT) system. DOT was introduced to computerise order flow and increase efficiency by sending orders directly to a specialist on the trading floor; the NYSE used it for small-order entries. The user places an order into the system, the order is routed to a specialist on the trading floor who executes it, and the user receives confirmation of the transaction in real time.
- 1984 — SuperDOT, an upgraded version of DOT. SuperDOT allowed users to place orders through software or an online service that accessed the exchange's electronic system.
- 1990s — Electronic Communication Networks (ECNs). ECNs changed the market structure and encouraged algo trading by allowing narrower differences between bid and offer prices.
- 1996–97 — IBM’s MGD and HP’s ZIP outperformed human traders. MGD is a modified version of the GD algorithm (named after its inventors Steven Gjerstad and John Dickhaut). GD calculates a belief function from recent market history that estimates, for each possible bid or offer price, the probability that an order at that price will be accepted. The modified version made one simple change: it forced the belief function to assign zero probability of acceptance to bids below the lowest recent trade price and to offers above the highest recent trade price. This reduced the volatility of the GD algorithm. The other algorithm, Zero Intelligence Plus (ZIP), was developed by HP. Both outperformed human traders and introduced a new approach to trading.
- 2005 — Regulation National Market System (Reg NMS). The regulation made it mandatory for market orders to be posted as well as executed electronically.
- 2006 — Automatic Program Algorithms. By 2006, one-third of European and US stocks were being traded using algorithms or automated electronic programs, making the process more efficient and faster than traditional trading.
- 2009 — High-Frequency Trading (HFT). This can be defined as a method of algo trading that makes use of complex programs to transact a large number of stock orders in a fraction of a second. This became popular when exchanges offered incentives to certain companies to add liquidity to the market. For example, liquidity providers such as supplemental liquidity providers (SLPs) provide liquidity (by creating high volume on exchanges — selling assets held in inventory or short selling assets) for existing quotes on the NYSE and add competition to the market. SLPs were introduced after the 2008 collapse of Lehman Brothers.
- In HFT, orders are placed and managed by algorithms that often play the role of a market maker. A market maker can be an individual or a firm, such as a brokerage house, that continuously quotes prices to both buy and sell securities. The order placement is two-sided: buy at the bid and sell at the ask, profiting from the bid-ask spread.
- 2010 — Flash Crash. This happened on 6 May 2010; it started at 2:32pm and lasted approximately 36 minutes. A flash crash is a rapid withdrawal of orders for a stock in an electronic securities market that quickly amplifies price declines before prices recover. A large volume of stock is sold in a very short period, producing sharp declines; by the end of the day, however, security prices had rebounded, as if the crash had never occurred.
- 2010 to present — AI-/ML-driven algo trading. Over the past decade, AI and ML have been used extensively in trading, from sentiment analysis to algorithmic prediction: analysing millions of data points and executing trades at the optimal price to improve efficiency, mitigate risk and target higher returns. AI and ML have a wide range of applications in trading, which we discuss further below.
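The one-line change that turned GD into MGD can be sketched in code. This is an illustrative reconstruction, not IBM's original implementation; `raw_belief` stands in for the history-based acceptance-probability estimate that GD computes.

```python
def mgd_belief(price, raw_belief, lowest_trade, highest_trade, side):
    """Modified GD belief: clamp the acceptance probability to zero
    outside the range of recently observed trade prices."""
    if side == "bid" and price < lowest_trade:
        return 0.0  # bids below the lowest trade price: assumed never accepted
    if side == "offer" and price > highest_trade:
        return 0.0  # offers above the highest trade price: assumed never accepted
    return raw_belief(price)

# Toy belief rising linearly with price between 90 and 110 (hypothetical).
toy = lambda p: max(0.0, min(1.0, (p - 90) / 20))
print(mgd_belief(85, toy, lowest_trade=95, highest_trade=110, side="bid"))   # 0.0
print(mgd_belief(100, toy, lowest_trade=95, highest_trade=110, side="bid"))  # 0.5
```

Clamping the belief to zero outside observed trade prices is what damps the wild quotes that made plain GD volatile.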
AI/ML in algo trading
AI has a wide range of applications in algorithmic trading. Trading systems using AI can execute orders on their own, without human intervention. AI algorithms can also enhance liquidity management and the execution of large orders with minimal market impact, by dynamically optimising order size and duration based on market conditions.
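Breaking a large order into smaller child orders is the simplest form of this. The sketch below is a minimal time-weighted (TWAP-style) slicer; a real execution algorithm would adjust slice sizes dynamically to liquidity, which this deliberately omits.

```python
def twap_slices(total_qty, n_slices):
    """Split a parent order into near-equal child orders (TWAP-style),
    distributing any remainder across the earliest slices."""
    base, rem = divmod(total_qty, n_slices)
    return [base + (1 if i < rem else 0) for i in range(n_slices)]

print(twap_slices(10_000, 6))  # [1667, 1667, 1667, 1667, 1666, 1666]
```

Each child order is then released on a fixed schedule, so no single print reveals the full size of the parent order.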
Using sentiment analysis to identify trends and trading signals is not new; traders have mined news reports and public announcements for decades to understand the impact of non-financial information on stock prices. Today, text mining and analysis of social media data such as tweets, Instagram posts and Facebook posts are performed using NLP algorithms, and the results can feed into trading decisions. These algorithms make it far easier to automate the analysis of, and identify patterns in, large volumes of data.
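As a toy illustration of the idea, a lexicon-based scorer counts positive and negative words in a headline. The word lists here are invented for the example; production systems use trained NLP models rather than hand-picked lexicons.

```python
# Hypothetical lexicons for illustration only.
POSITIVE = {"beat", "growth", "upgrade", "record", "strong"}
NEGATIVE = {"miss", "lawsuit", "downgrade", "recall", "weak"}

def sentiment_score(text):
    """Return a score in [-1, 1]: (positive hits - negative hits) / total hits."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(sentiment_score("strong quarter as earnings beat estimates on record revenue"))  # 1.0
```

A strategy might aggregate such scores across thousands of posts per ticker before treating the result as a signal.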
The differentiating factor between AI algorithmic trading and electronic/systematic trading is the use of reinforcement learning. Reinforcement learning-based AI models can adapt to dynamically changing market conditions, whereas traditional systems take a significant amount of time to adjust. Strategies based purely on past data may not deliver good returns, as trends or patterns identified historically may not recur in real-time data. The use of ML models therefore shifts the focus of analysis towards predicting and analysing trends in real time, using “walk-forward” tests instead of back-testing.
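The mechanics of a walk-forward test can be sketched as rolling train/test windows over a time series, so each model is evaluated only on data that lies strictly after its training window. This is a minimal index-only sketch; libraries such as scikit-learn offer similar time-series splitters.

```python
def walk_forward_splits(n_obs, train_size, test_size):
    """Yield (train_range, test_range) index pairs that roll forward in time,
    so every model is tested only on data it has never seen."""
    start = 0
    while start + train_size + test_size <= n_obs:
        yield (range(start, start + train_size),
               range(start + train_size, start + train_size + test_size))
        start += test_size  # roll the window forward by one test period

for train, test in walk_forward_splits(n_obs=10, train_size=4, test_size=2):
    print(list(train), "->", list(test))
```

Unlike a single back-test, each refit sees only the past, which mimics how the model would actually have been retrained in live trading.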
AI algorithmic trading has gone through different phases of development, adding a layer at every step of the traditional trading process. Initial algorithms focused mainly on buy and sell orders with simple parameters; these were followed by algorithms that allowed for dynamic pricing, and then by strategies that break up large orders to reduce potential market impact. Current algo trading strategies are based on deep learning, including neural networks designed to determine the best order placement to minimise market impact.
What started as a way to automate order placement and execution and reduce human intervention has transformed into using AI/ML models to improve trading decisions. ML-based algorithms adjust decision-making dynamically while trading; safeguards therefore need to be put in place to mitigate risks, and automated control mechanisms introduced to halt execution if the algorithm goes beyond the limits specified in the risk model.
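Such a control mechanism can be as simple as a kill switch that trips on a loss or order-count limit. The limits below are illustrative placeholders, not recommendations, and a production control would sit outside the trading process itself.

```python
class KillSwitch:
    """Automated control: halt the algorithm once cumulative loss or
    order count breaches the limits specified in the risk model."""
    def __init__(self, max_loss, max_orders):
        self.max_loss, self.max_orders = max_loss, max_orders
        self.pnl, self.orders, self.halted = 0.0, 0, False

    def record(self, trade_pnl):
        """Record a trade; return True while trading may continue."""
        self.pnl += trade_pnl
        self.orders += 1
        if self.pnl <= -self.max_loss or self.orders >= self.max_orders:
            self.halted = True  # stop sending new orders from here on
        return not self.halted

ks = KillSwitch(max_loss=50_000, max_orders=1_000)
print(ks.record(-20_000))  # True: still within limits
print(ks.record(-40_000))  # False: loss limit breached, trading halted
```

The key design choice is that the check runs on every trade, so a runaway model is stopped within one order of breaching its limit.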
Possible risks of AI in trading
The use of similar AI algorithms by multiple traders could create stress in the market. Arbitrage opportunities would shrink as the same models become widely used; this would reduce margins and bid-ask spreads, benefiting customers, but could also lead to convergence and one-way markets. Convergence also raises the risk of cyberattack, since it is easier for cyber criminals to influence or manipulate agents that all act the same way than agents with distinct behaviour.
AI algorithmic trading could also increase illegal practices aimed at manipulating the market. One such practice is “spoofing”, an illegal market practice in which bids to buy or offers to sell securities are placed only to be cancelled before the deal is executed. This creates a false sense of demand that manipulates market behaviour or the actions of other investors, allowing the “spoofer” to profit from the resulting fluctuations. Spoofing was possible before algo trading came into the picture, but it has gained prominence with the evolution of algo trading and high-frequency trading.
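One crude surveillance red flag for this pattern is an unusually high cancel-to-fill ratio. The sketch below computes only that ratio; real market-abuse detection also weighs order size, timing and price placement, and a high ratio alone is not proof of spoofing.

```python
def cancel_to_fill_ratio(orders):
    """orders: list of dicts with 'status' of 'filled' or 'cancelled'.
    Returns cancelled/filled; infinity when nothing was ever filled."""
    cancelled = sum(o["status"] == "cancelled" for o in orders)
    filled = sum(o["status"] == "filled" for o in orders)
    return float("inf") if filled == 0 else cancelled / filled

# Hypothetical order log: 98 cancellations against only 2 fills.
orders = [{"status": "cancelled"}] * 98 + [{"status": "filled"}] * 2
print(cancel_to_fill_ratio(orders))  # 49.0
```

A surveillance system would compare this ratio against a trader's peers and history before escalating for review.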
In July 2013, the US Commodity Futures Trading Commission (CFTC) and Britain’s Financial Conduct Authority (FCA) brought a case against spoofing in which the Dodd-Frank Act was applied for the first time. In 2011, Michael Coscia had placed spoofed orders to gain a profit of nearly USD1.6m; he was later charged with six counts of spoofing and fined USD1m.
To reduce the risks of using AI-based models in trading, safeguards need to be put in place. Every release of an algorithm should be tested before it is deployed for trading, and automated control mechanisms should be in place to immediately switch the model off and contain the risks associated with it.
How Acuity Knowledge Partners can help
Our unique offering, delivered by our team of data scientists, engineers, SMEs in banking operations and trading, and specialists in the compliance domain, provides significant support in the field of algorithmic trading. This includes automating data-pipeline ingestion, automating trading-strategy implementation, developing AI and machine learning models, and building robust data-quality controls to ensure business and integrity logic is followed. With our expertise, our clients confidently navigate the complexities of algorithmic trading, enabling faster and more accurate execution.
About the Author
Neelima is a Data Scientist with 2.5 years of experience in building advanced analytics and ML models. She has programming experience in Python, C++ and SQL, and knowledge of Big Data tools (Hadoop, Hive).
Originally published at https://www.acuitykp.com