Guest Commentary from Quantitative Brokers’ Robert Almgren Responding to Chicago Fed’s John McPartland’s White Paper:

I would like to thank John McPartland for proposing what I think of as a “full employment act for quants.” As a quantitative analyst myself, I think that his proposals would provide an extremely rich set of opportunities for sophisticated traders to leverage their analytical capabilities to understand, better than other market participants, what is really happening in the market and how to profit from it.

But as a market participant, I think that these proposals would complicate a trade match process whose main goals should be simplicity, robustness, transparency, and predictability. All traders should easily be able to know what will happen when they submit a market or a limit order. Exchanges should concentrate on making their match engines absolutely bullet-proof and as rapid as possible, and then leave it to market participants to design optimal strategies. Exchanges can continue to use their existing surveillance functionality to identify and penalize obviously harmful behaviors, but for the most part they should let the market work.

Most of the recent high-profile problems at exchanges have come from over-complication on the side of the exchange.

  • In May, the CME attracted negative press attention because trade fills were disseminated a few milliseconds earlier than quote updates. Some participants were able to use one-lot “canary orders” to detect price moves before they became publicly known, and to profit from that information. The fix for that problem is to keep the match engine simple, so that all information can be disseminated essentially instantly, or at least with internal processing lags smaller than the client-side processing lags that would be needed to profit from the information. Fancier algorithms would make the technology burden worse, not better.
  • In equity markets, one enabler of predatory high-frequency trading is the existence of dozens of different order types (34, by one count). The most notorious were “flash orders”, but various other order types have subtle nuances that can provide advantages to participants who really understand them and know all the possibilities. Anecdotes from market participants suggest that very few really understand or are even aware of them all. If the exchanges offered only a few simple order types, then market participants could instead spend their time understanding the underlying assets being traded.
  • It seems that about once a month, we have a news story about an exchange experiencing a technology error that shuts down trading for some length of time. Making the underlying algorithms more complicated will only make this worse, and distract from the exchanges’ fundamental role: to provide a reliable and efficient platform to deliver matches between buyers and sellers, with rules that make it easy for everyone to understand what is happening.

There is little evidence that high-frequency trading is actually bad for markets. As McPartland cites, most academic studies conclude that HFT is generally good for liquidity and for price discovery. Most of the obvious predatory strategies are kept in check by exchange rules, for example requiring a minimum ratio of fills to limit orders, and by surveillance.

One must also remember that allocation rules have no effect on market orders. It is still generally true that anyone who is willing to pay the spread can execute immediately against the displayed quantity. At worst, the visible quantity may have been withdrawn a few milliseconds earlier, but use of a marketable limit order will prevent the order from filling at worse prices. The allocation algorithm affects only participants who are hoping to improve their execution by the amount of the bid-ask spread, or those who are trying to outsmart all other users by extracting predictive information. The hypothetical virtuous fundamental trader who has a strong opinion about the proper price has no reason to worry about market details at all. And reducing the minimum price increment, or “tick size”, might be the best thing that exchanges could do to advantage the fundamental traders.
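The protection that a marketable limit order provides can be made concrete with a small sketch. The book representation and function below are toy constructions of my own, not any exchange’s actual matching logic: the order sweeps displayed liquidity up to its limit price, and any remainder rests rather than trading through at worse prices.

```python
def execute_marketable_buy(offers, qty, limit_px):
    """Toy matching of a marketable buy limit order against a displayed
    offer stack. offers: list of (price, size) tuples. Fills occur only
    at or below limit_px; the unfilled remainder rests instead of
    trading at worse prices."""
    fills, remaining = [], qty
    for px, size in sorted(offers):          # best (lowest) offer first
        if px > limit_px or remaining == 0:
            break
        take = min(size, remaining)
        fills.append((px, take))
        remaining -= take
    return fills, remaining

# Suppose part of the displayed size at 100.01 was withdrawn a few
# milliseconds before arrival: the order fills what is left and the
# rest joins the book, never paying more than its limit.
fills, rest = execute_marketable_buy([(100.01, 5), (100.02, 7)], 10, 100.01)
```

Whatever the allocation rule on the passive side, this aggressive-side behavior is unaffected, which is the point of the paragraph above.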

With that as background, let me comment on a few details of McPartland’s suggestions:

(1) Mixing of pro rata and time priority algorithms has been implemented by several exchanges. The CBOT (part of CME Group) uses a simple percentage split for 2-year Treasury futures and for some calendar spread contracts. NYSE LIFFE has been through several iterations of its weighted pro rata algorithm. The version described by McPartland, in which the pro rata weighting is adjusted by the index number of each order in the queue, was introduced in August 2007. A December 2007 white paper by Karel Janecek and Martin Kabrel of RSJ Invest in Prague described how that algorithm could be gamed by splitting an order into a number of one-lots and one large chunk; the one-lots served only to push down the priority of other orders. In August 2010 LIFFE replaced that algorithm with one in which the weighting was determined by the preceding *volume* in the queue rather than the *number* of preceding orders, which is not susceptible to gaming by splitting.
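The splitting exploit is easy to reproduce in a toy model. The two weighting functions below are illustrative stand-ins of my own devising, not the actual LIFFE formulas: one weights each resting order by its size divided by its queue index (the 2007-style scheme), the other by its size divided by the volume resting ahead of it (the 2010-style scheme). Splitting 100 lots into an 81-lot chunk plus nineteen one-lots inflates the splitter’s share under index weighting, but not under volume weighting.

```python
def allocate(queue, incoming, weight_fn):
    """Toy weighted pro rata: each resting order receives a share of the
    incoming quantity proportional to weight_fn(index, size, preceding_volume).
    queue: list of (owner, size) in time order."""
    weighted, preceding = [], 0
    for idx, (owner, size) in enumerate(queue, start=1):
        weighted.append((owner, weight_fn(idx, size, preceding)))
        preceding += size
    total = sum(w for _, w in weighted)
    shares = {}
    for owner, w in weighted:
        shares[owner] = shares.get(owner, 0.0) + incoming * w / total
    return shares

index_weight  = lambda idx, size, prec: size / idx        # toy 2007-style
volume_weight = lambda idx, size, prec: size / (prec + 1) # toy 2010-style

fair  = [("A", 100), ("B", 100)]
# A still posts 100 lots in total, but as one chunk plus 19 one-lots
# that push B's queue index (hence B's weight) far down.
gamed = [("A", 81)] + [("A", 1)] * 19 + [("B", 100)]

for name, wfn in [("index", index_weight), ("volume", volume_weight)]:
    a_fair  = allocate(fair, 100, wfn)["A"]
    a_gamed = allocate(gamed, 100, wfn)["A"]
    print(f"{name} weighting: A's fill {a_fair:.1f} fair vs {a_gamed:.1f} gamed")
```

Under index weighting, A’s allocation jumps well above its fair two-thirds share; under volume weighting the split actually costs A slightly, since the one-lots sit behind their own preceding volume.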

In April 2013, LIFFE announced a new version of the weighted pro rata algorithm, to take effect May 29 for Short Sterling and Euroswiss, which shifted the weighting function to make it more like time priority (a QB technical report describes the changes and compares them to CME’s approach). Because of unspecified technical difficulties on the first day of trading, this change was rolled back, to be reintroduced on some future date. All this illustrates that designing allocation algorithms that suitably interpolate between pro rata and time priority is not trivial, nor is it easy to implement them correctly and robustly. Exchanges should keep it simple.

(2) Using a function of time rather than sequence, and allowing orders to “go back in time” to acquire earlier priority, are intriguing ideas. But they violate a very important principle: market participants should be able to understand how their order will be handled. In a FIFO market, I know that my position in the queue depends only on the quantity present in the book at the time I submit my order, which is public. In a pro rata market, later orders may arrive and reduce my fill fraction, but again this is public information and I may modify my order in response.

If fill percentages depend on all the separate *times* that previous orders arrived in the book, then I cannot imagine a data distribution protocol that would let me form an accurate picture of how likely I am to be filled (especially given the proposal that exchanges should disseminate only total size at each level). If later orders are allowed to jump ahead of me because of their fixed lifetime, it is even harder. Because I am a quant, I can imagine several ways in which I might build statistical models, but they will not be easy.

(3) Discrete-time auction matching has been proposed since long before high-frequency trading. For example, Nicholas Economides and Robert A. Schwartz in 1995 pointed out that trade execution requires buyers and sellers to meet in both price and time. Just as discrete price grids facilitate meeting in price, discrete auctions facilitate meeting in time. The random time proposed by McPartland is a new element.

Most proposals for discrete auctions assume that the state of the order book is continuously disseminated as new quotes arrive (as opposed to, say, the sealed bid process for US Treasury auctions). It is hard to see how this would reduce the data flow compared to continuous execution. There will perhaps be even more rapid positioning of orders in the book compared with a market in which matched orders are immediately executed and removed.

There might be even less disincentive to post fleeting orders under the auction mechanism than under continuous trading. If I post a bid or offer into the market and cancel it after 50 msec, then with an auction occurring at a random time within each 500 msec window, I know that the probability of a fill is at most 50/500 = 1/10. In a continuous marketplace, a hostile trader may jump on my order at any time, since he can choose his execution time in response to my order.
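The 1/10 bound is easy to sanity-check with a short Monte Carlo sketch. The assumption that the auction time is uniformly distributed over the window is mine; McPartland does not specify a distribution.

```python
import random

def fill_probability(lifetime_ms, interval_ms, trials=100_000, seed=7):
    """Estimate the chance a fleeting order is matched: the order rests
    for lifetime_ms somewhere inside an interval_ms window containing a
    single auction at a uniformly random time. It fills only if the
    auction lands while the order is live."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        start = rng.uniform(0, interval_ms - lifetime_ms)  # order posted at a random point
        auction = rng.uniform(0, interval_ms)
        if start <= auction < start + lifetime_ms:
            hits += 1
    return hits / trials

p = fill_probability(50, 500)   # close to 50/500 = 0.1
```

Whatever the order’s arrival time, the auction lands in its 50 msec life with probability exactly 50/500, so fleeting quotes face only a one-in-ten chance of ever being held to their price.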

Another difficulty with random auctions is the complexity of hedging. A common strategy for trading correlated assets is to post limit orders in each, and when one limit order is filled, immediately send an aggressive order in the other asset. This relies on the ability to execute immediately, that is, on a market in which certain execution is available to anyone willing to pay the spread. Execution at random times would make such hedging strategies very difficult to implement precisely, and would provide ample scope for quants to model the inherent risk.
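The hedging loop just described can be sketched in a few lines. The class, method names, and asset symbols below are hypothetical illustrations, not any real broker or exchange API.

```python
from dataclasses import dataclass

@dataclass
class Order:
    asset: str
    side: str   # "buy" or "sell"
    qty: int

class PairQuoter:
    """Toy version of the paired-asset strategy: rest passive orders in
    two correlated assets; when either passive order fills, immediately
    fire an aggressive (marketable) order in the sibling asset to hedge."""
    def __init__(self, asset_a, asset_b):
        self.other = {asset_a: asset_b, asset_b: asset_a}
        self.hedges_sent = []

    def on_fill(self, fill: Order):
        # Hedge by crossing the spread in the sibling asset, opposite side.
        hedge_side = "sell" if fill.side == "buy" else "buy"
        self.hedges_sent.append(Order(self.other[fill.asset], hedge_side, fill.qty))

# Illustrative pair, e.g. 2-year vs 5-year Treasury futures.
q = PairQuoter("ZT", "ZF")
q.on_fill(Order("ZT", "buy", 10))   # passive buy fills -> aggressive sell in ZF
```

Under continuous matching, the hedge leg executes at once against displayed size; if execution happened only at random auction times, the position would sit unhedged for an unpredictable interval, which is precisely the risk a quant would then be paid to model.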

To summarize, I think that while the proposals in this white paper are intriguing, they would make life vastly more difficult for most users of markets. They violate the fundamental principle that execution results should be predictable. Immediate and certain execution should be available for those who are willing to pay the spread, and for those who choose to try to capture the spread, the rules should be simple and clear. Exchanges should focus on technological solidity and speed. Sophisticated execution strategies should be left to those who choose to make it their business, and others may choose to use execution brokers to capture the final basis points of execution quality. Keep execution simple.

Robert Almgren
President and Cofounder
Quantitative Brokers
