What would you do to get an extra 30 milliseconds in your business day? Well, if you work in the capital markets, plenty.
After all, this is the industry that inspired not one but two enterprises to lay cable across the Arctic Ocean last year to shave around 20 milliseconds off the path between trading centers in London and Tokyo. And for years now, trading firms have paid co-location fees to exchanges to place their systems as close as possible to the computers executing trades, cutting distance and, therefore, time for each trade. No doubt that explains why, in a recent survey SAP conducted of top IT infrastructure issues in the capital markets, low latency led the list, trumping other critical concerns such as access to global markets, risk controls, and even co-location.
It’s not rocket science to understand why traders are so obsessed with saving a millisecond every chance they get. According to the London Financial Times, in 2012 the Quantitative and Derivative Strategies group inside Morgan Stanley estimated that 84% of all buying and selling on U.S. markets was done through computer-to-computer trades, up from around half in 2010. Thus, any advantage a trading firm has is not in the waving of hands by a savvy trader on an exchange floor; rather, it’s the ability to shave milliseconds during machine-to-machine communications.
But it is, in a way, rocket science to know how to save those precious clicks of the clock. You need an information infrastructure capable of processing and analyzing enormous amounts of data in real time, taking advantage of all the efficiencies and optimizations that come from, if you will, the co-location of a server’s memory and processor. Effective trading algorithms need to consider a wide range of information sources, from news analytics and volatility conditions to multi-asset risk modeling and sentiment analytics, and then act on the results; not in some report that prompts a trader to consider buying or selling, but by the system itself, in real time, while that momentary advantage still exists.
Let’s take a fictitious example of a high-frequency trade on the NASDAQ exchange, where computer-to-computer trading is the norm. With a 30 millisecond advantage, a trading firm can buy stocks likely to be bought after the opening bell by, say, a mutual fund or any other entity without the time advantage. In this example, the trading firm’s algorithm identifies an equity and acquires 5,000 shares of it at $21 per share within 50 milliseconds of the NASDAQ open. It then immediately sells those shares to the mutual fund at $21.01, a penny-per-share profit, or $50 in this example.
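The arithmetic of that hypothetical trade can be checked in a couple of lines. The figures below (5,000 shares, $21.00 in, $21.01 out) come straight from the example; prices are kept in integer cents only to sidestep floating-point rounding.

```python
# Hypothetical round trip from the example above, priced in cents.
shares = 5_000
buy_cents = 2_100   # $21.00 per share
sell_cents = 2_101  # $21.01 per share

profit_dollars = shares * (sell_cents - buy_cents) / 100
print(f"${profit_dollars:.2f}")  # $50.00
```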
For those outside the capital markets, that might not sound like much of a profit for such a technology investment. However, as those in the industry understand, with nearly 8 million trades per day on NASDAQ alone, the opportunities for profit are compelling. With 3,600,000 milliseconds in every hour, a trader with a 30 millisecond edge theoretically has 120,000 opportunities every hour of the trading day to make a penny or two per share. To say that it all adds up to real money is an understatement.
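The opportunity count above is simple division: the number of 30-millisecond windows that fit into one hour. A quick sketch of that back-of-the-envelope calculation:

```python
# How many 30 ms edge windows fit into one trading hour?
ms_per_hour = 60 * 60 * 1000  # 3,600,000 ms
edge_ms = 30

windows_per_hour = ms_per_hour // edge_ms
print(windows_per_hour)  # 120000
```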
SAP HANA delivers that 30 millisecond advantage to traders in high-frequency markets. It also restores the functional elegance that many firms worked so hard to achieve and have seemingly lost to the technology debt that plagues most trading platforms today. It seems logical to me, then, that if traders are willing to pay a premium for high-speed trans-Arctic networks or spend millions annually on co-location facilities to save precious fractions of a second on each trade, the benefits of SAP HANA are obvious when little bits of time equal lots of money, particularly since SAP HANA can be implemented in a non-disruptive manner.
* * *
Please join me Tuesday, 18 June, in New York at the Securities Industry and Financial Markets Association (SIFMA) annual conference, SIFMA Tech 2013, for my keynote, where I will discuss the next generation of intelligent trading applications built on in-memory computing.