Posted By Gbaf News
Posted on April 18, 2017

By Drew Shields, Chief Technology Officer at Trading Technologies
The financial services industry has traditionally had a reputation for acting cautiously — keeping risks to a minimum. Over the past decade, though, competition across industries has forced financial services companies to become more nimble and more aggressive. One of the most important moves many companies have made has been to embrace cloud computing.
But moving to the cloud and leveraging it effectively are two different things. Companies in the trading space, like those in other industries, are tapping the cloud in creative ways to build new applications and scale more effectively. As they add cloud resources, they face challenges when it comes to managing cloud costs, optimizing cloud resources and improving cloud governance.
At Trading Technologies, as early adopters of cloud technologies in capital markets, we’ve seen firsthand the issues companies face when transitioning to the cloud. We’ve learned important lessons about how to manage assets across typical proprietary data centers and the cloud.
We develop software for professional traders, connecting them with futures exchanges throughout the world. Risk management is critical to our business. But speed and functionality are, too, and in recent years it became clear that we needed to build a next-generation trading platform to compete at the highest level of efficiency in our industry, and continue to serve our customers as best we can.
In the past, trading systems were built and deployed in roughly the same manner. Today, there are a variety of approaches for building these systems – and two of them leverage the cloud to varying degrees.
The first method is to create a classic deployed trading system, which is still the norm in our industry. You allocate hardware in a data center. You deploy software to it. Every client application needs some kind of dedicated connectivity. These systems are costly to maintain, but generally give you the ability to optimize every part of the system for performance.
On the opposite end of the spectrum is the new “cloud trading platform.” These systems leverage web services accessible over the internet, typically via a thin client. They simplify accessibility and often boast high performance, but in reality they don’t source raw market data and order-routing feeds directly, and are therefore significantly slower, less secure and less reliable than their classic, large-infrastructure predecessors.
We went with a third option when we built TT – a hybrid solution that hosts some assets in the cloud and others in private data centers. By blending the two approaches, we don’t have to compromise performance for accessibility, cost for customizability, or control for stability.
The platform itself is a turnkey solution that improves the performance for complex trading strategies and gives brokers and traders access to integrated pre-trade risk controls. It doesn’t rely on others to aggregate market data or route orders through an outside infrastructure. It embraces new technologies like HTML5 and JavaScript, making the system easier to access.
The cloud portion is used for a handful of functions. We use Cassandra and Scala in AWS to create a “forever audit trail,” making sure every message is encrypted, stored and retrieved. This implementation combines the best of what the cloud offers in terms of scalability and cost. We also use the cloud to store UI preferences and settings, the security master database and historical market data going back more than 10 years. This hybrid cloud-colo implementation allows us to better scale and reduce the costs our users incur.
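The core property of an audit trail like the one described above is that records, once written, can be stored and retrieved but never silently altered. As a minimal sketch of that idea — not TT’s actual implementation, which uses Cassandra and Scala on AWS with encryption — here is a hash-chained, append-only log in plain Python, where each entry commits to the hash of the previous one so any later tampering is detectable:

```python
import hashlib
import json
import time

def _entry_hash(ts, message, prev_hash):
    """Deterministic SHA-256 over the entry's canonical JSON form."""
    payload = json.dumps({"ts": ts, "message": message, "prev": prev_hash},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_record(log, message):
    """Append a message to a tamper-evident audit log.

    Each entry stores the hash of the previous entry, so modifying or
    deleting any earlier record breaks the chain. (Illustration only; a
    real deployment would also encrypt payloads and persist entries to
    a durable store.)
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    ts = time.time()
    entry = {"ts": ts, "message": message, "prev": prev_hash,
             "hash": _entry_hash(ts, message, prev_hash)}
    log.append(entry)
    return entry

def verify(log):
    """Recompute every hash and confirm the chain is unbroken."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != _entry_hash(entry["ts"], entry["message"],
                                        prev_hash):
            return False
        prev_hash = entry["hash"]
    return True
```

Verification walks the chain from the first record, so an auditor can confirm the full history without trusting the storage layer itself.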
Still, our move to the cloud has not been completely free of challenges. Working with AWS can be complicated, especially when it comes to managing Reserved Instances and figuring out how to optimize those costs on an ongoing basis. To help navigate these issues, we worked with a reseller, SHI, to implement CloudHealth, a cloud service management (CSM) platform.
Based on our experience, the following are some important ways a CSM platform can help you manage costs as you scale your own cloud implementation:
- Sprawl: We have an extremely fluid engineering environment that prizes speed and agility. A corollary of that is that sometimes people spin up instances and forget to bring them back down. That can add cost quickly. Being able to identify and terminate unused infrastructure eliminates a significant amount of sprawl.
- Reporting: Optimizing cost versus efficiency means monitoring for utilization across CPU, memory, disk and network. Detailed reports can help deliver recommendations and insights for immediate savings.
- Rightsizing and productivity: Getting recommendations for how to provision the right assets for the right workloads saves a lot of hassle for the staff. Engineering should be focused on exactly that – engineering. We don’t want them distracted by performance and rightsizing issues.
- Automation: A recent test showed that we can save 30 percent on compute costs by running lights-on/lights-off policies for non-production infrastructure that is only needed during times of peak access or demand.
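The arithmetic behind a lights-on/lights-off policy is straightforward to sketch. Assuming on-demand instances billed hourly (actual savings depend on your pricing model and how cleanly workloads tolerate shutdown), the estimate is just the fraction of the week an environment sits idle:

```python
def weekly_savings(on_hours_per_week, hourly_rate, instance_count=1):
    """Back-of-the-envelope estimate of weekly savings from powering
    off non-production instances outside their needed window, compared
    with running them 24x7. Returns (dollars saved, fraction saved).
    """
    full_week = 24 * 7  # 168 hours
    always_on_cost = full_week * hourly_rate * instance_count
    scheduled_cost = on_hours_per_week * hourly_rate * instance_count
    saved = always_on_cost - scheduled_cost
    return saved, saved / always_on_cost

# Example: a dev environment needed 12 hours a day on weekdays
# (60 hours/week) idles for 108 of 168 hours if left running.
saved, fraction = weekly_savings(on_hours_per_week=60,
                                 hourly_rate=0.10,
                                 instance_count=20)
```

The numbers here are hypothetical; in practice the realized figure is lower than the idle fraction suggests, since some non-production systems can’t be stopped on schedule — which is consistent with the 30 percent we measured.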
Revamping your core isn’t easy, and I know from experience that there is no good one-size-fits-all approach. There are always many options to consider, and cloud technologies can play a pivotal role in the decision. Taking time to make cloud part of the solution, rather than just another problem, can get you where you need to go.
Drew Shields is Chief Technology Officer at Trading Technologies, a provider of professional trading software and solutions.