By Eric Jorgensen, VP Sales EMEA, Virtual Instruments
Updating and consolidating an organisation’s IT environment for the optimum use of storage, and the handling of sensitive data, are topics of the highest priority. Across all industries, but particularly within banking and finance, the sheer volume of data and the need for its real-time availability and agility, maintaining compliance and meeting Service Level Agreements for financial applications are all issues as high on the business agenda as ever.
For many financial institutions, the existing technological infrastructure simply cannot keep up with the demands of today’s datacentres. CIOs and storage infrastructure departments are faced with the prospect of escalating costs and disruption to daily business. When it comes time for IT updates, decisions made around investing in new capacity, applications and infrastructure management tools are critical to an organisation’s performance and future. Delaying is not an option. According to Gartner, “By 2015, 20 percent of Global 1,000 organisations will have established a strategic focus on ‘information infrastructure’ equal to that of application management.”
While it’s all well and good to build your datacentre specifications and ideal design from the ground up – that is, if you have the luxury of investing millions in an entirely new datacentre with the fastest and newest technology – the reality is that procurement and upgrades follow a process, and this takes time. There is a definite gap between the introduction of a product and its adoption and growth, so there is often a lag behind the hype of the latest datacentre developments, such as Flash storage systems. In addition, many will be faced with upgrading within a legacy mainframe environment. This can produce a range of escalating interoperability issues, and the challenges facing each organisation are entirely unique.
So how do you go about creating the ideal efficient datacentre infrastructure within these current constraints and challenges?
The answer is one that some have already started to employ with successful ROI results: a highly informed and phased methodology with a strategic twist – it is all based around performance. The datacentre should be looked at from the position of Infrastructure Performance Management (IPM). This involves looking at the infrastructure as a whole and then drilling down to focus on the areas that require work, such as growing applications. That way you get a view of what is working well, and where any changes need to be made. It’s also important to be informed about how legacy systems have dealt with issues in the past and to have absolute clarity on budgets and maintenance plans.
You can work with the infrastructure you have and take a gradual approach to applications and decisions about datacentre upgrades, while taking advantage of the many ‘try before you buy’ opportunities currently on the market. While maintaining some older technologies can still make sense, trying to squeeze the most out of existing systems does not always make for the best economic decision in the long term. Technology advancement has brought new solutions to the market that offer a better, more modern alternative to an all-out replacement.
When it comes to datacentre migration and consolidation, it is always useful to learn from the experiences of others. A large Scandinavian bank is one such organisation that is no stranger to these challenges. It wanted to gain cost efficiencies by modernising its IT infrastructure to mitigate risk and ensure it continued to be competitive in the future. With thousands of business and personal accounts, it was crucial that transactions continued to run 24/7, so there was no option to switch off servers – as might be possible with other types of business – making maintenance windows limited.
The Storage Infrastructure team quickly realised that in order to make informed decisions during the migration and consolidation programme they needed to be able to monitor the entire infrastructure during the transformation. Using an Infrastructure Performance Management solution is invaluable for companies going through this period of change, allowing analysis and the collection of valuable data. Previous tools only showed storage usage, so if something went wrong it could take days, potentially weeks, to locate the problem – real-time monitoring allows much better organisational control.
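To illustrate the difference between capacity-only reporting and real-time performance monitoring, here is a minimal, hypothetical sketch (not the bank’s or VirtualWisdom’s actual logic) of flagging I/O latency spikes against a rolling baseline – the kind of anomaly that usage-only tools would miss entirely:

```python
from collections import deque

def detect_latency_spikes(samples_ms, window=5, factor=2.0):
    """Flag samples exceeding `factor` times the rolling average of the
    previous `window` samples -- a crude stand-in for the real-time
    anomaly detection an IPM platform performs continuously."""
    history = deque(maxlen=window)
    spikes = []
    for i, latency in enumerate(samples_ms):
        if len(history) == window:
            baseline = sum(history) / window
            if latency > factor * baseline:
                spikes.append((i, latency))
        history.append(latency)
    return spikes

# Hypothetical I/O response times (ms), sampled once per second
samples = [4.1, 3.9, 4.3, 4.0, 4.2, 4.1, 12.5, 4.0]
print(detect_latency_spikes(samples))  # the 12.5 ms reading is flagged
```

A production platform would of course correlate such spikes across hosts, switches and arrays rather than inspect one stream in isolation, but the principle is the same: continuous measurement turns a days-long hunt into an immediate alert.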
By monitoring its infrastructure, the bank was able to spot bottlenecks in its storage area network and instantly find a solution. It was also able to pinpoint and utilise untapped existing resources, such as thousands of unused ports, instead of purchasing new switches, thereby making an immediate saving. In addition, the right IPM solution helps to ensure performance is maintained across mission-critical applications, so vital for this industry. The bank introduced a programme of modernisation, including risk, change management and staff competency training. A steering committee reviewed risks each week. The modernisation programme for the infrastructure involved the relocation of a datacentre to negate proximity risks, and following on from this the bank also decided to build an entirely new centre from scratch, using the lessons learned from the first move.
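Reclaiming unused ports is conceptually simple once per-port measurements exist. As a hedged sketch, assuming a hypothetical export of per-port traffic counters from a SAN monitoring tool (the port names and threshold are illustrative, not from the source):

```python
def find_unused_ports(port_traffic, threshold_bytes=0):
    """Given per-port traffic counters, return ports whose observed
    traffic is at or below `threshold_bytes` -- candidates for reuse
    before buying new switches."""
    return sorted(port for port, traffic in port_traffic.items()
                  if traffic <= threshold_bytes)

# Hypothetical counters: switch port -> bytes transferred this week
traffic = {"sw1/p1": 9_200_000, "sw1/p2": 0, "sw2/p1": 0, "sw2/p2": 310}
print(find_unused_ports(traffic))  # → ['sw1/p2', 'sw2/p1']
```

The saving comes not from the query itself but from having the instrumentation in place: without measured counters per port, “thousands of unused ports” are invisible and the default decision is to buy more hardware.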
The organisation managed to successfully integrate an entirely new storage solution with automatic tiering. After thorough analysis, the relocation began with the aim of decommissioning the old hardware and closing the datacentre to save cost. The old datacentre was 1,200 square metres – the new one is only 230 square metres, which means it is greener and more energy efficient, and, as everything is virtualised, it uses less space.
The proximity risk was resolved; the Network, Storage, SAN, Server and Infrastructure were all modernised; and virtualisation was introduced for the first time, reaching a 90% virtualisation ratio by the end of the migration. In this highly complex programme the bank moved 1,600 pieces of equipment over a six-month period, virtualised the system, which significantly lowered costs, and migrated the storage to new technology.
There is no doubt that this safe, highly transparent, phased approach, adopted by some of Europe’s largest banks and financial services organisations, worked and has helped future-proof their systems even within the complex and fluctuating environment of the finance industry. Their experience proves that although the prospect of datacentre consolidation may sound formidable, with proper visibility, planning and management, better ROI and optimum datacentre performance are an achievable outcome.
About Virtual Instruments
Virtual Instruments delivers the industry’s only real-time Infrastructure Performance Management solution. The award winning VirtualWisdom® platform provides unparalleled visibility into the performance, health and utilisation of the entire open systems infrastructure – empowering customers to guarantee the performance and availability of their mission critical applications across physical, virtual and cloud computing environments. Through a unique combination of software and hardware, VirtualWisdom captures, persists, correlates, analyses and presents a breadth and depth of data never before possible. This highly accurate and comprehensive view enables customers to stop reactive troubleshooting, start managing performance and achieve cost optimisation. Virtual Instruments can be found online at http://www.virtualinstruments.com.