Simon Pamplin, Director of Systems Engineering, WEST EMEA
Since the collapse of Lehman Brothers, controversy and complications have plagued the banking industry. However, behind the lending debates and economic blame game, the IT infrastructure holding the industry up has taken a hit too.
It’s now common to read about banking ‘glitches’ that have caused network outages, or about integration problems stemming from legacy IT systems. These often result in severe financial, operational or PR consequences for banks. Poor IT integration and legacy issues, for example, led to deals such as Santander’s proposed buyout of 316 UK branches from RBS falling through. The Spanish bank cited IT integration costs and the complications of separating out branches and creating a new business as the reasons it pulled out of the deal.
Outdated IT systems are a big risk for the banking industry. With complications around IT integration in the banking sector curbing growth and impacting customer satisfaction, it’s more important than ever to have a reliable, capable and scalable network in place. Whether it’s in the retail or investment sector, banks simply can’t afford their networks to go down.
The uptake of smartphones and tablets is driving an increase in online and mobile banking, which requires 24/7 network access. As the banking industry innovates, legacy networks are fast becoming business inhibitors, failing to deliver the kind of capacity needed to meet consumer demand.
For investment banks, a reliable and secure network is even more vital from a financial point of view, with world markets dependent on them. A network collapse could result in catastrophic losses across international markets.
Consumers won’t tolerate outages and limited access to mobile services and the stock markets won’t stop because one investment bank’s infrastructure has fallen over. So how can financial organisations ensure that their legacy IT infrastructures can meet such demands?
In the past, IT departments have chosen the “sticking plaster” approach, one that tries to sweat the most out of legacy infrastructure before reluctantly initiating change. However, this is only a short-term solution: ultimately, the network will still fail to meet the business’ overall needs.
Data centre of the future – the “On-Demand” data centre
It’s clear that a new approach is needed: one that represents a major evolution in networking toward a highly virtualised, open and flexible network infrastructure, and one that will evolve with new technologies and practices such as Software-Defined Networking. With an infrastructure that combines physical and virtual networking elements, customers can provision the capacity they need (compute, network, storage and services) to deliver high-value applications faster and more easily than with legacy data centre networks.
How can banks begin to make the transition to the data centre of the future and secure their networks? There are a few key steps to consider:
At the heart of any data centre is the physical networking infrastructure, which provides the connectivity between applications, servers and storage. However, not all networking infrastructures are equal. Banks that want to embrace a highly flexible, agile on-demand model (one that delivers a blueprint unifying vital areas of the data centre, from fabrics to storage to physical and virtual infrastructure) need a fabric-based networking topology. A fabric-based network, at both the IP and storage layers, simplifies network design and management to address the growing complexity of today’s IT and data centres, and delivers key features such as logical chassis, distributed intelligence and automated port profile migration.
On top of the physical infrastructure sits a virtual or logical layer. This is well established in the server domain with hypervisor technology. The same concepts are now being applied to both storage and IP networks, with technologies such as overlay networks enabled through a variety of tunnelling techniques. Next we will see network services themselves virtualised, thanks to the introduction of virtual switches and routers. Network Function Virtualisation represents an industry movement towards software or Virtual Machine (VM)-based form factors for common data centre services.
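To make the overlay idea concrete, here is a minimal sketch in Python of VXLAN-style encapsulation, one of the tunnelling techniques used to build overlay networks. The helper names are illustrative, not taken from any particular product; the header layout follows the VXLAN framing (an 8-byte header carrying a 24-bit Virtual Network Identifier, or VNI, which keeps each tenant’s overlay traffic separate on the shared physical fabric).

```python
import struct

VXLAN_UDP_PORT = 4789  # well-known UDP port for VXLAN transport


def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend a VXLAN header to an inner Ethernet frame (sketch only).

    Header: 1 flags byte (0x08 = valid-VNI bit), 3 reserved bytes,
    then a 24-bit VNI followed by 1 reserved byte.
    """
    header = struct.pack(">II", 0x08 << 24, (vni & 0xFFFFFF) << 8)
    return header + inner_frame


def vxlan_vni(packet: bytes) -> int:
    """Recover the VNI from an encapsulated packet."""
    _flags_word, vni_word = struct.unpack(">II", packet[:8])
    return vni_word >> 8
```

In a real deployment the encapsulated frame would then be carried in a UDP datagram to port 4789 between tunnel endpoints; the sketch only shows how the VNI separates tenants at the header level.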
In addition to the physical and virtual/logical layer will be controllers (for the network, servers and data storage). One such example is the network controller, which is implemented in software and tracks the status of the network and provides well-defined KPIs. The complete architecture is built around applications that directly affect the underlying infrastructure and guarantees the best possible application uptime, performance and security.
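As an illustration only (this is not any specific controller’s API), the controller’s status-tracking role described above can be sketched as a component that records link state pushed up from the network and derives a simple availability KPI from it:

```python
class NetworkController:
    """Toy sketch of a software network controller: tracks per-link
    status and reports a well-defined availability KPI."""

    def __init__(self):
        self.links = {}  # link name -> True (up) / False (down)

    def report(self, link: str, up: bool) -> None:
        """Network devices push status changes here."""
        self.links[link] = up

    def availability_kpi(self) -> float:
        """Percentage of known links currently up."""
        if not self.links:
            return 100.0
        return 100.0 * sum(self.links.values()) / len(self.links)
```

A production controller would of course track far richer state (topology, flows, latency) and expose it to the applications layered on top, but the pattern is the same: the network reports in, and the controller turns raw status into KPIs that applications and operators can act on.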
Finally, the entire data centre environment must be managed by orchestration frameworks that allow for the rapid and end-to-end provisioning of virtual data centres. There are many approaches in the market, such as VMware vCloud Director and the OpenStack community. OpenStack, for example, allows customers to deploy network capacity and services in their cloud-based data centres far quicker than with legacy network architectures and provisioning tools.
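The orchestration pattern (declare the virtual data centre you want, and let the framework work out the provisioning actions needed to get there) can be sketched generically in Python. This is illustrative logic only, not the vCloud Director or OpenStack API; the resource names are made up for the example:

```python
def reconcile(desired: dict, current: dict):
    """Compute the provisioning actions needed to reach a desired state.

    desired / current map resource names (networks, VMs, volumes)
    to their specs. A real orchestration framework applies this kind
    of comparison end to end across the virtual data centre.
    """
    to_create = {n: s for n, s in desired.items() if n not in current}
    to_delete = [n for n in current if n not in desired]
    to_update = {n: s for n, s in desired.items()
                 if n in current and current[n] != s}
    return to_create, to_update, to_delete
```

For example, asking for a resized subnet and a new volume while dropping an unused network yields exactly the create/update/delete actions an operator would otherwise work out by hand; that automation is what makes on-demand provisioning so much faster than legacy tools.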
The data centre of the future will therefore be a combination of the most valuable aspects of the physical and virtual layers. Such a data centre will give banking organisations the ability to flexibly deploy data centre capacity in real-time, whenever and wherever they need it.
It will also deliver an improved return on investment, through better scaling, multi-tenancy, and savings in time and money.
So banks wanting to make the journey to the on-demand data centre must look for technology partners focused on delivering a network infrastructure that enables this vision. Find this kind of partner, and the banking industry will be well on its way to creating a data centre environment for the future, one that protects its networks and prepares them for ever-increasing volumes of data.