PUE, or Power Usage Effectiveness, has been the watchword in data centre circles for years, but for valid reasons it has never fully captured the imagination of those in charge of IT infrastructure at financial institutions. Now, by adding further metrics covering resilience and design capacity, The Green Grid has ensured financial organisations will sit up and take notice of its new Performance Indicator (PI), showing the business how risk and efficiency are inextricably linked.
Taking data centre efficiency to the next level
Up until a few months ago it seemed that hardly a day went by without a PUE story in the data centre or tech press – a brand new data centre highlighting how efficiently it used power, or a government department patting itself on the back for having the most eco-friendly data centres.
But a PUE score, while extremely useful for measuring energy efficiency, tells only part of the story. It gives no indication of how likely that data centre is to fail or whether, when operating at full load, the temperatures reached impact performance. As a result, for investment banks and other financial institutions, the metric was in many ways beside the point.
The focus for investment banks and other mission critical facilities is to minimise risk and ensure 100% uptime. The business will happily sacrifice energy and capacity to protect against failure. The energy costs are so insignificant compared to the turnover that they will always remain a secondary concern.
In fact this comparison between the financial sector and other sectors highlights how far PI goes beyond PUE and the huge benefits it offers data centres in any sector. PUE still plays an important part, but only a part. The other metrics within the new Performance Indicator should convince financial institutions of the benefits.
PI – Made to measure
The Green Grid’s new approach looks at three metrics. The first is PUE, which measures how efficient the facility is in relation to its defined energy target. The second, IT Thermal Resilience, identifies the risk of equipment overheating in the event of cooling failure or planned maintenance. Finally, there is IT Thermal Conformance, which looks at how much of the IT equipment is operating with a suitable inlet air temperature during normal operation.
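The relationship between the three metrics can be illustrated with a minimal sketch. Note this is a simplified illustration, not The Green Grid's official calculation methodology: the target PUE, the 27°C "recommended" inlet limit and the 32°C "allowable" limit are assumed values chosen for the example.

```python
def pue_ratio(target_pue: float, actual_pue: float) -> float:
    """How close the facility is to its defined energy target, as a percentage."""
    return 100.0 * target_pue / actual_pue

def thermal_conformance(inlet_temps_c: list[float], recommended_max_c: float = 27.0) -> float:
    """Share of IT equipment with a suitable inlet air temperature during
    normal operation (%). The 27 degC limit is an assumed, ASHRAE-style value."""
    ok = sum(1 for t in inlet_temps_c if t <= recommended_max_c)
    return 100.0 * ok / len(inlet_temps_c)

def thermal_resilience(failure_temps_c: list[float], allowable_max_c: float = 32.0) -> float:
    """Share of IT equipment still within an allowable inlet temperature when
    cooling is degraded by failure or maintenance (%). 32 degC is assumed."""
    ok = sum(1 for t in failure_temps_c if t <= allowable_max_c)
    return 100.0 * ok / len(failure_temps_c)

# Example: four racks, with inlet temperatures measured during normal
# operation and again under a simulated cooling failure.
normal = [22.0, 24.5, 26.0, 28.5]
failure = [25.0, 29.0, 31.5, 34.0]

print(pue_ratio(target_pue=1.4, actual_pue=1.6))  # 87.5
print(thermal_conformance(normal))                # 75.0
print(thermal_resilience(failure))                # 75.0
```

The point of the sketch is that the three numbers move independently: a facility can score well on efficiency while a quarter of its racks would overheat during a cooling failure, which is exactly the risk PUE alone never exposed.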
Using these metrics together gives an unprecedented overview of the performance of any data centre. For the first time the business can see the link between energy, risk and capacity. More importantly, by changing one, we can now see the impact on the other two. It provides a baseline model for understanding the here and now, which in turn allows change to be predicted and measured. Progress can be tracked over time, and when any change is introduced, its effect on performance can be assessed.
While most data centres want a balance of energy efficiency, reliability and capacity, a financial institution will focus on reliability. Some of the most critical financial data centres are based in city centres to improve latency and often have to compromise size for location. In this case, setting a benchmark that focuses on getting the most out of that small space while maintaining reliability would be the most appropriate approach.
There are other data centres in this space that compromise both efficiency and reliability in the pursuit of speed and capacity, knowing they can quickly redirect the data elsewhere if there is a problem. For them the benchmark would be focused entirely on getting the most out of the data centre without worrying about risk. The beauty of PI is that the business can now decide what is important to them.
Predicting the future
There are four levels to the new PI. The first two focus purely on measuring the current state of the data centre. This is valuable in itself, but it also sets a solid base for doing far more. Levels three and four focus on the future state, using computational fluid dynamics to predict what will happen when things inevitably change, either through failure or after a technology upgrade.
The most advanced and reliable implementation of PI requires simulation. With this data you can accurately predict the outcome of any given scenario. For example, if you simply want to introduce newer and faster technology, you can see in advance whether any problems will arise against your benchmark. Alternatively, and considering the use cases of some financial institutions, you might want to find out if you are running your facility at full capacity, or if there is room to do more and do it faster.
The powerful benchmarking and predictive capabilities of the highest level of PI provide a reliable framework for creating a data centre tailored precisely to your business requirements, now and in the future: whether that is a data centre performing at an optimal, reliable level, or one pushing the boundaries of reliability to deliver more capacity and speed at critical times.
It was understandable why mission-critical facilities and financial institutions had no interest in PUE, a stand-alone metric focused purely on power efficiency. With resilience, capacity and future predictions added to the mix, it could be argued that the case for PI is now compelling not only from an operational and technical perspective, but also in business terms. If the market isn't already trying it, there is the advantage of early adoption. If it is, how can you expect to compete without it?