
Three Steps MTDC Providers Can Take to Reduce Energy Expenses

Jun 2022
Colocation

IT leaders everywhere want the flexibility to run workloads wherever it makes the most sense: in on-premises data centers, public clouds, edge environments, and, increasingly, multi-tenant data centers (MTDCs).

Whether the goal is an optimized experience for remote workers, access to cutting-edge technologies, compliance with regional regulatory mandates, or simply lower management costs, MTDCs have become a critical part of highly distributed architectures. And while demand and occupancy are on the rise, so too are operating expenses for the MTDC providers who have assumed responsibility for the care and feeding of tenant applications and infrastructure.

While every MTDC facility requires power to function, savvy providers are focused on finding ways to become more energy-efficient in an effort to control, and ideally reduce, costs.

Cooling, which alone accounts for nearly 37% of overall data center power consumption and is the fastest-rising data center operating expense, is a logical place to start. Fortunately, from an infrastructure perspective, there are proven strategies for finding and fixing cooling inefficiencies, as well as for optimizing cooling going forward.

Here are three steps providers can take to get on the path to significant savings:

Step 1: Conduct a comprehensive facility assessment.

Racks house servers. Servers generate heat. Removing that heat, otherwise known as cooling, requires power, sometimes lots of it. This is why it’s not uncommon to find half-empty racks inside data centers: operators leave slots unused rather than draw the extra power needed to regulate the temperature of a fully utilized rack.

A first step toward solving this kind of cooling conundrum is for data center design and efficiency experts to conduct an on-site assessment of the facility. Assessments can shine a light on myriad opportunities to reduce the energy consumption of both passive and active data center equipment. For example, if empty slots in racks are left uncovered, cold air is being pushed into them for no good reason. Assessments can help identify where opportunities exist to close these openings with blanking panels to stop cooled air from passing through.

Assessments can also identify the need to implement cold aisle containment systems, which enclose these areas within a physical barrier. They separate supply and return airflow, eliminating hot and cold air mixing. Some physical infrastructure providers – Panduit included – offer universal aisle containment solutions designed to work in mixed environments composed of racks and equipment of different shapes and sizes from various vendors. While these systems are an up-front capital expenditure, they more than pay for themselves over time given that they can increase efficiency by as much as 40 percent.
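To see how quickly that kind of efficiency gain can recover its cost, here is a back-of-the-envelope payback estimate. Every figure below is hypothetical for illustration, not a Panduit quote; only the 40% efficiency gain comes from the text above.

```python
# Back-of-the-envelope payback estimate for aisle containment.
# All dollar figures are illustrative assumptions.

def payback_years(capex, annual_cooling_cost, efficiency_gain=0.40):
    """Years for cooling savings to recover the up-front investment."""
    annual_savings = annual_cooling_cost * efficiency_gain
    return capex / annual_savings

# A hypothetical $120k containment install against a $200k/yr cooling bill:
print(payback_years(120_000, 200_000))  # 1.5 years
```

Even under conservative assumptions, the investment pays back in a small number of years, which is why containment is usually the first capital project assessments recommend.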

Step 2: Implement sensors to instrument the environment.

Hotspots are local temperature variations that occur in a data center. They are detrimental to performance and can require significant power and cooling resources to correct. Containment, as mentioned earlier in this post, is a proven remediation strategy. But hotspots can be hard to find. By installing wireless sensors, providers can monitor infrastructure health for these types of anomalies on a 24x7 basis.
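The monitoring logic itself can be quite simple. Here is a minimal sketch of flagging hotspots from periodic inlet-temperature readings; the sensor names and the 27 °C threshold (the upper end of the ASHRAE-recommended inlet range) are illustrative assumptions, not drawn from any specific product.

```python
# Minimal sketch: flag rack-inlet hotspots from periodic sensor readings.
# Threshold based on the ASHRAE-recommended inlet maximum; sensor IDs are
# hypothetical.

ASHRAE_MAX_INLET_C = 27.0

def find_hotspots(readings, threshold_c=ASHRAE_MAX_INLET_C):
    """Return sensor IDs whose latest reading exceeds the threshold."""
    return sorted(
        sensor_id
        for sensor_id, temp_c in readings.items()
        if temp_c > threshold_c
    )

readings = {
    "rack-a1-inlet": 24.5,
    "rack-b3-inlet": 29.1,  # hotspot: hot/cold air mixing suspected
    "rack-c2-inlet": 26.8,
}
print(find_hotspots(readings))  # ['rack-b3-inlet']
```

A real monitoring system adds trending and alerting on top, but the core idea is the same: continuous readings turn invisible hotspots into an actionable list.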

At Panduit, we helped our customer WWT — who suspected that hotspots were causing them significant energy inefficiencies — find them, fix them, and immediately realize significant savings after instrumenting the environment with the Panduit SynapSense™ wireless monitoring and cooling control solution. Composed of wireless sensors and turnkey intelligent software, it provides quantifiable, actionable information that the WWT team used to address multiple containment gaps and cooling inefficiencies. With SynapSense, they also learned they could reduce water temperatures at the chilled water plant and reduce fan speeds in certain areas. Overall, WWT realized a 50% annual savings on cooling costs and a 20% overall energy savings.

Step 3: Leverage automation for ongoing optimization.

Data centers are dynamic environments. In fact, now more than ever, tenants are running various types of workloads that each come with their own unique requirements for power and cooling. But once operators gain an understanding of all these workload needs, they can use this data to build benchmarks, and then leverage automation technologies to adjust air flow and cooling to meet tenant-specific – and workload-specific – requirements, on-demand.
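The adjustment step can be sketched as a simple control loop: compare a zone's measured inlet temperature against its benchmark and nudge fan speed accordingly. This is a toy proportional controller under assumed names and gains, not the logic of SynapSoft or any real control system.

```python
# Illustrative sketch of benchmark-driven cooling adjustment: a proportional
# controller nudges fan speed toward whatever holds the zone at its
# benchmarked inlet temperature. Gain, limits, and values are hypothetical.

def adjust_fan_speed(current_pct, measured_c, benchmark_c, gain=5.0):
    """Raise fan speed when a zone runs hot, lower it when overcooled."""
    error_c = measured_c - benchmark_c
    new_pct = current_pct + gain * error_c
    return max(20.0, min(100.0, new_pct))  # clamp to a safe operating range

# A zone running 2 degrees C above its benchmark gets more airflow:
print(adjust_fan_speed(current_pct=60.0, measured_c=26.0, benchmark_c=24.0))  # 70.0
# An overcooled zone gets less, saving fan energy:
print(adjust_fan_speed(current_pct=60.0, measured_c=20.0, benchmark_c=24.0))  # 40.0
```

Production systems use far more sophisticated control, but the principle is identical: cooling tracks actual workload conditions instead of a fixed worst-case setpoint.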

CyrusOne, another Panduit customer, implemented the SynapSense SynapSoft® Software to continuously align cooling capacity with changing workloads. They use wireless sensors to measure and automatically adjust air temperature and pressure and fan speeds in order to optimize efficiencies without compromising performance.

For MTDC providers, the ability to automatically dial in power and cooling based on workload needs goes a long way toward accurately calculating and charging tenants for power consumption. Historically, tenants have been charged a flat fee based on the percentage of space they occupy. These days, however, tenants are asking for more visibility into their actual power usage and demanding to be charged accordingly.
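The difference between the two billing models is easy to see with a quick comparison. All rates and usage numbers below are made up for illustration.

```python
# Hedged sketch comparing flat-fee vs. metered tenant billing.
# Rates and usage figures are hypothetical.

FLAT_RATE_PER_SQFT = 2.50    # $/sq ft per month (assumed)
METERED_RATE_PER_KWH = 0.12  # $/kWh (assumed)

def flat_fee(sq_ft):
    """Traditional charge: a function of floor space only."""
    return sq_ft * FLAT_RATE_PER_SQFT

def metered_fee(kwh_used):
    """Usage-based charge: a function of actual energy consumed."""
    return kwh_used * METERED_RATE_PER_KWH

# A light-load tenant in 1,000 sq ft drawing 15,000 kWh this month:
print(flat_fee(1000))      # 2500.0
print(metered_fee(15000))  # 1800.0 -- metered billing reflects actual usage
```

Under flat-fee billing, light-load tenants subsidize power-hungry neighbors; metered billing ties each invoice to the consumption the sensors actually measured.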

This post is the first in a series focused on energy efficiency and sustainability strategies, solutions, and success stories for MTDC providers. Be sure to subscribe above for updates so you don’t miss our next post, on cabling considerations for advancing efficiency goals.

Author:

Jeff Paliga