Why data centers need a more tailored approach to sustainability assessment

Our industry is engaged in an important dialogue to improve the efficiency and resilience of real assets through transparency and industry collaboration. This article is a contribution to this larger conversation and does not necessarily reflect GRESB’s position.

As digital infrastructure continues to expand globally, data centers are rapidly becoming one of the most critical—and energy-intensive—asset classes within the real estate and infrastructure landscape. Their unique operational profile—characterized by high and continuous energy demand, intensive cooling requirements, strict uptime expectations, and rapidly evolving IT technology cycles—means that traditional sustainability assessment frameworks developed for conventional commercial buildings often fail to fully capture their performance dynamics.

The sustainability gap goes beyond energy efficiency

While energy efficiency has historically been the primary focus of the sector, there is growing recognition that wider sustainability impacts remain insufficiently addressed. These include areas such as renewable energy sourcing, circular economy and IT lifecycle management, waste heat reuse, and water consumption—many of which extend beyond the direct control of operators and require coordination across supply chains, infrastructure systems, and adjacent industries. This highlights a broader “sustainability gap” within the sector, where operational excellence does not necessarily translate into holistic environmental performance.

Why benchmarking needs more relevant metrics

This raises a fundamental question for the market: should highly specialized asset types such as data centers be assessed through more tailored sustainability metrics? While portfolio-level ESG benchmarking remains essential to ensure comparability across asset classes, data centers require performance indicators that reflect their operational realities—such as power usage effectiveness (PUE), cooling efficiency, IT load management, renewable energy procurement, water stewardship, and resilience strategies. At the same time, there is a growing need to incorporate metrics that capture lifecycle impacts and supply chain dependencies, which are critical to understanding the true environmental footprint of these assets.
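Of the metrics listed above, PUE has the simplest definition: total facility energy divided by the energy delivered to IT equipment. A minimal sketch of the calculation, using illustrative (made-up) annual figures:

```python
def pue(total_facility_energy_kwh: float, it_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    A PUE of 1.0 would mean every kWh entering the facility reaches IT
    equipment; real facilities run higher because of cooling, power
    conversion losses, lighting, and other overheads.
    """
    if it_energy_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_energy_kwh / it_energy_kwh

# Hypothetical annual figures for a single facility:
total_kwh = 52_000_000   # all energy entering the site
it_kwh = 40_000_000      # energy consumed by servers, storage, network
print(f"PUE = {pue(total_kwh, it_kwh):.2f}")  # prints: PUE = 1.30
```

A lower PUE indicates less overhead per unit of useful IT work, which is why the metric is more informative for data centers than generic per-square-meter energy intensity figures used for conventional buildings.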

Frameworks are starting to evolve

In response to these evolving challenges, the market is increasingly shifting toward asset-type-specific approaches to sustainability assessment. One example is the recent update of BREEAM In-Use for data centers, which introduces criteria designed to better align with the technical and operational characteristics of these facilities, while also expanding the scope to address broader sustainability considerations beyond energy efficiency. Such frameworks aim to bridge the gap between engineering-driven performance standards and holistic ESG assessment methodologies.

However, achieving meaningful progress will depend not only on improved frameworks but also on greater collaboration, transparency, and knowledge sharing across the industry, as well as stronger alignment between operators, investors, technology providers, and policymakers. Without this systemic approach, many of the most critical sustainability challenges—particularly those related to energy sourcing, circularity, and infrastructure integration—will remain difficult to address.

Going forward: a more nuanced approach will be essential

Ultimately, developing more nuanced and sector-specific approaches to sustainability measurement will be essential to ensure that the rapid growth of data centers aligns with broader climate, efficiency, and resilience objectives, while also supporting market differentiation, regulatory readiness, and long-term value creation.

Planning for expansion

The Bologna project had been developed by the design team with future expansion of the computing areas already in mind: PUE projections across the expansion timeline, chiller capacity sized for future load rather than current draw, and airflow validated under the expanded configuration while the system was still on paper. When the time came to expand, the engineering foundation was already in place.

The alternative is what happens on most projects: design for current load, treat expansion as a future problem, then reengineer an already-built system when growth arrives. This results in higher cost, fewer options, and more operational risk than if the growth had simply been modeled upfront.
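The design-for-growth discipline described above can be illustrated with a toy capacity check across planned expansion phases. All figures here (installed chiller capacity, margin factor, phase loads) are hypothetical, and a real study would rely on CFD modeling and manufacturer performance data rather than a flat margin factor:

```python
# Toy check: does assumed installed chiller capacity cover each planned
# expansion phase? Numbers are illustrative only.

CHILLER_CAPACITY_KW = 3_500      # assumed installed cooling capacity
ANCILLARY_HEAT_FACTOR = 1.10     # assumed margin for non-IT heat sources

phases = [                       # (phase name, projected IT load in kW)
    ("Day 1", 1_200),
    ("Phase 2", 2_200),
    ("Phase 3", 3_000),
]

for name, it_kw in phases:
    # Nearly all IT power ends up as heat to be rejected; the factor adds
    # a rough allowance for lighting, UPS losses, and other sources.
    cooling_load_kw = it_kw * ANCILLARY_HEAT_FACTOR
    headroom_kw = CHILLER_CAPACITY_KW - cooling_load_kw
    status = "OK" if headroom_kw >= 0 else "UNDERSIZED"
    print(f"{name}: cooling load {cooling_load_kw:.0f} kW, "
          f"headroom {headroom_kw:.0f} kW -> {status}")
```

The point is not the arithmetic, which is trivial, but when it happens: run at the design stage, an undersized result changes a procurement decision; discovered after construction, it triggers a retrofit.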

This matters particularly now. Data center power density is growing, and the facilities being built in this cycle will need to handle loads that were not in the original brief. If the cooling system, power distribution, and airflow management were not designed with that growth in mind, adaptation becomes expensive. If they were, it becomes a planned phase, not an emergency retrofit.

That engineering question belongs at the design stage, not after the first configuration is already locked in.

Beyond compliance

The Bologna project succeeded because thermo-fluid-dynamic engineering was part of the design process from the beginning, not layered on at the documentation stage. This is not about whether contractors or architects are doing their jobs well. It is about when the engineering questions that determine long-term performance actually get asked.

CFD simulation, failure scenario modeling, and external airflow analysis are not compliance tasks to be checked off for certification. They are design tools. For data centers—where mechanical systems are the core of the facility and performance gaps compound over decades—timing makes all the difference. The engineering decisions that guarantee a facility’s resilience, not just on commissioning day but five years later when loads have grown, do not happen during construction. They happen at the drawing board.

Across projects, there is a consistent pattern: facilities where thermo-fluid-dynamic engineering shaped the system from early design stages tend to perform better, cost less to operate, and adapt more readily to changing loads than those where it arrives as a validation exercise. That is the real lesson from Bologna. The technical work itself is standard practice. What makes it effective is simply executing it when the design is still open to change.

This article was written by Tatiana Medaru, Green Building Certification Analyst at EVORA.
