The Pulse by GRESB
The Pulse by GRESB is an insightful content series featuring the GRESB team, partners, GRESB Foundation members, and other experts. Each episode focuses on an important topic related to GRESB, sustainability issues within the real assets industry, decarbonization efforts, or the wider market.
- Watch on YouTube
- Listen on Spotify
- Listen on Apple Podcasts
Building Confidence in Your Data
In this episode of The Pulse by GRESB, host David Tassadogh, Manager, Data Quality and Validation at GRESB, speaks with Elliott Barnes, Managing Consultant at EVORA Global, about the critical role of data quality in GRESB Assessments and ESG reporting. Together, they explore what data confidence means in practice—covering data coverage, completeness, accuracy, and assurance—and why high-quality, asset-level data has become a key differentiator for performance. The conversation examines how investor expectations are evolving, the risks associated with low-quality data, and how stronger data foundations enable better benchmarking, scoring optimization, and more informed decision-making. Listen for insights on data validation approaches, market trends, and practical steps organizations can take to improve data quality and unlock deeper value from their ESG data.
Transcript
Can’t listen? Read the full transcript below. Please note that edits have been made for readability.
David: Hi everyone, and welcome to The Pulse. I’m David Tassadogh, Manager at GRESB, dealing with data quality and validation. I’m accompanied today by Elliott Barnes to discuss the topic of data quality. Elliott, would you like to give yourself a small introduction?
Elliott: Perfect. Thanks David, and thanks for having me today. I’m Elliott Barnes, a Managing Consultant at EVORA Global, leading the GRESB service line. Good to be here.
David: We’d like to spend a bit of time today discussing the topic of data quality and its importance when it comes to the GRESB Assessments—and of course, ESG and sustainability. We have a set of topics that we’d like to discuss today, and to set the stage, Elliott, would you mind helping the listeners understand: when we talk about data confidence in the context of GRESB, what does that really mean?
Elliott: Of course. For today’s session, when I talk about data, I’m specifically going to be referring to the asset-level spreadsheet. But generally, when we’re talking about data confidence, we’re referring to data coverage, completeness, and the accuracy of the data—being able to confidently confirm that it reflects reality, and being able to trace that back to the source data through validation and verification processes.
A step beyond that would be the completion of assurance as well. Additionally, I think GRESB does try to describe its approach to data validation in a pretty systematic way, where we have evidence validation, quantitative checks, automated integrity rules, and statistical outlier checks added on top of all of this.
David: So I think when you come in as the consultant and are assuring a lot of that data that comes in—and we, on the flip side, are trying to run these tests internally against the benchmark data that we have prior to publishing the reports—it starts to build a pretty solid picture of entities’ ESG habits and practices. But then, why would you think that this is such a critical issue in GRESB reporting today, and in the ESG industry as a whole, I suppose?
Elliott: It’s so important because, as an industry, we’re seeing improvements year-on-year. If we take it from a GRESB perspective, across a lot of clients I work with, we’re seeing pretty much full points in management. We’re seeing close to full points in performance in the components that aren’t within the asset-level spreadsheet.
The main differentiator now is the data that goes into that asset-level spreadsheet. If you want to be number one in your peer group, if you want to be a five-star fund, this is where the difference is made. The rest of the submission often scores highly; this is the make-or-break.
Beyond that, investors are taking an increasing interest in GRESB results. They’re asking more and more questions now around: what do these results mean? How have you driven this reduction, say at an asset level? And you’re getting these more detailed questions. So you need to have confidence in the data set that’s feeding into this, and be able to give more tangible answers than we’ve seen in the past.
David: I think that lines up quite well with the GRESB 2028 Performance Roadmap that’s currently being developed. It will attempt to tackle a lot of the challenges we’re facing in that area.
It really addresses the fundamental issue you’ve highlighted, where we’re seeing scores converging—not just in the Real Estate Assessment, but across all assessments: fund, asset, etc. So that is going to be key to our ability to continue differentiating and separating the top performers from the rest.
Elliott: That change will be really interesting. I know there have been some initial conversations with the wider market around what these changes look like. Over the next couple of years, as they start to materialize more, it is going to separate those that are really pushing the needle forward and making impactful change, and allow them to stand out even more. So it’s definitely exciting to see what that looks like.
David: So what do you think are the risks if we continue to operate with potentially low-quality data sitting in GRESB submissions?
Elliott: I think one of the biggest risks there is reliability. Having lower-quality data sets realistically limits what a participant can score for that fund. GRESB has been able to establish itself as a benchmark for the industry when it comes to overall ESG performance. So being able to increase the tools and requirements by which data comes in is only a good thing.
Whether you have 100% coverage or 50% coverage, it needs to be high quality for the proportion of data you have available. From a client perspective, low-quality or low-scoring submissions can reduce access to pools of capital, especially where investors have high ESG requirements. And regardless of what we’re seeing on the global political stage, this topic isn’t going away.
The way questions are asked might change, but the underlying data requirements are not going anywhere. Low-quality data prevents you from understanding what’s driving consumption patterns, and it prevents you from being able to reduce consumption and improve efficiency at the asset level. The higher the granularity of data, the more insights you have—and the more targeted impact you can have.
David: So weak data is not just an administrative nuisance—it can affect reported outcomes and downstream decision-making for both the reporting entity and its investors. So what does high-quality data enable for clients and for their investors?
Elliott: It allows you, firstly, to optimize your GRESB submission. When you look at how GRESB structures scoring—like the 60–40 weighting for tenant and landlord data—being able to go beyond whole-building data to more specific spaces within your assets allows you to maximize scoring. As more participants move toward the top end of the benchmark, this becomes a key differentiator. It also feeds into data coverage and like-for-like comparisons.
It also allows you to use tools like ASHRAE more effectively. With granular data, you can identify which assets are below or above the benchmark—and even those within, say, ±10%. That enables much more targeted action. For example, if an asset is slightly above the threshold, you can focus on reducing landlord consumption to bring it below.
Similarly, if it’s just below, you can take steps to keep it there. But you can’t do that in a meaningful way with low-quality data. High-quality, granular data gives you the confidence to act. This applies not just to energy and GHG emissions, but also to water. For example, in areas like reuse and recycling, many participants don’t score highly—but better data can help capture additional points. And especially for four- and five-star funds, even a fraction of a point can make the difference between ranking first or fourth in a peer group. That’s where this really matters.
David: And I think I would add that GRESB has the Real Benchmark product, which is a good example of this direction of travel. It allows managers to analyze detailed energy and GHG data across their portfolios—covering around 200,000 assets—and understand projected performance against different decarbonization pathways, like CRREM or ASHRAE. This is where data confidence becomes even more valuable. It moves beyond a compliance exercise and becomes something that supports deeper, asset-level decision-making.
Elliott: Just to add, it also demonstrates the maturity of a fund—both in terms of submission quality and the metering setup needed to obtain that data.
David: So from your perspective, as someone who has worked with numerous entities dealing with GRESB Assessments, how do you see GRESB’s approach to data quality and validation addressing market expectations? And where do you see room for further evolution?
Elliott: I think, first, GRESB has enabled globally comparable data through this submission process—which hadn’t really been done at that scale before. The asset-level spreadsheet is designed to work across regions, countries, and sectors, which is great for comparability across markets and peers.
On the validation side, the outlier checks and other tools are extremely helpful for spotting errors. The Asset Portal flags issues early, giving participants time to fix them before submission. That ultimately improves the quality and robustness of the final report.
Beyond that, GRESB has pushed the market to improve data acquisition. There’s increasing demand for full coverage and completeness, and GRESB has played a key role in shaping both market and investor expectations.
David: Fantastic. And lastly—one more topic. For organizations looking to build more confidence in their data, where should they start?
Elliott: The first step is always to map out your assets—where the meters are, what spaces they cover, and what data you have. It’s not always exciting, but it has a big long-term impact in understanding your position and identifying gaps.
Secondly, identify what can be automated in terms of data collection. There are a lot of tools available now that can reduce that burden. Where manual collection is required, check your leases. Is there a green lease in place? Is there a data collection clause for that tenant or space? And make sure there’s a clear frequency for data collection. Leaving everything to the last minute—especially for calendar-year reporting—can create a lot of pressure. A more regular and structured approach is much more effective.
And finally, you don’t have to do this alone—there are many providers in the market who can support this process.
David: Indeed, GRESB has also been having more conversations in that area—particularly around automation, smart metering, and data aggregation for reporting.
Thank you very much. This has been a great conversation. Thank you for your time.
Elliott: Thank you. Really enjoyed it.
David: Alright, that’s all we have time for today. Thank you everyone for joining and listening to our conversation. Thank you, Elliott, for your insightful remarks—and we look forward to having you on a future episode of The Pulse.