GRESB data quality and procedures 

At GRESB, ensuring the highest possible standard of data quality is a core part of our mission. This document outlines the full spectrum of measures we have developed to ensure the integrity, reliability, and comparability of the data submitted through our assessments.

While data validation checks are a central component, GRESB’s approach to quality control is broader: it encompasses extensive participant guidance, training materials, structured review opportunities, and continuous improvement of internal flagging systems for data validation. These efforts reinforce our role as a global standard setter in sustainability reporting and demonstrate that developing and maintaining a high-quality reporting platform is a complex, multifaceted process.

Participant guidance and training

Good data quality starts with a proper understanding of GRESB’s reporting requirements. This is why we offer extensive guidance materials to all participants, including the Reference Guide, Scoring Document, Aggregation Handbook, and Asset Spreadsheet Instructions. These documents are continually updated to reflect evolving standards and best practices.

GRESB also invests heavily in participant training. We offer online training modules and conduct targeted webinars to support participants in understanding scoring logic, validation rules, and data formatting.

Data quality checks

As part of its value proposition to investors, managers, and operators participating in the assessments, GRESB conducts both automated and manual data quality checks on data submitted through the Portal. These checks are designed to reduce the risk of misreporting and to safeguard the integrity of the submitted data. The sections below outline the structure of these controls and explain how and when they are applied.

Data quality procedures are made up of two complementary components:

  • Qualitative evidence file validation (Manual Validation): This process involves the review of all submitted evidence documents using a combination of human reviewers and automated tools.
  • Quantitative data checks: This process focuses on individual data points uploaded to the Portal and applies additional controls to quantitative data points that are not subject to Manual Validation, i.e., where no supporting evidence is required. The emphasis is on data points that are prone to error because they are typically more granular and complex to calculate.

Together, these two layers of validation aim to improve the consistency, reliability, and integrity of reported data across the assessment.

Please note that these checks should be viewed as a safeguard that complements the indicators covering assurance and verification of data. While GRESB’s scoring system promotes the use of third-party assurance and/or verification, it does not yet mandate it as part of GRESB submissions, which makes these additional controls essential.

Scope

The additional data quality checks cover all four GRESB assessments (Real Estate, Infrastructure Fund, Asset, and Development) and operate at the asset, portfolio, and organizational levels. The depth and breadth of these controls can be categorized as follows:

Reporting characteristics: Checks on the core performance values reported by the entity. These play a key role in calculating metrics such as intensities and, as such, must be closely monitored.

  • Gross Asset Value
  • Revenue
  • Asset floor size
  • Ownership values
  • Portfolio completeness (verification that portfolios report all assets and are not cherry-picking)
  • Fund-asset links (for the Infrastructure Fund Assessment, checks verify that a fund has created links in the GRESB Portal to all of its submitted asset assessments so that it receives the correct score during results season; a missing link is a simple human error, but it can have unintended consequences for the Fund Performance component score)

Performance data: Operational metrics reported by entities at the asset level across the various tables, with particular attention paid to metrics that drive scoring. These include Data Coverage of time and floor area, Like-for-Like, and Intensities. Tests are run on the following Performance Component metrics:

  • Energy 
  • GHG (Scopes 1 & 2), including specific tests on the validity of Market and Location-based GHG data 
  • Water 
  • Waste  
  • Biodiversity & Habitat (New in 2025) 
  • Health & Safety (New in 2025) 

Quantitative approach

For all topics and indicators listed in the “Scope” section above, GRESB conducts at least one of two types of checks, where relevant, to identify potential misreporting:

  1. Automatic integrity checks: Built into the Portal, these flag in real time when logical gates are breached.
  2. Statistical modelling analyses: Statistical tools applied to flag outliers.

Automatic integrity checks

  • Field-level validation: Ensures that each data entry is of the correct format and type and falls within an acceptable range. Examples:
      – Maximum Floor Area cannot be larger than expected.
      – Vacancy rate and share of ownership must be between 0% and 100%.
      – Asset location (Country, State, and City) must be present and must allow geocoding.
      – Mandatory fields must be filled in with a valid input.
  • Cross-field consistency: Ensures that the relationships between fields make logical and mathematical sense. Examples:
      – Total Floor Area must be greater than or equal to the sum of sub-floor areas.
      – The data availability period must fall within the reporting year and take the ownership period into account.
      – Floor areas reported for specific utilities cannot exceed the specified area size.
      – Reported energy data is cross-checked against the existence of the corresponding GHG emissions scopes.
  • Rational checks: Based on real-world scenarios to ensure the physical feasibility and logical integrity of reported data. Example: if full energy data coverage is reported, there must be a positive energy consumption value, since an asset with full coverage must consume some energy.
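
To make these rules concrete, the sketch below shows how such integrity checks might look in code. The field names, limits, and flag messages are illustrative assumptions, not GRESB’s actual rules or Portal implementation.

```python
# A minimal sketch of the automatic integrity checks described above, assuming a
# simple dict-based asset record. All field names and limits are hypothetical.

def validate_asset(asset: dict) -> list[str]:
    """Run field-level, cross-field, and rational checks; return any flags raised."""
    flags = []

    # Field-level validation: values must have the correct type and be in range.
    if not 0 <= asset.get("ownership_pct", -1) <= 100:
        flags.append("Share of ownership must be between 0% and 100%.")
    if not 0 <= asset.get("vacancy_pct", -1) <= 100:
        flags.append("Vacancy rate must be between 0% and 100%.")

    # Cross-field consistency: relationships between fields must make sense.
    if sum(asset.get("sub_floor_areas", [])) > asset.get("total_floor_area", 0):
        flags.append("Total floor area must be >= the sum of sub-floor areas.")

    # Rational check: full energy data coverage implies positive consumption.
    if asset.get("energy_coverage_pct") == 100 and asset.get("energy_kwh", 0) <= 0:
        flags.append("Full energy coverage reported with zero energy consumption.")

    return flags

# Example: an asset that breaches one gate of each type.
print(validate_asset({
    "ownership_pct": 105, "vacancy_pct": 12,
    "total_floor_area": 1000, "sub_floor_areas": [600, 500],
    "energy_coverage_pct": 100, "energy_kwh": 0,
}))
```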

Statistical modelling analyses

  • Descriptive Statistics: Measures of central tendency (mean, median, mode) and dispersion (min, max, range, variance, standard deviation, standard error), used to detect anomalies. Example: an asset reporting unusually high or low energy use while its peers report within typical ranges is flagged based on min/max/mean/median or standard deviation thresholds.
  • Interquartile Range (IQR): Flags intensity data points that fall outside 1.5× the interquartile range of their peer group. Because the IQR does not assume a normal distribution, it identifies extreme values robustly. Example: a Scope 1 emissions per m² or energy consumption per GAV value 3× higher than that of all other assets in the same country and sector is flagged as an outlier beyond the IQR bounds.
  • Log-Transformed Regression: For right-skewed distributions (e.g., Gross Asset Value), a log transformation is applied before regression to normalize the data and reduce the influence of outliers. Example: a log-linear model uses prior-year GAV and floor area to predict current GAV; large residuals (prediction errors) are flagged for investigation.
  • Content & Accounting Checks: Focus on consistency in reporting logic and alignment with protocols such as the GHG Protocol, e.g., the correct use of market- versus location-based Scope 2 emissions reporting. Example: an entity reports Scope 2 emissions using location-based values but deducts green energy certificates, a practice that is only valid under the market-based method.
  • Energy Efficiency Intensity: Year-on-year performance trends and floor area changes are monitored to detect unexpected jumps or drops in intensity. Example: an asset’s energy intensity improves dramatically with no corresponding floor area change, triggering a review of the data or methodology used.
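
As a rough illustration of this statistical layer, the sketch below implements IQR-based peer-group outlier detection and a log-transformed regression residual check of the kind described for GAV. The peer grouping, thresholds, and variables are assumptions made for the example, not GRESB’s production model.

```python
# Hypothetical sketches of two of the statistical techniques named above.

import numpy as np

def iqr_outliers(values: np.ndarray, k: float = 1.5) -> np.ndarray:
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] within a peer group."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)

def gav_residual_flags(prior_gav, floor_area, current_gav, z_threshold=3.0):
    """Fit log(current GAV) on log(prior GAV) and log(floor area); flag assets
    whose prediction error (residual) is an extreme standardized value."""
    X = np.column_stack([np.ones(len(prior_gav)), np.log(prior_gav), np.log(floor_area)])
    y = np.log(current_gav)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    z = (residuals - residuals.mean()) / residuals.std(ddof=1)
    return np.abs(z) > z_threshold

# Example: Scope 1 intensities (kgCO2e per m²) for assets in one country/sector
# peer group; only the 36.5 value falls outside the IQR bounds and is flagged.
intensities = np.array([12.1, 9.8, 11.4, 10.7, 36.5, 10.9, 11.8])
print(iqr_outliers(intensities))
```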

Flagging/Signaling approach

Data points identified through one or a combination of these approaches are systematically flagged to the affected participants during the month of August, when GRESB reaches out to seek explanations for the anomalies in the reported values. Any changes must then be made during the Assessment Correction Period in September.

If an entity does not comply with the request for additional information, or provides a justification that does not align with GRESB’s reporting requirements, the data point will remain visible in the entity’s assessment response. However, GRESB reserves the right to remove the relevant data point from the Benchmark and/or exclude it from the scoring model.

In 2024, GRESB reached out to 259 reporting funds representing over 3,000 assets. Of these, 85 funds are making corrections.

Assessment correction period as a quality mechanism

Introduced in 2021, the Assessment Correction Period is a window that allows participants to respond to flagged issues and amend their data submissions before results are finalized. This step not only improves data quality but also builds confidence in the process by giving participants an opportunity to correct unintended reporting errors, whether flagged by GRESB (as described in the previous section) or identified on their own.

Supported by targeted outreach and validation flags, the Assessment Correction Period ensures that data issues are addressed in a transparent and timely manner before results are locked in.

Looking ahead: Enhancing GRESB’s data quality controls

GRESB recognizes that participant reporting behavior continues to evolve, and our validation processes must advance in parallel. To stay ahead of these changes, we will develop an internal roadmap aimed at strengthening our ability to detect reporting inconsistencies and anomalies.

This initiative will focus on enhancing our statistical and data-driven testing methods, combining quantitative techniques with industry insights to better identify human errors and behavioral outliers across different regions, sectors, and asset types.

We also aim to improve how we detect challenges in data collection—particularly when tenants or third-party contractors are involved. For example, we will refine our ability to flag unusual data coverage patterns in tenant-controlled assets where certain metrics are unlikely to be provided, especially when compared to peer reporting behavior.
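
Such a peer comparison could take many forms; the sketch below is one hypothetical version, flagging tenant-controlled assets whose reported data coverage deviates sharply from the median of their country/sector peer group. The metric, grouping keys, and threshold are illustrative assumptions rather than a description of GRESB’s planned implementation.

```python
# Hypothetical sketch of a peer-comparison coverage flag of the kind described above.

from statistics import median

def flag_unusual_coverage(assets, metric="water_coverage_pct", threshold=40.0):
    """Flag tenant-controlled assets whose data coverage for a metric deviates
    strongly from the median coverage of their peer group (country + sector)."""
    groups = {}
    for a in assets:
        groups.setdefault((a["country"], a["sector"]), []).append(a)

    flagged = []
    for peers in groups.values():
        med = median(a[metric] for a in peers)
        for a in peers:
            if a["tenant_controlled"] and abs(a[metric] - med) > threshold:
                flagged.append((a["id"], a[metric], med))
    return flagged

assets = [
    {"id": "A1", "country": "NL", "sector": "Office", "tenant_controlled": True,  "water_coverage_pct": 95},
    {"id": "A2", "country": "NL", "sector": "Office", "tenant_controlled": False, "water_coverage_pct": 20},
    {"id": "A3", "country": "NL", "sector": "Office", "tenant_controlled": True,  "water_coverage_pct": 25},
]
print(flag_unusual_coverage(assets))  # A1 stands out against a peer median of 25%
```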

Ultimately, GRESB aims to integrate real-time validation flags directly into the Portal wherever possible, giving participants immediate visibility and more time to review and correct their data before final submission.

Questions or feedback?

Contact us