CU1: Customer Satisfaction Monitoring

Maximum Score: Not scored

Prefill: Eligible

Validation: Other answer is manually validated

2026 Updates: None


Has the entity undertaken customer satisfaction surveys within the last three years?

Assessment Instructions

Intent: What is the purpose of this indicator?

This indicator assesses whether and to what extent the organization engages with customers regarding their satisfaction with the services provided by the asset. Using consistently applied metrics can help analyze and compare the outcomes, despite the many variations between entities.

Input: How do I complete this indicator?

Select Yes or No. If selecting 'Yes', tick all applicable checkboxes.

Percentage of customers covered = (Number of customers (e.g., organizations) who received the satisfaction survey during the reporting year / Total number of customers at the time of survey administration) x 100%

Survey response rate = (Number of individual survey responses / Number of customers who received the satisfaction survey) x 100%

  • For example, if the survey was sent to 100 customers and 40 responded, the response rate would be 40%.
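The two formulas above can be sketched as follows (the function and variable names are illustrative, not part of the GRESB methodology):

```python
def coverage_pct(surveyed: int, total_customers: int) -> float:
    """Percentage of customers covered = surveyed / total customers x 100."""
    return surveyed / total_customers * 100

def response_rate_pct(responses: int, surveyed: int) -> float:
    """Survey response rate = responses / surveyed x 100."""
    return responses / surveyed * 100

# The example from the text: survey sent to 100 customers, 40 responded.
print(response_rate_pct(40, 100))  # 40.0
```

Note that the two ratios use different denominators: coverage is measured against all customers, while the response rate is measured only against those who received the survey.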

Survey metrics: The entity can indicate the quantitative metrics used for the survey. Additional metrics can be reported using the ‘Other’ answer option. Ensure that the ‘Other’ answer provided is not a duplicate or subset of another option.

Exceptions

Select Yes or No: GRESB is seeking to standardize the scope and boundaries of reporting to allow for more accurate benchmarking and to progressively move towards scoring of performance. If the scope of the data reported for this indicator does not exactly match the reporting scope (facilities, ancillary activities, and time period) as reported in “Entity and Reporting Characteristics” (EC3, RC3, RC4), then answer ‘No’ to this question and describe these exceptions in the “Exceptions” text box. Please note that if the entity answers 'No', then GRESB will not provide reporting-year performance data intensity values in the Benchmark Report.

Examples include:

  • Temporal: A toll road includes data on energy consumption from street lighting within its boundary, but due to a data glitch, this data was lost for a two-month period during the reporting year.

  • Physical: A power plant includes a switchyard facility within its reporting boundary but does not have data on water discharge for this facility.

  • Operational: An airport includes the operation of mobile equipment within its reporting boundary, but not for aircraft, since these are operated by airlines.

Terminology

Customer satisfaction survey

A written survey conducted by the entity, or by a third party on its behalf, that gives the customer the opportunity to provide feedback on the services provided.

Independent third party

In the context of survey administration, an independent third party is an external organization that is financially and structurally independent from the reporting entity and is responsible for administering the survey in a manner that safeguards the impartiality, integrity, and quality of survey results. For a survey to be considered third-party administered:

  • The organization administering the survey must be a separate legal entity from the fund manager or reporting entity.

  • The administrator must ensure that the reporting entity cannot access, edit, or influence individual responses or aggregated survey results once the survey is launched.

  • Participant anonymity must be preserved, and the survey process must protect the integrity of the data and the resulting quantitative metrics.

The use of survey development tools, such as SurveyMonkey and SurveyGizmo, may not qualify as an independent third party unless the tool’s service explicitly includes independent creation and administration. This must be clearly specified in the evidence provided.

Net Promoter Score

The Net Promoter Score® (NPS) is a customer loyalty metric developed by Bain & Company, Fred Reichheld, and Satmetrix. It divides customers, tenants, or employees into three segments: passives, detractors, and promoters, using the following question: “On a scale of 0 to 10, how likely would you be to recommend this company (or this product) to friends and colleagues?” Ratings of 9 or 10 indicate promoters; 7 and 8, passives; and 0 through 6, detractors. The NPS is the percentage of promoters minus the percentage of detractors.
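The NPS calculation can be sketched as a short function (an illustrative sketch, not a GRESB-prescribed implementation):

```python
def net_promoter_score(ratings):
    """NPS = % promoters (ratings 9-10) minus % detractors (ratings 0-6).
    Passives (7-8) count toward the total but neither add nor subtract."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return (promoters - detractors) / len(ratings) * 100

# Six responses: two promoters, two passives, two detractors -> NPS of 0.
print(net_promoter_score([10, 9, 8, 7, 5, 2]))  # 0.0
```

Because detractors subtract from the score, the NPS ranges from -100 (all detractors) to +100 (all promoters).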

Overall satisfaction score

An overarching metric in a satisfaction survey, with no prescribed scale, that measures how happy an employee or tenant is with the organization, lease, and/or services provided. The industry best practice is a 1-5 scale (very poor, poor, average, good, and excellent, respectively).

Quantitative metric

Any measure or parameter of satisfaction that can be represented numerically.

Validation: What evidence is required?

No evidence required. Only the 'Other' answer is manually validated.

Other Answer

Add a response that applies to the entity but is not already listed. Ensure that the ‘Other’ answer provided is not a duplicate or subset of another option (e.g. “recycling” when “Waste” is selected). It is possible to report multiple ‘Other’ answers. If multiple ‘Other’ answers are accepted, only one will be counted towards scoring.

Scoring

Scoring: How does GRESB score this indicator?

Materiality-based Scoring

The relevance of the 'Customer satisfaction' issue (determined by the GRESB Materiality Assessment (RC7)) defines the materiality weighting of this indicator.

The weighting is set at one of four levels:

  • No relevance (weighting: 0)

  • Low relevance (weighting: 0)

  • Medium relevance (weighting: 1)

  • High relevance (weighting: 2)

Where an issue is of 'No relevance' or ‘Low relevance’ the indicator is not considered in scoring (i.e. it has a weighting of 0%). If an issue is of 'Medium relevance' the indicator counts towards the Performance Component score with ‘standard’ weighting. If an issue is of 'High relevance' the indicator counts towards the Performance Component score with higher than ‘standard’ weighting.

As a result, the weight of this indicator may differ for each participant based on their materiality profile. The weighting of the material (scored) indicators in the Performance Component is automatically redistributed to ensure that the Component retains its overall weighting of 60% of the Asset Assessment.

For more details, download the GRESB Materiality & Scoring Tool.

Scoring Of Metrics

The scoring of this indicator is equal to the fraction assigned to the selected option, multiplied by the total score of the indicator.

Note that the percentage of customers covered in the survey does not act as a multiplier but must be greater than 0 to receive the corresponding points.

Other Answer

The 'Other' answer is manually validated and assigned a score, which is used as a multiplying factor, as per the table below:

Validation Status    Multiplier
Accepted             1/1
Not Accepted         0
Duplicate            0
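Combining the multiplier table with the fraction-based scoring described above might look like the following (the fraction and total-score values are hypothetical placeholders, since this indicator is currently not scored):

```python
# Multipliers from the validation table above.
MULTIPLIER = {"Accepted": 1.0, "Not Accepted": 0.0, "Duplicate": 0.0}

def other_answer_points(option_fraction, indicator_total, status):
    """Points from an accepted 'Other' answer: the option's fraction of the
    indicator's total score, scaled by the validation multiplier."""
    return option_fraction * indicator_total * MULTIPLIER[status]

print(other_answer_points(0.5, 2.0, "Accepted"))   # 1.0
print(other_answer_points(0.5, 2.0, "Duplicate"))  # 0.0
```

A 'Not Accepted' or 'Duplicate' answer therefore contributes nothing, regardless of the option's fraction.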


References

Get Support: Solution Providers

GRESB Solution Providers are independent, third-party organizations within the GRESB Partner network that offer specialized products, tools, and services to support sustainability performance outside the GRESB Assessment process.

Currently, there are no GRESB Solution Providers associated with this indicator.

GRESB will continue to update this section as the GRESB Solution Provider network grows. Please check back regularly to find GRESB Solution Providers who can support your sustainability performance.
