Implementing a Standard Evaluation Approach and Standard Measures in 511 Customer Satisfaction Surveys
As part of the evaluation of the Arizona 511 Model Deployment, a customer survey was conducted to assess customers' use and perceptions of the service. In January 2003, a 511 national evaluation panel was convened to provide input on the evaluation of the Arizona 511 Model Deployment and to develop a standardized evaluation approach and set of measures ("core" questions) that would be tested in the Arizona 511 customer survey.
The 511 national evaluation panel envisioned two key benefits to standardizing these evaluation components. First, a standard approach and set of measures would produce nationally comparable evaluation data, allowing program managers to aggregate results and examine national trends in the use and perception of 511. Second, a standardized methodology and core questions would make it easier for other 511 evaluators to conduct their own customer surveys and would save funds that otherwise would be spent on survey design.
Based on the experience of testing the standard evaluation approach and measures in Arizona, the core set of questions was refined and lessons learned were developed to provide 511 evaluators with useful information in conducting their own user evaluations.
Customer surveys are a valuable tool for obtaining feedback on the performance of 511, on what improvements are necessary, and on the effectiveness of marketing strategies. Well-designed, properly executed surveys provide useful insights into how to increase customer satisfaction with the service. Drawing on the Arizona experience, the following general guidelines are offered to 511 evaluators with regard to the content and design of their survey questions:
- Set priorities regarding survey content by defining key analysis areas and developing hypotheses to test in each analysis area. In Arizona, the three key analysis areas included customer satisfaction, mobility, and efficiency. A set of hypotheses was developed for each of these areas, and then survey questions were written to test the hypotheses.
- Review all previous customer feedback. State deployers should be logging all customer feedback on their 511 service (whether written, by phone, or by email). Evaluators should refer to these logs to see if there are issues that require further investigation in the survey.
- Design questions that address aspects of the service that are problematic. A survey provides a good opportunity for exploring specific problematic aspects of the service. The Arizona 511 service, for example, had received a number of complaints on its new voice-recognition feature, so several questions in the survey addressed this topic.
- Develop questions that address customer satisfaction with recently implemented system enhancements (or potential future improvements). If a service has been upgraded, the survey should contain specific questions that will yield useful data on how the enhancements are perceived. At the same time, deployers should measure customers' priorities with regard to future potential improvements. In Arizona, for example, the evaluators designed questions to explore customers' use of and satisfaction with the system enhancements implemented as part of the Model Deployment, including the new menu options.
- Include a "Comments" box at the end of the survey. The final question of the Arizona customer satisfaction survey was an open-ended question that asked respondents whether they had any additional comments regarding the 511 service. This enabled customers to provide feedback on issues or concerns that may not have been addressed elsewhere in the survey. All customer satisfaction surveys should include an open-ended question of this sort, as it can provide useful insights into users' perceptions of the service.
- Design your survey using rigorous survey writing practices. The quality of the data rests in large part on the quality of the questions. Questions must be properly designed so that they will yield meaningful data. Be sure to write balanced, unbiased questions, and use simple, straightforward language. If the survey is to be administered over the phone, do not use more than five response categories (or scale points), so as to limit respondent burden.
- Be sensitive to survey length. Respondents are more likely to participate if they are assured at the start that the survey will take no longer than 10 to 15 minutes, and they are more likely to complete the full survey once they have started if it does not run long.
- Conduct a pre-test. In the Arizona evaluation, the pre-test confirmed that there were no major flaws with the survey instrument. However, based on the pre-test interviews, minor editing changes were made to the survey. The pre-test was also valuable in providing insights regarding response rates, the length of the intercept survey and main survey, and respondents' initial reaction to being intercepted. If resources are limited, even a small pre-test with approximately 10 respondents would be beneficial to the study.
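When deciding how many respondents a survey (or pre-test) needs, the standard margin-of-error formula for a proportion can help gauge how precise the resulting estimates will be. The sketch below is illustrative only and was not part of the Arizona evaluation; it assumes a simple random sample and uses the conservative value p = 0.5, and the function name `margin_of_error` is hypothetical:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error at roughly 95% confidence (z = 1.96) for a
    proportion p estimated from a simple random sample of size n.
    p = 0.5 is the worst case and gives the widest interval."""
    return z * math.sqrt(p * (1 - p) / n)

# Compare precision at several sample sizes.
for n in (100, 385, 1000):
    print(f"n={n}: +/-{margin_of_error(n):.1%}")
# n=100: +/-9.8%
# n=385: +/-5.0%
# n=1000: +/-3.1%
```

A sample of roughly 385 yields about a 5-percentage-point margin of error under these assumptions. By contrast, a pre-test of around 10 respondents, as suggested above, is useful for catching wording and flow problems in the instrument, not for producing statistically precise estimates.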
Customer evaluations are an important tool for obtaining feedback about a service or product, but in order to obtain reliable data, the survey instrument has to be designed using rigorous methods. The evaluation team needs to ensure that the right questions are being asked and that they are written in a manner that is unbiased and easy to understand.