Adhere to the proposed standard evaluation approach in order to facilitate high response rates and to collect reliable data on 511 implementation.

Experience from Implementing a Standard Evaluation Approach and Measures as Part of the Arizona 511 Model Deployment.

Date Posted
01/10/2007
Identifier
2007-L00329

Implementing a Standard Evaluation Approach and Standard Measures in 511 Customer Satisfaction Surveys

Summary Information

As part of the evaluation of the Arizona 511 Model Deployment, a customer survey was conducted to assess customers' use and perceptions of the service. In January 2003, a 511 national evaluation panel was convened to provide input on the evaluation of the Arizona 511 Model Deployment and to develop a standardized evaluation approach and set of measures ("core" questions) that would be tested in the Arizona 511 customer survey.

The 511 national evaluation panel envisioned two key benefits to having standard evaluation components. First, the standardization of the approach and measures would produce nationally comparable evaluation data, and program managers would be able to aggregate the data and examine national trends in the use and perception of 511. Second, a standardized methodology and core questions would make it easier for other 511 evaluators to conduct their own customer surveys and would save funds that would otherwise be spent on survey design.

Based on the experience of testing the standard evaluation approach and measures in Arizona, the core set of questions was refined and lessons learned were developed to provide 511 evaluators with useful information in conducting their own user evaluations.

Lessons Learned

The data collection effort in Arizona had several objectives. First, the user survey would serve as a test of key hypotheses in the areas of customer satisfaction, mobility, and efficiency. Second, the evaluation team wanted to collect data from a representative sample of users, so that findings from the sample could be generalized to all Arizona 511 customers. Finally, because this was a national evaluation of a model deployment, a key objective of the survey effort was to use the data to inform other 511 program managers. In other words, this evaluation would serve as a "bellwether," providing lessons learned that could be applied to future 511 customer surveys. With this in mind, the national evaluation team determined that the standard evaluation approach should adhere to rigorous survey methods in order to achieve high response rates and collect high-quality, reliable data. Ideally, all state evaluators conducting customer surveys would adopt this standard evaluation approach. In reality, however, evaluators may not have the funds, the staff, or the overall support to implement all of its elements. At a minimum, evaluators need to consider how their decisions about the survey approach affect the quality of the data. The following lessons learned are based on the experience of fielding the standard evaluation approach in Arizona.

  • Accept that surveys necessitate some level of inconvenience to the customer. The most cost-effective method for obtaining representative customer feedback is to intercept 511 callers. This is unquestionably an inconvenience to customers, particularly those in the midst of a trip, but some level of inconvenience must be tolerated to obtain representative feedback.
  • Use representative sampling to enable generalizing from the sample to the population of all users. Ensuring that the views expressed in the survey represent the views of all 511 customers requires a representative sample. In Arizona, a random intercept was used, whereby every "nth" call was intercepted and transferred to a local research firm (a minimal sketch of this selection appears after this list). With representative data, one can be confident that the issues and priorities raised by the survey sample reflect those of the population of all users, giving the evaluation team an informed basis for its decisions.
  • Employ a two-stage approach for the overall evaluation design. In Arizona, the study design included two stages: (1) a brief intercept questionnaire, during which the caller scheduled an interview appointment to complete the longer survey, and (2) completion of the customer survey at the scheduled time. This two-stage design was employed because holding the interview after the trip helped ensure that data could be collected about that specific trip (and that specific 511 call), and because scheduling the interview at a later, more convenient time (rather than in the middle of the trip) could increase callers' willingness to cooperate.
  • Use a live intercept to increase response rates. Research has shown that live interviewers result in higher response rates than do automated interviewers. In addition to explaining the purpose and importance of the study, a live interviewer can answer any questions that the caller may have regarding the survey process and can convert refusals to participants.
  • Conduct the second-stage interview (if using a two-stage approach) as soon as possible after the trip. The core set of questions includes a series of questions about the specific trip the caller was making when he or she was intercepted. To ensure the accuracy of these measures, the interview should be completed within 48 hours of the trip so that respondent recall is not a problem. Moreover, evaluators should plan on extra resources for the multiple attempts often needed to reach callers and complete an interview.
  • Intercept callers after they have made their 511 menu selection (if practical). By intercepting callers further down in the phone tree, it is possible to select callers in proportion to service type. However, this method may be impractical if very few customers use certain service types, given the significant cost and time involved in obtaining a sufficient sample size. In the Arizona evaluation, the intercept functionality did not permit intercepting customers after they had made their menu selection.
  • Identify first-time callers in the initial intercept so they can be asked to complete the appropriate survey. There are many questions in the complete survey that first-time callers may not be able to answer due to their lack of experience with the service, so the main survey should be adapted for "first-time users".
  • Develop a detailed sampling plan. The sampling plan provides a roadmap for the study, describing who will be interviewed, the number of respondents, and the process for collecting the data. Above all else, the sampling plan involves a thoughtful consideration of the purposes of the data collection. The sampling plan developed by the Arizona evaluation team is a strong example of the type of up-front work that is required for a successful study.
  • Rely on call data to establish the sampling interval for the intercept. Establish the appropriate sampling interval so that a sufficient number of interviews is collected for different days of the week and hours of the day. The Arizona evaluation team relied on call data records to determine the proportion of users who were first-time callers (versus repeat callers), as well as to identify the volume of calls by day of week and time of day (a sketch of this calculation appears after this list).
  • Test all aspects of the intercept functionality (even before the formal pre-test) and require that the survey research firm provide a detailed Acceptance Test Plan. The complex nature of the two-stage intercept approach (intercept, transfer, and return transfer) calls for rigorous testing of the method's functionality so that corrections and adjustments can be made. For example, testing in Arizona revealed that a significant number of calls were being bounced back to 511 (and not received by the survey firm). While the evaluation team was not able to resolve this problem, it adjusted the sampling interval to ensure that the requisite number of interviews could be obtained in the allotted time frame. Even for less complex survey designs, functionality testing is necessary, and every effort should be made to ensure a smooth survey process.
  • Allocate sufficient resources (time and money) to the development and testing of the intercept functionality. The Arizona evaluation team spent more time than anticipated troubleshooting problems with the intercept functionality and, likewise, the local survey firm had to spend time trying to resolve problems with the phone company. The time and resources required for testing and adjusting the recruitment method need to be adequately accounted for in the survey budget and schedule.
  • Include the use of incentives in the administration of the study (if the budget allows). While incentives alone do not determine the success of a study, research has shown that they boost response rates and shorten the intercept period, thereby reducing its cost. Thoughtful consideration should be given to what constitutes an appropriate incentive that will not bias participation.
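
As a concrete illustration of the random "nth call" intercept described above, the following minimal Python sketch flags every nth call in a stream for transfer to the survey firm. The call stream, interval value, and function names are hypothetical stand-ins; the Arizona deployment's actual intercept was implemented within the 511 phone platform itself.

```python
import random

def should_intercept(call_number: int, interval: int, start: int) -> bool:
    """Flag every nth call, counting from a randomly chosen starting call."""
    return call_number >= start and (call_number - start) % interval == 0

INTERVAL = 25                              # assumed interval; see next sketch
start = random.randrange(1, INTERVAL + 1)  # random start avoids periodic bias

# Simulate a day's stream of 1,000 calls; flagged calls would be transferred
# to the survey research firm for the brief intercept questionnaire.
intercepted = [n for n in range(1, 1001) if should_intercept(n, INTERVAL, start)]
print(f"{len(intercepted)} of 1000 calls flagged for intercept")
```

The random starting point matters: beginning the count at a fixed call could systematically align the intercept with periodic patterns in call volume.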
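
The sampling interval itself can be derived from historical call records, as the lesson above recommends. The sketch below shows one way this arithmetic might look; the interview target, completion rate, and call volumes are illustrative assumptions, not figures from the Arizona evaluation.

```python
# Sketch of deriving the sampling interval from historical call records.
# All figures are hypothetical; real values would come from the 511
# system's call detail records and the survey firm's completion experience.

TARGET_INTERVIEWS = 400   # completed second-stage interviews needed
COMPLETION_RATE = 0.25    # assumed share of intercepts yielding an interview

# Assumed call volumes per survey period, by day-of-week / time-of-day stratum.
calls_per_stratum = {
    ("weekday", "AM peak"): 6000,
    ("weekday", "PM peak"): 7500,
    ("weekend", "midday"): 1500,
}

total_calls = sum(calls_per_stratum.values())
intercepts_needed = TARGET_INTERVIEWS / COMPLETION_RATE

# Every nth call is intercepted; rounding down avoids under-sampling.
interval = max(1, int(total_calls // intercepts_needed))
print(f"Intercept every {interval}th call "
      f"({intercepts_needed:.0f} intercepts from {total_calls} calls)")

# Because the intercept is systematic, strata are sampled in proportion
# to call volume; this shows the expected interview yield per stratum.
for stratum, volume in calls_per_stratum.items():
    expected = volume / total_calls * TARGET_INTERVIEWS
    print(f"{stratum}: ~{expected:.0f} interviews expected")
```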

The different components of the standard evaluation approach were developed with the purpose of collecting high-quality, reliable data. In particular, the central challenge in designing the standard approach was achieving a high response rate. Many of the key design decisions in the Arizona evaluation were made in support of obtaining a sufficient survey response; with high response rates there is greater confidence that the sample data are indeed representative of the larger population of 511 users.

Implementing a Standard Evaluation Approach and Standard Measures in 511 Customer Satisfaction Surveys
Source Publication Date
12/02/2005
Author
Margaret Petrella and Jane Lappin, Volpe National Transportation Systems Center, Cambridge, Massachusetts
Publisher
U.S. Department of Transportation Federal Highway Administration - Office of Transportation Management
