The source document is a report designed to serve as a planning tool for implementing Transit ITS in rural systems. It includes four sections: a guidebook for planning rural transit ITS applications, best practices in rural transit ITS, transit ITS case studies, and transit ITS resources. This review covers only the lessons highlighted in the guidebook section. The purpose of the guidebook is to assist transit systems, especially their managers, in identifying and addressing present and future needs using ITS technologies. Among the lessons learned are the importance of using a three-level approach when deploying ITS, identifying and learning about the available ITS technologies, identifying financial resources, and the need to develop a database.
Evaluating the implementation of a Transit ITS application means measuring the degree to which the application met its objectives and, by extension, its goals. The following paragraphs offer some insight into what to consider when performing a Transit ITS evaluation.
- Capture performance results shortly after application implementation: The ideal time for capturing meaningful performance results is after the Transit ITS application is fully implemented and has been operating successfully for at least three months. Taking the time to evaluate newly implemented Transit ITS applications provides an opportunity to compare their performance against the estimates and expectations developed earlier in the planning process.
- Develop a database to keep collected data organized and available: The data collected need to correspond to the evaluation measures that the system manager chooses. The first task of database development is to ensure that all items of source data are available and kept up to date. Some data items might be acquired on a sampled basis, for example when collecting them more frequently would be difficult or excessively costly. The most important point is that the data provide reliable estimates of whatever they purport to measure. Transit ITS and other computer software applications can increase the efficiency and reliability of data collection and information development.
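The database described above can be quite modest in practice. The following is a minimal sketch using Python's built-in `sqlite3` module; the table layout, measure names, and values are illustrative assumptions, not a schema from the source report, and should be adapted to the evaluation measures an agency actually tracks.

```python
import sqlite3

# Minimal evaluation database sketch (hypothetical schema and values).
conn = sqlite3.connect(":memory:")  # use a file path for a real database
conn.execute("""
    CREATE TABLE evaluation_data (
        record_date  TEXT NOT NULL,      -- ISO date of the observation
        measure      TEXT NOT NULL,      -- e.g. 'on_time_pct', 'riders_per_hour'
        value        REAL NOT NULL,      -- observed value of the measure
        sampled      INTEGER DEFAULT 0   -- 1 if collected on a sampled basis
    )
""")
conn.execute(
    "INSERT INTO evaluation_data VALUES (?, ?, ?, ?)",
    ("2024-03-01", "on_time_pct", 91.5, 0),
)
row = conn.execute("SELECT measure, value FROM evaluation_data").fetchone()
print(row)  # ('on_time_pct', 91.5)
```

Keeping each observation tagged with its date and a sampled flag makes it straightforward to later distinguish continuously collected items from sampled ones when computing evaluation statistics.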
- Determine evaluation measures based on the required data items: The desired reports can be derived directly from the spreadsheet or other programs in which the evaluation statistics are calculated. The Base Case defines the state of the system against which future progress and goal achievement are measured; normally it is the existing system before any proposed Transit ITS applications are deployed.
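Comparing post-deployment measures against the Base Case can be as simple as a percent-change calculation. The helper and measure values below are hypothetical illustrations, not figures from the source report.

```python
# Hypothetical comparison of evaluation measures against the Base Case.
def percent_change(base_case: float, observed: float) -> float:
    """Percent change of an evaluation measure relative to the Base Case."""
    return (observed - base_case) / base_case * 100.0

# Illustrative values: measures before and after ITS deployment.
base = {"on_time_pct": 85.0, "riders_per_hour": 4.2}
after = {"on_time_pct": 91.5, "riders_per_hour": 4.6}

for measure in base:
    change = percent_change(base[measure], after[measure])
    print(f"{measure}: {change:+.1f}%")
# on_time_pct: +7.6%
# riders_per_hour: +9.5%
```

The same arithmetic can, of course, live in a spreadsheet column; the point is that each report line traces back to a Base Case value and a post-deployment value for the same data item.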
- Consider that developing and maintaining the evaluation database requires considerable time and effort: It is important to protect this investment against destruction or damage. Power failures, lightning strikes, computer viruses, computer failures, software bugs, and operator errors are all possible sources of computer system failure that could harm or eliminate the evaluation database. Consequently, it is recommended that the evaluation database be backed up daily; for very small systems, weekly back-ups may be adequate. Backing up the database simply means copying it onto a disk, tape, or CD. To guard against fire, theft, or other disasters, the back-up copy should be stored off-site in a bank or some other fireproof vault.
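A daily back-up of this kind amounts to copying the database file to a dated destination. The sketch below shows one way to do that with Python's standard library; the file names and directory layout are assumptions for illustration, and the dated copies would still need to be moved to off-site storage.

```python
import shutil
from datetime import date
from pathlib import Path

# Hypothetical daily back-up: copy the evaluation database to a
# dated file in a separate back-up directory.
def backup_database(db_path: str, backup_dir: str) -> Path:
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / f"evaluation_{date.today().isoformat()}.db"
    shutil.copy2(db_path, dest)  # copy2 preserves file timestamps
    return dest
```

A function like this could be run once a day by a scheduler (cron on Linux, Task Scheduler on Windows), with the dated files periodically carried or synced off-site as the report advises.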
As this experience illustrates, it is important to evaluate Transit ITS applications in an organized way. To accomplish this, evaluations should take place shortly after application implementation, and performance data should be kept in an organized database. Additionally, performance measures should be based on the required data items, and the considerable time and effort needed to develop and maintain the database should be taken into account. Following these suggestions may lead to increased evaluation efficiency and agency productivity.