How do you know if your program is doing what it’s meant to do? The obvious answer is to develop metrics and a way to track them so you can see whether outcomes meet or exceed expectations. But what do you do if you don’t have access to the data you need but do have access to something close? When is “close” good enough?
I’ve been assisting a state agency in overseeing its welfare employment and training program. Due to budget, time, and other constraints, the program’s computer system was implemented hastily, with an emphasis on giving caseworkers and their supervisors a platform to do their work; very few tools were included to help the agency monitor the program. Ideally, the system should provide administrative data on:
- How many people entered the program
- The activities they were assigned to
- The outcomes of those activities (e.g. became employed, earned a high school equivalency degree)
- Ultimately, the number of individuals who no longer receive welfare benefits because they are working
Somewhere in the system’s databases this data exists, but for now it isn’t readily available.
The challenge I was given was to provide the agency with meaningful administrative information. My goal was to give the state:
- Dashboards and other graphic tools showing the program’s status (e.g. caseload counts and statuses: active, participating, and not participating) and process measures (e.g. the number of individuals assigned to an activity, the number of individuals with participation hours), and
- Other graphs identifying trends and variances that warrant a more thorough review (e.g. why does Region X appear to outperform the rest of the state?)
To do this, I had to figure out what data was available and evaluate its limitations. I found three good sources: two reporting widgets that let the user run reports for specified periods of time, and a weekly spreadsheet that captured a great deal of data about the entire caseload at the individual level.
These sources provide data for a specific timeframe (e.g. from 1/1 to 1/31: X appointments, Y individuals attended) but lack a means of showing trends over time.
For program administration, longitudinal data is key. I had to think about 1) what tool to use, 2) what data to pull and at what level, and 3) what data would be most helpful to the agency.
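Conceptually, turning point-in-time reports into longitudinal data is simple: stamp each extract with its reporting period and stack the extracts into one table. The sketch below illustrates that step in pandas; it is not the agency’s actual workflow, and the folder layout and column names (`reports/`, `region`, `individuals_participating`) are assumptions for illustration only.

```python
# Illustrative sketch only; assumes each periodic report run has been
# exported to CSV (e.g. reports/2024-01.csv) with hypothetical columns
# such as "region" and "individuals_participating".
from pathlib import Path

import pandas as pd

frames = []
for csv_path in sorted(Path("reports").glob("*.csv")):
    snapshot = pd.read_csv(csv_path)
    # Stamp every row with the reporting period taken from the file name so
    # the point-in-time snapshots can be stacked into one longitudinal table.
    snapshot["period"] = csv_path.stem
    frames.append(snapshot)

longitudinal = pd.concat(frames, ignore_index=True)
longitudinal.to_csv("longitudinal.csv", index=False)
```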
The available reports let me drill down to the state, regional, and county-office levels, but with 90+ counties and 110+ offices that much detail is unwieldy, so I limited the data to the regional and state levels. I developed a set of Excel workbooks, uploaded data from the three reports, and built a series of dashboards and graphs using pivot tables and charts.

The results have proven informative and useful to the agency, offering insight into the overall health of the program. The resulting reports aren’t definitive, but dips and gains over time identify strengths and weaknesses in the program’s processes that warrant further digging. The agency is now pursuing system updates to reproduce my reports and generate new ones.
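For illustration, here is the kind of roll-up the Excel pivot tables perform, expressed in pandas and continuing from the stacked table saved in the previous sketch. The measure and region columns are again hypothetical, and the chart requires matplotlib to be installed.

```python
# Roll the stacked longitudinal table up by region and reporting period,
# the same aggregation an Excel pivot table would do.
import pandas as pd

longitudinal = pd.read_csv("longitudinal.csv")

# One row per period, one column per region; "individuals_participating"
# and "region" are illustrative column names, not the system's fields.
trend = longitudinal.pivot_table(
    index="period",
    columns="region",
    values="individuals_participating",
    aggfunc="sum",
)

# A line chart of this table shows each region's trend over time, which is
# where dips, gains, and outlier regions become visible at a glance.
ax = trend.plot(kind="line", title="Individuals with participation hours by region")
ax.figure.savefig("participation_trend.png")
```

A chart built this way is the same region-by-region trend view that surfaces a question like “why does Region X appear to outperform the rest of the state?” for a closer look.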