You’ve just introduced a shiny new process, system, or other tool intended to make things more productive. It may have more bells, whistles, and pretty graphics, but how do you know whether the new process is an improvement over what you had before? The obvious answer is “What does the data show?” but that answer assumes you have measurements in place.
In a perfect world, you’d have data from your old process measuring key aspects, like timeliness (percentage of tasks completed in a prescribed timeframe) and accuracy (percentage of tasks completed correctly). You can duplicate those measures with the new process and compare the two to see how the new process is performing.
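The two measures named above are simple percentages over task records. As a minimal sketch, here is how they might be computed; the five-day deadline and the sample task records are illustrative assumptions, not figures from this article:

```python
from datetime import timedelta

# Hypothetical deadline and task records, for illustration only.
DEADLINE = timedelta(days=5)

tasks = [
    {"duration": timedelta(days=3), "correct": True},
    {"duration": timedelta(days=6), "correct": True},
    {"duration": timedelta(days=4), "correct": False},
    {"duration": timedelta(days=2), "correct": True},
]

def timeliness(tasks, deadline):
    """Percentage of tasks completed within the prescribed timeframe."""
    return 100 * sum(t["duration"] <= deadline for t in tasks) / len(tasks)

def accuracy(tasks):
    """Percentage of tasks completed correctly."""
    return 100 * sum(t["correct"] for t in tasks) / len(tasks)

print(f"timeliness: {timeliness(tasks, DEADLINE):.0f}%")  # 75%
print(f"accuracy:   {accuracy(tasks):.0f}%")              # 75%
```

Running the same two functions against records from the old and the new process gives directly comparable numbers.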
However, even without exact measures to compare the before and after, you may still be able to come up with something that approximates the effect of the change and indicates whether things are better, worse, or about the same.
For example, say a state rolled out a new social service eligibility system. The old and new systems use the same eligibility criteria and internal processes but produce different-looking results. The old system used a process that lumped applicants at the same address into common cases. In contrast, the new system determines eligibility at the individual level and doesn’t create a group case. So, the old system had 10,000 cases while the new system had 30,000 individual cases.
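The difference in case counts follows directly from the grouping rule. A small sketch, with made-up applicant records, shows why the same population yields fewer cases under the old address-based grouping than under the new per-individual approach:

```python
from collections import defaultdict

# Hypothetical applicant records; names and addresses are invented for illustration.
applicants = [
    {"name": "A", "address": "12 Oak St"},
    {"name": "B", "address": "12 Oak St"},
    {"name": "C", "address": "12 Oak St"},
    {"name": "D", "address": "7 Elm Ave"},
]

# Old system: applicants at the same address are lumped into one case.
cases = defaultdict(list)
for a in applicants:
    cases[a["address"]].append(a["name"])

print(len(cases))       # old-style case count: 2
print(len(applicants))  # new-style case count: 4
```

The two counts describe the same applicants, which is exactly why raw caseload totals can’t be compared across the two systems.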
Also, the old system produced monthly reports while the new system runs weekly reports. The change in what constitutes a case means that comparing caseloads isn’t helpful. And because the two systems used different reporting cycles, volume numbers and other data don’t sync either. So, what can you compare? One option is to normalize both data sets to a common unit, such as applications processed per week, by dividing the old system’s monthly totals by the average number of weeks in a month.
Comparing the two systems’ weekly processing rates this way, it looks like it took four weeks for users to adjust to the new system, but after that transition period, applications are processed at a rate similar to the old system’s. If the expectation was that the new system would improve timeliness, that doesn’t seem to have happened yet, and the state may need to look more closely at its processes to see what could be improved.
The moral of the story is to look at the data you have and figure out how to use it as a tool to compare the old with the new. You can’t (and shouldn’t) use this type of data for accountability purposes; treat it instead as an indicator of where things stand or where things are going.