Many organizations see quality management as one of the final steps in a task. While reviewing and refining completed work is of course important, at netlogx we are trained to build quality into a project from inception to close.
As organizations continue to transition to a project-focused or PMO-structured model, it can be challenging to shift from treating quality management as a singular event to treating it as an iterative process. When a project is properly managed, quality applies to every task in its lifecycle.
To successfully manage quality, you must develop metrics that are right for your organization. When first deciding what these quality metrics should look like, a great starting point is developing observation methods around work and tasks. Establishing how, why, and when tasks are observed will help determine exactly what you are attempting to observe and what that data will mean to your quality objectives.
If your organization has a knowledge repository of best practices or lessons learned from previous projects, you can also draw from those as a starting point. When determining the reasons behind your observations, think of those reasons in terms of how they relate to your mission and goals for the task under observation. What makes a successful event?
If you can identify the unique inputs that dictate the success, or lack thereof, of the work you are monitoring, then you can quantify that information. Having this measurable data around tasks allows the observer to view each task independently as well as in relation to the whole. If the metrics from these observations indicate an issue in one area of a process, the team can perform root cause analysis to understand why that particular input is problematic. The key is understanding how the processes are supposed to work and how to observe both successful tasks and failures.
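As a minimal sketch of what quantifying those observations might look like, the short example below tallies per-task success rates from a log of observations and flags tasks that fall below a quality threshold as candidates for root cause analysis. The task names, data, and threshold are all hypothetical illustrations, not part of any prescribed method.

```python
# Hypothetical sketch: quantifying task observations to flag
# candidates for root cause analysis. Task names, data, and the
# threshold are illustrative only.
from collections import defaultdict

# Each observation records a task and whether it met its quality objective.
observations = [
    ("intake", True), ("intake", True), ("intake", False),
    ("review", True), ("review", False), ("review", False),
    ("signoff", True), ("signoff", True), ("signoff", True),
]

def success_rates(obs):
    # Tally observations and successes per task, then compute rates.
    totals, wins = defaultdict(int), defaultdict(int)
    for task, ok in obs:
        totals[task] += 1
        wins[task] += ok
    return {task: wins[task] / totals[task] for task in totals}

def flag_for_rca(rates, threshold=0.8):
    # Tasks below the threshold warrant root cause analysis.
    return sorted(task for task, rate in rates.items() if rate < threshold)

rates = success_rates(observations)
print(flag_for_rca(rates))  # intake at 2/3 and review at 1/3 fall short
```

Viewing the data this way lets the team see each task on its own terms while still comparing it against the process as a whole, which is exactly where root cause analysis begins.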