
Essential Metrics for Evaluating Application Development and Testing Vendors

Developing and implementing consistent, actionable performance metrics for your Application Development and Maintenance (ADM) and Testing program is one of the best ways to ensure you get value for money. One of the most important aspects of developing effective metrics is to resist measuring everything that can be measured. Focus only on measuring what matters.

According to research conducted in 2011 by CORE (Centre for Outsourcing Research and Education), one of the largest weaknesses found was that almost half of the organizations surveyed struggled to identify a compact set of metrics that aligned with business priorities. An over-abundance of metrics often obscured the core set and made it challenging for clients to aggregate relevant information and derive intelligent insights. Others recognized the issue not as a problem of quantity but of quality: metrics were being tracked, but did not link to the organization's ultimate goals.

No matter how complex or far-reaching your metrics are, most managers pay attention to only a few. In addition to simplifying the metrics measurement process, this guide to "best practices" will help you develop metrics that marry the relationship and performance information you really need with the information vendors can readily provide.

There are three types of Metrics that you should consider creating to measure the success of your program and vendor relationships:

1) Relationship level – these Metrics focus on how the relationship between the two companies is working, and how satisfied you are with their responsiveness and your access to their thought leadership and innovation.

2) Customer level – these Metrics focus on how well the vendor is performing tactical, “table stakes” tasks, like invoice accuracy and incident management.

3) Statement of Work (SOW) level – these Metrics focus on how well the vendor is delivering quality outcomes, on time and on budget, against each Statement of Work.
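
To make the three levels concrete, here is a minimal sketch, in Python, of how a governance team might catalogue metrics by level. The field names and catalogue entries are illustrative assumptions drawn from the examples above (responsiveness, invoice accuracy, on-time delivery), not a prescribed implementation:

```python
from dataclasses import dataclass
from enum import Enum


class MetricLevel(Enum):
    """The three levels at which vendor metrics are defined."""
    RELATIONSHIP = "Relationship"   # health of the overall partnership
    CUSTOMER = "Customer"           # tactical, "table stakes" tasks
    SOW = "Statement of Work"       # quality, schedule and budget per SOW


@dataclass
class Metric:
    name: str
    level: MetricLevel
    description: str


# Hypothetical catalogue entries, one per level, for illustration only.
catalogue = [
    Metric("Responsiveness", MetricLevel.RELATIONSHIP,
           "Satisfaction with vendor responsiveness and access to innovation"),
    Metric("Invoice accuracy", MetricLevel.CUSTOMER,
           "Percentage of invoices submitted without errors"),
    Metric("On-time delivery", MetricLevel.SOW,
           "Deliverables accepted by the committed date"),
]

for m in catalogue:
    print(f"[{m.level.value}] {m.name}: {m.description}")
```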

Measuring Performance

Relationship and Customer level Metrics are performance management tools and controls that are not unique to ADM and Testing vendor relationships. A good way to simplify governance processes and reduce the workload associated with managing multiple vendors is to develop and deploy a generic set of Relationship and Customer level Metrics across your vendor population.

The primary focus of this article is on the SOW level Metrics. This is an area where there is a tendency to try to measure too many things, often resulting in lots of data, plenty of noise but not much actionable information. The best way to simplify your thinking about what to measure is to establish an overarching framework that addresses expected and important outcomes. By this I mean deciding how to group everything you want to measure into a few overarching categories. Three or four categories are simple to remember, easy to manage and simplify the communication process.

Grouping Metrics

I’d recommend grouping SOW level Metrics into “Quality,” “Efficiency” and “Effectiveness” categories, then creating sub-level metrics that align with each category. When you’re deciding on the sub-level Metrics, which are labeled “Type” in this example, give careful consideration to which metrics are referenced in best practices research coupled with what the vendor already measures internally. If you’re not sure what your vendors measure, just ask. They’ll appreciate your efforts to align your performance reporting requirements with their existing processes.

This is an example of "Quality" Metrics for Application Development and Maintenance. The Type column identifies the kind of quality metric being measured; the Description specifies what is being measured; Waterfall or Agile refers to the software development methodology; and Reporting periods can be set according to the timing of SOW deliverables, against major milestones or, for long-term relationships, by quarter.
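
As a sketch of how one row of such a chart might be captured for tracking, here is a hypothetical Python record built from the four columns just described. The example metric, defect density, is an illustrative assumption and is not taken from the chart:

```python
from dataclasses import dataclass


@dataclass
class QualityMetric:
    """One row of a 'Quality' Metrics chart, using the four columns above."""
    metric_type: str        # "Type" column: the kind of quality metric
    description: str        # what is being measured
    methodology: str        # "Waterfall" or "Agile"
    reporting_period: str   # per deliverable, per milestone, or quarterly


# Hypothetical example row, for illustration only.
example = QualityMetric(
    metric_type="Defect density",
    description="Defects found per thousand lines of delivered code",
    methodology="Waterfall",
    reporting_period="Per major milestone",
)
```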

The same approach is taken to develop "Efficiency" and "Effectiveness" Metrics and any other categories you wish to measure.

[Chart: "Quality" Metrics for Application Development and Maintenance]

And here is an example of "Quality" Metrics for Testing:


[Chart: "Quality" Metrics for Testing]

Conclusion

This approach is extremely useful for managing outcomes for every SOW and every vendor. Over time, you will accumulate a sufficient volume of actual results for each vendor and each SOW to analyze for opportunities and issues. You can identify fact-based opportunities for Project Teams and vendors to achieve better quality, higher productivity levels and lower costs. Ultimately, this is the most predictable way to increase value for money.
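
As a sketch of the kind of analysis this accumulated data makes possible, the snippet below aggregates results by vendor to surface fact-based comparisons. The data, field names and the choice of on-time delivery as the measure are hypothetical assumptions, not figures from the article:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical recorded results: (vendor, sow_id, on_time_delivery_pct)
results = [
    ("Vendor A", "SOW-001", 92.0),
    ("Vendor A", "SOW-002", 88.5),
    ("Vendor B", "SOW-003", 97.0),
]

# Average on-time delivery per vendor across all of their SOWs.
by_vendor = defaultdict(list)
for vendor, sow_id, pct in results:
    by_vendor[vendor].append(pct)

for vendor, pcts in sorted(by_vendor.items()):
    print(f"{vendor}: {mean(pcts):.1f}% on-time across {len(pcts)} SOWs")
```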

Linda Tuck Chapman is a seasoned Outsourcing and Vendor Governance expert. You can reach Linda at (416) 452-4635 or lindatuckchapman@ONTALA.com, or visit ONTALA Performance Solutions.
