
Better time-to-market is a priority for any development team, but it can come at the cost of accuracy and quality. There are many metrics to track progress and evaluate success in dev teams, but the following are especially crucial for those seeking to balance speed against accuracy and quality.

Cycle Time

Cycle time measures how much time is spent working on an issue – in other words, how long it takes the issue to “cycle” from one state to another (usually from open to closed). You can look at cycle time by issue type (for example, how long bugs take to go from open to closed, compared with features) or by status (for example, how long an issue sat in a particular state, such as open, before moving to the next).

Monitoring cycle time helps you set accurate expectations and identify bottlenecks or other pain points affecting your team. A spike in cycle time often points to a breakdown or a blockage somewhere in your process, meaning it’s time for an open conversation about what’s working and what’s not.
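As a rough sketch (the issue IDs and timestamps below are hypothetical), cycle time can be computed directly from each issue's open and close timestamps, with the average serving as the baseline against which spikes stand out:

```python
from datetime import datetime

# Hypothetical issue records: (id, opened, closed) timestamps.
issues = [
    ("BUG-1", datetime(2024, 1, 2), datetime(2024, 1, 5)),
    ("BUG-2", datetime(2024, 1, 3), datetime(2024, 1, 12)),
    ("FEAT-1", datetime(2024, 1, 4), datetime(2024, 1, 6)),
]

# Cycle time in days: how long each issue took from open to closed.
cycle_times = {iid: (closed - opened).days for iid, opened, closed in issues}
average = sum(cycle_times.values()) / len(cycle_times)

print(cycle_times)  # per-issue cycle times, for spotting outliers
print(average)      # baseline for detecting spikes over time
```

An issue whose cycle time sits far above the average is a natural starting point for that conversation about what's blocked.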

Release Cycle Time

This metric tracks your team’s total time per release: from the moment when a release began to when it was shipped. Tracking release cycle time can pinpoint holdups and process inefficiencies that delay delivery. Understanding release cycle time also helps you determine the accuracy of your ship dates and manage expectations around future releases.


Velocity

Velocity measures your team’s delivered value: the amount of value-added work, like features ready to test or ship, delivered during an agreed-upon time period.

Teams generally calculate velocity across three or more intervals to establish average velocity. Average velocity serves as a baseline for setting delivery expectations, understanding if your team is blocked and determining whether your process changes are working.

Your volatility is also relevant – that is, how much and how often iterations deviate from average velocity. High volatility signals that something is broken in your process or team.
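A minimal sketch of both calculations, using made-up sprint numbers: average velocity is the mean of recent iterations, and volatility can be expressed as the standard deviation relative to that mean (a coefficient of variation; the exact formula varies by team):

```python
from statistics import mean, stdev

# Story points delivered in the last several sprints (hypothetical values).
velocities = [21, 19, 24, 20, 22]

average_velocity = mean(velocities)                 # baseline for planning
volatility = stdev(velocities) / average_velocity   # relative spread

print(round(average_velocity, 1))  # 21.2
print(round(volatility, 2))        # low value = predictable iterations
```

A rising coefficient of variation over successive sprints is the numeric version of "something is broken in your process or team."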


Throughput

Throughput is your team’s total work output: the number of features, tasks, and bugs that are ready to ship or test within a given time period.

Usually, teams refer to last period’s throughput, but average throughput can be an especially helpful metric when it comes to capacity planning: comparing average throughput against the current workload tells you when your team might be overloaded.
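As an illustration with hypothetical numbers, comparing average throughput against the queued workload gives a quick overload check for capacity planning:

```python
from statistics import mean

# Items shipped or test-ready per iteration (hypothetical history).
throughput_history = [14, 12, 15, 13]
current_workload = 40  # items queued for the upcoming work

avg_throughput = mean(throughput_history)
iterations_needed = current_workload / avg_throughput

# More than one iteration's worth of queued work suggests overload.
if iterations_needed > 1:
    print(f"~{iterations_needed:.1f} iterations of work queued")
```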

Throughput helps measure your alignment with your goals. For example, if your primary goal is bug-squashing, you should expect to see plenty of defect tickets being resolved; if you’re focused on features, you should see more feature tickets than bugs. When your team’s progress is blocked, or their energies are pointed in the wrong direction, throughput drops.

Open Pull Requests

Pull requests are how developers ask their teams to review changes they’ve made to a code repository. This metric measures how many pull requests in a given repository — or across all repositories — have been opened but not closed.

Your number of open pull requests will affect your throughput and iteration flow, so keeping an eye on this number can improve your team’s delivery time.

Tracking open pull requests over time also helps you spot bottlenecks in your feature delivery process.
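A simple sketch with hypothetical PRs: alongside the raw count, the age of the oldest open pull request is a quick way to surface review bottlenecks:

```python
from datetime import date

# Hypothetical open pull requests with the date each was opened.
open_prs = [
    ("pr-101", date(2024, 3, 1)),
    ("pr-104", date(2024, 3, 8)),
    ("pr-107", date(2024, 3, 9)),
]
today = date(2024, 3, 10)

# Age in days of each open PR; old PRs point at stalled reviews.
ages = {pr: (today - opened).days for pr, opened in open_prs}

print(len(open_prs))       # headline metric: open PR count
print(max(ages.values()))  # oldest open PR flags a bottleneck
```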

Work in Progress

Work in progress (WIP) is the total number of tickets or issues that your team is currently working on. Like throughput, WIP is an objective measure of a team’s speed, but as an indicator of the present moment rather than of completed work.

Ideally, WIP remains stable over time; an increase in WIP can identify inefficiencies in your process or signal that your team is overstretched (unless you’ve added more team members, of course!).

You can also divide your WIP number by the number of contributors on your team to get a sense of average WIP per developer. This number should be as close to a 1:1 ratio as possible, since too much multitasking tends to affect the quality and accuracy of your deliverables.
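A tiny sketch of that ratio, with an invented snapshot and an arbitrary alert threshold that each team would tune for itself:

```python
# Hypothetical snapshot: tickets in progress and team size.
wip = 11
contributors = 6

# Target is close to a 1:1 ratio of in-progress work to developers.
wip_per_developer = wip / contributors
print(round(wip_per_developer, 2))

# The 1.5 threshold is an assumption, not a standard; tune per team.
if wip_per_developer > 1.5:
    print("Too much multitasking; consider limiting WIP")
```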

Iteration Flow

Iteration flow is a visualization of the status of your tickets over time. Flow is a good description, since this metric visualizes the shift of work from one status to another as your team moves toward completion through a defined process. It measures time to delivery along with the repeatability and health of your process.

Monitoring iteration flow helps highlight breakdowns in your process and get ahead of delivery problems, and it gives everyone on your team a tool to evaluate the predictability of their work.
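One lightweight way to build such a visualization (the statuses and counts here are invented) is to tally tickets per status for each daily snapshot, cumulative-flow style:

```python
from collections import Counter

# Hypothetical daily snapshots: ticket statuses on consecutive days.
snapshots = {
    "2024-03-01": ["todo"] * 8 + ["in_progress"] * 4 + ["done"] * 3,
    "2024-03-02": ["todo"] * 6 + ["in_progress"] * 5 + ["done"] * 4,
}

# Count tickets per status per day; charting these counts over time
# yields the familiar flow diagram.
flow = {day: Counter(statuses) for day, statuses in snapshots.items()}
for day, counts in sorted(flow.items()):
    print(day, dict(counts))
```

A status whose count grows day after day while "done" stays flat is exactly the breakdown this metric is meant to expose.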

Difficulty of Feature Implementation

This metric reflects your team’s perception of how difficult it was to implement a particular feature. An anonymous poll every sprint that asks your team to weigh in on the challenges of implementing the most recent feature(s) is a good way to collect the qualitative data you need.

Your team’s perception of the difficulty of feature implementation can have a huge impact on product roadmap and sprint planning. This metric helps you understand when your team would benefit from additional resources or extra training – or when they are ready to tackle more challenging work.

Should we release faster?

This metric represents your team’s perception of whether the team should speed up its cadence of releases. As with difficulty of feature implementation, you’ll need qualitative data to evaluate this metric. You’ll get the best information by polling your team every month or every quarter, since this metric doesn’t tend to fluctuate as much from sprint to sprint.

How your team feels about the pace of their sprints can give you insight into velocity, throughput and overall team health, while adding crucial context to your release window accuracy metric.

To get the full story on tracking data points for development teams — including more information on how to measure them and why ­— download our eBook, The Essential Guide to Software Development Team Metrics.

The first article in this two-part series suggested some specific metrics that can help any software-developing organization better understand how it operates and the results it gets. This article will present some suggestions for creating a healthy measurement culture in the organization and give you several practical tips for metrics success.

Creating a Measurement Culture

A software practitioner’s first reaction to a new metrics program often is fear. People are afraid the data will be used against them, that it will take too much time to collect and analyze the data, or that the team will fixate on getting the numbers “right” rather than on building good software. Creating a healthy software measurement culture and overcoming such resistance will take diligent, congruent steering by managers who are committed to measurement and sensitive to these concerns.

To help overcome this fear, educate your team about the metrics program. Tell them why measurement is important and how you intend to use the data. Make it clear that you will never use metrics data either to punish or reward individuals—and then make sure you don’t. A competent software manager does not need metrics data from individuals to distinguish the effective team contributors from those who are struggling.

It’s also important to respect the privacy of the data. It’s harder to misuse data if managers don’t know who the data came from. Classify each base measure you collect into one of these three privacy levels:

• Individual: Only the individual who collected the data about his own work knows it is his data, although it may be pooled with data from other individuals to provide an overall project profile.

• Project Team: Data is private to the members of the project team, although it may be pooled with data from other projects to provide an overall organizational profile.

• Organization: Data may be shared throughout the organization.

As an example, if you’re collecting work effort distribution data, the number of hours each individual spends working on every development or maintenance phase activity in a week is private to that individual. The total distribution of hours from all team members is private to the project team, and the distribution across all projects is public to everyone in the organization. View and present the data items that are private to individuals only in the aggregate or as averages over the group.
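A small sketch of that aggregation, with hypothetical developers and hours: the per-person rows stay private, and only the pooled team profile is reported:

```python
# Hypothetical weekly effort logs, hours per activity per developer.
# These per-person rows are private to each individual.
individual_hours = {
    "dev_a": {"design": 6, "coding": 20, "testing": 10, "rework": 4},
    "dev_b": {"design": 10, "coding": 14, "testing": 12, "rework": 4},
}

# Pool hours across the team; only this aggregate is ever shared.
team_totals = {}
for hours in individual_hours.values():
    for activity, h in hours.items():
        team_totals[activity] = team_totals.get(activity, 0) + h

total = sum(team_totals.values())
team_profile = {a: round(100 * h / total) for a, h in team_totals.items()}
print(team_profile)  # percentage distribution across the team
```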

Make It a Habit

Software measurement need not consume a great deal of time. Commercial tools are available for measuring code size in many programming languages. Activities such as daily time tracking of project work constitute a habit each developer gets into, not a burden. Commercial problem tracking tools facilitate counting defects and tracking their status, but this requires the discipline to report all identified defects and to manage them with the tool. Develop simple tracking forms, scripts, and reporting tools to reduce the overhead of collecting and reporting the data. Use spreadsheets and charts to track and report on the accumulated data at regular intervals.

Tips for Metrics Success

Despite the challenges, many software organizations routinely measure aspects of their work. If you wish to join this club, keep the following tips in mind.

Start Small. Because developing your measurement culture and infrastructure will take time, consider using the Goal-Question-Metric method described in the first article in this series to select a basic set of initial metrics. Once your team becomes used to the idea of measurement and you’ve established some momentum, you can introduce new metrics that will provide the additional information you need to manage your projects and organization effectively.

A risk with any metrics activity is dysfunctional measurement, in which participants alter their behavior to optimize something that is being measured instead of focusing on the real organizational goal. As an illustration, if you’re measuring productivity but not quality, expect some developers to change their programming style to visually expand the volume of code they produce, or to code quickly without regard for defects. I can write code very fast if it doesn’t have to run correctly. A balanced set of measurements helps prevent dysfunctional behavior by monitoring the team’s performance in several complementary dimensions that collectively lead to project success.

Explain Why. Be prepared to explain to a skeptical team why you wish to measure the items you choose. They have the right to understand your motivations and why you think the data will be valuable. Be sure to use the data that is collected, rather than letting it rot in the dark recesses of a write-only database. Encourage team members to use the data themselves and to identify other measures that will help them improve their work. When my small software group at Kodak established a work effort metrics program long ago, we found it to be more helpful than we expected. One team member expressed it well: “It was interesting to compare how I really spent my time with how I thought I spent my time and how I was supposed to spend my time.”

Share the Data. Your team will be more motivated to participate in the measurement program if you tell them how you’ve used the data. Share summaries and trends with the team at regular intervals and get them to help you understand what the data is saying. Let team members know whenever you’ve been able to use their data to answer a question, make a prediction, or assist your management efforts.

Define Metrics and Procedures. It’s more difficult and time-consuming to precisely define the base and derived measures than you might think. However, if you don’t pin these definitions down, participants are likely to interpret and apply them in different ways. As a result, you might end up combining apples and oranges, which yields fruit salad instead of meaningful insights. Define what you mean by a “line of code” or a “user story”, spell out which activities go into the various work effort categories, and agree on what a “defect” is. Write clear, succinct procedures for collecting and reporting the measures you select.

Understand Trends. Trends in the data over time are more significant than individual data points. Some trends may be subject to multiple interpretations. Did the number of defects found during system testing decrease because the latest round of testing was ineffective, or did fewer defects slip past development into the testing process? Make sure you understand what the data is telling you, but don’t rationalize your observations away.

Measurement alone will not improve your software performance. Measurement provides information that will help you focus and evaluate your software process improvement efforts. Link your measurement program to your organizational goals and process improvement program. Use the process improvement initiative to choose improvements, use the metrics program to track progress toward your improvement goals, and use your recognition program to reward desired behaviors.

Also Read A Software Metrics Primer, Part 1

Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles on our web site via a series of blog posts, whitepapers and webinars.  Karl Wiegers is an independent consultant and not an employee of Jama.  He can be reached at http://www.processimpact.com.  Enjoy these free requirements management resources.

My friend Nicole was a quality manager at a large company recognized for its software process improvement and measurement success. Once I heard her say, “Our latest code inspection only found two major defects, but we expected to find five, so we’re trying to figure out what’s going on.” Few organizations have deep enough insight into their software development process to make such precise statements. Nicole’s organization spent years establishing effective processes, measuring critical aspects of its work, and applying those measurements on well-managed projects that develop high-quality products.

Software measurement is a challenging but essential component of a healthy and highly capable software engineering culture. This two-part series, adapted from my book Practical Project Initiation: A Handbook with Tools (Microsoft Press, 2007), describes some basic software measurement principles and suggests some metrics that can help you understand and improve the way your organization and its projects operate.

Why Measure Software?

Software projects are notorious for running over schedule and budget and having quality problems to boot. Software measurement lets you quantify your schedule and budget performance, work effort, product size, product quality, and project status. If you don’t measure your current performance and use the data to improve your future work estimates, those estimates will forever remain guesses. Because today’s current data becomes tomorrow’s historical data, it’s never too late to begin recording key information about your project.

You can’t track project status meaningfully unless you know the actual effort and time spent on each task compared to your plans and the progress you’ve accomplished as a result. You can’t sensibly decide whether your product is stable enough to ship unless you’re tracking the rates at which your team is finding and fixing defects. You can’t assess how well your new development processes are working without some measure of your current performance and a baseline to compare against. Metrics help you better control your software projects and better understand how your organization works.

What to Measure

You can measure many aspects of your software products, projects, and processes. The trick is to select a small, balanced set of metrics that will help your organization track progress toward its goals. The Goal-Question-Metric (GQM) method promoted by Professor Victor Basili of the University of Maryland is an effective technique for selecting appropriate metrics to meet your needs. With GQM, you begin by selecting a few project or organizational goals. State the goals as quantitatively and measurably as you can. They might include targets such as the following:

  • Reduce maintenance costs by 50% within one year.
  • Improve schedule estimation accuracy to within 10% of actuals.
  • Reduce system testing time by three weeks on the next project.
  • Reduce the time to close a defect by 40% within three months.

For each goal, think of questions you would have to answer to judge whether your team is reaching that goal. If your goal was “reduce maintenance costs by 50% within one year,” these might be some appropriate questions:

  • How much do we spend on maintenance each month?
  • What fraction of our maintenance costs do we spend on each application we support?
  • How much do we currently spend on adaptive maintenance (adapting to a changed environment), perfective maintenance (adding enhancements), and corrective maintenance (fixing defects)?

Finally, identify metrics that will provide answers to each question. Some of these will be simple items you can count directly, such as the total budget spent on maintenance. These are called base measures. Other metrics are derived measures, which are calculated from one or more base measures, often as simple sums or ratios. To answer the final question in the previous list, you must know the hours spent on each of the three maintenance activity types and the total maintenance cost over a period of time.
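To make the base/derived distinction concrete, here is a sketch with invented numbers: the hours per maintenance type and the total maintenance cost are base measures, while the effort fractions and per-type costs derived from them answer the questions above:

```python
# Base measures (hypothetical monthly data): hours per maintenance
# type and total maintenance spend over the same period.
hours = {"adaptive": 120, "perfective": 300, "corrective": 180}
total_maintenance_cost = 48_000

total_hours = sum(hours.values())

# Derived measures: effort fraction and cost attributed to each type.
fractions = {t: h / total_hours for t, h in hours.items()}
costs = {t: f * total_maintenance_cost for t, f in fractions.items()}

print(fractions["corrective"])  # share of effort spent fixing defects
print(costs["corrective"])      # maintenance dollars going to defects
```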

Note that I expressed several goals in terms of a percentage change from the current level. The first step of a metrics program is to establish a current baseline, so you can track progress against it and toward your goals. I prefer to establish relative improvement goals (“reduce maintenance by 50%”) rather than absolute goals (“reduce maintenance to 20% of total effort”). You can probably reduce maintenance to 20 percent of total organizational effort within a year if you are currently at 30 percent, but not if you spend 80 percent of your effort on maintenance today.

Your balanced metrics set should eventually include items relating to the six basic dimensions of software measurement: size, time, effort, cost, quality, and status. Table 1 suggests some metrics that individual developers, project teams, and development organizations should consider collecting. Some of these metrics relate to products, others to projects, and the remainder to processes. You should track most of these metrics over time. For example, your routine project tracking activities should monitor the percentage of requirements implemented and tested, the number of open and closed defects, and so on. I recommend including the following base measures fairly early in your metrics program:

  1. Product size: Count lines of code, function points, object classes, number of requirements, or GUI elements.
  2. Estimated and actual duration: Count in units of calendar time.
  3. Estimated and actual effort: Count in units of labor hours.
  4. Work effort distribution: Record the time spent in various development and maintenance activities.
  5. Requirements status: Track the number of requirements that are proposed, approved, implemented, verified, deleted, and rejected.
  6. Defects: Track the number, type, severity, and status of defects found by testing, peer reviews, and customers.

Table 1: Appropriate Metrics for Software Developers, Teams, and Organizations

Individual Developers
  1. Work effort distribution
  2. Estimated and actual task duration and effort
  3. Code covered by unit testing
  4. Number and type of defects found by unit testing
  5. Number and type of defects found by peer reviews

Project Teams
  1. Product size
  2. Estimated and actual duration between major milestones
  3. Estimated and actual staffing levels
  4. Number of tasks planned and completed
  5. Work effort distribution
  6. Requirements status
  7. Requirements volatility
  8. Number of defects found by integration and system testing
  9. Number of defects found by peer reviews
  10. Defect status distribution
  11. Percentage of tests passed
  12. Productivity (also called velocity)

Development Organization
  1. Overall project cycle time
  2. Planned and actual schedule performance
  3. Planned and actual budget performance
  4. Schedule and effort estimating accuracy
  5. Released defect levels
  6. Reuse effectiveness

It’s important to remember that even if you conclude that all of the metrics in Table 1 are valuable, you cannot possibly begin a metrics program that encompasses so many dimensions of measurement. It would overwhelm the practitioners, and you’d likely end up with far more data than you’re prepared to analyze and use to steer the organization’s development activities.

The second article in this series will continue with some recommendations about the importance of creating a measurement culture in the organization, as well as presenting five tips for metrics success.

Read A Software Metrics Primer, Part 2.
