Disciplined software organizations collect a focused set of metrics about each of their projects. These metrics provide insight into the size of the product; the effort, time, and money that the project or individual tasks consumed; the project status; and the product’s quality. Because requirements are an essential project component, you should measure several aspects of your requirements engineering activities. This two-part series, adapted from my book More about Software Requirements, describes several meaningful metrics related to requirements activities on your projects.

Product Size

The most fundamental metric is the number of requirements in a body of work. Your project might represent requirements by using a mix of use cases, functional requirements, user stories, feature descriptions, event-response tables, and analysis models. However, the team ultimately implements functional requirements: descriptions of how the system should behave under specific conditions.

Begin your requirements measurement by simply counting the individual functional requirements that are allocated to the baseline for a given product release or development iteration. If different team members can’t count the requirements and get the same answer, you have to wonder what other sorts of ambiguities and misunderstandings they’ll experience. Knowing how many requirements are going into a release will help you judge how the team is progressing toward completion because you can monitor the backlog of work remaining to be done. If you don’t know how many requirements you need to work on, how will you know when you’re done?

Of course, not all functional requirements will consume the same implementation and testing effort. If you’re going to count functional requirements as an indicator of system size, your analysts will need to write them at a consistent level of granularity. One guideline is to decompose high-level requirements until the child requirements are all individually testable. That is, a tester can design a few logically related tests to verify whether a requirement was correctly implemented. Count the total number of child requirements, because those are what developers will implement and testers will test. Alternative requirements sizing techniques include use case points and story points. All of these methods involve estimating the relative effort to implement a defined chunk of functionality.

Functional requirements aren’t the whole story, of course. Stringent nonfunctional requirements can consume considerable design and implementation effort. Some functionality is derived from specified nonfunctional requirements, such as security requirements, so those would be incorporated appropriately into the functional requirement size estimate. But not all nonfunctional requirements will be reflected in this size estimate. Be sure to consider the impact of nonfunctional requirements upon your effort estimate. Consider the following situations:

  • If the user must have multiple ways to access specific functions to improve usability, it will take more development effort than if only one access mechanism is needed.
  • Imposed design and implementation constraints, such as multiple external interfaces to achieve compatibility with an existing operating environment, can lead to a lot of interface work even though you aren’t providing additional new product functionality.
  • Strict performance requirements might demand extensive algorithm and database design work to optimize response times.
  • Rigorous availability and reliability requirements can imply significant work to build in failover and data recovery mechanisms, as well as having implications for the system architecture you select.

You’ll also find it informative to track the growth in requirements as a function of time, no matter what requirements size metric you use. One of my clients found that their projects typically grew in size by about twenty-five percent before delivery. Amazingly, they also ran about twenty-five percent over the planned schedule on most of their projects. Coincidence? I think not.
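
To make that kind of tracking concrete, here is a minimal Python sketch that computes requirements growth against the initial baseline; the snapshot dates and counts are hypothetical:

```python
from datetime import date

# Hypothetical snapshots: (date, number of baselined functional requirements).
snapshots = [
    (date(2023, 1, 15), 120),   # initial baseline
    (date(2023, 3, 1), 134),
    (date(2023, 5, 20), 149),   # count at delivery
]

baseline_count = snapshots[0][1]
for when, count in snapshots:
    growth_pct = 100.0 * (count - baseline_count) / baseline_count
    print(f"{when}: {count} requirements ({growth_pct:+.1f}% vs. baseline)")
```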

Requirements Quality

Consider collecting some data regarding the quality of your requirements. Inspections of requirements specifications are a good source of this information. Count the requirements defects you discover and classify them into various categories: missing requirements, erroneous requirements, unnecessary requirements, incompleteness, ambiguities, and so forth. Use defect type frequencies and root-cause analysis to tune up your requirements processes so the team makes fewer of these types of errors in the future. For instance, if you find that missing requirements are a common problem, your elicitation approaches need some adjustments. Perhaps your business analysts aren’t asking enough questions or the right questions, or maybe you need to engage more appropriate user representatives in the requirements development process.

If the team members don’t think they have time to inspect all their requirements documentation, try inspecting a sample of just a few pages. Then calculate the average defect density—the number of defects found per specification page—for the sample. Assuming that the sample was representative of the entire document (a big assumption), you can multiply the number of uninspected pages by this defect density to estimate the number of undiscovered defects that could still lurk in the specification. Less experienced inspectors might discover only, say, half the defects that actually are present, so use this estimated number of undiscovered defects as a lower bound. Inspection sampling can let you assess the document’s quality so that you can determine whether it’s cost effective to inspect the rest of the requirements specification. The answer will almost certainly be yes.
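
The sampling arithmetic is simple enough to capture in a few lines. This Python sketch uses hypothetical numbers for the sample size, defects found, and document length:

```python
# Hypothetical inspection sample results.
sample_pages = 8
defects_found_in_sample = 20
total_pages = 60

defect_density = defects_found_in_sample / sample_pages   # defects per page
uninspected_pages = total_pages - sample_pages

# Lower-bound estimate of defects still lurking in the uninspected pages,
# assuming the sample is representative of the whole document.
estimated_remaining = uninspected_pages * defect_density
print(f"Defect density: {defect_density:.1f} defects/page")
print(f"Estimated undiscovered defects (lower bound): {estimated_remaining:.0f}")
```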

Also, keep records of requirements defects that are identified after the requirements are baselined, such as requirements-related problems discovered during design, coding, and testing. These represent errors that leaked through your quality control filters during requirements development. Calculate the percentage of the total number of requirements errors that the team caught at the requirements stage. Removing requirements defects early is far cheaper than correcting them after the team has already designed, coded, and tested the wrong requirements.

Two informative metrics to calculate from inspection data are efficiency and effectiveness. Efficiency refers to the average number of defects discovered per labor hour of inspection effort. Effectiveness refers to the percentage of the defects originally present in a work product that was discovered by inspection. Effectiveness will tell you how well your inspections (or other requirements quality techniques) are working. Efficiency will tell you what it costs you, on average, to discover a defect through inspection. You can compare that cost with the cost of dealing with requirements defects found later in the project or after delivery to judge whether improving the quality of your requirements is cost effective.
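
Here is a minimal sketch of those two calculations with hypothetical inspection data; the defects originally present are approximated as the defects found by inspection plus the requirements defects that surfaced later:

```python
# Hypothetical inspection data.
defects_found_by_inspection = 45
inspection_labor_hours = 30     # total preparation and meeting effort
defects_found_later = 15        # requirements defects that leaked past inspection

efficiency = defects_found_by_inspection / inspection_labor_hours
effectiveness = defects_found_by_inspection / (
    defects_found_by_inspection + defects_found_later
)

print(f"Efficiency:    {efficiency:.2f} defects found per labor hour of inspection")
print(f"Effectiveness: {effectiveness:.0%} of known requirements defects caught by inspection")
```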

The second article in this series will address metrics related to requirements status, change requests, and the effort expended on requirements development and management activities.

Also read Measuring Requirements, Part 2

Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles on our web site via a series of blog posts, whitepapers and webinars.  Karl Wiegers is an independent consultant and not an employee of Jama.  He can be reached at http://www.processimpact.com.  Enjoy these free requirements management resources.

In the first part of this two-part series I described the value of managing risks formally on a software project and listed numerous common risks in various categories. This article describes the various activities associated with the practice of risk management and recommends specific information you should record about each risk you identify.

Risk Management Components

As with other project activities, begin risk management by developing a plan, perhaps using the risk management plan template available at www.ProjectInitiation.com. Small projects can include a concise risk management plan as a section within the overall project plan. Risk management consists of the activities illustrated in Figure 1 and described below.

Risk Assessment

Risk assessment is the process of examining a project to identify areas of potential risk. Risk identification can be facilitated with the help of a checklist of common risk areas for software projects, as I described in the first article in this series. Risk analysis examines how project outcomes might change with modification of risk input variables. In other words, just how could the risk harm your project?

                        Loss
Probability     Low        Medium     High
Low             Low        Low        Medium
Medium          Low        Medium     High
High            Medium     High       High

Figure 2: Risk exposure is a function of probability and potential loss.


Risk prioritization helps the project focus on its most severe risks by assessing the risk exposure. Exposure is the product of the probability of incurring a loss due to the risk and the potential magnitude of that loss. I usually estimate the probability from 0.1 (highly unlikely) to 1.0 (certain to happen), and the loss (also called impact) on a relative scale of 1 (no problem) to 10 (deep tapioca). Multiplying these factors together provides an estimate of the risk exposure due to each item, which can run from 0.1 (don’t give it another thought) through 10 (stand back, here it comes!). It’s simpler to estimate both probability and loss as High, Medium, or Low. Figure 2 shows how you can estimate the risk exposure level as High, Medium, or Low by combining the probability and loss estimates.
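
Both the numeric and the qualitative versions of that calculation are easy to express in code. The following Python sketch is illustrative only; the scales follow the description above and the lookup table mirrors Figure 2:

```python
def numeric_exposure(probability: float, loss: float) -> float:
    """Exposure on the numeric scales described above:
    probability 0.1 to 1.0, loss 1 to 10."""
    return probability * loss

# Qualitative lookup mirroring Figure 2: EXPOSURE[probability][loss].
EXPOSURE = {
    "Low":    {"Low": "Low",    "Medium": "Low",    "High": "Medium"},
    "Medium": {"Low": "Low",    "Medium": "Medium", "High": "High"},
    "High":   {"Low": "Medium", "Medium": "High",   "High": "High"},
}

print(numeric_exposure(0.5, 7))    # 3.5
print(EXPOSURE["Medium"]["High"])  # High
```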

 

Risk Avoidance

Risk avoidance is one way to deal with a risk: don’t do the risky thing! You might avoid risks by not undertaking certain projects, or by relying on proven rather than cutting-edge technologies when possible. You might be able to transfer a risk to some other party, such as a subcontractor.

Risk Control

Risk control is the process of managing risks to achieve the desired outcomes. Risk management planning produces a plan for dealing with each significant risk, including mitigation approaches, owners, and timelines. Risk resolution entails executing the plans for dealing with each risk. That’s when you actually control the risk. Finally, risk monitoring involves tracking your progress toward resolving each risk item.

Let’s look at an example of risk management planning. Suppose the “project” is to take a hike through a swamp in a nature preserve. We’ve heard the swamp might contain quicksand, so the risk is that we might step in quicksand and be injured or even die. One strategy to mitigate this risk is to reduce the probability of the risk actually becoming a problem. A second option is to consider actions that could reduce the impact of the risk if it does in fact become a problem. So, to reduce the probability of stepping in the quicksand, we might look for signs of quicksand as we walk and draw a map so we can avoid these areas on future walks. To reduce the impact if someone does step in quicksand, the members of the tour group could rope themselves together. That way if someone does encounter some quicksand the others could quickly pull him to safety.

Even better, is there some way to prevent the risk from becoming a problem under any circumstances? Maybe we build a boardwalk as we go so we avoid the quicksand. That will slow us down and cost some money. But, we don’t have to worry about quicksand any more. The very best strategy is to eliminate the root cause of the risk entirely. Perhaps we should drain the swamp, but then it wouldn’t be a very interesting nature walk. By taking too aggressive a risk approach, you can eliminate the factors that make a project attractive in the first place.

Documenting Risks

Simply identifying the risks facing a project is not enough. We need to write them down in a way that lets us communicate the nature and status of risks throughout the affected stakeholder community over the duration of the project. Figure 3 shows a form I’ve found to be convenient for documenting risks. It’s a good idea to keep the risk list itself separate from the risk management plan, as you’ll be updating the risk list frequently throughout the project. You can download an alternative template for your risk list from www.ProjectInitiation.com. This format includes essentially the same information that’s in Figure 3, but it’s laid out in a way that’s amenable to storing in a spreadsheet or as a table in a word-processing document.

ID: <sequence number or a more meaningful label>
Description: <List each major risk facing the project. Describe each risk in the form “condition – consequence.”>
Probability: <What’s the likelihood of this risk becoming a problem?> Loss: <What’s the damage if the risk does become a problem?> Exposure: <Multiply Probability times Loss.>
First Indicator: <Describe the earliest indicator or trigger condition that might indicate that the risk is turning into a problem.>
Mitigation Approaches: <State one or more approaches to control, avoid, minimize, or otherwise mitigate the risk.>
Owner: <Assign each risk mitigation action to an individual for resolution.> Date Due: <State a date by which the mitigation approach is to be implemented.>

Figure 3: A risk documentation form.

Use a condition–consequence format when documenting risk statements. That is, state the risk situation (the condition) that you are concerned about, followed by at least one potential adverse outcome (the consequence) if that risk should turn into a problem. Often, people suggesting risks state only the condition—“The customers don’t agree on the product requirements”—or the consequence—“We can only satisfy one of our major customers.” Pull those together into the condition-consequence structure: “The customers don’t agree on the product requirements, so we’ll only be able to satisfy one of our major customers.” This statement doesn’t describe a certain future, just a possible outcome that could harm the project if the condition isn’t addressed.

Keep the items with high risk exposures at the top of your priority list to focus your risk-control energy. Set goals for determining when each risk item has been satisfactorily controlled. Your mitigation approaches for some items may focus on reducing the probability, whereas the approach for other risks could emphasize reducing the potential loss or impact. With any luck, some of your mitigation strategies will help you control multiple risk factors.
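
To make the bookkeeping concrete, here is a small, hypothetical Python structure that mirrors the fields of the risk documentation form in Figure 3 and keeps the list sorted so the highest-exposure items stay at the top; the example risks and numbers are invented:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    # Fields mirror the risk documentation form in Figure 3.
    id: str
    description: str       # stated as "condition - consequence"
    probability: float     # 0.1 (highly unlikely) to 1.0 (certain to happen)
    loss: float            # 1 (no problem) to 10 (deep tapioca)
    first_indicator: str = ""
    mitigation_approaches: str = ""
    owner: str = ""
    date_due: str = ""

    @property
    def exposure(self) -> float:
        return self.probability * self.loss

risks = [
    Risk("R-1", "Customers disagree on requirements - we satisfy only one major customer", 0.6, 8),
    Risk("R-2", "Key subcontractor slips delivery - integration starts late", 0.3, 9),
]

# Keep the highest-exposure items at the top of the risk list.
for r in sorted(risks, key=lambda r: r.exposure, reverse=True):
    print(f"{r.id}: exposure {r.exposure:.1f} - {r.description}")
```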

Risk Tracking

As with other project management activities, you need to get into a rhythm of periodic monitoring. You may wish to appoint a risk manager for the project. The risk manager is responsible for staying on top of the things that could go wrong, just as the project manager stays on top of the activities leading to project completion. It’s a good idea to have someone other than the project manager serve as the risk manager. The project manager is focused on what he has to do to make a project succeed. The risk manager, in contrast, is identifying factors that might prevent the project from succeeding. In other words, the risk manager is looking for the black cloud around the silver lining that the project manager sees. Asking the same person to take these two opposing views of the project can lead to cognitive dissonance; in an extreme case, his brain can explode.

Keep the top ten risks highly visible and track the effectiveness of your mitigation approaches regularly. New risks might float up into the top ten as you gradually beat the initial list of top priority items into submission. You can drop a risk off your radar when your mitigation approaches have reduced the risk exposure from that item to an acceptable level. Don’t conclude that a risk is controlled simply because the selected mitigation action has been completed. Controlling a risk might require you to change the risk control strategy if you conclude it isn’t working.

A student in a seminar once asked, “What should you do if you have the same top five risks week after week?” A static risk list suggests that your risk mitigation actions aren’t working. Effective mitigation actions should lower the risk exposure as the probability, the loss, or both decrease over time. If your risk list isn’t changing, check to see whether the planned mitigation actions have been carried out and whether they had the desired effect.

Also, look for new risks that might arise during the course of the project. Conditions can change, assumptions can prove to be wrong, and other factors might lead to risks that weren’t apparent or perhaps did not even exist at the beginning of the project. Escalate risks that aren’t being controlled to the attention of senior managers or other stakeholders. They can then either stimulate corrective actions or else make a conscious business decision to proceed in spite of the risks.

Learning from the Past

We can’t predict exactly which of the many threats to our projects might come to pass. However, most of us can do a better job of learning from previous experiences to avoid the same pain and suffering on future projects. As you begin to implement risk management approaches, record your actions and results for future reference. The risks are out there. Find them before they find you.

Also read Know Your Enemy: An Introduction to Risk Management, Part 1

Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles on our web site via a series of blog posts, whitepapers and webinars.  Karl Wiegers is an independent consultant and not an employee of Jama.  He can be reached at http://www.processimpact.com.  Enjoy these free requirements management resources.


Software engineers are eternal optimists. When planning software projects, we usually assume that everything will go exactly as planned. Or, we take the other extreme position: the creative nature of software development means we can never predict what’s going to happen, so what’s the point of making detailed plans? Both of these perspectives can lead to software surprises, when unexpected things happen that throw the project off track. In my experience, software surprises are never good news.

Risk management has become recognized as a best practice in the software industry for reducing the surprise factor. Although we can never predict the future with certainty, we can apply risk management practices to peek over the horizon at the traps that might be looming. Then we can take actions to minimize the likelihood or impact of these potential problems. Risk management means dealing with a concern before it becomes a crisis. This improves the chance of successful project completion and reduces the consequences of those risks that cannot be avoided.

During project initiation take the time to do a first cut at identifying significant risks. It’s possible that the risks will outweigh the potential benefits of the project. More likely, getting an early glimpse of potential pitfalls will help you make more sensible projections of what it will take to execute this project successfully. Build time for risk identification and risk management planning into the early stages of your project. You’ll find that the time you spend assessing and controlling risks will be repaid many times over.

What Is Risk?

A “risk” is a problem that could cause some loss or threaten the success of your project, but which hasn’t happened yet. And you’d like to keep it that way. These potential problems might have an adverse impact on the cost, schedule, or technical success of the project, the quality of your products, or team morale. Risk management is the process of identifying, addressing, and controlling these potential problems before they can do any harm.

Whether we tackle them head-on or keep our heads in the sand, risks have a potentially huge impact on many aspects of our project. The tacit assumption that nothing unexpected will derail the project is simply not realistic. Estimates should incorporate our best judgment about the potentially scary things that could happen on each project, and managers need to respect the assessments we make. Risk management is about discarding the rose-colored glasses and confronting the very real potential of undesirable events conspiring to throw the project off track.

Why Manage Risks Formally?

A formal risk management process provides multiple benefits to both the project team and the development organization as a whole. First, it gives us a structured mechanism to provide visibility into threats to project success. By considering the potential impact of each risk item, we can focus on controlling the most severe risks first. We can marry risk assessment with project estimation to quantify possible schedule slippage if certain risks materialize into problems. This approach helps the project manager generate sensible contingency buffers. Sharing what does and does not work to control risks across multiple projects helps the team avoid repeating the mistakes of the past. Without a formal approach, we cannot ensure that our risk management actions will be initiated in a timely fashion, completed as planned, and effective.

Controlling risks has a cost. We must balance this cost against the potential loss we could incur if we don’t address the risk and it does indeed bite us. Suppose we’re concerned about the ability of a subcontractor to deliver an essential component on time. We could engage multiple subcontractors to increase the chance that at least one will come through on schedule. That’s an expensive remedy for a problem that might not even materialize. Is it worth it? It depends on the downside we incur if indeed the subcontractor dependency causes the project to miss its planned ship date. Only you can decide for each individual situation.

Typical Software Risks

The list of evil things that can befall a software project is depressingly long. The enlightened project manager will acquire lists of these risk categories to help the team uncover as many concerns as possible early in the planning process. Potential risks to consider can come from group brainstorming activities or from a risk factor chart accumulated from previous projects. In one of my groups, individual team members came up with descriptions of the risks they perceived, which I edited together and we then reviewed as a team.

Following are several typical risk categories and some specific risks that might threaten your project. Have any of these things happened to you? If so, add them to your master risk checklist to remind future project managers to consider whether the same things could happen to them, too. There are no magic solutions to any of these risk factors. We need to rely on past experience and a strong knowledge of software engineering and management practices to control those risks that concern us the most.

Dependencies

Some risks arise because of dependencies our project has on outside agencies or factors. We cannot usually control these external dependencies. Mitigation strategies could involve contingency plans to acquire a necessary component from a second source, or working with the source of the dependency to maintain good visibility into status and detect any looming problems. Following are some typical dependency-related risk factors:

  • Customer-furnished items or information.
  • Internal and external subcontractor relationships.
  • Inter-component or inter-group dependencies.
  • Availability of trained and experienced people.
  • Reuse from one project to the next.

Requirements Issues

Many projects face uncertainty and turmoil around the product’s requirements. Some uncertainty is tolerable in the early stages, but the threat increases if such issues remain unresolved as the project progresses. If we don’t control requirements-related risks we might build the wrong product or build the right product badly. Either outcome results in unpleasant surprises and unhappy customers. Watch out for these risk factors:

  • Lack of a clear product vision.
  • Lack of agreement on product requirements.
  • Inadequate customer involvement in the requirements process.
  • Unprioritized requirements.
  • New market with uncertain needs.
  • Rapidly changing requirements.
  • Ineffective requirements change management process.
  • Inadequate impact analysis of requirements changes.

Management Issues

Although management shortcomings affect many projects, don’t be surprised if your risk management plan doesn’t list too many of these. The project manager often leads the risk identification effort, and most people don’t wish to air their own weaknesses (assuming they even recognize them) in public. Nonetheless, issues like those listed here can make it harder for projects to succeed. If you don’t confront such touchy issues, don’t be surprised if they bite you at some point. Defined project tracking processes and clear project roles and responsibilities can address some of these conditions.

  • Inadequate planning and task identification.
  • Inadequate visibility into project status.
  • Unclear project ownership and decision making.
  • Unrealistic commitments made, sometimes for the wrong reasons.
  • Managers or customers with unrealistic expectations.
  • Staff personality conflicts.

Lack of Knowledge

Software technologies change rapidly and it can be difficult to find suitably skilled staff. As a result, our project teams might lack the skills we need. The key is to recognize the risk areas early enough so we can take appropriate preventive actions, such as obtaining training, hiring consultants, and bringing the right people together on the project team. Consider whether the following factors apply to your team:

  • Lack of training.
  • Inadequate understanding of methods, tools, and techniques.
  • Insufficient application domain experience.
  • New technologies or development methods.
  • Ineffective, poorly documented, or ignored processes.
  • Technical approaches that might not work.

Outsourcing

Outsourcing development work to another organization, possibly in another country, poses a whole new set of risks. Some of these are attributable to the acquiring organization, others to the supplier, and still others are mutual risks. If you are outsourcing part of your project work, watch out for the following risks:

  • Acquirer’s requirements are vague, ambiguous, incorrect, or incomplete.
  • Acquirer does not provide complete and rapid answers to supplier’s questions or requests for information.
  • Supplier lacks appropriate software development and management processes.
  • Supplier does not deliver components of acceptable quality on contracted schedule.
  • Supplier is acquired by another company, has financial difficulties, or goes out of business.
  • Supplier makes unachievable promises in order to get the contract.
  • Supplier does not provide accurate and timely visibility into actual project status.
  • Disputes arise about scope boundaries based on the contract.
  • Import/export laws or restrictions pose a problem.

The second article in this series will describe the various activities associated with the practice of risk management and recommend the specific bits of information you should record about each risk you identify.

Also read Know Your Enemy: An Introduction to Risk Management, Part 2

Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles on our web site via a series of blog posts, whitepapers and webinars.  Karl Wiegers is an independent consultant and not an employee of Jama.  He can be reached at http://www.processimpact.com.  Enjoy these free requirements management resources.

In the previous two articles in this series, adapted from my book Practical Project Initiation, I’ve described fifteen practices the project manager can apply to lay the foundation for a successful project, plan the project, and estimate the work to be done. In this final article I share two additional estimation practices, three good practices for tracking your progress throughout the project, and one practice for learning how to execute your future projects more successfully.

Estimating the Work (continued)

Practice #16: Use estimation tools. Many commercial tools are available to help project managers estimate entire projects. Based on equations derived from large databases of actual project experience, these tools can give you a spectrum of possible schedule and staff allocation options. They’ll also help you avoid the “impossible region,” combinations of product size, effort, and schedule where no known project has been successful. The tools incorporate a number of “cost drivers” you can adjust to make the tool more accurately model your project, based on the technologies used, the team’s experience, and other factors. You can compare the estimates from the tools with the bottom-up estimates generated from a work breakdown structure. Reconcile any major disconnects so you can generate the most realistic overall estimate.

Practice #17: Plan contingency buffers. Projects never go precisely as planned. The prudent project manager incorporates budget and schedule contingency buffers at the end of phases, dependent task sequences, or iterations to accommodate the unforeseen. Use your project risk analysis to estimate the possible schedule impact if several of the risks materialize, then build that projected risk exposure into your schedule as a contingency buffer. An even more sophisticated approach is critical chain analysis, a technique that pools the uncertainties in estimates and risks into a rational overall contingency buffer. Chapter 10 of Practical Project Initiation is all about contingency buffers.
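
One simple way to size such a buffer is to sum the probability-weighted schedule impacts of your identified risks. The Python sketch below illustrates the arithmetic with invented risk data; treat it as a first approximation, not a substitute for critical chain analysis:

```python
# Hypothetical risks: (probability, schedule impact in days if it materializes).
risks = [
    (0.5, 10),   # key analyst pulled onto another project
    (0.3, 15),   # subcontracted component arrives late
    (0.2, 5),    # performance rework needed after load testing
]

# Expected schedule impact: sum of probability-weighted impacts,
# used here as a rough schedule contingency buffer.
buffer_days = sum(p * impact for p, impact in risks)
print(f"Suggested schedule contingency buffer: {buffer_days:.1f} days")
```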

Your manager or customer might view these contingency buffers as padding, rather than as the sensible acknowledgment of reality that they are. To help persuade skeptics, point to unpleasant surprises on previous projects as a rationale for your foresight. If a manager elects to discard contingency buffers, he has tacitly absorbed all the risks that fed into the buffer and assumed that all estimates are perfect, no scope growth will occur, and no unexpected events will take place. Sound realistic to you? Of course not. I’d rather see us deal with reality—however unattractive—than live in Fantasyland.

Tracking Your Progress

Practice #18: Record actuals and estimates. Unless you record the actual effort or time spent on each project task and compare them to the estimates, your estimates will forever remain guesses. Someone once asked me where to get historical data to improve her ability to estimate future work. My answer was, “If you write down what actually happened today, that becomes historical data tomorrow.” It’s really not more complicated than that. Each individual can begin recording estimates and actuals, and the project manager should track these important data items on a project task or milestone basis. In addition to effort and schedule, you could estimate and track the size of the product, in terms of requirements, user stories, lines of code, function points, GUI screens, or other units that make sense for your project.
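
As a minimal sketch, assuming you simply log estimated and actual hours per task, the following Python snippet turns that log into a rough calibration factor for future estimates; all numbers are hypothetical:

```python
# Hypothetical task records: (task, estimated hours, actual hours).
history = [
    ("Design report module", 16, 22),
    ("Implement export",     10, 12),
    ("System test cycle 1",  24, 30),
]

for task, est, actual in history:
    print(f"{task}: estimated {est} h, actual {actual} h (x{actual / est:.2f})")

# The average actual-to-estimate ratio is a rough calibration factor
# to apply to future estimates of similar work.
avg_ratio = sum(actual / est for _, est, actual in history) / len(history)
print(f"Average actual/estimate ratio: {avg_ratio:.2f}")
```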

Practice #19: Count tasks as complete only when they’re one hundred percent complete. We give ourselves a lot of partial credit for tasks we’ve begun but not yet fully completed: “I thought about the algorithm for that module in the shower this morning, and the algorithm is the hard part, so I’m probably about sixty percent done.” It’s difficult to accurately assess what fraction of a sizable task has actually been finished at a given moment.

One benefit of using inch-pebbles (see Practice #6 in Part 2 of this series) for task planning is that you can break a large activity into a number of small tasks (inch-pebbles) and classify each small task as either done or not done—nothing in between. Project status tracking is then based on the fraction of the tasks that are completed and their size, not the percentage completion of each task. If someone asks you whether a specific task is complete and your reply is, “It’s all done except…,” then it’s not done! Don’t let people “round up” their task completion status. Instead, use explicit criteria to determine whether an activity truly is completed.
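
Here is a small illustration of that binary, inch-pebble-based status tracking, with hypothetical tasks and sizes; each task counts as either done or not done, nothing in between:

```python
# Hypothetical inch-pebbles: (task, estimated hours, done?).
inch_pebbles = [
    ("Write parser unit tests",  6, True),
    ("Implement parser",        12, True),
    ("Integrate with importer",  8, False),
    ("Update user guide",        4, False),
]

done_hours = sum(hours for _, hours, done in inch_pebbles if done)
total_hours = sum(hours for _, hours, _ in inch_pebbles)
print(f"Completed: {done_hours} of {total_hours} hours ({done_hours / total_hours:.0%})")
```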

Practice #20: Track project status openly and honestly. An old riddle asks, “How does a software project become six months late?” The rueful answer is, “One day at a time.” The painful problems arise when the project manager doesn’t know just how far behind (or, occasionally, ahead) of plan the project really is. Surprise, surprise, surprise.

If you’re the PM, create a climate in which team members feel it is safe for them to report project status accurately. Run the project from a foundation of accurate, data-based facts, rather than from the misleading optimism that can arise from the fear of reporting bad news. Use project status information and metrics data to take corrective actions when necessary and to celebrate when you can. You can only manage a project effectively when you really know what’s done and what isn’t, what tasks are falling behind their estimates and why, and what problems, issues, and risks remain to be tackled.

The five major areas of software measurement are size, effort, time, quality, and status. It’s a good idea to define a few metrics in each of these categories. Instilling a measurement culture into an organization is not trivial. Some people resent having to collect data about the work they do, often because they’re afraid of how managers might use the measurements. The cardinal rule of software metrics is that management must never use the data collected to either reward or punish the individuals who did the work. The first time you do this will be the last time you can count on getting accurate data from the team members.

Learning for the Future

Practice #21: Conduct project retrospectives. Retrospectives (also called postmortems and post-project reviews) provide an opportunity for the team to reflect on how the last project, phase, or iteration went and to capture lessons learned that will help enhance your future performance. During such a review, identify the things that went well, so you can create an environment that enables you to repeat those success contributors. Also look for things that didn’t go so well, so you can change your approaches and prevent those problems in the future. In addition, think of events that surprised you. These might be risk factors to look for on the next project. Finally, ask yourself what you still don’t understand about the project, so you can try to learn how to execute future work better.

It’s important to conduct retrospectives in a constructive and honest atmosphere. Don’t make them an opportunity to assign blame for previous problems. Chapter 15 of Practical Project Initiation describes the project retrospective process and provides a worksheet to help you plan your next retrospective. It’s a good idea to capture the lessons learned from each retrospective exploration and share them with the entire team and organization. This is a way to help all team members, present and future, benefit from your experience.

The twenty-one project management best practices I’ve described in this series of articles won’t guarantee your project a great outcome. They will, however, help you get a solid handle on your project and ensure that you’re doing all you can to make it succeed in an unpredictable world.

Also read Project Management Best Practices, Part 1
Also read Project Management Best Practices, Part 2

Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles on our web site via a series of blog posts, whitepapers and webinars.  Karl Wiegers is an independent consultant and not an employee of Jama.  He can be reached at http://www.processimpact.com.  Enjoy these free requirements management resources.

In Part 1 of this series, adapted from my book Practical Project Initiation, I shared four best practices that can help you lay the foundation for a successful project and two practices that are useful for project planning. This article continues the series by describing five additional project planning best practices and several practices for estimating the work needed to complete the project.

Planning the Project (continued)

Practice #7: Develop planning worksheets for common large tasks. If your team frequently undertakes certain common tasks—such as implementing a new class, executing a system test cycle, or performing a product build—develop activity checklists and planning worksheets for these tasks. Each checklist should include all of the steps the large task might need. These checklists and worksheets will help each team member identify and estimate the effort associated with each instance of the large task he must tackle. People work in different ways and no single person will think of all the necessary tasks, so engage multiple team members in developing the worksheets. Tailor the worksheets to meet the specific needs of individual projects. I’ve used such worksheets when creating eLearning versions of my training courses. They helped me avoid overlooking an important step in my rush to finish the project.

Practice #8: Plan to do rework after a quality control activity. I’ve seen project plans that assumed every test would pass, letting the team move straight on to the next development activity. However, almost all quality control activities, such as testing and peer reviews, find defects or other improvement opportunities. Your project schedule should include rework as a discrete task after every quality control task. Base your estimates of rework time on previous experience. If you collect a bit of data, you can calculate the average expected rework effort to correct defects found in various types of work products. And if you don’t have to do any rework after performing a test, great; you’re ahead of schedule on that task. This is permitted in all fifty states and in many other countries. Don’t count on it, though.

Practice #9: Manage project risks. If you don’t identify and control project risks, they’ll control you. A risk is a potential problem that could affect the success of your project. It’s a problem that hasn’t happened yet—and you’d like to keep it that way. Simply identifying the possible risk factors isn’t enough. You also have to evaluate the relative threat each one poses so you can focus your energy where it will do the most good.

Risk exposure is a combination of the probability that a specific risk could materialize into a problem and the negative consequences for the project if it does. To manage each risk, select mitigation actions to reduce either the probability or the impact. You might also identify contingency plans that will kick in if your risk control activities aren’t as effective as you hope.

A simple risk list doesn’t replace a plan for how you will identify, prioritize, control, and track risks. Incorporate risk tracking into your routine project status tracking. See Chapter 6 of Practical Project Initiation for an overview of software risk management.

Practice #10: Plan time for process improvement. Your team members are already swamped with their current project assignments. If you want the group to rise to a higher plane of software development capability, though, you’ll have to invest in process improvement. This means you’ll need to set aside some time from your project schedule for improvement activities. Don’t allocate one hundred percent of your team’s available time to project tasks and then wonder why they don’t make any progress on the improvement initiative.

Some process changes can begin to pay off immediately, but you won’t reap the full benefit from other improvements until the next project. Process improvement is a strategic investment in the organization. I liken process improvement to highway construction: It slows everyone down a little bit for a time, but after the work is done, the road is a lot smoother and the throughput is greater.

Practice #11: Respect the learning curve. The time and money you spend on training, self-study, consultants, and developing improved processes are part of the investment your organization makes in sustained project success. Recognize that you’ll pay a price in terms of a short-term productivity loss—the learning curve—when you first try to apply new processes, tools, or technologies. Don’t expect to get fabulous benefits on the first try, no matter what the tool vendor or the consultant claims. Make sure your managers and customers understand the learning curve as an inescapable consequence of working in a rapidly changing, high-technology field.

Estimating the Work

Practice #12: Estimate based on effort, not calendar time. People generally provide estimates in units of calendar time. I prefer to estimate the effort (in labor-hours) associated with a task, and then translate the effort into a calendar-time estimate. A twenty-hour task might take 2.5 calendar days of nominal full-time effort, or two exhausting days. However, it could also take a week if you have to wait for critical information from a customer or stay home with a sick child for two days. I base the translation of effort into calendar time on estimates of how many effective hours I can spend on project tasks per day, any interruptions or emergency bug fix requests I might get, meetings, and all the other places into which time disappears.
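
Here is a minimal sketch of that translation, assuming a calibrated figure for effective project hours per day; the 4.5 hours used below is only an illustrative assumption:

```python
def calendar_days(effort_hours: float, effective_hours_per_day: float = 4.5) -> float:
    """Translate a labor-hour estimate into calendar days, using an average
    number of effective project hours per day calibrated from your own
    time tracking rather than an assumed eight."""
    return effort_hours / effective_hours_per_day

print(calendar_days(20, 8.0))   # 2.5 days under the optimistic full-time assumption
print(calendar_days(20))        # about 4.4 days at a more realistic 4.5 effective hours/day
```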

If you track how you actually spend your time at work, you’ll know how many effective weekly project hours you have available on average. Tracking time like this is illuminating. Typically, the effective project time is only perhaps fifty to sixty percent of the nominal time team members spend at work, far less than the assumed one hundred percent effective time on which so many project schedules are planned.

Practice #13: Don’t over-schedule multitasking people. The task-switching overhead associated with the many activities we are all asked to do reduces our effectiveness significantly. Excessive multitasking introduces communication and thought process inefficiencies that reduce individual productivity. I once heard a manager say that someone on his team had spent an average of eight hours per week on a particular activity, so therefore she could do five of them at once. Forty hours per week divided by eight is five, right? In reality, she’ll be lucky if she can handle three or four such tasks. There’s just too much friction associated with multitasking.

Some people multitask more efficiently than others, even thriving on it. But if certain of your team members thrash when working on too many tasks at once, set clear priorities and help them do well by focusing on just one or two objectives at a time.

Practice #14: Build training time into the schedule. Estimate how much time your team members spend on training activities each year and subtract that from the time available for them to work on project tasks. You probably already subtract out average values for vacation time, sick time, and other assignments; treat training time the same way.

Recognize that the high-tech field of software development demands that all practitioners devote time to ongoing education, both on their own time and on the company’s time. Arrange just-in-time training when you can schedule it, as the half-life of new technical knowledge is short unless the student puts the knowledge to use promptly. Attending a training seminar can be a team-building experience, as project team members and other stakeholders hear the same story about how to apply improved practices to their common challenges.

Practice #15: Record estimates and how you derived them. When you prepare estimates for your work, write down those estimates and document how you arrived at each of them. Understanding the assumptions and approaches used to create an estimate will make them easier to defend and adjust when necessary. It will also help you improve your estimation process. Train the team in estimation methods, rather than assuming that every software developer and project leader is naturally skilled at predicting the future. Develop estimation procedures and checklists that people throughout your organization can use.

The Wideband Delphi method is an effective group estimation technique. This technique asks a small team of experts to anonymously generate individual estimates from a problem description and reach consensus on a final set of estimates through iteration. Participation by multiple estimators and the use of anonymous estimates to prevent one participant from biasing another make the Wideband Delphi method more reliable than simply asking a single individual for his best guess. Chapter 11 of Practical Project Initiation presents a tutorial on the Wideband Delphi estimation method.
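
The sketch below is not the full Wideband Delphi procedure, only an illustration of how a moderator might summarize successive rounds of anonymous estimates and watch the spread shrink toward consensus; the estimates are invented:

```python
def round_summary(estimates: list[float]) -> tuple[float, float]:
    """Summarize one round of anonymous estimates: mean and relative spread."""
    mean = sum(estimates) / len(estimates)
    spread = (max(estimates) - min(estimates)) / mean
    return mean, spread

# Hypothetical rounds of anonymous effort estimates (labor-hours) from four experts.
rounds = [
    [80, 150, 200, 120],   # round 1: wide disagreement, so discuss assumptions
    [110, 140, 160, 130],  # round 2: converging
    [125, 135, 140, 130],  # round 3: close enough to accept
]

for i, estimates in enumerate(rounds, start=1):
    mean, spread = round_summary(estimates)
    print(f"Round {i}: mean {mean:.0f} labor-hours, spread {spread:.0%}")
```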

The final article in this series will describe two additional estimation tips, along with several good practices for tracking your progress and learning how to plan and manage future projects more effectively.

Also read Project Management Best Practices, Part 1
Also read Project Management Best Practices, Part 3

Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles on our web site via a series of blog posts, whitepapers and webinars.  Karl Wiegers is an independent consultant and not an employee of Jama.  He can be reached at http://www.processimpact.com.  Enjoy these free requirements management resources.

Managing software projects is difficult under the best circumstances. The project manager must balance competing stakeholder interests against the constraints of limited resources and time, ever-changing technologies, and challenging demands from high-pressure people. Project management is a juggling act, with too many balls in the air at once.

Unfortunately, many new project managers receive little training in how to do the job. Anyone can learn to draw a Gantt chart, but effective project managers also rely on the savvy that comes from experience. Learning survival tips from people who’ve already done their tours of duty in the project management trenches can save you from learning such lessons the hard way.

This three-part series, adapted from my book Practical Project Initiation: A Handbook with Tools (Microsoft Press, 2007), introduces twenty-one valuable practices that can help both rookie and veteran project managers do a better job with less pain. The practices are organized into five categories:

  1. Laying the foundation.
  2. Planning the project.
  3. Estimating the work.
  4. Tracking your progress.
  5. Learning for the future.

When initiating a new project, study this list of practices to see which ones would be valuable contributors to that project. Build the corresponding activities into your thinking and plans. Recognize, though, that none of these practices will be silver bullets for your project management problems. Also, remember that even “best” practices are situational. They need to be selectively and thoughtfully applied only where they will add value to the project.

Laying the Foundation

Practice #1: Define project success criteria. At the beginning of the project, make sure the stakeholders share a common understanding of how they’ll determine whether this project is successful. Begin by identifying your stakeholders and their interests and expectations. Next, define some clear and measurable business goals. Some examples are:

  • Increasing market share by a certain amount by a particular date.
  • Reaching a specified sales volume or revenue.
  • Achieving certain customer satisfaction measures.
  • Saving money by retiring a high-maintenance legacy system.

These business goals should imply specific project success criteria, which again should be measurable and trackable. These goals could include achieving schedule and budget targets, delivering committed functionality that satisfies acceptance tests, complying with industry standards or government regulations, or achieving specific technology milestones. The business objectives define the overarching goal. It doesn’t matter if you deliver to the specification on schedule and budget if those factors don’t clearly align with business success.

Not all of these defined success criteria can be your top priority. You’ll have to make some thoughtful tradeoff decisions to be sure that you satisfy your most important priorities. If you don’t define clear priorities for success, team members can work at cross-purposes. Chapter 4 of Practical Project Initiation presents a tutorial on defining project success criteria.

Practice #2: Identify project drivers, constraints, and degrees of freedom. Every project must balance its functionality, staffing, budget, schedule, and quality objectives. Define each of these five project dimensions as either a constraint within which you must operate, a driver strongly aligned with project success, or a degree of freedom you can adjust within some stated bounds. I explain this idea more fully in my book Creating a Software Engineering Culture (Dorset House, 1996).

I’m afraid I have bad news: not all factors can be constraints, and not all can be drivers. If you are given a defined feature set that must be delivered with zero defects by a specific date by a fixed team size working on a fixed budget, you will most likely fail. An overconstrained project leaves the project manager with no way to deal with requirement changes, staff turnover or illness, risks that materialize, or any other unexpected occurrences.

I once heard a senior manager and a project leader debate how long it would take to deliver a planned new large software system. The project leader’s top-of-the-head guess was four times as long as the senior manager’s stated goal of six months. The project leader’s response to the senior manager’s pressure for the much shorter schedule was simply, “Okay.” A better response would have been to negotiate a realistic outcome through a dialogue:

  • Does something drastic happen if we don’t deliver in six months (schedule is a constraint), or is that just a desirable target date (schedule is a driver)?
  • If the six months is a firm constraint, what subset of the requested functionality do you absolutely need delivered by then? (Features are a degree of freedom.)
  • Can I get more people to work on it? (Staff is a degree of freedom.)
  • Do you care how well it works? (Quality is a degree of freedom.)
  • Can I get more funding to outsource part of the project work? (Cost is a degree of freedom.)

Practice #3: Define product release criteria. Early in the project, decide what criteria will indicate whether the product is ready for release. Some examples of possible release criteria are:

  • There are no open high-priority defects.
  • The number of open defects has decreased for X weeks, and the estimated number of residual defects is acceptable.
  • Performance goals are achieved on all target platforms.
  • Specific required functionality is fully operational.
  • Quantitative reliability goals are satisfied.
  • Specified legal, contractual, or regulatory goals are met.
  • Customer acceptance criteria are satisfied.

Whatever criteria you choose should be realistic, objectively measurable, documented, and aligned with what “quality” means to your customers. Decide early on how you will tell when you’re done, track progress toward your goals, and stick to your guns if confronted with pressure to ship before the product is ready for prime time. See Chapter 5 of Practical Project Initiation for more about defining product release criteria.
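
As a toy illustration of making release criteria objectively checkable, the following Python sketch evaluates criteria like those listed above against hypothetical status values; in practice the numbers would come from your defect tracker and test results:

```python
# Hypothetical end-of-project status values.
status = {
    "open_high_priority_defects": 0,
    "weeks_of_declining_open_defect_count": 5,
    "performance_goals_met_on_all_platforms": True,
    "acceptance_tests_passed": True,
}

criteria = {
    "No open high-priority defects": status["open_high_priority_defects"] == 0,
    "Open defect count declining for >= 4 weeks": status["weeks_of_declining_open_defect_count"] >= 4,
    "Performance goals met on all target platforms": status["performance_goals_met_on_all_platforms"],
    "Customer acceptance criteria satisfied": status["acceptance_tests_passed"],
}

for name, met in criteria.items():
    print(f"[{'x' if met else ' '}] {name}")
print("Ready to release" if all(criteria.values()) else "Not ready to release")
```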

Practice #4: Negotiate achievable commitments. Despite pressure to promise the impossible, never make a commitment you know you can’t keep. Engage in good-faith negotiations with customers, managers, and team members to agree on goals that are realistically achievable. Negotiation is required whenever there’s a gap between the schedule or functionality the key project stakeholders demand and your best prediction of the future as embodied in project estimates.

Principled negotiation involves four precepts, as described in Getting to Yes by Roger Fisher, William Ury, and Bruce Patton (Penguin USA, 1991):

  • Separate the people from the problem.
  • Focus on interests, not positions.
  • Invent options for mutual gain.
  • Insist on using objective criteria.

Any data you have from previous projects will strengthen your negotiating position, especially because the person with whom you’re negotiating likely has no data at all. However, there’s no real defense against truly unreasonable people.

Plan to renegotiate commitments when project realities (such as staff, budget, or deadlines) change, unanticipated problems arise, risks materialize, or new requirements are added. No one likes to have to modify his commitments. But if the reality is that the initial commitments won’t be achieved, let’s not pretend that they will right up until the moment of disappointing truth.

Planning the Project

Practice #5: Write a plan. Some people believe the time spent writing a plan could be better spent writing code, but I don’t agree. The hard part isn’t writing the plan. The hard part is doing the planning—thinking, negotiating, balancing, asking, listening, and thinking some more. Actually writing the plan is mostly transcription at that point. The time you spend analyzing what it will take to solve the problem will reduce the number of surprises you encounter later in the project. A useful plan is much more than just a schedule or task list. It also includes:

  • Staff, budget, and other resource estimates and plans.
  • Team roles and responsibilities.
  • How you will acquire and train the necessary staff.
  • Assumptions, dependencies, and risks.
  • Target dates for major deliverables.
  • Identification of the software development life cycle that the project will follow.
  • How you will track and monitor the project.
  • Metrics that you’ll collect and analyze.
  • How you will manage any subcontractor relationships.

Your organization should adopt a standard software project management plan template, which each project can tailor to best suit its needs. You might start with the project management plan template available at www.ProjectInitiation.com. Adjust and shrink this template to suit the nature and size of your own projects.

If you commonly tackle different kinds of projects, such as major new product development projects as well as small enhancements, I suggest you adopt a separate project plan template for each project class. The project plan should be no longer or more elaborate than necessary to make sure you can successfully execute the project. One page might suffice in some cases. But always write a plan.

Practice #6: Decompose tasks to inch-pebble granularity. Inch-pebbles are miniature milestones (get it?). Breaking large tasks into multiple small tasks helps you estimate them more accurately, reveals work activities you might not have thought of otherwise, and permits more accurate, fine-grained status tracking. Select inch-pebbles of a size that you feel you can estimate accurately. I feel most comfortable with inch-pebbles that represent tasks of about five to fifteen labor-hours. Overlooked tasks are a common contributor to schedule slips. Breaking large problems into smaller bits reveals more details about the work that must be done and improves your ability to create accurate estimates. You can track progress based on the number of inch-pebbles that the team has completed at any given time, compared to those you planned to have done by that time.

Part 2 of this series will continue with additional practices for planning the project and several tips for preparing more realistic estimates.

Also read Project Management Best Practices, Part 2
Also read Project Management Best Practices, Part 3

Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles on our web site via a series of blog posts, whitepapers and webinars.  Karl Wiegers is an independent consultant and not an employee of Jama.  He can be reached at http://www.processimpact.com.  Enjoy these free requirements management resources.

High-performance projects have effective processes for all of the requirements engineering components: elicitation, analysis, specification, validation, and management. To facilitate these processes, every organization needs a collection of appropriate process assets. A process encompasses the actions you take and the deliverables you produce; process assets help team members perform those processes consistently and effectively, understand the steps they should follow, and know which work products they're expected to create.

Table 1 identifies some valuable process assets for requirements engineering. No software process rule book says that you need all of these items, but they will all assist your requirements-related activities. Procedures should be only as long as necessary for team members to perform the tasks consistently and effectively. They need not be separate documents. For instance, an overall requirements management process could include the change control procedure, status tracking procedure, and impact analysis checklist.

Following are brief descriptions of each of these process assets. Many of these are available at http://www.processimpact.com/goodies.shtml. Keep in mind that each project should tailor the organization’s assets to best suit its needs.

Table 1. Key process assets for requirements development and requirements management.

Requirements Development
  1. Requirements development process
  2. Requirements allocation procedure
  3. Requirements prioritization procedure
  4. Vision and scope template
  5. Use-case template
  6. Software requirements specification template
  7. SRS and use-case defect checklists
Requirements Management
  1. Requirements management process
  2. Change control process
  3. Requirements status tracking procedure
  4. Change control board charter
  5. Requirements change impact analysis checklist and template
  6. Requirements traceability matrix template

Requirements Development Process Assets

Requirements development process. This process describes how to identify stakeholders, user classes, and product champions in your domain. It should address how to plan the elicitation activities, including selecting appropriate elicitation techniques, identifying participants, and estimating the effort and calendar time required for elicitation. The process describes the various requirements documents and models your project is expected to create and points the reader toward appropriate templates. The requirements development process also should identify the steps the project should perform for requirements analysis and validation.

Requirements allocation procedure. Allocating high-level product requirements to specific subsystems (including people) is necessary when developing systems that include both hardware and software components or complex products that contain multiple software subsystems. Allocation takes place after the system-level requirements are specified and the system architecture has been defined. This procedure describes how to perform these allocations to ensure that functionality is assigned to the appropriate components. It also describes how you will trace allocated requirements back to their parent system requirements and to related requirements in other subsystems.

Requirements prioritization procedure. To sensibly reduce scope or to accommodate added requirements within a fixed schedule, we need to know which planned system capabilities have the lowest priority. I’ve developed a spreadsheet tool for prioritization (at http://www.processimpact.com/goodies.shtml) that incorporates the value provided to the customer, the relative technical risk, and the relative cost of implementation for each feature, use case, or functional requirement.
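
As a rough illustration of how such a calculation might combine those three factors, consider the following sketch. The weighting here is a simplified assumption, not the exact formula used in the spreadsheet tool, and the feature names and ratings are invented:

```python
# Illustrative only: rank features by relative value, cost, and risk.
# The real spreadsheet tool may combine and weight these factors differently.
def priority_score(value, cost, risk, cost_weight=1.0, risk_weight=0.5):
    """value, cost, and risk are relative ratings, e.g. on a 1-to-9 scale."""
    return value / (cost_weight * cost + risk_weight * risk)

features = {             # (value, cost, risk) -- invented example ratings
    "Print invoice":   (9, 3, 2),
    "Export to PDF":   (5, 6, 4),
    "Customize theme": (3, 2, 1),
}
for name, ratings in sorted(features.items(),
                            key=lambda kv: priority_score(*kv[1]),
                            reverse=True):
    print(f"{name}: {priority_score(*ratings):.2f}")
```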

Vision and scope template. Step 1 for managing the dreaded specter of scope creep is to document the project’s scope someplace, so we can judge whether proposed new requirements are in or out of scope. The vision and scope document is a concise, high-level description of the new product’s business requirements. It delineates what’s in and what’s out, thereby providing a reference for making decisions about requirements priorities and changes.

Use-case template. The use-case template provides a standard way to describe tasks that users need to perform with a software system. A use-case definition includes a brief description of the task, descriptions of alternative behaviors and known exceptions that must be handled, and additional information about the task.

Software requirements specification template. The SRS template provides a structured, consistent way to organize the functional and nonfunctional requirements for whatever product you’re building. Consider adopting more than one template to accommodate the different types or sizes of projects your organization undertakes. This can reduce the frustration that arises when a “one size fits all” template or procedure isn’t suitable for your project.

SRS and use-case defect checklists. Formal inspection of requirements documents is a powerful software quality technique. An inspection defect checklist identifies many of the errors commonly found in requirements documents. Use the checklist during the inspection’s preparation stage to focus your attention on common problem areas.

Requirements Management Process Assets

Requirements management process. This process describes the actions a project team takes to deal with changes, distinguish versions of the requirements documentation, track and report on requirements status, and accumulate traceability information. The process should list the attributes to include for each requirement, such as priority, predicted stability, and planned release number. It should also describe the steps required to approve the SRS and establish the requirements baseline.
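
To make the attribute list concrete, here is a minimal sketch of what a single requirement record might hold. The attributes follow the examples in the text; the field names and status values are shown only for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):           # typical status values, for illustration
    PROPOSED = "proposed"
    APPROVED = "approved"
    IMPLEMENTED = "implemented"
    VERIFIED = "verified"
    DELETED = "deleted"
    REJECTED = "rejected"

@dataclass
class Requirement:
    req_id: str
    text: str
    priority: str = "medium"           # e.g., high / medium / low
    predicted_stability: str = "high"  # how likely the requirement is to change
    planned_release: str = "1.0"
    status: Status = Status.PROPOSED
```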

Change control process. A practical change control process can reduce the chaos inflicted by endless, uncontrolled requirements changes. The change control process defines the way that a new requirement or a modification to an existing requirement is proposed, communicated, evaluated, and resolved. A problem-tracking tool facilitates change control, but remember that a tool is not a substitute for a process.

Requirements status tracking procedure. Requirements management includes monitoring and reporting the status of each functional requirement. You’ll need to use a database or a commercial requirements management tool to track the status of the many requirements in a large system. This procedure should also describe the reports you can generate to view the status of the collected requirements at any time.
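
The status reports themselves can be simple. Here is a minimal sketch, assuming you can export each requirement's status from your database or tool as a plain string:

```python
from collections import Counter

def status_report(statuses):
    """statuses: an iterable of status strings exported from your tool."""
    counts = Counter(statuses)
    total = sum(counts.values())
    for status, n in counts.most_common():
        print(f"{status:12s} {n:4d}  ({100 * n / total:.0f}%)")

status_report(["approved", "implemented", "implemented", "verified", "proposed"])
```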

Change control board charter. The change control board (CCB) is the body of stakeholders that decides which proposed requirements changes to approve, which to reject, and in which product release or iteration each approved change will be incorporated. The CCB charter describes the composition, function, and operating procedures of the CCB. It’s worth taking the time to describe how a group of key decision makers will collaborate.

Requirements change impact analysis checklist and template. Estimating the cost and other impacts of a proposed requirement change is a key step in determining whether to approve the change. Impact analysis helps the CCB make smart decisions. An impact analysis checklist helps you contemplate the possible tasks, side effects, and risks associated with implementing a specific requirement change. An accompanying worksheet provides a simple way to estimate the labor for the tasks.

Requirements traceability matrix template. The requirements traceability matrix lists all the functional requirements, the design components and code modules that address each requirement, and the test cases that verify its correct implementation. The traceability matrix should also identify the parent system requirement, use case, business rule, or other source from which each functional requirement was derived.
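
One simple way to picture such a matrix is as a mapping from each functional requirement to its source and to the downstream work products that realize and verify it. The identifiers below are invented; this is a sketch, not the structure any particular tool uses:

```python
# Illustrative traceability data for a single functional requirement.
traceability = {
    "FR-17": {
        "source": "UC-3 (Place Order)",   # parent use case or system requirement
        "design": ["OrderService class"],
        "code":   ["order_service.py"],
        "tests":  ["test_place_order_accepted", "test_place_order_invalid_card"],
    },
}

def unverified_requirements(matrix):
    """Functional requirements that no test case traces to yet."""
    return [req for req, links in matrix.items() if not links["tests"]]

print(unverified_requirements(traceability))   # -> []
```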

Simply passing out a bunch of process documents to your team members is no guarantee of success. Well-meaning improvement leaders sometimes get carried away, creating much more process documentation than they need. Your process materials should be light enough to be unintimidating, practical, and effective—but no lighter.


The first article in this two-part series suggested some specific metrics that can help any software-developing organization better understand how it operates and the results it gets. This article will present some suggestions for creating a healthy measurement culture in the organization and give you several practical tips for metrics success.

Creating a Measurement Culture

A software practitioner’s first reaction to a new metrics program often is fear. People are afraid the data will be used against them, that it will take too much time to collect and analyze the data, or that the team will fixate on getting the numbers “right” rather than on building good software. Creating a healthy software measurement culture and overcoming such resistance will take diligent, congruent steering by managers who are committed to measurement and sensitive to these concerns.

To help you overcome the fear, educate your team about the metrics program. Tell them why measurement is important and how you intend to use the data. Make it clear that you will never use metrics data either to punish or reward individuals—and then make sure you don't. A competent software manager does not need metrics data from individuals to distinguish the effective team contributors from those who are struggling.

It’s also important to respect the privacy of the data. It’s harder to misuse data if managers don’t know who the data came from. Classify each base measure you collect into one of these three privacy levels:

• Individual: Only the individual who collected the data about his own work knows it is his data, although it may be pooled with data from other individuals to provide an overall project profile.

• Project Team: Data is private to the members of the project team, although it may be pooled with data from other projects to provide an overall organizational profile.

• Organization: Data may be shared throughout the organization.

As an example, if you’re collecting work effort distribution data, the number of hours each individual spends working on every development or maintenance phase activity in a week is private to that individual. The total distribution of hours from all team members is private to the project team, and the distribution across all projects is public to everyone in the organization. View and present the data items that are private to individuals only in the aggregate or as averages over the group.

Make It a Habit

Software measurement need not consume a great deal of time. Commercial tools are available for measuring code size in many programming languages. Activities such as daily time tracking of project work should become a habit for each developer, not a burden. Commercial problem-tracking tools facilitate counting defects and tracking their status, but this requires the discipline to report all identified defects and to manage them with the tool. Develop simple tracking forms, scripts, and reporting tools to reduce the overhead of collecting and reporting the data. Use spreadsheets and charts to track and report on the accumulated data at regular intervals.
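
As one example of such a script, the following sketch summarizes a team's work effort distribution from a CSV of time entries, reporting only pooled totals so that individual data stays private. The column names are assumptions about how you might lay out the log, not a prescribed format:

```python
# Summarize work effort distribution from a time log CSV with columns
# assumed to be: date, person, activity, hours. Only pooled percentages
# are reported, so no individual's data is exposed.
import csv
from collections import defaultdict

def effort_distribution(path):
    hours_by_activity = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            hours_by_activity[row["activity"]] += float(row["hours"])
    total = sum(hours_by_activity.values()) or 1.0
    return {activity: round(100 * hours / total, 1)
            for activity, hours in sorted(hours_by_activity.items())}

# Example: effort_distribution("timelog.csv") might return
# {"coding": 42.0, "design": 18.5, "requirements": 11.0, "testing": 28.5}
```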

Tips for Metrics Success

Despite the challenges, many software organizations routinely measure aspects of their work. If you wish to join this club, keep the following tips in mind.

Start Small. Because developing your measurement culture and infrastructure will take time, consider using the Goal-Question-Metric method described in the first article in this series to select a basic set of initial metrics. Once your team becomes used to the idea of measurement and you’ve established some momentum, you can introduce new metrics that will provide the additional information you need to manage your projects and organization effectively.

A risk with any metrics activity is dysfunctional measurement, in which participants alter their behavior to optimize something that is being measured instead of focusing on the real organizational goal. As an illustration, if you're measuring productivity but not quality, expect some developers to change their programming style to visually expand the volume of code they produce, or to code quickly without regard for defects. I can write code very fast if it doesn't have to run correctly. A balanced set of measurements helps prevent dysfunctional behavior by monitoring the team's performance in several complementary dimensions that collectively lead to project success.

Explain Why. Be prepared to explain to a skeptical team why you wish to measure the items you choose. They have the right to understand your motivations and why you think the data will be valuable. Be sure to use the data that is collected, rather than letting it rot in the dark recesses of a write-only database. Encourage team members to use the data themselves and to identify other measures that will help them improve their work. When my small software group at Kodak established a work effort metrics program long ago, we found it to be more helpful than we expected. One team member expressed it well: “It was interesting to compare how I really spent my time with how I thought I spent my time and how I was supposed to spend my time.”

Share the Data. Your team will be more motivated to participate in the measurement program if you tell them how you’ve used the data. Share summaries and trends with the team at regular intervals and get them to help you understand what the data is saying. Let team members know whenever you’ve been able to use their data to answer a question, make a prediction, or assist your management efforts.

Define Metrics and Procedures. It’s more difficult and time-consuming to precisely define the base and derived measures than you might think. However, if you don’t pin these definitions down, participants are likely to interpret and apply them in different ways. As a result, you might end up combining apples and oranges, which yields fruit salad instead of meaningful insights. Define what you mean by a “line of code” or a “user story”, spell out which activities go into the various work effort categories, and agree on what a “defect” is. Write clear, succinct procedures for collecting and reporting the measures you select.
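
For instance, one possible operational definition of a "line of code" is a non-blank, non-comment source line. The sketch below implements that particular definition; the point is not which definition you pick but that everyone counts the same way:

```python
def count_logical_lines(path, comment_prefix="#"):
    """Count non-blank, non-comment lines in one source file.
    This is just one possible definition of 'line of code'."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f
                   if line.strip() and not line.strip().startswith(comment_prefix))
```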

Understand Trends. Trends in the data over time are more significant than individual data points. Some trends may be subject to multiple interpretations. Did the number of defects found during system testing decrease because the latest round of testing was ineffective, or did fewer defects slip past development into the testing process? Make sure you understand what the data is telling you, but don’t rationalize your observations away.

Measurement alone will not improve your software performance. Measurement provides information that will help you focus and evaluate your software process improvement efforts. Link your measurement program to your organizational goals and process improvement program. Use the process improvement initiative to choose improvements, use the metrics program to track progress toward your improvement goals, and use your recognition program to reward desired behaviors.

Also read A Software Metrics Primer, Part 1


My friend Nicole was a quality manager at a large company recognized for its software process improvement and measurement success. I once heard her say, “Our latest code inspection only found two major defects, but we expected to find five, so we’re trying to figure out what’s going on.” Few organizations have deep enough insight into their software development process to make such precise statements. Nicole’s organization spent years establishing effective processes, measuring critical aspects of its work, and applying those measurements on well-managed projects that develop high-quality products.

Software measurement is a challenging but essential component of a healthy and highly capable software engineering culture. This two-part series, adapted from my book Practical Project Initiation: A Handbook with Tools (Microsoft Press, 2007), describes some basic software measurement principles and suggests some metrics that can help you understand and improve the way your organization and its projects operate.

Why Measure Software

Software projects are notorious for running over schedule and budget and having quality problems to boot. Software measurement lets you quantify your schedule and budget performance, work effort, product size, product quality, and project status. If you don’t measure your current performance and use the data to improve your future work estimates, those estimates will forever remain guesses. Because today’s current data becomes tomorrow’s historical data, it’s never too late to begin recording key information about your project.

You can’t track project status meaningfully unless you know the actual effort and time spent on each task compared to your plans and the progress you’ve accomplished as a result. You can’t sensibly decide whether your product is stable enough to ship unless you’re tracking the rates at which your team is finding and fixing defects. You can’t assess how well your new development processes are working without some measure of your current performance and a baseline to compare against. Metrics help you better control your software projects and better understand how your organization works.

What to Measure

You can measure many aspects of your software products, projects, and processes. The trick is to select a small, balanced set of metrics that will help your organization track progress toward its goals. The Goal-Question-Metric (GQM) method promoted by Professor Victor Basili of the University of Maryland is an effective technique for selecting appropriate metrics to meet your needs. With GQM, you begin by selecting a few project or organizational goals. State the goals as quantitatively and measurably as you can. They might include targets such as the following:

  • Reduce maintenance costs by 50% within one year.
  • Improve schedule estimation accuracy to within 10% of actuals.
  • Reduce system testing time by three weeks on the next project.
  • Reduce the time to close a defect by 40% within three months.

For each goal, think of questions you would have to answer to judge whether your team is reaching that goal. If your goal was “reduce maintenance costs by 50% within one year,” these might be some appropriate questions:

  • How much do we spend on maintenance each month?
  • What fraction of our maintenance costs do we spend on each application we support?
  • How much do we currently spend on adaptive maintenance (adapting to a changed environment), perfective maintenance (adding enhancements), and corrective maintenance (fixing defects)?

Finally, identify metrics that will provide answers to each question. Some of these will be simple items you can count directly, such as the total budget spent on maintenance. These are called base measures. Other metrics are derived measures, which are calculated from one or more base measures, often as simple sums or ratios. To answer the final question in the previous list, you must know the hours spent on each of the three maintenance activity types and the total maintenance cost over a period of time.
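
As a small sketch of how base measures roll up into the derived measures that answer the question, consider the following; all of the numbers are invented for illustration:

```python
# Base measures (invented data): labor hours logged against each maintenance
# type, plus the total maintenance spend for the same period.
maintenance_hours = {"adaptive": 120, "perfective": 310, "corrective": 95}
total_maintenance_cost = 52_500

# Derived measures: the fraction of maintenance effort and cost per type.
total_hours = sum(maintenance_hours.values())
for mtype, hours in maintenance_hours.items():
    fraction = hours / total_hours
    print(f"{mtype:10s} {100 * fraction:5.1f}% of effort, "
          f"~${fraction * total_maintenance_cost:,.0f} of cost")
```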

Note that I expressed several goals in terms of a percentage change from the current level. The first step of a metrics program is to establish a current baseline, so you can track progress against it and toward your goals. I prefer to establish relative improvement goals (“reduce maintenance by 50%”) rather than absolute goals (“reduce maintenance to 20% of total effort”). You can probably reduce maintenance to 20 percent of total organizational effort within a year if you are currently at 30 percent, but not if you spend 80 percent of your effort on maintenance today.

Your balanced metrics set should eventually include items relating to the six basic dimensions of software measurement: size, time, effort, cost, quality, and status. Table 1 suggests some metrics that individual developers, project teams, and development organizations should consider collecting. Some of these metrics relate to products, others to projects, and the remainder to processes. You should track most of these metrics over time. For example, your routine project tracking activities should monitor the percentage of requirements implemented and tested, the number of open and closed defects, and so on. I recommend including the following base measures fairly early in your metrics program:

  1. Product size: Count lines of code, function points, object classes, number of requirements, or GUI elements.
  2. Estimated and actual duration: Count in units of calendar time.
  3. Estimated and actual effort: Count in units of labor hours.
  4. Work effort distribution: Record the time spent in various development and maintenance activities.
  5. Requirements status: Track the number of requirements that are proposed, approved, implemented, verified, deleted, and rejected.
  6. Defects: Track the number, type, severity, and status of defects found by testing, peer reviews, and customers.

Table 1: Appropriate Metrics for Software Developers, Teams, and Organizations

Individual Developers
  1. Work effort distribution
  2. Estimated and actual task duration and effort
  3. Code covered by unit testing
  4. Number and type of defects found by unit testing
  5. Number and type of defects found by peer reviews
Project Teams
  1. Product size
  2. Estimated and actual duration between major milestones
  3. Estimated and actual staffing levels
  4. Number of tasks planned and completed
  5. Work effort distribution
  6. Requirements status
  7. Requirements volatility
  8. Number of defects found by integration and system testing
  9. Number of defects found by peer reviews
  10. Defect status distribution
  11. Percentage of tests passed
  12. Productivity (also called velocity)
Development Organization
  1. Overall project cycle time
  2. Planned and actual schedule performance
  3. Planned and actual budget performance
  4. Schedule and effort estimating accuracy
  5. Released defect levels
  6. Reuse effectiveness

It’s important to remember that even if you conclude that all of the metrics in Table 1 are valuable, you cannot possibly begin a metrics program that encompasses so many dimensions of measurement. It would overwhelm the practitioners, and you’d likely end up with far more data than you’re prepared to analyze and use to steer the organization’s development activities.

The second article in this series will continue with some recommendations about the importance of creating a measurement culture in the organization, as well as presenting five tips for metrics success.

Read A Software Metrics Primer, Part 2.


You might have noticed that requirements activities on projects sometimes lead to adversarial relationships. Customers don’t always feel that business analysts have their best interests at heart. Product managers get frustrated when customers demand never-ending changes in requirements but don’t want the delivery date to slip. Testers don’t appreciate having to redo their work because no one told them about updates to the requirements. Developers bristle when the project manager holds their feet to the fire to meet schedule despite piling on new features through scope creep. It’s a wonder anyone still speaks to their colleagues at the end of the project.

If you’re interested in pursuing better requirements processes, you have to respect the many cross-connections between the software development group and numerous external stakeholders. This article identifies some of those important interfaces and suggests ways to engage such stakeholders in a collaborative approach toward more successful projects in the future.

Figure 1. Requirements-related interfaces between software development and other stakeholders.

Figure 1 shows some of the project stakeholders that can interface with a software development group and some of the contributions they make to a project’s requirements engineering activities. Explain to your contact people in each functional area the information and contributions you need from them if the product development effort is to succeed. Agree on the form and content of key communication interfaces between development and other functional areas, such as a system requirements specification or a marketing requirements document. Too often, important project documents are written from the author’s point of view without full consideration of the information that the readers of those documents need.

On the flip side, ask the other organizations what they need from the development group to make their jobs easier. What input about technical feasibility will help marketing plan their product concepts better? What requirements status reports will give management adequate visibility into project progress? What collaboration with system engineering will ensure that system requirements are properly partitioned among software and hardware subsystems? Strive to build collaborative relationships between development and the other stakeholders of the requirements process.

When the software development group changes its requirements processes, the interfaces it presents to other project stakeholder communities also change. People don’t like to be forced out of their comfort zone, so expect some resistance to your proposed requirements process changes. Understand the origin of the resistance so that you can both respect it and defuse it.

Much resistance comes from fear of the unknown. To reduce the fear, communicate your process improvement rationale and intentions to your counterparts in other areas. Explain the benefits that these other groups will realize from the new process. Make sure they understand the pain that projects have experienced in the past because of shortcomings in your current processes. The prospect of eliminating pain is a great motivator for change. When seeking collaboration on process improvement, begin from this viewpoint: “Here are the problems we’ve all experienced. We think that these process changes will help solve those problems. Here’s what we plan to do, this is the help we’ll need from you, and this is how our work will help us both.”

Here are some forms of resistance, both active and passive, that you might encounter:

• A change control process might be viewed as a barrier thrown up by development to make it harder to get changes made. In reality, a change control process is a structure, not a barrier. It permits well-informed people to make good business decisions. The software team is responsible for ensuring that the change process really does work. If new processes don’t yield better results, people will find ways to work around them—and they probably should.

• Some developers view writing and reviewing requirements documents as bureaucratic time-wasters that prevent them from doing their “real” work of writing code. If you can explain the high cost of continually rewriting the code while the team tries to figure out what the system should do, developers and managers will better appreciate the need for good requirements.

• If customer-support costs aren’t linked to the development process, the development team might not be motivated to change how they work because they don’t suffer the consequences of poor product quality.

• If one objective of improved requirements processes is to reduce support costs by creating higher-quality products, the support manager might feel threatened. Who wants to see his empire shrink?

• Busy customers sometimes claim that they don’t have time to spend working on the requirements. Remind them of earlier projects that delivered unsatisfactory systems and the high cost of responding to customer input after delivery. You’re going to get the customer input eventually. It’s a lot cheaper and less painful (and people are in a better mood) to get it earlier rather than later.

Anytime people are asked to change the way they work, the natural reaction is to ask, “What’s in it for me?” However, process changes don’t always result in fabulous and immediate benefits for every individual involved. A better question—and one that any process improvement leader must be able to answer convincingly—is, “What’s in it for us?” Every process change should offer the prospect of clear benefits to the project team, the development organization, the company, the customer, or the universe. You can often sell these benefits in terms of correcting the known shortcomings of the current ways of working that lead to less than desirable business outcomes.
