

In part one and part two of this series, adapted from my book Practical Project Initiation, I’ve described fifteen practices the project manager can apply to lay the foundation for a successful project, plan the project, and estimate the work to be done. In this final article I share two additional estimation practices, three good practices for tracking your progress throughout the project, and one practice for learning how to execute your future projects more successfully.

Estimating the Work (continued)

Practice #16: Use estimation tools. Many commercial tools are available to help project managers estimate entire projects. Based on equations derived from large databases of actual project experience, these tools can give you a spectrum of possible schedule and staff allocation options. They’ll also help you avoid the “impossible region,” combinations of product size, effort, and schedule where no known project has been successful. The tools incorporate a number of “cost drivers” you can adjust to make the tool more accurately model your project, based on the technologies used, the team’s experience, and other factors. You can compare the estimates from the tools with the bottom-up estimates generated from a work breakdown structure. Reconcile any major disconnects so you can generate the most realistic overall estimate.
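To make the parametric idea concrete, here is a minimal sketch of how a COCOMO II-style tool turns size and cost drivers into an effort estimate and how you might reconcile it against a bottom-up figure. The constants, driver values, and 25% reconciliation threshold below are illustrative assumptions, not calibrated values from any real tool or project.

```python
# Illustrative parametric estimator in the spirit of COCOMO II.
# Constants are nominal textbook values, not calibrated to a real project.

def parametric_effort(ksloc, cost_drivers):
    """Effort in person-months from size (KSLOC) and cost-driver multipliers."""
    A, E = 2.94, 1.10  # nominal coefficient and scale exponent (illustrative)
    multiplier = 1.0
    for value in cost_drivers.values():
        multiplier *= value  # e.g., team experience, product complexity
    return A * (ksloc ** E) * multiplier

# Hypothetical project: 40 KSLOC, experienced team, somewhat complex product.
top_down = parametric_effort(40, {"team_experience": 0.85, "complexity": 1.15})
bottom_up = 110.0  # person-months summed from the WBS tasks (assumed)

# Reconcile: a large gap means one of the estimates rests on a bad assumption.
if abs(top_down - bottom_up) / bottom_up > 0.25:
    print(f"Reconcile: top-down {top_down:.0f} PM vs. bottom-up {bottom_up:.0f} PM")
```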

Practice #17: Plan contingency buffers. Projects never go precisely as planned. The prudent project manager incorporates budget and schedule contingency buffers at the end of phases, dependent task sequences, or iterations to accommodate the unforeseen. Use your project risk analysis to estimate the possible schedule impact if several of the risks materialize, then build that projected risk exposure into your schedule as a contingency buffer. An even more sophisticated approach is critical chain analysis, a technique that pools the uncertainties in estimates and risks into a rational overall contingency buffer. Chapter 10 of Practical Project Initiation is all about contingency buffers.
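As a minimal sketch of that risk-exposure arithmetic (the risks, probabilities, and impacts below are invented for illustration):

```python
# Hypothetical risk list: exposure = probability x estimated schedule impact.
risks = [
    {"name": "key vendor delivers late", "probability": 0.3, "impact_weeks": 6},
    {"name": "requirements growth",      "probability": 0.5, "impact_weeks": 4},
    {"name": "staff turnover",           "probability": 0.2, "impact_weeks": 8},
]

# Sum the expected impacts to size the schedule contingency buffer.
buffer_weeks = sum(r["probability"] * r["impact_weeks"] for r in risks)
print(f"Schedule contingency buffer: {buffer_weeks:.1f} weeks")  # 5.4 weeks
```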

Your manager or customer might view these contingency buffers as padding, rather than as the sensible acknowledgment of reality that they are. To help persuade skeptics, point to unpleasant surprises on previous projects as a rationale for your foresight. If a manager elects to discard contingency buffers, he has tacitly absorbed all the risks that fed into the buffer and assumed that all estimates are perfect, no scope growth will occur, and no unexpected events will take place. Sound realistic to you? Of course not. I'd rather see us deal with reality, however unattractive, than live in Fantasyland.

Tracking Your Progress

Practice #18: Record actuals and estimates. Unless you record the actual effort or time spent on each project task and compare them to the estimates, your estimates will forever remain guesses. Someone once asked me where to get historical data to improve her ability to estimate future work. My answer was, “If you write down what actually happened today, that becomes historical data tomorrow.” It’s really not more complicated than that. Each individual can begin recording estimates and actuals, and the project manager should track these important data items on a project task or milestone basis. In addition to effort and schedule, you could estimate and track the size of the product, in terms of requirements, user stories, lines of code, function points, GUI screens, or other units that make sense for your project.
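To show how little machinery this takes, here is a minimal sketch of an estimates-versus-actuals log; the tasks and hours are hypothetical:

```python
# Hypothetical task log: write down the estimate, then record what happened.
tasks = [
    {"task": "design report module", "estimate_h": 16, "actual_h": 22},
    {"task": "code report module",   "estimate_h": 24, "actual_h": 30},
    {"task": "test report module",   "estimate_h": 12, "actual_h": 11},
]

# Relative error per task: today's record is tomorrow's historical data.
for t in tasks:
    error = (t["actual_h"] - t["estimate_h"]) / t["estimate_h"]
    print(f"{t['task']}: {error:+.0%}")
```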

Practice #19: Count tasks as complete only when they’re one hundred percent complete. We give ourselves a lot of partial credit for tasks we’ve begun but not yet fully completed: “I thought about the algorithm for that module in the shower this morning, and the algorithm is the hard part, so I’m probably about sixty percent done.” It’s difficult to accurately assess what fraction of a sizable task has actually been finished at a given moment.

One benefit of using inch-pebbles (see Practice #6 in Part 2 of this series) for task planning is that you can break a large activity into a number of small tasks (inch-pebbles) and classify each small task as either done or not done—nothing in between. Project status tracking is then based on the fraction of the tasks that are completed and their size, not the percentage completion of each task. If someone asks you whether a specific task is complete and your reply is, “It’s all done except…,” then it’s not done! Don’t let people “round up” their task completion status. Instead, use explicit criteria to determine whether an activity truly is completed.
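Here is a minimal sketch of that arithmetic with hypothetical inch-pebbles: each task is either done or not done, and status is the completed fraction of the total estimated size.

```python
# Hypothetical inch-pebbles: binary done/not-done, weighted by estimated size.
tasks = [
    {"name": "parse input file", "size_h": 8,  "done": True},
    {"name": "design algorithm", "size_h": 16, "done": False},  # "all done except..." = not done
    {"name": "write unit tests", "size_h": 8,  "done": True},
]

# No partial credit: only 100%-complete tasks contribute to project status.
done_h  = sum(t["size_h"] for t in tasks if t["done"])
total_h = sum(t["size_h"] for t in tasks)
print(f"Status: {done_h / total_h:.0%} of estimated work complete")  # 50%
```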

Practice #20: Track project status openly and honestly. An old riddle asks, “How does a software project become six months late?” The rueful answer is, “One day at a time.” The painful problems arise when the project manager doesn’t know just how far behind (or, occasionally, ahead of) plan the project really is. Surprise, surprise, surprise.

If you’re the PM, create a climate in which team members feel it is safe for them to report project status accurately. Run the project from a foundation of accurate, data-based facts, rather than from the misleading optimism that can arise from the fear of reporting bad news. Use project status information and metrics data to take corrective actions when necessary and to celebrate when you can. You can only manage a project effectively when you really know what’s done and what isn’t, what tasks are falling behind their estimates and why, and what problems, issues, and risks remain to be tackled.

The five major areas of software measurement are size, effort, time, quality, and status. It’s a good idea to define a few metrics in each of these categories. Instilling a measurement culture into an organization is not trivial. Some people resent having to collect data about the work they do, often because they’re afraid of how managers might use the measurements. The cardinal rule of software metrics is that management must never use the data collected to either reward or punish the individuals who did the work. The first time you do this will be the last time you can count on getting accurate data from the team members.

Learning for the Future

Practice #21: Conduct project retrospectives. Retrospectives (also called postmortems and post-project reviews) provide an opportunity for the team to reflect on how the last project, phase, or iteration went and to capture lessons learned that will help enhance your future performance. During such a review, identify the things that went well, so you can create an environment that enables you to repeat those success contributors. Also look for things that didn’t go so well, so you can change your approaches and prevent those problems in the future. In addition, think of events that surprised you. These might be risk factors to look for on the next project. Finally, ask yourself what you still don’t understand about the project, so you can try to learn how to execute future work better.

It’s important to conduct retrospectives in a constructive and honest atmosphere. Don’t make them an opportunity to assign blame for previous problems. Chapter 15 of Practical Project Initiation describes the project retrospective process and provides a worksheet to help you plan your next retrospective. It’s a good idea to capture the lessons learned from each retrospective exploration and share them with the entire team and organization. This is a way to help all team members, present and future, benefit from your experience.

The twenty-one project management best practices I’ve described in this series of articles won’t guarantee your project a great outcome. They will, however, help you get a solid handle on your project and ensure that you’re doing all you can to make it succeed in an unpredictable world.

Also read Project Management Best Practices, Part 1
Also read Project Management Best Practices, Part 2

Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles on our web site via a series of blog posts, whitepapers and webinars.  Karl Wiegers is an independent consultant and not an employee of Jama.  He can be reached at http://www.processimpact.com.  Enjoy these free requirements management resources.


The need for performing impact analysis is obvious for major enhancements. However, unexpected complications can work below the surface of even minor change requests. A consulting client of mine once had to change the text of a single error message in its product. What could be simpler? The product was available in both English and German language versions. There were no problems in English, but in German the new message exceeded the maximum character length allocated for error message displays in both the message box and a database. Coping with this apparently simple change request turned out to be much more work than the developer had anticipated when he promised a quick turnaround.

Impact analysis is a key aspect of responsible requirements management. It provides accurate understanding of the implications of a proposed change, which helps the team make informed business decisions about which proposals to approve. The analysis examines the proposed change to identify components that might have to be created, modified, or discarded and to estimate the effort associated with implementing the change. Skipping impact analysis doesn’t change the size of the task. It just turns the size into a surprise. Software surprises are rarely good news. Before a developer says, “Sure, no problem” in response to a change request, he or she should spend a little time on impact analysis. This article, adapted from my book Software Requirements, 2nd Edition (Microsoft Press, 2003), describes how the impact analysis activities might work.

Impact Analysis Procedure

The chairperson of the change control board (CCB) will typically ask a knowledgeable developer to perform the impact analysis for a specific change proposal. Impact analysis has three aspects:

  1. Understand the possible implications of making the change. Change often produces a large ripple effect. Stuffing too much functionality into a product can reduce its performance to unacceptable levels, as when a system that runs daily requires more than 24 hours to complete a single execution.
  2. Identify all the files, models, and documents that might have to be modified if the team incorporates the requested change.
  3. Identify the tasks required to implement the change, and estimate the effort needed to complete those tasks.

Figure 1 presents a checklist of questions designed to help the impact analyst understand the implications of accepting a proposed change. (You can download the checklists and templates described in this article from http://www.processimpact.com/goodies.shtml.) The checklist in Figure 2 contains prompting questions to help identify all of the software elements that the change might affect. Traceability data that links the affected requirement to other downstream deliverables helps greatly with impact analysis. As you gain experience using these checklists, modify them to suit your own projects.

Figure 1. Checklist of possible implications of a proposed change.

Figure 2. Checklist of possible software elements affected by a proposed change.

Following is a simple procedure for evaluating the impact of a proposed requirement change. Many estimation problems arise because the estimator doesn’t think of all the work required to complete an activity. Therefore, this impact analysis approach emphasizes comprehensive task identification. For substantial changes, use a small team—not just one developer—to do the analysis and effort estimation to avoid overlooking important tasks.

  1. Work through the checklist in Figure 1.
  2. Work through the checklist in Figure 2, using available traceability information. Some requirements management tools include an impact analysis report that follows traceability links and finds the system elements that depend on the requirements affected by a change proposal. (A minimal sketch of this traversal follows this list.)
  3. Use the worksheet in Figure 3 to estimate the effort required for the anticipated tasks. Most change requests will require only a portion of the tasks on the worksheet, but some could involve additional tasks.
  4. Total the effort estimates.
  5. Identify the sequence in which the tasks must be performed and how they can be interleaved with currently planned tasks.
  6. Determine whether the change is on the project’s critical path. If a task on the critical path slips, the project’s completion date will slip. Every change consumes resources, but if you can plan a change to avoid affecting tasks that are currently on the critical path, the change won’t cause the entire project to slip.
  7. Estimate the impact of the proposed change on the project’s schedule and cost.
  8. Evaluate the change’s priority by estimating the relative benefit, penalty, cost, and technical risk compared to other discretionary requirements.
  9. Report the impact analysis results to the CCB so that they can use the information to help them decide whether to approve or reject the change request.
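To illustrate step 2, here is a minimal sketch of following traceability links outward from a changed requirement. The items and links are hypothetical; a real requirements management tool would pull them from its trace database rather than a hard-coded dictionary.

```python
from collections import deque

# Hypothetical traceability links: item -> downstream deliverables.
traces = {
    "REQ-14":        ["design/report.md", "src/report.py"],
    "src/report.py": ["tests/test_report.py", "docs/user_guide.md"],
}

def affected_by(changed_item):
    """Breadth-first walk of the trace graph to collect affected elements."""
    seen, queue = set(), deque([changed_item])
    while queue:
        for downstream in traces.get(queue.popleft(), []):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return seen

print(sorted(affected_by("REQ-14")))  # everything to review against Figure 2
```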

In most cases, this procedure shouldn’t take more than a couple of hours to complete. This may seem like a lot of time to a busy developer, but it’s a small investment in making sure the project wisely invests its limited resources. If you can adequately assess the impact of a change without such a systematic evaluation, go right ahead; just make sure you aren’t stepping into quicksand. To improve your ability to estimate the impacts of future changes, compare the actual effort needed to implement each change with the estimated effort. Understand the reasons for any differences, and modify the impact estimation checklists and worksheet accordingly.

Figure 3. Estimating effort for a requirement change.

Money Down the Drain

Here’s a true story about what can happen if you don’t take the time to perform impact analysis before diving into implementing a significant change request. Two developers at the A. Datum Corporation estimated that it would take four weeks to add an enhancement to one of their information systems. The customer approved the estimate, and the developers set to work. After two months, the enhancement was only about half done and the customer lost patience: “If I’d known how long this was really going to take and how much it was going to cost, I wouldn’t have approved it. Let’s forget the whole thing.” In the rush to gain approval and begin implementation, the developers didn’t do enough impact analysis to develop a reliable estimate that would let the customer make an appropriate business decision. Consequently, the A. Datum Corporation wasted several hundred hours of work that could have been avoided by spending a few hours on an up-front impact analysis.

Impact Analysis Report Template

Figure 4 suggests a template for reporting the results from analyzing the potential impact of each requirement change. Using a standard template makes it easier for the CCB members to find the information they need to make good decisions. The people who will implement the change will need the analysis details and the effort planning worksheet, but the CCB needs only the summary of analysis results. As with all templates, try it and then adjust it to meet your project needs.

Figure 4. Impact analysis report template.

Requirements change is a reality for all software projects, but disciplined change-management practices can reduce the disruption that changes can cause. Improved requirements elicitation techniques can reduce the number of requirements changes, and effective requirements management will improve your ability to deliver on project commitments.


There has been no shortage of press surrounding the HealthCare.gov October 1st release. Much of the debate points fingers at individuals, often skewed by political leanings and party affiliation. Regardless of the political circus, there are important lessons to be learned here, as yet another glaring example of the importance of open, iterative collaboration in building products. This is not a new problem, nor is it unique to government; it is faced by some of the world’s largest companies. Both Microsoft and Apple have recently experienced less-than-stellar rollouts: the Surface, estimated as a $900 million mistake, and Apple Maps, which prompted a $30 billion drop in Apple’s stock value.

The fact is that when massively complex products are developed in a broken system with insufficient tools, the issues are amplified. In the fallout of the much-anticipated HealthCare.gov release, the typical reactions are taking place: blame the most senior person available and throw tons of last-minute resources at the problem in an attempt to fix it. None of this will work, because the damage has been done; the source of the problem is cultural and ingrained in the process. Anthony Wing Kosner’s piece in Forbes discusses one aspect of the problem, centered on the definition of the requirements themselves. Kosner notes that an estimated “40% of the defects that make it into the testing phase of enterprise software have their root cause in errors in the original requirements documents” and that government project requirements can be especially difficult to manage.

I have experience working on an implementation that suffered similar issues, and I am especially frustrated with the state of HealthCare.gov because, in the years since, I’ve been reading about the many initiatives to incorporate Agile or Lean concepts into government software products. There are instances where taking an open, collaborative approach has paid off; one great example is the implementation of Recovery.gov and FederalReporting.gov. The success stories incorporated iterative processes that enabled changes to be implemented more easily and effectively, as opposed to HealthCare.gov, whose architects have cited changes in requirements as a root cause of its issues.

Each set of changes created a new version of the requirements document that needed to be shared across many teams. For example, Andrew Slavitt, vice president of Optum, explained to lawmakers the late decision to require consumers to register for an account before browsing insurance products. This highlights how routine change is, but it raises the question: is the late decision the problem, or is it how that decision is managed and ultimately communicated to the rest of the team? Compounding these problems is the fact that, especially in government, multiple contractors are involved that are not incentivized to work together. The success of their own work matters more than the success of the project as a whole, because contracts and payments are based on their individual deliverables. This fosters an environment of CYA, with each contractor spending more time making sure it is covered under its individual contract and less time building the right solution.

The sad thing is that it’s hard to blame the contractors, developers, QA, or even the government. The problem is the status quo we all accept. Organizations continue to resist the inevitable need to make overarching changes to their processes and tools to move away from such avoidable chaos and lack of communication. Based on this fact and my past experiences, there are four key lessons to be learned from the HealthCare.gov story:

  1. Initial expectations and visions are often too vague and lofty. In the case of HealthCare.gov, acceptance of change was not considered early on. Implementing such a complex system is not as simple as defining a set of requirements and pushing them downstream to be built. It takes a highly coordinated effort, full of constant communication and realignment in response to change. There needs to be a conversation about how to handle unpredictability.
  2. Allow voices to be heard sooner. If there are no clear paths of communication connecting the business side to the implementation and test side, there is an increased risk of misalignment, discontent, frustration, and delays that ultimately create a fast-moving train of costly issues.
  3. Allow innovation to thrive throughout. Were there options to rethink the requirements themselves? One such possibility was the requirement that each user receive an immediate confirmation based on complex integrations and system checks. Would it have sufficed to let the user know their information was accepted and would be processed over the next seven business days? (A sketch of this alternative follows this list.)
  4. Tools. If those who built HealthCare.gov are anything like many major organizations and companies, they were likely relying heavily on Word documents to pass requirements around. When will we learn that this is an archaic means of communication? It does nothing to help manage the constant influx of changes notorious in government contracts.
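To make lesson 3 concrete, here is a hypothetical sketch of that alternative design: acknowledge the submission immediately and defer the heavy integrations and system checks to a background worker. Nothing here reflects HealthCare.gov’s actual architecture.

```python
import queue
import threading
import time

pending = queue.Queue()  # applications accepted but not yet processed

def accept_application(data):
    """Fast path: acknowledge receipt without blocking on downstream systems."""
    pending.put(data)
    return "Your information was accepted and will be processed within 7 business days."

def process_applications():
    """Slow path: run the complex integrations and system checks later."""
    while True:
        data = pending.get()
        time.sleep(1)  # stand-in for eligibility checks and integrations
        print(f"processed application for {data['name']}")
        pending.task_done()

threading.Thread(target=process_applications, daemon=True).start()
print(accept_application({"name": "Jane Doe"}))  # the user gets an instant answer
pending.join()  # demo only: wait for the background work to finish
```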

My own experience working on a government healthcare software project took place six years ago, when I worked for a contractor tasked with designing a state Medicaid eligibility program. The realities and problems my team faced back then bear a striking similarity to the failings of the HealthCare.gov rollout. I was a developer at the time, part of a team that had been put together based on a set of requirements written into an RFP and awarded to the contractor. Based on those initial requirements, estimates were created and resources were assigned to the project. As is common with RFPs, a year passed between the award and the final team being assembled, during which the requirements changed. This common occurrence is often the initial point of failure: teams quickly fall out of alignment, and scope and schedule become constant topics of debate.

As a development team, we worked in a very Agile manner and delivered working software to the customer frequently. Still, the complexity and volume of changes made it very difficult to be efficient; the schedule changed often, and resources were staffed up in an effort to increase velocity. Luckily, we had some flexibility with dates and far less national visibility.

Why does this status quo continue to dominate and dictate how we implement projects? The government is not alone in the challenges it faces across so many products and programs. At my company, we often receive RFPs that are frankly outdated and irrelevant. The RFP model is broken: the idea that requirements are a binding contract, one that puts both parties at odds whenever it changes, prevents teams from doing what’s right and from speaking up if they sense something is wrong. RFPs are not the only culprit; requirements used as a binding contract set the tone. The Agile Manifesto highlights this point: “Customer collaboration over contract negotiation.” This isn’t to say there is no need for contracts; HealthCare.gov might have failed just as badly had no contracts been involved. And it’s too easy to say generically that an agile approach would have done better, especially considering that many people overemphasize eliminating requirements altogether in favor of a complete “we’ll figure it out as we go along” mentality.

The balance lies in providing the goals (the requirements) alongside a more open, iterative, and collaborative process, which is necessary in order to deliver products that fulfill the requirements and are completed on time. The requirements and decisions themselves should live in a place available to everyone, one that can handle both the complexity of requirements that change and the speed necessary to make critical decisions when they can’t change. We are in a day and age in which how we collaborate has fundamentally changed. Organizations need to take advantage of both the concept of agile, keeping in mind the key quote “…while there is value in the items on the right, we value the items on the left more,” and tools and technology built on modern methods of communication. Had this been in place for HealthCare.gov, changes to requirements would have been clear, communication would have been open, and decisions would have been communicated immediately. All of that information would have been easy to track, and maybe we wouldn’t be in this state of constant blame.