Recently I decided it was time I improved my cooking skills. Being an analytical person, I spent a considerable amount of time deciding on an approach. One must have a strategy, measurements for success, and a repeatable pattern of course! (Right?) Given that I like to run repeated experiments, I decided to take a set of dishes I wanted to master, find a few variants (similar recipes), and repeat them until I understood what specific ingredients, tools and techniques were essential.
The act of repeating recipes itself turned out to be the valuable lesson. Following the steps, not isolating the science behind each decision, allowed skills to be internalized in concert. There is no single essential technique or secret ingredient. Having a full toolbox of interrelated skills and past decisions to call upon is what works. While it’s hard to measure the exact causes of success, my larger goal is being met as my cooking improves!
Using modern traceability in product development, the practices that allow you to connect data and people across an organization, follows a similar pattern. Some complex situations call for traceability recipes; others just need common sense. It’s a collection of related tools and behaviors used for a purpose: successful product delivery. It’s flexible, adaptable, and evolving to keep up with the demands of building high-quality products fast. While I might have tried to limit or isolate traceability as if it were a single secret ingredient, I’m finding it’s more valuable to consider its many forms together, as I did learning to cook.
Below are some of the goals our customers have found traceability can help them meet. Recipes from master chefs, if you will.
Finding the Source of a Decision – Before you get to work making a change, use traceability to understand the why behind decisions.
Use Modern Traceability to keep conversations connected as context, and do so continuously. This reduces the time required to find the source of past decisions, and doesn’t rely on flawed human memory to answer the question “why did we decide that again?”
What’s connected: Track decisions associated with requirements changes as closely to the requirement itself as possible, such as in the comments. Use tools like Jama’s Review Center to keep all comments related to the same set of data saved in one spot, clearly organized and referenceable later.
Adapting to Challenges and Change – When a major change does need to happen, easily see the ripple effect up and downstream at any point in a project, not just milestones.
Use Modern Traceability to see potentially risky changes coming. When you track and relate requirements as you work, it’s much easier to see the impacted data when a change is proposed. Teams can adapt more quickly because the map of how your product is built exists throughout the project, not just at major milestones.
What’s connected: Associate people with the requirements themselves. Use this to quickly see who’s related to data, tests, requirements, etc. connected 1-2 levels in either direction. Notify connected people automatically when major things change.
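To make the “connected 1-2 levels in either direction” idea concrete, here is a minimal sketch in Python. The link structure, item IDs, and owner names are all invented for illustration; they do not reflect Jama’s actual data model:

```python
# Hypothetical sketch: people associated with requirements, and a
# notification list built from everything within two links of a change.
# Item IDs and names are made up for illustration.

links = {                      # undirected traceability links
    "REQ-1": ["TEST-1", "REQ-2"],
    "REQ-2": ["REQ-1", "REQ-3"],
    "REQ-3": ["REQ-2"],
    "TEST-1": ["REQ-1"],
}
owners = {"REQ-1": "dana", "REQ-2": "sam", "REQ-3": "ayo", "TEST-1": "kim"}

def notify_list(changed, depth=2):
    """People owning items within `depth` links of the changed item."""
    seen, frontier = {changed}, [changed]
    for _ in range(depth):
        frontier = [n for item in frontier for n in links.get(item, [])
                    if n not in seen]
        seen.update(frontier)
    return sorted({owners[i] for i in seen if i in owners})

print(notify_list("REQ-1"))   # ['ayo', 'dana', 'kim', 'sam']
```

A breadth-first walk like this is all an automatic notification needs: when REQ-1 changes, everyone owning data one or two links away lands on the list.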
Managing Risk – Keep track of risks and mitigations as you work, in a shared tool so re-use of similar data is easy and visibility is high.
Use Modern Traceability to reduce the heavy lift of managing risk data. Update your tracking of risk dynamically, tied to requirements, and visible to the entire team working on your product. Generate a view of how you’re doing along the way, and share it long before an audit.
What’s connected: Configure your teams’ traceability map to include links from requirements to risks, mitigations, environmental context, and test data.
“Are we there yet?!” Status Updates – Everyone needs to know how the team is doing, at different times and at different data granularities.
Use Modern Traceability to share dynamic views of progress, at the level of data that makes sense for the audience. Skip generating manual static reports, and instead share live, accurate ones.
What’s connected: For this to work you need a common language, and that is derived by connecting all the levels of product data so everyone has a familiar anchor point. Create relationships from the highest-level market requirements, to draft designs, to requirements, to passing tests in Jama. This gives every user the ability to pick a data type they are familiar with and see progress at that level, whether that means seeing the status of the requirements a marketing goal decomposes to, or looking at all the downstream test statuses for a particular hardware component.
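As a rough sketch of how connected levels let each person roll up progress from their own anchor point, here is a small Python illustration. The item names and the tree model are hypothetical, not Jama’s actual schema:

```python
# Minimal sketch of a connected requirements hierarchy.
# Item names and the data model are invented illustrations.

class Item:
    def __init__(self, name, status="not run"):
        self.name = name
        self.status = status          # e.g. "passed", "failed", "not run"
        self.children = []            # downstream items this one decomposes to

    def link(self, child):
        self.children.append(child)
        return child

    def downstream_statuses(self):
        """Collect the status of every downstream leaf (e.g. tests)."""
        if not self.children:
            return [self.status]
        statuses = []
        for child in self.children:
            statuses.extend(child.downstream_statuses())
        return statuses

# Market requirement -> product requirement -> tests
market = Item("MKT-1: Broadcast-quality wireless audio")
req = market.link(Item("REQ-7: 140dB A-weighted SNR"))
req.link(Item("TEST-21: SNR bench test", status="passed"))
req.link(Item("TEST-22: SNR temperature sweep", status="not run"))

# Anyone can anchor at the level they know and roll up progress from there.
print(market.downstream_statuses())   # ['passed', 'not run']
```

A marketing stakeholder can call this on a market requirement; a hardware engineer can call it on a component; both see live status rather than a stale report.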
Referencing Similar Past Projects – By maintaining data and relationships throughout a project, by the end that project will be full of rich insights that can be used in the future.
Use Modern Traceability to look at past projects as a whole, across all the data types from requirements to comments. Find projects that were successful, and use that as a starting point for new projects.
What’s connected: Everything! Data should be explorable, like a map, so anyone can self-serve when they want to know the answer to questions like “what did we do last time?”
The product development world is getting more complex and time-pressured, all in a changing environment of rules and regulations. To keep up, your traceability practices need to adapt, to take into account how humans and teams actually think. As your team adopts new traceability practices, though, I humbly encourage you to approach it like learning a complex skill such as cooking. It’s not any one practice, ingredient or tradition that leads to success. Think of how many moving parts there are in a successful team release! Integrate traceability skills and tools into daily work in a way that continues to value traditional Traceability (we still need reports for regulatory bodies, for example!) but also leaves room for new complex skills to emerge that mirror your specific flavor of product delivery!
Read the Forrester Report about the use of Modern Traceability and how it improves developers’ ideas, processes, and software.
Using stakeholder, system, hardware and software requirements to build a professional wireless microphone.
In the post below—the last of three transcribed from his Writing Good Requirements workshop, with notes and slides from his presentation deck included—Jama Consultant Adrian Rolufs explains common problems teams go through, and how to avoid them. (See Part II here.)
==Start Part III==
Let’s look at an example product using my audio background. I’m going to take a circuit that goes into a professional wireless microphone—the kind of high-performance microphone you’d see someone on a stage, like a MC or a musician, use.
It’s got to be able to handle a wide dynamic range, meaning it has to be able to record very loud signals and very quiet signals, all with very high quality, and it’s got to be powered off a battery so that it can be handheld, meaning the connection to the system will be wireless.
So we’re going to talk about some of the requirements that go into the chip, one of the main chips that goes into a solution like this.
First we’ll start at the market or stakeholder requirements level. Often, they’re called stakeholder requirements because stakeholders can be more than just customers.
In most product development organizations customers have requirements, but internal teams also have requirements.
So if I’m building a chip, for example, I have quality requirements that my quality department is going to dictate, but will also be influenced by the customer’s requirements.
And I probably have a production test organization that has to test every one of these devices as they go out the assembly line.
These devices are going to have requirements concerning what kind of access they need to internal circuits, and what kinds of circuits they need to enable them to test in a timely manner—things like that.
The development team might also have requirements; for example, they need to be able to reuse certain amounts of existing circuitry to stay on schedule, or requirements around data costs.
The point is that what we call stakeholder requirements is really a broad category. It could be anybody who has an influence on the product development.
Let’s look at some examples—these would most likely come from customers—which would be focused on the functionality and performance of the device. I’ve got three examples here: One is good and recommended; two are not.
We’ll start with the first one. Say we need a product that can input a microphone signal, convert that signal into two digital audio signals, have two different gain stages and consume less than 20mA while operating.
This is the sort of thing you’d likely hear directly from the customer but not necessarily the sort of thing I’d want to write down as requirements.
This brings up a couple of issues.
First, I’ve got a bunch of requirements mixed together, so it would be easy to miss something, and also it presupposes a certain solution.
It could be that this is the right solution, but it assumes a particular one, so it’s talking about internal details that are over‑constraining the design team.
The team might come up with a different gain structure that works and achieves the results, but doesn’t use 20dB and 0dB of gain. What’s wrong with that? Why do I need to over‑constrain them?
So those are some of the problems with the first one.
Now for the second one: customer X needs a 140dB microphone amplifier with a digital output for less than 50 cents. The microphone amplifier shall be low power.
This is the sort of thing marketing might write, because it’s focused on the customer’s request: They need it at a certain cost and everything should always be low power.
It’s very difficult to actually meet these kinds of requirements.
140dB—well, what is that? That’s just a ratio number; I don’t know what that actually is a measurement of. I need some more specificity around that.
As for 50 cents, you have no idea what the solution is yet so 50 cents may or may not be achievable, but it’s good to know.
And then the last one, low power; that can mean almost anything. Low power in one industry could be high power in another, so specificity around what low power means would be beneficial.
So in that case, the first example is more specific and has more detail—although both of the first two are not very atomic so it would be easy to miss things.
The last example talks about two things.
The first one has a problem statement. I love problem statements because they really tie back to the value the solution can offer. It’s giving me some context around what’s in the market today and what the problem is.
It’s saying in the market there are high dynamic range microphones which transmit digitally, and it requires circuitry that’s expensive and large or high power to obtain the necessary performance.
And from that I know that a solution is out there, but that solution is difficult, hard to use or hard to implement, and it can be expensive and it may or may not provide the necessary performance.
You can see how this helps outline the idea of what kinds of problems I need to solve and where the most value is in design.
So based on this, I would know that hitting the audio performance is important, and getting a small solution size that’s low power is also important; those are the key constraints.
To make that specific, give it a power consumption number: say the solution shall consume less than 75mW while in operation.
Now, the other benefit here is the 75mW; it’s an actual power number, whereas in the first example I had a current, but without knowing the voltage I don’t know what the power consumption is, so that’s also not a great example.
So in this case, the last one is the one I would recommend; it has more constraints and a good set of stakeholder requirements. With that, the design team has a good idea of what their goals are, but they’re not over‑constrained.
Now for the next level of detail: Once we have a set of stakeholder requirements, or at least a draft, we can start looking at system requirements. The system requirements are what we’re actually going to build a product against.
We’re not going to build a product directly against the stakeholder requirements because we could have multiple stakeholders and we need to consolidate their requirements into one set.
Or, certain stakeholders may ask for things that we actually end up not satisfying, but we still know that we can build a successful product.
So that translation from stakeholder requirements to system requirements provides the clarity and explicit decisions around what we’re going to do, what we’re not going to do, and what the actual requirements are for this project.
Now, one of my favorite examples of system requirements is the first one—absolutely nothing—and I see this time and time again.
I can’t count the number of times where I see people skipping the system requirements when they’re building a system.
If you’re an engineer responsible for low-level details, how do you know if those low-level details are the right details? Well, you need system requirements first, so we definitely don’t want to skip this level.
Now, the next one: The solution shall have two differential inputs using instrumentation amplifiers. The instrumentation amplifiers shall be followed by sigma‑delta ADCs. These are really low-level component requirements.
We’ve already jumped to the conclusion that we’re going to have a specific architecture in the hardware. What if part of the solution needs to be software? We haven’t said anything about that and we could already be over‑constraining the design team when some other architecture would be more appropriate.
It could be an instrumentation amplifier is not the best choice. We don’t need to constrain that at this level. So the last example here is really a better example of system requirements.
What about power consumption? It’s going to consume 20mA while in operation. As I said before, current on its own is not necessarily the best measure. With this you would typically provide a supply voltage range, so then the power would become clear.
What about the signal levels? Stating what the signal range needs to be provides a lot of detail around what the architecture of the design needs to be, without over‑constraining it.
And then, the overall end‑to‑end, signal‑to‑noise ratio: 140dB A‑weighted gives me a very clear statement of the overall performance, again without over‑constraining.
So for system requirements I like that last one.
These system requirements are all focused on the performance of the signal path. There should also be some system requirements here that talk about constraints on size, constraints on packaging and things like that.
Now we know we need to build something that consumes relatively low power, takes in a very wide dynamic range signal and maintains the quality of that. So we can start talking about architectures in response to these system requirements.
Let’s say we use a hardware device that has analog to digital conversion with two signal paths, both of which have medium performance but which we can combine together to obtain high performance, and that’s actually the common solution in the application.
And then we use a software algorithm to combine those signals, so we’re going to need a DSP to run the software algorithm and process the signal to output this resulting signal with a 140dB A‑weighted signal-to-noise ratio.
Based on that, we can now talk about the hardware‑specific requirements.
Here are some examples of different possibilities. The first one is a block diagram of the architecture.
I’m visual; I love block diagrams. I love schematics because they’re very intuitive. I can relate to them very well. They don’t make good requirements, unfortunately. It’s very difficult to test a diagram. It’s very difficult to make sure you didn’t miss anything in a diagram. So having a diagram on its own is not sufficient.
A really good solution is a diagram complemented by a set of requirements that attach to every important detail of that diagram.
That way, visual people have something to see, but we also have atomic requirements that we can test against and trace to make sure we didn’t miss anything, and also so we can manage changes.
If I make changes to this diagram based on changes to the architecture or customer requirements, it might be hard to actually know what those changes were, whereas if I have individual requirements I can track, I can easily know.
The second example is just a description of functionality, a response to the requirements. This is saying what the signal path of the device is; the architecture is describing a specific part number. This belongs down in the design descriptions; it’s not hardware requirements.
So the last example is one I like for hardware requirements. We’re talking power consumption; we’re getting more specific.
We know that I’m building a chip, the power consumption is going to vary and I want to know what it’s typically at and what its maximum can be, so we’re specifying that.
Again, we’re repeating the input signal level because that input signal level was a requirement on the system that’s also a requirement on the hardware.
There is some duplication, but it’s there to explicitly say that this is a requirement on the hardware. I won’t see a requirement for 17uV RMS to 1V RMS on the software, because the software is never going to know about volts; it’s going to know about digital signals.
So even though there is duplication it’s done to make the decisions and the traceability explicit. So then I have requirements on the specific architecture.
Now that we’re down at the low-level and component requirements, the hardware requirements, we can start talking about specific solutions. We’ve got to get into the details of what the solution is actually going to be.
So in this case, in the hardware requirements, you’ll likely see requirements that dictate a certain solution, but that’s okay because it’s quite likely that the design team is the one writing these requirements, so they’re the right ones to make that decision.
As you probably have guessed by now, the last one is my recommendation for well‑written hardware requirements.
The last example is software requirements.
I see a lot of teams that just skip software requirements entirely and go straight to writing code. It’s really fun to write code, really satisfying, but if you don’t have any requirements, you’re starting without clear directions. We need some requirements.
The second example, some descriptions of functionality, is written as a shall statement. It sounds like a requirement but I’ve got a bunch of stuff mixed together.
I’m talking about two signals. I’m talking about what their performance is. I’m talking about the output. There is too much stuff mixed together here, so the third one is the recommendation: talking about specifics.
I am going to develop this software for a specific DSP, the Tensilica HiFi 3. It’s going to perform a specific function: it’s going to take two audio signals and combine those into one.
(Note: I’d probably need more detail around this. This is probably not enough by itself but I didn’t want to fill the screen with the requirements.)
And then, what’s the sample rate going to be? This algorithm is going to be designed for a specific sample rate or multiple sample rates. It’s an important characteristic of the algorithm. Let’s make sure that’s captured in requirements.
So that is exactly what I would recommend, and in each of these there are a lot more requirements that go along with this. These are just a couple of examples in each category.
Many teams mix up requirements and specifications. It’s very common.
You need to make sure you have a clear understanding of each of them and when to use them. It’s not always easy to decide which one is which, so it’s absolutely critical to have that discussion with your team.
What I see a lot of teams doing is skipping levels of hierarchy, jumping straight from high-level customer requirements down to detailed requirements or detailed specifications. Do that and you’ll have a very difficult time proving that you built the right thing.
It could be you’re operating fast and loose and you’re okay with that. Maybe that’s okay for a very small team in a very small organization. But in every other situation, it’s unlikely that you’d be building something so uncomplicated that you could get away with it. It’s high-risk.
So make sure that you have at least stakeholder requirements, system requirements and some kind of detailed specifications.
That’s the bare, bare minimum for any kind of product. More likely you need more.
What I recommend: Make sure you have a clearly defined process with clear levels of your requirements. If you don’t think you have that, discuss it with your team. What levels do you need? Which one of those diagrams [more in posts Part I and Part II] would be appropriate for your project?
And then there’s the scary question: Do you even use requirements?
Some teams plow ahead without requirements. Think about what kind of problems that can cause:
Maybe you’ve built products that have not been successful, that maybe needed a late change or maybe even failed, and you had to develop a new product in order to be successful.
Perhaps you’ve been in a situation where you later learned you missed some important details along the way and realized that you barely got away with it. That’s high-risk too.
When you start with an understanding of the roles different levels of requirements perform, you’re less likely to invite risk and add complications during development, and are much more likely to build the right product.
==End Part III==
Key differences between requirements and specifications, why different levels of requirements are important, and how to establish a clear requirements hierarchy you can use and change to suit any product, version or variant you build.
In the post below—the second of three transcribed from his Writing Good Requirements workshop, with notes and slides from his presentation deck included—Jama Consultant Adrian Rolufs explains common problems teams go through, and how to avoid them. (See Part I here.)
==Start Part II==
These days, products are so complicated they can only be used in specific scenarios and for specific applications, which means that if you don’t build a product right, chances are there’s no home for it.
Potentially millions of dollars of development efforts, not to mention sales, are lost if you’re unable to thoroughly keep track of the requirements all the way through.
So we’re going to talk today about some ways to avoid those problems and really set yourself up for success.
What’s most important is having a systematic process to follow; you want a logical progression that takes you from the high-level to the low-level details in a structured way, because that leads to the best results.
It’s actually more important than how you write the requirements.
So the first key point I want to make concerns differentiating between requirements and specifications, and here the word “specifications” is a nebulous term. It’s used differently in different industries.
In many, “specifications” means a document that contains something: a requirements specification, a verification specification or a list of verification test cases.
By the way, I’m using the term “specification” here as the semiconductor industry does. The specification is a list of the performance, the functionality and the features of the solution; it’s the end result. It documents what you actually produce.
In many cases, there is a document called a datasheet that’s the customer facing version of this. So if you’re familiar with datasheets, think of the specification as the datasheet.
For the purposes of this discussion, here are the differences between requirements and specifications:
Requirements
Requirements reveal what the product needs to do
The tool we use to identify the right product to build and to ensure we’re building it right
The tool we use to communicate internally about what the product needs to do and how it needs to work
Specifications
Specifications detail what the product actually does
Specifications are not useful to identify the right product to build
The tool we use to communicate externally about what the product is and how it works
Typically, requirements are a little higher-level and a little less explicit than specifications.
But when you combine the two, what you get is a clear statement of a need and a clear statement of what you’re going to do to satisfy that need.
In doing so, you document exactly what you’re doing and why, and this helps capture the decisions that are made along the way and why they’re made.
However, what I typically see is a document that has intermixed requirements and specifications.
It’s an easy and logical way to write, but it’s very difficult to refer back to afterward for facts and analysis.
So what you end up finding out is, although what you did made sense at the time, you missed some things along the way. There were some high-level requirements that you’d forgotten about.
For example, I was recently working on a product that had only 30 requirements, but discovered that when I wrote the documentation and the specification, I missed one, even though I had written the requirements and solution myself on the same day.
It’s very easy to miss things without a systematic approach in place.
I found what I’d missed only because I’d built the traceability from my specification back to the requirements. I had to prove that I’d met every single requirement and that every one of my specifications was there because of a requirement.
By doing that, it reminded me of something that I missed, so it really saved me some trouble.
This oversight may have come up at some point during reviews, but maybe not, because it’s impossible for anybody to remember every single detail.
Having the requirements separate from the specifications, with traceability links between them, is critical for making sure you don’t miss anything or end up with features you don’t need, which add cost or schedule delays to the product.
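A bidirectional check of this kind, every requirement covered by a spec and every spec justified by a requirement, can be sketched in a few lines of Python. The IDs and the flat dictionaries here are invented for illustration:

```python
# Hypothetical illustration: requirements and specifications kept separate,
# with explicit trace links from each spec back to the requirement it satisfies.

requirements = {"REQ-1", "REQ-2", "REQ-3"}

# spec id -> requirement it traces back to
spec_traces = {
    "SPEC-1": "REQ-1",
    "SPEC-2": "REQ-3",
    "SPEC-3": "REQ-3",
}

covered = set(spec_traces.values())

# Requirements with no specification: things we forgot to address.
missed = requirements - covered

# Specs tracing to no requirement: features nobody asked for (added cost).
unjustified = {s for s, r in spec_traces.items() if r not in requirements}

print(sorted(missed))        # ['REQ-2']
print(sorted(unjustified))   # []
```

Even on a 30-requirement project, running a check like this is how a forgotten requirement surfaces before review rather than after release.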
Separating the two is difficult if you’re not used to writing that way. Often, I will write, or other people will write, in a traditional kind of document style, and then extract from that what the requirements and the specifications of the solution are.
In other words, you can take an iterative approach to this, and that’s totally valid.
Now, the next question is, how do we get to the right solution? The answer is, by having a clear hierarchy.
So what I’m showing here is a basic hierarchy with market requirements and product requirements. It’s probably the simplest hierarchy of requirements you can possibly have in any product development.
The market requirements capture what the customer needs and what the market as a whole needs, and the product requirements say what the requirements are for the product that we’ve agreed to build.
We can trace back to those customer requirements in such a way that we can prove that the product we’re building is going to satisfy the market requirements, and that we don’t build anything extra.
This is the basic minimum.
You can think of each as a documentation task, but they also follow the phases of your project. When you’re capturing marketing requirements you’re also thinking about what possible solutions you could be developing to satisfy those market requirements.
You’ll likely come up with product concepts, or maybe just one product concept, depending on the situation. And so you would capture, in addition to the requirements, some architectures or concepts that go along with that; that’s the “black box” for all the market requirements.
Same for the product requirements. Once you have them—or while you’re writing them—you’re thinking about the architecture of your solution and the trade-offs you might need to make.
This informs what requirements you can satisfy and which ones you can’t.
By writing the requirements in conjunction with coming up with a design, when you’re done, you have a clear statement of requirements and a solution that can meet them.
Before I came to Jama I was an engineer, coming up with new products, and I sometimes focused only on the product concept and the product design, and skipped a lot of the requirements.
It’s easy to fall into that trap. Engineers love solving problems. We don’t love writing down the requirements for solving those problems. But without those requirements we don’t know whether our solution is the right solution.
Some teams might have, say, only market requirements and no product requirements, or vice versa.
But what they don’t have is a clear distinction between what the customer asked for, or what the market needs and what the team is doing to address both.
As a result, it’s difficult to know whether they’re building the right thing or not.
Now, if your product is complicated you add hierarchy to this model.
Let’s say, for example, I’m doing chip development and my chip has a whole bunch of different internal blocks that are each fairly complicated in and of themselves.
Well, then I can add another level of hierarchy, which I’ll call block-level requirements.
A block requirement would probably be something specific to a chip, or to a system where you have a hardware device made up of sub-circuits.
For example, say I have a digital chip that’s a microcontroller. One block might be a digital interface. Another might be the memory. Another block might be the analog interface.
Or, say I’m building a bigger system, an Engine Control Unit, or ECU, for a car. The ECU is my system. And that ECU is made up of a microcontroller and interfaces; they are components of the system.
Whatever you’re building, you want to break it up into logical pieces; those are your components—which you’ll be wanting to write component requirements for.
So product requirements would describe what is needed from this whole chip overall, and that chip, for the purposes of the requirements, is really best thought of as a “black box.”
But then the block-level requirements say, now that we have a product architecture in mind, what the requirements are for the individual pieces. The designers are going to go and design against those block-level requirements.
For example, the product architecture says we’re going to have an ADC, an Analog-to-Digital Converter.
We would then need block-level requirements to say what the performance of this ADC is: What does the power consumption need to be? What does the size need to be if it must fit into a certain space? What kind of input and output signals does it need to have?
Things like that.
And then the block design would tell me how this ADC is architected. What’s the topology? What circuit components are coming together to satisfy those requirements?
Again, having both of those pieces of information is critical.
In this example, what sometimes happens is the product requirements section gets skipped. People already know the architecture, to a certain degree, and so they jump right to the block-level requirements.
The problem with that is market requirements are very high-level and block-level requirements are very detailed, so skipping the product requirements means teams can’t afford to forget a single thing during building.
But the most serious problem is having no traceability back to product requirements; without it, teams can’t confirm the connection between block-level requirements and market requirements.
Without traceability, it’s difficult to know for certain if this block-level requirement traces to that particular market requirement.
You end up missing things, so each of those levels is important.
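A transitive trace up the hierarchy can be sketched in a few lines of Python. Assuming each requirement records the single requirement it traces up to (the IDs are invented for illustration):

```python
# Hypothetical sketch: each requirement records the one it traces up to.
# Skipping the product level breaks the chain from block to market.

parent = {
    "BLOCK-ADC-1": "PROD-5",   # block-level -> product requirement
    "PROD-5": "MKT-2",         # product -> market requirement
    "BLOCK-ADC-2": None,       # written with no upstream trace
}

def traces_to_market(req_id):
    """Walk parent links; True if the chain reaches a market requirement."""
    while req_id is not None:
        if req_id.startswith("MKT-"):
            return True
        req_id = parent.get(req_id)
    return False

print(traces_to_market("BLOCK-ADC-1"))  # True
print(traces_to_market("BLOCK-ADC-2"))  # False
```

With the intermediate product level in place, every block-level requirement either walks up to a market requirement or is flagged as an orphan worth questioning.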
Now, it could be you are building something even more complicated, so you need to add levels of hierarchy.
Basically, the more complex your build, the more hierarchical structure you want in place.
Here’s a system example; in this case we have both hardware and software, so we have system requirements that describe the overall needs of the system, and then we have an architecture that says, what’s going to happen in hardware, and what’s going to happen in software.
And based on that architecture, we can then write requirements for the hardware and for the software.
We can architect the hardware and the software, and then we can again write low-level requirements for the individual pieces of hardware and the individual pieces of software, and then write the design details that go along with each of those blocks.
So again, you take a systematic approach going from high-level customer needs all the way down to design, and you just adjust this based on the levels of complexity of your product.
And as your products get more complicated, it’s entirely possible that you start off with something simple and you add complexity to the next generation, and maybe you add even more complexity in the next generation, so you have to adjust the model based on your product complexity.
But it’s very unlikely you’ll use the same model forever.
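The layered model described here can be sketched as a simple data structure in which each requirement keeps an upstream trace link to its parent level. The class, field names, and sample requirements below are illustrative, not part of any tool’s API:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    level: str                            # e.g. "system", "hardware", "block"
    text: str
    parent: "Requirement | None" = None   # upstream trace link

    def trace_upstream(self):
        """Walk the trace chain from this item back to the top level."""
        chain, current = [], self
        while current is not None:
            chain.append(current)
            current = current.parent
        return chain

# Invented example: a block requirement tracing up through hardware to system level.
sys_req = Requirement("SYS-1", "system", "Digitize sensor input at 12-bit resolution")
hw_req = Requirement("HW-4", "hardware", "Provide a 12-bit ADC", parent=sys_req)
blk_req = Requirement("BLK-9", "block", "ADC power consumption under 5 mW", parent=hw_req)

print([r.req_id for r in blk_req.trace_upstream()])  # ['BLK-9', 'HW-4', 'SYS-1']
```

With links like these in place, skipping a level becomes visible: a block requirement with no parent is a gap in the hierarchy rather than a silent assumption.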
When I was an engineer, we were really more focused on market requirements, product requirements and the block requirements model. Recently I’ve seen a lot more of the system requirements, especially in the chip industry.
Many of the products coming out of chip companies these days are more like systems than ever, so this model ends up being a good place to start.
You can cut out pieces that you don’t need, but make sure you have accounted for all the pieces that you do need. Having that discussion with your team is really critical to setting up the right model.
==End Part II==
Finding the requirements management sweet spot means being concise, specific and parametric, and answering the question, “What do we need?” rather than, “How do we fulfill a need?”
In the post below—the first of three transcribed from his Writing Good Requirements workshop, with notes and slides from his presentation deck included—Jama Consultant Adrian Rolufs explains common problems teams go through, and how to avoid them.
==Start Part I==
Today I’m going to be speaking about product definition and how to ensure that you are using requirements correctly and to maximum benefit.
A bit about myself: For the first 10 years of my career I worked in the analog and mixed-signal semiconductor industry, first as an applications engineer and later as a product definer.
As a product definer I became a customer of Jama Software. Adopting Jama to manage requirements completely revolutionized how I built products. So much so, that a couple of years ago I joined Jama Software to help as many teams as possible benefit from using Jama.
Product development teams face many challenges, but the one we are going to focus on today is how to systematically navigate the path from a high-level market need to the specification of an actual product.
Specifically, we’ll be looking at this challenge in the context of development teams that work from a specification, but the concepts apply to all development.
So, the ultimate question is, really, why do we write requirements at all?
Requirements are a tool that guides the journey: from the vast number of possible products we could build, to determining whether the product we want to build is going to be successful, down to picking exactly the right product to build.
Particularly in systems and hardware development, designers are typically not able to start developing a product until there are sufficiently detailed requirements, or possibly even a specification.
So for the purposes of the discussion today, I’m going to use the term product specification to mean the detailed document that describes what the actual product is, the final result of the development process.
One way of looking at this challenge is shown below. Imagine that the orange circle is every product that a given team knows how to build, and the blue dot is the exact specification for a particular product. The specification defines exactly which product the team will produce.
Let’s say we have a product we’ve developed, along with the specifications that tell us exactly how it works, how well it performs, and its size and cost; requirements are the guide we use to get there.
In many industries, you have to have a fair amount of detail fleshed out before teams and resources get assigned and dedicated, so there are milestones to meet as you go through this. So, you’ll get some level of detail and you have a milestone to review it. You’ll get more detail; you’ll have more milestones.
Usually in the process, companies follow step-by-step guides for getting into the details, but there is usually a lot of room for interpretation along the way, so we’ll talk about some of the structures you can apply.
In more iterative environments, perhaps more software-type environments, you might actually go through this loop much, much more quickly, so you might do all of this but still focus on a very specific function and do it in a couple of weeks.
The same concepts are still applicable; it all depends on the scope of the product, time frames, interactions, and things like that. What we’re going to talk about here, in terms of systematically defining what to build, applies to almost any kind of product.
So, one of the first steps is defining the overall kind of space that the product team is going to operate in, and that’s typically done with a market definition or market requirement document, or something along those lines.
There are different approaches to doing that. The approach I recommend is defining that solution space with problem statements and constraints, because problem statements are clear descriptions that answer the question, “What is this product supposed to do such that it adds value to the market?”
So, if the product solves a problem that a lot of customers have, it should sell well, and if it does so while meeting the constraints, this should also result in it selling well. This is the starting point for our product development process.
This might include, in addition to the problems that it’s solving, certain amounts of functionality that are required for the industry. The right amount of performance, functional and non-functional types of requirements, schedules, and things like that. This is really a kind of visual way of thinking of a market requirement set.
Important things here are not over constraining the development team or the design team. If these requirements are so specific that we can only build one product, then there is not a lot of room for the teams to innovate. If the requirements are really vague, the teams don’t know what to build, so we’ll talk through that next.
So the first example—and this is a common scenario that a lot of teams face— is that the solution space or the market requirements are so vague that the design team doesn’t know where to start.
It could be they’re too high-level, say, a problem statement with no constraints, in which case the design team doesn’t know whether the solutions that they can think of are valid solutions to those problems or not.
From the perspective of a designer, the detail that a marketer can provide is usually insufficient. So many questions remain unanswered that either they go and build something that doesn’t end up resulting in a successful product, or they ask a million questions that the marketer doesn’t have answers to.
The problem is that while the blue space looks completely enclosed here, it typically isn’t. Many more detailed requirements are needed to fully enclose the box. Specifying those details completely will also likely dramatically shrink the blue area.
Even though this approach is clearly problematic, many teams fall victim to it. Designers may feel that they can’t trust marketing because they know they aren’t getting enough detail, and they often have experienced failed products as a result. Marketing doesn’t understand why designers can’t just get on with building the product, and why the products miss the target, run late, or both.
It could be there are other factors not considered, and so a lot of times this results in design teams starting to ask lots and lots of questions, which is good. It’s better they ask those questions than don’t, but it does mean that you possibly spend a lot of time iterating in this phase of the project because there’s not enough definition around, well, what problem are we trying to solve and what are some valid constraints around that?
The other possibility is the design team might say, “Hey, we can build whatever we want,” and they proceed without asking questions. It might be brilliant or it might be a complete failure, but because there are no controls for predictability, it’s definitely a high-risk situation.
Another common scenario is that the market requirements are so specific that the design team doesn’t have room to innovate. The market requirements could be a copy of a competitor’s specification with a couple of lines changed to say, “Build me one of these.” Or it could be a previous specification produced by the company with a few modifications that say, “Improve everything by five percent.”
Those kinds of requirements documents tend not to lead to a lot of innovation. It’s okay to have them, because sometimes you need to make a derivative part that’s just a quick improvement to an existing device; this can be very successful in the market.
But if you have too many of those types of products it becomes more difficult as time goes on to react quickly to new requests because you don’t have new technology. By focusing on modifying your existing technology, you’re likely falling behind in the competitive landscape with your customers.
You want to have a good mix of products that are defined in such a way that innovative technology can be developed. Design teams can go solve creative problems using their engineering skills, and that’s really the biggest benefit to the overall organization; it also makes the engineers happier because engineers love solving problems. If you just say, “Build me one of these,” they’re usually far less satisfied and far less enthusiastic about working on a project.
All right; so the third common scenario where this can go wrong is you define what your problem statement is, you have constraints, you think you’ve got a really well-defined solution space, and the team goes off and builds something. And along the way the team finds there are challenges in the design, makes some changes, and builds a product that simply doesn’t meet the original requirements.
Marketing may have provided high-level problems to solve with constraints initially, but the focus moved to agreeing on a specification. As design challenges come up and trade-offs are made, the specification slowly drifts outside of the blue “acceptable products” area.
The result is that the team is so focused on building that they end up not building the right thing.
While this can be addressed by periodically reviewing the specification against the high-level requirements, there are likely many details in the specification that do not clearly trace to any high-level requirements.
As a result, the team doesn’t actually know they are building the wrong product.
When teams say, “Okay, we can make that change,” but don’t have a “live” source of traceability back to the requirements, problems are guaranteed.
This is a very common scenario when managing the process in documents and spreadsheets, because it’s very difficult to actually have traceability in those kinds of tools.
And what happens as teams go through the discussions and make the compromises, is that they stray from solving the original problem, meeting the original constraints and focusing on the original solution space.
Now, sometimes you get lucky and you can still sell what you’ve built, but what I’ve found in the industry overall is that as time has gone on, things have gotten sufficiently more complicated that the chances of one of these products being successful are decreasing.
It used to be that if I got things wrong, I could use what I built for another application. That’s rarely the case with today’s complex systems.
==End Part I==
“Delivering a release is a little like wrapping up a present and giving it to our customers” – Maarika Krumhansl, Release Manager at Jama Software
When I mention to folks outside of Jama that I’m a Release Manager at Jama, the most common reaction is “Interesting!” and then shortly thereafter “…What does a Release Manager do?”
Release Management means slightly different things at different companies. Some companies employ DevOps Release Engineers instead of Release Managers. Some companies roll the Release Management function into the Product Team. Other companies have their build, test, deploy, documentation, and customer communication so streamlined that they have no need for a Release Manager. I personally come at Release Management from a DevOps background. In a previous job as a Deployment Developer I had the opportunity to build that company’s first Continuous Integration pipeline. I was also responsible for releasing and packaging a Java application for production deployments. I am a huge advocate for Agile methodologies and my Release Management philosophy is heavily based on personal experience as well as learning from the industry leaders.
Regardless of who or what process performs the role of Release Management, it is based on three primary principles: Traceability, Reproducibility, and Measurability.
Traceability: The ability to see how one piece of information – e.g. a requirement, a story, a git commit, an automated test run – connects to any and all other relevant pieces of information in a release, either upstream or downstream in the item hierarchy (or forwards/backwards in the workflow). For example, a release is traceable if any member of the organization is able to see which epics are shipping with a release, the specific stories in those epics, and any bugs or defects slated to be fixed. For each ticket (story or defect) in a release, it is also possible to determine exactly which git commit(s) represents the work done to satisfy the requirements, who performed the code review and the desk review, and whether the automated unit-, integration and functional tests passed against that commit.
Reproducibility: At its core, this is about the ability to generate an exact copy of (i.e. reproduce) a release of Jama. A release is made up of multiple components, including the actual binary artifacts, the deployment method/scripts, the documentation, and the environment configurations. Binary repositories (e.g. Nexus, Artifactory, etc.) provide reproducibility of artifacts, and by keeping build and deployment scripts – as well as standard environment configurations – in source control (“Infrastructure as Code”) we can guarantee reproducibility of installs / instances of a release.
Measurability: The ability to determine the “state” of a release at any moment, either in development or in production. While a release is in development, it is important for all stakeholders to have a clear view of the progress being made and the “health” of a release, including things like: How many tickets are still open/in development/in testing? What is the test coverage? What are the results of the automated regression and performance testing and how do the results compare to previous runs? Once a release is live, it is our responsibility to monitor and measure its performance compared to previous releases and to remediate any unexpected behavior (if needed). Numerous tools exist to help with application performance monitoring, server-side resource monitoring, logging and parsing of errors, etc, but these tools are only helpful if 1.) they are measuring the right things, 2.) they have visibility (e.g. alerts/triggers set up, well-designed dashboards, people actually looking at them, etc.) and 3.) they are reliable (e.g. provisioned with enough resources, few numbers of false positives).
It is the Release Manager’s job to ensure the Traceability, Reproducibility and Measurability of software releases. Ideally this is done by implementing tooling and automation but in the worst case some of it must be done manually until the pain of NOT automating the task is far greater than the up-front cost of scripting it. Case in point: Currently at Jama the process of producing a Manifest Check (i.e. the document that proves that each ticket slated for the release has at least one git commit implementing it, as well as verifying that each git commit is implementing a ticket planned for the release) is manual and tedious, involving:
running a bash script to diff the commits in the current release from the last release,
parsing the commit messages for ticket numbers and loading those ticket numbers into a spreadsheet,
cross-checking the tickets in the spreadsheet with the tickets intended for release, as reported by our internal install of Jama,
working with Engineering and Product to resolve any discrepancies by either adding tickets to the release that were overlooked originally, or identifying which commits may have implemented code for multiple tickets.
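The core of the manual steps above, extracting ticket numbers from commit messages and comparing them against the planned release, can be sketched in a few lines of code. This is a simplified illustration, not the actual Jama tooling; the ticket ID format and sample data below are invented:

```python
import re

# Assumed ticket convention: an uppercase project key, a dash, and digits.
TICKET_PATTERN = re.compile(r"\b[A-Z]+-\d+\b")

def tickets_in_commits(commit_messages):
    """Collect every ticket ID mentioned in a list of commit messages."""
    found = set()
    for message in commit_messages:
        found.update(TICKET_PATTERN.findall(message))
    return found

def manifest_check(commit_messages, planned_tickets):
    """Report planned tickets with no commits, and committed tickets not in the plan."""
    implemented = tickets_in_commits(commit_messages)
    planned = set(planned_tickets)
    return {
        "planned_but_no_commit": planned - implemented,
        "committed_but_not_planned": implemented - planned,
    }

# Invented sample data; a real version would read `git log` output and
# query the ticket system for the release plan.
commits = ["SOS-101 fix login timeout", "SOS-102, SOS-103 refactor export"]
report = manifest_check(commits, ["SOS-101", "SOS-102", "SOS-104"])
print(report["planned_but_no_commit"])      # {'SOS-104'}
print(report["committed_but_not_planned"])  # {'SOS-103'}
```

Each discrepancy the script surfaces is exactly the kind that today requires a conversation with Engineering and Product to resolve.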
As you can imagine, this process is time-intensive and non-scalable, since Jama already has multiple code repositories. As we plan to move towards a Service Oriented Architecture (e.g. “microservices”) the number of repos is expected to explode. Clearly the current manual process is no longer tenable. At a recent Jama Hackathon a team of developers and QA engineers developed a proof of concept service that will automate all of the tasks in the above list, and Product has added this work to the overall product backlog (to be prioritized against other strategic initiatives) as an add-on service for Jama.
What I love about Release Management at Jama is the diversity of responsibilities and technical challenges. It is fascinating to witness and assist Jama transform from a monolithic architecture to a service-oriented architecture, and ultimately support a more containerized, continuous deployment paradigm for our Hosted releases. Additionally, I am learning an enormous amount about the state of the art in on-premises deployment technology – i.e. Replicated and Docker. As a Release Manager concerned with Traceability, I am fortunate to be able to use Jama to build Jama, since this is exactly what Jama was built to do! I work with people from across the organization daily as I perform general project management for releases, and I get to be a spokesperson for process improvements and CI optimization, helping to drive initiatives such as modernizing our binary repo and establishing and enforcing our git release branching strategy. We are also starting to implement slow rollouts of some of our features to small subsets of our customers, also known as “Canary Releases”, and we are pleased with the feedback and data we have been receiving about this effort.
Connecting your requirements to downstream test plans and test cases is crucial to end-to-end traceability. Jama makes it easy to trace the relationships between your requirements and their stakeholders, test cases and test results to ensure that you have full and automated coverage.
As part of our developer community support we’ve just released a script that utilizes our REST API to relate test cases to their test runs in Jama, and makes these relationships visible in coverage explorer and the resulting data available in reporting.
Connect test cases to test runs.
We’ve made this script available on the Jama Software GitHub account. This simple script is a place to help you get started and we hope you’ll do great things with it! You can also join our open support community to ask questions–or offer ideas!–about anything Jama or product development. You can join in the conversation about our REST API specifically here, where there are some great, active topics right now. We invite you to comment in the REST API group if you use this script and improve upon it.
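For a sense of what such a script involves, here is a heavily simplified sketch of creating a relationship between two items through a REST API. The base URL, endpoint path, payload fields, and authentication scheme shown are assumptions for illustration only; refer to the actual script on GitHub and the REST API documentation for the real calls:

```python
import json
import urllib.request

# Assumed placeholder; substitute your own instance URL.
BASE_URL = "https://example.jamacloud.com/rest/latest"

def build_relationship_payload(from_item_id, to_item_id):
    """Payload linking one item (e.g. a test case) to another (e.g. a test run)."""
    return {"fromItem": from_item_id, "toItem": to_item_id}

def relate_items(from_item_id, to_item_id, token):
    """POST the relationship to an assumed /relationships endpoint."""
    request = urllib.request.Request(
        f"{BASE_URL}/relationships",
        data=json.dumps(build_relationship_payload(from_item_id, to_item_id)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

print(build_relationship_payload(1001, 2002))  # {'fromItem': 1001, 'toItem': 2002}
```

Once relationships like this exist, the coverage explorer and reports can surface them without any manual bookkeeping.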
As a System Engineer managing requirements do you ever feel like you’re playing a game of Topple? First, you start with a board that is relatively balanced, but depending on where you put the pieces it can quickly get off kilter. As the game evolves you are adding more and more pieces to the board. Now let’s make the game harder. Some of those pieces weigh more than others, so putting one green piece on the board means adding two red pieces to balance the load. Just for fun, let’s now tie a few of those pieces together with some string, meaning you can’t add or move one piece without moving another. And are you really playing this game all by yourself?
Managing requirements can feel like a game of Topple.
Finding balance between competing requirements can seem just this precarious. If you’re building a medical device, you are likely weighing human safety over product aesthetics. When you add cost to one area of the product you have to adjust another area to keep cost in balance. And likely you’re working with a team of engineers who are building this product and must stay in close communication with them in order to deliver a complete, quality system. And as your product evolves you’re receiving requirements from many sources: business, product, hardware, and software.
How do you manage all of these competing priorities, conduct effective impact analysis and keep all stakeholders and developers in alignment? You are likely using some sort of complex matrix to keep track of the individual requirements and their relationships. It could be in Excel or even in a legacy RM tool. And this may work if all requirements were created equal, or if you’re the only person who needs to know about the impacts to the complete system.
But likely, that spreadsheet is not working.
Here’s what that spreadsheet on your desktop cannot do:
manage the complex web of traceability to truly understand the relationships between requirements and the people who are responsible for them
quickly find who and what are impacted by changes to the system
ensure that each requirement is validated and verified, proving that when the product is complete, you are delivering what was asked for and that the system has been thoroughly tested
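The impact-analysis question above is essentially a graph traversal over trace links, which is exactly what a flat spreadsheet cannot do. A minimal sketch, with made-up item IDs and owners:

```python
from collections import deque

# Invented trace data: each item maps to the items that depend on it.
trace_links = {
    "REQ-1": ["REQ-5", "TEST-2"],
    "REQ-5": ["DESIGN-3"],
    "DESIGN-3": ["TEST-7"],
}
owners = {"REQ-5": "alice", "TEST-2": "bob", "DESIGN-3": "carol", "TEST-7": "bob"}

def impacted_by(changed_item):
    """Breadth-first walk of the trace graph from the changed item."""
    impacted, queue = set(), deque([changed_item])
    while queue:
        for downstream in trace_links.get(queue.popleft(), []):
            if downstream not in impacted:
                impacted.add(downstream)
                queue.append(downstream)
    return impacted

items = impacted_by("REQ-1")
print(sorted(items))                        # ['DESIGN-3', 'REQ-5', 'TEST-2', 'TEST-7']
print(sorted({owners[i] for i in items}))   # ['alice', 'bob', 'carol']
```

A single change to REQ-1 fans out to two tests, a design element, and three people, relationships a requirements tool maintains live while a spreadsheet leaves them to memory.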
In my work as a Jama consultant, I’ve seen our customers solve these very problems using Jama. Like Sirius XM, who picked Jama for traceability and alignment from requirements to testing. They wanted visibility into change so that they knew what was impacted. And they needed to eliminate the chaos from spreadsheets and emails.
Our partner, Deloitte, first implemented traceability with Jama to get visible coverage from requirements through test. Then, they connected their many stakeholders to the requirements those people owned, and, as questions came up throughout development, the right people could be pulled into conversation, within the Jama application, to get to a decision quickly. These changes were captured along with the discussion right in Jama so there was a history of decisions that linked back to the original requirements requests.
One of the things I often hear in my work is a belief that implementing a new system will only increase the complexity of an already difficult-to-manage process. I understand the concern, and I’ve written before about how to ensure adoption of a new enterprise application. One thing that makes it easy for teams to adopt Jama is its ease-of-use, especially when you compare it to the chaos of documents and email and file sharing applications. In our next post, Matt Mickle, another Jama consultant will discuss the characteristics in the Jama application that make it easy to transition from document-based traceability to visible coverage in a collaborative system.
Open just about any business management book or blog and the topic of accountability—and the eternal quest for it—will turn up. But when your world revolves around managing the creation, iteration and release of new products, traceability much more accurately defines what you seek.
The frustrating fact is that, for most product managers, trying to implement traceability that occurs concurrent with build processes is like trying to wish a unicorn into existence. It’s something you aspire to see, but repeated trial and error suggests it might remain a figment of your imagination.
As Jama product manager Derwyn Harris likes to say, traceability is the process of connecting data, people and work. It sounds simple but the challenge is that traceability is too often treated as a kind of checklist.
To add real, measurable value to your team’s product delivery process, traceability needs to show you how every item, action and actionable item connects to each person working on it; it needs to illustrate how your people are connected to each step of the process.
Of course, decisions drive the actions in each product delivery cycle. As taken from our webinar, Evolve Your Definition of Traceability, below is a partial, simplified outline of the decision questions traceability needs to answer for each stakeholder and team member, from the point of original concept through the stages of define, build, test and launch:
Decision needed:
Whom do I need to ask?
What’s the best way to communicate with decision makers?
Do I have all the necessary context to understand the reason for this decision, or the problems associated with it?
Decision in progress:
How will this tie-in with and affect what we’ve already agreed to?
Can we make this discussion transparent so we can react in real time?
How can we determine what the impact of making this decision will have?
Decision made:
When the next iteration requires more decisions, how can we track them?
What’s the best way to notify stakeholders and key team members about changes that are relevant to them?
Our product’s history is in millions of critical details; how do we provide context and rationale for each choice made?
When your teams work in different time zones, on different product-related projects or in siloed areas of expertise, static methods of tracking data for impact analysis and coverage fall short. Product managers need a live environment for real-time collaboration that tracks relationships between people and data—whether you’re building software, hardware or a combination of the two. To see a demo of how traceability works with Jama, grab a coffee, tea or a snack and check out our webinar.
Today, every product launch involves many people and thousands of decisions. Apply real-time traceability to them, achieve product delivery accountability and stop chasing after the unicorn.
The CEO of a major corporation who was present when I described requirements traceability at a seminar asked, “Why wouldn’t you create a requirements traceability matrix for your strategic business systems?” That’s an excellent question. He clearly saw the value of having that kind of data available to the organization for each of its applications. If you agree with this executive’s viewpoint, you might be wondering how to incorporate requirements traceability into your systems development activities in an effective and efficient way.
Tracing requirements is a manually intensive task that requires organizational commitment and an effective process. Defining traceability links is not much work if you collect the information as development proceeds, but it’s tedious and expensive to do on a completed system. Keeping the link information current as the system undergoes development and maintenance takes discipline. If the traceability information becomes obsolete, you’ll probably never reconstruct it. Outdated traceability data wastes time by sending developers and maintainers down the wrong path, so it’s actually worse than having no data at all.
Requirements Traceability Procedure
If you’re serious about this effort, you need to explicitly make gathering and managing requirements traceability data the responsibility of certain individuals. Otherwise, it just won’t happen. Typically, a business analyst or a quality assurance engineer collects, stores, and reports on the traceability information. Consider following this sequence of steps when you begin to implement requirements traceability on a specific project:
Select the link relationships you want to define from the possibilities shown in Figure 2 from the first article in this series on requirements traceability.
Take another look at the second article in this series and choose the type of traceability matrix you want to use: the single matrix shown in Table 1 or several of the matrices illustrated in Table 2. Select a mechanism for storing the data—a table in a text document, a spreadsheet, or a commercial requirements management tool.
Identify the parts of the product for which you want to maintain traceability information. Start with the critical core functions, the high-risk portions, or the portions that you expect to undergo the most maintenance and evolution over the product’s life.
Modify your development procedures and checklists to remind developers to update the links after implementing a requirement or an approved change. The traceability data should be updated as soon as someone completes a task that creates or changes a link in the requirements chain.
Define the tagging conventions you will use to uniquely identify all system elements so they can be linked together. If necessary, write scripts that will parse the system files to construct and update the traceability matrices. If you don’t have unique and persistent labels on requirements, design elements, and other system elements, there’s no way you can document the connections between them.
Educate the team about the concepts and importance of requirements tracing, your objectives for this activity, where you will store the traceability data, and the techniques for defining the links—for example, using the tracing features of a requirements management tool.
Identify the individuals who will supply each type of link information and the person who will coordinate the traceability activities and manage the data. Obtain commitments from all of them to do their part.
As development proceeds, have each participant provide the requested traceability information as they complete small bodies of work. Stress the need to assemble the traceability data as they work, rather than attempting to reconstruct it at a major milestone or at the end of the project.
Audit the traceability information periodically to make sure it is being kept current. If a requirement is reported as implemented and verified, yet its traceability data is incomplete or inaccurate, your traceability process isn’t working as you intend.
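The parsing scripts mentioned in the steps above can be as simple as a regular-expression scan over your artifacts. A minimal sketch, assuming a bracketed [REQ-n] tagging convention (the convention and the file contents here are invented examples):

```python
import re

# Assumed tagging convention: requirement IDs embedded as [REQ-n] in artifacts.
TAG = re.compile(r"\[(REQ-\d+)\]")

def build_matrix(artifacts):
    """Map each requirement tag to the set of artifacts that reference it."""
    matrix = {}
    for name, text in artifacts.items():
        for req_id in TAG.findall(text):
            matrix.setdefault(req_id, set()).add(name)
    return matrix

# Invented sample artifacts; a real script would read files from disk.
artifacts = {
    "billing.c": "/* implements [REQ-12] late-fee calculation */",
    "test_billing.py": "# verifies [REQ-12] and [REQ-14]",
}
matrix = build_matrix(artifacts)
print(sorted(matrix["REQ-12"]))  # ['billing.c', 'test_billing.py']
```

A requirement that appears in the matrix with no test artifact, or a tag that appears in code but in no baselined requirement, is exactly the kind of discrepancy the periodic audit should catch.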
This procedure is described as though you were starting to collect traceability information at the outset of a new project. If you’re maintaining a legacy system, odds are that you don’t have traceability data available. There’s no time like the present to begin accumulating this useful information. The next time you have to add an enhancement or make a modification, write down what you discover about connections between code, tests, designs, and requirements. Build the recording of traceability data into your procedure for modifying an existing software component. This small amount of effort might make it easier the next time someone has to work on that same part of the system. You’ll never reconstruct a complete requirements traceability matrix, but you can grow a body of knowledge a bit at a time during the application’s life.
Is Requirements Traceability Feasible? Is it Necessary?
You might conclude that creating a requirements traceability matrix is more expensive than it’s worth or that it’s not feasible for your project. That’s fine: it’s your decision. But consider the following counter-example. A seminar attendee who worked at an aircraft manufacturer told me that the requirements specification for his team’s part of the company’s latest jetliner was a stack of paper six feet thick. They had a complete requirements traceability matrix. I’ve flown on that very model of airplane several times, and I was happy to hear that the developers had managed their requirements so carefully. Managing traceability on a huge product with many interrelated subsystems is a lot of work. This aircraft manufacturer knows it is essential; the Federal Aviation Administration agrees.
Not every company builds products in which software failures have grave consequences. Even so, take requirements tracing seriously, especially for your business’s core information systems. Base the decision to adopt any improved requirements engineering practice on both the cost of applying the technique and the risk of not using it. As with all software processes, make an economic decision: invest your valuable time where you expect the greatest payback.
Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles on our web site via a series of blog posts, whitepapers and webinars. Karl Wiegers is an independent consultant and not an employee of Jama. He can be reached at http://www.processimpact.com. Enjoy these free requirements management resources.
The first part in this series of articles presented an overview of requirements traceability, identified the potential kinds of traceability links you could define among a project’s artifacts, and stated several motivations for tracing requirements. This article, adapted from my book Software Requirements, 2nd Edition, describes the requirements traceability matrix.
The Requirements Traceability Matrix
The most common way to represent the links between requirements and other system elements is in a requirements traceability matrix, also called a requirements trace matrix or a traceability table. Table 1 illustrates a portion of one such matrix. When I’ve set up such matrices in the past, I made a copy of the baselined SRS and deleted everything except the labels for the functional requirements. Then I set up a table formatted as in Table 1 with only the Functional Requirement column populated. As fellow team members and I worked on the project, we gradually filled in the blank cells in the matrix.
Table 1. One Kind of Requirements Traceability Matrix
Table 1 shows how each functional requirement is linked backward to a specific user requirement (represented in the form of use cases in this example), and forward to one or more design, code, and test elements. Design elements could be objects in analysis models such as data flow diagrams, tables in a relational data model, or object classes. Code references can be class methods, stored procedures, source code filenames, or procedures or functions within the source file. You can add more columns to extend the links to other work products, such as online help documentation. Including more traceability detail takes more work, but it also gives you the precise locations of the related software elements, which can save time during change impact analysis and system maintenance.
You should fill in the information as the work gets done, not as it gets planned. That is, enter “catalog.sort()” in the Code column of the first row in Table 1 only when the code has been written, has passed its unit tests, and has been integrated into the source code baseline for the product. This way a reader knows that populated cells in the requirements traceability matrix indicate completed work, not just good intentions.
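To make the fill-as-you-go discipline concrete, here is a minimal in-memory sketch of one Table 1-style row. The requirement label and use-case ID are illustrative; only the `catalog.sort()` code reference comes from the example above. The point is that forward-link cells start empty and are populated only when the corresponding work is truly done.

```python
# Minimal sketch of one row of a Table 1-style traceability matrix.
# The requirement label and use-case ID are invented for illustration.
trace_matrix = {
    "catalog.query.sort": {      # functional requirement label
        "use_case": "UC-28",     # backward link, known up front
        "design": None,          # forward links start empty and are
        "code": None,            # filled only as the work completes
        "test": None,
    },
}

def record_code(matrix, req, code_ref):
    """Fill the Code cell only once the code is written, has passed its
    unit tests, and is integrated -- a populated cell means done work."""
    matrix[req]["code"] = code_ref

record_code(trace_matrix, "catalog.query.sort", "catalog.sort()")
print(trace_matrix["catalog.query.sort"]["code"])  # -> catalog.sort()
```

A reader scanning this structure can tell at a glance which requirements still have unfinished downstream work: their cells are still empty.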
When completing the matrix column for system test cases, note that listing the test cases for each requirement does not indicate that the software has passed that test. It simply indicates that tests have been written to verify the requirement at the appropriate time. Tracking testing status is a separate matter.
Nonfunctional requirements such as performance goals and quality attributes don’t always trace directly into code. A response-time requirement might dictate the use of certain hardware, algorithms, database structures, or architectural choices. A portability requirement could restrict the language features or coding conventions the programmers use, but it won’t necessarily turn into specific code segments that enable portability. Other quality attributes are indeed implemented in code, though. Integrity requirements for user authentication lead to derived functional requirements that are implemented through, say, passwords or biometrics functionality. In those cases, trace the corresponding functional requirements backward to their parent nonfunctional requirement and forward into downstream deliverables as usual. Figure 1 illustrates a possible traceability chain involving nonfunctional requirements.
Figure 1. Traceability chain for requirements dealing with application security.
Traceability links can define one-to-one, one-to-many, or many-to-many relationships between system elements. The format in Table 1 accommodates these cardinalities by letting you enter several items in each table cell. Here are some examples of the possible link cardinalities:
• One-to-one. One design element is implemented in one code module.
• One-to-many. One functional requirement is verified by multiple test cases.
• Many-to-many. Each use case leads to multiple functional requirements, and certain functional requirements are common to several use cases. Similarly, a shared or repeated design element might satisfy a number of functional requirements. Ideally, you will capture all these interconnections, but in practice many-to-many relationships can become complex and difficult to manage.
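All three cardinalities can be held in one plain representation: a set of (source, target) pairs. The sketch below uses invented use-case and requirement IDs; it shows how the same pair set answers both forward ("what does this use case lead to?") and backward ("which use cases need this requirement?") queries, which is exactly where many-to-many links earn their keep.

```python
# One plain way to store links of any cardinality: a set of
# (source, target) pairs. The IDs below are illustrative.
links = {
    ("UC-2", "FR-7"),   # one use case leads to several requirements...
    ("UC-2", "FR-8"),
    ("UC-5", "FR-7"),   # ...and FR-7 is shared by two use cases
}

def forward(pairs, source):
    """All targets traced from one source element."""
    return sorted(t for s, t in pairs if s == source)

def backward(pairs, target):
    """All sources a given target element is traced from."""
    return sorted(s for s, t in pairs if t == target)

print(forward(links, "UC-2"))   # -> ['FR-7', 'FR-8']
print(backward(links, "FR-7"))  # -> ['UC-2', 'UC-5']
```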
Another way to represent traceability information is through a set of matrices that define links between pairs of system elements, such as:
• One type of requirement to other requirements of that same type.
• One type of requirement to requirements of another type.
• One type of requirement to test cases.
You can use these matrices to define various relationships that are possible between pairs of requirements, such as “specifies/is specified by,” “depends on,” “is parent of,” and “constrains/is constrained by.”
Table 2 illustrates a two-way traceability matrix. Most cells in the matrix are empty. Each cell at the intersection of two linked components is marked to indicate the connection. You can use different symbols in the cells to explicitly indicate “traced-to” and “traced-from” or other relationships. Table 2 uses an arrow to indicate that a functional requirement is traced from a particular use case. These matrices are more amenable to automated tool support than is the single traceability table illustrated in Table 1.
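A Table 2-style grid can be derived mechanically from the same pair set. In this hypothetical sketch (the use-case and requirement IDs are invented), an ASCII arrow `<-` stands in for Table 2’s arrow symbol, marking that a functional requirement is traced from a particular use case; every other cell stays empty.

```python
# Sketch of deriving a Table 2-style two-way matrix from link pairs.
# "<-" marks "this requirement is traced from that use case";
# IDs are invented for illustration.
def two_way_matrix(use_cases, reqs, links):
    return {uc: {fr: ("<-" if (uc, fr) in links else "")
                 for fr in reqs}
            for uc in use_cases}

links = {("UC-1", "FR-1"), ("UC-1", "FR-3"), ("UC-2", "FR-2")}
grid = two_way_matrix(["UC-1", "UC-2"], ["FR-1", "FR-2", "FR-3"], links)
print(grid["UC-1"]["FR-1"])  # -> <-
print(grid["UC-2"]["FR-1"])  # -> (empty cell)
```

Because the grid is generated rather than hand-maintained, it is easy for a tool to keep consistent, which is why this form lends itself to automation.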
Table 2. Requirements Traceability Matrix Showing Links Between Use Cases and Functional Requirements
Traceability links should be defined by whoever has the appropriate information available. Table 3 identifies some typical sources of knowledge about links between various types of source and target objects. Determine the roles and individuals who should supply each type of traceability information for your project. Expect some pushback from busy people whom the BA or project manager asks to provide this data. Those practitioners are entitled to an explanation of what requirements tracing is, why it adds value, and why they’re being asked to contribute to the process. Point out that the incremental cost of capturing traceability information at the time the work is done is small; it’s primarily a matter of habit and discipline.
Table 3. Likely Sources of Traceability Link Information
Tools for Requirements Tracing
It’s impossible to perform requirements tracing manually for any but very small applications. You can use a spreadsheet to maintain traceability data for up to a couple hundred requirements, but larger systems demand a more robust solution. Requirements tracing can’t be fully automated because the knowledge of the links originates in the development team members’ minds. However, once you’ve identified the links, tools can help you manage the vast quantity of traceability information.
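For a small project, even the spreadsheet approach can be lightly scripted. The sketch below is an assumption-laden illustration, not any tool’s format: it reads a Table 1-style CSV (column names invented) and reports which requirements still have empty forward-link cells, the kind of query that becomes unmanageable by hand as the system grows.

```python
# Hypothetical sketch: query a Table 1-style traceability spreadsheet
# exported as CSV. Column names are invented for illustration.
import csv
import io

CSV_DATA = """requirement,use_case,design,code,test
FR-1,UC-28,Class catalog,catalog.sort(),search.7
FR-2,UC-28,,,
"""

def incomplete_rows(csv_text):
    """Return requirement IDs whose design, code, or test cell is empty."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["requirement"] for row in reader
            if not all(row[k] for k in ("design", "code", "test"))]

print(incomplete_rows(CSV_DATA))  # -> ['FR-2']
```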
Numerous commercial requirements management tools offer strong requirements tracing capabilities. Two sources of information about such tools are http://www.incose.org/ProductsPubs/Products/rmsurvey.aspx and http://www.volere.co.uk/tools.htm. You can store requirements and other information in a tool’s database and define links between the various types of stored objects. Some tools let you differentiate “traced-to” and “traced-from” relationships, automatically defining the complementary links. That is, if you indicate that requirement R is traced to test case T, the tool will also show the symmetrical relationship in which T is traced from R.
Some tools automatically flag a link as suspect whenever the object on either end of the link is modified. The suspect links have a visual indicator such as a red question mark or a diagonal red line in a cell in the requirements traceability matrix. The suspect link indicators tell you to check, say, whether you need to change certain functional requirements to remain consistent with a modified use case. This feature helps ensure that you have accounted for the known ripple effects of a change.
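The suspect-link mechanism can be sketched with a content fingerprint: remember a hash of each linked object’s text, and treat the link as suspect whenever either end has changed since the link was last confirmed. This is a simplified illustration of the idea, not how any specific tool implements it; the use-case and requirement texts are invented.

```python
import hashlib

# Simplified sketch of suspect-link flagging: fingerprint each linked
# object's text and mark the link suspect if either end changes.
# The UC/FR texts below are invented for illustration.
def fingerprint(text):
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

class Link:
    def __init__(self, source_text, target_text):
        self.src_fp = fingerprint(source_text)
        self.tgt_fp = fingerprint(target_text)

    def is_suspect(self, source_text, target_text):
        """True if either linked object changed since the link was made."""
        return (fingerprint(source_text) != self.src_fp
                or fingerprint(target_text) != self.tgt_fp)

uc = "UC-4: user sorts the catalog"
fr = "FR-9: results sort ascending by title"
link = Link(uc, fr)
print(link.is_suspect(uc, fr))                 # -> False
print(link.is_suspect(uc + " by author", fr))  # -> True (use case edited)
```

When a link turns suspect, a human still has to decide whether the requirement on the other end must change too; the tool only guarantees the question gets asked.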
The final article in this series will propose a process for incorporating requirements traceability practices into your project activities.