In this blog, we recap our “The Inside Story: Data-Model Diagnostic for IBM® DOORS®” webinar.
Organizations make investments in software tools to improve their product development process, but they often forget to invest in their data. A consistent data model is the best way to maximize the benefits of software tooling, but this can only be achieved by spending time on analysis.
Jama Software has written extensively about the benefits of a common engineering data model and about using diagnostics to understand the true nature of your engineering data.
In this session, we will discuss how to build a return on investment (ROI) case for cleaning your IBM® DOORS® data.
You’ll learn more about:
- Why a common engineering data-model is important
- The aims of a diagnostic tool
- The breakdown of the IBM DOORS data-model diagnostic in terms of the measures that can be taken
- Calculating the financial impact of cleaning your engineering data
Below is an abbreviated transcript and a recording of our webinar.
The Inside Story: Data-Model Diagnostic for IBM® DOORS®
Richard Watson: Thanks very much. Yeah, I’m excited to be here today. As was said, before joining Jama Software I worked with DOORS for a long time as its product manager. One of the main activities I’ve been working on at Jama is: how do we transform DOORS data into a new requirements tool, Jama Connect? This presentation is all about deriving diagnostics from your data, so that we can understand its shape and size, help build the business case, and also help understand how we would transform the data to improve the situation.
Let’s start at a very high level. I’ve presented a slide like this repeatedly over all of those 30-something years. We know that the earlier in the life cycle we find a defect or a problem, the cheaper it is to resolve, and at Jama Software we firmly believe this is tied to traceability. If we have traceability between the artifacts in our engineering process, then we have the ability to find the information that contains an error. For example, if you’ve been working on your needs analysis and then start to decompose your requirements, and there is traceability between the requirements definition and the needs analysis, you’ll start to see the errors in the needs at that stage, rather than having to wait all the way around to validation to discover the mistake. And if you wait until the end, we all know it’s been proven to be much more expensive; there are many sources for this, but you’ll see a link on the slides to INCOSE.
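The escalating cost of late defect discovery can be sketched in a few lines. The phase multipliers below are assumed, rounded values in the spirit of the INCOSE-style figures referenced in the talk, not exact data from any study:

```python
# Illustrative sketch: relative cost of fixing a defect by the phase in
# which it is found. The multipliers are assumptions for illustration.
PHASE_COST_MULTIPLIER = {
    "needs_analysis": 1,
    "requirements": 3,
    "design": 10,
    "implementation": 30,
    "verification": 100,
    "production": 1000,
}

def repair_cost(phase_found: str, base_cost: float = 100.0) -> float:
    """Estimated cost to fix one defect, given the phase it was found in."""
    return base_cost * PHASE_COST_MULTIPLIER[phase_found]

# Catching a needs-analysis error at requirements time instead of at
# validation is, under these assumed multipliers, dramatically cheaper.
savings = repair_cost("verification") - repair_cost("requirements")
print(f"Finding the defect early saves an estimated ${savings:,.0f}")
```

Whatever the exact multipliers in your organization, the shape of the curve is the point: traceability lets you find the error on the left-hand side of the V, where the multiplier is still small.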
If we look at the reality of this V model, though, even with tools and some integrations in place, we find that there are many different silos of information in organizations, and an integration framework between those silos is just not established. As you’re documenting the requirements or the system design or the implementation, because you don’t have a viable connection back to the previous assets, you’re not encouraged to find the errors in those previous assets. And if you work in different silos in this way, you don’t fix those errors. By the time you go through verification, validation, and up that side of the V, you’ll perhaps start uncovering those problems, and it’ll be expensive. Even worse, you won’t find those problems; you’ll go into deployment, and you’ll find them in production, and that’s terribly expensive.
Richard Watson: Here in Jama Software, we believe that our live traceability model has resolved this. Jama Software provides an environment that keeps all of those different assets connected. Requirements actually are the common denominator. Everybody works against some form of specification. Maybe the name changes, maybe it’s a work instruction or a requirement or a project need, or a user expectation, but everybody’s working to something, everybody’s conforming to something. And here in Jama Software, we provide an environment to create your engineering data in something called a model-based framework. You have a model-based systems engineering framework for all of your engineering data, and we keep that data connected directly from the beginning. So rather than waiting to comply or give some sort of statement to say everything’s been covered and creating traceability, later on, we encourage this traceability to be established right from the very beginning.
And we do that in a way that does not force people out of their preferred environments. They work in their existing environments establishing traceability, but then we can see the end-to-end traceability in a common place. And we can make sure that all the engineering assets are consistent. Jama Connect offers this environment. It offers a way of defining a model-based systems engineering data model over your engineering data and then facilitating which applications should be contributing to those bits of information, be it Jama Connect for requirements, test, and risk, or some other system for defect tracking and other assets.
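To make the idea of a data model over your engineering data concrete, here is a minimal, hypothetical sketch: item types plus the relationships allowed between them. The type names and rules are invented for illustration; in Jama Connect this is configured through the product’s relationship rules, not written as code:

```python
# Hypothetical sketch of a traceability data model: which item types
# exist, and which trace links between them the model allows.
# All names here are invented examples, not a real configuration.
ALLOWED_TRACES = {
    ("System Requirement", "Stakeholder Need"),
    ("Subsystem Requirement", "System Requirement"),
    ("Test Case", "System Requirement"),
}

def trace_is_valid(from_type: str, to_type: str) -> bool:
    """Check whether a trace link conforms to the data model."""
    return (from_type, to_type) in ALLOWED_TRACES

print(trace_is_valid("Test Case", "System Requirement"))  # True
print(trace_is_valid("Test Case", "Stakeholder Need"))    # False
```

The value of such a model is exactly the point made above: every team creates links against the same set of rules from the very beginning, instead of reconstructing traceability later.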
Great, we’ve got this understanding that information needs to be connected together from the get-go and not at some later stage. And so that would give you a perfect environment, right? But there’s a big but. This data model that we’ve been describing, we can describe it as something like a language. Your engineering data model is the language of your engineers. It’s the way that they create your systems, the way they specify them, et cetera. But you want to be able to facilitate a common language. If you’ve got separate teams each using a different way to engineer your systems, then they can’t communicate with each other effectively. They can’t move between teams effectively. And the cost of integration across that life cycle becomes more and more expensive.
Richard Watson: And so this language needs to become common, but that’s quite difficult in traditional environments. If we move the conversation to some of the IBM tooling, IBM DOORS or IBM DOORS Next, for example, we find that there are many of these silos. IBM DOORS, for example, specifies requirements in what it calls DOORS modules. Each DOORS module stands on its own. Unless your organization has taken steps to rigorously make modules consistent with each other, each of those modules ends up being a silo of information. Multiple different sets of user requirements, for example, could easily be inconsistent with each other.
DOORS Next is the same. DOORS Next uses a component model, and each component stands on its own as a silo, and keeping the components consistent with each other is also difficult. Although we have this wish for a common language across the organization, it’s very easy and quite convenient to end up with lots of silos within the organization, each doing its own particular thing and unable to communicate. Then comes the big question: if you have these silos, how do you move from a silo-based organization to a common language, a common engineering data model? And that’s when we should start talking about data-model diagnostics…
To watch the full webinar, visit: The Inside Story: Data-Model Diagnostics for IBM® DOORS®