Tag Archive for: Product Development & Management

Software Validation

This is part two of a two-part series on software validation and computer software assurance in the medical device industry.

Practical Guide for Implementing Software Validation in Medical Devices: From FDA Guidance to Real-World Application – Part 2

In our previous blog post, we reviewed the top things to know about software validation and computer software assurance in the medical device industry. In this installment, we’ll take a closer look at computer software validation and provide tips and tools to manage your software in a compliant and efficient manner.

Main points

The FDA Draft Guidance on Computer Software Assurance

In September 2022, the FDA released its draft guidance “Computer Software Assurance for Production and Quality System Software.” While still in draft form, the final version of most guidance typically mirrors the draft document. The 2022 draft supplements the 2002 guidance on software validation, except that it will supersede Section 6 (“Validation of Automated Process Equipment and Quality System Software”). In this guidance, the FDA uses the term computer software assurance and defines it as a “risk-based approach to establish confidence in the automation used for production or quality systems.”

There are many types of software used and developed by medical device companies, including those listed below. The scope of the 2022 draft guidance covers software used in production and in quality systems, as highlighted below.

  • Software in a Medical Device (SiMD) – Software used as a component, part, or accessory of a medical device;
  • Software as a Medical Device (SaMD) – Software that is itself a medical device (e.g., blood establishment software);
  • Software used in the production of a device (e.g., programmable logic controllers in manufacturing equipment);
  • Software in computers and automated data processing systems used as part of medical device production (e.g., software intended for automating production processes, inspection, testing, or the collection and processing of production data);
  • Software used in implementation of the device manufacturer’s quality system (e.g., software that records and maintains the device history record);
  • Software in the form of websites for electronic Instructions for Use (eIFUs) and other information (labeling) for the user.

RELATED: Understanding Integrated Risk Management for Medical Devices


Understanding Your Software’s Intended Use and Risk-Based Approach

Defining the software’s intended use is an important aspect of managing your organization’s computer software assurance activities.

This then allows you to analyze and document the impact on safety if the software fails to perform its intended use. One aspect I appreciate the FDA adopting is the concept of ‘high process risk’: when failure of the software to perform as intended may result in a quality problem that foreseeably compromises safety and increases medical device risk. The guidance includes a number of examples illustrating both high process risk and situations that are not high process risk. Previously, risk that was purely a compliance risk (i.e., no process risk) was essentially treated the same as risk that could compromise safety.

Commensurate with the level of process risk, the guidance presents examples that outline expected computer software assurance activities, including various levels of testing and documentation. For software that poses a high level of process risk, assurance activities include documentation of the intended use, a risk determination, a detailed test protocol, a detailed report of the testing performed, pass/fail results for each test case, and any issues found and their disposition, among others.

In contrast, the guidance provides that for software posing no process risk, computer software assurance activities can consist of a lower level of testing, such as unscripted ad-hoc or error-guessing testing. Prior to this guidance, the expectation was fully scripted protocols and documented results for each test case, which felt burdensome. For example, one had to script out protocol steps, down to user log-in steps, for an electronic QMS module that facilitated the nonconformance process and did not pose a high level of process risk. The concept of high process risk, and the acknowledgment that unscripted testing can be appropriate when risk is low, will certainly help lessen the burden of compliance without compromising safety.

Managing Your Software Efficiently

For those who think analytically like me, one can easily see the value of a trace matrix for keeping your organization’s software organized and ensuring the intended use, risk assessment, planned computer software assurance activities, and outcomes are documented.
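If you track your software inventory programmatically, a trace matrix of this kind can be sketched as simple structured records. The entries, field names, and completeness check below are illustrative assumptions, not a prescribed format:

```python
# Hypothetical software inventory entries; names and fields are
# illustrative, not a mandated FDA structure.
software_inventory = [
    {
        "name": "eQMS Nonconformance Module",
        "intended_use": "Facilitate the nonconformance workflow",
        "process_risk": "low",
        "assurance_activity": "unscripted ad-hoc testing",
        "outcome": "pass",
    },
    {
        "name": "Weld Controller Firmware",
        "intended_use": "Control pick-and-place welding equipment",
        "process_risk": "high",
        "assurance_activity": "scripted protocol with documented results",
        "outcome": None,  # validation not yet performed
    },
]

def incomplete_records(inventory):
    """Return names of entries missing any required trace-matrix field."""
    required = ("intended_use", "process_risk", "assurance_activity", "outcome")
    return [row["name"] for row in inventory
            if any(row.get(field) in (None, "") for field in required)]

print(incomplete_records(software_inventory))  # -> ['Weld Controller Firmware']
```

A check like this makes it easy to spot software that has been inventoried but not yet taken through its planned assurance activities.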

Similar to how it efficiently traces your medical device design inputs to outputs and links them to your risk management, Jama Connect® is also a great tool for tracing and managing all your software, along with your software validation and computer software assurance activities. This includes documentation of the intended use, risk determination, and the test protocols and reports performed. With its new validated cloud offering, SOC 2 certification, and available Jama Connect Validation Kit, Jama Software also provides the tools and evidence you need to support your organization’s computer software assurance activities.


RELATED: Jama Connect® for Medical Device Development Datasheet


Closing

Developing a risk-based process for software management, including software validation and computer software assurance, is key to staying compliant. Staying organized and using a tool like Jama Connect helps you do so efficiently.

To read part one of this blog, click HERE.


FuSA

Functional Safety (FuSA) Explained: The Vital Role of Standards and Compliance in Ensuring Critical Systems’ Safety

Have you heard of FuSa? It stands for Functional Safety, and it is a vital part of any system that requires safety assurance. FuSa aims to reduce the risk of physical injury or damage due to malfunctioning equipment. This guide provides an overview of the subject, including the relevant standards, compliance requirements, and the different types of systems where FuSa is used.

What Is Functional Safety?

At its core, Functional Safety (FuSa) is a set of measures taken to ensure that a system meets certain safety requirements. In other words, it’s a way to make sure that any system can operate safely without causing physical injury or damage. This includes both hardware and software components within the system.


RELATED: Managing Functional Safety Development Efforts for Robotics Development


How Does FuSa Work?

The goal behind FuSa is to reduce the risk associated with a product’s failure as much as possible through the use of safety systems that are designed to detect any potential hazards and then take corrective action if necessary. To do this, developers must consider both hardware-based solutions such as monitoring devices or sensors, as well as software-based solutions such as algorithms or machine learning models that can detect potential faults before they occur. Once all potential risks have been identified and addressed, designers must then create a comprehensive test plan to validate all safety system components before the product is released into production.

FuSa Standards and Compliance Requirements

Several international standards have been established to help guide organizations in their implementation of FuSa. These include ISO 26262 for the automotive industry and IEC 61508, the generic functional safety standard widely applied in the industrial manufacturing and consumer electronics sectors. Both standards establish minimum requirements for safety-critical functions within a system. Additionally, each standard specifies testing procedures that must be followed in order to demonstrate compliance.

Typical Applications of FuSa

FuSa is commonly used in aerospace and defense applications as well as road vehicles, industrial machinery, medical devices, consumer products, and more. It can also be applied in critical systems such as those involving control functions or power generation/distribution systems. In all cases, the goal is to reduce the risk of unacceptable physical harm or damage due to malfunctioning systems or components.

When creating a safety system using FuSa principles, engineers typically use analysis tools such as FMEA (Failure Modes and Effects Analysis), FMEDA (Failure Modes, Effects, and Diagnostic Analysis), and FHA (Functional Hazard Analysis). These are applied within the framework of the standard relevant to the product: IEC 62304 for software life cycle processes in medical devices, ISO 26262 for road vehicle functional safety, IEC 61508 for industrial automation, and so on, depending on the type of safety-critical E/E/PE (electrical/electronic/programmable electronic) system being developed. The details vary by product type, but the process usually involves assessing potential risks from different scenarios and establishing suitable safeguards against them so that the system meets the Safety Integrity Level (SIL) requirements laid out by IEC 61508.
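As a rough illustration of how SIL targets work, the sketch below maps an average probability of dangerous failure on demand (low-demand mode) to a SIL using the IEC 61508 target failure bands. The band values come from the standard; the helper function itself is an illustrative assumption, not part of any standard:

```python
def sil_for_pfd(pfd_avg):
    """Map an average probability of failure on demand (low-demand mode)
    to a Safety Integrity Level per the IEC 61508 target failure bands."""
    bands = [  # (SIL, lower bound inclusive, upper bound exclusive)
        (4, 1e-5, 1e-4),
        (3, 1e-4, 1e-3),
        (2, 1e-3, 1e-2),
        (1, 1e-2, 1e-1),
    ]
    for sil, low, high in bands:
        if low <= pfd_avg < high:
            return sil
    return None  # outside the defined SIL range

print(sil_for_pfd(5e-4))  # -> 3
```

In practice, reaching a given SIL involves far more than a failure-rate calculation (systematic capability, diagnostic coverage, development process rigor), but the bands show why higher-integrity systems demand dramatically lower failure probabilities.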


RELATED: 2023 Predictions for Industrial and Consumer Electronics Product Development


Conclusion:

Functional Safety is an important consideration for any organization dealing with safety-critical systems or components, where malfunctioning equipment or software failure could lead to unacceptable physical harm or damage. Engineers should use tools like FMEA and FMEDA during the development process while adhering to standards such as ISO 26262 and IEC 61508, so that their products meet the Safety Integrity Level requirements those standards lay out. As long as organizations are aware of these requirements and implement them properly in their products and services, they should be able to develop reliable, safe products that meet customer expectations!

Note: This article was drafted with the aid of AI. Additional content, edits for accuracy, and industry expertise by McKenzie Jonsson and Steve Rush.


Jama Connect Jira Integration

Revolutionize Your Software Development Process with Seamless Integration of Jama Connect and Atlassian Jira

Nearly all of Jama Software®’s clients engage in software development to some degree. In some cases, the products they build are entirely software-based, and in others, software is just one critical component of a more complex system.
Because most software engineering teams use Atlassian Jira as their central development hub, Jama Software provides a direct link between Jama Connect® and Jira through our integration platform, Jama Connect Interchange™.

As the Product Manager for Jama Connect Interchange (JCI), I often get asked by customers for best practices and examples of how to integrate Jama Connect with Jira, so their teams can collaborate across these tools more seamlessly.

For the most part, the customers I talk to aren’t interested in discussing high-level theories and best practices for software development (though our seasoned consulting team is always happy to provide advice in this area!). Rather, customers are looking for ground floor examples and use cases to reconnect their disconnected teams and processes.

Our customers know that in order to finish projects on time and on budget, the two most important tools in their software development ecosystem – Jama Connect and Jira – need to talk to each other directly. That’s where a targeted integration workstream comes in.

Jama Connect-Jira Integration Examples

Today, I’m going to share a few of the most successful Jama Connect to Jira integration examples we’ve seen. These examples can be implemented on their own or layered together for a more advanced workstream.

Keep in mind that this is not an exhaustive list! If your team has another workstream they’re loving, I’d encourage you to post about it in the JCI Sub-Community for others to see.


Editor’s Note: The terminology included below may vary slightly depending on your industry.

Example #1 (Agile Development Workstream) – Requirements in Jama Connect, Stories and Tasks in Jira

Most of our customers have the best success by completing their higher-level product planning in Jama Connect, while reserving Jira for more granular task execution by the software engineering team.

This is because Jira excels at task management, while Jama Connect’s built-in requirements traceability, risk management, and test management capabilities make it the ideal place to track your project in a holistic manner.

With the following model, you send software requirements from Jama Connect to Jira for execution at distinct points in the project’s lifecycle when software development activity occurs.

Rather than sync full hierarchies of items (requirements, user stories, tasks, etc.) back and forth unnecessarily, we set up JCI to sync just the relevant information at the time it is needed by the receiving team. This provides greater focus and prevents duplication of effort or conflicting changes.


RELATED: Write Better Requirements with Jama Connect Advisor™


Example #2 (Project Management Workstream) – Requirements and Stories in Jama Connect, Tasks in Jira

This example is similar to the first, except that User Stories are authored in Jama Connect rather than in Jira. Use this workstream if the Project Manager or Product Manager typically breaks down requirements into more granular units of work (stories) during the planning phase, before passing them off to the development team for execution.

With this workstream, each team works in their tool of choice, and when information is shared by multiple teams, that information is visible from both tools and synced in real time.

Example # 3 – Never Miss a Regression Defect

The regression testing cycle that accompanies each new software release can be chaotic if you don’t have an airtight process in place. During regression, Jama Software’s own development teams use JCI to streamline communication between our QA testers (who work in Jama Connect) and software engineers (who work in Jira). This ensures that any regression defects reported by QA are instantly sent to engineers for triage, and the release is not held up.

Here is how it works:

Since defects are automatically synced to Jira as soon as they are reported, we eliminate any potential communication delays between the two teams and, more importantly, we ensure that no defects get missed.
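JCI’s internal mapping isn’t public, but the general shape of turning a reported Jama Connect defect into a Jira issue can be sketched against Jira’s public REST API, which accepts a POST to /rest/api/2/issue with a "fields" payload. The project key, field names, and defect structure below are assumptions for illustration:

```python
def defect_to_jira_payload(defect, project_key="ENG"):
    """Build a Jira create-issue payload from a (hypothetical) Jama defect
    record. The summary prefix and labels are illustrative conventions."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": f"[Regression] {defect['name']}",
            "description": (
                f"Reported in Jama Connect (item {defect['jama_id']}).\n\n"
                f"{defect['description']}"
            ),
            "labels": ["regression", "synced-from-jama"],
        }
    }

defect = {
    "jama_id": "DEF-42",
    "name": "Login page 500 error",
    "description": "Server error after submitting credentials.",
}
payload = defect_to_jira_payload(defect)
print(payload["fields"]["summary"])  # -> [Regression] Login page 500 error
```

An integration platform then posts this payload to Jira and writes the resulting issue key back to the Jama item, so both teams see the same defect in their own tool.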


FREE DEMO: Click Here to Request a Demo of Jama Connect Interchange™


Conclusion

Jama Connect Interchange is an integration platform that seamlessly integrates Jama Connect with other best-of-breed tools, like Atlassian Jira and Microsoft Excel.

JCI is built, supported, and continuously enhanced by dedicated teams at Jama Software. This means that JCI is deeply integrated with Jama Connect configurations and workstreams, providing you with a smart and seamless sync.

JCI supports both cloud-based and self-hosted instances of Jama Connect. To find out whether JCI would be a good fit for your organization, contact your Customer Success Manager.


TO LEARN MORE, DOWNLOAD THE: Jama Connect Interchange™ Datasheet



Software Validation, Medical Device

Practical Guide for Implementing Software Validation in Medical Devices: From FDA Guidance to Real-World Application – Part I

Intro

This is Part 1 of a 2-part series on software validation and computer software assurance in the medical device industry.

While it is clear that software validation is required by regulation in the US and elsewhere (e.g., in the EU (European Union), as regulated by the MDR and IVDR), how to execute it continues to cause challenges, both for established medical device companies and for those just entering the medical device industry.

Between the different types of software; the variations in terminology, type, and source of software (developed in-house, purchased OTS, customized OTS (COTS), SOUP, etc.); advances in software technology; and the evolving guidance of the FDA (Food and Drug Administration) and other regulatory bodies, it’s no wonder that implementing software validation practices and procedures causes confusion.

This blog outlines the top things to know about software validation and computer software assurance as you implement practices and procedures for your organization in a way that is compliant and brings value.

Are you building or updating your software validation practices and procedures? If so, read on!

Top Things to Know About Software Validation and Computer Software Assurance

#1. Yes, there are different terms, methods, and definitions for software validation.

For the purposes of this blog, we’ll use the FDA’s definition of software validation, from their 2002 guidance. The FDA considers software validation to be “confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled.”

At a high level, this makes sense. The confusion starts when folks try to define how that confirmation is performed and documented. How do I determine and document the requirements? How much detail do I need for my user needs and intended uses? For each feature? What kind of objective evidence? What if I’m using software to automate test scripts? Do I have to qualify the testing software? Turning to guidance and standards for a “standard” set of practices can add to the confusion. Even within just the medical device industry, there are multiple regulations and standards that use similar, yet slightly different, concepts and terminology. Examples include the IQ/OQ/PQ (Installation Qualification / Operational Qualification / Performance Qualification) analogy from process validation, black-box testing, and unit testing, just to name a few.

Before getting overwhelmed, take a breath and read on to point #2.


RELATED: How to Use Requirements Management as an Anchor to Establish Live Traceability in Systems Engineering


#2. Start with the regulations and standards.

While the multiple regulations and standards around software validation cause confusion, they are also a good place to start. I say that because, at a high level, they are all trying to achieve the same thing: software that meets its intended use and maintains a validated state. Keeping that intent in mind can make it easier (at least it does for me) to see the similarities in the lower-level requirements despite terminology differences, rather than focusing on making all the terminology match.

To start, select the regulations and guidance from one of your primary regulatory jurisdictions (like the FDA for the US). In the US, there are three main FDA guidance documents to incorporate. The first two are: 1) General Principles of Software Validation; Final Guidance for Industry and FDA Staff, issued in 2002; and 2) Part 11, Electronic Records; Electronic Signatures – Scope and Application, issued in 2003.

The third guidance is relatively new: a draft guidance released in September 2022, Computer Software Assurance for Production and Quality System Software. While still in draft form, the final version of most guidance typically mirrors the draft document. The 2022 draft supplements the 2002 guidance, except that it will supersede Section 6 (“Validation of Automated Process Equipment and Quality System Software”). It is also in this guidance that the FDA uses the term computer software assurance and defines it as a “risk-based approach to establish confidence in the automation used for production or quality systems.”

Once you’ve grounded yourself in one set, then you can compare and add on, as necessary, requirements for other regulatory jurisdictions. Again, focus on specific requirements that are different and where the high-level intent is similar. For example, in the EU, Regulation (EU) 2021/2226 outlines when instructions for use (IFUs) may be presented in electronic format and the requirements for the website and eIFUs presented.

#3. Start on the intended use and make your software validation and computer software assurance activities risk based.

Start by documenting the intended use of the software and the associated safety risk if it were to fail. Then define the level of effort and the combination of software validation activities commensurate with that risk. Software and software features whose failure would result in severe safety risk are to be validated more rigorously, with more software assurance activities, than software that poses no safety risk.
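As a sketch of this graduated approach, the mapping below pairs a documented risk level with an illustrative set of assurance activities. The activity lists are examples in the general direction of the FDA’s 2022 draft guidance, not a mandated or exhaustive set:

```python
# Illustrative mapping from documented risk level to assurance activities.
ASSURANCE_ACTIVITIES = {
    "high": ["documented intended use", "risk determination",
             "scripted test protocol", "detailed test report",
             "pass/fail results per test case", "issue disposition"],
    "moderate": ["documented intended use", "risk determination",
                 "limited scripted testing", "summary test record"],
    "low": ["documented intended use", "risk determination",
            "unscripted ad-hoc or error-guessing testing"],
}

def plan_assurance(risk_level):
    """Return the planned assurance activities for a documented risk level."""
    try:
        return ASSURANCE_ACTIVITIES[risk_level]
    except KeyError:
        raise ValueError(f"Unknown risk level: {risk_level!r}") from None

print(plan_assurance("low")[-1])  # -> unscripted ad-hoc or error-guessing testing
```

Note that every level still starts with documenting the intended use and the risk determination; only the rigor of testing and documentation scales with risk.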

Here are some examples of intended use and the associated safety risk.

Example 1: Jama Connect®, Requirements Management Software

Intended Use: The intended use of Jama Connect is to manage requirements and the corresponding traceability. The following design control aspects are managed within Jama Connect: user needs, design inputs, and traceability to design outputs and verification and validation activities. Risk analysis is also managed in Jama Connect.

Feature 1 Intended Use: Jama Connect provides visual indicators to highlight breaks in traceability. For example, when a user need is not linked to a design input, or vice versa.

Risk-based analysis of Feature 1: Failure of the visual indicator could result in incomplete traceability or a missed design control element, such as a design input. This risk is considered moderate because a manual review of the traceability matrix is also performed, as required by the Design Control SOP. Reports are exported from Jama Connect as PDFs, reviewed externally to the software, and then approved per the document control SOP.


RELATED: Traceability Score™ – An Empirical Way to Reduce the Risk of Late Requirements


Example 2: Embedded software in automated production equipment

Intended use: The intended use of the software is to control production equipment designed to pick and place two components and weld them together.

Risk-based analysis: This is a critical weld that affects patient safety if not performed to specification. Thus, the software is considered high risk.

#4. Software validation and computer software assurance are just one part of the software life cycle… you need to be concerned with the whole lifecycle.

There is more to software development and management than just validation. Incorporate how custom software will be developed, how purchased software will be assessed to determine the appropriate controls based on risk (including verification and validation activities), and how software will be revision controlled.

#5. Have different procedures and practices for the different types of software.

This is a good time to consider how the different types of software in your organization will be managed; it’s not a one-size-fits-all approach. A best practice is to have separate practices and procedures: one for software in a medical device (SiMD) and software as a medical device (SaMD), and at least one other procedure and set of practices for other software, such as software used in the production of a device, software in computers and automated data processing systems used as part of medical device production, or software used in implementation of the device manufacturer’s quality system.

Closing

Stay tuned for Part 2 of this 2-part blog series, where we’ll dive deeper into computer software assurance, highlight the risk-based approach, and provide tips and tools to manage your software in a compliant and efficient manner.



Finding Information

Jama Connect® Features in Five: Finding Information

Learn how you can supercharge your systems development process! In this blog series, we’re pulling back the curtains to give you a look at a few of Jama Connect®’s powerful features… in under five minutes.

In this Features in Five video, Carleda Wade, Senior Consultant at Jama Software®, walks viewers through various ways of filtering and finding information within Jama Connect.

In this session, viewers will learn:

  • How to find information within Jama Connect®
  • How to use search boxes throughout the application
  • How to use facet filters to narrow search results
  • How to interact with predefined filters
  • How to create and manage new filters

Follow along with this short video below to learn more – and find the full video transcript below!


VIDEO TRANSCRIPT:

Carleda Wade: In this session, we will learn how to find information within Jama Connect, use search boxes throughout the application, use facet filters to narrow search results, interact with predefined filters, and create and manage new filters. So now let’s jump into our Jama instance.

Now we’re going to show you a couple different ways that you can search throughout the software. So here on our homepage we have this search bar. So let’s say I’d like to look for scheduling, since we just did manual scheduling in the previous session.
If I click on submit, you’ll see all these results. These results will show anytime the word scheduling shows up in any of the various projects. As you can see, this is a lot of results. So maybe we want to apply a filter so that we can narrow our list. So here I can click on filter items, and maybe potentially search for a keyword.

But I can also narrow this by looking at a certain project. So we’ve been working in our Jama 101, and then maybe I want to just look at system requirements, and let’s say stakeholder requirements. So here you’ll see are just the items that meet those two requirements. Another way to do this is by an advanced search. If I do an advanced search, first I can create a new filter. So let’s say I want to look for scheduling in my Jama 101 project, and I want to look at system requirements with the keyword of scheduling. When I do this, you can see here that I can preview, and that there will be three results. So if I click on there, it will give me a preview. And I can choose to save my filter. So now, essentially I’ve created a brand new filter.

Next, if I click here from my project explorer on filters, you’ll be able to see all of the various filters that are available. If I click on bookmarks, you’ll see this is the one that I just created, scheduling. And this little icon here indicates that it’s been bookmarked, or it’s become one of my favorites. If I go through the all, you can see other filters that have already been created within the system.


RELATED: Jama Connect® vs. DOORS®: Filters, Search, and Analysis: A User Experience Roundtable Chat


Wade: So let’s take a look at what happens when you right click on a filter. There are a few different options. So I could choose to remove this from my bookmarks if I so desire. I could also choose to duplicate this. So let’s say, for instance, this particular filter houses a lot of good information, and I want to be able to modify that information without changing the original filter. Maybe I would first duplicate this filter, then add onto it. I could also choose to edit the filter and view the criteria. If so desired, I could delete it.


RELATED: How to Use Requirements Management as an Anchor to Establish Live Traceability in Systems Engineering


Wade: Another really interesting thing to see is if I choose to apply the filter to the explorer. When I do that, you’ll see that only the items that meet the filter requirements show up, instead of the full explorer like it did before. So that’s pretty interesting.

Going back in, the last option is send for review. So let’s say, for instance, for these stakeholder requirements in draft status: if I wanted to go ahead and move these requirements from draft, I could choose right here from the filtered screen to send this for a review, and it would just open up in the Review Center. Another way to be able to search is if we go to our activity stream. So here you can see there’s a little search bar for our activity stream. So let’s say I also typed in scheduling here. Or let’s say I wanted to see what Sarah has done within my stream. Here you can see all of the activities that Sarah has done within my instance here.

Another way to search for information is, let’s go back into our manual scheduling and go to our activities. Here you’ll see we have yet another search function, if we’d like. And then also, we could apply filters here if we so desire. Also, whenever using filters such as either here or any of the locations, we can also use built in operators. So let’s say we wanted to look for intelligent and scheduling in our project. You’ll see here that it comes up.


RELATED: Jama Connect User Guide: Find Content


To view more Jama Connect Features in Five topics visit: Jama Connect Features in Five Video Series



Total Cost of Ownership

Jama Connect® vs. IBM® DOORS®: Total Cost of Ownership: A User Experience Roundtable Chat

Increasing industry challenges and complexities are pushing innovative organizations to consider modernizing the tool(s) they use for requirements management (RM). In this blog series, Jama Connect® vs. IBM® DOORS®: A User Experience Roundtable Chat, we’ll present several information-packed video blogs covering the challenges that teams face in their project management process.

In the 10th and final episode of our Roundtable Chat series, Preston Mitchell, Sr. Director, Global Business Consulting at Jama Software®, and Susan Manupelli, Senior Solutions Architect at Jama Software®, discuss the total cost of ownership in product management.

To watch other episodes in this series, click HERE.

Watch the full video and find the video transcript below to learn more!


VIDEO TRANSCRIPT:

Preston Mitchell: All right. Welcome everybody, to episode 10 in our vlog series. Today, we’re going to be talking about total cost of ownership. I’m Preston Mitchell, the senior director of solutions at Jama Software, and I’m joined by my colleague, Susan Manupelli. Susan, do you want to introduce yourself?

Susan Manupelli: Sure. My name’s Susan Manupelli. I’m a senior solutions architect here at Jama Software, but I came from IBM, where I was a test architect for the last 20 years on some of their requirements management tools, so primarily Rational DOORS Next Generation and RequisitePro actually, before that.

Preston Mitchell: Excellent. Like Susan, I was a former IBM-er as well, so a user of many of those tools. Today, as you can see, we want to talk about kind of three main categories of total cost of ownership: IT infrastructure, so these are things like the actual physical hardware; the FTE administration costs, so like upgrades, maintenance; and then also the opportunity costs of when you do not adopt best-in-breed tools and processes. Why don’t we first start it off with the IT infrastructure costs? You know, Susan, in your experience with other RM tools, what have you found to be the challenges in this area?

Susan Manupelli: Sure. I’ll talk first about DOORS Next Generation. You know, DNG’s part of the ELM suite of products, that’s based on the Jazz architecture. It’s a very complex architecture. There’s a large number of servers you need, or VMs, to be able to stand up the solution. There’s an app server or some version of WebSphere. There’s a DB server for every application. So at a minimum with DNG, in addition to the app and DB server, you also would need a JTS server, an additional reporting server, [inaudible 00:02:08] or Data Warehouse. And if you have configuration management enabled, then there’s two additional servers that come with that, so for the global configuration manager and the LDX server. So-

Preston Mitchell: Interesting.

Susan Manupelli: And then of course, if you use any of the other applications of the ELM suite, there’s a server and database for those.


RELATED: Traceability Score™ – An Empirical Way to Reduce the Risk of Late Requirements


Preston Mitchell: Yeah, that’s quite a contrast to Jama, where we just require one application server and then a database server, which could be shared, actually, with other applications. Of course, that’s for self-hosted customers. Cloud customers really have no IT infrastructure costs at all, and I think that’s one of the biggest benefits of adopting a tool like Jama Connect. Okay, great. Next, I’d love to talk about the human or FTE maintenance costs that go along with tools. Susan, what’s your experience with other requirements management tools around the FTE costs?

Susan Manupelli: Sure. I’ll start off with DOORS Classic, which is an older client-server technology, and what I mean by that is that every user had to have software installed on their computer that was compatible with the server, so it was what we referred to as a thick client. An upgrade or maintenance of that would mean pushing out updates to however many users you have in your organization, potentially could be hundreds. So there was a lot of logistics involved with trying to get that upgrade done.

Preston Mitchell: Got it, and yeah, I imagine that’s downtime for the users, and a lot different than just a web-based tool that I sign in with my browser. The other thing that I know in working with customers that have migrated from DOORS Classic is DXL scripts and customization. Maybe you could talk a little bit about the hidden costs with those things.

Susan Manupelli: Yeah. Basically, any kind of customization that you want to do in DOORS Classic, you had to have somebody that could write a DXL script for it. That’s kind of a specialized skill, so there were costs with maintaining those, particularly if they were used across the organization.

Preston Mitchell: Is that any better with DOORS Next Generation?

Susan Manupelli: With DOORS Next Generation, there’s no DXL scripting or anything like that, but the thing that’s challenging with DOORS Next Generation is the upgrades and maintenance. Upgrades were often very complex and time-consuming. There was pretty high risk of failure, and then of course you have the time involved in rolling back and trying it again. There’s also the ongoing maintenance of the middleware, which would require a highly technical admin with some specialized skills in maybe database optimization, so Oracle or Db2. Also, keeping the system running optimally requires a full-time, highly skilled administrator for the ELM suite.

Preston Mitchell: Really? Full-time just for the backend? Wow.

Susan Manupelli: Yeah.


RELATED: Eight Ways Requirements Management Software Will Save You Significant Money


Preston Mitchell: Yeah, that’s definitely different than kind of what our self-hosted customers experience. I mean, we try to make the self-hosted upgrades very easy and straightforward. It’s a button click in the admin console. And then obviously, for the majority of our customers who use our cloud solution, there’s really no upgrade or maintenance that they have to do at all. We push the upgrades for them. We handle that for them in an automated process that’s validated and verified. So yeah, definitely different. Well, let’s transition to talk about adoption costs, and I want to bring my screen share up again, because you and I have spoken about really the opportunity costs of not using best-in-breed tools or processes, and it really comes down to measurement. We really believe that using Jama Connect, we can reduce negative product outcomes, because we can help you measure your process performance. As management guru Peter Drucker said, “If you can’t measure it, you can’t improve it.” So Susan, maybe you could touch on what I find are the three primary ways that we can help our customers measure their performance.

Susan Manupelli: Sure. First of all, we can measure the quality of the requirements. This means making sure the requirements are properly defined, that they’re complete and consistent. And we actually have a new product, Jama Connect Advisor, that helps in this area. As far as digital engineering, we can measure the level of collaboration that’s happening in the tool, the number of reviews, and the output from those reviews. And then also live traceability. Traceability is one of the key reasons why people use a requirements management tool, and Jama does it better than any other tool that I’ve used. And in addition, we can measure how well you’re actually capturing that traceability.

Preston Mitchell: Yeah. And speaking to that, especially on the live traceability, we have for our cloud customers, this great benchmark, where we anonymize all the data, and you can actually see how you stack up against your peers in the industry with regards to the traceability completeness of your projects. So some really great return on investment by utilizing our cloud offering and being able to see the actual performance compared to your peers in the industry. Ultimately, I think everyone realizes the later you are in a product development lifecycle, it’s much more expensive to actually fix any errors that are found. So our whole goal at Jama Connect is really to lower the total cost of ownership, but really actually make your product development less costly by finding and fixing those errors way earlier in the cycle, in the requirements definition phase. Well Susan, thanks again for the quick chat, and sharing your perspective on cost of ownership. Appreciate it.

Susan Manupelli: Great. Thanks, Preston.

Preston Mitchell: Bye, everybody.


Is your data working for you? A consistent and scalable data model is instrumental for achieving Live Traceability™ and making data readily available across the development lifecycle.

Download our Jama Software® Data Model Diagnostic to learn more!


Thank you for watching our 10th and final episode in this series, Jama Connect vs. IBM DOORS: Total Cost of Ownership. To watch other episodes in this series, click HERE.

To learn more about available features in Jama Connect, visit: Empower Your Team and Improve Your Requirements Management Process



VALID Act

In this blog, we recap our whitepaper, “Your Guide to the Verifying Accurate Leading-edge IVCT Development (VALID) Act” – To read the entire paper, click HERE.


Your Guide to the Verifying Accurate Leading-edge IVCT Development (VALID) Act

For more than four years, members of both houses of Congress have solicited input from a variety of stakeholders to improve Food and Drug Administration (FDA) oversight of Laboratory-Developed Tests (LDTs). The result is the Verifying Accurate Leading-edge IVCT Development (VALID) Act, a bi-partisan and bi-cameral bill designed to empower the FDA to regulate diagnostic tests. While the bill has not been passed yet, it is likely that it will move forward in some form and eventually become law.

This guide will help your IVD product team understand the provisions and implications of the VALID Act and how it applies to product development and risk management for medical devices.

Editor’s Note: As of the date of publication, the bill has not made it into FDA’s budget, but tighter regulations may be coming regardless.


RELATED: Application of Risk Analysis Techniques in Jama Connect® to Satisfy ISO 14971


Background

The FDA first received authority to regulate in vitro devices in 1976 under amendments to the Food, Drug, and Cosmetic (FD&C) Act. Following passage of those amendments, the FDA chose to exercise discretionary oversight over commercializing LDTs, citing the conclusion that LDTs were normally used within restricted environments for the benefit of the institution’s clinicians only.

Under the Clinical Laboratory Improvement Amendments (CLIA) of 1988, the Centers for Medicare and Medicaid Services (CMS) was charged with setting performance standards for test validation and certifying qualified laboratories. Included for regulatory oversight under CLIA were LDTs, which are sometimes called “home brew tests” due to their design and use within a single lab.

The FDA started to move toward regulation of LDTs in 2010, and pharmaceutical companies, professional societies, and industry groups began to submit feedback for and against regulatory oversight. Guidance documents, discussion, and early draft versions of regulatory legislation were ongoing until 2018, when the first version of the VALID Act was introduced in the House of Representatives.

The VALID Act is designed with the goal of modernizing regulatory processes for in vitro diagnostics (IVDs) and LDTs and clarifying the FDA’s authority to regulate LDTs. The act’s authors and contributors also hope to address discrepancies between the market pathway of LDTs and other FDA-cleared or -approved tests.

A recent Senate version of the bill is attached to a larger legislative package that reauthorizes the Medical Device User Fee Amendments (MDUFA), which expired at the end of September 2022.

Provisions

Current versions of the VALID Act include the following provisions:

  • Create a new category of IVD called “in vitro clinical tests,” or IVCTs. This category would apply to both commercial test kits and LDTs.
  • Exempt existing LDTs from FDA pre-market review UNLESS they pose a safety risk for patients or are significantly modified after the act goes into effect.
  • Create a risk-based system for the FDA to oversee LDTs. This system would classify LDTs as low-, moderate-, or high-risk.
  • Provide measures to move LDTs into lower tiers of regulation. These measures include appropriate labeling, performance testing, submission of clinical data, clinical studies, and posting information to a website.
  • Offer exemptions from FDA pre-market review for low-risk LDTs, low-volume tests, modified tests, manual interpretation tests, and humanitarian tests.
  • Prohibit the FDA from infringing on the practice of medicine.
  • Direct the FDA not to duplicate regulations that are already included in CLIA.
  • Require the FDA to hold public hearings on LDT oversight.
  • Create a process to establish user fees through negotiations between the FDA and the industry. Fees would be subject to congressional approval.

The VALID Act would be effective five years after the date of passage.


RELATED WEBINAR: Ensuring FDA Compliance for Your Digital Health Solution


Debate

While the VALID Act does have bi-partisan support in Congress, debate is ongoing within the IVD industry, and experts in professional societies and impacted companies continue to submit feedback and suggest improvements to Congress.

Opinions range from support for the current version of the VALID Act with zero to few modifications to creating more CLIA-centric options for regulations under a 2015 modernization proposal. In addition, there is concern among some interested parties that the VALID Act is adding more complexity than necessary when the act would complement existing CLIA guidelines.

As debate continues, both sides offer opinions about the pros and cons of the act:

Pros:

  • Closes a “gap” in regulatory oversight that allowed companies such as Theranos and Laboratory Corporation of America to market LDTs that never worked or were prone to error. The VALID Act would ensure consistency in quality and performance standards.
  • Updates and modernizes oversight of LDTs that was originally established in 1976 when such tests were much simpler and usually designed for small patient populations in a localized environment.
  • Reorients focus to test effectiveness and accuracy and patient safety rather than where it is developed and used. Currently, even high-risk tests need not undergo external review, and there is no requirement to report adverse events from inaccurate results.
  • Establishes a mechanism to track LDTs — how many there are, what they are used for, their level of complexity, etc.
  • Includes provision to bring tests to market quickly under Emergency Use Authorization (EUA).
  • Grandfathers almost all existing LDTs, and risk-based classification would allow some LDTs to be exempted from pre-market review process.
  • Introduces a regulatory innovation—“technology certification”—that allows IVD developers to submit one representative test for FDA approval rather than each new assay or indication. This innovation would only apply to lower-risk LDTs.

Cons:

  • Potentially places greater burden on FDA in the case of a high volume of EUA requests, as evidenced by a rush of requests to approve COVID-19 tests during the early days of the pandemic.
  • Requires registration and listing of all tests, even if they aren’t subject to pre-market review.
  • Imposes additional costs and regulations that could impede innovation and slow or halt life-saving research conducted in academic, clinical, and hospital labs.
  • Remains unclear about what LDTs will be grandfathered and what constitutes a “significant modification” to an existing LDT that would require it to undergo premarket review.
  • Duplicates existing regulatory oversight already being conducted by CMS under CLIA; laboratories in hospital settings already adhere to multiple accreditation and certification requirements.
  • Could create an incentive not to update or improve existing LDTs because of the burden of regulatory review.
  • Places burden of additional user fees on labs that already bear costs of registration and accreditation under CLIA and incur costs of onsite inspections and proficiency testing. These additional fees could drive some labs out of business, thereby stifling innovation and slowing patient access to care.

Despite the ongoing debate, most of the parties involved do agree that some modernization and reform is overdue in the IVD industry. Current rules were written decades ago and do not allow for modern technology, and the debate over regulation of LDTs has been going on for some time. In addition, recent events such as the Theranos fraud revelations, the COVID-19 pandemic, and concerns about pre-natal misdiagnoses due to ineffective testing have highlighted the concerns around testing accuracy and government efficiency. If reforms could help ensure better, more accurate tests and a more streamlined process for approval without higher burdens on laboratories, many industry concerns could be alleviated.


To Learn More, Read the Entire Whitepaper: Your Guide to the Verifying Accurate Leading-edge IVCT Development (VALID) Act


digital engineering

Digital Engineering Between Government and Contractors

How does a digital thread work when tool ecosystems are disconnected from each other? For the defense contracting world, THIS is the elephant in the room. The DoD (Department of Defense) wants its Digital Engineering (DE) vision to become a reality, and it realizes this cannot happen overnight. So how is this being addressed today?

The short answer is that the DoD wants its contractors to set up their own DE ecosystems and then exchange deliverables. This is no different from what they have done for the past 50 years, but the types of deliverables are changing. Namely, the DoD wants DE content to be delivered as models: SysML, UAF (an enterprise architecture format), and Computer-Aided Design (CAD) models (the latter if the government purchased the technical data for them during contracting). The DoD still requires deliverables in Word/PDF/Excel/MS Project for various project management deliverables, as well as for engineering analysis that is only representable in textual form.

“Within the new Digital Engineering Ecosystem, there are three possible scenarios for information access between customer and contractor. The scenarios are: 1. Provide access to controlled baselines in the contractor environment. This is considered a low IP (Intellectual Property) and data rights loss risk option. 2. Provide access to controlled baselines in the contractor environment and provide selected models and data in accordance with contract data requirements. This is considered a medium IP and data rights loss risk option. 3. Provide all required baseline models and data in accordance with contract data requirements. This is considered a medium-high IP and data rights loss risk option.” – (AIA Aerospace whitepaper)

Nightly Check-in Technique:

PLM tools are deemed the center of the ecosystem since these already have the digital twin information (hardware CAD, configuration management, and product management backbone) in place. Nightly check-ins of files (SysML model, CAD, requirements, Ansys, etc.) are thought to be timely enough to keep things in sync. With this model, both the government and contractors would utilize a shared or their own PLM systems.
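The nightly check-in idea can be illustrated with a rough sketch. Everything below is hypothetical: the file layout, the hash-marker convention, and the vault directory are assumptions for illustration, not features of any particular PLM product. The job copies changed artifact exports into a vault-style directory and skips files whose content is unchanged since the previous run:

```python
import hashlib
from pathlib import Path

def nightly_check_in(source_dir: Path, vault_dir: Path) -> list[str]:
    """Copy changed artifact files (e.g., SysML, CAD, requirements exports)
    into a PLM-style vault, skipping files whose content hash is unchanged."""
    vault_dir.mkdir(parents=True, exist_ok=True)
    checked_in = []
    for artifact in sorted(source_dir.glob("*")):
        if not artifact.is_file():
            continue
        digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
        target = vault_dir / artifact.name
        marker = vault_dir / (artifact.name + ".sha256")
        # Only check in if the content hash differs from the last run.
        if not marker.exists() or marker.read_text() != digest:
            target.write_bytes(artifact.read_bytes())
            marker.write_text(digest)
            checked_in.append(artifact.name)
    return checked_in
```

A real implementation would check files into the PLM system’s API rather than a directory, but the core trade-off is the same: syncing is only as fresh as the last run, which is why nightly cadence is considered “timely enough” rather than real-time.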

SAIC ReadyOne is a ready-made DE ecosystem of tools that can be purchased and installed in one click. It is managed on AWS GovCloud and is available up to CUI level four today. SAIC created ReadyOne to answer the call for an easy-to-deploy, cost-effective, and powerful integrated environment that delivers Digital Transformation. ReadyOne comprises two main components:

  • The infrastructure which includes the servers, application stacks, and build automation
  • And the Digital thread platform with a custom data model – which you can think of as the plumbing that defines the available connections between the various data artifacts – like system models, parts, and simulations – within the ecosystem

ReadyOne is built on a model-based, low-code platform supporting openness, flexibility, and customization. OOTB (out-of-the-box) applications allow domain-specific content to be mapped to items and artifacts, as well as customized, ensuring uniqueness of process instead of forcing synthetic commonality.

Developers use familiar domain specific applications for System Modeling, CAD, and simulation to author content. Domain tools utilize pre-configured connectors with business rules to ensure all data is connected and the single source of truth remains persistent. Enterprise configuration management is a foundational component ensuring each domain is utilizing the proper data and relationships to remove opportunity for errors.

Digital Thread

SAIC Proprietary

INCOSE Digital Engineering Information Exchange Working Group (DEIX-WG)

Group Goal = “The DEIX WG primary goal is to establish a finite set of digital artifacts that acquiring organizations and their global supply chains should use to request an exchange with each other as well as internally between teams/organizational elements.”

Product Descriptions:

  • DEIXPedia: Micropedia of digital engineering topics to explain relevant DEIX topics. STATUS = In place and updating
  • Primer: A Narrative that describes the concepts and interrelationships between digital artifacts, enabling systems and exchange transactions. STATUS = DRAFT
  • Digital Engineering Information Exchange Model (DEIXM): A prescriptive system model for exchanging digital artifacts in an engineering ecosystem. STATUS = DRAFT
  • DEIX Standards Framework (DEIX-SF): A framework for official standards related to MBSE (Model Based Systems Engineering) Information Exchanges. STATUS = DRAFT

Digital Thread Chart


RELATED: Traceability Score™ – An Empirical Way to Reduce the Risk of Late Requirements


Defense Systems Integrator – Digital Engineering Use Case and Lessons Learned

At the NDIA 25th Annual Systems and Mission Engineering Conference one large defense contractor presented their working tool ecosystem and explained their use case and lessons learned.

Use Case:

  • Provide system stakeholders with visibility into the system
  • For Example:
    • Determine impact of a Change to the System (e.g., requirement, model, part.)
    • Determine impact of Simulation to the System (e.g., validate or invalidate a requirement.)
  • To do this, I will need a digital engineering ecosystem that:
    • Enables the integration of repositories, i.e., requirements database, SysML models, PLM system.
    • Provides a framework for creating digital threads across data repositories.
    • Provides a mechanism for querying / visualizing digital threads.
    • Provides a way to compare/sync data repositories.
    • Can perform model/data transformations, e.g., DNG requirement → SysML requirement.
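The last requirement above, a DNG requirement → SysML requirement transformation, can be sketched generically. The field names (`identifier`, `primaryText`, `title`) are hypothetical stand-ins for whatever the source repository actually exposes; they are not real DNG or Syndeia API names:

```python
def dng_to_sysml_requirement(dng_req: dict) -> dict:
    """Map a DNG-style requirement record onto a SysML Requirement element.
    In SysML, a Requirement carries an 'id' and 'text' property, so the
    transformation is mostly a renaming of fields between the two schemas."""
    return {
        "elementType": "Requirement",
        "id": dng_req["identifier"],
        "text": dng_req["primaryText"],
        # Fall back to the identifier when the source record has no title.
        "name": dng_req.get("title", dng_req["identifier"]),
    }
```

The hard part in practice is not the field mapping but keeping the two repositories correlated as both sides change, which is exactly what tools like Syndeia exist to manage.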

Dassault Cameo Systems Modeler is being used as the main user interface to coordinate with other models in other engineering domains, e.g., Creo, DNG etc.

Intercax’s Syndeia Cloud and its Cameo plugin are good at connecting artifacts across different engineering repositories (DNG, Teamcenter, etc.). This group felt that Syndeia was the easiest to use and offered the most connections to their various tools (Cameo, Teamwork Cloud, DNG, Teamcenter, Creo, Jira, GitHub, MySQL, Volta, Matlab/Simulink, Excel). They felt that in most cases Reference Connections were ideal, and that reporting and analysis of the digital thread in the Syndeia Cloud webapp was more robust than any other tool in the marketplace. They felt that Syndeia Cloud’s export of digital thread relationships into Excel to give to government customers was satisfactory.

Their Lessons Learned:

  • Define the Process for creating “reference connections”:
    • Who creates and manages the links.
    • Ensure the directionality of links is consistent. Note: the SysML element should always be identified as the source of the link.
  • Identify what types of links (digital threads) you want to create, for example:
    • Create reference connections from DNG functional requirement(s) to its SysML <<functional>> block to show a <<satisfy>> relationship.
    • Create reference connections from a Teamcenter part/item/assembly to a <<physical>> SysML block to show a <<satisfy>> relationship.
  • Establish operational and QA environments for Syndeia:
    • For testing out new patches and upgrades.
    • QA environment for training/experimentation.
  • Use caution when using “Local Repositories” (because they are local!)
  • Configuration manage your data repositories:
    • Teamwork Cloud for Cameo.
    • Global Configuration Management (GCM) for DNG.

RELATED: Write Better Requirements with Jama Connect Advisor™


Model-Based Acquisition (MBAcq)

An increasing number of RFPs are not only requiring MBSE, but RFPs themselves are now starting to be created as SysML models, and responses are expected to be returned as SysML model files. Yes, there are SysML tool vendors (PTC, Spec Innovations) and even contractors (L3Harris) asking the DoD to drop language that ties the file format to Cameo. These vendors are trying to ensure they can export and import tool-agnostic SysML that is interoperable across their tools.

The challenge to both the supplier and provider is the lack of standardization in the approach, resulting in a learning curve for every proposal as well as every response. To address this concern, the OMG UAF MBAcq Working Group was formed to survey the current landscape, with participation from government, industry, FFRDCs, tool vendors, NDIA, and the OMG standards organization. The goal is to come up with process guidance as well as SE (Systems Engineering) and architecture guidance.

Planned Deliverables:

  • ARM template and guidance (how to specify model-based DIDs, CDRLs.)
  • GRM template and guidance (includes guidance for how NOT to over-specify a system.)
  • Sample model (as part of UAF sample model.)
  • UAF Process Guide for Acquisition will:
    • Define the CONOPS for how a program office will use all the models they will receive over the lifetime of a system.
    • Demonstrate how to make models available for reuse for other/new systems.
    • Provide portfolio management for the models/programs.
    • Provide process and guidance that describes how to integrate MBSE approaches into pre-acquisition (before request for proposal release), request for proposal, contract award, and contract execution steps.
  • Impact to existing policy with recommendations for change.
  • Descriptions of what Sections K, L, and M could look like for model delivery.
  • Taxonomies with precise definitions for concepts and terms.
MBSE Digital Thread Chart

NDIA 2021 Systems & Mission Engineering Conference

Digital Engineering Tool Suites

According to SBE Vision, digital engineering tools are a mixed bag of silos:

  • Not all tools lend themselves to remote linking of data at rest.
  • Some tools don’t have a web server.
  • Many detail design tools require “local” model data as a basis for initial processing.
  • Sometimes a transformative capability of some kind is needed.
  • Certain use cases require a mashup of three or more systems.

Two Worlds Apart

SBE Vision is also developing techniques for both OSLC-linked and synced data approaches. The choice between linking and syncing depends on the company, the product, and the use case, and the right choice can change over time.

OSLC – Linked Data:
  • Remote linking of data
  • Peer-to-peer
  • Human oriented
  • Benefits:
    • Data stays at rest
    • Clean paradigm
    • Standards
  • Challenges:
    • Semantics (relational)
    • Configuration management
    • Change management
    • Consistency of standards implementation
Synchronization – Synced Data:
  • Creating (temporary) copies of data
  • Hub-and-spoke
  • Human & machine Oriented
  • Benefits:
    • Enables many use cases
    • Simple, easy to use
    • Rich transformation when needed
  • Challenges:
    • Semantics (relational & transformative)
    • Configuration management
    • Change management
    • Managing correlation/sync state

Conclusion

Jama Connect enables real-time team collaboration through traceability and digital threads. To learn more about achieving Live Traceability™ on your projects, please reach out for a consultation.



Jama Software is always looking for news that would benefit and inform our industry partners. As such, we’ve curated a series of customer and industry spotlight articles that we found insightful. In this blog post, we share an article, sourced from EURACTIV, titled “The AI Act’s Fine Line On Critical Infrastructure” – originally authored by Luca Bertuzzi on February 8, 2023.


The AI Act’s Fine Line On Critical Infrastructure

As EU policymakers make progress in defining an upcoming rulebook for Artificial Intelligence, the question of to what extent AI models employed to manage critical infrastructure should be covered by tight requirements still remains open.

The AI Act is reaching a critical stage in the legislative process, with the European Parliament set to reach a common position in the coming weeks. The legislative proposal is the world’s first attempt to put in place a comprehensive set of rules for Artificial Intelligence based on its potential risks.

A critical aspect of the draft law is the category of AI models that can cause significant harm, which must comply with stricter obligations regarding quality and risk management. However, concerning critical infrastructure, how to assess risk remains a matter of debate.

AI in critical infrastructure

Artificial Intelligence is increasingly employed in managing critical infrastructure, notably for project development, maintenance and performance optimization.

An example on the construction side is Sweco Netherlands, an engineering consultancy tasked with extending Bybanen, the light-rail system of Bergen, Norway’s second-largest city, while accounting for the existing tram lines, adjacent roads, cycle lanes, pedestrian zones, and surrounding public areas.

To put together these different factors, Sweco NL used a digital twin model to visualise its project and understand how design changes would impact the timeline, costs and surroundings. The company estimates it reduced construction errors by 25% as a result.

Another area of application for this technology is dams. In 2017 HDR, a US construction company, applied machine learning to a dam’s digital twin model to simulate how the infrastructure would be affected by changes like natural shifting and erosion of the surrounding soil over time.

The model allowed dam operators to detect anomalies like cracks with an accuracy of two centimeters, differentiating them from harmless algae growth, and taking corrective measures before they grew into more significant problems.


RELATED: 2023 Predictions for Industrial and Consumer Electronics Product Development


Regulatory approach

The original AI Act proposal noted that “it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity.”

In the EU Council of Ministers, member states clarified that the concept of the safety component should be distinguished from the management system itself. In other words, in a dam, the mechanism to open the valves is the management system, whilst the technology that monitors the water pressure is a safety component.

In the European Parliament, the MEPs spearheading the work on the AI Act proposed to differentiate the management of traffic on roads, rail and air, from supply networks like water, gas, heating, energy and electricity, in compromise amendments obtained by EURACTIV.

While the Council included digital infrastructure like cloud services and data centres in the list of high-risk use cases, since the intent is to prevent “appreciable disruptions in the ordinary conduct of social and economic activities”, EU lawmakers have so far not done so.

The addition to the high-risk list caused significant anxiety in the telecom industry, which uses AI to manage network capacity, plan upgrades, detect frauds and improve energy efficiency. The question is whether the malfunction of any of these algorithms might bring the whole system down.


RELATED: [Webinar Recap] Managing Functional Safety in Development Efforts for Robotics Development


Where to draw a line

For example, if a telecom operator miscalculates traffic peaks in different areas of its network, would that lead to internet outages? A representative of telecom operators told EURACTIV they are not aware of any situation where that occurred, branding the issue as ‘highly hypothetical’.

More generally, critical infrastructure operators are concerned that, by casting the high-risk category of the AI regulation too wide, they might be precluded from useful tools that contribute to making their systems more efficient and secure.

A case in point is that member states excluded AI-powered cybersecurity tools from the definition of safety component.

Anti-virus malware analysis is based on predictive models and machine learning, meaning that, without this exclusion, critical infrastructure service providers would have been precluded from using virtually all commercially available anti-virus software.

At the same time, AI-powered management systems are not without risks. Kris Shrishak, a technologist at the Irish Council for Civil Liberties, pointed to the case of India in 2012, when a miscalculation of the electric grid’s peak traffic led to what was perhaps the largest blackout in history.

The argument for a more granular approach in the high-risk categorisation, therefore, turns on whether the AI solutions make the infrastructure safer and whether their failure entails an imminent threat.

Physical maintenance, for instance, is often costly and time-consuming, which might lead to infrastructures falling into disrepair. Not employing AI’s capacity to identify patterns and spot anomalies before they develop into bigger problems can also come at a cost.

Last year, amid the Russian-prompted energy crisis, France, usually Europe’s largest energy exporter, became a net importer as a record number of its nuclear reactors were put out of service due to maintenance stoppages.

[Edited by Nathalie Weatherald]



requirements-driven testing

Jama Connect® vs. IBM® DOORS®: Requirements-Driven Testing: A User Experience Roundtable Chat

Increasing industry challenges and complexities are pushing innovative organizations to consider modernizing the tool(s) they use for requirements management (RM). In this blog series, Jama Connect® vs. IBM® DOORS®: A User Experience Roundtable Chat, we’ll present several information-packed video blogs covering the challenges that teams face in their project management process.

In Episode 9 of our Roundtable Chat series, Mario Maldari – Director of Solutions Architecture at Jama Software® – and Susan Manupelli – Senior Solutions Architect at Jama Software® – discuss requirements validation, verification, and testing, in addition to demonstrating test management in Jama Connect.

To watch other episodes in this series, click HERE.

Watch the full video and find the video transcript below to learn more!


VIDEO TRANSCRIPT:

Mario Maldari: Hello, welcome to the ninth edition of our vlog series. Today, we’re going to be talking about something that’s very important in requirements management, something that I’m particularly passionate about, and that’s requirements validation, verification, and testing. And I’m joined by my friend and colleague once again, Susan Manupelli. Susan and I have worked together for a long time, 15-plus years, testing various requirements management tools using various techniques and various software. I believe the most recent software you were using was IBM’s enterprise test management tool, something we used to call RQM. Looking back on all those years and all those tools, what do you feel has been your biggest challenge?

Susan Manupelli: So, talking about the ELM suite, where we were working with Rational Quality Manager and also using it to test DNG. Really, the biggest challenge is that they were two separate tools. So even though they were part of the same tool set, the UIs were completely different. They were very inconsistent in how you would use them. The review and approval aspect of RQM wasn’t that great, and again, it was completely different from the review and approval you would get when working with DNG. And because they were two separate tools, really getting the traceability was a challenge; you’d have to do reports outside of the individual tools. And then one of the biggest things was the comparison. Things changed in RQM, and it was not easy to find out what changed, even if you compared one test case to another.

Mario Maldari: Yeah, I recall some of those challenges. I think for me, the biggest challenge was the UI inconsistencies, like you mentioned. I’d be in one tool, then go to another, and it was a completely different experience, completely different nomenclature. And then having to integrate between the tools, and frankly just having to go to a separate tool to do the testing, was problematic and challenging at times. So I think you hit on an important topic in terms of having everything in one tool, and I’d like to show you how Jama does that. So in Jama, the fact that testing is integrated into the tool allows you to do some pretty neat things. As you can see here on my desktop, we have this dashboard, and I can define a relationship rule diagram in Jama specifying that I want specific requirements to have validation points and test cases associated with them.

And what that gives me is I can create dashboard views for requirements lacking test coverage, or I can even look at test case summaries. Right on the dashboard, I can look at test case progress and the priority of my tests. Jama even allows you to log defects while you’re testing, so I can track my defects here. And for you and I, we always had to provide test case reports and summaries up through management and the development team, so this allows you to have it all in one spot, which is really nice. The testing itself in Jama, you basically enter on the test plan tab. Very similar to the way you and I worked, we have a concept of a test plan where you can define your test intent, the things you’re going to be testing, your approach, your schedule, your team, your entry criteria, and your exit criteria.

And from there, as you pointed out, you can send this for review and get an official sign-off from your development team or whomever needs to sign off on your test plan. Once that’s in place, you can go to your test cases and start to group your tests according to functionality, or whatever makes sense for the organization of your suites of tests. And once they’re grouped, you can come to the test runs, and this is where you actually execute your tests. So I can click on one of these, start an execution, and go through each step, passing or failing as I go. And the nice thing about Jama, as I mentioned, is that you can actually log a defect in real time, so I can go ahead and log this defect.

And now when I save this defect, it’s associated with this test execution run, which is associated with my test case, which is associated with multiple levels of requirements upstream. So if I look at a traceability view, I will see my high-level requirements traced all the way down to the defects. When I have logged a defect, I can go in, take a look at the test run, and see the defects. And if I have an integration to another product, like Jira, for example, maybe my development team is working in Jira and they love Jira, it automatically populates the defect in the defect tool. So a developer can come in, make some changes, put in some comments, change the priority or the status, and all of that gets reflected back in Jama.


RELATED: Traceability Score™ – An Empirical Way to Reduce the Risk of Late Requirements


Mario Maldari: So it’s a really nice integration if you’re using something like Jira. From my perspective, what would’ve been nice in my past testing background is this concept of a suspect trigger. If I look at the relationships for this particular requirement and I see that downstream there’s a test case connected by a “validated by” link type, I can see that it’s flagged as suspect. That means something upstream has changed and my downstream test case is now suspect. And what does that mean? Maybe I need to change it, maybe I don’t. How do I know? I can come to the versions and say, “Well, the last time I tested this requirement was in our release candidate one; what’s different now?” So I can compare version three to version seven, run our compare tool, and see exactly what changed.

So as a tester, this is great. To me, it’s not enough to know that something’s changed; I can actually see exactly what changed. Maybe it’s just a spelling update and I don’t really need to change my test, or maybe it’s something more substantial, like you see here. At that point, I can come in, make the change to my test, and go ahead and clear the suspect flag.

So it’s a really nice level of granular control. What’s also nice with Jama is that we have these, and you’ll like this, Sue, out-of-the-box canned reports with summaries of your tests: how many executions blocked, how many failed, how many passed. These are canned reports that come with Jama, and if you need customized reporting for the specific needs of your organization, we have that available as well. So, back to your point about having everything in one tool, this is it, and this is the benefit. Now, I know you’ve been at Jama for just about six months now. I’d love to hear your impression of the built-in test management, what your thoughts are there?


RELATED: Telesat Evolves Engineering Requirements Management & Product Development


Susan Manupelli: Oh, sure. Yeah, I definitely love how everything’s in one tool, and the ease with which you can trace and actually verify the testing of your requirements. You can go from requirements straight down through multiple levels of decomposition to your test cases, so you can answer the question: are your requirements passing? Which is great. And also the ability to display related queries right on the dashboard, I think that’s a huge plus. Another is the consistency of the UI: creating a test case isn’t any different from creating any other requirement.
So it’s a very familiar UI for both operations, which I think is important. The review and approval piece is a really nice strong point for Jama, and being able to apply that to reviews of test cases is really great. And I just think it’s a really streamlined UI; it has everything you need and nothing that you don’t. So I just think it’s a great tool. And then there’s one other aspect that I really like: the impact analysis. You mentioned being able to trace when something’s changed after the fact. It’s also being able to say, “Hey, we’re looking at making a change here.” There’s one button in Jama: you click that impact analysis and it tells you all of the test cases you might need to revisit if you make that change.

Mario Maldari: I call that the proactive method.

Susan Manupelli: Yes.

Mario Maldari: Yeah, the impact analysis is extremely important. And if you were a developer in an organization and you changed a requirement or you were about to change a requirement and you knew you had 30 tests that are associated with that, you could run the impact analysis. See all of those, and you could proactively warn your test team, “Hey guys, I’m about to make this change. Here it is. I’ll explain it to you. We can have a separate review and approval.”

So it really contains and controls all of that for you. I’ve often said to people, it’s one thing to have your requirements in a tool, and that’s the first step: define your requirements, have your traceability. But if you’re not testing and validating those requirements, then how do you know that you built the right thing? So testing to requirements is an extremely important aspect of any requirements gathering process, and I’m glad we could talk about it today. Sue, glad I could have you here to talk about it. And I’d like to thank everyone for their time, thanks for participating in the vlog series, and we’ll see you on the next one.
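The suspect-flag and impact-analysis behavior discussed in this episode comes down to a simple traceability-graph idea: when an upstream item changes, everything linked downstream of it is flagged for review until a reviewer clears the flag, and walking the same links before a change answers "what would this affect?" The sketch below illustrates that idea generically in Python. It is not Jama Connect’s implementation, and all class and method names here are hypothetical, invented purely for illustration.

```python
# Minimal sketch of suspect-link propagation in a traceability graph.
# Illustrative model only -- not Jama Connect's implementation; all
# names are hypothetical.

class TraceItem:
    def __init__(self, name):
        self.name = name
        self.version = 1
        self.downstream = []   # items that trace from this one
        self.suspect = False   # set when an upstream item changes

    def link(self, item):
        """Add a downstream relationship (e.g. requirement -> test case)."""
        self.downstream.append(item)

    def change(self):
        """Revise this item and mark all downstream items suspect."""
        self.version += 1
        self._flag_downstream()

    def _flag_downstream(self):
        for item in self.downstream:
            if not item.suspect:       # guard against revisiting items
                item.suspect = True
                item._flag_downstream()

    def clear_suspect(self):
        """A reviewer confirms the item is still valid and clears the flag."""
        self.suspect = False


# Usage: a high-level requirement validated by a test case.
req = TraceItem("High-level requirement")
test = TraceItem("Test case")
req.link(test)

req.change()            # upstream edit flags the downstream test
print(test.suspect)     # True: the test needs review
test.clear_suspect()    # reviewer compares versions and clears the flag
print(test.suspect)     # False
```

The same downstream walk supports the proactive question Mario and Susan raise: before committing a change, listing an item’s reachable downstream links enumerates every test case that would become suspect.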


Is your data working for you? A consistent and scalable data model is instrumental for achieving Live Traceability™ and making data readily available across the development lifecycle.

Download our Jama Software® Data Model Diagnostic to learn more!


Thank you for watching Episode 9, Jama Connect vs. IBM DOORS: Requirements-Driven Testing. To watch other episodes in this series, click HERE.

To learn more about available features in Jama Connect, visit: Empower Your Team and Improve Your Requirements Management Process