
This blog previews our recent webinar. To watch the entire presentation, visit: “From Requirements to Test Coverage Using AI to Strengthen Traceability and Validation.”
From Requirements to Test Coverage – Using AI to Strengthen Traceability and Validation
Engineering teams frequently identify verification and validation gaps late in the development process — often during testing, audits, or certification reviews — when addressing them becomes both costly and disruptive. These challenges typically stem from earlier stages, where unclear or inconsistent requirements hinder the creation of effective validation scenarios and undermine the ability to maintain robust traceability between requirements and tests.
In this webinar, Katie Huckett, Product Line Manager, Jama Connect Advisor/AI, explores a practical workflow for improving requirements clarity and transforming well-structured requirements into meaningful test coverage using AI-assisted analysis and generation techniques.
Key Takeaways:
- How to leverage AI for effective and meaningful Test Case Generation
- Why validation failures often originate in poorly structured or ambiguous requirements
- Practical techniques for improving requirements clarity and consistency
- How AI-assisted analysis helps refine requirements before downstream work begins
- How relationship discovery exposes hidden traceability gaps between requirements and tests
WEBINAR PREVIEW – WATCH ENTIRE PRESENTATION HERE
TRANSCRIPT BELOW
Katie Huckett: Welcome everyone, and thank you for joining today’s webinar, From Requirements to Test Coverage: Using AI to Strengthen Traceability and Validation. Today, we’re going to look at a very practical engineering workflow and how improving the clarity of requirements can directly improve validation, coverage, and traceability. Many engineering teams experience problems late in the lifecycle during testing or audits. What we’ll show today is that those issues often originate much earlier in the requirements themselves. I’m Katie Huckett, product line manager for Jama Connect Advisor™. I lead the overall vision and strategy for AI at Jama Software.
When validation failures appear late in the development lifecycle, teams often assume the testing phase is the problem, but in many cases, the root cause started much earlier in the requirements. If requirements are ambiguous or incomplete, engineers may create tests that don’t fully validate the intended behavior. That gap may not become visible until system testing or certification activities, when fixing the problem becomes much more expensive. Before we go any further, we’re curious about your experience. Where do validation gaps typically surface in your organization? Go ahead and select the answer that best reflects your experience.
Many teams find that these gaps surface during system testing or audits, which means the issue has been present for a long time before it becomes visible. Now, why does this happen? There are several reasons these validation gaps occur. One common issue is ambiguous language. For example, if a requirement states that the system should load a page quickly, what does that actually mean? Different engineers may interpret that requirement differently, which leads to inconsistent validation. Another issue is an incomplete requirement structure, where key details needed for validation are missing. Finally, relationships between requirements and tests are not always maintained consistently as systems evolve.
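The third gap, traceability links drifting as systems evolve, can be made concrete with a toy example. The sketch below uses made-up requirement and test IDs and plain Python sets; it is not Jama Connect's data model, just an illustration of how relationship discovery can surface requirements with no linked tests and test links that point at requirements which no longer exist:

```python
# Hypothetical data: requirement IDs, and test cases mapped to the
# requirement IDs they claim to verify. (Illustrative only; not
# Jama Connect's actual data model.)
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
tests = {
    "TC-1": {"REQ-1"},
    "TC-2": {"REQ-1", "REQ-3"},
    "TC-3": {"REQ-5"},  # links to a requirement that no longer exists
}

# Union of everything the tests claim to cover.
covered = set().union(*tests.values())

untested = requirements - covered  # requirements with no linked test
dangling = covered - requirements  # test links to missing requirements

print("untested:", sorted(untested))  # ['REQ-2', 'REQ-4']
print("dangling:", sorted(dangling))  # ['REQ-5']
```

Even at this scale, the gaps are invisible until you compute them; in a real system with thousands of items, they stay hidden until testing or an audit forces the question.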
RELATED: Enhance Test Quality and Coverage with AI-native Test Case Generation in Jama Connect Advisor™
Huckett: This is the example requirement we’ll use throughout today to illustrate this workflow for a hearing aid surgical installation change. “The system should try to ensure that the doctor performs the surgery in three or more sequential stages if needed and completes it before they are done.” At first glance, this seems okay, but it leaves several open questions. What do “it” and “they” refer to? How many stages are actually required? How should we validate that the behavior meets the intended design? These kinds of questions are common when requirements are written quickly or evolve over time.
The first step in strengthening validation coverage is improving requirement quality. AI-assisted quality analysis can evaluate requirements and identify issues such as ambiguous language or structural inconsistencies. This helps engineers catch potential problems earlier, before downstream work begins. Now, let’s look at how this works in practice. So let me start here by going into my system requirements, and you can see my surgical installation change requirements right here at the top. All right, let’s go ahead and run an analysis on this requirement. All right, let’s take a look at what the analysis found. And as a reminder, this isn’t about making requirements perfect; it’s about identifying issues that can cause confusion downstream in design, testing, and validation.
RELATED: Jama Connect Advisor™ Datasheet
Huckett: So right away, you can see we’re flagging multiple issues across INCOSE and EARS guidelines. That’s usually the first signal; if a requirement triggers this many rules, it’s likely going to cause problems later in the lifecycle. One of the first things flagged is the use of “should” and phrases like “try to ensure.” That’s a problem because it weakens the requirement. It’s no longer clear whether this is mandatory or optional. And if it’s not clear, it won’t be clear when someone tries to validate it later. We’re also seeing logical conditions like “or” and “if needed.” These introduce ambiguity because they allow multiple interpretations of what the system is actually expected to do. That’s where teams start making assumptions, and different assumptions lead to inconsistent implementation and testing.
And then again, we have pronouns like “it” or “they.” This might seem minor, but in complex systems, unclear references can make it difficult to trace exactly what behavior is being described. That becomes especially important when you’re trying to maintain traceability across requirements and tests. So individually, these issues might seem small, but together they make the requirement harder to interpret, harder to validate, and harder to trace. And that’s where we start to see gaps showing up later in testing. So instead of manually rewriting this from scratch, we can use our Requirement Refinement feature to generate a cleaner version of this requirement. So let’s go ahead and kick off that process. Okay, here’s the suggested rewrite. We have a couple of options here. “The system shall facilitate the surgery to be conducted in a minimum of three sequential stages,” and “The system shall ensure the surgery is completed within the designated timeframe.”
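The class of checks described here can be pictured as pattern rules over requirement text. Below is a minimal, hypothetical sketch: it is not how Jama Connect Advisor™ works internally, just an illustration of flagging weak modals, escape clauses, vague conditions, and unclear pronouns in the style of INCOSE- and EARS-based guidelines:

```python
import re

# Illustrative rule set (made up for this example, not Advisor's rules):
# each rule pairs a regex with the guideline concern it represents.
RULES = [
    (r"\bshould\b", "weak modal: use 'shall' for mandatory behavior"),
    (r"\btry to\b|\battempt to\b", "escape clause: weakens the obligation"),
    (r"\bif needed\b|\bas appropriate\b", "vague condition: allows multiple interpretations"),
    (r"\bit\b|\bthey\b|\bthem\b", "unclear pronoun reference: hard to trace"),
]

def analyze(requirement: str) -> list[str]:
    """Return the list of guideline concerns triggered by one requirement."""
    findings = []
    for pattern, concern in RULES:
        if re.search(pattern, requirement, flags=re.IGNORECASE):
            findings.append(concern)
    return findings

req = ("The system should try to ensure that the doctor performs the surgery "
       "in three or more sequential stages if needed and completes it before "
       "they are done.")
for finding in analyze(req):
    print("-", finding)  # prints all four concerns for this requirement
```

A production analyzer works on parsed language rather than regexes, but the shape of the output is the same: each finding points at a specific phrase and the downstream problem it causes.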
THIS HAS BEEN A PREVIEW – TO WATCH THE ENTIRE WEBINAR, VISIT:
From Requirements to Test Coverage – Using AI to Strengthen Traceability and Validation
- [Webinar Recap] From Requirements to Test Coverage – Using AI to Strengthen Traceability and Validation - April 30, 2026