Jama Connect Features in Five: Surgical Robotics Framework
In this Features in Five session, Máté Hársing, Senior Solutions Architect, explores how Jama Connect’s Surgical Robotics Framework empowers teams to manage the complexity of developing cutting-edge surgical robotic systems while maintaining compliance and accelerating innovation.
Centralized platform for managing user needs, system and subsystem requirements, risks, and verification with seamless navigation and visibility.
Controlled reuse capabilities to manage libraries, variants, and generations, ensuring efficiency without duplication.
Release and generation management tools to baseline requirements for regulatory submissions while enabling innovation for future product generations.
With Jama Connect, surgical robotics teams can streamline development, reduce errors, and bring innovative products to market faster—all while staying audit-ready and compliant.
Hello, I’m Máté Hársing, a Senior Solutions Architect at Jama Software. Surgical Robotic systems are among the most complex medical devices today, combining hardware, software, AI, and strict regulatory demands across multiple generations. In this Features in Five video, I’ll show how Jama Software’s surgical robotics framework helps teams manage that complexity through structured views, variant and release management and end-to-end traceability without sacrificing compliance. Let’s start with the challenges surgical robotics teams face.
Understanding System Complexity
First, system complexity: multiple subsystems (robotic arms, vision systems, control software, and user interfaces), each developed by different teams but tightly coupled. Second, reuse at scale. Many organizations don’t build just one robot; they build platforms, derivatives, and next-generation systems.
Copying requirements or test cases manually quickly leads to confusion and risk. Different instruments, markets, and clinical use cases introduce variability that must be controlled, not duplicated. Third, release and generation management. Teams need to freeze baselines for regulatory submissions while continuing innovation for the next release or product generation.
Traditional documents and disconnected tools simply don’t scale to this level of complexity.
This is where Jama Connect and the surgical robotics framework come in. The framework provides a pre-structured data model aligned to robotic systems engineering, covering user needs, system and subsystem requirements, risks, validation, and verification. On top of that structure, Jama Connect enables controlled reuse in three powerful ways: libraries, to centrally manage shared and reusable assets; variants, to model differences without duplicating data; and release and generation management, using baselines and reuse patterns to support regulatory submissions and future innovation. Together, these capabilities allow teams to scale development without losing control, traceability, or compliance.
Overview of Product Management Scale
Before we look at any libraries, variants, or generations, it’s important to understand the scale of the product we’re managing.
This demo dataset, based on a surgical robotic platform, consists of ten systems, including robotic arms, vision, control software, imaging, and safety, broken down into thirty subsystems.
All this lives in a single Jama Connect project organized neatly in the project explorer tree. The project structure mirrors the system architecture, making it easy to navigate from high-level user needs all the way down to detailed subsystem requirements, risks, and verification, without jumping between tools or documents. Despite the complexity, teams can quickly find what they need, understand ownership, and see how everything connects.
Utilizing Hazards Library for Reusability
Here, we use a Hazards Library project to manage reusable content.
These assets are created once and reused across multiple surgical robot programs and generations. When a library item changes, Jama Connect highlights the impact on locations where it’s being reused, allowing teams to review and selectively accept updates, maintaining control while avoiding duplication.
Many organizations manage parallel surgical robot variants that have the same core skeleton of requirements, tests, and risks, but differ in various specific aspects. Using reuse and synchronization, teams can share a common baseline while clearly seeing what’s different in each variant. Jama Connect highlights the delta, what’s been added, changed, or removed, so teams can focus only on what matters. And when needed, changes can be synced in either direction from the core platform to a variant or from a variant back to the platform, always with full visibility and control.
Release and Generation Management Process
Let’s look at release and generation management. For each regulatory submission or product release, we baseline the full set of requirements, risks, and verification. That baseline becomes our approved auditable snapshot. From there, we can reuse that baseline or duplicate and synchronize the entire project to start the next release or product generation, building on what’s already validated while clearly tracking what’s new or changed. This allows teams to move fast without losing control or compliance.
Conclusion and Call to Action
With Jama Software’s surgical robotics framework, teams can handle system complexity with structured end-to-end traceability, reuse safely through governed libraries and variant management, manage multiple releases and generations without chaos, and accelerate development while staying audit-ready and compliant. The result is faster innovation, fewer errors, and greater confidence in both your product and your process. That is a quick look at how Jama Connect helps surgical robotics teams manage complexity through smart views. To learn more about the surgical robotics framework or request a personalized demonstration for your team, visit jamasoftware.com or reach out to your Jama Software customer success manager or solution consultant. Thank you.
This blog recaps our webinar, “Best Practices for Test Management” – Watch it in its entirety HERE.
Transform Your Development Lifecycle with Modern Test Management
Building complex systems demands more than just functionality—it requires precision, compliance, and reliability. Verification and validation are the cornerstones of ensuring your product meets industry standards and exceeds expectations.
Traditional testing methods can’t keep up with the growing demand for faster delivery and uncompromised safety in complex system development. In this session, we’ll look at how adopting a modern test management approach can transform the way you develop and deliver complex systems.
Join Romer De Los Santos, Principal Solutions Manager at Jama Software, for a deep dive into optimizing your testing lifecycle. We will discuss the critical shift toward requirements-based testing and how connecting test status directly to requirements ensures complete traceability and streamlines development.
What you’ll learn:
Achieve end-to-end traceability by linking test results directly to requirements
Ensure compliance and eliminate gaps in your development process
Empower QA teams to validate requirements early, accelerating approvals
Foster seamless collaboration between engineering and quality assurance teams
Gain real-time visibility into test progress to proactively address roadblocks
Leverage data-driven insights to mitigate risks and enhance product quality
Don’t miss this opportunity to improve how you manage verification and validation.
THE VIDEO BELOW IS A PREVIEW – WATCH THE ENTIRE PRESENTATION HERE
TRANSCRIPT PREVIEW
Romer De Los Santos: Hello, everyone. I’m Romer De Los Santos, a principal solutions consultant here at Jama Software, specializing in software development and process improvement for the medical device and life sciences vertical. Before joining Jama Software, I spent over 20 years developing a myriad of medical devices, including insulin pumps, continuous glucose sensors, diabetes management software, solid-state cardiac SPECT cameras, genomic sequencers, and IVD genomic assays. Having served in the roles of software developer, test lead, systems engineer, technical product manager, core team lead, and even a short stint as an internal auditor, I have gained firsthand experience in the full development lifecycle and have an understanding of the perspectives of the different stakeholders involved in development. I’m pleased to be here today to present on test management using Jama Connect®.
Jama Connect is a highly configurable requirements management tool that includes robust test management capabilities. I’m happy to share some best practices on how to use those capabilities. This is not intended to be a step-by-step tutorial on how to perform testing using Jama Connect. Instead, I’ll be going over some testing concepts and best practices to help improve your experience with the tool. Then I’ll provide some information on what is possible and how you can extend Jama Connect’s capabilities. First, let’s start with a discussion about the structures around testing in Jama Connect, and how understanding those structures will help you manage your testing effort.
The scope of testing is defined by a test plan, and test execution must be in the context of a test plan. Many users use one test plan per release. However, for more complex projects, it may make more sense to break up testing into one test plan per major component or one test plan per test team. Having a test plan per component allows you to leverage the testing of that component whenever the component is used. Having a test plan per test team allows individual test teams to manage their own testing effort independently, and is often used by very large organizations. Your testing strategy depends on your situation, and if you need advice, please contact your designated Jama Solutions consultant or your customer success manager.
Test plans contain groups of test cases. Jama Connect adds test cases to a cycle of testing by test group and status. The criteria you use for grouping test cases are up to you; however, it is best practice to organize test groups by functional group, which is defined as a feature or functionality that can be independently tested. This type of organization facilitates reuse. For example, say you swap out the imaging module of a genomic sequencer for an equivalent component. Instead of cherry-picking individual test cases, you can rerun the imaging module test group. Now, let’s talk about the structures around test execution.
De Los Santos: A group of test runs is known as a test cycle. Jama Connect will allow you to add to the test cycle by test group, test status, pass or fail, and will even give you the option of cherry-picking from the selected test groups. Test cycles can be run in series or in parallel. If you have a small team, you may choose to run one cycle at a time. If you have multiple test teams, it may be more efficient to have each test team have their own test cycles so that testing can be run in parallel. When running multiple test cycles in parallel, it is best practice to agree on a naming convention to minimize ambiguity when looking at a growing list of test cycles. Something like Alpha Team Cycle 1 identifies the team and the current cycle they’re on.
Each test case added to the test cycle will spawn a test run, which captures the execution of the test case. The test run is synchronized with the version of the test case at the time the test cycle was created. If there are any changes after that point, the test run will not automatically update until you choose to resynchronize them. However, doing so will wipe out any progress you’ve currently made on your test run. If you want to keep your progress and continue your work on your previous test case, then don’t sync. Jama Connect allows you to run different versions of the same test case, as long as they live in different test cycles.
Now, this is a good time to talk about the concept of parameterization. Parameterization is when a single test case is run multiple times to verify a specific set of parameters. It’s best practice to duplicate the test case for each parameter so that you have a separate test run per parameter. While this method does increase the total number of test cases in your test plan, it also ensures that each parameter is tested and captured in its own test run, thus eliminating ambiguity in your testing results.
Since Jama Connect is an item-based software solution, you can use item locks to manage your testing effort. If you lock a test plan, you prevent modifications to the test plan, the adding and removing of test cases and the organization of those test cases into test groups. However, testers are still able to create test cycles and execute test runs. They can also choose to synchronize runs to the latest versions of your test cases. In other words, when locking a test plan, you have control over what test cases are run and how they are organized. If you choose to lock a test cycle, you will ensure that testers execute the version of the test case at the time the cycle was created or last synchronized. Thus, locking the test cycle gives you control over the version of the test case to be executed. Finally, if you want to prevent a test case from being run, you should lock the associated test run. This effectively prevents any test execution.
De Los Santos: While Jama Connect is not designed as a dedicated test management tool, it can be configured to be compatible with most testing processes. Let’s go over some of the most useful configuration options available to you. What I’m showing you here is Test Center in Jama Connect. One of the most common requests I receive from my clients is, “Where can I put a prerequisite or preconditions field in Jama Connect?” Ideally, you want to place it where the description field is located on the test execution tab here. However, you don’t have control over the order of the items that are going to be displayed on the test execution tab.
The best way to accomplish this is to reuse, or rather commandeer, the description field of your test case to be your new preconditions field. So the way you would do that very simply is you would go to your admin panel, go to item types, select your particular test case item, and then look for a unique field name called description and rename that to be your preconditions field. Any value you enter into your new preconditions field will appear in the description field of the associated test run. All right? So let’s try it out. Let’s go into our project, go under verifications. We’ll pick the first test case and enter a precondition: “This is a precondition.” Save that off. When we go back to the test plan and look at the test runs, you’ll notice it’s now out of sync because we updated the test case. We’ll go ahead and resync, and now, when you execute your particular test case, or rather, execute the test run, you’ll see here that the precondition now appears above the test steps.
What Is Fault Tree Analysis (FTA)? How It Works and When to Use It
Fault tree analysis (FTA) helps engineering teams figure out every way a system could fail before it ships. You pick the worst thing that could happen, then work backward to find every combination of events that could cause it. When those findings stay tied to the actual design, the analysis catches dangerous paths early. That’s why regulators across aerospace, automotive, medical device, and nuclear programs expect it.
The U.S. Nuclear Regulatory Commission showed what this looks like in practice back in the mid-1970s when it published WASH-1400, one of the first big risk assessments of a nuclear power plant. A later NRC report said the work gave insights into real incidents that were hard to get any other way. The method hasn’t changed much since then, but keeping findings connected to the design is still where most teams run into trouble.
This guide covers what fault tree analysis is, how to build one, how it compares to FMEA, and where that connection usually breaks down.
What Is Fault Tree Analysis (FTA)?
Fault tree analysis (FTA) is a top-down method where you start with the worst outcome your system could produce, called the top event, and trace backward through layers of causes connected by logic gates. Instead of asking “what could go wrong?” in general, you pick one specific failure and work out whether the design actually prevents it.
That’s what sets it apart from most other safety methods. You model how component failures, human errors, environmental conditions, and system interactions can combine to cause that one outcome. The goal is to find every path to the failure and figure out which ones need design attention right now.
Fault Tree Analysis vs. Failure Mode and Effects Analysis (FMEA)
Fault tree analysis and FMEA (failure mode and effects analysis) answer different questions, and most teams use both. Here’s where they split and why the handoff between them often breaks.
| Attribute | Fault Tree Analysis | FMEA |
| --- | --- | --- |
| Direction | Top-down (deductive) | Bottom-up (inductive) |
| Starting point | A specific system failure | Individual component failure modes |
| Primary question | “How can this failure occur?” | “What happens if this component fails?” |
| Quantitative output | Failure probability modeling | Risk ranking or prioritization |
| External events | Can include environmental and human factors | Usually narrower in scope |
A failure mode from FMEA often feeds the fault tree, and the fault tree produces a safety requirement. Testing gets planned against that requirement, but the link back to the original hazard can get weak over time, especially when requirements change and nobody reassesses the downstream work.
Why Fault Tree Analysis Matters
Most safety methods look at failures one at a time. Fault tree analysis is one of the few that shows how failures combine. A sensor glitch on its own might be harmless, but pair it with an operator error and a backup system that shares the same power supply, and you’ve got a path to a catastrophic event that nobody saw coming.
That’s the real value of fault tree analysis: it forces you to think about how independent your redundancies actually are, whether your backup systems share common weaknesses, and which single points of failure the design still has. It also gives you something you can show to regulators and auditors, not just an opinion that the system is safe, but a documented chain of reasoning that proves it.
When to Use Fault Tree Analysis
Fault tree analysis is worth the effort in specific situations. It takes real work to do well, so it helps to know when it pays off and when something simpler would do. The clearest use cases look like this:
Catastrophic top events: When the failure you’re looking at could hurt people or damage the environment, fault tree analysis gives you a clear way to map every path to that failure.
Redundancy and common-cause risk: If the design uses redundant systems, the analysis can show whether those systems are truly independent or share a weakness the architecture missed.
Quantitative safety targets: Because fault tree analysis supports probability modeling, teams can calculate whether a design meets a safety target and decide where to add redundancy or change the architecture.
Regulatory and certification needs: NASA includes fault tree analysis in its system safety standards. Programs under DO-178C (airborne software), ISO 26262 (automotive functional safety), IEC 62304 (medical device software), and FDA design controls all use it because regulators want clear, documented reasoning about how failures happen.
If the top event isn’t catastrophic or the system isn’t complex enough for failures to combine in non-obvious ways, FMEA on its own may be enough.
How Fault Tree Analysis Works
The process starts with one question: what’s the one failure that absolutely can’t happen? You pick that as the top event and work backward through every combination of causes that could lead to it. The tree uses four main symbols (defined by IEC 61025):
Top event: The system failure you’re analyzing.
Basic event: A root cause where you stop breaking things down.
AND gate: Every failure in the group has to happen at the same time for the top event to occur.
OR gate: Any single failure on its own is enough to cause the top event.
You define the top event first. A broad one makes the tree unmanageable, so you want something specific enough to act on but serious enough to justify the work. From there, you break down causes layer by layer and connect them with AND and OR gates based on the system architecture, interfaces, and known hazards.
Once the tree is built, you look for the minimal cut sets, the smallest groups of failures that can cause the top event. Order-1 cut sets (single points of failure) need attention first because they show where the system is weaker than the team thought. If you have failure probability data, you can also put numbers on the tree and compare risk against safety targets.
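The gate logic and cut-set search described above can be sketched in a few lines of Python. This is a minimal illustration, not a real FTA tool: the nested-tuple tree encoding, the helper names, and the event probabilities are all assumptions made for the example, and independence of basic events is assumed for the probability math.

```python
from itertools import product

# A fault tree as nested tuples: ("AND" | "OR", [children]); leaves are
# basic-event names. This encoding is illustrative, not a standard format.
def cut_sets(node):
    """Return the minimal cut sets of a fault tree."""
    if isinstance(node, str):                      # basic event
        return [frozenset([node])]
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                               # any child alone triggers the gate
        sets = {s for cs in child_sets for s in cs}
    else:                                          # AND: one cut set per combination
        sets = {frozenset().union(*combo) for combo in product(*child_sets)}
    # keep only minimal sets (drop any strict superset of another cut set)
    return [s for s in sets if not any(t < s for t in sets)]

def top_probability(node, p):
    """Top-event probability, assuming independent basic events."""
    if isinstance(node, str):
        return p[node]
    gate, children = node
    probs = [top_probability(c, p) for c in children]
    if gate == "AND":                              # all must fail: multiply
        out = 1.0
        for q in probs:
            out *= q
        return out
    out = 1.0                                      # OR: 1 - product(1 - p_i)
    for q in probs:
        out *= 1.0 - q
    return 1.0 - out

# Top event occurs if A fails, or if B and C both fail.
tree = ("OR", ["A", ("AND", ["B", "C"])])
print(cut_sets(tree))          # minimal cut sets: {'A'} and {'B', 'C'} (order may vary)
print(top_probability(tree, {"A": 0.01, "B": 0.05, "C": 0.02}))
```

Here `{'A'}` is an Order-1 cut set, exactly the kind of single point of failure the text says needs attention first.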
Fault Tree Analysis Example: Medical Device
Take an infusion pump where the top event is “unintended drug overdose.” An OR gate at the top splits into two paths: either the pump delivers too much, or the system fails to catch the over-delivery. The first path breaks down through an AND gate (valve sticks open AND flow sensor gives a false reading at the same time). The second is an OR gate where any single alarm failure lets the overdose go unnoticed.
When you run the cut sets, you might find that one alarm circuit failure on its own is enough to cause the top event. That’s an Order-1 cut set, and it tells you the design needs a backup alarm or an independent shutoff. That’s where fault tree analysis changes the design, not just documents the risk. NASA, nuclear, and automotive teams all use the same logic on their own systems, and the analysis pays off in every case when its findings stay connected to the requirements and tests that prove the risk was handled.
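The infusion-pump tree above is small enough to brute-force over its truth table, which is another way to see why the alarm failure surfaces as an Order-1 cut set. The sketch below follows the gate structure the example describes; the event names, and the second alarm event under the OR gate, are illustrative assumptions.

```python
from itertools import combinations

# Basic events for the hypothetical infusion-pump tree; names are illustrative.
EVENTS = ["valve_stuck_open", "flow_sensor_false",
          "alarm_circuit_fail", "alarm_software_fail"]

def top_event(failed):
    """True when 'unintended drug overdose' occurs for a given set of failed events."""
    over_delivery = ("valve_stuck_open" in failed and
                     "flow_sensor_false" in failed)       # AND gate
    alarm_missed = ("alarm_circuit_fail" in failed or
                    "alarm_software_fail" in failed)      # OR gate
    return over_delivery or alarm_missed                  # top-level OR gate

def minimal_cut_sets():
    """Brute-force every combination of failures, keep only minimal failing sets."""
    failing = [frozenset(c) for n in range(1, len(EVENTS) + 1)
               for c in combinations(EVENTS, n) if top_event(set(c))]
    return [s for s in failing if not any(t < s for t in failing)]

for cs in sorted(minimal_cut_sets(), key=len):
    print(sorted(cs))
```

Running it shows the two single-event alarm cut sets alongside the two-event valve-plus-sensor cut set, which is the finding that drives the backup-alarm design change in the text.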
Limitations and Where Fault Tree Analysis Falls Short
Fault tree analysis has real limits. It only models binary states (working or failed), it can’t capture the order events happen in, and complex systems produce trees that are hard to maintain. But the bigger problem is what happens after. Teams rarely struggle with the analysis itself. What breaks is the handoff:
Disconnected mitigations: The tree identifies a single-point failure, but the requirement that came from it lives in a different system and loses its connection to the original hazard.
Post-review requirement changes: A test or design constraint downstream doesn’t get updated because nobody sees the upstream change fast enough.
Surface-level audit trails: The analysis, requirement, risk control, and test all exist on paper. But the connection between them is weak or outdated, and nobody notices until an auditor pulls a sample.
Once those links break, the tree stops being useful evidence and turns into a static document. NASA research shows that fixing a requirements error at the test stage can cost 21 to 78 times more than catching it during requirements, and that number climbs to 29 to over 1,500 times more in operations. If a fault tree analysis finding gets lost between the safety review and the requirement baseline, the program has already made the problem much more expensive to fix.
Keep Fault Tree Analysis Findings Connected to the Design
The best fault tree analysis doesn’t end with a clean diagram. It ends with a changed design, a stronger requirement, a better test, or a risk control that stays linked as the product changes over time. Teams that keep those connections strong see 1.8X faster defect detection, 2.1X faster test execution, and 2.4X lower test failure rates compared to teams in the bottom quartile.
If you want those kinds of results, Jama Connect® is built for exactly this. Its Live Traceability™ approach flags when a change upstream affects something downstream, so the full chain from hazard to requirement to test stays visible as the project moves forward. Try Jama Connect free for 30 days.
Frequently Asked Questions About Fault Tree Analysis
What is the difference between fault tree analysis and FMEA?
Fault tree analysis picks a specific system failure and traces backward to find every combination of events that could cause it. FMEA goes the other direction, starting with individual parts and asking what happens when each one fails. The two work well together because fault tree analysis catches dangerous combinations while FMEA catches failure modes that might not show up in a top-down view.
When should you use fault tree analysis instead of other safety methods?
Fault tree analysis works best when the top event is catastrophic and you need to understand how failures combine to cause it. It’s the go-to when a program needs to put numbers on failure probabilities or show regulators clear safety evidence.
What is the difference between quantitative and qualitative fault tree analysis?
Qualitative fault tree analysis maps the failure paths and identifies the cut sets without calculating probabilities. It tells you which failures are dangerous and where single points of failure exist. Quantitative fault tree analysis goes further by assigning failure probability data to each basic event and calculating the overall likelihood of the top event. Use quantitative when you need to prove a design meets a specific safety target or compare risk between design options.
Can you do fault tree analysis without specialized software?
You can build simple fault trees with any diagramming tool or even on a whiteboard. The tree itself doesn’t need special software. Where things get harder is keeping the findings connected to requirements, tests, and risk controls as the design changes. That’s a traceability problem, and it’s where purpose-built tools like Jama Connect help most.
How do you keep fault tree analysis findings tied to the design?
The biggest risk is that findings get written down but never connected to the requirements, risk controls, or tests they should feed into. You need those connections to stay visible so that when something changes upstream, the downstream work gets checked too. Jama Connect’s Live Traceability does this by flagging when a change affects related work.
This is a preview of our recent webinar. Watch the entire webinar HERE.
Software Intensive Defense Systems: An Agile Approach to Electronic Warfare Development
Accelerating Mission Readiness in Software Intensive Electronic Warfare Programs
Defense systems programs are large, complex, and highly regulated. In electronic warfare and other software-intensive environments, requirements evolve rapidly while compliance and mission assurance remain non-negotiable.
Yet many teams still manage requirements, design decisions, and change requests across disconnected documents, spreadsheets, and siloed repositories, increasing program risk through stakeholder misalignment, costly rework, uncontrolled scope changes, and limited traceability across the system lifecycle.
In this webinar, Cary Bryczek, Solutions Architecture Director, A&D at Jama Software, explores the real-world challenges of managing requirements in software intensive defense systems and shares how an agile, structured approach can improve speed, alignment, and program confidence.
You’ll learn:
Why document- and spreadsheet-based approaches break down on complex defense programs
How limited requirements visibility increases risk across electronic warfare development
Practical strategies for managing change while maintaining cross-functional alignment
Techniques to maintain end-to-end traceability from mission requirements through validation and deployment
What modern defense teams require to support compliance, speed to mission, and deployment readiness
Explore how Jama Connect® enables structured collaboration and AI-assisted workflows for agile defense development
THE VIDEO BELOW IS A PREVIEW OF THIS WEBINAR – WATCH THE ENTIRE PRESENTATION HERE
BELOW IS AN ABBREVIATED SECTION OF THIS TRANSCRIPT
Cary Bryczek: Let’s talk about our agenda for today’s webinar. Today, we’re going to talk about the call for more agility. We’ll look at an instrumented agile approach. We’ll go into depth on our AI-enabled engineering. We’ll look at measuring engineering as well. We’ll have a bit of a product demo and, of course, some Q&A. The US Department of War recognizes that acquisition programs still need to modernize to transform and meet rapid changes in today’s landscape, and agility is a really important theme in that. The department is going to be modernizing systems engineering across all acquisition pathways to enable agile development and technology insertion. They want improved technology and manufacturing risk management. They need to reduce the need for testing. They need to reduce the amount of rework and retesting to certify a system. This transformation is really critical, given the rapid modernization of technology and the increased use of software acquisition, advanced computing, AI, and model-based acquisition.
These tools, properly applied, inherently reduce requirements and design defects, and the test build-up and scope required when verifying, validating, and certifying end items. This particular strategy is echoed in numerous places. The CIA is radically shifting to a culture of speed, agility, and innovation. The Defense Acquisition University points out that requirements churn is still a fundamental problem that requires innovative approaches for more rapid delivery of capabilities to the warfighter. Interoperability is central to what they’re trying to achieve. Lawmakers are requiring the Army to outline how new systems will integrate with the existing programs of record. Cyber practices are still front and center. Acquisition, requirements, and AI all need to be realigned toward rapid, incremental delivery and the operationalization of minimal mission capability.
Bryczek: So there are some success levers. These are the three big ones: improved agility, interoperability, and the modernization of systems engineering. Agility doesn’t just mean having a DevSecOps process if you’re doing software development; it also needs to bridge systems engineering itself and bring that interoperability to the forefront. I call out MOSA (Modular Open Systems Approach) because it’s really an integrated business and technical strategy. MOSA implies the use of modular design, including system interfaces designed according to accepted standards. And those types of conformances can be verified.
So it’s no mystery that document- and spreadsheet-based approaches fall short on complex defense projects. And even when using some modeling tools in a siloed fashion, teams still experience manual effort to cross-reference traceability and perform change impact analysis. Poor requirements visibility leads to misalignment, rework, and scope creep. Defense systems projects are large, they’re complex, and they may involve many design standards in the mix. Yet many teams are still managing requirements, design, and program change decisions using disconnected documents, spreadsheets, and siloed tool repositories. The result is misalignment between stakeholders, costly rework, and scope creep, and the limited traceability from early mission requirements through design, development, and deployment is a real hindrance.
So what we really need to do is to bring that value stream closer to software development. Inside the software development process, the agility to plan, to process those user stories, to execute deployment in a secure manner, it’s there. But outside of that fast-moving DevSecOps process, the warfighter’s mission needs, the capabilities and the constraints, the CONOPS, those are often changing, and they’re often changing in fast and unexpected ways. The hardware that the software is deployed on may change, or whole new capabilities might need to be fielded. Agility needs to happen outside of the DevSecOps process. And the good news is that Jama Connect is a really good tool to make that happen.
Bryczek: So we’ve all heard about shifting left. It’s not just about performing testing earlier; we need to combine that DevSecOps process with the rest of the value chain. Tools like Jira and Azure DevOps are great for linking work items, code changes, builds, and releases. But complexity, especially in the context of embedded systems and complex system-of-systems software, requires broader traceability, with change and impact analysis that crosses outside of the DevSecOps boundaries. Change is really complex, and Jama Connect really is the only platform that can truly solve for shifting left. Our software enables faster validation of warfighter needs. We enable the warfighter to collaborate earlier via feedback loops embedded directly in our software using our review center, rather than via email or document markups.
Our software can provide requirements to the software teams that represent the contextualized needs of the warfighter as those needs morph. Jama Connect’s Live Traceability™ provides the agility that program management teams need to assist decision-making and to help software teams adapt to changing needs. Our software also helps hardware and software teams stay aligned when following standards such as MOSA, FACE, and CMOSS. Jama Connect’s responsible AI for requirements and tests radically increases the speed of development, speeding up the time to develop and link test cases to requirements. It’s a core aspect of building a high-quality product and speeding the delivery of warfighting capabilities. Being able to simply click a button and be presented with ten relevant test cases, complete with steps, is a huge leap forward and results in significant time savings. It’s already providing valuable time to our own Jama Software internal engineering teams. I’m really excited for our clients to start adopting this, and I can’t wait to show it to you in our demo.
This blog overviews our Customer Story, “SPAN Electrifies Its Product Development and Safety with Jama Connect” – Download the entire story HERE.
SPAN Electrifies Its Product Development and Safety with Jama Connect
“By implementing Jama Connect, our teams are able to maintain transparency across stakeholders, streamline communication, and ensure alignment on project goals. This integrated approach reduces redundant efforts and helps accelerate product development cycles while maintaining compliance and quality standards,” – Arnaldo Arancibia, Senior Staff Systems Architect, SPAN
ABOUT SPAN
SPAN is an innovative company revolutionizing the home energy market with smart electrical panels, EV chargers, and energy storage systems. Headquartered in San Francisco, SPAN emphasizes sustainability and cutting-edge technology to deliver smarter energy solutions for homes, advancing how people interact with energy systems.
CUSTOMER STORY OVERVIEW
SPAN needed to improve traceability of requirements across product ideation, systems, hardware, and software development, while ensuring compliance with critical safety standards such as UL 916, UL 60730, UL 1998, UL 3141, UL 1741, and UL 9540, among others. As requirements grew more complex and teams scaled across functions, the startup recognized the need to replace its manual process for managing traceability in spreadsheets based on product requirements documents (PRDs) in Confluence.
To address these challenges, SPAN selected Jama Connect for its centralized platform that enables cross-functional collaboration and alignment using an easy-to-use, single source of truth for managing its systems, hardware, and firmware requirements, test plans, and compliance documentation.
Streamlined system validation, reducing timelines by up to 25% through effortless tracing and organized requirements management
Reviews with feedback and questions were reduced to two cycles, which expedited the way each new feature started being implemented
Fewer delays and improved efficiency through automatic syncing of tasks in Jira and requirements using Jama Connect Interchange™
Efficient reuse of requirements and tests for shared components between existing and next-generation products using Jama Connect’s Reuse and Sync capabilities
“Using Jama Connect for test reporting has increased my team’s visibility significantly. The ability to add custom cycles and show test progress is a huge help in getting clarity on the stability of our system.” – Paloma Fautley, Systems Integration Manager, SPAN
CHALLENGES
SPAN had various priorities during its growth phase, matched by equally pressing challenges that defined the criteria for a new solution, including:
Difficulty finding information in Confluence and maintaining traceability for complex requirements across projects in spreadsheets
Ineffective cross-team communication and collaboration due to siloed hardware and software departments and their workflows
Struggling with a highly iterative development process involving increasingly complex requirements, while scaling startup operations
EVALUATION
Key stakeholders with experience with Polarion and IBM® DOORS® recognized that Jama Connect was the right solution because of its intuitiveness, flexibility, interoperability, and structured collaboration.
Quick configuration and launch of a customized project structure that encouraged team collaboration and communication
Centralized system providing a single source for tracking changes and ensuring alignment with product safety standards
End-to-end traceability across hardware and software requirements and tests with connectivity to other development tools
“Jama Software’s core principles of collaboration, innovation, and customer focus have created a ‘Jama Connect culture’ at SPAN that encourages engineers to think systematically about requirements from development to testing, which are now central to their operations.” – Arnaldo Arancibia, Senior Staff Systems Architect, SPAN
Since implementing Jama Connect, SPAN has realized significant benefits from the solution that have contributed to greater confidence and speed in its product development process.
Savings of about three months of system validation due to ease of tracing and organizing requirements
Reviews with feedback and questions were reduced to two cycles, which expedited the way each new feature started being implemented
Fewer delays and improved efficiency through automatic syncing of tasks in Jira and requirements in Jama Connect using Jama Connect Interchange
Efficient reuse of requirements and tests for shared components between existing and next-generation products using Jama Connect’s Reuse and Sync capabilities
Jama Software is always looking for news that would benefit and inform our industry partners. As such, we’ve curated a series of customer and industry spotlight articles that we found insightful. In this blog post, we share an article from AMA, titled “Augmented Intelligence in Medicine” and originally published on October 21, 2025.
Augmented intelligence in medicine
Artificial intelligence vs. augmented intelligence
The AMA House of Delegates uses the term augmented intelligence (AI) as a conceptualization of artificial intelligence that focuses on AI’s assistive role, emphasizing that its design enhances human intelligence rather than replaces it.
AMA policy on AI development, deployment and use
The AMA is committed to ensuring that AI can meet its full potential to advance clinical care and improve clinician well-being. As the number of AI-enabled health care tools continues to grow, it is critical that they are designed, developed, and deployed in a manner that is ethical, equitable, and responsible. The use of AI in health care must be transparent to both physicians and patients.
In addition to medical devices, AI is increasingly used in health care administration or to reduce physician burden, and policy and guidance for both device and non-device use of health care AI is necessary. Recognizing this, the AMA has developed new policy (PDF) that addresses the development, deployment and use of health care AI, with particular emphasis on:
Health care AI oversight
When and what to disclose to advance AI transparency
Generative AI policies and governance
Physician liability for use of AI-enabled technologies
AI data privacy and cybersecurity
Payor use of AI and automated decision-making systems
Physician sentiments on AI
In 2023, the AMA conducted a comprehensive study of over 1,000 physicians’ sentiments towards the use of AI in health care including current use and future motivations for use, key concerns, areas of greatest opportunity and requirements for adoption. Given the rapidly evolving AI landscape across health care, the AMA repeated the study in late 2024 (PDF). The objectives of this research remain:
Capturing the sentiment among practicing physicians regarding the increased usage of AI in health care
Evaluating AI use cases based on their familiarity, relevance, and usefulness
Identifying key resources and areas of need for physicians to consider implementation of AI tools to their practice
Physicians largely remain enthusiastic about the potential of AI in health care, with 68% seeing at least some advantage to the use of AI in their practice, up from 65% in 2023. We also saw use of AI increase from 38% in 2023 to 66% of physicians reporting they use some type of AI tool in practice in 2024.
However, there are still key concerns as physicians continue to explore how these tools will impact their practices. Implementation guidance and research, including clinical evidence, remain critical to helping physicians adopt AI tools.
Physician sentiments study on AI: AMA’s latest study on physician sentiments around the use of AI in health care: motivations, opportunities, risks, and use cases. Read Now (PDF)
AI is playing an increasingly important role at all stages of the medical education continuum, both as a tool for educators and learners and as a subject of study in and of itself. AI has the potential to transform the educational experience as a part of precision education and transform patient care as a part of precision health. Learn more about how AI can impact medical education.
In October 2025, AMA launched the Center for Digital Health and AI to put physicians at the center of shaping, guiding and implementing AI tools and other technologies that are transforming medicine.
AMA welcomes the federal government’s new 2025 action plan on AI and the opportunity to work with the administration to address key areas in shaping AI regulation, policy and implementation. Learn more.
An AMA issue brief (PDF) provides a brief overview of recent state legislative activity and discusses three key AI policy areas for state legislative/regulatory activity: health plan use of AI, transparency and physician liability.
To develop actionable guidance for AI in health care, the AMA reviewed literature on the challenges health care AI poses and reflected on existing guidance. These findings are published in a paper in the Journal of Medical Systems: Trustworthy Augmented Intelligence in Health Care.
The current CPT® code set drives communication across health care by enabling seamless processing and advanced analytics of medical procedures and services.
AMA offers several resources to provide guidance on the updated CPT® code set for classifying various AI applications as well as advisory expertise through the Digital Medicine Payment Advisory Group (DMPAG). DMPAG identifies barriers to digital medicine adoption and proposes comprehensive solutions on coding, payment, coverage and more. Stay up-to-date on the criteria for CPT® codes, access applications and read frequently asked questions.
Stop Scrambling for Submissions. Build Readiness Into Your Process With AI.
Regulatory submissions often become a stressful, last-minute rush, increasing risk, rework, and frustration. But what if you could embed submission readiness into your process from the start? Artificial Intelligence (AI) is making this a reality by connecting requirements, regulatory guidance, and ongoing monitoring seamlessly throughout the product lifecycle.
From Requirements to Regulatory: How AI Is Transforming Submission Readiness
Tom Rish: Thank you to everyone for being here today. We have a very exciting webinar about AI, a hot topic, of course, as always, and so I’m excited to dive into it. Before we do, I just want to talk very briefly about Jama Software and what we do. I know some of you have watched previous webinars and know all about this, but I want to give a high-level overview and talk a little about how we are looking to incorporate AI to make your life easier when it comes to requirements management. So first, Jama Connect®. As you all know, when it comes to launching a product, you have to keep track of all your requirements, all of your risk items, all of your testing, and everything like that. It can be a lot of work, especially in spreadsheets or disjointed systems, whatever it is you use.
And at Jama Software, what we’re trying to do is make it simple for you. We want you to focus on designing. We want you to focus on testing. We want you to focus on important things like the safety of the patient and not worry as much about paperwork and organizing everything. A lot of times, as you know, that’s done at the end, and it’s a checkbox activity. But we have a system, as you can see there on the left. I know many of you are used to a lot of documentation and everything. We want to bring that into a very organized V model that you’ve all seen there. Start with user needs. Enter those right into the system, build as you go. We can connect all of the systems you use, whether it’s software products, and you’re using a lot of things like Jira, GitHub, things like that, all your test systems, but we want to keep things organized.
Rish: What’s cool about Jama Connect is that we work with all industries, but we have frameworks specifically for medical devices. So out of the box, we’re able to build a framework where you can match it to your processes to track your user needs, design controls, risk management, and all of your tests. We have real-time collaboration so that you can do all of your reviews and comments in the software, create libraries, and release things. And finally, we have the AI guidance that I’m here to talk about today.
A couple of things here on this slide, mostly focused on requirements management. One of these features is available today, some of our customers are using it, and we’ve gotten some good feedback; others are coming in the future. The first thing we have today is a scoring system. When you enter your requirements into Jama Connect, we have AI that scans through INCOSE and EARS guidance and tells you how well each requirement is written. It gives you a score to tell you, “Hey, this one looks pretty good,” or, “This one doesn’t, and here’s the rule or guidance it doesn’t quite meet.” That’s ready to use today. I’ve talked to a few customers already who have said how helpful it has been for downstream operations, like creating better tests.
We’re also working on features that will help rewrite requirements if needed. So not only does it give you scores, it helps you rewrite them so they match the guidance better. If you give it an initial draft of a requirement, we’ll score it, but we’ll also give you some recommendations for changing it.
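To make the scoring idea concrete, here is a rough sketch of pattern-based requirement checking against the EARS templates (ubiquitous, event-driven, state-driven, unwanted behavior). This is a simplified illustration, not Jama Connect's actual scoring engine; the regex patterns, penalty weights, and vague-term list are all assumptions for demonstration.

```python
import re

# EARS templates (Mavin et al.) expressed as rough regexes; real tooling
# uses far more sophisticated linguistic analysis than this sketch.
EARS_PATTERNS = {
    "ubiquitous": re.compile(r"^The \w+.* shall ", re.IGNORECASE),
    "event-driven": re.compile(r"^When .+, the \w+.* shall ", re.IGNORECASE),
    "state-driven": re.compile(r"^While .+, the \w+.* shall ", re.IGNORECASE),
    "unwanted-behavior": re.compile(r"^If .+, then the \w+.* shall ", re.IGNORECASE),
}
VAGUE_TERMS = {"fast", "user-friendly", "adequate", "appropriate", "etc"}

def score_requirement(text: str) -> dict:
    """Return a rough quality score (0-100) plus the findings behind it."""
    findings = []
    score = 100
    if not any(p.match(text) for p in EARS_PATTERNS.values()):
        findings.append("does not match any EARS template")
        score -= 40
    hits = sorted(w for w in VAGUE_TERMS
                  if re.search(rf"\b{w}\b", text, re.IGNORECASE))
    if hits:
        findings.append(f"vague terms: {', '.join(hits)}")
        score -= 20 * len(hits)
    if " and " in text and " shall " in text:
        findings.append("possible compound requirement (consider splitting)")
        score -= 10
    return {"score": max(score, 0), "findings": findings}

print(score_requirement("When the footswitch is pressed, the arm shall stop within 100 ms."))
print(score_requirement("The robot should be fast and user-friendly."))
```

The first requirement matches the event-driven EARS template and keeps its full score; the second fails every template and contains vague terms, so it scores low with explanations attached, which mirrors the "here's the rule it doesn't meet" feedback described above.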
I think everyone’s probably wondering: can’t you just create them for us? We are looking into ways you can enter project inputs into the software and have it generate requirements for you. That will come in the future, along with PDF parsing. A lot of you come with existing documentation already: requirements documents, software specification documents, things like that. We’re working on AI features that will take those and create requirements automatically for you, in the structure they’re in.
Rish: A couple of other things. One thing that is new now, again, is test case generation. When you have your requirements in there, what we want to do is help you create good testing and guidance for creating the right acceptance criteria and things like that for your testing. Also, looking at an AI assistant, I think everyone is used to AI assistance these days, but a more conversational workflow where you can enter information into the software, and we’ll give you some guidance and feedback on that. Also, looking into ways that we can take your requirements and give you tips on how to link them together better, create better relationships, and finally help with reviews to detect areas that maybe are high risk.
I think later on, what we’re going to talk about is how the FDA and other regulatory bodies are starting to incorporate AI. So what we want to do is help you get it right up front so that when it’s sent over there, you feel good about everything. So that’s a little bit about Jama and how we’re using AI today. Now for the main event, I’m excited to pass it over to Adam. I met Adam at the MedTech conference in San Diego. And when I went up to his booth, I was instantly impressed. I think as a product development engineer, I spent a lot of time searching through the FDA databases.
And there are a lot of them, as I’m sure you all know, and there is excellent information in those databases. The challenging part is that it’s hard to go to each one every time and find what you need. The interfaces are a little outdated at times as well. You can find everything, but it’s just not easy. And what I always thought is, why can’t anybody scrape this information or pull this information and use it in a better format and make our lives easier? And that’s exactly what Adam and his team are doing. And so I’m excited to hand it over to him, and he will tell you more about Agent Astro and give some practical tips about how to better use AI throughout your process.
Navigating FDA AI Guidance for Medical Devices: A Practical Guide
For medical device professionals, the integration of Artificial Intelligence (AI) and Machine Learning (ML) represents a monumental leap forward in innovation. However, this progress comes with significant regulatory hurdles. As AI algorithms evolve, so do the rules that govern them, leaving many development, quality, and regulatory teams struggling to keep pace. Failing to understand and adapt to the latest FDA AI guidance can lead to submission delays, compliance issues, and costly rework.
This guide delivers a practical overview of the evolving FDA regulatory framework for AI and ML-based medical devices, drawing on both recent draft guidance and the agency’s longer-term action plans. We highlight essential concepts including the Predetermined Change Control Plan (PCCP), Good Machine Learning Practices (GMLP), and Real-World Performance (RWP) monitoring and show how these shape the compliance landscape for manufacturers.
TL;DR: The FDA is moving toward a holistic Total Product Lifecycle (TPLC) regulatory approach for AI/ML-enabled medical devices, emphasizing continuous monitoring, clear GMLP, and mechanisms for pre-planned algorithm updates. Robust, traceable documentation, and proactive lifecycle risk management are now essential for compliance and product success.
The FDA’s Evolving AI/ML Regulatory Framework
The FDA has signaled its commitment to adapting device oversight in response to rapid advances in AI/ML. Traditionally, regulatory submissions were point-in-time events. Now, regulators recognize that adaptive, learning systems require ongoing oversight, especially as software “learns” from real-world experience.
Key foundational documents illustrate this evolution:
FDA’s 2021 AI/ML-Based Software as a Medical Device (SaMD) Action Plan: This action plan lays out five pillars to modernize oversight: development of a tailored regulatory framework, advancement of GMLP, fostering transparency with users, promoting methodologies for bias and robustness, and supporting real-world performance pilots.
Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations (Draft Guidance, 2025): This draft guidance details expectations for managing AI within medical devices throughout the entire product lifecycle, including design, labeling, bias mitigation, cybersecurity, postmarket surveillance, and the importance of the Predetermined Change Control Plan.
Clinical Decision Support Software Guidance (2022): Clarifies FDA’s criteria for Clinical Decision Support (CDS) software functions, offering practical examples to distinguish Non-Device CDS (software functions excluded from device regulation) from those that remain under device oversight.
FDA AI/ML-Enabled Medical Devices List: Provides a current catalog of FDA-authorized devices using AI/ML technologies, helping manufacturers benchmark their projects and understand regulatory precedent.
In summary: The FDA’s approach now encompasses both initial submissions and ongoing, risk-based management, aligning regulatory expectations with the unique characteristics of AI/ML-driven technologies.
1. Predetermined Change Control Plan (PCCP)

Introduced in the 2021 action plan and expanded in the 2025 draft guidance, a PCCP enables manufacturers to define anticipated modifications to an AI/ML algorithm upfront. The plan specifies “what” may be changed (pre-specifications) and “how” changes are managed (an algorithm change protocol). This approach recognizes the evolving nature of AI/ML models, especially those learning from real-world use.
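The two-part structure just described, pre-specifications plus an algorithm change protocol, can be sketched as a small data model. This is an illustrative mock-up, not an FDA template: every field name, the locked validation dataset, and the 0.92 performance floor are assumptions chosen for demonstration.

```python
from dataclasses import dataclass

@dataclass
class PreSpecification:
    name: str                 # a bounded, pre-declared modification
    allowed_change: str       # e.g. "weights only; architecture frozen"
    performance_floor: float  # metric the updated model must still meet

@dataclass
class ChangeProtocol:
    validation_dataset: str   # locked reference set used for every update
    required_metric: str      # e.g. "sensitivity"

@dataclass
class PCCP:
    pre_specs: list
    protocol: ChangeProtocol

    def authorizes(self, change_name: str, measured_metric: float) -> bool:
        """A proposed update is in scope only if it was pre-specified AND
        clears the pre-declared performance floor on the locked dataset."""
        for spec in self.pre_specs:
            if spec.name == change_name:
                return measured_metric >= spec.performance_floor
        return False  # out-of-scope changes need a new regulatory review

plan = PCCP(
    pre_specs=[PreSpecification("retrain on new site data",
                                "weights only; architecture frozen", 0.92)],
    protocol=ChangeProtocol("validation_v1.parquet", "sensitivity"),
)
print(plan.authorizes("retrain on new site data", 0.94))  # pre-specified, floor met
print(plan.authorizes("add new output class", 0.99))      # not pre-specified
```

The design point is that the gate is declared before any change is proposed: a retraining that stays within the pre-specification and clears the floor is handled under the plan, while anything not pre-specified falls back to the normal submission path.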
2. Good Machine Learning Practices (GMLP)
The FDA calls for GMLP, which are best practices covering data management, training procedures, documentation, interpretability, and bias mitigation, all aligned with consensus standards. GMLP underpins both product quality and regulator confidence, reducing the risk of unexpected outcomes or patient harm (See Action Plan Pillar 2).
3. Transparency and User Trust
Both guidance documents emphasize transparency for end users including clinicians, patients, and caregivers. Clear labeling, robust documentation, and transparency about model logic, data sources, and limitations are expected to build trust in AI/ML-powered devices.
4. Real-World Performance (RWP) Monitoring
Unlike static software devices, AI/ML-based products must demonstrate ongoing safety and efficacy. The FDA encourages collection and review of real-world data as part of postmarket surveillance. Manufacturers should implement plans for ongoing performance monitoring by adapting both processes and documentation to ensure device quality over time.
5. Bias Mitigation and Robustness
AI/ML algorithms can inadvertently encode biases from historical datasets. The FDA expects proactive identification and management of bias through diverse, representative training data, ongoing performance validation, and transparent reporting on limitations and subgroup analysis.
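A minimal version of the subgroup analysis mentioned above compares each subgroup's accuracy against the overall accuracy and flags large gaps. The 5% tolerance is an arbitrary assumption for illustration; real bias analyses use richer metrics, larger samples, and statistical testing.

```python
def subgroup_bias_report(results, tolerance=0.05):
    """results: list of (subgroup, correct: bool) pairs.
    Flags any subgroup whose accuracy trails overall accuracy by more
    than `tolerance` (an illustrative threshold, not a regulatory one)."""
    overall = sum(ok for _, ok in results) / len(results)
    groups = {}
    for g, ok in results:
        groups.setdefault(g, []).append(ok)
    flagged = {}
    for g, oks in groups.items():
        acc = sum(oks) / len(oks)
        if overall - acc > tolerance:
            flagged[g] = round(acc, 3)
    return {"overall": round(overall, 3), "flagged": flagged}

# Synthetic example: subgroup B performs noticeably worse than subgroup A.
data = [("A", True)] * 90 + [("A", False)] * 10 + \
       [("B", True)] * 70 + [("B", False)] * 30
print(subgroup_bias_report(data))
```

Here overall accuracy is 0.8, subgroup A sits at 0.9, and subgroup B at 0.7, so B is flagged, exactly the kind of subgroup gap the FDA expects manufacturers to identify, report, and mitigate.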
Step 2: Maintain Traceable, Audit-Ready Documentation

Your design history, risk management, GMLP adherence, model versions, data sets, and algorithm updates should all be auditable and linked. Use digital solutions for traceability and compliance, making audit preparation seamless.
Step 3: Prepare and Maintain a PCCP
If your product uses adaptive algorithms, develop a comprehensive Predetermined Change Control Plan. Detail the types of future modifications, associated risk controls, and your process for validating postmarket changes.
Step 4: Embrace Ongoing RWP Monitoring
Postmarket surveillance now means real-world performance tracking: collecting user feedback, monitoring for data drift and bias, and managing field updates in a proactive, traceable way.
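One common way to implement the data-drift monitoring mentioned above is the Population Stability Index (PSI), which compares the production distribution of a model input against its training-time baseline. The sketch below is illustrative; the 0.2 alert threshold is a widely used rule of thumb, not an FDA requirement, and the bin fractions are made-up data.

```python
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """Population Stability Index over pre-binned fraction lists.
    Values above ~0.2 are often treated as meaningful drift."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time bin fractions
stable   = [0.24, 0.26, 0.25, 0.25]  # production, similar population
shifted  = [0.55, 0.25, 0.10, 0.10]  # production, population changed

print(f"stable  PSI = {psi(baseline, stable):.3f}")
print(f"shifted PSI = {psi(baseline, shifted):.3f}, drift: {psi(baseline, shifted) > 0.2}")
```

In a real RWP program this check would run on a schedule against live device data, with alerts feeding the postmarket surveillance and field-update processes described in this step.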
Step 5: Differentiate Wellness from Medical Claims
Consult the Wellness Policy to determine if any features of your device are exempt from device regulation and document your rationale.
Frequently Asked Questions
Q: What’s the difference between Software as a Medical Device (SaMD) and AI in Medical Devices (AiMD)? A: SaMD refers to software that is itself a medical device. AiMD is software that is integrated into a physical device. Both fall under the FDA’s AI/ML regulatory frameworks.
Q: Is a PCCP mandatory for all AI-enabled devices? A: PCCPs are expected for devices with adaptive/evolving algorithms. Rigid, non-learning AI products may not need a PCCP, but processes for documenting and justifying updates are still required (draft guidance, 2025).
Q: How should we implement GMLP? A: Follow best practices outlined by the FDA and consensus standards. Ensure your team manages data, training processes, versioning, and labeling in a repeatable, controlled, and demonstrable manner.
Master the Complexity of AI Medical Device Development
The regulatory landscape for AI medical devices is complex, but it shouldn’t stifle innovation. By adopting an integrated approach with a live digital thread, you can manage the intricate web of requirements, risks, and data that define modern device development. This not only prepares you to pass audits with confidence but also empowers your teams to build safer, more effective products faster.
Jama Connect®, enhanced with AI-powered features in Jama Connect Advisor™, provides the end-to-end traceability needed to manage the development of complex AI-enabled systems. Streamline your documentation, automate traceability, and ensure your team is always audit-ready.
Note: This article was drafted with the aid of AI. Additional content, edits for accuracy, and industry expertise by Tom Rish.
In this blog, we’ll recap a section of our recent Expert Perspectives video, “A Method to Assess Benefit-Risk More Objectively for Healthcare Applications” – Click HERE to watch it in its entirety.
Expert Perspectives: A Method to Assess Benefit-Risk More Objectively for Healthcare Applications
Welcome to our Expert Perspectives Series, where we showcase insights from leading experts in complex product, systems, and software development. Covering industries from medical devices to aerospace and defense, we feature thought leaders who are shaping the future of their fields.
Assessing benefit‑risk is a foundational requirement for medical device manufacturers, yet it has long been one of the most challenging aspects of risk management. While risks are analyzed with rigor and precision, benefits are often described qualitatively, making objective comparisons difficult and slowing decision‑making across the product lifecycle.
A new, revolutionary method for assessing benefit‑risk changes that dynamic by unifying benefit and risk into a single, objective framework. Our expert perspectives video, “A Method to Assess Benefit-Risk More Objectively for Healthcare Applications,” offers actionable insights for healthcare innovators aiming to meet rigorous regulatory requirements while ensuring patient safety and efficacy.
In this episode of Expert Perspectives, Richard Matt explains how his method, dubbed the “Grand Unified Theory of Risk Management”, enables medical device companies to perform benefit-risk analyses with unprecedented speed and precision, delivering definitive determinations within minutes. This efficiency allows for multiple assessments throughout a project, unlocking opportunities to refine patient populations, expand product indications, and even use a benefit-risk assessment as a design parameter during development. Beyond product development, this method also provides a robust framework for addressing regulatory requirements, post-market analysis, and quality management system evaluations.
By transforming a traditionally subjective process into a data-driven, objective methodology, Richard Matt’s approach empowers healthcare innovators to bring safer, more effective solutions to market. For a deeper dive into this method and its implications, download the whitepaper from Aspen Medical Risk Consulting.
Below is a preview of our interview. Click HERE to watch it in its entirety.
Kenzie Jonsson: Welcome to our Expert Perspectives series, where we showcase insights from leading experts in complex product, systems, and software development. Covering industries from medical devices to aerospace and defense, we feature thought leaders who are shaping the future in their fields. I’m Kenzie, your host, and today I’m excited to welcome Richard Matt. Formally educated in mechanical, electrical, and software engineering and mathematics, Richard has more than thirty years of experience in product development and product remediation. He has worked with everyone from Honeywell to Pfizer and is now a renowned risk management consultant. Today, Richard will be speaking with us about his patent-pending method to assess benefit-risk more objectively in health care. Without further ado, I’d like to welcome Richard Matt.
Richard Matt: Hello. My name is Richard Matt, and I’m delighted to be speaking with you about our general solution to the problem of assessing whether the benefit of a medical action will outweigh its risk. I’ll start my presentation by saying a few words about my background and how this background led to the benefit-risk method you’ll be seeing in the presentation.
To understand my background, it really helps to go back to the first job I got out of undergraduate school. I graduated with a degree in mechanical engineering and an emphasis in fluid flow, and my first job was in the aerospace industry at Arnold Engineering Development Center, at a wind tunnel that Baron von Braun designed. I worked there as a project manager, coordinating various departments with the needs of a client who brought models to be tested. These are pictures of AEDC’s transonic wind tunnel, with its twenty-foot by forty-foot-long test section that consumes over a quarter million horsepower when running flat out. Those dots in the walls are holes; a slight suction on the outside of the wall would pull the air’s boundary layer through the holes, so a flight vehicle more closely matched its flight characteristics in free air. It was an amazing place to work.
We could talk about aerodynamic and thermodynamic issues, like why nitrogen condenses out of the air at Mach speeds above six, or why every jet fighter in every country’s air force has a maximum speed of about Mach three and a half. But to stay on the topic of benefit-risk, the reason I brought this up is that I saw there firsthand the long, looping iterations that came from different technical specialties, each approaching the same problem from the perspective of their own discipline. I found it very frustrating, and the analogy of the blind men and the elephant is very apt: each of our technical specialties looked at the same problem, the elephant, from their own view. I found myself getting frustrated that my electrical and software engineering coworkers didn’t understand what I was talking about, but I soon realized I didn’t understand what they were talking about either.
So I decided I wanted to become part of the solution to that problem by going back to graduate school to round out my education, so I could talk to these folks from their perspective too. After mechanical engineering as an undergraduate, I went back to graduate school in electrical engineering and mathematics, and I picked up enough software that I also started teaching programming in college. For my graduate thesis, I developed a solution for the robot arms in those wind tunnels: a way to control a robot arm for every possible one, two, or three rotational-degree-of-freedom arm. After I completed my thesis, I felt empowered to go wherever I wanted and do whatever I wanted, and I realized that if I wanted to do anything significant, it would take many years, so I decided to focus on teamwork.
Matt: My ability to work across technical boundaries enabled me to bring exceptional products to market. For instance, I brought an Internet of Things (IoT) device to market during the 1990s, before the Internet of Things was a thing. My leadership in product development advanced rapidly, culminating in a role as VP of Engineering at a boutique design firm in Silicon Valley.
The combination of the breadth of my formal training and my systems perspective for solving problems has helped me continue to work across boundaries. I've helped companies establish their product requirements, trace requirements, and do V&V work. I've done a lot of post-market surveillance work, established internal audit programs, and been the lead auditee when my firm was audited. I've had significant success accelerating product development. Mixed in with all of this work, I started specializing in risk management as a consulting focus rather than something I just did in the normal course of development.
And since the defense of a patent requires notice, I'll mention that the material here is being pursued as a patent, and I'd like to talk with anyone who finds this interesting to pursue after you've learned about it. So let me start my presentation on benefit-risk analysis by talking about how important it is to all branches of medicine and the many problems we have implementing it. I'll briefly outline the solution here so you can follow along as we go through the presentation. First, I'll establish a single, much more objective metric for measuring benefit and risk than people traditionally use. Then I'll accumulate overall benefit and risk from sets of values of this metric. Finally, I'll show how to draw a conclusion from the overall benefit and risk measurements about which is bigger, benefit or risk.
So in terms of importance: historically, benefit-risk has been with medicine for millennia. It's a basic tenet of all of medicine. "First, do no harm" goes all the way back to the Code of Hammurabi around 2000 BC, which legally required physicians to think not just about how a treatment could help a patient but also about what harm it might cause, and to make sure the balance of those two favors the patient. That is very much the benefit-risk balance we look at today. The result we're going to talk about is used everywhere throughout medicine: with devices, with drugs, with biologics, even with clinical trials.
So it's that fundamental across medicine. How is it used currently?
If you're developing new products, benefit-risk determinations have to be used in clinical trials to show that they're ethical to perform and that we're not putting people in danger needlessly. Benefit-risk determinations are the final gate before a new product is released for use by patients. I have a quote here from a paper put out by AstraZeneca saying the benefit-risk determination is the apex deliverable of any R&D organization. There's a lot of truth to that; it's the final thing put together to justify a product's release. So it has a very important role for the FDA, and for the regulatory structure of pretty much every country, including the EU.
Matt: In terms of creating a quality system, every medical company is required to have one, and benefit-risk determinations are used to assess a company's quality system. This is per the FDA's guidance on factors to consider in benefit-risk analysis. When regulators are evaluating a company's quality system, they'll use benefit-risk to determine whether nothing should be done, whether a product should be redesigned, or whether they should take legal action against a company, with a range of possibilities from replacing products in the field to stopping products from being shipped. It's also a key and favorite target for product liability lawsuits because of how subjective it is, and we'll get to that in a moment. It can also be used for legal actions against officers. So benefit-risk is a really foundational concept for getting products out, keeping products out there, and keeping companies running well. Just a bit of historical perspective on medical device regulation and development: I've cited here four different provisions of United States law regarding medical devices. This is a small sampling.
The point I'm trying to make here is that each of these summaries discusses continually evolving, increasingly rigorous standards for evidence and more detailed requests for information from the regulators to product development companies. So first, medical products are heavily regulated, and we have a trend of increasing analysis and rigor. Per ISO 14971, a standard that is highly respected in the medical device field, a decision as to whether risks are outweighed by benefits is essentially a matter of judgment by experienced and knowledgeable individuals.
And this is our current state of the art.
Not that everybody does it this way, but this is the most common method of performing benefit-risk analysis. And this method has a lot of problems, because it's based on judgment and it's based on individuals, and both of those can change from one setting to the next. That's why it's a favorite point of attack for product liability lawsuits.
This quote was true in 1976, when medical devices were put under FDA regulation, and it remains essentially unchanged nearly fifty years later. Benefit-risk determinations are an aberration in that, unlike the rest of medicine, they have not improved over time; they've remained a judgment by a group of individuals. In 2018, the FDA was pushed by Congress to set a goal for itself of increasing the clarity, transparency, and consistency of its benefit-risk assessments.
The subject there was human drug review, and the issue was that various drug companies had gotten very frustrated with the FDA for disagreeing with their assessments of what benefit-risk should look like. To repeat: when you have a group of individuals making a judgment, that's going to lead to inconsistencies, because both the group and their individual judgments will vary from one situation to the next. I have another quote here from the AstraZeneca article: the field of formal and structured benefit-risk assessments is relatively new.
Matt: Over the last twenty years, there's still a lack of consistent operating detail in terms of best practice by sponsors and health authorities. That's an understatement, but a true statement. There has been a lot of increasing effort over the last few years, because people are dissatisfied with the state of benefit-risk assessments and want to do better than this judgment approach. And so a plethora of new methods has been developed. I found one survey that summarizes fifty different methods, just to give you an idea of how many attempts there are, and I went through those fifty methods.
The other thing that's interesting to see is the FDA's attempts to clarify benefit-risk assessments. I have here five guidance documents from the FDA, and I would put forth the proposition that anytime you need five attempts to explain something, it means you didn't understand the thing well in the first place, or you're flailing about trying to get it right. I think this is also borne out by the drug companies' pressure on Congress to get the FDA to improve the clarity and consistency of its benefit-risk assessments.
So here are the fifty methods I found in one study of benefit-risk assessments, grouped into frameworks, metrics, estimation techniques, and utility surveys. I've gone through each one of them, and they all have fundamental problems. Let me go through a few. Health-adjusted life years are one of the few metrics that use the same measure for benefit and risk. Number needed to treat is a very popular indicator for a single characteristic, but you can't integrate it across the many factors needed for a benefit-risk assessment.
And so we've gone down the rest of these methods. If I group the fifty methods by how they accumulate risk, I get a rather useful classification. Most of the methods do not consider all the benefit and risk factors of a benefit-risk situation; they pick just one factor, and you can't combine factors with themselves or with others. It's an extremely narrow view of benefit-risk for most of these. Of the few methods that do look at all the factors, most start with what I call the judgment method, where you're forced to distill all the factors down to the most significant few, maybe four to seven factors.
So the first approach considers only one factor at a time, and the second forces you to throw away most of the factors and consider maybe four to seven. The third approach is to assign numbers to the factors, add the benefit factors together, add the risk factors together, and divide the benefit sum by the risk sum. If the quotient is bigger than one, they say the benefit is bigger than the risk; if it's less than one, they say the risk is bigger than the benefit.
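The "ratio of sums" approach just described can be sketched in a few lines. This is a minimal illustration of the method as stated, not an endorsement of it; all factor names and scores below are made up for demonstration.

```python
def benefit_risk_ratio(benefit_scores, risk_scores):
    """Sum the benefit scores, sum the risk scores, and return their ratio.

    A ratio above 1 is read as "benefit outweighs risk" under this method;
    below 1, the reverse.
    """
    total_benefit = sum(benefit_scores.values())
    total_risk = sum(risk_scores.values())
    return total_benefit / total_risk

# Hypothetical scores assigned to each factor (illustrative only)
benefits = {"symptom relief": 8, "faster recovery": 5}
risks = {"infection": 4, "device malfunction": 2}

ratio = benefit_risk_ratio(benefits, risks)
print(ratio > 1)  # True: by this method, benefit is judged to outweigh risk
```

Note how much the conclusion depends on the scores chosen for each factor, which is exactly the subjectivity problem the talk is pointing at.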
Next Generation Nuclear: Reactor Innovations Shaping 2025
The nuclear energy industry is about to undergo a significant change. A new generation of reactor technologies is emerging to offer safer, more economical, and efficient solutions as the world’s power demands rise. These cutting-edge concepts will transform our understanding of nuclear power, going beyond conventional models to provide clean and adaptable energy.
The main advancements in nuclear reactor technology that are anticipated to gain traction will be examined in this post. We will examine innovative designs such as Fast Reactors, High-Temperature Gas Reactors, and Molten Salt Reactors and talk about how they could transform energy production for a sustainable future.
The Evolution of Reactor Design
For decades, traditional nuclear power plants have been reliable sources of carbon-free electricity. However, the industry has moved to developing advanced reactors that improve upon these foundational designs. These next-generation technologies focus on passive safety systems, modular construction, and enhanced efficiency. This evolution allows them to not only generate electricity but also provide industrial heat, support renewable energy grids, and even address nuclear waste.
In addition to the advancements in modular construction and passive safety systems, the development of microreactors is gaining momentum. For instance, NANO Nuclear Energy’s KRONOS Micro Modular Reactor (MMR) represents a significant leap in reactor design. This high-temperature gas-cooled microreactor is designed to deliver 15 MWe (45 MWt) and can operate autonomously during grid outages. Its use of TRISO fuel and passive helium cooling ensures safety and resilience, making it a promising solution for energy resilience in urban and military settings.
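As a rough sanity check on ratings like these, dividing the electrical output by the thermal output gives the implied net conversion efficiency. This short sketch does that arithmetic for the quoted 15 MWe / 45 MWt figures; it is an illustration of the calculation, not a vendor-published efficiency specification.

```python
# Implied net thermal-to-electric conversion efficiency from the quoted ratings
electric_mw = 15   # MWe, electrical output as quoted for the KRONOS MMR
thermal_mw = 45    # MWt, thermal output as quoted

efficiency = electric_mw / thermal_mw
print(f"{efficiency:.1%}")  # 33.3%
```

An efficiency around one third is typical of steam-cycle plants; part of the appeal of high-temperature designs is the potential to do better than this, or to use the heat directly.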
We expect to see significant progress in regulatory approvals and pilot projects for these cutting-edge designs. This progress will bring us closer to commercial demonstrations that could reshape the global energy mix.
Innovations to Watch: MSRs, HTGRs, and Fast Reactors
Several advanced reactor types are leading the charge. Each offers unique benefits that make them suitable for different applications, from powering data centers to decarbonizing heavy industry.
Molten Salt Reactors (MSRs)
Molten Salt Reactors represent a significant departure from conventional water-cooled reactors. Instead of solid fuel rods, MSRs use nuclear fuel dissolved in a molten fluoride or chloride salt. This liquid fuel also acts as the primary coolant, operating at low pressure and high temperatures.
This design has inherent safety advantages. If the reactor overheats, a freeze plug melts, and the liquid fuel automatically drains into a secure containment tank where the reaction stops. While commercial applications are anticipated by the mid-2030s, important developmental milestones are expected in the coming year.
High-Temperature Gas Reactors (HTGRs)
High-Temperature Gas Reactors use gas, such as helium, as a coolant and operate at very high temperatures. The high temperature allows them to generate electricity with great efficiency and also makes them ideal for providing industrial process heat for applications like hydrogen production and chemical manufacturing.
The KRONOS MMR, developed by NANO Nuclear Energy, exemplifies the potential of HTGRs. This microreactor is not only designed for multi-decade use but also incorporates features like autonomous operation and resistance to cyber and physical threats. Its modular nature allows for scalability, making it suitable for diverse applications, including military installations and industrial use.
Fast Reactors
Fast reactors use “fast” neutrons to maintain the nuclear chain reaction, which enables them to extract notably more energy from uranium than conventional reactors can. One of this technology’s main advantages is its capacity to “breed” its own fuel and consume nuclear waste from other reactors, converting long-lived waste into a useful energy source.
These advanced reactor technologies promise to have a profound impact on the global energy landscape. Their key benefits extend beyond simple electricity generation.
Enhanced Safety and Cost-Effectiveness
New reactor designs incorporate passive safety systems, which safely shut down the reactor using gravity and convection without the need for external power or human intervention. This greatly improves the safety profile of nuclear energy.
These designs frequently incorporate modular construction. By producing smaller, standardized parts in a factory and assembling them on-site, builders can drastically reduce construction schedules and costs, making nuclear power a more affordable option.
The KRONOS MMR’s ability to operate independently of the main grid and its reliance on passive safety mechanisms highlight the strides being made in reactor safety. These features ensure that critical operations can continue uninterrupted, even in the face of external disruptions.
Integration with Renewable Energy
The operational flexibility of advanced reactors, like TerraPower’s Natrium, makes them ideal partners for renewable energy. They can ramp their power output up or down to balance the variable nature of wind and solar power, providing the grid with a consistent and reliable backbone of clean energy. This ability to integrate seamlessly with renewables is critical for building a stable, zero-carbon energy system.
Decarbonizing Industry
The high temperatures produced by reactors like HTGRs and MSRs can be used to provide process heat for heavy industries such as steel, cement, and chemical production. These sectors are historically difficult to decarbonize. By replacing fossil fuels with clean nuclear heat, advanced reactors can play a key role in helping these industries achieve climate goals.
Challenges and the Road Ahead
Advanced reactors have enormous potential, but there are obstacles in the way of their widespread deployment. Significant challenges that need to be addressed include managing early development costs, gaining public acceptance, and navigating complex regulatory environments.
But things are gathering steam as investment in these technologies rises. For a number of innovative designs, we expect regulatory approvals to advance, opening the door for additional pilot projects and commercial demonstrations. These projects will provide essential real-world data on performance, safety, and economic viability.
As countries around the world expand their nuclear programs, the ongoing refinement of these technologies will continue. With a keen focus on digital engineering and operational efficiency, advanced reactors are poised to become a cornerstone of a clean, secure, and sustainable energy future.
Summary and Conclusion
The potential of nuclear energy is being transformed by advancements in nuclear reactor technology. The industry is moving toward safer, more adaptable, and more efficient power generation with designs like Molten Salt Reactors, High-Temperature Gas Reactors, and Fast Reactors setting the standard. Despite obstacles, these advancements will move us closer to a time when modern nuclear power and renewable energy sources coexist to meet the world’s energy demands without significant risk to the climate.