This is a preview of our recent webinar. Watch the entire webinar HERE.
Standardizing Requirements Management Across the Organization
Learn how to prevent costly production failures with standardized requirements management.
A survey by Engineering.com revealed that a staggering 83% of companies faced production outcome failures — such as significant delays, cost overruns, product defects, compliance gaps, recalls, omitted requirements, and extensive rework — often stemming from inadequate requirements management.
Join Grant Rhodes, Senior Solutions Consultant, to explore how standardized requirements management can drive consistency, predictability, and a competitive edge. This session will move beyond theory, offering actionable strategies to align cross-functional teams and streamline critical workflows.
This webinar covers:
Common challenges in standardization, like overcoming resistance and aligning cross-functional teams.
Strategies to maintain process consistency without disrupting current workflows.
How Jama Connect® streamlines requirements elicitation, tracking, change management, and collaboration to prevent costly errors.
Don’t miss this opportunity to learn best practices for successful requirements management and how Jama Connect can support a sustainable and effective approach.
BELOW IS AN ABBREVIATED SECTION OF THIS TRANSCRIPT
Grant Rhodes: Hello, and thank you all for joining. I’m Grant Rhodes, a Senior Solutions Consultant here at Jama Software. It might be that you are new to the discipline of requirements management, or maybe you have been doing it for many years. Either way, I hope I can provide some value today on the topic of standardizing requirements management within an organization. In my career, working with global teams in many different project settings, I’ve seen the importance of standardization firsthand. Requirements management has proven itself a necessary aspect of product development, reducing defects earlier in the development cycle. Standardization of requirements management processes leads to faster and more complete adoption of those processes and greater collaboration across project teams. On the agenda today, we will talk about how standardizing requirements management processes can benefit your organization, and look at some of the challenges that organizations commonly face when developing a standardized process.
Then we will dive into how Jama Connect can make the successful and sustainable implementation of a standardized requirements management process within your organization a reality. Before we get started, let’s make sure that we are aligned on what we mean by requirements management. Requirements management, sometimes called requirements engineering or requirements definition, is the process of documenting, analyzing, tracing, prioritizing, and agreeing on requirements, communicating them to relevant stakeholders, and controlling changes. It is a continuous process throughout product development and is meant to help companies turn their raw ideas into detailed requirements. The pillars of requirements management include requirements definition, requirements validation and verification, and requirements change management. The most fundamental motivation for any requirements management activity is the need to communicate effectively. While requirements are originally elicited in the earliest stages of the product development lifecycle, it’s important to keep in mind that they are part of a bigger picture and that ownership of that bigger picture may vary.
Rhodes: For example, governance of requirements management processes may fall under your organization’s project or portfolio management office and be controlled centrally, or companies may opt for project-specific ownership. Just as there are multiple approaches to ownership of requirements processes, there is no one-size-fits-all requirements management framework, and there are many standards that are proven to work. Examples include those defined in the Systems Engineering Body of Knowledge (SEBoK), the Business Analysis Body of Knowledge (BABOK), and others. To quote Aristotle, “It is the mark of an educated mind to be able to entertain a thought without accepting it.” I highlight this because implementing some of the ideas in this webinar may lead to lively discussion that surfaces competing ideas for requirements management and process standards. So now that we have level set on our definition of requirements management and established that ownership and approach can vary from company to company and even from project to project, let’s move on to our main topic.
Standardizing requirements management across the organization, a concept that can be entirely agnostic and universally beneficial, no matter your project development structure or methodology. There is no question that requirements management has increased in prominence in recent years, and regardless of industry is largely no longer considered something that is nice to have for development, but rather an absolute necessity. Yet for most, implementation details often remain ambiguous and therefore difficult to apply. We can be entirely committed to getting the requirements right with little consensus on what getting the requirements right actually means. Without that agreement, how can we know if we are succeeding? Without a consistent end goal, how can we be sure the effort put into the requirements management process is worthwhile? This is where standardization arrives to save the day. The standard becomes our requirements management plan as opposed to a separate effort for each product or project that detracts from the effort that could be instead focused on development.
Rhodes: There’s massive evidence demonstrating the benefit of defining, deploying, and enforcing requirements management standards for an organization. Those benefits include providing a framework for efficiency, predictability, repeatability, and a benchmark for improvement, better traceability, mitigation of risk, easier training and onboarding, and the elimination of unnecessary rework. Additionally, standardization allows organizations to leverage a diverse array of resources while maintaining consistent results and provides transparency, both in process and work performed. Just as the concept of reusing requirements and leveraging work already done is highly appealing, the standardization of a requirements management process could be viewed as reusing a proven process to ensure the repeatability of a successful development effort. A strong case for standardization is illustrated in the quote, “Quality is free, but only to those who are willing to pay heavily for it.” What you put in is what you get out. Valuable products are a result of high-quality inputs and high-quality processes.
Even perfect requirements can’t withstand the damaging effects of poor process. The pressure to reduce development time is ever-increasing, and standardization liberates development teams from worrying about the mechanics of the process and allows them to instead give their full focus to solutions development. Consider this quote from Lee Iacocca: “You can have brilliant ideas, but if you can’t get them across, your ideas won’t get you anywhere.” Imagine that a new tech company is developing a revolutionary product, but everyone is trusted with their own processes, causing teams to work in silos, maybe even following strong individual processes, but with little alignment. This disconnect can lead to misunderstanding of shared requirements, resulting in bugs and causing delays or extensive meetings to try and realign. If instead the product team defines a standard process for communicating and aligning on the requirements with a communication plan for regular alignment meetings, it would enable them to coordinate more effectively with the same vision about what they’re building.
When Change Impact Becomes Chaos: A Business Analyst’s Survival Guide
Requirements change. It’s not a possibility. It’s a certainty. Priorities shift mid-sprint, regulators update compliance standards, and stakeholders introduce new dependencies long after a project has gained momentum. For business analysts (BAs), this constant flux creates a ripple effect that’s difficult to track and even harder to control.
The challenge isn’t change itself. The real problem is understanding what that change affects. When a regulatory requirement updates or a business rule shifts, BAs need answers fast: What components are impacted? Who needs to review the changes? What’s already been built, tested, or signed off?
Without a clear picture of these connections, change impact becomes guesswork. Teams scramble to notify the right people, rework spreads across departments, and costly surprises surface late in the development cycle, which is exactly when they’re most expensive to fix.
This post breaks down why fragmented traceability leads to chaos, how automated live traceability transforms the way teams respond to change, and what practical steps BAs can take to regain control.
BA teams often manage requirements through a patchwork of tools that were never designed to work together. Word documents capture business needs. Jira or Azure DevOps track delivery. Excel spreadsheets attempt to maintain traceability. Email threads handle approvals.
Each tool serves a purpose on its own. Together, however, they create a fragmented environment where critical relationships between requirements, design elements, test cases, and deliverables are invisible. When something changes, BAs must dig through documents, cross-reference spreadsheets, and send multiple follow-up messages just to confirm what’s affected.
This process is slow, error-prone, and frustrating for everyone involved. And the problem compounds as project complexity grows.
The Bottleneck Effect on Large Programs
On large transformation programs with multiple stakeholders, fragmented traceability becomes a serious bottleneck. Business, IT, and QA teams often work from different versions of the same document, each believing their source is current. BAs end up playing referee, reconciling conflicting information, chasing approvals, and rebuilding traceability matrices from scratch before every audit.
The downstream consequences are significant. Rework increases. Timelines slip. Defects that could have been caught early surface during user acceptance testing (UAT), when fixes are far more costly to implement. According to research on software development costs, defects identified during UAT can cost up to 15 times more to fix than those caught during the requirements phase.
Where Manual Impact Analysis Breaks Down
Manual impact analysis relies heavily on institutional knowledge to know which requirements connect to which design elements, which test cases cover which features, and which stakeholders own which components. When that knowledge lives in someone’s head rather than a shared system, any staff change or project transition creates gaps.
These gaps surface at the worst possible moments. A critical dependency gets missed during a change review. A test case that covers a recently modified requirement doesn’t get updated. A regulatory change triggers a cascade of downstream updates that no one mapped out in advance. Each of these scenarios is preventable, but only if teams have reliable visibility into the connections that matter.
Live Traceability™ as a Solution
Automated, Live Traceability changes how teams manage change impact at a fundamental level. Rather than manually reconstructing connections between requirements, design elements, test cases, and deliverables, teams can see these relationships in real time and act on them immediately.
When a requirement changes, the impact becomes visible instantly. BAs can identify affected components, notify relevant stakeholders, and assess whether anything downstream needs adjustment before the ripple effect takes hold.
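Conceptually, this kind of downstream impact analysis is a traversal over the graph of traceability links. The sketch below is a minimal illustration of the idea, not Jama Connect’s implementation; the item IDs and the simple upstream-to-downstream link map are hypothetical:

```python
from collections import deque

# Hypothetical traceability links: each entry maps an upstream item to the
# downstream items that depend on it (requirement -> design -> test case).
TRACE_LINKS = {
    "REQ-101": ["DES-201", "DES-202"],
    "DES-201": ["TC-301"],
    "DES-202": ["TC-302", "TC-303"],
    "REQ-102": ["DES-203"],
}

def impact_set(changed_item, links):
    """Return every item downstream of a changed item (breadth-first)."""
    impacted, queue = set(), deque([changed_item])
    while queue:
        current = queue.popleft()
        for downstream in links.get(current, []):
            if downstream not in impacted:
                impacted.add(downstream)
                queue.append(downstream)
    return impacted

print(sorted(impact_set("REQ-101", TRACE_LINKS)))
# ['DES-201', 'DES-202', 'TC-301', 'TC-302', 'TC-303']
```

In a real system the links carry types and direction (verifies, derives from, and so on), but the core question, "what sits downstream of this change?", is answered the same way.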
Faster Decisions, Fewer Surprises
Live Traceability accelerates decision-making because the information BAs need is always current and always accessible. There’s no waiting for someone to update a spreadsheet or cross-reference a document. The connections are maintained automatically, so when a change occurs, the system surfaces what’s affected rather than leaving teams to discover it manually.
This visibility helps teams move faster without sacrificing quality. Changes get validated earlier in the development cycle, reducing the likelihood of expensive rework during UAT or post-release. Teams maintain alignment across departments because everyone works from the same system of record, not on parallel versions of a document that diverged weeks ago.
Alignment Across Departments
One of the most significant benefits of live traceability is the reduction of cross-functional friction. When business, IT, and QA teams share a single, authoritative view of requirements and their connections, communication improves dramatically.
BAs spend less time reconciling conflicting information and more time contributing to strategic decisions. Stakeholders get faster answers to change impact questions. Development teams understand exactly which requirements drive which deliverables, reducing ambiguity during implementation. The entire organization benefits from a more reliable, transparent process.
Compliance and Audit Readiness Without the Scramble
For teams operating in regulated industries such as medical devices, automotive, and aerospace and defense, regulatory compliance and audit preparation consume considerable time and resources. Traceability matrices need to be current, complete, and accurate. When traceability is maintained manually, preparing for an audit means recreating documentation that should have been maintained throughout the project.
Live Traceability eliminates this problem. Because connections between requirements, design, and testing are maintained automatically and continuously, audit-ready documentation is always available. Teams don’t need to scramble because the record is already there.
Recognizing the problem is the first step. Acting on it requires a clear-eyed assessment of where your current process creates friction.
Start by measuring your current impact analysis process. Ask how long it takes your team to complete a change impact assessment today. How many tools and conversations are involved? How often do downstream surprises emerge during testing? How much time goes into rebuilding traceability matrices before audits? These questions surface the true cost of manual traceability, which is often much higher than teams realize.
Identify where the gaps are largest. In most organizations, the weakest link is the connection between requirements and testing. Changes to requirements frequently don’t trigger updates to test cases, leaving gaps that only become visible during UAT. Mapping these gaps helps teams prioritize where automation will deliver the greatest impact.
Evaluate tools designed specifically for requirements management. General-purpose platforms like Jira and Confluence are valuable for project delivery, but they weren’t built to maintain end-to-end traceability. Purpose-built requirements management tools offer automated traceability, change impact analysis, and audit trails that general-purpose platforms can’t replicate. Look for solutions that integrate with your existing delivery tools rather than replacing them. The goal is to close gaps, not add complexity.
Build change impact analysis into your workflow. Even with the right tools in place, process discipline matters. Establish a clear protocol for how change requests trigger impact assessments. Define who owns the review, who needs to be notified, and what criteria determine whether downstream components require updates. Embedding these steps into the standard workflow prevents the informal processes that create gaps.
Invest in team capability. Tools are only as effective as the people using them. Ensure BAs and project teams understand how to use traceability features, how to interpret impact analysis outputs, and how to communicate change implications to stakeholders clearly and quickly.
Taking Back Control of Change Impact
Change will always be part of complex software and systems development. The question every BA team must answer is not how to prevent change. It’s whether your team responds to it with confidence or scrambles to keep up.
Fragmented, manual traceability makes scrambling the default. Automated, live traceability makes confident, rapid response possible. Teams that invest in the right tools and processes gain more than efficiency. They gain the ability to absorb change without chaos by delivering projects that stay on track, meet compliance requirements, and reflect the most current understanding of what stakeholders actually need.
The cost of doing nothing compounds with every missed dependency, every late defect, every audit that requires days of preparation. The cost of acting is a more structured, connected, and resilient way of working that pays dividends across every project that follows.
Note: This article was drafted with the aid of AI. Additional content, edits for accuracy, and industry expertise by Kirsten Moss and Mark Levitt.
A Practical Guide to Translating User Needs into Design Inputs
As a former product development engineer, I remember the pressure to start designing immediately. I’d jump straight into CAD models and prototypes, eager to build the next innovative medical device. But sometimes, this meant I overlooked a critical first step: truly understanding what the end-user needed. This often led to features that missed the mark and created a mountain of documentation rework to justify our design choices after the fact.
Many engineers in the medical device space get stuck in this cycle. They struggle to distinguish between user needs and design inputs, or they don’t know how to translate a general user request into a measurable engineering requirement. This confusion isn’t just inefficient; it’s a compliance risk that can delay projects and frustrate teams who would rather be designing and testing than drowning in paperwork.
TL;DR: User needs are high-level goals describing what a user wants a device to do, while design inputs are the specific, measurable engineering requirements needed to meet those needs. Following a structured process to translate user needs into traceable design inputs is essential for complying with FDA regulations and building products that succeed.
What are User Needs? The Foundation of Your Design Control Process
User needs are the starting point for the entire medical device design control process. They are high-level, qualitative statements that capture the goals and expectations of the device’s intended users. Think of them as the “what” from the user’s perspective.
These needs are derived from various stakeholder needs, which can include patients, surgeons, nurses, technicians, or even hospital administrators. The key is to capture their desired outcomes without dictating a specific technical solution.
According to FDA 21 CFR 820.30, the design control process begins with establishing and maintaining procedures to ensure that the design requirements are appropriate and address the intended use of the device, including the needs of the user and patient.
Examples of User Needs:
A surgeon needs the device to provide clear visualization in a smoke-filled environment.
A home-care patient needs the device to be simple to operate without assistance.
A nurse needs the device to be easily and quickly sterilized between uses.
What are Design Inputs? The Blueprint for Your Device
If user needs are the “what,” design inputs are the “how,” from an engineering perspective. They are the detailed, objective, and verifiable requirements that describe the performance, physical, and functional characteristics of the device. Every design input must be traceable back to a specific user need.
These inputs form the technical blueprint that guides the entire development process. They must be unambiguous and measurable so that you can later prove the device meets them through design verification activities.
Key takeaway: Without clear design inputs, you have no objective criteria to design against or to test your final product.
Examples of Design Inputs (translated from the user needs above):
User Need: A surgeon needs clear visualization.
Design Input: The device’s camera shall operate in temperatures up to 60°C.
Design Input: The device’s lens shall be coated with an anti-fog agent.
User Need: A patient needs a simple device.
Design Input: The device shall have no more than three buttons for all primary functions.
Design Input: The device startup sequence shall complete in under 5 seconds.
User Need: A nurse needs easy sterilization.
Design Input: The device housing shall be made of medical-grade stainless steel 316L.
Design Input: The device shall withstand at least 100 autoclave sterilization cycles at 134°C.
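To make the traceability rule concrete, here is a minimal sketch of the kind of check a requirements tool performs: every design input must link back to an existing user need. The IDs and links below are hypothetical, purely for illustration:

```python
# Hypothetical user needs and design inputs with their trace links.
user_needs = {"UN-1", "UN-2", "UN-3"}

design_inputs = {
    "DI-1": "UN-1",   # camera operating temperature
    "DI-2": "UN-1",   # anti-fog lens coating
    "DI-3": "UN-2",   # max three buttons
    "DI-4": None,     # orphan: no linked user need
}

def find_trace_gaps(design_inputs, user_needs):
    """Flag design inputs that don't trace to an existing user need."""
    return sorted(
        di for di, un in design_inputs.items()
        if un is None or un not in user_needs
    )

print(find_trace_gaps(design_inputs, user_needs))  # ['DI-4']
```

The same data also answers the reverse question, which user needs have no design input covering them yet (UN-3 in this example), which is the coverage gap an auditor will ask about.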
A 4-Step Guide for Translating User Needs to Design Inputs
Translating vague user needs into concrete design inputs is a skill. It requires a systematic approach to ensure nothing is lost in translation. Following these steps will help you create a robust foundation for your medical device design control process.
Step 1: Gather and Define Clear User Needs
Before you can translate needs, you must capture them accurately. This involves engaging directly with your stakeholders through methods like interviews, surveys, and observational studies. Focus on understanding their goals and pain points. Write the user need from their perspective, avoiding technical jargon.
Step 2: Deconstruct Each User Need
A single user need can contain multiple implied requirements. Break down broad statements into their core components. For a need like, “The device must be portable,” ask clarifying questions:
What does “portable” mean to the user? Carried in a pocket, in a bag, or on a cart?
How long does it need to operate without being plugged in?
In what environments will it be used?
Step 3: Write Quantifiable and Verifiable Design Inputs
This is the most critical step. Convert each component of the user need into a specific, measurable requirement. A good design input is unambiguous and testable.
Use “shall” statements: This is standard practice for writing formal requirements.
Be specific: Instead of “lightweight,” write “The device shall weigh less than 500 grams.”
Make it measurable: Instead of “a long battery life,” write “The device shall operate continuously for a minimum of 8 hours on a single charge.”
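As an illustration of what “quantifiable and verifiable” means in practice, the toy checker below flags a few common problems: a missing “shall,” vague adjectives, and the absence of any measurable quantity. The term list and rules are invented for this sketch; real requirements-quality tools and reviewers apply far richer criteria:

```python
import re

# Hypothetical lint rules for a single requirement statement.
VAGUE_TERMS = {"lightweight", "fast", "simple", "user-friendly",
               "long", "easy", "quickly", "adequate"}

def check_requirement(text):
    """Return a list of issues found in one requirement statement."""
    issues = []
    if "shall" not in text.lower():
        issues.append("missing 'shall'")
    words = set(re.findall(r"[a-z-]+", text.lower()))
    for term in sorted(VAGUE_TERMS & words):
        issues.append(f"vague term: '{term}'")
    if not re.search(r"\d", text):
        issues.append("no measurable quantity")
    return issues

print(check_requirement("The device should be lightweight."))
# ["missing 'shall'", "vague term: 'lightweight'", 'no measurable quantity']
print(check_requirement("The device shall weigh less than 500 grams."))
# []
```

A statement that passes these checks is not automatically a good requirement, but one that fails them is almost certainly not verifiable as written.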
Step 4: Establish and Maintain Traceability
Every design input you create must be linked directly back to the user need it helps fulfill. This traceability is not optional; it’s a regulatory requirement and the backbone of your medical device file. This link proves that your design is directly driven by user needs and that every requirement has a purpose.
Streamline Your Design Control Process with the Right Tools
Managing the complex web of user needs, design inputs, risks, and verification activities in spreadsheets or documents is a recipe for errors and audit findings. This is where modern requirements management platforms can transform your workflow.
Live Traceability™: Automatically create and visualize the links between user needs, design inputs, test cases, and other artifacts. This ensures you are always audit-ready and can easily analyze the impact of any changes.
Reuse and Libraries: Stop reinventing the wheel. Create libraries of common requirements, like those for specific standards or product lines, and reuse them across projects to ensure consistency and save valuable time.
AI-Powered Insights: With Jama Connect Advisor™, you can leverage AI to analyze your requirements for quality. Get instant feedback on whether your design inputs are clear, complete, and verifiable, helping your team write better requirements faster.
Q: Can a single user need lead to multiple design inputs? A: Yes, absolutely. A high-level user need like “the device must be safe for clinical use” will be broken down into dozens or even hundreds of specific design inputs related to biocompatible materials, electrical safety standards, alarm functionalities, and much more.
Q: What’s the difference between design inputs and design specifications? A: This is a common point of confusion. Design inputs define what the device must do (the requirements). Design specifications (also known as design outputs) describe how the device will meet those requirements. They are the tangible results of the design process, such as drawings, material specifications, and source code. People often think of the design outputs as the “recipe” showing how to build the device.
Q: How do I handle conflicting user needs? A: It’s common for different stakeholders to have competing needs (e.g., a large screen for visibility vs. a small size for portability). This requires a structured process of prioritization, risk analysis, and trade-off discussions with the project team and stakeholders. The key is to document these decisions and the rationale behind them within your design history file.
Master Your Design Inputs and Accelerate Innovation
Bridging the gap between user needs and design inputs doesn’t have to be a source of frustration. By adopting a structured process and leveraging the right tools, you can eliminate ambiguity, ensure compliance, and free up your engineers to do what they do best: build innovative products that improve lives.
Jama Connect® Features in Five: Industrial Machinery Development Solution
Streamline Industrial Machinery Development with Jama Connect!
In this Features in Five session, Patrick Garman, Solution Lead for Industrial Automation and Machinery at Jama Software, demonstrates how Jama Connect’s Industrial Machinery Data Model empowers teams to accelerate development and maximize project success in the industrial machinery space.
Key highlights include:
Purpose-built support for complex machinery, from robotic assembly cells to heavy equipment.
Centralized systems engineering with integrated safety, cybersecurity, risk management, and testing.
Tools for improving requirements quality, identifying gaps early, and ensuring seamless traceability.
Introduction to Industrial Machinery Data Model
Hi, everyone. I’m Patrick Garman, Solution Lead for Industrial Automation and Machinery at Jama Software. Today, I’ll introduce our industrial machinery data model and why it’s so powerful for teams building sophisticated machinery. Industrial machinery includes systems like robotic assembly cells, packaging equipment, elevators, and heavy machinery: any automated system with software, safety, or network components.
Integration of Standards and Systems Engineering
These products must comply with a wide range of standards, and our data model integrates systems engineering, safety, cybersecurity, risk management, and testing into one structure in Jama Connect.
This gives your teams a head start so you can launch products faster without reinventing processes. With predefined structures, traceability models, and workflows, Jama Connect reduces rework and recalls by exposing gaps early. Centralized traceability helps teams respond to change confidently, measure progress, and identify risks before they become problems. At the core is our traceability information model, which enforces good engineering practices, prevents invalid links, and highlights gaps automatically. Let’s see how this works and looks in the tool.
First, here’s the project explorer tree. You’ll notice that it’s organized by product architecture as well as domain. This makes it easy for project members to quickly locate relevant data. And, of course, Jama Connect is more than just a repository for requirements. We’re actively managing those requirements based on stakeholder review and feedback.
Utilizing Live Trace Explorer™ for Traceability
Next, let’s look at Live Trace Explorer. This gives a real-time view of traceability coverage across our project. We can immediately see what’s complete, what’s missing coverage, and so on.
Identifying Gaps in Coverage
This is really one of the biggest value drivers, knowing your gaps early before they turn into late-stage redesign. So let’s drill into one of these gaps right now. So I can see that I have just shy of seventeen percent coverage at the system level.
Using Trace View™ to Add Coverage
I can click that metric in the Live Trace Explorer diagram to open Trace View and find exactly where I need to add coverage. In Trace View, you can see that Jama Connect is prompting me to add coverage where required links are missing.
Creating and Managing Test Cases
And you can take action directly from this view to add that coverage, or we can open a specific requirement for a more detailed view. Here we have a system requirement with missing test coverage. I can author test cases directly in Jama Connect using the add related feature, or I can use Jama Connect Advisor™’s test case intelligence tool to generate suggested test cases, complete with test steps based on the context I provide. But of course, traceability doesn’t end with test coverage.
Jama Connect integrates directly with Jira to track development tasks. Jama Connect also has turnkey integrations for the most commonly used digital engineering and productivity tools. For example, I’m able to link my subsystem requirements to model elements in Simulink, again, with one click, links to the source artifacts. Pulling data from your digital thread into Jama Connect is not about duplicating work. Each team works in the tool fit for their purpose, and that work is reflected in Jama Connect for traceability and in context reporting. For teams managing product lines or customer-specific customizations, we can create catalog or library projects for reusable requirements.
Reusability and Component Management
With reuse, we can easily pull a reusable component and its related requirements into any project, and we can also use sync comparison to see which products a part or component is being leveraged in and how it may vary from what we have in our library. And that concludes our tour of the Industrial Machinery data model in Jama Connect. If you’d like a deeper dive or to learn more about Jama Connect Advisor and our live integration capabilities, please let us know.
How Digitization and Traceability Are Transforming Industrial Manufacturing
Modern industrial manufacturing is undergoing continual transformation driven by technological innovation. Digitization and traceability have emerged as critical enablers that help manufacturers enhance operational efficiency, ensure regulatory compliance, and achieve sustainable growth in an increasingly competitive global market.
The integration of these processes (and technologies that support and enable them) represents more than a simple upgrade to existing systems. It reshapes how manufacturers approach product development, quality control, and supply chain management. Companies that successfully implement digitization and traceability solutions position themselves to respond more effectively to market demands, reduce operational risks, and accelerate innovation cycles.
This comprehensive guide explores how digitization and traceability work together to create intelligent manufacturing ecosystems, the specific benefits they deliver, and the practical considerations for successful implementation. We’ll examine real-world applications, address common challenges, and look ahead to emerging trends that will continue to shape the future of industrial manufacturing.
For manufacturers ready to embark on this digital transformation journey, understanding these concepts and their strategic implications becomes essential for maintaining competitive advantage and ensuring long-term success.
Understanding Digitization in Industrial Manufacturing
Digitization in manufacturing represents the systematic conversion of analog processes, systems, and data into digital formats that enable intelligent automation and data-driven decision making. This transformation extends beyond simple computerization to create interconnected networks of smart devices, systems, and processes that communicate seamlessly throughout the manufacturing ecosystem.
At its core, manufacturing digitization involves several key technological components working in harmony. AI-powered systems analyze vast amounts of production data to identify patterns,
predict equipment failures, and optimize manufacturing processes in real time. These intelligent systems learn from historical data and continuously improve their predictive capabilities, enabling manufacturers to make more informed decisions about production scheduling, resource allocation, and quality control measures.
Internet of Things (IoT) devices serve as the sensory network of digital manufacturing environments. Embedded sensors throughout production lines collect continuous streams of data on temperature, pressure, vibration, speed, and countless other operational parameters. This constant monitoring enables manufacturers to maintain optimal operating conditions and detect anomalies before they impact production quality or efficiency.
Real-time data analytics transforms the continuous flow of information from IoT sensors into actionable insights. Advanced analytics platforms process streaming data to identify trends, detect
deviations from normal operating parameters, and generate alerts that enable immediate corrective actions. This capability allows manufacturers to maintain consistent product quality while minimizing waste and downtime.
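As a toy illustration of this kind of streaming deviation detection (the window size, sigma threshold, and sensor readings below are hypothetical, not drawn from any particular analytics platform), a rolling-statistics check might look like:

```python
from collections import deque
from statistics import mean, stdev

def make_deviation_detector(window=20, sigmas=3.0):
    """Return a checker that flags readings far from the recent rolling mean."""
    history = deque(maxlen=window)

    def check(reading):
        alert = False
        if len(history) >= 5:  # wait for a few samples before judging deviations
            mu, sd = mean(history), stdev(history)
            if sd > 0 and abs(reading - mu) > sigmas * sd:
                alert = True
        history.append(reading)
        return alert

    return check

# Simulated temperature stream: stable around 70 degrees, then a sudden spike
check = make_deviation_detector()
readings = [70.1, 69.9, 70.0, 70.2, 69.8, 70.1, 70.0, 95.0]
alerts = [r for r in readings if check(r)]  # only the spike is flagged
```

Real platforms use far more sophisticated models, but the principle is the same: compare each new reading against recent behavior and raise an alert the moment it deviates.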
Cloud computing infrastructure provides the scalable foundation that supports these digital capabilities. Cloud platforms enable manufacturers to store and process massive datasets,
run complex analytical models, and provide secure access to critical information across global operations. The flexibility of cloud solutions allows companies to scale their digital capabilities as their operations grow and evolve.
These components work together to create a comprehensive digital ecosystem where every aspect of the manufacturing process generates valuable data. Production equipment communicates with
quality control systems, inventory management platforms share information with supply chain partners, and maintenance systems coordinate with production schedules to minimize disruption.
The result is a manufacturing environment that operates with improved visibility, control, and efficiency. Manufacturers can track individual products through every stage of production, monitor equipment health in real time, and adjust processes dynamically to meet changing demands or conditions.
The Role of Traceability in Modern Manufacturing
Traceability establishes the ability to track and document the complete history of a product, component, or process throughout its entire lifecycle. In manufacturing contexts, this capability
provides detailed records of materials, processes, quality checks, and handling procedures that enable manufacturers to verify product authenticity, identify sources of defects, and
demonstrate compliance with regulatory requirements.
The significance of traceability extends far beyond simple record-keeping. Enhanced supply chain transparency becomes possible when manufacturers can track components and materials
from their original sources through every transformation and handling step. This visibility enables better supplier relationships, more effective quality management, and faster response
to supply chain disruptions or quality issues.
Improved quality control represents another critical benefit of comprehensive traceability systems. When manufacturers can correlate product defects with specific batches of raw
materials, particular production runs, or individual pieces of equipment, they can implement targeted corrections that prevent similar issues from recurring. This capability reduces waste,
minimizes customer complaints, and protects brand reputation.
Better risk management becomes achievable through traceability systems that provide early warning of potential problems. When manufacturers can quickly identify which products might
be affected by a defective component or problematic production batch, they can take proactive measures to prevent widespread quality issues or safety concerns.
Regulatory compliance requirements across many industries mandate detailed traceability records. Pharmaceutical manufacturers must track ingredients and production processes to
ensure drug safety and efficacy. Food producers need comprehensive records to enable rapid response to contamination issues. Aerospace and automotive manufacturers require detailed
documentation to verify that components meet safety and performance standards.
Several key technologies enable comprehensive traceability in manufacturing environments. Blockchain technology provides immutable records of transactions and processes that create tamper-proof audit trails. Each step in the manufacturing process generates a blockchain entry that cannot be altered or deleted, providing absolute confidence in the accuracy and completeness of traceability records.
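A minimal sketch of the underlying idea, hash-chaining, shows why such records are tamper-evident; this is an illustrative toy (the record fields are invented), not a full blockchain implementation:

```python
import hashlib
import json

def add_record(chain, record):
    """Append a record whose hash covers both its data and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

def verify(chain):
    """Recompute every hash; any altered entry breaks the chain from that point on."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
add_record(chain, {"step": "mixing", "batch": "B-1001", "temp_c": 22.5})
add_record(chain, {"step": "curing", "batch": "B-1001", "temp_c": 60.0})
intact = verify(chain)                    # True: untouched chain verifies

chain[0]["record"]["temp_c"] = 19.0       # tamper with an earlier entry
tampered_ok = verify(chain)               # False: the edit is detectable
```

Because each entry's hash depends on its predecessor, rewriting history requires rewriting every subsequent entry, which is exactly what distributed blockchain deployments make infeasible.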
Radio Frequency Identification (RFID) systems enable automatic tracking of components, products, and equipment throughout manufacturing facilities. RFID tags attached to items provide unique identification that can be read automatically as products move through production processes, eliminating manual data entry errors and ensuring complete tracking coverage.
Advanced sensor technology continuously monitors environmental conditions, process parameters, and product characteristics throughout manufacturing operations. These sensors generate detailed records of the conditions under which products are manufactured, enabling manufacturers to correlate quality outcomes with specific environmental factors or process variables.
Benefits of Integrating Digitization and Traceability
The strategic integration of digitization and traceability technologies creates benefits that exceed what either approach can achieve independently. This combination enables manufacturers to build intelligent, responsive operations that adapt quickly to changing conditions while maintaining complete visibility into every aspect of their processes.
Enhanced Efficiency and Productivity
Digital traceability systems eliminate many manual data collection and recording tasks that traditionally consumed significant labor resources. Automated data capture through IoT sensors and RFID systems ensures complete and accurate records without requiring dedicated personnel for data entry or verification activities.
Predictive maintenance capabilities emerge when digitization platforms analyze traceability data to identify patterns that indicate impending equipment failures. By correlating equipment
performance data with maintenance records, manufacturers can schedule preventive maintenance activities during planned downtime periods, avoiding unexpected production interruptions.
Process optimization becomes more precise when manufacturers can analyze complete traceability records to identify the specific conditions and procedures that produce the highest quality outcomes. This analysis enables continuous improvement initiatives that incrementally enhance efficiency and product quality over time.
Improved Quality Control
Real-time quality monitoring becomes possible when digital systems continuously track product characteristics throughout manufacturing processes. Instead of relying on periodic sampling
and testing, manufacturers can monitor every product and immediately identify deviations from quality specifications.
Root cause analysis capabilities improve dramatically when comprehensive traceability records enable manufacturers to correlate quality issues with specific materials, processes, or environmental conditions. This detailed analysis capability reduces the time required to identify and correct quality problems.
Batch tracking and recall management become more efficient and accurate when digital systems maintain complete records of which specific materials and processes contributed to each finished product. If quality issues arise, manufacturers can quickly identify all affected products and take appropriate corrective actions.
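At its core, recall scoping is a genealogy lookup. A simplified sketch (the serial numbers and batch identifiers are invented for illustration):

```python
# Hypothetical genealogy records: which material batches went into each product
genealogy = {
    "SN-0001": ["RES-42", "STEEL-7"],
    "SN-0002": ["RES-42", "STEEL-8"],
    "SN-0003": ["RES-43", "STEEL-8"],
}

def affected_products(genealogy, bad_batch):
    """Return every finished product that consumed the suspect batch."""
    return sorted(sn for sn, batches in genealogy.items() if bad_batch in batches)

# A defect is traced to resin batch RES-42; scope the recall to those two units
recall = affected_products(genealogy, "RES-42")
```

With complete digital records, this query runs in seconds across millions of units; with paper records, the same question can take weeks and force a far broader recall.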
Supply Chain Optimization
End-to-end visibility throughout complex supply chains becomes achievable when digitization and traceability systems extend beyond individual manufacturing facilities to include suppliers,
logistics providers, and distribution partners. This comprehensive visibility enables more effective coordination and planning across the entire supply network.
Demand forecasting accuracy improves when manufacturers have access to real-time data about inventory levels, production capacity, and customer demand patterns throughout their supply chains. This improved forecasting enables more efficient inventory management and production planning.
Supplier performance monitoring becomes more objective and comprehensive when digital systems track delivery performance, quality metrics, and compliance with specifications. This data-driven approach to supplier management enables better supplier relationships and more effective risk management.
Risk Mitigation and Compliance
Automated compliance documentation reduces the administrative burden of maintaining regulatory records while ensuring completeness and accuracy. Digital systems can automatically
generate the reports and documentation required by regulatory agencies, reducing compliance costs and eliminating the risk of incomplete or inaccurate submissions.
Proactive risk identification becomes possible when analytical systems monitor traceability data for patterns that indicate emerging risks. Early warning systems can alert manufacturers to
potential quality issues, supply chain disruptions, or compliance concerns before they impact operations or customers.
Audit trail integrity improves when blockchain and other tamper-proof technologies ensure that compliance records cannot be altered or deleted. This capability provides regulatory agencies
and customers with complete confidence in the accuracy and authenticity of compliance documentation.
Predictive Maintenance
Equipment health monitoring through continuous sensor data collection enables manufacturers to track the condition of critical production equipment and predict when maintenance activities
will be required. This capability reduces unplanned downtime and extends equipment life.
Maintenance scheduling optimization becomes possible when digital systems analyze equipment performance data, maintenance history, and production schedules to identify the optimal timing for preventive maintenance activities. This optimization minimizes production disruptions while ensuring equipment reliability.
Spare parts inventory management improves when predictive maintenance systems provide advance notice of which components will require replacement and when. This capability enables more efficient inventory management and reduces the risk of production delays due to parts shortages.
Sustainability Benefits
Energy consumption optimization becomes achievable when digital systems monitor and analyze energy usage patterns throughout manufacturing operations. This analysis enables manufacturers to identify opportunities to reduce energy consumption and carbon emissions while maintaining production efficiency.
Waste reduction initiatives become more effective when traceability systems provide detailed information about material usage, production yields, and waste generation. This data enables targeted improvements that minimize material waste and environmental impact.
Circular economy principles become more practical to implement when comprehensive traceability systems track materials and components throughout their entire lifecycles.
This visibility enables manufacturers to identify opportunities for recycling, reuse, and remanufacturing that reduce environmental impact and material costs.
In this blog, we’ll recap a section of our recent Expert Perspectives video, “A Method to Assess Benefit-Risk More Objectively for Healthcare Applications” – Click HERE to watch it in its entirety.
Expert Perspectives: A Method to Assess Benefit-Risk More Objectively for Healthcare Applications
Welcome to our Expert Perspectives Series, where we showcase insights from leading experts in complex product, systems, and software development. Covering industries from medical devices to aerospace and defense, we feature thought leaders who are shaping the future of their fields.
Assessing benefit‑risk is a foundational requirement for medical device manufacturers, yet it has long been one of the most challenging aspects of risk management. While risks are analyzed with rigor and precision, benefits are often described qualitatively, making objective comparisons difficult and slowing decision‑making across the product lifecycle.
A new, revolutionary method for assessing benefit‑risk changes that dynamic by unifying benefit and risk into a single, objective framework. Our expert perspectives video, “A Method to Assess Benefit-Risk More Objectively for Healthcare Applications,” offers actionable insights for healthcare innovators aiming to meet rigorous regulatory requirements while ensuring patient safety and efficacy.
In this episode of Expert Perspectives, Richard Matt explains how his method, dubbed the “Grand Unified Theory of Risk Management”, enables medical device companies to perform benefit-risk analyses with unprecedented speed and precision, delivering definitive determinations within minutes. This efficiency allows for multiple assessments throughout a project, unlocking opportunities to refine patient populations, expand product indications, and even use a benefit-risk assessment as a design parameter during development. Beyond product development, this method also provides a robust framework for addressing regulatory requirements, post-market analysis, and quality management system evaluations.
By transforming a traditionally subjective process into a data-driven, objective methodology, Richard Matt’s approach empowers healthcare innovators to bring safer, more effective solutions to market. For a deeper dive into this method and its implications, download the whitepaper from Aspen Medical Risk Consulting.
Below is a preview of our interview. Click HERE to watch it in its entirety.
Kenzie Jonsson: Welcome to our Expert Perspectives Series, where we showcase insights from leading experts in complex product, systems, and software development. Covering industries from medical devices to aerospace and defense, we feature thought leaders who are shaping the future of their fields. I’m Kenzie, your host, and today I’m excited to welcome Richard Matt. Formally educated in mechanical, electrical, and software engineering and mathematics, Richard has more than thirty years of experience in product development and product remediation. Richard has worked with everyone from Honeywell to Pfizer and is now a renowned risk management consultant. Today, Richard will be speaking with us about his patent-pending method to assess benefit-risk more objectively in healthcare. Without further ado, I’d like to welcome Richard Matt.
Richard Matt: Hello. My name is Richard Matt, and I’m delighted to be speaking with you about our general solution to the problem of assessing whether the benefit of a medical action will outweigh its risk. I’ll start my presentation by saying a few words about my background and how this background led to the benefit-risk method you’ll be seeing in the presentation.
To understand my background, it really helps to go back to the first job I got out of undergraduate school. I graduated with a degree in mechanical engineering and an emphasis in fluid flow, and my first job was in the aerospace industry at Arnold Engineering Development Center, at a wind tunnel that Baron von Braun designed. I worked there as a project manager, coordinating various departments with the needs of a client who brought models to be tested. These are pictures of AEDC’s transonic wind tunnel, with its twenty-foot by forty-foot-long test section that consumes over a quarter million horsepower when running flat out. Those dots in the walls are holes; a slight suction on the outside of the wall would pull the air’s boundary layer through the holes, so a flight vehicle’s behavior more closely matched its characteristics in free air. It was an amazing place to work.
We could talk about aerodynamic and thermodynamic issues, like why nitrogen condenses out of the air at Mach speeds above six, or why every jet fighter in every country’s air force has a maximum speed of about Mach three and a half. But to stay on the topic of benefit-risk, the reason I brought this up is that I saw there firsthand the long, looping iterations that came from different technical specialties each approaching the same problem from the perspective of their own discipline. I found it very frustrating, and the blind-men-and-the-elephant analogy very apt: each of our technical specialties would look at the same problem, the elephant, from its own limited view. I found myself getting frustrated with my electrical and software engineering coworkers because they didn’t understand what I was talking about, but I soon realized I didn’t understand what they were talking about either.
So I decided I wanted to become part of the solution to that problem by going back to graduate school and rounding out my education, so I could talk to these folks from their perspective as well. After my mechanical engineering undergraduate degree, I went back to graduate school in electrical engineering and mathematics, and picked up enough software that I also started teaching programming in college. For my graduate thesis, I developed a control solution for the robot arms in those wind tunnels that worked for every possible one-, two-, or three-rotational-degree-of-freedom arm. After I completed my thesis, I felt empowered to go wherever I wanted and do whatever I wanted, but I realized that doing anything significant would take many years, so I decided to focus on teamwork.
Matt: My ability to work across technical boundaries enabled me to bring exceptional products to market. For instance, I brought an Internet of Things (IoT) device to market during the 1990s, before the Internet of Things was a thing. My leadership in product development advanced rapidly, culminating in a role as VP of Engineering at a boutique design firm in Silicon Valley.
And the combination of the breadth of my formal training and my systems perspective on problem solving has helped me continue to work across boundaries. I’ve helped companies establish their product requirements, trace requirements, and do V&V work. I’ve done a lot of post-market surveillance work, established internal audit programs, served as the lead auditee when my firm was audited, and had significant success accelerating product development. Mixed in with all of this work, I began specializing in risk management as a consulting focus rather than something I simply did during development.
And since the defense of a patent requires notice, I’ll mention that the material here is patent pending, and I’d like to talk with anyone who finds it interesting to pursue after you’ve learned about it. Let me start my presentation on benefit-risk analysis by talking about how important it is to all branches of medicine and the many problems we have implementing it. I’ll briefly outline the solution here so you can follow along as we go through the presentation. First, I’ll establish a single, much more objective metric for measuring benefit and risk than people traditionally use. Then I’ll accumulate overall benefit and risk as sets of values from that metric. And finally, we’ll see how to draw a conclusion from the overall benefit and risk measurements: which is bigger, benefit or risk?
In terms of importance, benefit-risk has been with medicine for millennia. It’s a basic tenet of all of medicine. “First, do no harm” goes all the way back to the Code of Hammurabi around 2000 BC, which legally required physicians to think not just about how they could help patients with a treatment but also about what harm the treatment might cause, and to make sure that the balance of those two favored the patient. That is very much the benefit-risk balance we look at today. The method we’re going to talk about can be used everywhere throughout medicine: with devices, with drugs, with biologics, even with clinical trials.
So benefit-risk is fundamental across medicine. How is it used currently?
If you’re developing new products, benefit-risk determinations have to be used in clinical trials to show that they’re ethical to perform, that we’re not putting people in danger needlessly. Benefit-risk determinations are the final gate before a new product is released for use by patients. I have a quote here from a paper put out by AstraZeneca saying the benefit-risk determination is the apex deliverable of any R&D organization. There’s a lot of truth to that: it’s the final thing that’s put together to justify a product’s release. So it has a very important role for the FDA, and for the regulatory structure of pretty much every country, including the EU.
Matt: In terms of creating a quality system, every medical company is required to have one, and benefit-risk determinations are used to assess a company’s quality system. This is per the FDA notice about factors to consider in benefit-risk analysis. When regulators are evaluating a company’s quality system, they’ll use benefit-risk to determine whether nothing should be done, whether a product should be redesigned, or whether they should take legal action against a company, with a range of possibilities from replacing products in the field to stopping products from being shipped. It’s also a key and favorite target for product liability lawsuits because of how subjective it is, and we’ll get to that in a moment. It can even be used for legal actions against officers. So benefit-risk is a foundational concept for getting products to market, keeping them there, and keeping companies running well. For a bit of historical perspective on medical documentation and development, I’ve cited four different provisions of the laws regarding medical devices in the United States. This is a small sampling.
The point I’m trying to make is that each of these summaries of the laws shows continual evolution: continually growing, more rigorous standards for evidence, and more detailed requests for information from the regulators to the product development companies. So first, medical products are heavily regulated, and we see a trend of increasing analysis and rigor. Yet per ISO 14971, an application standard that is highly respected in the medical device field, a decision as to whether risks are outweighed by benefits is essentially a matter of judgment by experienced and knowledgeable individuals.
And this is our current state of the art.
Not that everybody does it this way, but this is the most common method of performing benefit-risk analysis. And benefit-risk analysis by this method has a lot of problems, because it’s based on judgment and it’s based on individuals, and both of those can change in different settings. That’s why it’s a favorite point of attack for product liability lawsuits.
This quote was true in 1976, when medical devices were put under FDA regulation, and it remains essentially unchanged nearly fifty years later. Benefit-risk determinations are an aberration in that, unlike the rest of medicine, they have not improved over time. They’ve remained a judgment by a group of individuals. In 2018, the FDA was approached by Congress to set a goal for itself of increasing the clarity, transparency, and consistency of benefit-risk assessments from the FDA.
The subject there was human drug review, and the issue was that various drug companies had gotten very frustrated with the FDA for disagreeing with their assessments of what benefit-risk should look like. To repeat: when you have a group of individuals making a judgment, that’s going to lead to inconsistencies, because both the group and their individual judgments will vary from one situation to the next. I have another quote here from the AstraZeneca article: the field of formal and structured benefit-risk assessments is relatively new.
Matt: Over the last twenty years, there has still been a lack of consistent operating detail in terms of best practice by sponsors and health authorities. That’s an understatement, but a true statement. We have seen a lot of increased effort over the last few years, because people who are dissatisfied with the state of benefit-risk assessments want to do better than this judgment approach. And so a plethora of new methods have been developed. I found one survey that summarized fifty different methods, just to give you an idea of how many attempts there are. And I went through those fifty methods.
The other interesting thing to see is the FDA’s attempts to clarify benefit-risk assessments. I have here five guidance documents from the FDA, and I would put forth the proposition that anytime you need five attempts to explain something, it means you didn’t understand the thing well in the first place, or you’re flailing about trying to get it done right. I think this is also borne out by the drug companies’ pressure on Congress to get the FDA to improve the clarity and consistency of its benefit-risk assessments.
So here are the fifty methods that I found in one study of benefit-risk assessments. They’re grouped into frameworks, metrics, estimation techniques, and utility surveys. I’ve gone through each of these fifty methods, and they all have fundamental problems. To take a couple of examples: health-adjusted life years is one of the few that uses the same metric for benefit and risk. Number needed to treat is a very popular indicator for a single characteristic, but you can’t integrate it across the many factors needed to do a benefit-risk assessment.
And so we’ve gone down the rest of these methods. If I group these fifty methods by how they accumulate risk, I get a rather useful collection. Most of the methods do not consider all the benefit-risk factors of a situation; they pick just one factor, and you can’t combine that factor with others. It’s simply looking at one factor by itself, so for most of these it’s an extremely narrow view of benefit-risk. Of the few methods that do look at all the factors, most start with what I call the judgment method, where you’re forced to distill all the factors down to the most significant few, maybe four to seven factors.
So the first group of methods considers only one factor at a time, and the second group forces you to throw away most of the factors and consider maybe four to seven of them. The third approach is to assign numbers to the factors, add the benefit factors together, add the risk factors together, and divide the benefit sum by the risk sum. If the ratio is bigger than one, they say the benefit is bigger than the risk, and if it’s less than one, they say the risk is bigger than the benefit.
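That third, ratio-based approach is simple enough to sketch in a few lines; the factor names and scores below are invented purely for illustration of the mechanics Matt describes, not an endorsement of the method:

```python
def benefit_risk_ratio(benefits, risks):
    """Sum the scored benefit factors and the scored risk factors,
    then compare the two sums as a ratio (> 1 means benefit dominates)."""
    return sum(benefits.values()) / sum(risks.values())

# Hypothetical factor scores for a fictitious device
benefits = {"symptom_relief": 8, "faster_recovery": 5}   # sums to 13
risks = {"infection": 4, "device_failure": 2}            # sums to 6
ratio = benefit_risk_ratio(benefits, risks)              # 13 / 6, greater than 1
```

The arithmetic is trivial; the hard, and still subjective, part is deciding which factors to include and what number each one deserves, which is precisely the weakness Matt identifies in these methods.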
Transforming Requirements Engineering with AI to Enhance Clarity, Consistency, and Scalability
As systems grow more complex, traditional processes struggle to keep up, ultimately impacting requirements quality. AI can assist in processing the sheer volume of data, enhancing clarity, consistency, and scalability across workflows.
Join Katie Huckett, Product Line Manager for Advisor/AI at Jama Software, for an exclusive webinar exploring how AI is becoming an essential cognitive amplifier in requirements engineering. Discover how AI is redefining the way teams detect ambiguity, surface hidden conflicts, and maintain alignment at scale.
What You’ll Learn:
Understand why requirements quality is declining under modern system complexity.
Learn the hidden costs of poor requirements and why traditional practices fall short.
Discover how AI amplifies cognitive processing and improves requirements quality.
Explore practical steps for adopting AI in your engineering workflows.
Gain insights into the future of requirements engineering with AI.
The video below is a preview of this webinar, click HERE to watch it in its entirety
WEBINAR TRANSCRIPT PREVIEW
The Collapse of Requirements Quality Under System Complexity – How AI Can Help
Katie Huckett: Welcome, and thanks for joining. Today we’re going to talk about something many engineering organizations are experiencing, but rarely say out loud: requirements quality is collapsing under the weight of modern system complexity. This session isn’t about tools, features, or automation for automation’s sake. It’s about why this problem exists, why traditional fixes are no longer sufficient, and why AI is becoming a necessity rather than a nice-to-have in requirements engineering.
My name is Katie, and I lead product strategy focused on AI-driven capabilities in requirements management. I spend most of my time working with engineering teams in highly regulated, complex industries: aerospace and defense, automotive, medical devices, and other domains where requirements quality is not optional. What I’m sharing today is based on what those teams are actually struggling with in practice, not theory.
Here’s how we’ll spend our time together. We’ll start looking at why requirements quality is breaking down despite increased process maturity. We’ll talk about the hidden costs of complexity and why traditional approaches no longer scale. Then we’ll look at how AI changes what’s possible, not as a replacement for engineers, but as a cognitive amplifier. And finally, we’ll discuss what this shift means for engineering organizations moving forward. We’ll have a brief Q&A portion before we conclude today. Let’s dive in.
Here’s the paradox we’re living in. Requirements practices are more mature than they’ve ever been. Teams have invested heavily in process, tooling, standards, and governance, and yet many organizations are seeing more rework, more late stage surprises, and more friction between teams than before. What’s important here is that this isn’t happening because teams stopped caring about quality. It’s happening because the nature of the systems we’re building has changed faster than the way we manage requirements. In other words, the rules of the game changed, but most practices did not.
Modern products are no longer confined to a single domain. A single system now routinely spans software behavior, physical components, data flows, safety constraints, regulatory requirements, and operational considerations. All of these elements evolve together, often on different timelines and often with different teams responsible for each part. As systems scale and change in parallel, the number of relationships between requirements increases dramatically, not linearly. And yet, many traditional approaches still assume that these relationships can be reasoned through manually during periodic reviews or checkpoints. The challenge isn’t capability or commitment. It’s that the structure of the work itself has fundamentally changed.
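The "dramatically, not linearly" point can be made concrete with simple combinatorics: among n requirements there are n(n-1)/2 potential pairwise relationships, so doubling the requirement count roughly quadruples the pairs a reviewer would have to reason about. A tiny sketch:

```python
def potential_links(n):
    """Number of distinct requirement pairs that could be related."""
    return n * (n - 1) // 2

pairs_100 = potential_links(100)   # 4,950 potential relationships
pairs_200 = potential_links(200)   # 19,900: double the requirements, ~4x the pairs
```

No team traces every pair, but the pool a reviewer must mentally rule in or out grows quadratically, which is why periodic manual review stops scaling.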
Huckett: Before we go further, I want to ground this discussion in your experience. We’re going to launch a poll. Please take a moment to answer honestly. What is the biggest contributor to requirements quality issues in your organization?
Looks like we have the results in. In nearly every organization I work with, the answer is rarely just one of these. These challenges stack on top of each other, and that compounding effect is exactly what overwhelms traditional requirements practice.
Traditional requirements practices were built for a world where change was slower, and systems were more predictable. Reviews happened at defined milestones. Documents were relatively stable. Dependencies were fewer and easier to reason about. Today, however, requirements are changing continuously, often across teams working in parallel. When you apply periodic document-centric review models to this environment, gaps are almost inevitable. The process itself isn’t wrong. It’s just being asked to operate outside the conditions it was designed for.
It’s important to say this clearly. This is not a lack of skill problem. It’s not a lack of effort problem. It’s not a lack of accountability problem. It’s a structural mismatch between human cognitive limits and the complexity of modern systems.
One of the most dangerous things about requirements quality issues is that they rarely fail loudly. A single ambiguous requirement doesn’t stop a project. It quietly creates multiple interpretations. Those interpretations propagate into design decisions, test cases, and validation activities. By the time the issue is discovered, multiple teams have already invested time and effort based on different assumptions. And at that point, the cost isn’t just fixing the requirement. It’s undoing everything that was built on top of it.
Huckett: Let’s do another quick poll. Where do requirements quality issues most often surface too late in your lifecycle?
Some interesting results here. Wherever this shows up in your lifecycle, the pattern is consistent. Humans don’t see the issue until it’s already costly. That’s not a vigilance problem, that’s a visibility problem. When quality issues surface, the instinctive response is to add more safeguards. That means more reviews, more sign-offs, more documentation. The problem is that these measures increase effort without increasing visibility. Teams end up spending more time checking artifacts, but not necessarily improving quality or alignment. In highly complex systems, quality doesn’t improve by adding friction. It improves by improving signal.
This is where AI fundamentally changes the equation. AI doesn’t get tired. It doesn’t lose focus. It doesn’t skip over sections because a document is long or familiar. It can continuously scan requirements, compare them, and look for patterns or anomalies across the entire system. That doesn’t replace human expertise. It supports it by ensuring that engineers are spending their time where judgment actually matters. In that sense, AI becomes part of the engineering infrastructure rather than a separate tool.
2026 Predictions for AECO: AI, Digital Twins, and the Path to Sustainable Transformation
As we step into 2026, the Architecture, Engineering, Construction, and Operations (AECO) industry is poised for a transformative leap. From the integration of AI and digital twins to the adoption of robotics and advanced materials, the sector is embracing innovation to tackle its most pressing challenges: sustainability, efficiency, and collaboration in a hybrid world.
This year’s predictions explore how emerging technologies like generative design, predictive analytics, and automation are reshaping the project lifecycle. We’ll dive into the role of advanced digital tools in achieving net-zero goals, the growing importance of cybersecurity in a connected ecosystem, and the long-term trends that will define the industry for years to come.
In part six of this year’s predictions series, we bring these insights to life with perspectives from Jama Software’s own AECO experts: Joe Gould – Senior Account Executive, and Michelle Solis – Associate Solutions Architect, who share their vision for the future. From AI-driven decision-making to the rise of modular construction and lifecycle optimization, this piece highlights the innovations and strategies that will shape 2026 and beyond.
Curious to read leading thought leaders’ predictions for their industries in 2026 and beyond? Dive into the earlier installments of this year’s series below:
What specific emerging technologies (e.g., AI, digital twins, generative design, robotics) do you believe will have the most transformative impact on the AECO industry in the next five years? How can firms prepare to adopt and integrate these technologies effectively?
Joe Gould: AI and Machine Learning will become foundational across the entire project lifecycle.
Design & Planning: AI accelerates generative design by evaluating thousands of options against constraints like cost, performance, and sustainability—helping teams reach optimized solutions faster.
Predictive Insights: By analyzing large datasets, AI can forecast risks, schedule impacts, cost overruns, and potential failures, enabling earlier and more informed decisions.
Workflow Automation: Routine tasks such as data entry, document review, and quantity takeoffs are increasingly automated, allowing teams to focus on higher-value, strategic work.
Digital Twins extend these capabilities into operations.
Operational Optimization: Real-time digital replicas of assets enable continuous monitoring and simulation, improving energy performance, asset utilization, and long-term operating costs.
Predictive Maintenance: Simulating asset behavior under different conditions helps identify issues before failure, reducing downtime and extending asset life.
Collaboration: A shared, real-time data environment ensures all stakeholders are aligned on the most current information throughout the asset lifecycle.
Robotics and Automation have been moving from experimentation to real jobsite adoption.
On-Site Execution: AI-enabled robotics handle repetitive and high-risk tasks with greater precision and safety.
Autonomous Equipment: Drones and self-operating machinery are increasingly used for surveying, inspections, and material movement, improving efficiency while reducing labor constraints.
Sustainability and Net-Zero Goals
With the AECO industry under increasing pressure to meet sustainability and net-zero targets, what role do you see advanced software, materials innovation, and digital tools playing in achieving these goals? Are there specific technologies or strategies you think will lead the way?
Gould: Important question! Advanced digital tools allow teams to understand and manage environmental impact early in the process, long before construction begins.
At the core is Building Information Modeling (BIM), which provides a data-rich model that supports ongoing analysis of energy performance, material use, and constructability as designs evolve. Energy modeling and simulation extend this by forecasting real-world performance early, allowing teams to optimize efficiency and integrate renewables before decisions are locked in.
AI and machine learning add another layer by analyzing large datasets to improve decision-making, optimize resources, and surface risks earlier. Generative design helps teams evaluate thousands of design options that balance sustainability, cost, and performance. Digital twins, fed by real-time sensor data, carry this forward into operations—enabling predictive maintenance, smarter energy management, and continuous performance optimization over the life of the asset.
Life-cycle assessment tools tie it all together by informing material choices based on embodied carbon and long-term environmental impact, not just upfront cost.
Materials innovation focuses on reducing embodied carbon and supporting a more circular approach to construction.
This includes a shift toward low-carbon materials such as mass timber, green steel, and advanced concrete alternatives, along with greater use of recycled and reusable content. High-performance insulation and composites further improve operational efficiency by reducing long-term energy demand while maintaining durability and performance.
The real impact comes from integrating these tools into a single, data-driven approach—connecting design, construction, and operations.
Key strategies:
Data-driven decarbonization, using reliable project data for transparent reporting and continuous optimization
Prefabrication and modular construction, reducing waste, emissions, and schedule risk
Circular design principles, enabling reuse and recovery at end of life
Predictive maintenance, extending asset life and reducing long-term operational waste
By aligning digital tools, materials innovation, and lifecycle thinking, the industry can move beyond incremental gains and make measurable progress toward net-zero and long-term sustainability goals.
As hybrid and remote work models continue to evolve, how do you see these changes impacting collaboration, innovation, and project delivery in the AECO industry? What tools or processes will be critical for maintaining efficiency and creativity?
Gould: Hybrid and remote work are reshaping AECO, driving efficiency, expanding access to talent, and accelerating digital adoption—but they require more discipline around how teams collaborate and deliver work.
Collaboration has shifted from informal to intentional. Cloud-based platforms, shared models, and virtual design reviews are now standard, enabling distributed teams to stay aligned without being co-located. Innovation hasn’t slowed—it’s evolved. Access to broader talent pools and increased automation of routine tasks allow teams to spend more time on higher-value problem-solving.
From a delivery standpoint, hybrid models often reduce cycle times and costs. Work continues across time zones, travel is minimized, and documentation improves because communication has to be clearer by default.
Success in this environment depends less on tools alone and more on how they’re used. Cloud BIM, collaboration platforms, and project management systems form the backbone, but clear communication norms, standardized workflows, and outcome-based accountability are what keep teams productive.
To me, the shift isn’t about where people work—it’s about building repeatable, digital-first processes that support speed, clarity, and consistent project outcomes.
AI and Automation
How do you foresee AI and machine learning shaping decision-making, risk management, and project optimization in AECO? What are the biggest challenges or limitations the industry might face in scaling these technologies to automate processes?
Michelle Solis: While AI itself will make an impact on AECO companies, one additional area where we will see impact is in building the infrastructure to handle the increase in AI usage across all industries. This will mean more jobs, job sites, data centers, and projects.
Gould: AI and machine learning are shifting AECO from reactive to proactive. When applied well, they improve decision-making, surface risk earlier, and optimize how projects are planned, built, and operated.
AI helps teams make better decisions by analyzing large volumes of historical and real-time data—highlighting patterns and risks humans typically miss. Generative design accelerates this by evaluating thousands of options against constraints like cost, performance, and sustainability. On the risk side, predictive analytics and real-time monitoring help identify schedule, cost, and safety issues before they escalate. AI also drives operational gains through task automation, smarter maintenance planning, and more resilient supply chains.
The challenge isn’t the technology—it’s scaling it. Most AECO firms struggle with fragmented data, limited system integration, and inconsistent standards. There is also a real skills gap and natural resistance to changing long-standing workflows. Add in high upfront costs, unclear use cases, unclear ROI, and legitimate concerns around data privacy and accountability, and adoption slows quickly.
The opportunity is real, but success depends on getting the fundamentals right: clean data, integrated systems, clear ownership, and practical use cases that tie directly to project and business outcomes.
Responsible AI Adoption
As AI and machine learning become more integrated into AECO workflows, what challenges or considerations should companies be mindful of to ensure successful implementation? How can firms address these challenges while maximizing the benefits of these technologies?
Gould: AI adoption in AECO isn’t a technology problem—it’s a fundamentals problem. Success depends on data, people, and how firms manage change.
Most organizations struggle with fragmented data, legacy systems, and limited AI-ready skills. Add natural resistance to new workflows, unclear ROI, and concerns around data security and accountability, and progress stalls quickly.
The path forward is straightforward:
Get the data right: standardize, govern it, and make it accessible
Upskill teams: treat AI as a productivity multiplier, not a replacement
Start small: focus on high-impact pilots that prove value fast
Modernize platforms: move toward cloud-based, integrated systems
Keep humans in the loop: clear ownership, transparency, and oversight matter
Firms that focus on these basics will scale AI effectively—and turn experimentation into measurable business outcomes.
Data-Driven Project Management
With the growing emphasis on predictive analytics, real-time monitoring, and data-driven decision-making, what strategies would you recommend for AECO firms to better harness data for optimizing project outcomes and resource allocation?
Gould: To use data effectively, AECO firms need to focus less on dashboards and more on fundamentals: integrated systems, clean data, and teams that actually trust and use it.
That starts with moving off siloed tools and spreadsheets and into cloud-based, integrated platforms that create a single source of truth across design, delivery, and operations. Strong data governance—clear ownership, standards, and quality controls—is non-negotiable. Without clean, consistent data, analytics don’t matter.
From there, predictive analytics should be embedded directly into project workflows, not buried in reports. Tracking the right KPIs and using data to flag schedule, cost, safety, and resource risks early shifts teams from reactive to proactive.
Finally, this only works if people are brought along. Start small with high-impact use cases, involve field teams early, and invest in basic data literacy, so insights drive decisions—not just meetings.
What upcoming regulatory changes or compliance requirements do you anticipate having the biggest impact on the AECO industry in 2026? How can companies stay ahead of these changes?
Gould: The biggest regulatory shifts hitting AECO in 2026 will center on ESG (Environmental, Social, and Governance), energy performance, and digital risk. ESG reporting is moving from “nice to have” to mandatory, with climate disclosure requirements cascading through supply chains. Energy codes will continue tightening, pushing firms toward higher-performance, low-carbon, and “zero-ready” buildings. At the same time, increased use of AI and cloud platforms is driving new expectations around transparency, governance, and cybersecurity.
The firms that stay ahead won’t treat this as a compliance exercise. They’ll lean on digital platforms to track energy, carbon, and materials from design through operations, put clear AI and data governance in place, and strengthen cybersecurity practices as reporting requirements tighten. Just as important, they’ll build regulatory awareness into project planning early—before requirements show up as cost, schedule, or risk surprises.
Cybersecurity in AECO
As digital tools and connected systems become more prevalent in AECO, what role do you see cybersecurity playing in protecting sensitive project data and ensuring operational continuity? Are there specific threats or solutions companies should prioritize?
Solis: As digital tools, connected platforms, and AI become more embedded in AECO workflows, cybersecurity will play a critical role in protecting sensitive project data and maintaining operational continuity. With the growing use of AI, firms must clearly define what data can and cannot be shared with AI models, particularly when working with proprietary designs, client information, or critical infrastructure data.
Beyond data leakage, organizations also need to address risks such as AI hallucinations, bias, and model misuse, which can directly impact design decisions, safety, and compliance if left unchecked. To mitigate these risks, companies should prioritize strong access controls, data governance policies, employee training, and secure AI deployments. Establishing clear guidelines around AI use, along with continuous monitoring and validation of outputs, will be essential to ensuring both cybersecurity and trust in digital systems as adoption accelerates.
Future of Innovation
What is the most innovative trend, tool, or process you’ve seen in the AECO industry recently? How do you anticipate it influencing the industry in the coming years?
Solis: One of the most impactful trends I’ve seen recently is the increased focus on Requirements Management across rail and broader AECO organizations. While this shift is often driven by hard lessons such as losing a contract or discovering unmet requirements late in a project, it signals a growing recognition that informal or disconnected requirement processes are no longer sustainable for complex, regulated projects.
Gould: The most meaningful innovation in AECO is the convergence of AI, digital twins, and integrated platforms. Together, they’re turning projects into connected, data-driven systems that move teams from static modeling to prediction, automation, and lifecycle optimization.
At the center is the digital thread. Requirements are no longer buried in PDFs and spreadsheets—they’re connected directly to BIM, schedules, costs, and real-time performance data. AI continuously validates designs against requirements, flags deviations early, and maintains traceability from concept through operations. That shift alone reduces rework, misalignment, and late-stage surprises.
AI-powered digital twins then extend this into delivery and operations, keeping stakeholders aligned and enabling smarter, faster decisions. The result is leaner execution, better compliance, and assets that actually perform as intended—not just on day one, but over their full lifecycle.
Long-Term Trends
What trends or technologies do you think will still be shaping the AECO industry five years from now? Ten years? How can companies position themselves to remain competitive in the long term?
Solis: I don’t think there’s one technology specifically that will shape the AECO industry. Companies that make an effort to welcome new technologies rather than resist them will see success. This industry doesn’t want to evolve, but it will.
Gould: Over the next 5–10 years, AECO will be defined by digital maturity and industrialization. AI, BIM, and digital twins will move from tools to core infrastructure, while sustainability and offsite construction become standard, not optional.
In the next five years, BIM becomes the project command center—fully cloud-based and connected to schedule, cost, and lifecycle data. AI is embedded in planning and design to surface risk early, optimize decisions, and improve predictability. Modular and offsite construction scale quickly as firms respond to labor constraints and schedule pressure. Sustainability shifts from “nice-to-have” to a requirement.
Hard to say, but looking ten years out, I would predict that digital twins manage assets end-to-end, robotics handle more field execution, and buildings operate as connected systems within smart cities. Design, construction, and operations blur into a continuous, data-driven lifecycle.
The firms that win will invest early in integrated platforms, clean data, and workforce upskilling. They’ll focus on collaboration, specialization, and strong technology partnerships—turning digital capability into real project outcomes, not just innovation theater.
Engineering for the Cyber Resilience Act: Navigating Compliance Across the Product Lifecycle
Preparing for the Cyber Resilience Act: What Engineering Teams Need to Know Now
The EU Cyber Resilience Act (CRA) is setting new expectations for digital product development. It introduces mandatory requirements for vulnerability management, secure-by-design engineering, traceability, and post-market monitoring. For manufacturers of connected or software-enabled products, this represents a critical shift in how you build, document, and maintain your technology.
In this webinar, Patrick Garman, Manager of Solutions & Consulting at Jama Software, breaks down the complexities of the CRA, reviews enforcement timelines, and demonstrates how to integrate cybersecurity directly into your product lifecycle.
What You’ll Learn:
Deconstruct CRA Requirements: Gain a clear understanding of obligations for manufacturers, importers, and distributors, including secure development practices and vulnerability handling.
Operationalize Secure-by-Design: Learn practical strategies to embed security into your engineering workflows from day one.
Master Software Bill of Materials (SBOM) Transparency & Traceability: Discover how to maintain the rigorous documentation and traceability the new regulation demands.
Navigate the Enforcement Timeline: Get a clear view of upcoming deadlines to help you prepare your organization strategically.
Leverage Jama Connect® for Compliance: Explore how a modern requirements management tool helps track threats, link mitigations to requirements, integrate testing, and prove compliance.
Don’t wait until the deadline approaches to address these critical changes. Watch now to ensure your team has the knowledge and tools to navigate the CRA successfully.
The video above is a preview of this webinar – Click HERE to watch it in its entirety!
TRANSCRIPT PREVIEW
Patrick Garman: Hi, everyone, and thank you for joining today. My name’s Patrick Garman, and I am the Solutions Manager for Energy, Industrial, and Consumer Electronics sectors here at Jama Software. Today, I’m going to be talking about the EU’s Cyber Resilience Act, or the CRA. I’ll explain what the CRA actually is, what it means for product developers, and how you can show evidence of secure by design without creating unnecessary overhead. I’m also going to briefly show how Jama Connect supports your CRA compliance. At a high level, the Cyber Resilience Act is an EU regulation that applies to products with digital elements, so hardware with software, firmware, or connectivity, and standalone software products as well. It’s not a technical standard, and it does not tell you how to implement security; it focuses on outcomes. Did you consider cybersecurity risks? Did you define mitigations? Can you show how those were implemented and maintained? It’s also worth saying what it’s not. It’s not saying that products must be perfectly secure, and it’s not trying to turn product teams into security researchers. It’s really about making cybersecurity part of normal product engineering, just integrating it into your process.
And the motivation behind the CRA is pretty straightforward: products today rely heavily on software, but cybersecurity practices across manufacturers vary a lot. Some teams are very disciplined, and others rely more on informal knowledge and experience. From a regulatory point of view, that makes it hard to assess product risk and hard to respond when vulnerabilities show up later, so the CRA is really about creating a consistent baseline, so cybersecurity is treated more like safety, reliability, or quality, something you design for, document, and revisit throughout the product lifecycle. And the penalties for non-compliance can be pretty stiff: fines can reach up to 15 million euros or 2.5% of your global annual turnover, and products can be barred from the EU market. It does include mandatory incident reporting, and it also establishes liability for manufacturers for unsafe or insecure products, so it is something that is very important to prepare for and be ready for. If you strip away the legal language, the CRA requirements really fall into a few practical buckets. First, you’re expected to identify cybersecurity risks that are relevant to your product and how it’s used.
Garman: Second, those risks should lead to actual security requirements, design constraints, controls, or behaviors that mitigate the risks. Third, there needs to be evidence, not just that you thought about security, but that the requirements were implemented and verified. And finally, the CRA expects manufacturers to manage vulnerabilities after release, things like intake, assessment, updates, and communication. And the challenge is doing it consistently and in a way that you can explain later, especially if this information is spread across different repositories. Before I jump into a demo in Jama Connect, I want to set up how to think about CRA compliance in Jama Connect. The CRA is ultimately asking for something pretty specific, can you prove a clean line from the cybersecurity risk to mitigation to verification, and then keep that story intact as the product changes? And Jama Connect’s a great tool for this because it’s designed for exactly this kind of lifecycle traceability with definable traceability information models that provide guardrails for your process. And the model I’m showing here, threats must link to one or more security requirements, and security requirements must link to verification evidence like test cases or analysis.
And if we want to go deeper, we can link into design and implementation artifacts as well. And the reason that this matters is that once these rules are in place, you’re not relying on memory or tribal knowledge. Jama Connect can guide teams towards consistent linking, and it becomes much easier to answer the questions that come up in audits and reviews, such as which risks are unmitigated, which mitigations aren’t verified, and what changed since the last release? And the other big benefit is change impact. When a new vulnerability pops up or a design decision shifts, Jama makes it practical to see what requirements, tests, and releases are affected without manually stitching it together across documents and spreadsheets. With that framing, what I’ll show next is a simple example. We’ll take a threat and author a requirement against it, and then see the verification evidence, so you’ll see how the relationship rule set keeps the traceability clean and reviewable. For this demo, I’m going to keep the model intentionally simple. We’re going to start with a cybersecurity threat analysis, trace that to a security requirement, and then to a validation.
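The relationship rules described here can be sketched in a few lines of code. The item types, IDs, and link structure below are illustrative assumptions for the sake of the sketch, not the Jama Connect data model or API; the point is that once links are structured data, audit questions like "which risks are unmitigated?" become simple queries instead of manual document reviews:

```python
from dataclasses import dataclass, field

# Minimal traceability model: threats link downstream to security
# requirements, and requirements link downstream to verification evidence.
@dataclass
class Item:
    id: str
    kind: str                                   # "threat" | "requirement" | "verification"
    links: list = field(default_factory=list)   # downstream item ids

def unmitigated_threats(items):
    """Threats with no linked security requirement."""
    by_id = {it.id: it for it in items}
    return [t.id for t in items if t.kind == "threat"
            and not any(by_id[l].kind == "requirement" for l in t.links)]

def unverified_requirements(items):
    """Security requirements with no linked verification evidence."""
    by_id = {it.id: it for it in items}
    return [r.id for r in items if r.kind == "requirement"
            and not any(by_id[l].kind == "verification" for l in r.links)]

items = [
    Item("THR-1", "threat", ["REQ-1"]),
    Item("THR-2", "threat"),                    # gap: no mitigation linked
    Item("REQ-1", "requirement", ["VER-1"]),
    Item("VER-1", "verification"),
]
print(unmitigated_threats(items))               # ['THR-2']
print(unverified_requirements(items))           # []
```

A real tool enforces these rules at authoring time rather than checking them after the fact, but the queries an auditor cares about reduce to exactly this kind of link traversal.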
Garman: And in this scenario, I’m going to use the CVSS, which stands for the Common Vulnerability Scoring System, the 3.1 model, to score severity consistently. CVSS is traditionally used for vulnerabilities, but teams often use that same scoring structure for threat scenarios because it is familiar and repeatable. And I have a pre-created threat analysis item so that we can focus on the traceability aspects. But here you can see I have a place where I can provide a name, a description of the threat or vulnerability, and also select all of the appropriate vectors within the CVSS scoring model. And I’m also using Jama Connect Interchange™‘s Excel functions to calculate the base score and assign a severity rating, along with the temporal score and environmental score. Again, these are all calculated automatically on the backend as you define your threat vectors. And the reason I like capturing all of these attributes here in Jama Connect is it makes the assumptions explicit. Stakeholders can review the score, disagree with it, and adjust it, but we’re not hand-waving severity. And because it’s all on the same system as our requirements and validations, the cybersecurity story stays connected.
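The base-score arithmetic behind the CVSS 3.1 model Garman mentions is published by FIRST and is straightforward to reproduce. The sketch below implements only the base score (not the temporal or environmental scores, and it is not the spreadsheet logic used in the demo), using the metric weights and Roundup function from the 3.1 specification:

```python
import math

# CVSS 3.1 metric weights, per the FIRST specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR_U = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required, Scope Unchanged
PR_C = {"N": 0.85, "L": 0.68, "H": 0.50}             # Privileges Required, Scope Changed
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality / Integrity / Availability

def roundup(x: float) -> float:
    """CVSS 3.1 Roundup: smallest number to one decimal place >= input."""
    i = int(round(x * 100000))
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, scope, c, i, a) -> float:
    changed = scope == "C"
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * (PR_C if changed else PR_U)[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    if changed:
        return roundup(min(1.08 * (impact + exploitability), 10))
    return roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -- a classic "critical" vector
print(base_score("N", "L", "N", "N", "U", "H", "H", "H"))  # 9.8
```

Encoding the formula this way, rather than eyeballing severity, is what makes the score reviewable: stakeholders can see the vector selections, disagree with them, and recompute.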
Requirements Elicitation: A Step-by-Step Approach to Defining the Right Requirements
The success of any new product or project hinges on a simple, yet challenging task: collecting requirements. When done well, in a carefully controlled process that lives up to the more apt name of requirements elicitation, it leads to a product or project that meets everyone’s expectations. When done poorly, in a haphazard manner, it results in costly rework, missed deadlines, and a final delivery that fails to satisfy anyone.
The process of gathering input from a diverse group of stakeholders—each with their own priorities and perspectives—poses multiple risks. Time and costs can quickly spiral, and the danger of missing a critical requirement is ever-present. This article explores the basics and benefits of following a systematic process for requirements elicitation.
The High Cost of Unstructured Requirements Collection
Product and project leads are under pressure to get requirements complete before anything else begins. Without a systematic process designed to ensure intended outcomes, project or program success is exposed to these significant risks:
Wasted Time and Resources: Ad-hoc soliciting, eliciting, tracking, and organizing requirements in documents and spreadsheets is incredibly time-consuming and prone to error. This inefficiency directly translates to higher project costs and slower time-to-market.
The Risk of Missing Requirements: A disorganized process makes it easy for critical requirements to fall through the cracks. Discovering these gaps late in the development cycle leads to expensive changes and frustrating delays.
Incomplete Stakeholder Input: Failing to identify and engage all relevant stakeholders—from internal teams like Sales and Product Management to external parties like customers and partners—can result in a product that is misaligned with market needs or technical constraints.
The key takeaway: An ad-hoc approach to collecting requirements is not just inefficient; it’s a direct threat to your project’s success.
How to Systematically Elicit Requirements: A 5-Step Process
To mitigate these risks, adopt a structured approach. These steps will help you gather, organize, and track requirements with greater clarity and efficiency.
Step 1: Define the Product or Project Scope and Objectives
Before you elicit a single requirement, ensure everyone has a shared understanding of the goals. What problem are you trying to solve? Who are the users, and what are their priorities? What does success look like? What industry or corporate standards will require documentation to demonstrate compliance?
A clear project charter or vision document is essential for keeping all subsequent requirements aligned with the core objectives. This document should be a living resource, regularly revisited and carefully updated in a controlled manner based on learning throughout the process.
Step 2: Identify and Map Your Stakeholders
A stakeholder is anyone with an interest in or influence on your product or project. Missing input from a key stakeholder is a common point of failure. The list below covers some common stakeholders but is not exhaustive.
External Stakeholders: Customers, end-users, suppliers, partners, and regulatory bodies.
Create a stakeholder map to categorize individuals and groups based on their level of influence and interest. This helps you prioritize engagement and tailor your communication strategy.
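One common way to build such a map is a power/interest grid, which sorts stakeholders into four standard engagement quadrants. The sketch below illustrates the idea; the 1–5 rating scale, thresholds, and example stakeholders are illustrative assumptions, not a prescribed method:

```python
# A simple power/interest grid classifier for stakeholder mapping.
# Ratings run 1 (low) to 5 (high); the threshold of 3 is arbitrary.
def engagement_strategy(influence: int, interest: int) -> str:
    high_influence, high_interest = influence >= 3, interest >= 3
    if high_influence and high_interest:
        return "manage closely"     # key players: involve in decisions
    if high_influence:
        return "keep satisfied"     # powerful but less engaged
    if high_interest:
        return "keep informed"      # engaged but less influential
    return "monitor"                # minimal effort

stakeholders = {
    "Regulatory body": (5, 2),
    "End user": (2, 5),
    "Product management": (5, 5),
    "Supplier": (2, 2),
}
for name, (influence, interest) in stakeholders.items():
    print(f"{name}: {engagement_strategy(influence, interest)}")
```

The quadrant labels then feed directly into the communication strategy: "manage closely" stakeholders join reviews, while "monitor" stakeholders may only receive status summaries.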
Step 3: Choose Your Elicitation Techniques
There is no one-size-fits-all method for collecting requirements. Use a mix of techniques to gather comprehensive information:
Interviews: One-on-one conversations are great for understanding individual needs and complex details.
Observation: Ethnographic studies and usability analysis can expose current problems or identify opportunities that a product might solve, but that users and other stakeholders might not be able to see or articulate.
Focus Groups: Facilitated group sessions are effective for brainstorming, resolving conflicts, and building consensus among stakeholders.
Surveys: Use questionnaires to gather input from many stakeholders efficiently, as long as the questions are worded to avoid injecting bias and the responses are interpreted carefully.
Document Analysis: Review existing business plans, market analysis, and technical specifications to extract relevant requirements.
All of these techniques are powerful, but they can be risky in the hands of inexperienced personnel.
Step 4: Document and Organize Requirements in a Centralized System
As you gather requirements, you must organize them in a way that is accessible, clear, and traceable. A scattered process makes it impossible to see dependencies, track changes, or ensure complete coverage.
The most important part of this step is moving away from manual methods and toward a single source of truth that applies a systematic approach and automation to maintain control and visibility.
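To make the idea of a single source of truth concrete, here is a minimal sketch of requirements held in one structure with links to test cases, so coverage gaps surface immediately. The record shape and IDs are hypothetical, not the data model of any particular tool.

```python
# Minimal sketch: requirements in one centralized structure, each linked
# to its verifying test cases. IDs and requirement text are illustrative.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    test_case_ids: list = field(default_factory=list)

requirements = [
    Requirement("REQ-1", "The device shall log all user actions.", ["TC-10"]),
    Requirement("REQ-2", "The device shall encrypt stored data."),
]

# A coverage check like this is only practical when requirements live in
# one system rather than scattered documents and spreadsheets.
uncovered = [r.req_id for r in requirements if not r.test_case_ids]
print("Requirements with no linked test case:", uncovered)
```

The point is not the code but the property it demonstrates: once every requirement and its links live in one place, questions like "what is untested?" or "what does this change impact?" become simple queries instead of manual audits.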
Step 5: Review, Refine, and Validate
Collecting requirements is not a one-time event. It’s an iterative process, and work products can span generations of products and product lines. Once documented, requirements must be reviewed by stakeholders to ensure they are clear, accurate, and complete. This feedback loop is critical for refining the product or project definition and gaining formal sign-off before development begins.
Other Key Considerations
What is the difference between collecting, gathering, and eliciting requirements?
While often used interchangeably, “gathering” or “collecting” can imply passively accumulating information that is sitting around waiting to be picked up. “Eliciting” suggests a systematic, organized process of soliciting, documenting, and managing requirements from various sources to build a complete and validated set.
How can I ensure I haven’t missed any key stakeholders?
Start by brainstorming all possible groups and individuals affected by the project, both inside and outside your organization. Review past projects of a similar nature to see who was involved. A key practice is to ask the stakeholders you’ve already identified, “Who else should we talk to?”
What’s the biggest risk of a poor requirements collection process?
The biggest risk is building the wrong product. Missing or misunderstood requirements can lead to a final product that doesn’t meet customer needs or business goals, rendering the entire development effort a waste of time and money.
Can AI help speed up the process?
Yes, Generative AI can be useful in suggesting requirements and uncovering gaps in requirements already identified. Be prepared to store suggestions that are outside the scope of the current project for possible use in future ones.
To ensure that your process for eliciting requirements for complex products or projects goes smoothly, use a modern tool designed specifically for that purpose. Jama Connect® is designed to address the core pain points of requirements elicitation by providing a collaborative, single platform accessible to all your stakeholders from the start through the end of your project, as well as across product lines and product generations.
With Jama Connect, you can:
Centralize Everything: Create, review, validate, and verify all requirements in one place, eliminating the chaos of documents and spreadsheets.
Improve Stakeholder Collaboration: Bridge silos between teams and provide all stakeholders with real-time visibility into goals, progress, and interdependencies.
Enhance Requirement Quality: Use the Jama Connect Advisor™ add-on to author and analyze requirements for clarity and consistency against industry standards, including the EARS syntax. Natural language processing (NLP) helps you write better requirements from the start, avoiding ambiguity that leads to costly rework later.
Ensure Traceability: Easily track relationships between requirements, test cases, and risk analyses to understand the impact of any change.
Don’t let scattered documents and manual tracking derail your requirements elicitation activity. A systematic approach supported by the right tool is the key to developing complex products successfully.
Note: This article was drafted with the aid of AI. Additional content, edits for accuracy, and industry expertise by Mark Levitt and Sarah Crary Gregory.