Tag Archive for: Product Development & Management

ARP4754A

ARP4754A / ED-79A: Enhancing Safety in Aviation Development

Safety is always put first in the aviation sector. Strict adherence to rules and demanding standards helps to preserve this commitment to safety. This is where ARP4754A, a significant standard, comes into play. In this blog post, we will discuss the importance and function of ARP4754A (and its EASA equivalent ED-79A, henceforth ARP4754A) and how it impacts the design of civil aircraft and systems.

Understanding ARP4754A

ARP4754A, commonly known as “Guidelines for Development of Civil Aircraft and Systems,” is an industry standard published by SAE International. Its goal is to create a structured procedure for the development and certification of aircraft and related equipment in order to guarantee adherence to safety rules. From initial concept to final certification, these rules are intended to serve as a reference for engineers, designers, and manufacturers. ARP4754A is recognized as an appropriate standard for aircraft system development and certification. The corresponding EASA Acceptable Means of Compliance, AMC 25.1309 (included as a section of CS-25), likewise recognizes ARP4754/ED-79.


RELATED: Jama Connect® Airborne Systems Solution Overview


Purpose and Objectives

ARP4754A’s main goal is to increase aviation safety by encouraging a methodical and uniform approach to designing and developing aircraft and systems. It aims to reduce risks and dangers related to aircraft operations by resolving potential flaws and vulnerabilities. The standard’s goals consist of:

  • Safety Assessment: ARP4754A stresses performing in-depth safety evaluations to pinpoint dangers, weigh the risks, and put in place the right countermeasures. Revision A specifically addresses functional safety and the design assurance process.
  • System Development: It offers recommendations for the development of aviation systems, including requirements management, verification and validation, and configuration management.
  • Considerations for Certification: ARP4754A ensures that systems and aircraft adhere to regulatory criteria and certification procedures, supporting their safe integration into the aviation industry.

Development Lifecycle

The development lifecycle outlined by ARP4754A recommends adherence to established systems engineering principles and emphasizes the significance of iterative and incremental procedures, stakeholder collaboration, and requirement traceability throughout the lifecycle stages. The typical key processes covered by ARP4754A are well-defined:

  • Planning Process: This stage defines the means of producing an aircraft or system that will satisfy the aircraft/system requirements and provide a level of confidence consistent with airworthiness requirements.
  • Safety Assessment Process: Prescribes close interactions between the safety assessment process and system development process to capture safety requirements imposed on the design.
  • Architecture Planning and Development: The system architecture is established, including hardware, software, and interfaces.
  • Requirements Process: Detailed system requirements are defined, considering functional, performance, security, and safety aspects.
  • Design Process: Detailed hardware and software item requirements are defined and allocated to system requirements.
  • Implementation Process: The system components are developed, integrated, and tested according to the defined design requirements.
  • Verification and Validation Process: This includes the activities necessary to demonstrate that the item requirements are complete, correct, and consistent with the system needs and architecture.
  • Integral Processes: ARP4754A describes additional processes that are applicable across all of the above processes. They are: Safety Assessment; Development Assurance Level Assignment; Requirements Capture; Requirements Validation; Configuration Management; Process Assurance; Certification & Regulatory Authority Coordination

RELATED: What Are DO-178C and ED-12C?


Impact on Aviation Safety

ARP4754A plays a crucial role in ensuring safety in the aviation industry. It employs a step-by-step approach to identify and address potential hazards and risks during the early stages of development. The standard prioritizes safety assessments, risk reduction, and thorough testing, ultimately minimizing the chances of mishaps or incidents in service.

Moreover, ARP4754A promotes a culture of collaboration where stakeholders can effectively share knowledge and communicate throughout the development process. This ensures that safety concerns are addressed, and all parties involved have a clear understanding of their respective roles and responsibilities. The result is a coordinated effort that leads to a successful outcome.

Conclusion

The aviation industry relies heavily on ARP4754A as a fundamental benchmark and acceptable means of compliance for the development of civil aircraft and systems. By adhering to a structured approach to development, it ensures aviation safety and minimizes possible risks. Its systematic lifecycle stages, emphasis on safety assessments, and compliance with certification requirements significantly contribute to the overall reliability and integrity of aviation products. Even as the aviation industry progresses, ARP4754A remains a critical reference point, promoting a safety-first mindset and reinforcing the industry’s dedication to passenger safety.

Note: This article was drafted with the aid of AI. Additional content, edits for accuracy, and industry expertise by Decoteau Wilkerson and Cary Bryczek.


Learn how Jama Connect can be used to carry out ARP4754A: Digital Transformation and the Importance of Requirements Management Within the DoD



Benefit-Risk Analysis

Learn about the critical role of benefit-risk analysis in the development of safe and effective medical devices, including the use of ISO 14971, regulatory requirements, and optimizing for patient needs and healthcare costs.


The Importance of Benefit-Risk Analysis in Medical Device Development

Benefit-risk analysis is a crucial stage in the creation of medical devices. It entails evaluating the device’s possible benefits as well as drawbacks and deciding if the advantages outweigh the disadvantages. This examination aids in ensuring that medical devices are reliable, safe, and capable of being used by patients without harm.

The global standard for risk management of medical devices is ISO 14971. It offers a framework for recognizing, assessing, and managing hazards related to medical devices. Manufacturers must follow the standard and conduct a benefit-risk analysis as part of the risk management procedure. This analysis is crucial to ensuring that the level of risk associated with a medical device is acceptable and that the benefits outweigh the risks.

To start a benefit-risk analysis, it is important to first determine the device’s intended use(s). The device’s intended use should be defined in detail and contain information on the patient group it is meant for, the medical problem it is intended to support, and the clinical environment in which it will be used.


RELATED: Jama Connect® Features in Five: Risk Management for Medical Device


After defining the device’s intended use, the next step is to identify its potential benefits. Benefits could include better health outcomes, improved patient comfort, and lower healthcare costs. These benefits should be measured and weighed against any risks associated with using the device.

Per the definition of harm within ISO 14971, a medical device’s risks may include physical harm to the patient, adverse events, and device failure. Each risk should be identified and quantified, and its probability and severity evaluated.

Even after risks have been reduced as far as possible with risk controls, some residual risk may remain at an otherwise unacceptable level. This is why a benefit-risk analysis is so important in medical device development.

The next stage, once the potential benefits and risks have been determined, is to assess the benefit-risk balance. This entails weighing the device’s possible advantages against its disadvantages to decide whether the benefits outweigh the risks.
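As a purely illustrative sketch (the scales, weights, and scoring scheme below are invented for demonstration and are not from ISO 14971 or any regulation), the weighing step might be structured like this:

var benefits = [
 { name: 'Improved health outcome', weight: 8 },
 { name: 'Lower treatment cost', weight: 5 }
];
var residualRisks = [
 { name: 'Adverse skin reaction', probability: 0.02, severity: 4 },
 { name: 'Device failure', probability: 0.001, severity: 9 }
];

var totalBenefit = benefits.reduce(function(sum, b) { return sum + b.weight; }, 0);
var totalRisk = residualRisks.reduce(function(sum, r) {
 // Expected harm, scaled onto the same axis as the benefit weights.
 return sum + r.probability * r.severity * 100;
}, 0);

console.log(totalBenefit > totalRisk ? 'benefits outweigh risks' : 'redesign, mitigate, or do not launch');

In practice this judgment is qualitative as well as quantitative, but the structure is the same: enumerate benefits, enumerate residual risks, and compare them explicitly.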

If the benefits outweigh the risks, the device may be considered safe and effective for use in the intended patient population. However, if the risks outweigh the benefits, the device may not be considered safe or effective and may need to be redesigned or modified to reduce the risks; the manufacturer might also decide not to launch the product to market at all.

Benefit-risk analysis must be optimized to guarantee the safety and efficacy of medical devices.


RELATED: Validation Kit for Medical Device & Life Sciences


The benefit-risk analysis should be an ongoing process throughout the development and life cycle of the device. As new information becomes available, the analysis should be updated to ensure that the benefits still outweigh the risks, as prescribed in various regulations and standards such as ISO 14971:2019 and the EU MDR/IVDR.

Regulatory standards for medical devices should also be considered in the benefit-risk analysis. The Food and Drug Administration (FDA), which oversees medical device regulation in the US, requires manufacturers to prove their devices are safe and effective before they can be marketed or sold.

The FDA requires medical device manufacturers to perform a benefit-risk analysis as part of the product development process. This analysis is used to determine whether the benefits of the device outweigh the risks and whether the device is safe and effective for use in the intended patient population.

Conducting a benefit-risk analysis is a critical step in the development of medical devices. It involves identifying and evaluating potential benefits and risks and determining whether the benefits outweigh the risks. ISO 14971 provides a framework for performing a benefit-risk analysis as part of the risk management process.

Optimizing benefit-risk analysis is important for ensuring that medical devices are safe and effective, meet regulatory requirements, and are reimbursed by healthcare payers. It requires a systematic approach that considers all relevant factors, including patient needs and preferences, clinical outcomes, and healthcare costs.

Note: This article was drafted with the aid of AI. Additional content, edits for accuracy, and industry expertise by McKenzie Jonsson and Vincent Balgos.

Ready to learn more about managing risk in medical device development?
Watch this short video: Jama Connect® Features in Five: Risk Management for Medical Device


TrustRadius


Jama Connect® Earns Top Marks on TrustRadius

Thank You to Our Customers!

Jama Connect® has earned its position as an outstanding solution on TrustRadius, solidifying its reputation as a leading platform for requirements, risk, and test management. With its intuitive interface, robust features, and exceptional customer support, Jama Connect has consistently earned praise from users for its ease of use and reliability. Organizations across various industries have highlighted its ability to streamline collaboration, improve traceability, and enhance overall product development processes.

Reviewers have praised Jama Connect for its scalability, configurability, and advanced reporting capabilities, which have made it a top choice for businesses of all sizes. This recognition underlines Jama Connect’s unwavering commitment to delivering a reliable and efficient solution that empowers teams to drive innovation and achieve exceptional results.

Visit the full report to see why customers love using Jama Connect for product, systems, and software development.


RELATED: Buyer’s Guide: Selecting a Requirements Management and Traceability Solution


As the leading provider of requirements management software, Jama Software® is proud to receive recognition for our services. We value the feedback from our clients who have used Jama Connect and are committed to providing them with the best support, resources, and expertise to help them succeed.

“I’m VERY likely to recommend Jama to a colleague because they’d struggle to get anything done without using it! That’s the tool we’re using for Req Management now, so I recommend to my colleagues that they get amongst it!”

-From review collected and hosted on TrustRadius – Ian Webb, Systems Engineering Technical Writer – Enphase Energy, Electrical & Electronic Manufacturing


“Jama Connect is a solid framework for systems engineering that can integrate many design processes into a single tool. At a fundamental level, it is a great tool for handling requirements management and traceability but offers a variety of other features such as risk management and verification and validation. For someone who works in the medical device industry, the tool also complies with CFR requirements for electronic approvals and can be validated for such use.”

-From review collected and hosted on TrustRadius – User in Engineering, Medical Devices Company


From all of us at Jama Software to all of you, thank you!

 



What Are DO-178C and ED-12C?


What Are DO-178C and ED-12C?

Safety is the top priority in the aviation industry. Whether it’s a civilian plane, a military jet, or an uncrewed aerial vehicle, the reliability and integrity of onboard software are essential to guaranteeing safe and secure operations. This blog will look at the importance of DO-178C (and its EASA equivalent ED-12C, henceforth DO-178C), the sector it affects, and the mechanisms it uses to demonstrate compliance.

The Aviation Industry and its Unique Challenges

The aviation industry involves complex systems, cutting-edge technologies, and strict safety regulations. Software tools are essential during the design and development of these systems. Given the rising reliance on software and the potential risks it poses, regulatory bodies such as the FAA and EASA mandate a systematic approach to software development and verification. Here, DO-178C enters the picture.

What is DO-178C?

DO-178C, also known as “Software Considerations in Airborne Systems and Equipment Certification,” is a standard released by the RTCA (Radio Technical Commission for Aeronautics). It outlines the objectives and methods for creating the software used in airborne systems. By outlining the procedures, activities, and artifacts required for compliance, DO-178C offers a formal framework for assuring airborne software’s safety, dependability, and maintainability.

DO-178C Compliance

The primary objective of DO-178C is to ensure that software used in airborne systems functions as intended and does not pose any safety risks. The compliance process encompasses all aspects of software development, from planning and requirements to coding, testing, configuration management, and verification. Compliance levels, also referred to as Software Levels (DAL A, B, C, and D), are determined based on the criticality of the software’s function, as well as the size, complexity, and functionality of the code. The higher the DAL, the more rigorous the controls required of software developers. And as you might expect, a DAL A system will cost far more time and money to produce, given the development constraints and the evidence one must produce for certification.
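As a rough sketch of how severity drives the level (a common summary of the standard’s intent, expressed here in Javascript for illustration, not a quotation from DO-178C):

// Approximate mapping of failure-condition severity to software level.
var softwareLevelBySeverity = {
 catastrophic: 'DAL A', // e.g., prevents continued safe flight and landing
 hazardous: 'DAL B',
 major: 'DAL C',
 minor: 'DAL D'
};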


RELATED: A Nod To MOSA: Deeper Documenting of Architectures May Have Prevented Proposal Loss


Key Components of DO-178C

  • Software Planning: In this preliminary stage, plans for the development and verification of software, including its traceability, resources, and timetables, are defined. This lays the groundwork for the succeeding steps.
  • Software Requirements: To make reliable software, you need clear, precise requirements. This is emphasized by DO-178C, which requires that a software requirement is traceable to a higher-level system requirement, its SW function, its verification cases, as well as the code.
  • Software Design: The design phase produces an architecture and detailed design that satisfy the requirements; procedures, models, and reviews are used to confirm that they do.
  • Software Implementation: During this phase, the software is coded and documentation is produced, including standards, instructions, and test cases. DO-178C requires code reviews and adherence to coding standards to reduce errors.
  • Software Verification: Verification activities, like unit testing, integration testing, and system testing, are needed to make sure the software meets expectations and criteria. Functional and structural coverage analysis must be included depending on the DAL. Also, depending on the DAL, one must show independence: the person who writes a requirement must be different from the person who reviews it, and the person who writes the software code must be different from the person who tests that code.
  • Configuration Management: DO-178C focuses on configuration management to make sure changes to software are monitored, tracked, and documented throughout development.

Benefits and Impact

The aviation industry gains many advantages by following DO-178C. By adhering to these strict criteria, organizations can ensure they are following the processes called out in the regulations and that they are meeting the highest standards of aviation development safety:

  • Enhanced Safety: By focusing on safety, DO-178C reduces the chance of problems caused by software.
  • Regulatory Compliance: The Federal Aviation Administration (FAA) and European Union Aviation Safety Agency (EASA) recognize DO-178C as an acceptable means of compliance for software safety, as a condition of issuing an airworthiness certification.

RELATED: Functional Safety (FuSA) Explained: The Vital Role of Standards and Compliance in Ensuring Critical Systems’ Safety


How Can Jama Connect® Help?

Jama Connect®‘s digital engineering strategy is absolutely essential for any organization looking to boost efficiency and dependability. This strategy serves as a critical link between teams and optimizes design and engineering processes. With its comprehensive perspective of the entire system and reliable source of information, it’s an indispensable tool for success.

Note: This article was drafted with the aid of AI. Additional content, edits for accuracy, and industry expertise by Decoteau Wilkerson and Cary Bryczek.



DoD


Digital Transformation and the Importance of Requirements Management Within the DoD

Looking back on a technology career that started nearly 45 years ago gives me an opportunity to reflect on the evolution of technology trends. Most notably, for me, the usability and interfaces to that technology. Let me explain…

I started out as a young man in the Marines repairing and calibrating avionics equipment and general test instrumentation. After boot camp (basic training) the Marines saw fit to send me to a half dozen electronic schools to learn my craft. Every piece of equipment I touched had a physical reference and user’s manual, but for the most part, if you knew what function a piece of gear was supposed to provide, you probably didn’t need a manual very often. For example, if I knew how to use a Tektronix oscilloscope, I probably didn’t need any instruction on how to use an o-scope from Hewlett Packard or Phillips. Just about everything I needed to know was obvious and accessible on the front panel (the user interface).

Over the next 15 years that paradigm stayed relatively consistent. I would see different types of equipment, work on different ‘things,’ but it was rare that training or anything other than an occasional manual was needed to be productive. Physical interfaces tended to ‘tell all’ about what I could do, or what I needed to do with a piece of electronic equipment.

It was about this time that I made a mid-career switch. It had become obvious (to me, at least) that software was going to be in EVERYTHING. Consequently, I went back to school for a Computer Science (CS) degree to help in gaining access to that world. With CS degree in hand, I took a position writing test software on a classified program for an aerospace giant. It was interesting at one level when I could see the end results of my efforts, but it could be laborious at times. I shared a large cubicle with two teammates, and we were heads down grinding on code for most of the day. I know some find this immensely rewarding, but I was not one of them.


RELATED: A Nod To MOSA: Deeper Documenting of Architectures May Have Prevented Proposal Loss


Whilst I knew this wasn’t my calling in life, the introduction to a software development organization gave me exposure to tools supporting the software development lifecycle (SDLC). Disciplines like requirements management, configuration & change management, and architectural modeling. Plus, the software development process itself: Waterfall, Spiral, Iterative, Agile to name a few. The code may be the ultimate deliverable, but there are a lot of moving parts to get the code out the door. Of course, not every team believed in all of those ‘other’ disciplines: I had one teammate that had a sign over his desk that read, “Documentation is for wimps!” I also remember a cartoon at that time that read something like (manager speaking to developers): “You start writing the code while I go upstairs and see if they have any requirements.”

I’ve spent the bulk of the past 20 years of my work life supporting and working with Federal DoD programs – the large System Integrators, and direct with programs at military installations around the country. In that time, I’ve seen the transformation of segments of programs/projects being focused on a singular discipline (e.g., requirements, code, test, etc.) to the point where they are taking the big picture view; that systems and software development is really a team sport. Instead of each discipline developing their own assets/artifacts and ‘throwing them over the wall’ the work is now being attacked in a cohesive and coordinated fashion. Essentially, a digital transformation where we’ve gone from just ensuring that each discipline has tooling to support their own work, to the point where each segment of the development lifecycle prioritizes the ability to link and trace to the upstream and downstream activities.

In the earliest days of tracing assets across all disciplines, the tool vendor who could supply an environment that supported all facets of development and linked those assets together had an advantage. However, over time, the end user became more sophisticated. They did not want to lock themselves into a single vendor with tools that were ‘good enough’; they wanted a best-of-breed product for each of those disciplines.

I think that’s the primary reason that I’m excited to be part of the team at Jama Software®. I spent 16+ years being that single vendor with tools that were integrated and were ‘good enough.’ Now, I have a product that arguably is the centerpiece of the most important of disciplines, requirements management. For without accurate requirements, delivering on time, on budget, and meeting the needs of the end user (or warfighter) is a difficult undertaking.


RELATED: Streamlining Defense Contract Bid Document Deliverables with Jama Connect®


At Jama Software®, we support Live Traceability™ – the ability to link and trace outside our domain of requirements, to the other best-of-breed tools that support things like coding, change management, architectural modeling, testing, etc. Jama Connect® does not lock you into a single vendor but gives you the ability to continue to use the products you currently have in your arsenal. Live Traceability gives your team the ability to see the most up to date and complete up and downstream information for any requirement, no matter what state of development it is in or how many siloed tools and teams it spans.

Being a part of the Jama Connect team has allowed me to work with the most intuitive of all tools in my career. When I say intuitive, I mean I didn’t need any training to get up and running to be productive. Additionally, Jama Connect is a cloud-based product (self-hosted is also available), so no need to worry about getting your IT team engaged. If a Jama Connect project is properly set up, it should expose the bulk of the functions needed for a person working in the requirements discipline. Notice I did not say ‘requirements manager.’ Systems/Software development is a team sport, and more roles than just a requirements manager/SME will need access to the requirements. It is rather easy for a non-requirements person to access the tool, explore its functions, and be productive with limited or no training.

I continue to support Aerospace & Defense programs in my role at Jama Software. In addition to Jama Software offering a great requirements management tool, they are industry experts, and provide expert thought leadership and best practice guidance to their clients. This level of knowledge is a key distinguishing factor when searching for a requirements management tool. I am very happy to be part of this extremely energetic, client-focused company and truly looking forward to this next phase of my career.



Automotive

Jama Software is always looking for news that would benefit and inform our industry partners. As such, we’ve curated a series of customer and industry spotlight articles that we found insightful. In this blog post, we share an article, sourced from NPR, titled “After years of decline, the auto industry in Canada is making a comeback” – originally authored by H.J. Mai and published on March 12, 2023.


After Years of Decline, the Auto Industry in Canada is Making a Comeback

When most people think of Canada, they rarely think of cars. But the country, known for hockey, maple syrup and endless wilderness, is one of the largest car producers in North America. And with the growing importance of electric vehicles, Canada hopes to breathe new life into its automotive industry and maintain a more than 100-year-old tradition.

Canada’s automotive industry is primarily located in Ontario and Quebec, with Windsor, Ontario, claiming the title of Canada’s automotive capital.

“We’ve been the auto capital of Canada since about 1904, when the first auto plant opened in Canada,” said Windsor Mayor Drew Dilkens.

Windsor, just across the river from Detroit, has benefited from its proximity to the United States and the three major carmakers headquartered there.

Stellantis, formerly Fiat Chrysler, and South Korean battery maker LG Energy Solutions (LGES) announced last year that they will invest more than 5 billion Canadian dollars ($3.5 billion) in building a new large-scale battery manufacturing plant in Windsor. The plant is expected to be operational by 2024 and will create an estimated 2,500 jobs.


RELATED: Buyer’s Guide: Selecting a Requirements Management and Traceability Solution for Automotive


“It’s a massive, game-changing investment, and I’m not even sure these two words are big enough to describe how important it is for our community,” Dilkens says. “This will have a generational impact. [Companies] will look at the new world of automotive and will start looking at Windsor Essex as a place to do business.”

Investment by Stellantis and LGES is part of a larger trend that has seen more than CA$17 billion in announced investment in Ontario’s automotive sector since the beginning of 2021.

“Ontario has had the greatest new investment in vehicle production in its history over the past two years,” says Flavio Volpe, president of the Canadian Automobile Parts Manufacturers’ Association.

Most of this investment, worth nearly CA$13 billion, is in electric and battery production. And by passing the Inflation Reduction Act, U.S. lawmakers have given Canada a further boost to its EV ambitions.

“This is good news for Canadians, for our green economy, and for our growing EV manufacturing sector,” Canadian Prime Minister Justin Trudeau said in a tweet shortly after President Biden signed the law.

The law includes tax credits for EV buyers, but only if the car is largely made and assembled in North America, and its battery uses locally mined components. According to GM Canada’s David Paterson, this could give Canada an advantage over the U.S. and Mexico.

“What goes into our [sic] batteries are cathode active materials, which are mainly made of nickel and other critical minerals that we happen to have in abundance here in Canada,” he says.

“As we see less demand for gasoline, we see more demand for minerals, and Canada is an economy built on natural resources.”

In an effort to encourage the shift in the auto industry toward battery-powered EVs, Canada’s federal government along with Ontario’s provincial government have been investing billions of dollars.

“Our incentive is that you have a job because we invest about $2.5 billion in taxpayer money in these [car companies],” says Vic Fedeli, Ontario’s Minister of Economic Development, Job Creation and Trade.

The recent investment streak is a welcome sign for an industry that has gone through many ups and downs. Increased automation and competition from lower-wage regions have led to plant closures and job losses over the past two decades.

“We have been coming from a whole generation since about 2000, watching this critical sector decline. We have seen disinvestment in the sector, we have seen job losses in the sector, we have seen plants closed and communities are basically disappearing,” says Angelo DiCaro, research director for Unifor, a union representing about 230,000 Canadian auto workers.

The North American Free Trade Agreement, or NAFTA for short, contributed to this downturn as car companies moved their assembly lines to places like Mexico or the U.S. Southeast to cut costs. The USMCA, which replaced NAFTA in 2020, has somewhat leveled the playing field by boosting regional content requirements and instituting a minimum wage of at least $16 an hour.

DiCaro says that despite the uncertainty surrounding certain jobs that could be lost in this transition to electric vehicles, Canada’s auto workers have a sense of optimism and hope.


RELATED: Jama Connect® for Automotive Solution Overview


According to government data, the auto sector plays a key role in Canada’s economy, contributing CA$16 billion to its gross domestic product (GDP). With nearly 500,000 direct or indirect jobs, automotive is one of the country’s largest manufacturing sectors and one of its largest export industries.

Volkswagen and its battery company PowerCo announced Monday that they selected Ontario, Canada as the location of Volkswagen’s first cell manufacturing facility in North America.

The new battery plant in Canada will be the third in the group, after Salzgitter, Germany, and Valencia, Spain.

“Canada offers ideal conditions, including the local supply of raw materials and wide access to clean electricity,” the group said in a press release.

Production is expected to start in 2027.

Tesla is another company that publicly stated it is actively looking at Canada as a potential site for a new battery and/or assembly plant. The company would join Ford, General Motors, Honda, Stellantis and Toyota, which already have production facilities in Ontario.

“The success of the [Ontario] government and the federal government [sic] will not be defined by what we have landed at the moment. It will be whether we can land a sixth automaker or a seventh,” Flavio Volpe says. “It will mean that our vision was worthy of the rhetoric and convince the best automakers in the world that the future runs through Ontario.”



IEC 61508

In this blog, we recap sections from our eBook, “IEC 61508 Overview: The Complete Guide for Functional Safety in Industrial Manufacturing” – Click HERE to read the entire eBook.


Functional Safety Made Simple: A Guide to IEC 61508 for Manufacturing

What Is IEC 61508?

As discussed previously, industrial manufacturing firms need to prevent dangerous failures that may occur with the use of their system. The challenge is that oftentimes systems are incredibly complex with many interdependencies, making it difficult to fully identify every potential safety risk.

According to the International Electrotechnical Commission, leading contributors to failure include:

  • Systematic or random failure of hardware or software
  • Human error
  • Environmental interference, such as temperature, weather, and more
  • Loss of electrical supply or other system disturbance
  • Incorrect system specifications in hardware or software

IEC 61508 creates requirements to ensure that systems are designed, implemented, operated, and maintained at the safety level required to mitigate the most dangerous risks. The international standard is used by a wide range of manufacturers, system engineers, designers, industrial companies, and others that are audited for compliance. The standard applies to safety-critical products including electrical, electronic, and programmable electronic safety-related systems.

Why Was IEC 61508 Developed?

The primary goal of the standard is human safety, and it’s based on a couple of principles, including:

  1. Use of a safety lifecycle. The lifecycle outlines the best practices around identifying risks and mitigating potential design errors.
  2. Probable failure exercises. This assumes that if a device does fail, a “fail-safe” plan is needed.

IEC 61508 applies to all industries; however, even though it covers a broad range of sectors, every industry has its own nuances. As a result, many have developed their own standards based on IEC 61508.

Industry-specific functional safety standards include ones for:

  • Industrial – IEC 61496-1, IEC 61131-6, ISO 13849, IEC 61800-5-2, ISO 13850, IEC 62061, ISO 10218
  • Transportation – EN 5012x, ISO 26262, ISO 25119, ISO 15998
  • Buildings – EN 81, EN 115
  • Medical devices – IEC 60601, IEC 62304
  • Household appliances – IEC 60335, IEC 60730
  • Energy systems and providers – IEC 62109, IEC 61513, IEC 50156, IEC 61511

The standard defines Safety Integrity Levels (SILs), four levels from SIL 1 to SIL 4, which indicate the likelihood that a safety function will fail dangerously.


RELATED: The Top Challenges in Industrial Manufacturing and Consumer Electronic Development


The Seven Parts of IEC 61508

The IEC 61508 standard covers the most common hazards that could occur in the event of a failure. The goal of the standard is to mitigate or reduce failure risk to a specific tolerance level. The standard includes a lifecycle with 16 phases, broken into seven parts, including:

  • Part 1: General requirements
  • Part 2: Requirements for electrical/electronic/programmable electronic safety-related systems
  • Part 3: Software requirements
  • Part 4: Abbreviations and definitions
  • Part 5: Examples and methods to determine the appropriate safety integrity levels
  • Part 6: Guidelines on how to apply Part 2 and Part 3
  • Part 7: An overview of techniques and measures

The first three parts highlight the standard’s requirements, and the rest explain the guidelines and provide examples of development.

IEC 61508 Certification: Is it Required?

IEC 61508 certification is optional in most cases, unless you contract with a firm that requires it, or it’s required by your local government. Even if it’s not mandatory, achieving certification provides peace of mind and creates a clear path to improving safety. Certification is offered through international agencies specializing in IEC 61508, such as TÜV SÜD. Completing certification provides credibility around your IEC 61508 compliance and is a point of differentiation if bidding on a contract against multiple contractors.


RELATED: Lessons Learned for Reducing Risk in Product Development


Hazard and Risk Analysis for Determining SIL

Understanding functional safety requires a hazard analysis and risk assessment of the equipment under control (EUC).

The hazard analysis identifies all possible hazards of a product, process, or application. This will determine the functional safety requirements to meet a particular safety standard.

A risk assessment is needed for every hazard that you identify. The risk assessment will evaluate the frequency and likelihood of that hazard occurring, as well as the potential consequences if it does happen.

The risk assessment determines the appropriate SIL level, and you can then use either qualitative or quantitative analysis to assess the risk. The guidelines don’t require a specific method of analysis, so use whatever method you prefer.
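As a purely illustrative sketch (the scoring scheme and thresholds below are invented for demonstration and are not taken from IEC 61508), a simple qualitative assessment feeding SIL selection might look like this:

// Toy qualitative risk graph: score = likelihood x severity, then
// bucket the score into a target SIL. The thresholds are made up.
function targetSil(likelihood, severity) { // both on a 1-5 scale
 var score = likelihood * severity;
 if (score >= 20) return 'SIL 4';
 if (score >= 12) return 'SIL 3';
 if (score >= 6) return 'SIL 2';
 return 'SIL 1';
}

console.log(targetSil(4, 5)); // "SIL 4": frequent and catastrophic

A real assessment would follow one of the methods described in Part 5 of the standard, but the principle is the same: higher-frequency, higher-consequence hazards demand a higher SIL.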

To learn more, download the entire eBook HERE.


Live Traceability

This Features in Five video demonstrates how Jama Connect helps maintain Live Traceability across applications.


Jama Connect® Features in Five: Live Traceability™

Learn how you can supercharge your systems development process! In this blog series, we’re pulling back the curtains to give you a look at a few of the powerful features in Jama Connect®… in under five minutes.

In this Features in Five video, Jama Software® subject matter experts Neil Bjorklund, Sales Manager – Medical, and Steven Pink, Senior Solutions Architect, will demonstrate how Jama Connect helps maintain Live Traceability™ across applications.


VIDEO TRANSCRIPT:

Neil Bjorklund: So, what we wanted to do today is give you all a quick snapshot of what it looks like for Jama to be integrated across systems, and showing how Jama helps you all maintain Live Traceability across applications or what we call a connected digital thread.

As part of our demonstration today, we’re going to show you what that looks like across your V-Model here of system engineering, in which case we’re going to actually make a change from a Windchill item, so an item over in Windchill, making a change to a specification or a part over there. We’re going to trace from that Windchill item over to Jama at the subsystem design output level. So, you’ll be able to see those items synchronized across those applications. We’re then going to perform an Impact Analysis within Jama. So, that’s going to allow you to then visualize, if you were to make a change to a Windchill item, what impact does that have across your full system.

So, we’re going to then see that change cascade up to a system requirement level here. We’re going to then make a change to that system requirement, and Jama is going to have the suspect linking mechanism to be able to then identify what all items downstream could be impacted by this change. In which case, we’re going to show an example where a software requirement must be changed. We’re going to make that change, and we’re going to show that change and then cascade over into Jira.


RELATED: How to Use Requirements Management as an Anchor to Establish Live Traceability in Systems Engineering


Bjorklund: So, the idea here is that, by managing Live Traceability within Jama, maintaining Jama as integrated with other applications or ecosystems, you’re going to be able to visualize that connected digital thread and see changes take place from Windchill into Jama, over then into Jira.

Now, one thing to remember, this integration is very flexible. So, we can integrate from Jama over to Windchill PLM parts, problem reports, change requests, requirements, different folder structures, and so forth. Within the software side, obviously, we’re integrating with Jira, that’s very flexible. But, we can also integrate with other applications like Azure DevOps, PFS, things like that. So, again, this is just one example, just to highlight the flexibility here of Jama, and this workflow.

Steven Pink: Okay, so thank you, Neil, for that. Now, we’re taking a look at a spec here in Windchill. This is what we’re going to be making a change to today. I’m going to go ahead and make an edit to this, and this is what’s going to synchronize across into Jama. So, I’m going to update the description, update the specification within Windchill. We’ve now saved this update within Windchill. We’ve updated the description here, and this is going to synchronize across to Jama in real-time. So, I’m going to switch over to this spec from Windchill that has synced into Jama. I’ll refresh it here, and we’ll see the description has now updated in real-time. If we want to understand the impact this change could potentially have, we’ll use Jama’s Impact Analysis feature. This will allow us to look up and downstream from this specification.

So, based on those relationship rules Neil showed earlier in that Live Traceability in Jama, we can look from a spec, one of these subsystem design outputs, all the way upstream through hardware and software to those higher level system requirements to understand what the potential impact could be. So, I’ll go ahead and run this Impact Analysis. We’ll take a look, and it’ll find everything that’s directly and indirectly connected to this specification from Windchill.


RELATED: How to Use Requirements Management as an Anchor to Establish Live Traceability in Systems Engineering


Pink: We can see hardware requirements, the system requirement, a high-level user need, and maybe the system requirement is impacted by our change. We can click into the system requirement. If we need to make an update to this impacted system requirement, we can come in and modify the description here.

When I save this system requirement, Jama is actually going to identify everything downstream that has been impacted through the Suspect Link feature. So now, we’re flagging these downstream hardware and software requirements that could be impacted by the change we made to this higher-level system requirement. If there’s been an impact to this software requirement, for example, I can click into this software requirement. I can then edit the description to reflect the necessary updates based on that impact assessment.

And now, when I save this software requirement, this has been synchronized with Jira, so we’ll be able to see the updated software requirement updated into Jira in real-time. So, I’m going to switch over to Jira here, and this is that software requirement that we’re synchronizing. And now, we can see the update to that description has synced across to Jira in real-time, providing us Live Traceability between specifications in Windchill, through our requirements in Jama, all the way down through the lower level software and development work occurring in Jira.


RELATED: G2 Again Names Jama Connect® the Standout Leader in Requirements Management Software in their Spring 2023 Grid® Report


Bjorklund: Okay, thank you, Steven, for that. So, this is a quick recap. So, we’ve gone from an item in Windchill. We made that change within Windchill. That change was automatically reflected over into Jama. We then performed Impact Analysis within Jama, made changes across our system-level requirements, which then cascaded changes down into our software requirements over in Jira. Now, again, this is just one example where we’ve taken a change, we’ve integrated Jama with different applications, but Jama has the ability to integrate with all the applications across your product development lifecycle, across that V-Model system engineering.

So, if there are groups that are maybe not using Jira, you’d certainly have the ability to then manage change across different applications, and Jama serves as that central system to be able to manage Live Traceability and maintain that connected digital thread. Thank you.


To view more Jama Connect Features in Five topics, visit: Jama Connect Features in Five Video Series



Redux

“What I cannot create, I do not understand.”

Richard Feynman

Redux is pretty simple. You have action creators, actions, reducers, and a store. What’s not so simple is figuring out how to put everything together in the best or most “correct” way. In this blog, we begin by explaining the motivation behind using Redux and highlight its benefits, such as predictable state management and improved application performance. We then delve into the core concepts of Redux, including actions, reducers, and the store, providing a step-by-step guide on how to implement Redux in a JavaScript application. Throughout, we emphasize the importance of understanding Redux’s underlying principles and showcase code examples to illustrate its usage.

To rewrite Redux, we used a wonderful article by Lin Clark as a reference point, as well as the Redux codebase itself, and of course, the Redux docs.

You may note we’re using traditional pre-ES6 Javascript throughout this article. It’s because everyone who knows Javascript knows pre-ES6 JS, and we want to make sure we don’t lose anyone because of syntax unfamiliarity.

The Store

Redux, like any data layer, starts with a place to store information. Per the first principle of Redux, there is a single shared data store, described by its documentation as a “Single source of truth,” so we’ll start by making the store a singleton:

var store;

// Lazily create the store on first request so that every module
// importing this file shares the same instance.
function getInstance() { 
 if (!store) store = createStore();
 return store;
}

// For now the store is just an empty object; we'll build it out below.
function createStore() { 
 return {}; 
}

module.exports = getInstance();

The Dispatcher

The next principle is that the state of the store can only change in one way: through the dispatching of actions. So let’s go ahead and write a dispatcher.
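For context, an action is just a plain object whose type field says what happened. The shape below, with a payload field, is a common convention rather than something our store will enforce, and the names are hypothetical:

// A minimal example action. Only `type` matters to the store;
// everything else is information for your reducers.
var exampleAction = {
 type: 'ADD_TODO',
 payload: { text: 'Write a dispatcher' }
};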

However, in order to update state in this dispatcher, we’re going to have to have state to begin with, so let’s create a simple object that contains our current state.

function createStore() { 
 var currentState = {}; 
}

Also, to dispatch an action, we need a reducer to dispatch it to. Let’s create a default one for now. A reducer receives the current state and an action and then returns a new version of the state based on what the action dictates:

function createStore() { 
 var currentState = {}; 

 var currentReducer = function(state, action) { 
  return state; 
 } 
}

This is just a default function to keep the app from crashing until we formally assign reducers, so we’re going to go ahead and just return the state as is. Essentially a “noop”.

The store is going to need a way to notify interested parties that an update has been dispatched, so let’s create an array to house subscribers:

function createStore() { 
 var currentState = {}; 
 
 var currentReducer = function(state, action) { 
  return state; 
 } 
 
 var subscribers = []; 
}

Cool! OK, now we can finally put that dispatcher together. As we said above, actions are handed to reducers along with state, and we get a new state back from the reducer. If we want to retain the original state before the change for comparison purposes, it probably makes sense to temporarily store it.

Since an action is dispatched, we can safely assume the parameter a dispatcher receives is an action.

function createStore() { 
 var currentState = {}; 

 var currentReducer = function(state, action) { 
  return state; 
 } 

 var subscribers = [];

 function dispatch(action) {
  var prevState = currentState;
 }

 return {
  dispatch: dispatch
 };
}

We also have to expose the dispatch function so it can actually be used when the store is imported. Kind of important.

So, we’ve created a reference to the old state. We now have a choice: we could either leave it to reducers to copy the state and return it, or we can do it for them. Since receiving a changed copy of the current state is part of the philosophical basis of Redux, we’re going to go ahead and just hand the reducers a copy to begin with.

function createStore() { 
 var currentState = {}; 

 var currentReducer = function(state, action) { 
  return state; 
 } 

 var subscribers = [];

 function dispatch(action) {
  var prevState = currentState;
  currentState = currentReducer(cloneDeep(currentState), action);
 }

 return {
  dispatch: dispatch
 };
}

We hand a copy of the current state and the action to the currentReducer, which uses the action to figure out what to do with the state. What is returned is a changed version of the copied state, which we then use to update the state. Also, we’re using a generic cloneDeep implementation (in this case, we used lodash’s) to handle copying the state completely. Simply using Object.assign wouldn’t be suitable because it retains references to the objects nested under the base-level properties.
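A quick illustration of the problem (the todos key here is hypothetical):

var original = { todos: [{ text: 'a' }] };
var shallow = Object.assign({}, original);

// The outer object is new, but `todos` still points at the same array...
shallow.todos.push({ text: 'b' });
console.log(original.todos.length); // 2: mutating the 'copy' changed the original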

Now that we have this updated state, we need to alert any part of the app that cares. That’s where the subscribers come in. We simply call to each subscribing function and hand them the current state and also the previous state, in case whoever’s subscribed wants to do delta comparisons:

function createStore() { 
 var currentState = {}; 

 var currentReducer = function(state, action) { 
  return state; 
 } 

 var subscribers = []; 

 function dispatch(action) {
  var prevState = currentState;
  currentState = currentReducer(cloneDeep(currentState), action);
  subscribers.forEach(function(subscriber){
   subscriber(currentState, prevState);
  });
 }

 return {
  dispatch: dispatch
 };
}

Of course, none of this really does any good with just that default noop reducer. What we need is the ability to add reducers, as well.


RELATED: New Research Findings: The Impact of Live Traceability™ on the Digital Thread


Adding Reducers

In order to develop an appropriate reducer-adding API, let’s revisit what a reducer is, and how we might expect reducers to be used.

In the Three Principles section of Redux’s documentation, we can find this philosophy:

“To specify how the state tree is transformed by actions, you write pure reducers.”

So what we want to accommodate is something that looks like a state tree, but where the properties of the state are assigned functions that purely change their state.

{ 
 stateProperty1: function(state, action) { 
  // does something with state and then returns it
 }, 
 stateProperty2: function(state, action) { 
  // same 
 }, ... 
}

Yeah, that looks about right. We want to take this state tree object and run each of its reducer functions every time an action is dispatched.

We have currentReducer defined in the scope, so let’s just create a new function and assign it to that variable. This function will take the pure reducers we passed to it in the state tree object, and run each one, returning the outcome of the function to the key it was assigned.

function createStore() { 
 var currentReducer = function(state, action) { 
  return state; 
 } ...

 function addReducers(reducers) {
  currentReducer = function(state, action) {
   var cumulativeState = {};
   
   for (var key in reducers) {
    cumulativeState[key] = reducers[key](state[key], action);
   }
  
   return cumulativeState;
  }
 }
}

Something to note here: we’re only ever handing a subsection of the state to each reducer, keyed from its associated property name. This helps simplify the reducer API and also keeps us from accidentally changing other state areas of the global state. Your reducers should only be concerned with their own particular state, but that doesn’t preclude your reducers from taking advantage of other properties in the store.

As an example, think of a list of data, let’s say with a name “todoItems”. Now consider ways you might sort that data: by completed tasks, by date created, etc. You can store the way you sort that data into separate reducers (byCompleted and byCreated, for example) that contain ordered lists of IDs from the todoItems data, and associate them when you go to show them in the UI. Using this model, you can even reuse the byCreated property for other types of data aside from todoItems! This is definitely a pattern recommended in the Redux docs.
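A rough sketch of that pattern (the property names and the ADD_TODO action are hypothetical):

var reducers = {
 // Owns the todoItems property: a map of ID -> todo item.
 todoItems: function(state, action) {
  state = state || {};
  if (action.type === 'ADD_TODO') state[action.payload.id] = action.payload;
  return state;
 },
 // Owns the byCreated property: just an ordered list of IDs.
 byCreated: function(state, action) {
  state = state || [];
  if (action.type === 'ADD_TODO') state.push(action.payload.id);
  return state;
 }
};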

Now, this is fine if we add just one single set of reducers to the store, but in an app of any substantive size, that simply won’t be the case. So we should be able to accommodate different portions of the app adding their own reducers. And we should also try to be performant about it; that is, we shouldn’t run the same reducers twice.

// State tree 1 
{ 
 visible: function(state, action) { 
  // Manage visibility state 
 } ... 
}
// State tree 2
{ 
 visible: function(state, action) { 
  // Manage visibility state (should be the same function as above) 
 } ... 
}

In the above example, you might imagine two separate UI components having, say, a visibility reducer that manages whether something can be seen or not. Why run that same exact reducer twice? The answer is “that would be silly”. We should make sure that we collapse by key name for performance reasons, since all reducers are run each time an action is dispatched.

So keeping in mind these two important factors (the ability to add reducers ad hoc, and not running the same reducers repetitively), we arrive at the conclusion that we should add another scoped variable that houses all reducers added to date.

... 
function createStore() { 
 ... 
 var currentReducerSet = {};

 function addReducers(reducers) {
  currentReducerSet = Object.assign(currentReducerSet, reducers);

  currentReducer = function(state, action) {
   var cumulativeState = {};

    for (var key in currentReducerSet) {
    cumulativeState[key] = currentReducerSet[key](state[key], action);
   }
 
   return cumulativeState;
  }

 }
 ...
}
...

The var currentReducerSet is combined with whatever reducers are passed, and duplicate keys are collapsed. We needn’t worry about “losing” a reducer because two reducers will both be the same if they have the same key name. Why is this?

To reiterate, a state tree is a set of key-associated pure reducer functions. A state tree property and a reducer have a 1:1 relationship. There should never be two different reducer functions associated with the same key.

This should hopefully illuminate for you exactly what is expected of reducers: to be a sort of behavioral definition of a specific property. If we have a “loading” property, what we’re saying with its reducer is that “this loading property should respond to this specific set of actions in these particular ways.” We can either directly specify whether something is loading (think action name “START_LOADING”), or we can use it to increment the number of things that are loading by having it respond to the names of actions that we know are asynchronous, such as “LOAD_REMOTE_ITEMS_BEGIN” and “LOAD_REMOTE_ITEMS_END”.
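That second style might look like this (a sketch, reusing the hypothetical action names above):

// Counts in-flight async operations; any value above zero means "loading".
function loading(state, action) {
 state = state || 0;
 if (action.type === 'LOAD_REMOTE_ITEMS_BEGIN') return state + 1;
 if (action.type === 'LOAD_REMOTE_ITEMS_END') return state - 1;
 return state;
}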

Let’s fulfill a few more requirements of this API. We need to be able to add and remove subscribers. Easy:

function createStore() { 
 var subscribers = []; 
 ... 

 function subscribe(fn) { 
  subscribers.push(fn); 
 }

 function unsubscribe(fn) {
  // Guard against unknown functions: splice(-1, 1) would
  // otherwise remove the last subscriber by mistake.
  var index = subscribers.indexOf(fn);
  if (index !== -1) subscribers.splice(index, 1);
 }

 return {
  ...
  subscribe: subscribe,
  unsubscribe: unsubscribe
 };
}

And we need to be able to provide the state when someone asks for it. And we should provide it in a safe way, so we’re going to only provide a copy of it. As above, we’re using a cloneDeep function to handle this so someone can’t accidentally mutate the original state, because in Javascript, as we know, if someone changes the value of a reference in the state object, it will change the store state.

function createStore() { 
 ... 

 function getState() { 
  return cloneDeep(currentState); 
 }

 return {
  ...
  getState: getState
 };
}

And that’s it for creating Redux! At this point, you should have everything you need to be able to have your app handle actions and mutate state in a stable way, the core fundamental ideas behind Redux.

Let’s take a look at the whole thing (with the lodash library):

var _ = require('lodash'); 
var globalStore;

function getInstance(){ 
 if (!globalStore) globalStore = createStore();
 return globalStore;
}

function createStore() { 
 var currentState = {}; 
 var subscribers = []; 
 var currentReducerSet = {}; 
 var currentReducer = function(state, action) { 
  return state; 
 };
 
 function dispatch(action) {
  var prevState = currentState;
  currentState = currentReducer(_.cloneDeep(currentState), action);
  subscribers.forEach(function(subscriber){
   subscriber(currentState, prevState);
  });
 }
 
 function addReducers(reducers) {
  currentReducerSet = _.assign(currentReducerSet, reducers);
  currentReducer = function(state, action) {
   var ret = {};
   _.each(currentReducerSet, function(reducer, key) {
    ret[key] = reducer(state[key], action);
   });
   return ret;
  };
 }
	
 function subscribe(fn) {
  subscribers.push(fn);
 }
	
 function unsubscribe(fn) {
  var index = subscribers.indexOf(fn);
  if (index !== -1) subscribers.splice(index, 1);
 }
	
 function getState() {
  return _.cloneDeep(currentState);
 }
	
 return {
  addReducers: addReducers,
  dispatch: dispatch,
  subscribe: subscribe,
  unsubscribe: unsubscribe,
  getState: getState
 };
}
module.exports = getInstance();
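
And a minimal usage sketch of the module (the reducer, action names, and action shape are hypothetical; as noted earlier, the store never inspects the action itself):

var store = require('./Store');

// A hypothetical list of items that grows on ADD_ITEM actions
store.addReducers({
 items: function(state, action) {
  var items = state || [];
  if (action.type === 'ADD_ITEM') return items.concat([action.item]);
  return items;
 }
});

store.subscribe(function(currentState, prevState) {
 console.log('items:', prevState.items, '->', currentState.items);
});

store.dispatch({ type: 'ADD_ITEM', item: 'foo' }); // logs: items: undefined -> ['foo']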

So what did we learn by rewriting Redux?

We learned a few valuable things from this exercise:

  1. We must protect and stabilize the state of the store. The only way a user should be able to mutate state is through actions.
  2. Reducers are pure functions in a state tree. Your app’s state properties are each represented by a function that provides updates to their state. Each reducer is unique to each state property and vice versa.
  3. The store is singular and contains the entire state of the app. When we use it this way, we can track each and every change to the state of the app.
  4. Reducers can be thought of as behavioral definitions of state tree properties.

RELATED: Leading Quantum Computing Company, IonQ, Selects Jama Connect® to Decrease Review Cycles, Reduce Rework


Bonus section: a React adapter

Having the store is nice, but you’re probably going to want to use it with a framework. React is an obvious choice, as Redux was created as an implementation of Flux, the unidirectional data-flow architecture designed for React. So let’s do that too!

You know what would be cool? Making it a higher-order component, or HOC as you’ll sometimes see them called. We pass an HOC a component, and it creates a new component out of it. It should also be infinitely nestable; that is, HOCs should be able to be nested within each other and still function appropriately. So let’s start with that basis:

Note: Going to switch to ES6 now, because it gives us class syntax, which we need in order to extend React.Component.

import React from 'react';
export default function StoreContainer(Component, reducers) { 
	return class extends React.Component { }
}

When we use StoreContainer, we pass in the Component class (created with React.createClass or by extending React.Component) as the first parameter, and then a reducer state tree like the one we created up above:

// Example of StoreContainer usage 
import StoreContainer from 'StoreContainer'; 
import { myReducer1, myReducer2 } from 'MyReducers';
import MyComponent from 'MyComponent';

StoreContainer(MyComponent, { 
 myReducer1, 
 myReducer2
});

Cool. So now we have a class being created and receiving the original component class and an object containing property-mapped reducers.

So, in order to actually make this component work, we’re going to have to do a few bookkeeping tasks:

  1. Get the initial store state
  2. Bind a subscriber to the component’s setState method
  3. Add the reducers to the store

We can bootstrap these tasks in the constructor lifecycle method of the Component. So let’s start with getting the initial state.

... 
export default function StoreContainer(Component, reducers) { 
 return class extends React.Component { 
 
  constructor(props) { 
   super(props); 
   // We have to call super(props) to initialize the React 
   // component and get a `this` value to work with 
   this.state = store.getState(); 
  } 

 } 
}

Next, we want to subscribe the component’s setState method to the store. This makes the most sense because setting state on the component will then set off the top-down changes the component will broadcast, as we’d want in the Flux model.

We can’t, however, simply hand this.setState to the store’s subscribe method; their parameters don’t line up. The store calls subscribers with the new and old state, while setState expects a state object and, as its second parameter, an optional callback function.

So to solve this, we’ll just create a marshalling function to handle it:

... 
import store from './Store';

function subscriber(currentState, previousState) { 
 this.setState(currentState); 
}

export default function StoreContainer(Component, reducers) { 
 return class extends React.Component { 

  constructor(props) { 
   ... 
   this.instSubscriber = subscriber.bind(this); 
   store.subscribe(this.instSubscriber);
  }

  componentWillUnmount() {
   store.unsubscribe(this.instSubscriber);
  }
 }
}
...

Since the store is a singleton, we can just import that in and call on its API directly.

Why do we have to keep the bound subscriber around? Because binding returns a new function. When the component unmounts, we want to unsubscribe to keep things clean, and the store merely looks for the given function reference in its internal subscribers array and removes it. So we need to hold onto that exact reference in order to identify and remove it later.
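
A quick demonstration of why (this is just how Function.prototype.bind behaves; nothing here is specific to our store):

function subscriber() {}
const obj = {};
const a = subscriber.bind(obj);
const b = subscriber.bind(obj);
console.log(a === b);          // false: every bind() call creates a new function
console.log(a === subscriber); // false: and neither bound copy is the original

If we instead called store.unsubscribe(subscriber.bind(this)) on unmount, we’d be passing yet another new function, and indexOf would never find it.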

One last thing to do in the constructor: add the reducers. This is as simple as passing what we received to the HOC into the store.addReducers method:

... 
export default function StoreContainer(Component, reducers) { 
 return class extends React.Component { 
  ... 
  constructor(props) { 
   ... 
   store.addReducers(reducers); 
  } 
  ... 
 } 
}
...

So now we’re ready to provide the rendering of the component. This is the essence of HOCs. We take the Component we received and render it within the HOC, imbuing it with whatever properties the HOC needs to provide it:

... 
export default function StoreContainer(Component, reducers) { 
 return class extends React.Component { 
  ... 
  render() { 
   return (<Component {...this.props} {...this.state} />); 
  } 
 } 
 }
...

We are “spreading” the properties and state of the HOC down to the Component it wraps. This ensures that whatever properties we pass to the HOC get down to the component it wraps, a vital feature of infinitely nestable HOCs. It may or may not be wise to place the state as properties on the Component, but it worked well in my testing, and it was nice being able to access the state through the this.props object of the wrapped Component, just as you would with a React component that receives data from a parent component.
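
To see that nesting property in action, here’s a hypothetical second HOC wrapped around StoreContainer; the WithLogging helper is invented for this example:

import React from 'react';
import StoreContainer from 'StoreContainer';
import MyComponent from 'MyComponent';
import { myReducer } from 'MyReducers';

// A trivial HOC that logs the props it receives before passing them on
function WithLogging(Component) {
 return class extends React.Component {
  render() {
   console.log('rendering with props:', this.props);
   return (<Component {...this.props} />);
  }
 };
}

const Wrapped = WithLogging(StoreContainer(MyComponent, { myReducer }));
// <Wrapped myProp='foo' /> still delivers myProp (plus the store state)
// to MyComponent, because each layer spreads this.props downward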

Here’s the whole shebang:

import React from 'react';
import store from './Store';
function subscriber(currentState, previousState) { 
 this.setState(currentState);
}

export default function StoreContainer(Component, reducers) {
 return class extends React.Component { 
  
  constructor(props) { 
   super(props); 
   this.state = store.getState(); 
   this.instSubscriber = subscriber.bind(this); 
   store.subscribe(this.instSubscriber);
   store.addReducers(reducers); 
  }
 
  componentWillUnmount() {
   store.unsubscribe(this.instSubscriber);
  }
  
  render() {
   return (<Component {...this.props} {...this.state} />);
  }
 }
}

Using StoreContainer:

import React from 'react';
import StoreContainer from 'StoreContainer'; 
import { myReducer } from 'MyReducers';

class MyComponent extends React.Component { 
 // My component stuff 
}

export default StoreContainer(MyComponent, { myReducer });

Using the Component wrapped by StoreContainer (exactly the same as any normal component):

import MyComponent from 'MyComponent'; 
import ReactDOM from 'react-dom';

ReactDOM.render(<MyComponent myProp='foo' />, document.body);

But you don’t have to define the data basis of your MyComponent immediately or in a long-lived class definition; you can also do it more ephemerally, at the point of use, and perhaps this is wiser for more generalized components:

import StoreContainer from 'StoreContainer'; 
import { myReducer } from 'MyReducers'; 
import GeneralizedComponent from 'GeneralizedComponent'; 
import ReactDOM from 'react-dom';
let StoreContainedGeneralizedComponent = StoreContainer(GeneralizedComponent, { myReducer });
ReactDOM.render(<StoreContainedGeneralizedComponent myProp='foo' />, document.body);

This has the benefit of letting parent components control certain child component properties.

Conclusion

We hope that by gaining a solid understanding of Redux through this blog, teams can enhance their state management and write efficient, scalable code.

In addition to leveraging Redux, teams can further optimize their product development process by utilizing Jama Connect®’s powerful features, such as Live Traceability™ and Traceability Score™, to improve engineering quality and speed up time to market.

Jama Connect empowers teams with increased visibility and control by enabling product development to be synchronized between people, tools, and processes across the end-to-end development lifecycle. Learn more here. 



MOSA


A Nod To MOSA: Deeper Documenting of Architectures May Have Prevented Proposal Loss

Lockheed loses contract award protest in part due to insufficient Modular Open Systems Approach (MOSA) documentation.

On April 6th, the GAO denied the Sikorsky-Boeing team’s protest of the Army’s tiltrotor award to the Textron Bell team. The program, the Future Long-Range Assault Aircraft (FLRAA), is intended to replace the Black Hawk helicopter. Reading the GAO’s decision, it is apparent that a high degree of importance was placed on using a Modular Open Systems Approach (MOSA) as an architecture technique for the design and development. For example, the protest adjudication decision reveals, “…[o]ne of the methods used to ensure the offeror’s proposed approach to the Future Long-Range Assault Aircraft (FLRAA) weapon system meets the Army’s MOSA objectives was to evaluate the offeror’s functional architecture.” Sikorsky failed to “allocate system functions to functional areas of the system” down to the subsystem level in the detail recommended by the MOSA standard, which is why the engineering portion of its proposal was rated Unacceptable.

MOSA will enable aerospace product and systems providers not only to demonstrate conformance to MOSA standards for their products, but also to deliver additional MOSA-conformant products and variants more rapidly. By designing for open standards from the start, organizations can create best-in-class solutions while allowing the acquirer to realize cost savings and cost avoidance through reuse of technology, modules, or elements from any supplier across the acquisition lifecycle.

Examining MOSA

What is a Modular Open Systems Approach (MOSA)?

A Modular Open Systems Approach (MOSA) is a business and technical framework used to develop and acquire complex systems. MOSA emphasizes the use of modules that are designed to work together to create a system that is interoperable, flexible, and upgradeable. To do this, MOSA’s key focus is commonality in modular interface design, with the intent of reducing costs and enhancing sustainability.

More specifically, according to the National Defense Industrial Association (NDIA), “MOSA is seen as a technical design and business strategy used to apply open system concepts to the maximum extent possible, enabling incremental development, enhanced competition, innovation, and interoperability.”

Further, on January 7, 2019, the U.S. Department of Defense (DoD) issued a memo, signed by the Secretaries of the Army, Air Force, and Navy, mandating the use of the Modular Open Systems Approach (MOSA). The memo states that “MOSA supporting standards should be included in all requirements, programming and development activities for future weapon system modifications and new start development programs to the maximum extent possible.”

In fact, this mandate for MOSA is codified in United States law (10 U.S.C. § 2446a(b), Sec. 805), which states that all major defense acquisition programs (MDAPs) are to be designed and developed using a MOSA open architecture.

MOSA has become increasingly important to the DoD, where complex systems such as weapons platforms and communication systems require a high level of interoperability and flexibility. The DoD’s main objective is to ensure systems are designed with highly cohesive, loosely coupled, and severable modules that can be competed separately and acquired from independent vendors. This allows the DoD to acquire systems, subsystems, and capabilities with greater flexibility and competition than previous proprietary programs allowed. However, MOSA can also be applied in other industries, such as healthcare and transportation, where interoperability and flexibility are also important considerations.

The basic idea behind MOSA is to define architectures composed of smaller, more manageable modules that can be developed, tested, and integrated independently. Each module is designed to operate behind a standard interface, allowing it to work with other modules and be easily replaced or upgraded.


RELATED: Streamlining Defense Contract Bid Document Deliverables with Jama Connect®


The DoD requires the following to be met to satisfy a MOSA architecture:

  • Characterize the modularity of every weapons system — this means identifying, defining, and documenting system models and architectures so suppliers will know where to integrate their modules.
  • Define software interfaces between systems and modules.
  • Deliver the interfaces and associated documentation to a government repository.

And, according to the National Defense Authorization Act for Fiscal Year 2021, “the 2021 NDAA and forthcoming guidance will require program officers to identify, define, and document every model, require interfaces for systems and the components they use, and deliver these modular system interfaces and associated documentation to a specific repository.” In short, programs must:

  • Modularize the system
  • Specify what each component does and how it communicates
  • Create interfaces for each system and component
  • Document and share interface information with suppliers

MOSA implies the use of open standards and architectures, which are publicly available and can be used by anyone. This helps to reduce costs, increase competition, and encourage innovation.

Why is MOSA important to complex systems development?

MOSA, a key element of the national defense strategy, is important for complex systems development because it provides a framework for developing systems that are modular, interoperable, and upgradeable. Here are some reasons why:

  • Interoperability: MOSA allows different components of a system to work together seamlessly, even if they are developed by different vendors or organizations. This means that the system can be upgraded or enhanced without having to replace the entire system.
  • Flexibility: MOSA promotes the use of open standards and architectures, which allows for greater flexibility in system development. It also allows for more competition among vendors, which can lead to lower costs and better innovation.
  • Cost-effectiveness: MOSA can reduce costs by allowing organizations to reuse existing components or develop new components that can be integrated into existing systems. It can also reduce the cost of maintenance and upgrades over the lifecycle of the system.
  • Futureproofing: MOSA allows for systems to be upgraded or modified over time, as new technology becomes available. This helps to future-proof the system, ensuring that it can adapt to changing needs and requirements.

RELATED: Digital Engineering Between Government and Contractors


How can Live Traceability™ in Jama Connect® help with a MOSA?

Live Traceability™ in Jama Connect® can help with MOSA by providing mechanisms to establish traces between MOSA architecture elements and interfaces and the requirements and verification & validation data that support them. Live Traceability is the ability to track and record changes to data elements and their relationships in real time. This information can be used to improve documentation of the system design, identify potential issues, and track changes over time.

Here are some specific ways that Live Traceability can help with MOSA:

  • Status monitoring: Live Traceability allows systems engineers to monitor the progress of architecture definition in real-time, identifying issues from a requirements perspective as they arise. This can help to increase efficiency and ensure that the stakeholders are aware of changes as they occur.
  • Digital Engineering: Live Traceability can help with digital engineering by providing mechanisms to capture architectures, requirements, risks, and tests including the traceability between individual elements.
  • Configuration and Change Management: Live Traceability can help with change management by tracking changes to system architectures and interfaces including requirements that are allocated to them. This can help to ensure that changes are properly documented and that they do not impact other parts of the system. Baselining and automatic versioning enable snapshots in time that represent an agreed-upon, reviewed, and approved set of data that have been committed to a specific milestone, phase, or release.
  • Testing and Validation: Live Traceability can support verification and validation, helping to ensure that the system meets its specified requirements and needs. This reduces risk by surfacing issues early in the development process.
  • Future-proofing: Live Traceability can help to future-proof the system by providing a record of system changes and modifications over time. This can help to ensure that the system remains flexible and adaptable to changing needs and requirements.

In summary, Live Traceability in Jama Connect can help with MOSA by providing real-time visibility into the traceability between architectures, interfaces, and requirements. It can help improve documentation of the system design, identify potential issues, and track changes over time, all of which are important considerations for MOSA.