Tag Archive for: Product Development & Management


Jama Software is always looking for news that would benefit and inform our industry partners. As such, we’ve curated a series of customer and industry spotlight articles that we found insightful. In this blog post, we share an article, sourced from NPR, titled “After years of decline, the auto industry in Canada is making a comeback” – originally authored by H.J. Mai and published on March 12, 2023.


After Years of Decline, the Auto Industry in Canada is Making a Comeback

When most people think of Canada, they rarely think of cars. But the country, known for hockey, maple syrup and endless wilderness, is one of the largest car producers in North America. And with the growing importance of electric vehicles, Canada hopes to breathe new life into its automotive industry and maintain a more than 100-year-old tradition.

Canada’s automotive industry is primarily located in Ontario and Quebec, with Windsor, Ontario, claiming the title of Canada’s automotive capital.

“We’ve been the auto capital of Canada since about 1904, when the first auto plant opened in Canada,” said Windsor Mayor Drew Dilkens.

Windsor, just across the river from Detroit, has benefited from its proximity to the United States and the three major carmakers headquartered there.

Stellantis, formerly Fiat Chrysler, and South Korean battery maker LG Energy Solutions (LGES) announced last year that they will invest more than 5 billion Canadian dollars ($3.5 billion) in building a new large-scale battery manufacturing plant in Windsor. The plant is expected to be operational by 2024 and will create an estimated 2,500 jobs.


RELATED: Buyer’s Guide: Selecting a Requirements Management and Traceability Solution for Automotive


“It’s a massive, game-changing investment, and I’m not even sure these two words are big enough to describe how important it is for our community,” Dilkens says. “This will have a generational impact. [Companies] will look at the new world of automotive and will start looking at Windsor Essex as a place to do business.”

Investment by Stellantis and LGES is part of a larger trend that has seen more than CA$17 billion in announced investment in Ontario’s automotive sector since the beginning of 2021.

“Ontario has had the greatest new investment in vehicle production in its history over the past two years,” says Flavio Volpe, president of Canada’s Automotive Parts Manufacturers’ Association.

Most of this investment, worth nearly CA$13 billion, is in electric and battery production. And by passing the Inflation Reduction Act, U.S. lawmakers have given Canada a further boost to its EV ambitions.

“This is good news for Canadians, for our green economy, and for our growing EV manufacturing sector,” Canadian Prime Minister Justin Trudeau said in a tweet shortly after President Biden signed the law.

The law includes tax credits for EV buyers, but only if the car is largely made and assembled in North America, and its battery uses locally mined components. According to GM Canada’s David Paterson, this could give Canada an advantage over the U.S. and Mexico.

“What goes into our [sic] batteries are cathode active materials, which are mainly made of nickel and other critical minerals that we happen to have in abundance here in Canada,” he says.

“As we see less demand for gasoline, we see more demand for minerals, and Canada is an economy built on natural resources.”

In an effort to encourage the shift in the auto industry toward battery-powered EVs, Canada’s federal government along with Ontario’s provincial government have been investing billions of dollars.

“Our incentive is that you have a job because we invest about $2.5 billion in taxpayer money in these [car companies],” says Vic Fedeli, Ontario’s Minister of Economic Development, Job Creation and Trade.

The recent investment streak is a welcome sign for an industry that has gone through many ups and downs. Increased automation and competition from lower-wage regions have led to plant closures and job losses over the past two decades.

“We have been coming from a whole generation since about 2000, watching this critical sector decline. We have seen disinvestment in the sector, we have seen job losses in the sector, we have seen plants closed and communities are basically disappearing,” says Angelo DiCaro, research director for Unifor, a union representing about 230,000 Canadian auto workers.

The North American Free Trade Agreement, or NAFTA for short, contributed to this downturn as car companies moved their assembly lines to places like Mexico or the U.S. Southeast to cut costs. The USMCA, which replaced NAFTA in 2020, has somewhat leveled the playing field by boosting regional content requirements and instituting a minimum wage of at least $16 an hour.

DiCaro says that despite the uncertainty surrounding certain jobs that could be lost in this transition to electric vehicles, Canada’s auto workers have a sense of optimism and hope.


RELATED: Jama Connect® for Automotive Solution Overview


According to government data, the auto sector plays a key role in Canada’s economy, contributing CA$16 billion to its gross domestic product (GDP). With nearly 500,000 direct or indirect jobs, automotive is one of the country’s largest manufacturing sectors and one of its largest export industries.

Volkswagen and its battery company PowerCo announced Monday that they selected Ontario, Canada as the location of Volkswagen’s first cell manufacturing facility in North America.

The new battery plant in Canada will be the group’s third, after Salzgitter, Germany, and Valencia, Spain.

“Canada offers ideal conditions, including the local supply of raw materials and wide access to clean electricity,” the group said in a press release.

Production is expected to start in 2027.

Tesla is another company that has publicly stated it is actively looking at Canada as a potential site for a new battery and/or assembly plant. The company would join Ford, General Motors, Honda, Stellantis and Toyota, which already have production facilities in Ontario.

“The success of the [Ontario] government and the federal government [sic] will not be defined by what we have landed at the moment. It will be whether we can land a sixth automaker or a seventh,” Flavio Volpe says. “It will mean that our vision was worthy of the rhetoric and convince the best automakers in the world that the future runs through Ontario.”



IEC 61508

In this blog, we recap sections from our eBook, “IEC 61508 Overview: The Complete Guide for Functional Safety in Industrial Manufacturing” – Click HERE to read the entire eBook.


Functional Safety Made Simple: A Guide to IEC 61508 for Manufacturing

What Is IEC 61508?

As discussed previously, industrial manufacturing firms need to prevent dangerous failures that may occur with the use of their system. The challenge is that oftentimes systems are incredibly complex with many interdependencies, making it difficult to fully identify every potential safety risk.

According to the International Electrotechnical Commission, leading contributors to failure include:

  • Systematic or random failure of hardware or software
  • Human error
  • Environmental interference, such as temperature, weather, and more
  • Loss of electrical supply or other system disturbance
  • Incorrect system specifications in hardware or software

IEC 61508 creates requirements to ensure that systems are designed, implemented, operated, and maintained at the safety level required to mitigate the most dangerous risks. The international standard is used by a wide range of manufacturers, systems engineers, designers, and industrial companies, many of which are audited for compliance against it. The standard applies to safety-critical products including electrical, electronic, and programmable electronic systems.

Why Was IEC 61508 Developed?

The primary goal of the standard is human safety, and it’s based on a couple of principles, including:

  1. Use of a safety lifecycle. The lifecycle outlines the best practices around identifying risks and mitigating potential design errors.
  2. Probable failure exercises. This assumes that if a device does fail, a “fail-safe” plan is needed.

IEC 61508 applies to all industries; however, even though it covers a broad range of sectors, every industry has its own nuances. As a result, many have developed their own standards based on IEC 61508.

Industry-specific functional safety standards include ones for:

  • Industrial – IEC 61496-1, IEC 61131-6, ISO 13849, IEC 61800-5-2, ISO 13850, IEC 62061, ISO 10218
  • Transportation – EN 5012x, ISO 26262, ISO 25119, ISO 15998
  • Buildings – EN 81, EN 115
  • Medical devices – IEC 60601, IEC 62304
  • Household appliances – IEC 60335, IEC 60730
  • Energy systems and providers – IEC 62109, IEC 61513, IEC 50156, IEC 61511

The standard defines Safety Integrity Levels (SILs), four levels from SIL 1 to SIL 4, which indicate how likely a safety function is to fail dangerously and, therefore, how much risk reduction it must provide.


RELATED: The Top Challenges in Industrial Manufacturing and Consumer Electronic Development


The Seven Parts of IEC 61508

The IEC 61508 standard covers the most common hazards that could occur in the event of a failure. The goal of the standard is to mitigate or reduce failure risk to a specific tolerance level. The standard defines a safety lifecycle with 16 phases and is organized into seven parts:

  • Part 1: General requirements
  • Part 2: Requirements for electrical/electronic/programmable electronic safety-related systems
  • Part 3: Software requirements
  • Part 4: Abbreviations and definitions
  • Part 5: Examples and methods to determine the appropriate safety integrity levels
  • Part 6: Guidelines on how to apply Part 2 and Part 3
  • Part 7: An overview of techniques and measures

The first three parts highlight the standard’s requirements, and the rest explain the guidelines and provide examples of development.

IEC 61508 Certification: Is it Required?

IEC 61508 certification is optional in most cases, unless you contract with a firm that requires it, or it’s required by your local government. Even if it’s not mandatory, achieving certification provides peace of mind and creates a clear path to improving safety. Certification is offered through international agencies specializing in IEC 61508, such as TÜV SÜD. Completing certification provides credibility around your IEC 61508 compliance and is a point of differentiation when bidding on a contract against other contractors.


RELATED: Lessons Learned for Reducing Risk in Product Development


Hazard and Risk Analysis for Determining SIL

Understanding functional safety requires a hazard analysis and risk assessment of the equipment under control (EUC).

The hazard analysis identifies all possible hazards of a product, process, or application. This will determine the functional safety requirements to meet a particular safety standard.

A risk assessment is needed for every hazard that you identify. The risk assessment will evaluate the frequency and likelihood of that hazard occurring, as well as the potential consequences if it does happen.

The risk assessment determines the appropriate SIL, and you can then use either qualitative or quantitative analysis to assess the risk. The guidelines don’t require a specific method of analysis, so use whichever method best fits your process.
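As a rough illustration of the quantitative route, the low-demand-mode probability-of-failure-on-demand (PFDavg) bands associated with each SIL can be expressed as a simple lookup. This is a simplified sketch for intuition only; the standard’s normative tables and your certification body govern the real determination:

```javascript
// Sketch: map an average probability of failure on demand (PFDavg,
// low-demand mode) to a Safety Integrity Level. Illustrative only;
// consult the IEC 61508 normative tables for real assessments.
function silForPfdAvg(pfdAvg) {
  if (pfdAvg >= 1e-5 && pfdAvg < 1e-4) return 4; // SIL 4: highest integrity
  if (pfdAvg >= 1e-4 && pfdAvg < 1e-3) return 3;
  if (pfdAvg >= 1e-3 && pfdAvg < 1e-2) return 2;
  if (pfdAvg >= 1e-2 && pfdAvg < 1e-1) return 1; // SIL 1: lowest integrity
  return 0; // outside the defined SIL bands
}

console.log(silForPfdAvg(5e-4)); // SIL 3
```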

To learn more, download the entire eBook HERE.


Live Traceability

This Features in Five video demonstrates how Jama Connect helps maintain Live Traceability across applications.


Jama Connect® Features in Five: Live Traceability™

Learn how you can supercharge your systems development process! In this blog series, we’re pulling back the curtains to give you a look at a few of the powerful features in Jama Connect®… in under five minutes.

In this Features in Five video, Jama Software® subject matter experts Neil Bjorklund, Sales Manager – Medical, and Steven Pink, Senior Solutions Architect, will demonstrate how Jama Connect helps maintain Live Traceability™ across applications.


VIDEO TRANSCRIPT:

Neil Bjorklund: So, what we wanted to do today is give you all a quick snapshot of what it looks like for Jama to be integrated across systems, and showing how Jama helps you all maintain Live Traceability across applications or what we call a connected digital thread.

As part of our demonstration today, we’re going to show you what that looks like across your V-Model here of system engineering, in which case we’re going to actually make a change from a Windchill item, so an item over in Windchill, making a change to a specification or a part over there. We’re going to trace from that Windchill item over to Jama at the subsystem design output level. So, you’ll be able to see those items synchronized across those applications. We’re then going to perform an Impact Analysis within Jama. So, that’s going to allow you to then visualize, if you were to make a change to a Windchill item, what impact does that have across your full system.

So, we’re going to then see that change cascade up to a system requirement level here. We’re going to then make a change to that system requirement, and Jama is going to have the suspect linking mechanism to be able to then identify what all items downstream could be impacted by this change. In which case, we’re going to show an example where a software requirement must be changed. We’re going to make that change, and we’re going to show that change and then cascade over into Jira.


RELATED: How to Use Requirements Management as an Anchor to Establish Live Traceability in Systems Engineering


Bjorklund: So, the idea here is that, by managing Live Traceability within Jama, maintaining Jama as integrated with other applications or ecosystems, you’re going to be able to visualize that connected digital thread and see changes take place from Windchill into Jama, over then into Jira.

Now, one thing to remember, this integration is very flexible. So, we can integrate from Jama over to Windchill PLM parts, problem reports, change requests, requirements, different folder structures, and so forth. Within the software side, obviously, we’re integrating with Jira, that’s very flexible. But, we can also integrate with other applications like Azure DevOps, PFS, things like that. So, again, this is just one example, just to highlight the flexibility here of Jama, and this workflow.

Steven Pink: Okay, so thank you, Neil, for that. Now, we’re taking a look at a spec here in Windchill. This is what we’re going to be making a change to today. I’m going to go ahead and make an edit to this, and this is what’s going to synchronize across into Jama. So, I’m going to update the description, update the specification within Windchill. We’ve now saved this update within Windchill. We’ve updated the description here, and this is going to synchronize across to Jama in real-time. So, I’m going to switch over to this spec from Windchill that has synced into Jama. I’ll refresh it here, and we’ll see the description has now updated in real-time. If we want to understand the impact this change could potentially have, we’ll use Jama’s Impact Analysis feature. This will allow us to look up and downstream from this specification.

So, based on those relationship rules Neil showed earlier in that Live Traceability in Jama, we can look from a spec, one of these subsystem design outputs, all the way upstream through hardware and software to those higher level system requirements to understand what the potential impact could be. So, I’ll go ahead and run this Impact Analysis. We’ll take a look, and it’ll find everything that’s directly and indirectly connected to this specification from Windchill.


RELATED: How to Use Requirements Management as an Anchor to Establish Live Traceability in Systems Engineering


Pink: We can see hardware requirements, the system requirement, a high-level user need, and maybe the system requirement is impacted by our change. We can click into the system requirement. If we need to make an update to this impacted system requirement, we can come in and modify the description here.

When I save this system requirement, Jama is actually going to identify everything downstream that has been impacted through the Suspect Link feature. So now, we’re flagging these downstream hardware and software requirements that could be impacted by the change we made to this higher-level system requirement. If there’s been an impact to this software requirement, for example, I can click into this software requirement. I can then edit the description to reflect the necessary updates based on that impact assessment.

And now, when I save this software requirement, this has been synchronized with Jira, so we’ll be able to see the updated software requirement updated into Jira in real-time. So, I’m going to switch over to Jira here, and this is that software requirement that we’re synchronizing. And now, we can see the update to that description has synced across to Jira in real-time, providing us Live Traceability between specifications in Windchill, through our requirements in Jama, all the way down through the lower level software and development work occurring in Jira.


RELATED: G2 Again Names Jama Connect® the Standout Leader in Requirements Management Software in their Spring 2023 Grid® Report


Bjorklund: Okay, thank you, Steven, for that. So, this is a quick recap. So, we’ve gone from an item in Windchill. We made that change within Windchill. That change was automatically reflected over into Jama. We then performed Impact Analysis within Jama, made changes across our system-level requirements, which then cascaded changes down into our software requirements over in Jira. Now, again, this is just one example where we’ve taken a change, we’ve integrated Jama with different applications, but Jama has the ability to integrate with all the applications across your product development lifecycle, across that V-Model system engineering.

So, if there are groups that are maybe not using Jira, you’d certainly have the ability to manage change across different applications, and Jama serves as that central system to manage Live Traceability and maintain that connected digital thread. Thank you.


To view more Jama Connect Features in Five topics, visit: Jama Connect Features in Five Video Series



Redux

“What I cannot create, I do not understand.”

Richard Feynman

Redux is pretty simple. You have action creators, actions, reducers, and a store. What’s not so simple is figuring out how to put everything together in the best or most “correct” way. In this blog, we begin by explaining the motivation behind using Redux and highlight its benefits, such as predictable state management and improved application performance. We then delve into the core concepts of Redux, including actions, reducers, and the store, providing a step-by-step guide on how to implement Redux in a JavaScript application. Throughout, we emphasize the importance of understanding Redux’s underlying principles and showcase code examples to illustrate its usage.

To rewrite Redux, we used a wonderful article by Lin Clark as a reference point, as well as the Redux codebase itself, and of course, the Redux docs.

You may note we’re using traditional pre-ES6 JavaScript throughout this article. That’s because everyone who knows JavaScript knows pre-ES6 JS, and we want to make sure we don’t lose anyone to syntax unfamiliarity.

The Store

Redux, like any data layer, starts with a place to store information. By its first principle, Redux is a singular shared data store, described by its documentation as a “Single source of truth,” so we’ll start by making the store a singleton:

var store;

function getInstance() { 
 if (!store) store = createStore();
 return store;
}

function createStore() { 
 return {}; 
}

module.exports = getInstance();

The dispatcher

The next principle is that the state of the store can only change in one way: through the dispatching of actions. So let’s go ahead and write a dispatcher.

However, in order to update state in this dispatcher, we’re going to have to have state to begin with, so let’s create a simple object that contains our current state.

function createStore() { 
 var currentState = {}; 
}

Also, to dispatch an action, we need a reducer to dispatch it to. Let’s create a default one for now. A reducer receives the current state and an action and then returns a new version of the state based on what the action dictates:

function createStore() { 
 var currentState = {}; 

 var currentReducer = function(state, action) { 
  return state; 
 } 
}

This is just a default function to keep the app from crashing until we formally assign reducers, so we’re going to go ahead and just return the state as is. Essentially a “noop”.

The store is going to need a way to notify interested parties that an update has been dispatched, so let’s create an array to house subscribers:

function createStore() { 
 var currentState = {}; 
 
 var currentReducer = function(state, action) { 
  return state; 
 } 
 
 var subscribers = []; 
}

Cool! OK, now we can finally put that dispatcher together. As we said above, actions are handed to reducers along with state, and we get a new state back from the reducer. If we want to retain the original state before the change for comparison purposes, it probably makes sense to temporarily store it.

Since an action is dispatched, we can safely assume the parameter a dispatcher receives is an action.

function createStore() { 
 var currentState = {}; 

 var currentReducer = function(state, action) { 
  return state; 
 } 

 var subscribers = [];

 function dispatch(action) {
  var prevState = currentState;
 }

 return {
  dispatch: dispatch
 };
}

We also have to expose the dispatch function so it can actually be used when the store is imported. Kind of important.

So, we’ve created a reference to the old state. We now have a choice: we could either leave it to reducers to copy the state and return it, or we can do it for them. Since receiving a changed copy of the current state is part of the philosophical basis of Redux, we’re going to go ahead and just hand the reducers a copy to begin with.

function createStore() { 
 var currentState = {}; 

 var currentReducer = function(state, action) { 
  return state; 
 } 

 var subscribers = [];

 function dispatch(action) {
  var prevState = currentState;
  currentState = currentReducer(cloneDeep(currentState), action);
 }

 return {
  dispatch: dispatch
 };
}

We hand a copy of the current state and the action to the currentReducer, which uses the action to figure out what to do with the state. What is returned is a changed version of the copied state, which we then use to update the state. Also, we’re using a generic cloneDeep implementation (in this case, we used lodash’s) to handle copying the state completely. Simply using Object.assign wouldn’t be suitable because it performs only a shallow copy: objects nested inside the top-level properties are still shared by reference.
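To see why, here’s a two-line demonstration of the shallow-copy pitfall:

```javascript
// Object.assign copies only the top level; nested objects are still
// shared by reference, so mutating the "copy" mutates the original.
var original = { todos: { count: 1 } };
var shallow = Object.assign({}, original);

shallow.todos.count = 2;
console.log(original.todos.count); // 2 -- the original state was mutated
```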

Now that we have this updated state, we need to alert any part of the app that cares. That’s where the subscribers come in. We simply call each subscribing function and hand it the current state as well as the previous state, in case a subscriber wants to do delta comparisons:

function createStore() { 
 var currentState = {}; 

 var currentReducer = function(state, action) { 
  return state; 
 } 

 var subscribers = []; 

 function dispatch(action) {
  var prevState = currentState;
  currentState = currentReducer(cloneDeep(currentState), action);
  subscribers.forEach(function(subscriber){
   subscriber(currentState, prevState);
  });
 }

 return {
  dispatch: dispatch
 };
}

Of course, none of this really does any good with just that default noop reducer. What we need is the ability to add reducers, as well.


RELATED: New Research Findings: The Impact of Live Traceability™ on the Digital Thread


Adding Reducers

In order to develop an appropriate reducer-adding API, let’s revisit what a reducer is, and how we might expect reducers to be used.

In the Three Principles section of Redux’s documentation, we can find this philosophy:

“To specify how the state tree is transformed by actions, you write pure reducers.”

So what we want to accommodate is something that looks like a state tree, but where the properties of the state are assigned functions that purely change their state.

{ 
 stateProperty1: function(state, action) { 
  // does something with state and then returns it
 }, 
 stateProperty2: function(state, action) { 
  // same 
 }, ... 
}

Yeah, that looks about right. We want to take this state tree object and run each of its reducer functions every time an action is dispatched.

We have currentReducer defined in the scope, so let’s just create a new function and assign it to that variable. This function will take the pure reducers we passed to it in the state tree object, and run each one, returning the outcome of the function to the key it was assigned.

function createStore() { 
 var currentReducer = function(state, action) { 
  return state; 
 } ...

 function addReducers(reducers) {
  currentReducer = function(state, action) {
   var cumulativeState = {};
   
    for (var key in reducers) {
    cumulativeState[key] = reducers[key](state[key], action);
   }
  
   return cumulativeState;
  }
 }
}

Something to note here: we’re only ever handing a subsection of the state to each reducer, keyed by its associated property name. This helps simplify the reducer API and also keeps us from accidentally changing other areas of the global state. Your reducers should only be concerned with their own particular state, but that doesn’t preclude your reducers from taking advantage of other properties in the store.

As an example, think of a list of data, let’s say with a name “todoItems”. Now consider ways you might sort that data: by completed tasks, by date created, etc. You can store the way you sort that data into separate reducers (byCompleted and byCreated, for example) that contain ordered lists of IDs from the todoItems data, and associate them when you go to show them in the UI. Using this model, you can even reuse the byCreated property for other types of data aside from todoItems! This is definitely a pattern recommended in the Redux docs.
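As a rough sketch of that pattern (the reducer and action names here are hypothetical):

```javascript
// Hypothetical reducers illustrating the ordered-ID pattern: todoItems
// stores records keyed by id, while byCreated stores only an ordered
// list of ids that the UI can use to render a sorted view.
function todoItems(state, action) {
  state = state || {};
  if (action.type === 'ADD_TODO') {
    state[action.id] = { id: action.id, text: action.text };
  }
  return state;
}

function byCreated(state, action) {
  state = state || [];
  if (action.type === 'ADD_TODO') {
    state.push(action.id); // newest ids go last
  }
  return state;
}
```

At render time, the UI walks byCreated and looks each id up in todoItems to produce the sorted list.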

Now, this is fine if we add just one single set of reducers to the store, but in an app of any substantive size, that simply won’t be the case. So we should be able to accommodate different portions of the app adding their own reducers. And we should also try to be performant about it; that is, we shouldn’t run the same reducers twice.

// State tree 1 
{ 
 visible: function(state, action) { 
  // Manage visibility state 
 } ... 
}
// State tree 2
{ 
 visible: function(state, action) { 
  // Manage visibility state (should be the same function as above) 
 } ... 
}

In the above example, you might imagine two separate UI components having, say, a visibility reducer that manages whether something can be seen or not. Why run that same exact reducer twice? The answer is “that would be silly”. We should make sure that we collapse by key name for performance reasons, since all reducers are run each time an action is dispatched.

So keeping in mind these two important factors — the ability to add reducers ad hoc and not adding repetitive reducers — we arrive at the conclusion that we should add another scoped variable that houses all reducers added to date.

... 
function createStore() { 
 ... 
 var currentReducerSet = {};

 function addReducers(reducers) {
  currentReducerSet = Object.assign(currentReducerSet, reducers);

  currentReducer = function(state, action) {
   var cumulativeState = {};

    for (var key in currentReducerSet) {
    cumulativeState[key] = currentReducerSet[key](state[key], action);
   }
 
   return cumulativeState;
  }

 }
 ...
}
...

The currentReducerSet variable is merged with whatever reducers are passed in, and duplicate keys are collapsed. We needn’t worry about “losing” a reducer, because two reducers with the same key name should be the same function. Why is this?

To reiterate, a state tree is a set of key-associated pure reducer functions. A state tree property and a reducer have a 1:1 relationship. There should never be two different reducer functions associated with the same key.

This should hopefully illuminate for you exactly what is expected of reducers: to be a sort of behavioral definition of a specific property. If we have a “loading” property, what we’re saying with our reducer is that “this loading property should respond to this specific set of actions in these particular ways.” We can either directly specify whether something is loading — think an action named “START_LOADING” — or we can use the property to count the number of things that are loading by having it respond to the action names of operations we know are asynchronous, such as “LOAD_REMOTE_ITEMS_BEGIN” and “LOAD_REMOTE_ITEMS_END”.
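A counting version of such a “loading” reducer might look like this (a sketch; the action names are the illustrative ones above):

```javascript
// Hypothetical "loading" reducer: counts in-flight async operations by
// responding to the begin/end actions of known-asynchronous work.
function loading(state, action) {
  state = state || 0;
  if (action.type === 'LOAD_REMOTE_ITEMS_BEGIN') return state + 1;
  if (action.type === 'LOAD_REMOTE_ITEMS_END') return Math.max(0, state - 1);
  return state;
}
```

The UI can then treat any value above zero as “something is loading.”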

Let’s fulfill a few more requirements of this API. We need to be able to add and remove subscribers. Easy:

function createStore() { 
 var subscribers = []; 
 ... 

 function subscribe(fn) { 
  subscribers.push(fn); 
 }

 function unsubscribe(fn) {
  subscribers.splice(subscribers.indexOf(fn), 1);
 }

 return {
  ...
  subscribe: subscribe,
  unsubscribe: unsubscribe
 };
}

And we need to be able to provide the state when someone asks for it. We should provide it in a safe way, so we’re only going to provide a copy of it. As above, we’re using a cloneDeep function so no one can accidentally mutate the original state: in JavaScript, if someone changes the value of a reference held in the state object, it will change the store’s state.

function createStore() { 
 ... 

 function getState() { 
  return cloneDeep(currentState); 
 }

 return {
  ...
  getState: getState
 };
}

And that’s it for creating Redux! At this point, you should have everything you need to be able to have your app handle actions and mutate state in a stable way, the core fundamental ideas behind Redux.

Let’s take a look at the whole thing (with the lodash library):

var _ = require('lodash'); 
var globalStore;

function getInstance(){ 
 if (!globalStore) globalStore = createStore();
 return globalStore;
}

function createStore() { 
 var currentState = {}; 
 var subscribers = []; 
 var currentReducerSet = {}; 
 var currentReducer = function(state, action) { 
  return state; 
 };
 
 function dispatch(action) {
  var prevState = currentState;
  currentState = currentReducer(_.cloneDeep(currentState), action);
  subscribers.forEach(function(subscriber){
   subscriber(currentState, prevState);
  });
 }
 
 function addReducers(reducers) {
  currentReducerSet = _.assign(currentReducerSet, reducers);
  currentReducer = function(state, action) {
   var ret = {};
   _.each(currentReducerSet, function(reducer, key) {
    ret[key] = reducer(state[key], action);
   });
   return ret;
  };
 }
	
 function subscribe(fn) {
  subscribers.push(fn);
 }
	
 function unsubscribe(fn) {
  subscribers.splice(subscribers.indexOf(fn), 1);
 }
	
 function getState() {
  return _.cloneDeep(currentState);
 }
	
 return {
  addReducers: addReducers,
  dispatch: dispatch,
  subscribe: subscribe,
  unsubscribe: unsubscribe,
  getState: getState
 };
}
module.exports = getInstance();
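To see the store in action, here’s a small usage sketch. To keep the example self-contained, we inline a trimmed copy of the store with a JSON round-trip standing in for lodash’s cloneDeep (adequate for plain data); the counter reducer and action name are illustrative:

```javascript
// Usage sketch of the store API above. The store body is a trimmed,
// self-contained copy; JSON.parse/stringify stands in for _.cloneDeep.
function createStore() {
  var currentState = {};
  var subscribers = [];
  var currentReducerSet = {};
  var currentReducer = function(state, action) { return state; };

  function cloneDeep(obj) { return JSON.parse(JSON.stringify(obj)); }

  function addReducers(reducers) {
    currentReducerSet = Object.assign(currentReducerSet, reducers);
    currentReducer = function(state, action) {
      var ret = {};
      for (var key in currentReducerSet) {
        ret[key] = currentReducerSet[key](state[key], action);
      }
      return ret;
    };
  }

  function dispatch(action) {
    var prevState = currentState;
    currentState = currentReducer(cloneDeep(currentState), action);
    subscribers.forEach(function(fn) { fn(currentState, prevState); });
  }

  return {
    addReducers: addReducers,
    dispatch: dispatch,
    subscribe: function(fn) { subscribers.push(fn); },
    getState: function() { return cloneDeep(currentState); }
  };
}

var store = createStore();
store.addReducers({
  counter: function(state, action) {
    state = state || 0;
    return action.type === 'INCREMENT' ? state + 1 : state;
  }
});
store.dispatch({ type: 'INCREMENT' });
store.dispatch({ type: 'INCREMENT' });
console.log(store.getState().counter); // 2
```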

So what did we learn by rewriting Redux?

We learned a few valuable things in this experience:

  1. We must protect and stabilize the state of the store. The only way a user should be able to mutate state is through actions.
  2. Reducers are pure functions in a state tree. Your app’s state properties are each represented by a function that provides updates to their state. Each reducer is unique to each state property and vice versa.
  3. The store is singular and contains the entire state of the app. When we use it this way, we can track each and every change to the state of the app.
  4. Reducers can be thought of as behavioral definitions of state tree properties.
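Point 2 is worth a concrete illustration. A hypothetical counter reducer (the name and action types are ours, not from the original) shows what “pure function for one state property” means: same inputs always give the same output, no mutation, no side effects:

```javascript
// A pure reducer for a single state property. It never mutates its input;
// it returns a value derived only from (state, action).
function counter(state, action) {
 if (state === undefined) state = 0;  // initial value for this property
 switch (action.type) {
  case 'INCREMENT': return state + 1;
  case 'DECREMENT': return state - 1;
  default:          return state;     // unknown actions leave state alone
 }
}

counter(5, { type: 'INCREMENT' }); // → 6, every time it is called
```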

RELATED: Leading Quantum Computing Company, IonQ, Selects Jama Connect® to Decrease Review Cycles, Reduce Rework


Bonus section: a React adapter

Having the store is nice, but you’re probably going to want to use it with a framework. React is an obvious choice, since Redux was created as an implementation of Flux, the unidirectional data architecture associated with React. So let’s do that too!

You know what would be cool? Making it a higher-order component, or HOC as you’ll sometimes see them called. We pass an HOC a component, and it creates a new component out of it. HOCs should also be infinitely nestable; that is, HOCs should work correctly when wrapped inside one another. So let’s start with that basis:

Note: We’re switching to ES6 now, because it gives us class extension, which we’ll need in order to extend React.Component.

import React from 'react';
export default function StoreContainer(Component, reducers) { 
	return class extends React.Component { }
}

When we use StoreContainer, we pass in the Component class (created with React.createClass or by extending React.Component) as the first parameter, and then a reducer state tree like the one we created above:

// Example of StoreContainer usage 
import StoreContainer from 'StoreContainer'; 
import { myReducer1, myReducer2 } from 'MyReducers';

StoreContainer(MyComponent, { 
 myReducer1, 
 myReducer2
});

Cool. So now we have a class being created and receiving the original component class and an object containing property-mapped reducers.

So, in order to actually make this component work, we’re going to have to do a few bookkeeping tasks:

  1. Get the initial store state
  2. Bind a subscriber to the component’s setState method
  3. Add the reducers to the store

We can bootstrap these tasks in the constructor lifecycle method of the Component. So let’s start with getting the initial state.

... 
export default function StoreContainer(Component, reducers) { 
 return class extends React.Component { 
 
  constructor(props) { 
   super(props); 
   // We have to call this to create the initial React 
   // component and get a `this` value to work with 
   this.state = store.getState(); 
  } 

 } 
}

Next, we want to subscribe the component’s setState method to the store. This makes the most sense because setting state on the component will then set off the top-down changes the component will broadcast, as we’d want in the Flux model.

We can’t, however, simply pass this.setState to the store’s subscribe method: their parameters don’t line up. The store sends the new and previous state, while setState expects the new state first and accepts only an optional callback as its second parameter.

So to solve this, we’ll just create a marshalling function to handle it:

... 
import store from './Store';

function subscriber(currentState, previousState) { 
 this.setState(currentState); 
}

export default function StoreContainer(Component, reducers) { 
 return class extends React.Component { 

  constructor(props) { 
   ... 
   this.instSubscriber = subscriber.bind(this); 
   store.subscribe(this.instSubscriber);
  }

  componentWillUnmount() {
   store.unsubscribe(this.instSubscriber);
  }
 }
}
...

Since the store is a singleton, we can just import that in and call on its API directly.

Why do we have to keep the bound subscriber around? Because binding returns a new function. When the component unmounts, we want to unsubscribe to keep things clean, and the store merely looks for the function reference in its internal subscribers array and removes it, so we must hold on to that exact reference in order to identify and remove it later.
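This is plain Function.prototype.bind behavior, which a couple of lines make concrete (no store needed):

```javascript
// bind returns a *new* function on every call, so two binds of the same
// function are never equal:
function subscriber() {}
var obj = {};
subscriber.bind(obj) === subscriber.bind(obj); // false: two distinct functions

// Keeping the one bound reference around lets indexOf find it again:
var bound = subscriber.bind(obj);
var subscribers = [bound];
subscribers.indexOf(bound);                // 0: found, can be spliced out
subscribers.indexOf(subscriber.bind(obj)); // -1: a different function
```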

One last thing to do in the constructor: add the reducers. This is as simple as passing what we received to the HOC into the store.addReducers method:

... 
export default function StoreContainer(Component, reducers) { 
 return class extends React.Component { 
  ... 
  constructor(props) { 
   ... 
   store.addReducers(reducers); 
  } 
  ... 
 } 
}
...

So now we’re ready to provide the rendering of the component. This is the essence of HOCs. We take the Component we received and render it within the HOC, imbuing it with whatever properties the HOC needs to provide it:

... 
export default function StoreContainer(Component, reducers) { 
 return class extends React.Component { 
  ... 
  render() { 
   return (<Component {...this.props} {...this.state} />); 
  } 
 } 
} 			
...

We are “spreading” the properties and state of the HOC down to the Component it wraps. This ensures that whatever properties we pass to the HOC reach the component it wraps, a vital feature of infinitely nestable HOCs. It may or may not be wise to place the state as properties on the Component, but it worked well in my testing, and it was nice to be able to access the state through the wrapped Component’s this.props object, as you normally would with a React component that receives data from a parent component.

Here’s the whole shebang:

import React from 'react';
import store from './Store';
function subscriber(currentState, previousState) { 
 this.setState(currentState);
}

export default function StoreContainer(Component, reducers) {
 return class extends React.Component { 
  
  constructor(props) { 
   super(props); 
   this.state = store.getState(); 
   this.instSubscriber = subscriber.bind(this); 
   store.subscribe(this.instSubscriber);
   store.addReducers(reducers); 
  }
 
  componentWillUnmount() {
   store.unsubscribe(this.instSubscriber);
  }
  
  render() {
   return (<Component {...this.props} {...this.state} />);
  }
 }
}

Implementation of using StoreContainer:

import StoreContainer from 'StoreContainer'; 
import { myReducer } from 'MyReducers';
class MyComponent extends React.Component { 
 // My component stuff 
}
export default StoreContainer(MyComponent, { myReducer });

Implementation of using the Component that uses StoreContainer (exactly the same as normal):

import MyComponent from 'MyComponent'; 
import ReactDOM from 'react-dom';

ReactDOM.render(<MyComponent myProp='foo' />, document.body);

But you don’t have to define the data basis of your MyComponent immediately or in a long-lasting class definition; you could also do it more ephemerally, in implementation, and perhaps this is wiser for more generalized components:

import StoreContainer from 'StoreContainer'; 
import { myReducer } from 'MyReducers'; 
import GeneralizedComponent from 'GeneralizedComponent'; 
import ReactDOM from 'react-dom';
let StoreContainedGeneralizedComponent = StoreContainer(GeneralizedComponent, { myReducer });
ReactDOM.render(<StoreContainedGeneralizedComponent myProp='foo' />, document.body);

This has the benefit of letting parent components control certain child component properties.

Conclusion

By gaining a solid understanding of Redux through this blog, we hope teams can enhance their state management and write efficient, scalable code.

In addition to leveraging Redux, teams can further optimize their product development process by utilizing Jama Connect®‘s powerful features, such as Live Traceability™ and Traceability Score™, to improve engineering quality and speed up time to market.

Jama Connect empowers teams with increased visibility and control by enabling product development to be synchronized between people, tools, and processes across the end-to-end development lifecycle. Learn more here. 



MOSA


A Nod To MOSA: Deeper Documenting of Architectures May Have Prevented Proposal Loss

Lockheed loses contract award protest in part due to insufficient Modular Open Systems Approach (MOSA) documentation.

On April 6th, the GAO handed down a denial of the Sikorsky-Boeing team’s protest of the Army’s tiltrotor award to the Textron Bell team. The program, the Future Long Range Assault Aircraft (FLRAA), is intended to replace the Black Hawk helicopter. In reading the GAO’s decision, it is apparent that a high degree of importance was placed on using a Modular Open Systems Approach (MOSA) as an architecture technique for design and development. For example, the protest adjudication decision reveals, “…[o]ne of the methods used to ensure the offeror’s proposed approach to the Future Long-Range Assault Aircraft (FLRAA) weapon system meets the Army’s MOSA objectives was to evaluate the offeror’s functional architecture.” Sikorsky failed to “allocate system functions to functional areas of the system” in enough detail, down to the subsystem level as recommended by the MOSA standard, which is why the engineering portion of its proposal was rated Unacceptable.

MOSA will enable aerospace products and systems providers to not only demonstrate conformance to MOSA standards for their products but allow them to deliver additional MOSA-conformant products and variants more rapidly. By designing for open standards from the start, organizations can create best-in-class solutions while allowing the acquirer to enable cost savings and avoidance through reuse of technology, modules, or elements from any supplier via the acquisition lifecycle.

Examining MOSA

What is a Modular Open Systems Approach (MOSA)?

A Modular Open Systems Approach (MOSA) is a business and technical framework used to develop and acquire complex systems. MOSA emphasizes the use of modules designed to work together to create a system that is interoperable, flexible, and upgradeable. To do this, MOSA focuses on modular interface commonality, with the intent of reducing costs and enhancing sustainability.

More specifically, according to the National Defense Industrial Association (NDIA), “MOSA is seen as a technical design and business strategy used to apply open system concepts to the maximum extent possible, enabling incremental development, enhanced competition, innovation, and interoperability.”

Further, on January 7, 2019, the U.S. Department of Defense (DoD) issued a memo, signed by the Secretaries of the Army, Air Force, and Navy, mandating the use of the Modular Open Systems Approach (MOSA). The memo states that “MOSA supporting standards should be included in all requirements, programming and development activities for future weapon system modifications and new start development programs to the maximum extent possible.”

In fact, this mandate for MOSA is even codified into a United States law (Title 10 U.S.C. 2446a.(b), Sec 805) that states all major defense acquisition programs (MDAP) are to be designed and developed using a MOSA open architecture.

MOSA has become increasingly important to the DoD, where complex systems such as weapons platforms and communication systems require a high level of interoperability and flexibility. The DoD’s main objective is to ensure systems are designed with highly cohesive, loosely coupled, and severable modules that can be competed separately and acquired from independent vendors. This allows the DoD to acquire systems, subsystems, and capabilities with an increased level of flexibility and competition compared with previous proprietary programs. However, MOSA can also be applied to other industries, such as healthcare and transportation, where interoperability and flexibility are also important considerations.

The basic idea behind MOSA is to define architectures composed of smaller, more manageable modules that can be developed, tested, and integrated independently. Each module is designed to operate through a standard interface, allowing it to work with other modules and be easily replaced or upgraded.


RELATED: Streamlining Defense Contract Bid Document Deliverables with Jama Connect®


The DOD requires the following to be met to satisfy a MOSA architecture:

  • Characterize the modularity of every weapons system — this means identifying, defining, and documenting system models and architectures so suppliers will know where to integrate their modules.
  • Define software interfaces between systems and modules.
  • Deliver the interfaces and associated documentation to a government repository.

And, according to the National Defense Authorization Act for Fiscal Year 2021, “the 2021 NDAA and forthcoming guidance will require program officers to identify, define, and document every model, require interfaces for systems and the components they use, and deliver these modular system interfaces and associated documentation to a specific repository.” In short, program offices must:

  • Modularize the system
  • Specify what each component does and how it communicates
  • Create interfaces for each system and component
  • Document and share interface information with suppliers

MOSA implies the use of open standards and architectures, which are publicly available and can be used by anyone. This helps to reduce costs, increase competition, and encourage innovation.

Why is MOSA important to complex systems development?

MOSA, an important element of the national defense strategy, is important for complex systems development because it provides a framework for developing systems that are modular, interoperable, and upgradeable. Here are some reasons why MOSA is important:

  • Interoperability: MOSA allows different components of a system to work together seamlessly, even if they are developed by different vendors or organizations. This means that the system can be upgraded or enhanced without having to replace the entire system.
  • Flexibility: MOSA promotes the use of open standards and architectures, which allows for greater flexibility in system development. It also allows for more competition among vendors, which can lead to lower costs and better innovation.
  • Cost-effectiveness: MOSA can reduce costs by allowing organizations to reuse existing components or develop new components that can be integrated into existing systems. It can also reduce the cost of maintenance and upgrades over the lifecycle of the system.
  • Futureproofing: MOSA allows for systems to be upgraded or modified over time, as new technology becomes available. This helps to future-proof the system, ensuring that it can adapt to changing needs and requirements.

RELATED: Digital Engineering Between Government and Contractors


How can Live Traceability™ in Jama Connect® help with a MOSA?

Live Traceability™ in Jama Connect® can help with MOSA by providing mechanisms to establish traces between MOSA architecture elements and interfaces, and the requirements and verification & validation data that support them. Live Traceability is the ability to track and record changes to data elements and their relationships in real-time. This information can be used to improve documenting system design, identify potential issues, and track changes over time.

Here are some specific ways that Live Traceability can help with MOSA:

  • Status monitoring: Live Traceability allows systems engineers to monitor the progress of architecture definition in real-time, identifying issues from a requirements perspective as they arise. This can help to increase efficiency and ensure that the stakeholders are aware of changes as they occur.
  • Digital Engineering: Live Traceability can help with digital engineering by providing mechanisms to capture architectures, requirements, risks, and tests including the traceability between individual elements.
  • Configuration and Change Management: Live Traceability can help with change management by tracking changes to system architectures and interfaces including requirements that are allocated to them. This can help to ensure that changes are properly documented and that they do not impact other parts of the system. Baselining and automatic versioning enable snapshots in time that represent an agreed-upon, reviewed, and approved set of data that have been committed to a specific milestone, phase, or release.
  • Testing and Validation: Live Traceability can help with verification and validation to ensure that the system meets specified requirements and needs. This can help reduce risk by identifying issues early in the development process and ensuring that the system meets its requirements.
  • Future-proofing: Live Traceability can help to future-proof the system by providing a record of system changes and modifications over time. This can help to ensure that the system remains flexible and adaptable to changing needs and requirements.

In summary, Live Traceability in Jama Connect can help with MOSA by providing real-time visibility into the traceability between architectures, interfaces, and requirements. It can help to improve documenting the system design, identify potential issues, and track changes over time, which are all important considerations for MOSA.



TMF

What is a Trial Master File in the Medical Device Industry?

A Trial Master File, also known as a TMF, is a collection of records and documentation about the creation, evaluation, and regulatory approval of a medical device. It shows the quality control procedures used in the device’s design, production, and testing to make sure it meets all applicable regulations. Regulators look at the TMF during inspections and audits to see if the device is in compliance.

How is a Clinical Trial Master File (TMF) similar to a Trial Master File?

A Clinical Trial Master File (TMF) is similar to a Trial Master File in that they are both collections of documents and records related to a specific project. However, while a Trial Master File pertains to the development, testing, and regulatory approval of a medical device, a Clinical Trial Master File pertains to the clinical trials conducted to evaluate the safety and efficacy of a medical device, pharmaceutical product, or treatment.

Both types of TMFs provide evidence of the processes and procedures used during the development and testing phases, and both are subject to review by regulatory agencies during inspections and audits.

What is an Electronic Trial Master File?

An Electronic Trial Master File (eTMF) is an electronic version of a TMF that stores documents and records generated during the clinical trial process. eTMFs can replace paper-based TMFs and provide a more efficient and effective way to manage the vast amount of information generated during a clinical trial. Using an eTMF is becoming more common in the clinical trial industry due to its many benefits over paper-based TMFs, including improved efficiency, increased security and accessibility, and enhanced regulatory compliance.

To achieve compliance, organizations need defined processes for development and production and detailed traceability, from the high-level user needs through to test management. Documentation is a large part of proving compliance, and Jama Connect® makes it easy to compile the necessary documentation, like eTMFs. By automating the process, teams can focus on what’s important and avoid potential errors.


RELATED: Buyer’s Guide: Selecting a Requirements Management and Traceability Solution for Medical Device & Life Sciences


What types of regulations are TMFs or eTMFs expected to meet?

Both Clinical Trial Master File (TMF) and Electronic Trial Master File (eTMF) must adhere to various regulatory requirements depending on the jurisdiction in which the clinical trial is conducted. Some of the common regulations that a TMF or eTMF must comply with include:

  • International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) guidelines: This is a set of global guidelines for the development, registration, and post-approval of pharmaceuticals.
  • Good Clinical Practice (GCP) guidelines: This is an international ethical and scientific quality standard for designing, conducting, recording, and reporting clinical trials that involve the participation of human subjects.
  • The Food and Drug Administration (FDA) 21 CFR Part 11: This is a regulation that establishes the criteria under which electronic records and electronic signatures are considered trustworthy, reliable, and equivalent to paper records.
  • Health Insurance Portability and Accountability Act (HIPAA): This is a US federal law that requires the protection and confidential handling of personal health information (PHI) stored in electronic form.
  • European Union Clinical Trials Regulation (EU CTR): This is a regulation that governs the conduct of clinical trials in the European Union and aims to harmonize the regulatory requirements across EU Member States.

How can a TMF help an organization with successful product development and management?

It is important for the trial sponsor, sponsor’s representative or the CRO to ensure that the TMF or eTMF meets all relevant regulatory requirements to ensure the integrity and quality of the clinical trial data.

A Clinical Trial Master File (TMF) can help an organization with successful product management by providing a centralized repository of all the relevant documentation and information related to the development and testing of a product. The TMF helps to ensure that all necessary documentation is captured and easily accessible, which can help to:

  • Streamline the development process
  • Ensure regulatory compliance
  • Improve collaboration and communication
  • Facilitate post-market monitoring

RELATED: [Webinar Recap] An Overview of the EU Medical Device Regulation (MDR) and In-Vitro Device Regulation (IVDR)


Overall, a well-managed TMF can play a critical role in the successful development, testing, and management of a product, by providing a comprehensive and centralized record of all relevant information and documentation.

Note: This article was drafted with the aid of AI. Additional content, edits for accuracy, and industry expertise by Decoteau Wilkerson, McKenzie Jonsson, and Vincent Balgos.



ISO 24089

ISO 24089, developed by the International Organization for Standardization (ISO), is a standard that provides guidelines for managing software updates in a methodical and orderly way. The framework it specifies for managing the software update process covers planning, testing, deployment, and monitoring. This post highlights the main requirements and advantages of ISO 24089 as it relates to software update management systems.

Software is a crucial building block of modern connected and automated vehicles. Once a vehicle is sold to the customer, it enters its utilization phase, and software updates are needed to keep it current: to roll out new features, eliminate defects and bugs, and, most importantly, redress security vulnerabilities. These updates are in most cases delivered remotely through over-the-air (OTA) technologies, so there is no need to take the car to a workshop to install them. OTA delivery, however, makes the process vulnerable, and a proper framework is needed to organize it and ensure the right updates reach the right vehicles. Companies producing cars must therefore have a software update management system; an organization that does not manage software upgrades properly runs the risk of security flaws, software bugs, and compatibility problems. The UNECE (United Nations Economic Commission for Europe) has put a new regulation, R156, in place to regulate software updates and software update management systems, which the regulation makes mandatory for type approval in regulated markets. The goal of ISO 24089 is to provide a thorough method for managing software updates that reduces risk and guarantees that updates are implemented consistently and efficiently, supporting compliance with UNECE R156.


RELATED: Buyer’s Guide: Selecting a Requirements Management and Traceability Solution for Software Development


The framework of ISO 24089 revolves around a list of conditions that must be fulfilled in order to comply with the standard. These prerequisites consist of:

  1. Policy Planning: Establishing a policy for software updates and creating a plan for handling updates are requirements for the organization. The goals and parameters of the software update management system should be specified in the policy, along with the roles and duties of the various participants.
  2. Risk Management: The company must evaluate the risks posed by software updates and put precautions in place to reduce those risks. This entails locating potential security gaps and making sure upgrades don’t interfere with business as usual.
  3. Testing and Validation: Before updates are deployed, the organization needs to set up a process for testing and validating them. This procedure should make sure that updates are compatible with the current software environment and do not add any new errors or compatibility problems.
  4. Deployment: A procedure for deploying updates to production environments must be established by the company. This procedure should guarantee that updates are distributed in a regulated and safe manner, reducing the possibility of operations disruption for the company.
  5. Monitoring: Establishing a process for monitoring and evaluating the software update management system’s performance is necessary for the company. Regular audits and evaluations of the system’s effectiveness and the identification of potential improvement areas should be part of this process.

RELATED: [Webinar Recap] Why it Makes Sense to Store Cybersecurity Risk Management Items Inside a Requirements Management System


Businesses can make sure that their software update management system is well-designed, efficient, and compliant with ISO 24089 by following these requirements. The standard offers businesses a framework for creating a dependable and consistent procedure for managing software updates, lowering the risks involved with updates, and making sure upgrades are applied quickly and effectively.

One of ISO 24089’s major advantages is that it aids businesses in raising the caliber of their software updates. Organizations can guarantee that updates are adequately tested and verified before deployment by putting in place a structured procedure for testing and validation, which lowers the chance of errors and compatibility problems. As a result, the organization’s overall operational environment becomes more solid and reliable.

The ability to lower the risk of security vulnerabilities brought on by software upgrades is another advantage of ISO 24089. Organizations can lessen the risk of cyberattacks and other security breaches by putting in place a risk management plan that involves the identification and mitigation of potential security threats.

Additionally, ISO 24089 supports businesses in enhancing their adherence to legal specifications for software updates. Numerous regulatory frameworks mandate that businesses have a formal, written process in place for handling software changes. Organizations can demonstrate compliance with these criteria and lower their risk of regulatory fines and other consequences by adhering to ISO 24089.

ISO 24089 assists enterprises in lowering the risks related to updates, enhancing the quality of their software environments, and meeting regulatory obligations by providing a thorough framework for managing updates. A more effective, dependable, and secure software update management system can help organizations that use ISO 24089 improve their overall operational performance and lower risk.

Note: This article was drafted with the aid of AI. Additional content, edits for accuracy, and industry expertise by McKenzie Jonsson and Atef Ghribi.



Defense

Streamlining Defense Contract Bid Document Deliverables with Jama Connect®

In the defense sector, whether for a large prime, a subcontractor, or one of the thousands of other organizations under the defense umbrella, winning a contract bid from the United States Government comes with an astronomical amount of document deliverables that can be daunting and cumbersome. These documents are listed from the get-go in the Contract Data Requirements List (CDRL), which links directly to the Statement of Work (SOW). When designing and building these complex programs, as part of the CDRL, the contractor delivers countless Data Item Descriptions (DIDs).

These DIDs are formatted with very particular structures and arrangements as defined by MIL-HDBK-245 so that each unique deliverable defines the necessary data. Examples of these documents are Interface Design Description (IDD), System/Subsystem Design Description (SSDD), and Operation Concept Description (OCD) just to name a few of the variations. Below you can see a list of DIDs that are offered with Jama Connect®. The Software Requirements Specification (SRS) DID contains the formatted document below in the explorer tree so the export is pre-built and only the pertinent information can be filled in.

Figure 1: Jama Connect DID Template Library

Prior to entering the world of Jama Connect, my experience as a Systems Engineer at the largest government contractor exposed me to countless hours of document writing, specifically Interface Design Descriptions (IDDs). The team had to create 80 documents from scratch for two different programs. Mind you, these 80 documents were solely for the navigation data that our systems sent to the rest of the ship, and they were just a small segment of the similar documents that numerous other teams worked on. These IDDs were kept on the Local Area Network (LAN), where revision management meant adding a most-recently-edited date to the end of the file name, and completion was tracked in an Excel workbook. Upon completing a document, the team would sit in a room together to review a few IDDs, try to recall any system changes that had been made, cross-reference Cable Block Diagrams and Cable Run Sheets to check that all connector types and port numbers were correct, and struggle to make sure all interfaces for the connected systems were captured.


RELATED: Digital Engineering Between Government and Contractors


There was no relationship capability, no way to ensure up-to-date references, and minimal visibility into change incorporation. This led to unnecessary hours spent reviewing these items, ultimately resulting in late delivery of documents and the possibility of incorrect or out-of-date documents, which the customer did not appreciate. With the navigation system being the core of all ship operability, incorrect documentation had the potential to cause countless issues aboard ship in an environment where the stakes are already high. Being able to use Jama Connect to build these DIDs from information already captured in the software, with established traceability and easy review in the Review Center, is light-years easier and faster than the process we had in place.

Now that the stage is set for the effort that goes into delivering all the CDRL items: by utilizing Jama Connect, customers no longer have to burn thousands of hours meticulously revisioning items in Word documents. The DID structure can be built into a Jama Connect project so that the DID can be broken down into individual components to be versioned. The REUSE capability allows requirements and other text items from the project to be implemented directly in the DID structure, so the content stays consistent and the exported deliverable meets the criteria. Alongside the simplicity of creating DIDs within Jama Connect, the overall authoring process is dramatically reduced: numerous users can work concurrently on their own elements of the DID, the out-of-the-box templates let users immediately start creating content, and overall deliverable time is cut down.


RELATED: How Jama Connect® Helps Program Managers with DOD 5000 Adaptive Acquisition Framework


Conclusion

The daunting task of delivering countless documents to the government is much less alarming when streamlined with Jama Connect. With Jama Software's DIDs Library, customers can dive into deliverable creation while working in an environment that provides robust versioning and change management for the items that make up each DID. Delivering documentation to the customer in a faster, more concise manner frees companies from writing documents from scratch, letting them focus on other tasks and accelerating the overall program.



Quality Management System (QMS)

Jama Connect® Features in Five: Using Jama Connect with a Quality Management System (QMS) for Medical Device & Life Sciences

Learn how you can supercharge your systems development process! In this blog series, we’re pulling back the curtains to give you a look at a few of Jama Connect®’s powerful features… in under five minutes.

In this Features in Five video, Steven Pink, Senior Solutions Architect at Jama Software®, will provide insight into how Jama Connect is commonly used in the context of a medical device Quality Management System (QMS).

In this video, we will:

  • Provide insight on how Jama Connect is commonly used in the context of a medical device quality management system
  • Demonstrate key features that provide value to those responsible for quality and regulatory matters
  • Offer clear guidance on how Jama Connect – a requirements management solution – supplements a separate quality management system within a cohesive ecosystem of complementary applications


VIDEO TRANSCRIPT:

Steven Pink: Welcome to this segment of Features in Five. I’m Steven Pink, a senior solutions architect at Jama Software and today I’ll be giving an overview to help provide some insight into how Jama Connect is commonly used in the context of a medical device quality management system.

We’ll demonstrate some of the key features that provide value to those responsible for quality and regulatory matters, and offer clear guidance on how Jama Connect, a requirements management solution, supplements a separate quality management system within a cohesive ecosystem of complementary applications.

We often work with medical device or life science companies that have some form of quality management system, whether paper-based or an eQMS, and that are working to introduce a requirements management solution like Jama Connect for the first time.

For individuals with a quality and regulatory background that have not yet worked in an environment using a formal requirements management solution, this can seem like a foreign and potentially disruptive change to a well-defined process.


RELATED: Jama Connect® vs. DOORS®: Filters, Search, and Analysis: A User Experience Roundtable Chat


Pink: So before we provide some insight to help address that common concern, we want to provide some context as to why an organization would want to introduce Jama Connect in the first place. Prior to using a formal requirements management solution, engineering and R&D are often left managing requirements-related data during development in documents, spreadsheets, and tools like Jira, Confluence, or SharePoint that are not designed to support complex requirements management.

In this type of scenario, engineering often finds it difficult to manage and maintain complex traceability as they work. So they often leave it to be completed at the end of a phase or milestone as opposed to maintained in real time. This often leads to gaps or errors being identified late in development which is significantly more costly to address the later they’re identified. In addition to having difficulty maintaining traceability, engineering often struggles to manage change to requirements and understand the full impact of each change.

They’ll find it hard to keep data in sync between requirements stored in documents or spreadsheets and other tools like Jira or Azure DevOps where data resides in silos. They’ll often waste a lot of time or effort compiling documentation for their design history file at the end of a given phase before these artifacts can be signed off and stored as an auditable record in a document control system. As products increase in complexity and regulatory guidelines continue to increase in rigor, these challenges grow exponentially for engineering.

To help address these challenges, Jama Connect provides engineering and product teams with a working environment to manage requirements, risks, tests and the traceability between these items in real time. We call this managing live traceability.


RELATED: Buyer’s Guide: Selecting a Requirements Management and Traceability Solution for Medical Device & Life Science


Pink: From a quality and regulatory perspective, Jama Connect’s relationship rule diagram provides a governing structure to ensure that requirement traceability is maintained following proper design controls and quality procedures. This structure makes it simple to manage change, perform impact analysis, and ensure traceability coverage throughout development.

The first thing we see when working on a project in Jama Connect is the dashboard with traceability rules. This makes it easy to understand the expectations for traceability and identify exceptions through dashboard widgets, such as gaps in test coverage or finding unmitigated risks.

With data living in Jama Connect, managing documentation and traceability becomes easier. Once documentation has been authored, it can be sent for a formalized review. Cross-functional teams can utilize the Review Center to conduct iterative reviews and significantly increase the quality and efficiency of the feedback being given.

Once all items for a given release have been reviewed and approved, they can automatically transition into an accepted and locked state, ensuring that changes are not made to approved items unintentionally. When the time comes to generate auditable documentation, Jama Connect allows teams to automatically or manually capture baselines and export these baselined documents out of the system to be signed off in a separate document control system as an auditable record. This process reduces the time spent manually reworking documents as part of the QMS process, and the document export templates can easily be customized to match existing internal quality standards and ensure consistency in the way requirements and other details are documented.

In the end, Jama Connect can help engineering teams more easily manage their work and simplify the process of maintaining traceability. As a byproduct of their efforts, quality and regulatory teams are provided with higher-quality auditable documents without making changes to their existing quality management systems.
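Conceptually, the "gaps in test coverage" and "unmitigated risks" widgets mentioned in the transcript reduce to simple queries over a graph of typed trace links. The following is an illustrative sketch of that idea only, with made-up item IDs and link types, not Jama Connect's implementation or API:

```python
# Illustrative sketch: traceability as a graph of typed links, where a
# coverage gap is a requirement with no downstream verification link and an
# unmitigated risk is a risk with no incoming mitigation link.
trace_links = [
    ("REQ-1", "verified_by", "TEST-1"),
    ("REQ-2", "mitigates", "RISK-1"),  # RISK-1 is mitigated, REQ-2 is untested
    # REQ-3 has no links at all
]
requirements = ["REQ-1", "REQ-2", "REQ-3"]
risks = ["RISK-1", "RISK-2"]

verified = {src for src, rel, _ in trace_links if rel == "verified_by"}
mitigated = {dst for _, rel, dst in trace_links if rel == "mitigates"}

coverage_gaps = [r for r in requirements if r not in verified]
unmitigated_risks = [r for r in risks if r not in mitigated]

print(coverage_gaps)      # requirements lacking test coverage
print(unmitigated_risks)  # risks with no mitigation link
```

Because the links are maintained continuously as items change, these queries can run in real time, which is what makes surfacing gaps on a dashboard practical instead of a phase-end exercise.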


RELATED: How to Use Requirements Management as an Anchor to Establish Live Traceability in Systems Engineering


To view more Jama Connect Features in Five topics visit: Jama Connect Features in Five Video Series



Industrial Manufacturing and Consumer Electronic Development

In this blog, we preview the whitepaper, “The Top Challenges in Industrial Manufacturing and Consumer Electronic Development” — Click HERE to read the entire thing.


The Top Challenges in Industrial Manufacturing and Consumer Electronic Development

From supply chain disruptions to digitization – learn more about what development teams are up against and get expert suggestions for how to overcome them

PART I: The Top Challenges in Industrial Manufacturing and Consumer Electronic Development

Industrial manufacturing has always been a cornerstone of economic growth and development worldwide. Over the last few years (or more), the industrial manufacturing sector has undergone significant transformation, from the introduction of automation and robotics to advanced analytics.

Today, industrial manufacturers are facing a host of new challenges that are forcing teams to rethink their strategies and adapt to changing market influences and demands.

The need to increase operational efficiency while also cutting expenses is one of the most pressing issues facing industrial manufacturing — a challenge not unique to this industry alone. Manufacturers are under a lot of pressure to optimize their processes, cut lead times, and boost product quality due to intense competition and rising customer expectations. Teams must also figure out how to lower production costs and waste while adhering to strict industry regulations. To boost innovation and optimize operations, teams need not only an in-depth knowledge of the production process, but also access to cutting-edge technologies like the Internet of Things (IoT), artificial intelligence (AI), product development platforms, and machine learning (ML).

In this whitepaper, we’ll explore some of the challenges industrial manufacturing teams are up against and offer expert insights and strategies on how to work through them.

CHALLENGE #1: Supply Chain Disruptions

The industrial manufacturing sector may continue to endure supply chain disruptions as a result of the ongoing COVID-19 pandemic — primarily due to a lack of workers, raw materials, and component parts. Beyond the pandemic, supply chain interruptions are also being caused by trade conflicts and tariffs.

Supply chain disruptions continue to be a complex challenge for industrial manufacturers, and engineers play a critical role in identifying and mitigating these risks. By developing robust supply chain management strategies and leveraging innovative and modern technologies, engineers can help to reduce the impact of disruptions and ensure a more efficient and reliable manufacturing process.


RELATED: IEC 61508 Overview: The Complete Guide for Functional Safety in Industrial Manufacturing


CHALLENGE #2: Environmental Sustainability

As demand for environmentally friendly and sustainable products rises, firms must adopt more sustainable procedures in their business operations. This can entail cutting back on carbon emissions, switching to renewable energy, and reducing waste and pollution.

Consumers have become more and more demanding of environmental sustainability from the manufacturing sector. As the world’s population grows more environmentally conscious, customers are increasingly seeking out goods produced using sustainable methods, such as renewable energy sources, lower carbon emissions, and limited waste and pollution.

While environmental sustainability is of vital import for society (and meeting modern ethics standards), it may also be very advantageous to a business’s bottom line. Manufacturers who place a high priority on sustainability can significantly lessen their energy and resource consumption, improve the reputation of their brands, and boost their market share and consumer loyalty.

Today’s forward-thinking industrial product, systems, and software developers are embracing a variety of environmentally friendly practices and modern technologies to both optimize production and satisfy this growing demand for environmental sustainability.

CHALLENGE #3: Automation and Digitalization

The industrial sector is changing as a result of the growing use of automation and digital technology, but these changes also bring issues. Manufacturers must upskill their staff, invest in new technology, and manage the risks related to cyber-attacks and data security.

In order to maximize productivity, cut costs, improve product quality, and adapt to changing customer demands, businesses across all sectors, including industrial manufacturing, are automating and digitizing more of their processes. By enabling the automation of production processes and the use of data analytics to enhance operations, automation and digitalization technologies are revolutionizing the manufacturing sector.

Here’s how manufacturers are leveraging new technology:

  • Robotic automation: It is possible to boost productivity, reduce labor expenses, and improve product quality by using robots and other automated technologies to complete tasks that were previously completed by hand.
  • Digital twin technology: Manufacturing processes are simulated and improved using digital models of physical systems in the real world. By spotting and fixing issues before they arise, manufacturers can increase the quality of their goods while lowering costs.
  • Predictive maintenance: In order to predict when repairs are required and prevent unscheduled downtime, predictive maintenance uses data analytics and machine learning algorithms. This increases equipment dependability and lowers maintenance costs.
  • Internet of Things (IoT): The IoT involves the use of sensors and other devices to collect data on processes and equipment. The data can then be used to optimize processes, reduce downtime, and improve product quality.
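
As a small illustration of the predictive-maintenance idea in the list above, the sketch below flags sensor readings that drift well above a trailing baseline. It is a deliberately minimal, hypothetical example; production systems use far richer statistical and machine-learning models:

```python
# Illustrative sketch only: a minimal threshold-based predictive-maintenance
# check. The core loop is the same as in real deployments: stream sensor
# data, score each reading against recent history, flag maintenance early.
from statistics import mean, stdev

def flag_maintenance(readings, window=5, k=2.0):
    """Flag indices where a reading rises more than k standard deviations
    above the trailing-window baseline."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and readings[i] > mu + k * sigma:
            flagged.append(i)
    return flagged

# Vibration amplitude from a motor: stable, then trending up before failure.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 1.1, 3.5, 3.8, 4.0]
print(flag_maintenance(vibration))  # indices where maintenance should be scheduled
```

The same pattern generalizes to IoT deployments: sensors feed a data pipeline, an anomaly score is computed per reading, and maintenance is scheduled before unplanned downtime occurs.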

Automation and digitalization technologies are being adopted in industrial manufacturing for a number of reasons, including efforts to lower costs, increase efficiency, and meet shifting client demands. By lowering the need for human labor and enhancing quality control, this move toward automation can also increase safety.


RELATED: The Complete Guide to ISO/IEC/IEEE 15288:2015 — Systems and Software Engineering and Lifecycle Processes


CHALLENGE #4: Lack of Talent

There is a talent deficit in the industrial manufacturing sector, notably in fields like engineering, computer science, and data analytics. To meet this challenge, manufacturers must invest in training and development initiatives to recruit and retain people.

The labor shortage in industrial manufacturing has been an ongoing challenge for many years, and the COVID-19 pandemic only exacerbated it. Several factors play into the labor shortage, including an aging workforce, a lack of skilled workers, and shifting attitudes towards traditional work among younger generations.

To overcome the labor shortage, teams are implementing a range of strategies, including:

  • Investing in automation and robotics: Automation and robotics reduce the need for manual labor. By investing in automation technology, organizations can reduce their reliance on human labor, cut costs, and increase productivity.
  • Offering training and upskilling programs for employee attraction and retention: Many organizations are offering programs to help current employees acquire new skills and advance their careers. Replacing an employee is shockingly expensive; in fact, studies show that every time an organization replaces a salaried employee, it costs six to nine months’ salary on average. By investing in their employees, manufacturers can increase retention and reduce the need to hire new workers.
  • Implementing flexible working arrangements: Organizations across all industries are moving towards more flexible working arrangements, such as remote work and flexible scheduling, to attract and retain workers who are looking for a better work-life balance.
  • Collaborating with educational institutions: Many industrial manufacturing organizations are partnering with educational institutions to create training programs and apprenticeships that prepare students for careers in manufacturing.
  • Offering competitive incentive and benefits packages: Competitive compensation and benefits packages to attract and retain workers might include higher salaries, a flexible working environment, competitive health benefits, retirement plans, and other incentives.

This has been a preview of The Top Challenges in Industrial Manufacturing and Consumer Electronic Development whitepaper. Click HERE to read the entire thing!