
Adopting MBSE

This blog contains excerpts from a whitepaper titled, The Comprehensive Guide to Successfully Adopting MBSE, written by Lou Wheatcraft. 


Adopting MBSE: Challenging the Status Quo in Product Development

Successfully implementing a practice such as model-based systems engineering (MBSE) requires an organization’s willingness to change processes and perhaps even tooling. Often, companies choose to stay with the status quo, but at what cost? Here’s a look at how adopting MBSE might help your teams, and the cost of not adopting it.

Understand the Need for Change

What is the Risk of Staying with the Status Quo? 

For the MBSE Implementation Project Team to be successful, management must recognize the need to change. How can management be convinced? Three words – RETURN ON INVESTMENT (ROI)!  

Think about these questions: 

  • What has been the impact of the current, poorly executed product development efforts? 
  • What is the overhead associated with the current document-based approach? 
  • What are the current quality issues facing the organization that are cutting into profits: failures, recalls, returns, warranty costs, lawsuits, negative reviews on social media, decreasing market share? 

The ROI argument usually works with management, especially when they can be convinced that investing in a data-centric practice of SE tailored to the organization’s needs will improve the overall product development process, product quality, time to market, and profitability, as discussed previously. 

What is the ROI of Adopting MBSE? 

To entice anyone — especially an entire organization — to make a change, proving ROI on the resource investment is key. Here are some key points to consider: 

  • The more effective the systems engineering (SE) processes, the less rework and the fewer cost and schedule overruns. 
  • Moving to a data-centric practice of SE improves the probability of achieving a competitive advantage by removing obstacles to delivering products on time, on budget, and meeting or exceeding customer and quality expectations. 
  • From a cultural perspective, the personnel who are responsible for product development and will be most affected by the change must be shown how the change will benefit them. 

Visit our interactive content page for ROI Calculators and more! 

Combating Opposition to Change

A good process is not one that is something people have to do in addition to their job, rather it is one that helps people do their job more effectively. – LOU WHEATCRAFT


RELATED POST: Webinar: Eliminating Barriers to MBSE Adoption with Jama Software


Get Team Buy-In

To get buy-in from the product development teams, the MBSE Implementation Project Team must: 

  • Understand what problems the product development teams are having and show them how moving to a more data-centric practice of SE will address those problems and make their job easier than the current document-centric approach. If the change results in more work or makes communication harder, the battle will be lost. For example, the lead engineer or project manager may already be in over their head, working 50–60-hour weeks. Requiring them to learn how to use a new tool or set of tools and implement a new process may be too much of a load to bear! However, if they are provided with a dedicated person who has the training, knowledge, and experience in the new processes and tools to help implement the changes and train other team members, they will be much more receptive. They will also be much more receptive if the change results in them working fewer hours and having fewer crises to deal with each day! 
  • Be convinced of the utility of the changes and how they result in a better product and less rework for them. Frequently, the reason project team members are working long hours is that they are always fighting fires, going from one crisis to another, which results from the lack of proper SE tools, processes, data, and information in the first place! The culture needs to change from one of firefighting to one of fire prevention. As time passes, they will become advocates for the changes that have been made and welcome further change. 

The MBSE Implementation Project Team’s mission statement will be something like: “Improve our product development processes by adopting MBSE within the organization by moving from a document-centric to a data-centric practice of systems engineering.” Along with this mission statement, they will need to define a set of specific goals and objectives along with measures of success. Once defined, they will need to get agreement from management on these goals, objectives, and measures. 


RELATED POST: Systems Engineers Career Path – How to Elevate


Get Management Buy-In 

Project success is dependent on having a high-level, C-suite project champion and getting management buy-in. A major challenge for the project will be convincing management and other key stakeholders that it is time to adopt MBSE and move from a document-centric to a data-centric practice of SE, knocking down the walls of resistance. Some common reasons for not wanting to move to a data-centric practice of SE include: 

  • “We have been doing product development using our current processes for years, why should we change?” 
  • “Implementing SE from a data-centric perspective may work for others, but not for us.” 
  • “This all seems very complicated, we don’t have the knowledge, experience, or tools.” 
  • “Our current SE work products, like requirements, are managed in a requirements management tool (RMT). FFBDs and other diagrams we are currently using are models, so aren’t we already adopting MBSE?” 
  • “It is too expensive to procure the needed SE toolset, maintain the tools, and train our people to use those tools.” 
  • “We don’t have the budget to incorporate SE from a data-centric perspective at this time.” 
  • “The expense and associated process to get a new SE toolset installed on organizational computers is too great.” 
  • “We would have to make significant IT infrastructure upgrades to accommodate the additional volume of data and performance requirements of the new SE tools.” 
  • “We deal with the development of classified systems; controlling access and maintaining security will be too difficult.” 

Sound familiar? Often the pushback can be attributed to a lack of understanding of the risks associated with the current state of the organization, the benefits of moving toward a more data-centric practice of SE, and the level of SE capability that is appropriate for the organization.


RELATED POST: Whitepaper: A Path to Model Based Engineering (MBSE) with Jama Connect


Adopting MBSE: The Road to Success 

To inspire a shift from a document-centric to a data-centric practice of SE in your organization, it’s vital to show both teams and management the value and expected ROI in doing so. Moving away from a current way of doing things isn’t always an easy road, but the risks of staying with the status quo are often great—and the rewards for changing processes and culture are often even greater.  

An organization will be successful in practicing SE from a data-centric perspective when it is considered to be the “gold standard” for system development within the organization. However, the road to success is long — it takes very strong, unwavering leadership and experience to get this done right. It is human nature to try to push back and say that it isn’t possible, but it is.   

 


This is Part II of a blog series covering a whitepaper titled, The Comprehensive Guide to Successfully Adopting Model-Based Systems Engineering (MBSE). Visit these links for the rest of this series: Part I, Part III, and Part IV.


Adopting MBSE

What does it mean to practice SE from a data-centric perspective?

Successfully adopting MBSE and moving toward a data-centric practice of SE is much more than just acquiring and using a specific tool, set of tools, or focusing on the use of a specific type of model. As stated previously, MBSE is not just about the development of SysML or other language-based models nor just practicing model-based design. MBSE is itself made up of puzzle pieces, all of which contribute to the successful adoption of MBSE. To be successful, the following ten areas of capability associated with data-centricity must be addressed.

01: Holistic Product Development

A key tenet of data-centricity is taking a holistic view of product development and managing data and information within an integrated/federated environment. The focus is on multidiscipline, collaborative project teams (e.g., integrated product teams). Many organizations still operate in organizational silos, with team members’ loyalty toward their specific silo rather than to the project team. When issues occur, the tendency is to blame those in other silos. Each silo often has its own processes, specific toolsets, data, and artifacts. Often the data and information are generated independently from those in other silos and are not in a form that enables sharing. This can result in inconsistency, correctness, completeness, and currency issues in the data maintained in these artifacts. When moving toward data-centricity, organizations must have a holistic view of product development, minimizing the silos, encouraging collaboration, and improving communications not only between team members but between the different tools used to generate and maintain data and information. Rather than treating systems engineering separately from project management (PM), projects must integrate both sets of functions such that there is a single project team that does both.

02: Manage Product Development Across the Lifecycle

Rather than having tools that are specific to a given organizational silo, a key characteristic of data-centricity is that related data and information representing lifecycle activities and associated artifacts can be linked, forming “digital threads” that connect related information across the product lifecycle. This linkage enables project team members to work collaboratively and establish traceability between needs, design input requirements, system analysis artifacts, diagrams, models, architecture, design, system verification artifacts, and system validation artifacts. Traceability aids in change impact assessment across the product lifecycle, helping ensure completeness, correctness, consistency, and currency of the data and information and the resulting artifacts.
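To make the digital-thread idea concrete, here is a minimal sketch in Python. The item names, fields, and structure are hypothetical and deliberately simplified; real SE tools expose far richer data models. It shows needs, requirements, and verification artifacts linked so a change to one item can be traced to everything downstream.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """A generic element in the project's data and information model."""
    item_id: str
    kind: str          # e.g., "need", "requirement", "verification"
    text: str
    links: list = field(default_factory=list)  # IDs of downstream items

# Hypothetical fragment of a digital thread: need -> requirement -> verification
model = {
    "N-1":  Item("N-1",  "need",         "Operators need to monitor pump pressure.", ["R-10"]),
    "R-10": Item("R-10", "requirement",  "The system shall display pump pressure within 0.5 s.", ["V-7"]),
    "V-7":  Item("V-7",  "verification", "Test case: measure display latency under load.", []),
}

def impact_of_change(model: dict, item_id: str) -> list:
    """Walk the thread downstream to find every item affected by a change."""
    affected, queue = [], list(model[item_id].links)
    while queue:
        current = queue.pop()
        affected.append(current)
        queue.extend(model[current].links)
    return affected

# Changing need N-1 flags requirement R-10 and verification artifact V-7 for review.
print(impact_of_change(model, "N-1"))   # ['R-10', 'V-7']
```

The same traversal, run in the opposite direction, is what makes it possible to ask “which need does this test ultimately trace back to?” without opening a single document.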

03: Enterprise Level Data and Governance Policy, Processes, & Procedures

Because not just the project teams but the overall organization depends on electronic forms of data and information, and because threats to the security of that data and information are increasing, enterprise-level policies, processes, and procedures concerning data governance and information management must be defined, implemented, and enforced.

04: Project Level Data and Information Management

Within the context of the enterprise-level data governance and information management policies, each project must include its specific implementation of these policies within its Project Management Plan (PMP) and Systems Engineering Management Plan (SEMP). Because of the importance of managing the project’s data and information, the project is encouraged to develop and enforce a project-level Information Management Plan (IMP). Other supporting plans (e.g., the requirements management plan) need to comply with the data management policies in the higher-level plans for both the project and the enterprise.


RELATED POST: The Real Intent of Model-Based Systems Engineering


05: Master Ontology

Terminology and language are key to successful communications, not only between team members but between tools. For a given enterprise, an enterprise-level ontology (data dictionary and glossary) must be developed to clearly define the specific terminology used within the organization and its projects and the relationships between those terms. This is critical when there are product lines, multiple project teams, and the need to share data and information between current projects as well as reuse data and information for future projects. Within the context of the enterprise-level ontology, individual project teams can define a project-specific ontology consistent with it.
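As an illustration only (the terms and definitions below are hypothetical), a project ontology can be represented as a glossary whose entries either reuse or extend enterprise-level definitions, with a simple consistency check that flags project terms that contradict the enterprise definition.

```python
# Hypothetical enterprise-level ontology: term -> definition
enterprise_ontology = {
    "need": "A capability or expectation stated by a stakeholder.",
    "requirement": "A formal, verifiable statement derived from one or more needs.",
    "verification": "Confirmation that a requirement has been met.",
}

# A project may add terms, but reused terms must keep the enterprise definition.
project_ontology = {
    "requirement": "A formal, verifiable statement derived from one or more needs.",
    "design input requirement": "A requirement allocated to the system design.",
}

def check_consistency(enterprise: dict, project: dict) -> list:
    """Return project terms whose definitions conflict with the enterprise ontology."""
    return [
        term for term, definition in project.items()
        if term in enterprise and enterprise[term] != definition
    ]

print(check_consistency(enterprise_ontology, project_ontology))  # [] -> consistent
```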

06: Master Schema

Here the word “schema” is used to describe how the data and information are organized and managed within individual tools and associated databases. It includes the naming of individual data and information items, the definition of relationships between data items, and the import and export of data and information. From both an enterprise and a project perspective, it is important to define a master schema with which the SE and PM tools within the toolset are compliant, in order to enable data integration, shareability, and reuse.
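A minimal sketch of what a master schema might look like, assuming a simple JSON-style representation (the item types, field names, and relationship names are illustrative, not taken from any particular tool): each item type declares its required fields and allowed relationships, and tools validate items against the schema before exchanging them.

```python
# Hypothetical master schema: item types, their required fields, and allowed relationships
MASTER_SCHEMA = {
    "requirement": {
        "fields": {"id", "text", "owner", "priority"},
        "relationships": {"satisfies", "verified_by"},
    },
    "test_case": {
        "fields": {"id", "text", "status"},
        "relationships": {"verifies"},
    },
}

def validate(item: dict, item_type: str) -> list:
    """Report schema violations before an item is exchanged between tools."""
    spec = MASTER_SCHEMA[item_type]
    errors = [f"missing field: {name}" for name in spec["fields"] - item.keys()]
    errors += [f"unknown relationship: {rel}"
               for rel in item.get("links", {}) if rel not in spec["relationships"]]
    return errors

# A requirement exported from one tool, about to be imported into another
exported = {"id": "R-10", "text": "Display pressure within 0.5 s.", "owner": "SE team",
            "links": {"verified_by": ["TC-3"]}}
print(validate(exported, "requirement"))  # ['missing field: priority']
```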

07: Use of Attributes and Associated Measures

Data-centricity enables the project to define and use attributes that can be used to manage project activities across all system life cycle stages. For needs and requirements, attributes can include rationale, priority, criticality, source, owner, traceability, risk, maturity of needs definition, needs and requirements definition status, design implementation, system verification, and system validation. Attributes can be defined to aid in reusability and product line management. Attributes can also be associated with key measures defined by stakeholders within their goals and objectives. These measures include key performance indicators (KPI), measures of suitability (MOS), measures of effectiveness (MOE), measures of performance (MOP), key performance parameters (KPP), technical performance measures (TPM), and leading indicators (LI). Data representing these measures and attributes can be used within the SE and PM tools to generate reports, dashboards, etc., which are used to better manage the project and systems engineering processes, providing managers near real-time status information and enabling them to identify and correct possible issues before they become problems.
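Here is a minimal sketch, with hypothetical attribute names, of how attributes attached to requirements can be rolled up into simple dashboard measures rather than being tallied by hand from documents.

```python
# Hypothetical requirement records with management attributes attached
requirements = [
    {"id": "R-1", "priority": "high", "risk": "low",  "verification_status": "passed"},
    {"id": "R-2", "priority": "high", "risk": "high", "verification_status": "pending"},
    {"id": "R-3", "priority": "low",  "risk": "low",  "verification_status": "passed"},
]

def rollup(reqs: list) -> dict:
    """Roll attribute data up into simple dashboard measures (e.g., a TPM-style metric)."""
    total = len(reqs)
    verified = sum(r["verification_status"] == "passed" for r in reqs)
    high_risk_open = [r["id"] for r in reqs
                      if r["risk"] == "high" and r["verification_status"] != "passed"]
    return {
        "percent_verified": 100 * verified / total,
        "open_high_risk": high_risk_open,
    }

print(rollup(requirements))
# {'percent_verified': 66.66..., 'open_high_risk': ['R-2']}
```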

08: Configuration Management

When adopting data-centricity, the project’s artifacts and their underlying data and information are developed, analyzed, and managed holistically within the data and information model. Because the data and information are managed within the project’s data and information model, this model represents a single source of truth (SSoT) for the project. Rather than configuration controlling each individual artifact represented by the data and information in the model, the project team can configuration control the model, which represents the baseline state of those artifacts at any given time. “Visualizations” of the data and information in the form of the various artifacts represent the baseline version of each artifact. Even when these visualizations are extracted as reports, the SSoT is still the data and information model from which they were generated.

Note: for many organizations this is often the biggest challenge, in that it requires the organization to redefine its concept of configuration management. However, as stated previously, configuration managing each individual document requires significant overhead in both cost and time compared to managing the data and information model that characterizes a data-centric practice of SE and PM.
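A minimal sketch of the idea, using hypothetical item names and a deliberately naive snapshot mechanism: the whole data model is baselined at a milestone, and any document “visualization” is generated from that baseline, so the model remains the single source of truth even after later edits.

```python
import copy, hashlib, json

# The project's data and information model acts as the single source of truth (SSoT).
model = {
    "R-10": {"text": "Display pump pressure within 0.5 s.", "status": "approved"},
    "V-7":  {"text": "Measure display latency under load.",  "status": "draft"},
}

baselines = {}

def create_baseline(label: str, model: dict) -> str:
    """Freeze the whole model as a baseline instead of version-controlling each document."""
    snapshot = copy.deepcopy(model)
    digest = hashlib.sha256(json.dumps(snapshot, sort_keys=True).encode()).hexdigest()[:8]
    baselines[label] = {"digest": digest, "data": snapshot}
    return digest

def generate_report(label: str) -> str:
    """A document 'visualization' generated from a baseline; the SSoT remains the model."""
    b = baselines[label]
    lines = [f"Baseline {label} ({b['digest']})"]
    lines += [f"{item_id}: {item['text']} [{item['status']}]"
              for item_id, item in b["data"].items()]
    return "\n".join(lines)

create_baseline("SRR", model)          # baseline the model at a review milestone
model["V-7"]["status"] = "approved"    # later edits do not alter the frozen baseline
print(generate_report("SRR"))
```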


RELATED POST: Webinar: Eliminating Barriers to MBSE Adoption with Jama Software


09: Systems Engineering (SE) Tool Set

Data-centricity requires projects to move beyond the use of common office applications (word processing, spreadsheets, presentations, basic drawing and diagramming tools, and requirements-only management tools) to define, analyze, record, and manage needs and requirements and other SE artifacts. Rather, projects must transform their SE process such that SE artifacts are developed using SE tools that are fully compliant with interoperability and data sharing standards, are consistent with the enterprise and project ontologies, store the data and information consistent with the project’s master schema, and allow linking of data and information across lifecycle activities and the resulting artifacts. This data and information must be managed in a form that is shareable between the SE tools within the project’s toolset as well as with the project’s PM tools. When selecting specific SE tools for the project’s toolset, it is important that the project determine the types of information and methods of analysis that are needed based on its specific product line, culture, and workforce.

10: Project Management (PM) Tool Set

Data-centricity also requires projects to move beyond the use of common office applications for project management (e.g., budgeting, scheduling, cost management, risk management). Rather, projects must transform their PM process such that most of the PM artifacts are developed using PM tools that are fully compliant with interoperability and data sharing standards, are consistent with the enterprise and project ontologies, store the data and information consistent with the project’s master schema, and allow linking of data and information across lifecycle activities and the resulting artifacts. This data and information must be managed in a form that is shareable between the PM tools within the project’s toolset as well as with the project’s SE tools. For example, Work Breakdown Structures (WBS) can be linked to Product Breakdown Structures (PBS) and physical architectures to enable management of the budgets, schedules, resources, and risks associated with each system and system element within the product physical architecture.
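A small, purely illustrative sketch of the WBS-to-PBS linkage just described (the element names and budget figures are hypothetical): each work package points at the PBS element it delivers, so cost can be rolled up by system element rather than by document.

```python
# Hypothetical product breakdown structure (PBS): element -> parent element
pbs = {
    "pump-assembly": {"parent": None},
    "controller":    {"parent": "pump-assembly"},
    "sensor":        {"parent": "pump-assembly"},
}

# Each WBS work package is linked to the PBS element it delivers
wbs = [
    {"id": "1.1", "task": "Design controller firmware", "budget": 120_000, "pbs": "controller"},
    {"id": "1.2", "task": "Qualify pressure sensor",     "budget": 45_000,  "pbs": "sensor"},
    {"id": "1.3", "task": "Integrate pump assembly",     "budget": 60_000,  "pbs": "pump-assembly"},
]

def rollup_budget(element: str) -> int:
    """Sum budgets for a PBS element and everything beneath it in the physical architecture."""
    children = [e for e, data in pbs.items() if data["parent"] == element]
    direct = sum(wp["budget"] for wp in wbs if wp["pbs"] == element)
    return direct + sum(rollup_budget(child) for child in children)

print(rollup_budget("pump-assembly"))  # 225000: budget rolled up across the architecture
```

The same pattern extends to schedules, resources, and risks once those attributes are attached to the same linked elements.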

Visit these links for the rest of this series: Part I, Part III, and Part IV.

To download the entire paper, visit: Whitepaper: The Comprehensive Guide to Successfully Adopting Model-Based Systems Engineering MBSE




This is Part I of a blog series covering a whitepaper titled, The Comprehensive Guide to Successfully Adopting Model-Based Systems Engineering (MBSE). Visit these links for the rest of this series: Part II, Part III, and Part IV.


Introduction 

In a previous paper, we discussed key questions concerning Model-based Systems Engineering (MBSE) including what MBSE is, its true intent, why organizations should adopt MBSE, and the benefits. If you haven’t read that paper, it’s worth taking a look. 

We made the point that the goal of an organization when adopting MBSE is to move from a document-centric to a data-centric practice of Systems Engineering (SE) in order to realize the real intent of MBSE: to develop, maintain, and manage a data and information model of the system being developed, along with a model of all the system life cycle process activities, the resulting artifacts, and their underlying data and information. 

This paper goes into more detail on the key factors associated with successfully adopting MBSE and what it means to practice SE from a data-centric perspective, and provides a methodology for defining a roadmap tailored to your organization that results in the successful adoption of MBSE. 


RELATED POST: The Real Intent of Model-Based Systems Engineering


Key factors associated with successfully adopting MBSE 

Sadly, the attempts of many organizations to adopt MBSE often end in failure. The process of adopting innovative technology like MBSE and moving toward a data-centric practice of SE can be considered a project in its own right. There have been numerous studies and reports concerning why projects fail and the factors associated with projects that succeed. When adopting MBSE, these factors must be considered. Organizations that are able to successfully adopt MBSE and move to a data-centric practice of SE address the following key factors: 

01 – Getting Corporate Level Management Buy-In and Support – Success Starts at the Top! 

In an earlier paper, we discussed issues associated with a document-centric approach to product development of today’s increasingly complex, software-centric products along with the benefits of adopting MBSE from a data-centric perspective to address these issues. There must be a project champion that can clearly communicate these issues and benefits at the corporate level in order to get buy-in across the organization. 

A key consideration when getting this buy-in is how these issues and benefits are communicated. The adage “know your audience” is important. A common mistake when approaching higher-level management is using terminology that does not address their needs in a language they understand. When seeking buy-in for adopting MBSE, you must clearly communicate what MBSE is and how the organization will benefit in terms of outcomes they can relate to. Giving them a demonstration of a specific tool using a lot of technical jargon can result in them quickly losing interest. They are interested in tangible outcomes of a proposed solution that addresses business-related issues (problems): less overhead, decreased time to market, higher quality, fewer post-launch issues, fewer issues associated with a product being approved for use, increased profits, rising stock prices, and a growing company. They want to understand how adopting MBSE will result in these types of outcomes. 

02 – Forming a Dedicated Project Team 

Rather than leaving it up to individual project teams to each attempt to adopt MBSE, a corporate-level, dedicated MBSE Implementation Project Team (IPT) is needed. For smaller organizations and startups, this IPT might be a single person. MBSE is just one puzzle piece in the larger set of organizational puzzle pieces. To be successful, the larger, integrated puzzle must be considered to ensure the MBSE puzzle piece will fit. Other puzzle pieces include data governance policies; information management plans, procedures, and work instructions; information technology (IT) infrastructure (networks, internet, clouds, applications, computing devices, etc.); the product line; product development processes; procurement processes; company culture; workforce; etc. A dedicated project team can deal with possible issues in all these areas from a corporate, holistic perspective across organizational silos, enabling a successful adoption of MBSE and helping to ensure the MBSE puzzle piece can be integrated within the overall corporate puzzle. 


RELATED POST: Webinar: Eliminating Barriers to MBSE Adoption with Jama Software


03 – Involving Key Stakeholders 

The various stakeholders affected by adopting MBSE must be involved. These stakeholders include not only the users but also others affected by the adoption of MBSE: those who will benefit, those involved in the activities required to adopt MBSE, and those from enabling and supporting organizations. Referring back to the puzzle analogy, stakeholders must be included who represent each of the above-listed puzzle pieces. Each stakeholder has expectations that must be addressed by the project team, along with key drivers and constraints that a successful project must consider in order to achieve a successful outcome. 

04 – Defining The Problem, Opportunity, Outcomes, Needs, and Requirements at the Beginning of the Project 

The project team and stakeholders at all levels of the organization must be aligned to a common understanding of the problem/opportunity that is driving the adoption of MBSE, to a common mission statement, goals, objectives, clear outcomes, needs, and requirements. Like any other project, these must be defined and agreed to from the beginning so that there is a clear roadmap to success and well-defined outcomes against which success can be measured. 

05 – Understanding the “Goldilocks Principle” 

The Goldilocks principle is about doing what is “just right” – not too little, not too much. When adopting MBSE and moving toward a data-centric practice of SE, the project team must understand the needs of the organization and what it means to practice SE from a data-centric perspective, and then develop a practical and feasible roadmap. Delivering an MBSE capability that is too little can result in stakeholder expectations not being met, disappointment, and a failure of project teams to adopt MBSE. Going overboard and implementing more than is needed can be overwhelming, turning people off to the concept and, again, resulting in a failure to adopt MBSE. 

This last point is especially important. “Just right” must be defined from a user perspective. The users are the product development project team members who will conduct their projects using the processes and tools provided, enabling them to adopt MBSE and develop their products from a data-centric perspective. They expect to become more productive and effective. This means the processes and tools provided should not be viewed as things they have to do and use in addition to their job, resulting in more work; rather, they should be processes and tools that they can follow and use as an integral part of how they do their job, resulting in less work, a higher-quality product, and a shorter time to market. The new processes and tools enable them to deliver winning products: those that meet the needs of their customers, within budget and schedule, with the required quality. 

From a user perspective the following attributes must be addressed within the processes defined and tools selected for use: 

  • Full functionality; does what is needed, no more, no less
  • Intuitive; easy to learn and use
  • Easy and fast to implement
  • Enable collaboration between team members, no matter their geolocation
  • Enable traceability of data, information, and artifacts across the system lifecycle
  • Enable change impact assessment across the system lifecycle
  • Reduces the time to define and manage needs and requirements
  • Supports verification and validation planning and execution
  • Tailorable to the organization’s product line, work instructions, and workflow
  • Helps ensure compliance with standards and regulations
  • Helps manage risk across the lifecycle
  • Enables management to track project status across the lifecycle 

Visit these links for the rest of this series: Part II, Part III, and Part IV.

To download the entire paper, visit: Whitepaper: The Comprehensive Guide to Successfully Adopting Model-Based Systems Engineering MBSE



Enabling Digital Transformation

In this blog post, experts from Cadence, OpsHub, and Jama Software talk about enabling digital transformation in the hardware and semiconductor industries.


The relentless pace of innovation, rapidly changing markets, and increasing product complexity are creating intense pressures on companies in the semiconductor and hardware space. Some of the biggest challenges relate to scaling effectively and efficiently within the context of digital transformations.

Organizations in all sectors are looking to support faster release cycles and accelerate innovation. Siloed and legacy tool chains create a major hurdle in accomplishing these goals.

Watch the webinar or read the transcript below to learn more about:

  • Rich collaboration
  • Complete traceability
  • Full transparency among all stakeholders
  • Faster releases
  • Improved quality and productivity

Below is an abbreviated transcript and a recording of our webinar.


 


 

Jama Connect: the Leading Platform for Requirements

Matt Graham: Thanks everybody for joining. So today, before we get into the agenda, just to introduce the three products that our three subject matter experts will be talking about. First of all, something near and dear to my heart, the Cadence vManager verification management platform, which is a scalable, reliable, and very feature-rich verification planning and management solution from Cadence. That sits on top of a number of our verification tools and provides a sort of roll-up capability. And we’ll describe it in a little more detail in a couple of slides. On the OpsHub side, we’ll be looking at the OpsHub Integration Manager, which enables enterprises to integrate the best-of-breed tools that are best suited for their various teams and roles, and connect those together for integration and collaboration. And then Jama Connect, which is the leading platform for requirements, risk, and test management, helping provide that sort of end-to-end compliance solution.

Our agenda today: first we’ll look at some of the challenges of the semiconductor and hardware development ecosystem. This is obviously a very fast-paced, highly competitive environment, and there are a lot of specific challenges that the integration of the tools I just mentioned can help address and solve. We’ll look at how engineers in this space can scale effectively and efficiently utilizing some of these tools to address some of the ongoing transformations in that space. And then, specific to the semiconductor domain, bridging the gap in what has historically been a very siloed development process, and bringing together, for efficiency, quality, and reliability, all of the various tools that I mentioned to give a really nice integrated development and verification environment. We’ll then have a specific use case and demo showing how the three tools work in concert, and then look at some key takeaways. And as Marie mentioned, some Q&A.

Standards for Requirements such as ISO 26262

Specific to the semiconductor and hardware ecosystem, there is a developing set of challenges. And of course, there have always been challenges in this area. First-pass design success is critical for hardware development, simply because the tooling costs are so great. We don’t want to have to respin hardware. It’s not like just releasing more software; it requires real expense. But that has been the way of hardware development for some time. In the last several years we’ve seen a need creeping into that environment for even stricter compliance, particularly around mission-critical domains such as aero and defense, and automotive, especially as self-driving and autonomous vehicles come in. And adherence to standards like ISO 26262 presents another layer of requirements and need for management and collaboration on top of an already strict set of design parameters.

As I mentioned, this development environment tends to be very siloed in its nature because it is so specialist. You have specialist designers, specialist verification engineers to test the designs, specialist post-silicon engineers, specialist layout engineers, and so on and so forth. And all of those silos, while somewhat required by the specialty of each of those tasks, tend to hinder collaboration, compromise quality, and just impact efficiency and velocity overall, in an area where efficiency and quality are critical. We can’t have bugs in semiconductors going into automotive, and we need to be able to turn those new cell phones, those new mobile devices, as quickly as possible. So turnaround time is getting compressed and the requirement for quality is increasing at the same time.


RELATED: A Guide to Understanding ISO Standards


All of that siloed nature of the specialties, as well as the need for velocity and quality, really ends up in poor traceability of results, in terms of compliance and quality issues creeping in. Especially when it comes to doing things like audits for ISO and other similar standards that are becoming the requirement across, again, aero and defense type applications, automotive type applications, and even down into the sort of consumer device applications. And really, traceability is a watchword now in the ecosystem of hardware and semiconductor development.

So how does the offering from Cadence, vManager, fit into and help provide a solution to those challenges that I just mentioned? Well, vManager has been around for about 15 years, and in that entire time it’s had the key capability of the verification plan. And the verification plan really exists to provide traceability between what is being executed during the testing or verification of your semiconductor or hardware design and what were the goals or requirements of that testing or verification project. Things like testing interfaces, both internal and external to the semiconductor, and testing compliance with standards like Ethernet and USB, things like that. As well as the internal requirements of the device: it must route packets this fast, it must answer phone calls in this manner, or whatever it might be.

And the verification plan in vManager really allows the user to enter those requirements and then connect them to the real results that are occurring. We ran these tests, these tests were associated with a given requirement, those tests passed, therefore the requirement is satisfied. And so the V plan becomes a very natural place, and in fact the appropriate place, to connect the rest of the ecosystem via OpsHub to requirements coming from Jama Connect, so that we can have traceability across the software development, hardware development, mechanical development, et cetera, ecosystems. And vManager and the verification plan are really where that hardware verification, that hardware and semiconductor development information, enters that ecosystem through the conduit of the verification plan. So let’s look a little bit more at what exactly is in that verification plan that vManager provides.

Enabling Digital Transformation: Static Documents Cause Challenges

And the V plan is really what we refer to in our vManager pitch, if you will, as an executable verification specification or an executable verification contract. And what that means is that there’s data incoming to it during the creation, the authoring, of that verification plan. Not only through connectivity to tools like Jama, but also from, say, static documents like standards specifications. Ethernet that I mentioned before, USB: those are standard protocols that have very lengthy standards documents, and there needs to be a way to import, to kind of gather the data from those and put it in the verification plan. Another input to the verification plan is other verification plans. So if you think about a system on a chip, that is not a single piece of intellectual property; it’s built up of many, many different pieces: a USB piece, a central processing piece, a memory management piece, and so on.

And each of those pieces can have its own verification plan for the verification at that lower block level, as well as then conglomerating or aggregating those verification plans into a single system-on-a-chip verification plan. And the vManager V plan allows that through parameterization and instantiation and a really flexible set of reuse capabilities for verification plans. And then of course there are engineers just authoring their verification plan, literally writing, typing in: here’s a specific requirement, et cetera. And then we have the component of mapping those requirements to items that exist in the actual testing environment. Things like: we have a test, did it pass or fail? What requirement is that test related to? So there’s mapping the test to a particular requirement and then seeing whether that test passed or failed. Those of you familiar with hardware verification know that tests passing and failing is not the only statistic or metric that we track.

There are other metrics and statistics, such as code coverage, functional coverage, assertion coverage, and software coverage, all tracking what scenarios and what stimulus were driven to the specific device under test, what the reaction of the device under test was, and what percentage of the device has been exercised during that test. It is all basically statistics gathering from the testing effort. All that data can be mapped into the verification plan, directed to the specific requirement or multiple requirements that it may satisfy. And of course, this gives us the ability to not only specify a requirement but then capture whether that requirement was met. Was it satisfied? And this is the place where I’ll hand over to Jeremy now to talk about those higher-level requirements, or system-level requirements, in the general world and how they’re going to connect into this hardware verification, hardware development world.
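The requirement-to-test mapping described above can be pictured with a small, purely illustrative sketch in Python. This is not vManager’s data model or API; the requirement IDs, test names, and the 90% coverage goal are all hypothetical. It just shows the general shape of a verification-plan section tying a requirement to test results and a coverage metric.

```python
# Purely illustrative: a verification-plan section maps a requirement to test results
# and coverage; this is NOT the vManager data model or API, just the general idea.
vplan = [
    {"requirement": "REQ-USB-001: comply with USB enumeration protocol",
     "tests": [{"name": "usb_enum_basic", "passed": True},
               {"name": "usb_enum_stress", "passed": True}],
     "functional_coverage": 0.92},
    {"requirement": "REQ-PKT-004: route packets within 10 cycles",
     "tests": [{"name": "pkt_latency_sweep", "passed": False}],
     "functional_coverage": 0.61},
]

def plan_status(plan: list) -> None:
    """Report, per requirement, whether mapped tests pass and the coverage goal is met."""
    for section in plan:
        all_pass = all(t["passed"] for t in section["tests"])
        covered = section["functional_coverage"] >= 0.90   # assumed coverage goal
        status = "SATISFIED" if (all_pass and covered) else "OPEN"
        print(f"{section['requirement']}: {status} "
              f"({len(section['tests'])} tests, {section['functional_coverage']:.0%} coverage)")

plan_status(vplan)
```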



You might have noticed that requirements activities on projects sometimes lead to adversarial relationships. Customers don’t always feel that business analysts have their best interests at heart. Product managers get frustrated when customers demand never-ending changes in requirements but don’t want the delivery date to slip. Testers don’t appreciate having to redo their work because no one told them about updates to the requirements. Developers bristle when the project manager holds their feet to the fire to meet schedule despite piling on new features through scope creep. It’s a wonder anyone still speaks to their colleagues at the end of the project.

If you’re interested in pursuing better requirements processes, you have to respect the many cross-connections between the software development group and numerous external stakeholders. This article identifies some of those important interfaces and suggests ways to engage such stakeholders in a collaborative approach toward more successful projects in the future.

Figure 1. Requirements-related interfaces between software development and other stakeholders.

Figure 1 shows some of the project stakeholders that can interface with a software development group and some of the contributions they make to a project’s requirements engineering activities. Explain to your contact people in each functional area the information and contributions you need from them if the product development effort is to succeed. Agree on the form and content of key communication interfaces between development and other functional areas, such as a system requirements specification or a marketing requirements document. Too often, important project documents are written from the author’s point of view without full consideration of the information that the readers of those documents need.

On the flip side, ask the other organizations what they need from the development group to make their jobs easier. What input about technical feasibility will help marketing plan their product concepts better? What requirements status reports will give management adequate visibility into project progress? What collaboration with system engineering will ensure that system requirements are properly partitioned among software and hardware subsystems? Strive to build collaborative relationships between development and the other stakeholders of the requirements process.

When the software development group changes its requirements processes, the interfaces it presents to other project stakeholder communities also change. People don’t like to be forced out of their comfort zone, so expect some resistance to your proposed requirements process changes. Understand the origin of the resistance so that you can both respect it and defuse it.

Much resistance comes from fear of the unknown. To reduce the fear, communicate your process improvement rationale and intentions to your counterparts in other areas. Explain the benefits that these other groups will realize from the new process. Make sure they understand the pain that projects have experienced in the past because of shortcomings in your current processes. The prospect of eliminating pain is a great motivator for change. When seeking collaboration on process improvement, begin from this viewpoint: “Here are the problems we’ve all experienced. We think that these process changes will help solve those problems. Here’s what we plan to do, this is the help we’ll need from you, and this is how our work will help us both.”

Here are some forms of resistance, both active and passive, that you might encounter:

• A change control process might be viewed as a barrier thrown up by development to make it harder to get changes made. In reality, a change control process is a structure, not a barrier. It permits well-informed people to make good business decisions. The software team is responsible for ensuring that the change process really does work. If new processes don’t yield better results, people will find ways to work around them—and they probably should.

• Some developers view writing and reviewing requirements documents as bureaucratic time-wasters that prevent them from doing their “real” work of writing code. If you can explain the high cost of continually rewriting the code while the team tries to figure out what the system should do, developers and managers will better appreciate the need for good requirements.

• If customer-support costs aren’t linked to the development process, the development team might not be motivated to change how they work because they don’t suffer the consequences of poor product quality.

• If one objective of improved requirements processes is to reduce support costs by creating higher-quality products, the support manager might feel threatened. Who wants to see his empire shrink?

• Busy customers sometimes claim that they don’t have time to spend working on the requirements. Remind them of earlier projects that delivered unsatisfactory systems and the high cost of responding to customer input after delivery. You’re going to get the customer input eventually. It’s a lot cheaper and less painful (and people are in a better mood) to get it earlier rather than later.

Anytime people are asked to change the way they work, the natural reaction is to ask, “What’s in it for me?” However, process changes don’t always result in fabulous and immediate benefits for every individual involved. A better question—and one that any process improvement leader must be able to answer convincingly—is, “What’s in it for us?” Every process change should offer the prospect of clear benefits to the project team, the development organization, the company, the customer, or the universe. You can often sell these benefits in terms of correcting the known shortcomings of the current ways of working that lead to less than desirable business outcomes.

Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles on our web site via a series of blog posts, whitepapers and webinars.  Karl Wiegers is an independent consultant and not an employee of Jama.  He can be reached at http://www.processimpact.com.  Enjoy these free requirements management resources.

Software products are created for users, be they human beings, hardware devices, or other software systems. A user is a stakeholder who will interact with a completed system either directly (that is, hands on) or indirectly (for example, using reports from the system but not generating those reports personally). Users can be grouped into user classes, communities of users who have similar characteristics and similar requirements for a system.

Discussions of use cases always involve the concept of actors. An actor is an entity outside the system boundary that interacts with the system for the purpose of completing an event, such as execution of a use case. Actors are related to—but are not precisely the same as—user classes. This distinction between user classes and actors is subtle. It doesn’t help that the books on use cases employ somewhat different terminology. Here are the key points:

  • User classes represent groups of actual people or non-human users. A human user is a member of one or more user classes. You need to identify your product’s user classes so you know which people to talk with about requirements. You also need to understand which user classes are “favored” over others. Satisfying the needs of a favored user class is more important from a business perspective than meeting the needs of other groups of users. This distinction helps when making priority decisions and resolving requirement conflicts.
  • An actor is an abstraction, a role performed by a member of a specific user class when he interacts with a product at a specific time. When you are talking with user class representatives, have them identify the various roles that members of each class can perform from time to time. If those user roles involve interacting with the system through a use case, the roles represent actor names. Consider developing personas, descriptions of representative actors who can execute certain use cases.

I like to imagine the members of each user class as having a stack of hats available, each of which is labeled with a particular actor or role name. Whenever a user needs to perform a specific use case with the system, he puts on the hat labeled with the name of the actor who initiates that use case. The system behaves as though it’s interacting with that actor, regardless of what user class that individual user belongs to.

Figure 1 illustrates the relationship between actors and user classes for a bank. Bank Customers constitute one class of users of banking services. A particular Bank Customer might perform various functions from time to time with the bank’s systems, perhaps as an indirect user working through a bank employee. When performing those functions, the Bank Customer is assuming the role of a particular actor. When he makes a cash withdrawal from an automated teller machine, the customer is performing the role of an Account Owner. This is more specific than calling him a generic Bank Customer. As far as the ATM is concerned, it’s performing a service for an Account Owner.

Figure 1. A member of one user class could take on multiple actor roles.

On another occasion, that same person might walk into the bank and apply for a loan. At that time, he’s wearing the hat of a Loan Applicant actor, not an Account Owner. The system the customer interacts with at that time thinks of the user as being a Loan Applicant. In a third situation, a bank customer might deposit a check into an account he doesn’t own, perhaps on behalf of his spouse or a business colleague. In that case, the customer is filling the role of the Depositor actor. Note how many actor names end in -er or -or, which indicates that the actor is a performer attempting to accomplish a particular objective.

There could also be a many-to-one relationship between user classes and actors, as Figure 2 illustrates. When I worked on a project called the Chemical Tracking System at one company, we had several important user classes: chemists, chemical technicians, members of the chemical stockroom staff, and laboratory managers. Each of these groups had largely different sets of requirements, but there was some overlap. For example, members of all these user classes might have to place requests for chemicals periodically. Now, no one at this company had a job title of “chemical requester.” The Chemical Requester actor is an abstraction that represents anybody who needs to request a chemical and is authorized to do so. The system doesn’t care whether the person requesting a chemical works as a chemist, a lab manager, or something else. All the system knows is that a Chemical Requester actor is executing certain use cases associated with chemical request activities.

Figure 2. Members of multiple user classes could all perform as the same actor.

Actors appear in use case diagrams drawn according to the convention of the Unified Modeling Language. Figure 3 shows a portion of a use case diagram for the Chemical Tracking System. The box represents the system boundary. Each oval inside the box represents a single use case, something a user would need to accomplish with the help of the system. The stick figures outside the box represent actors. The stick figure notation is a standard convention that’s used whether the actor is a human being or an inanimate entity.

Figure 3. Use case diagrams show primary and secondary actors for specific use cases.

An arrow drawn from an actor to a use case indicates that the actor can initiate the corresponding use case; this is called the primary actor for that use case. The Chemical Requester actor can initiate the use cases Request a Chemical, Receive Chemical, Check Order Status, and so forth. An arrow directed from a use case to an actor indicates a secondary actor. Secondary actors participate in the completion of a use case, but they don’t derive the benefit from the use case; the primary actor gets the benefit. The Training Database and Buyer actors are secondary actors with respect to the Request a Chemical use case. The Chemical Tracking System might have to rely on the Training Database to see whether a user is properly trained in how to handle a dangerous chemical. And, if someone submits a request for a chemical that needs to be purchased from a vendor, the system will route that request to a Buyer for handling. Note that actors are primary or secondary with respect to a specific use case, not with respect to the overall system.
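To make the relationship between user classes, actors, and use cases concrete, here is a small illustrative sketch in Python. It borrows the Chemical Tracking System names from the example above; the additional role names are hypothetical, added only to show the many-to-one mapping.

```python
from dataclasses import dataclass, field

# Many-to-one: members of several user classes can perform the same actor role
actor_roles = {
    "Chemist":         ["Chemical Requester"],
    "Lab Manager":     ["Chemical Requester", "Report Reviewer"],   # hypothetical extra role
    "Stockroom Staff": ["Chemical Requester", "Inventory Clerk"],   # hypothetical extra role
}

@dataclass
class UseCase:
    name: str
    primary_actor: str                                       # initiates and benefits
    secondary_actors: list = field(default_factory=list)     # participate but don't benefit

request_chemical = UseCase(
    name="Request a Chemical",
    primary_actor="Chemical Requester",
    secondary_actors=["Training Database", "Buyer"],
)

def who_can_initiate(use_case: UseCase, roles: dict) -> list:
    """List the user classes whose members can take on the use case's primary actor role."""
    return [user_class for user_class, actors in roles.items()
            if use_case.primary_actor in actors]

print(who_can_initiate(request_chemical, actor_roles))
# ['Chemist', 'Lab Manager', 'Stockroom Staff']
```

The point of the sketch is simply that the system sees only the actor (Chemical Requester), while the requirements analyst still needs to know which user classes stand behind that actor in order to talk to the right people.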

As you begin your requirements exploration, be sure to identify your user classes so that you know who you’ll need to talk to in order to discover user requirements. Key representatives of user classes (product champions) can work with the business analyst during elicitation. The product champions identify the use cases that represent tasks or goals that members of their user class will need to accomplish with the system’s help.

As you explore each use case, think of an appropriate name to describe the actor who will initiate that use case. Try to select a name that reflects what the user is attempting to accomplish, such as Chemical Requester, Buyer, or Loan Applicant. Consider whether members of other user classes might also have occasion to perform that same use case—that is, whether they might have occasion to function in that same actor role. The analyst should consult representatives of those user classes to see if their stated needs enrich his understanding of the use case, perhaps by identifying additional alternative flows.

Consider developing catalogs that describe your common user classes and actors so that you can reuse these definitions in a consistent fashion across multiple projects. Any opportunities for requirements reuse will reduce errors, save time, and help you build a more cohesive set of integrated products.

Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles on our web site via a series of blog posts, whitepapers and webinars.  Karl Wiegers is an independent consultant and not an employee of Jama.  He can be reached at http://www.processimpact.com.  Enjoy these free requirements management resources.