Tag Archive for: best practices

Jama Connect captures real-time feedback and decisions to give you visibility into every stage of the product development cycle, from benchmarking and monitoring team progress to enabling stronger collaboration between stakeholders.

You have a wealth of information available to you in Jama Connect, but it’s only helpful if you can find what you need when you need it. Here are three tips for finding data quickly and easily.

Search Bar

OK, you probably thought of using the search bar, but you may not be using it to its full potential.

When you search for a term like “hardware,” Jama Connect’s robust search functionality will pull up every item containing that keyword, including Word, PDF, and text attachments. Your search isn’t restricted to items with the word “hardware” in the title or description; Jama Connect will show you every item that contains the word “hardware” anywhere. This powerful functionality tends to be underused.

You can also limit your search to a specific field, like the item name, just by adding the field and a colon to your search: “Name: Hardware” will locate all items with the word “hardware” in the name.

Say you’re looking for something more specific, like a client’s name. If there’s only one result for your search term, Jama Connect will automatically open a window taking you to that item, so you can hit the ground running.

Note: For the next two tips, we’re assuming a certain level of familiarity with Jama. For a more granular guide to these features, start with the Jama Connect User Guide or get your questions answered in the Jama Support Community.

Find Me 

Jama Connect’s Find Me feature allows you to locate the item you’re working on within the project structure, represented by the Explorer Tree. At any moment, you can get a comprehensive, integrated view of the project and see how each item fits in. The Find Me feature keeps you from missing the forest for the trees by putting the holistic vision for the project top-of-mind. At the same time, Find Me helps you orient each task you’re working on in relation to the project as a whole.

Trace View

Trace View offers live traceability within the product development cycle, showing you upstream and downstream items, missing relationships and item details in the context of relationships.

To access Trace View, select it from Projects. The items you selected in List View will appear in the Source column, with related downstream items to the right and related upstream items to the left. From there, you can use the blue arrows at either side of your screen to move further upstream or downstream.

You can also apply filters to Trace View to see items by type — like subsystem requirement, verification or design description.

Additionally, Jama Connect lets you save a Trace View and bookmark it on your homepage for easy access. You can save multiple views to facilitate working on or managing items in different capacities.

Finally, you can export saved Trace Views as CSV files or share them with team members by copying the URL. (Note that Trace Views cannot be made public, so the best way to share a specific upstream or downstream view with the team members who need it is to send them the URL.)

For a deeper dive into maximizing Jama Connect, check out the Ask Jama webinar, or explore the Jama Connect User Guide, which is full of tools to plan and track progress and performance.

Jama Connect allows you to create groups to manage permissions and facilitate collaboration at the project level. Groups allow you to work smarter and faster by empowering you to manage notifications, permissions, access and action for multiple users at once.  

There are two types of groups: 

  • Project groups: These groups are created in the context of a specific project and are available to that project when adding permissions. 
  • Organization groups: These groups have no project context; they’re available to all projects in the organization.  

You can name groups according to your internal structure (e.g., job title or work group), by access permission (read-only, read/write), or by role (project admin, review admin).  

Jama Connect comes equipped with several pre-defined project and organization groups, such as Organization Admin, a default group with organization and project permissions. 

Creating groups based on users’ roles or permissions allows you to: 

  • Grant access and role permissions 
  • Initiate reviews 
  • Subscribe users to items 
  • Notify users of changes to content or workflow 

Only your organization’s Jama administrator can determine which people have access to which project. However, as a project administrator, you’re empowered to group your users in a way that makes sense for your project.  

The Users tab within your project will display everyone who has any level of access to the project. (This is the list of users determined by your organization’s administrator.) To create a new group within your project, click Add Group in the upper right. You can change who’s in the group, manage the group subscriptions, change the group name or even remove the group. For stakeholders who work on the project in different capacities, you can also add users to multiple groups.  

Groups aren’t just an easy way to manage multiple users’ permissions at once (although that’s certainly useful!). Groups enable collaboration, provide transparency and visibility across the product development cycle and allow for greater security when required.  

Groups are where the organizational administration of Jama Connect really overlaps with project management within the platform. We recommend identifying high-level groups at the organizational level that are managed by the administrator, like a read access group, and then letting project managers handle the group permissions, notifications and actions within each project.  

For more on creating and editing groups, check out the Ask Jama webinar, or explore the Jama Connect User Guide. 

Technical debt refers to the implied cost of rework necessitated by choosing a fix that’s easy to implement now, rather than a better resolution that would take longer. In simple terms, tech debt points to the work your team will have to do in the future as a result of cutting corners today. 

That said, tech debt isn’t a negative indicator across the board, and it’s impossible to avoid altogether. Sometimes taking on tech debt makes sense, like when you need to accelerate your time to market or account for last-minute spec changes.  

Complex software that requires extensive refactoring — that is, restructuring of existing code — is particularly associated with tech debt, but refactoring is not only a response to problems with the code. Refactoring often represents an evolving understanding of a problem and the best way to solve it.  

Still, it behooves teams to minimize unnecessary tech debt. In our ebook, “The Essential Guide to Software Development Team Metrics,” we dig into metrics to assess the speed, accuracy, quality and health of your team. Seven of these metrics are especially important to bear in mind when working to minimize tech debt: 

Ticket churn 

Also referred to as “ticket bounceback,” ticket churn measures how many tickets have been moved back to in-progress over a period of time. Ticket churn gauges rework, so high ticket churn can indicate increasing tech debt and is likely to impact your team’s velocity and overall product quality. 
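
If your tracker can export status-change events, churn is straightforward to compute. Here is a minimal Python sketch; the stage names and event format are invented for illustration, not taken from any particular ticketing tool:

```python
from datetime import datetime

# Workflow stages in order; moving a ticket to an earlier stage counts as churn.
# These stage names are illustrative, not from any specific tracker.
STAGES = ["todo", "in_progress", "review", "done"]

def ticket_churn(events, start, end):
    """Count distinct tickets moved backward in the workflow between start and end.

    events: iterable of (ticket_id, timestamp, from_stage, to_stage) tuples.
    """
    churned = set()
    for ticket_id, ts, from_stage, to_stage in events:
        if start <= ts <= end and STAGES.index(to_stage) < STAGES.index(from_stage):
            churned.add(ticket_id)
    return len(churned)

events = [
    ("T-1", datetime(2021, 3, 2), "review", "in_progress"),  # bounced back: churn
    ("T-2", datetime(2021, 3, 5), "in_progress", "review"),  # normal forward flow
]
print(ticket_churn(events, datetime(2021, 3, 1), datetime(2021, 3, 31)))  # -> 1
```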

New vs. closed bugs 

Track how many bugs are opened against how many are closed to find the rate at which your team clears bugs. This metric is an indicator of your overall tech debt, as well as whether your team is moving in the right direction in terms of general code quality.  

Bug burndown 

Development and quality assurance teams use bug burndown to understand open bugs and their projected closures based on the average bug closure rate. Teams that don’t keep a close eye on bug burndown can lose a handle on their overall product quality and take on excessive amounts of tech debt in their effort to fix bugs quickly. 
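
As a back-of-the-envelope illustration (not a substitute for a real burndown chart), the projection is simple division: open bugs over the net closure rate. Folding in the arrival rate also captures the new-vs.-closed metric above.

```python
def projected_weeks_to_zero(open_bugs, avg_closed_per_week, avg_opened_per_week=0.0):
    """Naive burndown projection: weeks until the backlog reaches zero,
    assuming the average rates hold steady."""
    net_burn = avg_closed_per_week - avg_opened_per_week
    if net_burn <= 0:
        return float("inf")  # bugs arrive as fast as they're closed: no burndown
    return open_bugs / net_burn

# 120 open bugs, ~15 closed and ~5 opened per week -> projected 12 weeks
print(projected_weeks_to_zero(120, 15, 5))
```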

Percentage of high-priority bugs 

This straightforward calculation — dividing the number of current, high-priority bugs by the total number of bugs — is simply the percentage of bugs that your team has tagged as high-priority (sometimes high-severity) due to their impact on customers or the product as a whole. Tracked over time, this metric illuminates part of the story of both product quality and tech debt. An increasing trend of more high-priority bugs is often symptomatic of a team struggling with product requirements, test cases and test suites.  
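
The arithmetic is as simple as it sounds. A toy Python example, with an illustrative "priority" field standing in for however your tracker tags bugs:

```python
def high_priority_share(bugs):
    """Percentage of bugs tagged high-priority.

    bugs: list of dicts with a "priority" key (field name is illustrative).
    """
    if not bugs:
        return 0.0
    high = sum(1 for bug in bugs if bug["priority"] == "high")
    return 100.0 * high / len(bugs)

snapshot = [
    {"priority": "high"}, {"priority": "low"},
    {"priority": "high"}, {"priority": "medium"},
]
print(f"{high_priority_share(snapshot):.0f}%")  # -> 50%; track this per sprint
```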

Code churn 

Code churn reflects the number of lines of code that are added and then soon modified or deleted. It’s a measure of activity over time that highlights hotspots in your code.  

With brand-new features, a lot of activity in one area isn’t a problem, but over time, code churn should diminish; if it doesn’t, you’re doing too much rework and accumulating an unnecessary amount of tech debt. High code churn can also predict a drop in quality and speed.  
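
One rough way to surface churn hotspots is to sum lines added and deleted per file from version-control history. The Python sketch below leans on git's --numstat output; it is a crude proxy, since real churn tools distinguish brand-new work from rework:

```python
import subprocess
from collections import Counter

def churn_hotspots(repo_path, since="30 days ago", top=10):
    """Rank files by lines added + deleted over a period (a rough churn proxy)."""
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--pretty=format:"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    churn = Counter()
    for line in log.splitlines():
        parts = line.split("\t")  # numstat rows look like "<added>\t<deleted>\t<path>"
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added, deleted, path = parts
            churn[path] += int(added) + int(deleted)
    return churn.most_common(top)

for path, lines in churn_hotspots("."):
    print(f"{lines:6d}  {path}")
```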

Code coverage 

Also called test coverage, code coverage is the percentage of lines of code that are executed when running your test suite. Fundamentally, code coverage refers to how effective your test process is at producing a quality product. As a rule of thumb, coverage should be in the 80-90% range.  

Code coverage doesn’t measure the inherent quality of your product; rather, it helps reveal the process your team is undertaking to achieve a quality product. Code coverage highlights breakdowns in your test process, like when new code is added but not tested.  

If your code coverage percentage drops over time, devote more resources to test-driven development (TDD) and make sure untested areas are covered. A high-quality test process heads off quality issues and reduces tech debt, setting your team up to be nimbler in the future.  
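
In practice the number comes from a coverage tool rather than hand computation, but the arithmetic itself is just a ratio, and a small Python sketch can double as a threshold gate:

```python
def coverage_percent(executed_lines, total_lines):
    """Line coverage: share of executable lines hit by the test suite."""
    if total_lines == 0:
        return 100.0  # nothing to cover
    return 100.0 * executed_lines / total_lines

# Illustrative counts; a real run would pull these from your coverage tool.
pct = coverage_percent(executed_lines=4230, total_lines=5000)
print(f"coverage: {pct:.1f}%")  # 84.6%, inside the 80-90% rule of thumb
if pct < 80.0:
    raise SystemExit("coverage below threshold: add tests for untested code")
```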

Frontend app response time 

Frontend app response time, or the average time it takes for your pages to be available to end-users, helps your team identify when important product updates or infrastructure upgrades are necessary to keep your solutions running smoothly.  

Usually monitored closely by DevOps teams, frontend app response time is critical to product success. Your product might add huge value, but users will start looking elsewhere if it’s too slow. An increase in response time often suggests rising technical debt: your short-term solutions are creating long-term problems for your customers and your team.  
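
DevOps teams usually pull this number from real-user monitoring, but as a crude stand-in, a short Python script can sample the average time to fetch a page:

```python
import time
from urllib.request import urlopen

def average_response_time(url, samples=5):
    """Average wall-clock seconds to fetch and read a page over several tries."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urlopen(url) as response:
            response.read()  # include transfer time, not just time to first byte
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

# Any reachable URL works; example.com is just a placeholder.
print(f"{average_response_time('https://example.com'):.3f}s")
```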

Get the full story on metrics for development teams: Pick up The Essential Guide to Software Development Team Metrics now.

Requirements obsolescence is an issue brought about by disruption in technology.

Complex development projects can include hundreds of requirements and span multiple years. In that time, a lot can change in the market and with a product’s intended use and design.

So how should you go about questioning requirements identified at the beginning of a project to determine whether they should make it into the final product?

We put this question to three requirements experts in a recent webinar on disrupting requirements, and here’s what we learned.

Focus on the Requirement’s Purpose

First of all, when considering whether a requirement is legitimate or obsolete, Colin Hood, Principal at Colin Hood Systems Engineering, urges teams to identify the purpose of the requirement itself.

To do this, you must separate the “what” from the “how.” Ask yourself: Is the requirement a basic statement on what you’re trying to do? Or does the requirement state how to do a certain thing?

So long as the requirement is a problem statement outlining the desired outcome you want to achieve – i.e., the former of the two questions above – it shouldn’t have a specific lifespan and could likely exist forever.

In this case, it would be the responsibility of the product owner to decide if and when this requirement would go into the backlog and eventually be realized in the evolving product.

On the other hand, if you answered affirmatively to the second question – meaning the requirement’s purpose is to tell you how to execute a solution – the requirement may face obsolescence very quickly.

The Problem with a Solution-Based Requirement

Requirements that are solution-based are often technology-dependent. With the rate of change occurring in today’s world, technology is easily disrupted. What was once necessary may no longer be supported.

Furthermore, requirements that focus on how to get work done take you into the solution domain too quickly and can restrict the design phase.

Christer Fröling, Scandinavian Marketing Lead at the Reuse Company, cautions that when gathering requirements, going from the problem side to the solution side too quickly can cause you to miss important contextual information and yield low-quality requirements. The results can include inaccurate cost estimates and costly overruns due to an incomplete scope of the problem.

Bottom line: You don’t want your team spending time and budget working on an obsolete requirement.

Understanding the Purpose

Michael Jastram, Sr. Solutions Architect at Jama Software, reminds us that functional modeling can also be used to separate the general function from how it is being done on a technology level.

This distinction could help you salvage a seemingly obsolete requirement by uncovering a real understanding of what the requirement aims to achieve apart from how the requirement suggests it be executed.

In addition to making some requirements obsolete, change affects a lot of the development process. To hear requirements experts discuss and debate what that means for the requirements-gathering process, traceability and the future of requirements, listen to our webinar, Disrupting Requirements: Finding a Better Way.

Today, most consumers research products online before purchasing. It doesn’t matter if the product is B2C or B2B, or even if there isn’t a single competitor in the marketplace. You can bet customers are searching for opinions on your product, as well as alternative options.

“We are seeing the growing power of a customer in driving perceptions of brands/products, and reviews is just one way,” Tom Collinger, Executive Director of the Spiegel Research Center at Northwestern University, wrote in an email to Jama Software.

Recent research has highlighted the ways in which consumers perceive online reviews and how they inform purchasing decisions. And from that has come insights into how companies can think about online reviews — including why negative ones aren’t all bad — as well as ideas for future-proofing from backlash.

Value of Online Reviews

Researchers at the Institute of Cognitive Neuroscience at University College London, for instance, recently looked at how online reviews influence the perception of a product. The study first had 18 participants rate a range of Amazon items based only on image and description. The subjects were then asked to score the products a second time, but were instead shown the image along with aggregated user reviews, which displayed the average score and total number of reviews.

Turns out the subjects’ opinions were very much swayed by reviews, as their second round of ratings fell somewhere between their original score and the average. As Science Daily notes, “Crucially, when products had a large number of reviewers, participants were more inclined to give ratings that lined up with the review score, particularly if they lacked confidence in their initial appraisals, while they were less influenced by ratings that came from a small number of reviewers.”

As the study showed, people leaned more toward group consensus when their confidence about a product’s overall quality was low and the pool of reviews was large. Anecdotally, this seems to track. If you see a product online with over a thousand five-star reviews versus one with just two five-star reviews, you’re probably more likely to trust the one with more reviews, since it appears more credible.

Importance of Average Scores

Not that products with tons of five-star reviews receive a blanket pass. Another recent study on the power of online reviews, this one conducted by Northwestern University’s Spiegel Research Center in conjunction with the platform PowerReviews, analyzed millions of customer experiences from online retailers.

Northwestern discovered that products with near-perfect scores can appear almost too good to be true. “Across product categories, we found that purchase likelihood typically peaks at ratings in the 4.0 – 4.7 range, and then begins to decrease as ratings approach 5.0,” the report states. So while you shouldn’t court negative reviews, a few can bolster your product’s perceived authenticity.

Beyond that, Northwestern also detailed other ways in which negative reviews can help a product. According to the study, products displaying at least five reviews, positive or negative, have a purchase likelihood 270% greater than products with zero reviews.

Importance of Early Reviews

The study also discovered that nearly all increases in purchase likelihood of a product from online reviews occurred within the initial 10 reviews posted, with the first half of those being the most influential.

Of course, not all websites display reviews the same way. Some sort by review quality over chronological order, for instance. For the purposes of this research, the first five were found to be the most important regardless of how they were presented. “Our analysis describes the first five the consumer sees,” Collinger wrote in an email to Jama Software. “This is independent of the way in which they are listed by the retailer.”

This is why it’s so important for a company to get a product right at release. If a new offering gets savaged by a wave of online reviews initially, regardless of how they’re sorted on a website, those will be the first and most influential opinions people read when considering a product. And that reputation can stick. Patching the problem in future releases will help, but then you’re counting on people updating their reviews later on, which is no sure thing.

Good Products Come From Good Processes

When managing online reviews, the Northwestern study urged companies to focus on the first five reviews, embrace critical reviews, and follow up on purchases via email to encourage consumers to write reviews. In fact, reviews from actual, verified buyers — as opposed to those who post reviews anonymously — are more likely to be positive than negative. The study also recommended making it easy for buyers to post reviews on a company’s website, regardless of their device or platform.

From a business standpoint, one other fundamental way to get online reviews working in your favor is by releasing the best product possible. That starts with a solid development process, with plenty of quality safeguards in place.

Best practices like implementing test management early and often, for instance, will reduce the number of defects or bugs that are likely to show up in your final release. Often, it’s exactly those types of design misfires that’ll get a product maligned by early online reviews.

And while ensuring the quality of a final product is a key factor of success with online reviews, according to Collinger, it doesn’t stop there. “Getting the entire customer experience right is the very simple solution to leverage this growing customer influence,” he wrote.

What do you call a system that has never-changing requirements? An obsolete system. Systems that are healthy and growing are always evolving or changing in some way.

Avionics, automotive and medical systems require airtight requirements. For the customers and users of these complex systems—and the companies building them—safety and security are critical.

Getting safety-critical requirements right is a surefire step in the right direction.

But too often, the requirements for these systems’ components invite more questions than they answer, and requirements management turns into a process of trying to pick out the bits of data you need, when you need them, while everything churns together in a roiling, boiling stew.

Safety-critical systems developers working with regulations and compliance standards know that opting for speed over safety, or safety over speed, adds risk. To succeed, both must be prioritized.

The majority of product, version and variant failures stem from weak requirements.

According to Vance Hilderman, CEO of the safety-critical systems and software engineering company AFuzion, “Safety-critical requirements include safety aspects, but not exclusively. There’s a grey area between functional, performance and safety requirements because if the system doesn’t function, it can’t be safe. If it doesn’t meet performance criteria it might not achieve safety aspects, but then there are explicit safety requirements as well. The problem is that most safety-critical requirement specifications are incomplete. They lack complete hazard-prevention mitigation. They can all be improved.”

“Almost all accidents related to software components in the past 20 years can be traced to flaws in the requirement specifications such as unhandled cases.” (Software Engineering, Safety-Critical Requirements & Specification)

The challenge is to prevent those accidents in the first place and make tomorrow’s unhandled case a handled case today. Knowing the right procedures for developing safety-critical requirements is the key. But what are these best practices, why are they the “best,” and how do teams utilize them? Vance offers some tips below:

  • Good requirements should be mandatory. That means not a goal, not “if you have time,” but truly mandatory.
  • Requirements must be consistent. Meaning, they don’t conflict with other requirements. Systems engineers must be able to manage and analyze requirements; with complex systems you need tools for that.
  • Requirements must be uniquely identified. They must state what we do, not how. The “how” is design and architecture.
  • Requirements need to be complete and unambiguous. That means full concurrence among developers as to what a requirement means, with no need for interpretation, because the requirement has sufficient detail to know exactly what the developer of that requirement intended.
  • Requirements must be consistent, with no conflicting characteristics. We know the priority, the timing aspects and the performance attributes. We cannot have conflicting logic: “A or B” in one place, “A and B” in another when C is not valid. We also avoid mixing terms. For example, “trigger, assign, prompt, display, cue”: those five words could be used interchangeably, but they’re different. We must be consistent.
  • Requirements must be traceable. That’s an essential part of good requirements management activities—to trace up to a system-level requirement and trace down to the implementation and the test. Traceability is also used during the code review to assess, “Does this logic fully implement all it should?”
  • Requirements must be testable. Each requirement is defined in quantifiable terms. For each requirement, can a test be formulated that will unambiguously answer the question, “Has the requirement been met?” Some things are that simple.
  • Requirements must be verifiable. For example, take this requirement: “The system shall support autonomous driving or flying.” All of us in the automotive world are leaning toward that, from aviation to new AVs. But as written, it offers nothing quantifiable to verify.

To learn more about how to build, maintain and reuse a rock-solid requirements foundation, please watch Developing Safety-Critical System & Software Requirements.

DO-178C Avionics Development Best Practices

We’ve all heard the joke, “How do you get to Carnegie Hall? Practice, practice, practice.” The adage holds true for many of the goals we strive for in life. As defined, practice is…

“Repeated exercise in or performance of an activity or skill so as to acquire or maintain proficiency in it.”

From an early age, we’re taught that the more we do something, the more we learn about the best way to do it. If practice doesn’t always make perfect, it gets us closer to perfection.

But in avionics development there is hardly time to practice. When moving quickly and passing regulatory compliance audits are top priorities, teams can’t sacrifice time or effort to repetition and refinement.

Adding to this pressure, avionics development has precious little margin for error, with schedules, budgets and safety all on the line.

During each avionics product development project, every organization wants to minimize the same things: cost, schedule, risk, defects, reuse difficulty, and compliance and certification roadblocks.

So, how then can “practice” be reconciled with avionics development?

The best answer is to understand the breadth of current development processes and glean the best knowledge and solutions from the aviation ecosystem.

Welcome to DO-178C Best Practices.

Creating and instilling a set of DO-178C best practices for avionics development helps engineers and stakeholders focus on the right processes at the right times.

Certain avionics software development practices are self-evident, such as utilizing defect prevention and automating testing. The DO-178C best practices we have identified are subtler and considerably less practiced, and yet, when utilized together, they greatly increase the probability of avionics project and product success.

According to Vance Hilderman, founder of two of the world’s largest avionics development services companies and primary author of the best-selling book on DO-178 and DO-254,  here are the top 10 not-always-obvious DO-178C best practices that every avionics developer needs to know:

  1. Improved LLR Detail: If requirements are the foundation of good engineering, detailed requirements are the foundation of great engineering.
  2. Parallel Test Case Definition: If a tester cannot unambiguously understand the meaning of a software requirement, how could the developer?
  3. Testing Standards Implementation: Requirements, design and code all have standards. What should a software test standard cover?
  4. Model Framework Templates: Software modeling will eventually fade away … when software functionality, complexity and size all decrease 90%.
  5. Fewer, Better Reviewers: Why one great reviewer is better than many good reviewers.
  6. Automated Regression & CBT: How devoting upfront time to a test automation framework can provide the single largest reduction in development expense.
  7. Automated Design Rule Checker: On their best days, humans perform satisfactorily when checking software design rules; in the safety-critical world, not all days are best days.
  8. Advanced Performance Testing: Would you want to buy a new car model which has never been tested in aggressive driving conditions?
  9. Parallel Traceability / Transition Audits: The reasons why experienced developer teams deploy proactive SQA and tools to monitor bi-directional traceability continuously.
  10. Technical Training Workshops: The four critical processes that yield improved productivity, consistency and high ROI.

To learn more about how these best practices can make a difference in your avionics product development projects, read DO-178C Best Practices For Avionics Engineers & Managers.

“Gartner clients report that poor requirements are a major cause of rework and friction between business and IT. Broad market adoption of software requirements solutions is low, exacerbating this situation.” So begins the key findings section of Gartner’s newest Market Guide for Software Requirements Definition and Management Solutions.

The guide provides key findings, recommendations, market definition and direction, summarily stating:

Requirements management software provides tools and services that aid the definition and management of software requirements and user experience. Application development executives should invest in requirements skills, practices and tools to improve user experience and software quality.

In choosing a requirements management tools vendor, Gartner advises companies consider, among other factors, the ability to:

  • Work in shared (rather than collaborative) environments.
  • Use a true requirements repository (featuring a robust meta-model that enables reuse and impact analysis) rather than simple storage and tagging.
  • Integrate with other ADLM tools in use (including test case management and agile planning).
  • Support regulatory and reporting needs (for compliance with both internal and external governance processes).

Gartner, Market Guide for Software Requirements Definition and Management Solutions, Thomas E. Murphy, Magnus Revang, Laurie F. Wurster, 24 June 2016 

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

The NASA Ames Research Center in Moffett Field, nestled amongst the global headquarters of Google, LinkedIn, Yahoo and Symantec, is home to Carnegie Mellon Silicon Valley. At this campus, graduate students prepare to become the technical leaders of the Fortune 100 companies that surround them.

Professor Cécile Péraire teaches courses in the CMU ECE Master’s Program in Software Engineering. With a PhD in software testing, she has a robust background working with the world’s leading software companies.

In the past, Professor Péraire’s classes used Word documents to write and manage requirements and basic Kanban boards to track work. But those traditional processes didn’t reflect the complex, critical work students would do after graduation. She sought out a SaaS, cloud-based solution that students could use to write, manage and trace requirements. It needed to be user friendly, so students could find value in it quickly and accomplish their work within the term. After an in-depth evaluation of five requirements management tools, Professor Péraire selected Jama Software and became a member of Jama’s Innovator Partner Program. “Overall,” she said, “Jama was the one that performed the best and satisfied all the criteria on my list.”
At the start of the term, Professor Péraire introduced the Jama solution to her class and provided an overview of the tool. Thanks in part to Jama eLearning, she quickly learned the basics of the tool to help her students hit the ground running. Students used Jama to collaborate on their term project, which involved developing an application to assist real-life local first responder teams. Students reported that the tool helped them work in teams more effectively. One student noted: “Jama is a great collaboration platform.”

Students weren’t the only ones to benefit from the introduction of a modern requirements solution. “I used Jama’s Review Center to evaluate my students’ work and provide more frequent and actionable feedback,” says Professor Péraire. She used item-based reviews to comment on specific elements of the students’ projects and help them improve their work throughout the semester. She concluded: “Overall, as the faculty, a tool like Jama provides me with improved visibility into the students’ work, and also improves my ability to effectively collaborate with students outside the classroom, for both mentoring and evaluation purposes. For students, the tool reduces overhead in terms of structuring the information so they can focus on content creation and hence maximize learning.”

Interested in trying Jama in an academic setting? Sign up for a free trial and explore our free eLearning to get up and running quickly.

Learn more about how academic institutions, technology incubators and educational foundations can join Jama’s Innovator Partner Program >>

And calculates the costs and losses caused by inferior RM

“Forty to fifty percent.” That’s the percentage of companies that manage the creation, iteration, testing and launch of new products with MS Office, Google Docs and other all-purpose documentation tools, according to research from Gartner’s “Market Guide for Software Requirements Definition and Management Solutions.”

The result? Gartner clients report: Poor requirements definition and management is a major cause of rework and friction between business and IT.

Gartner notes the growing need for Requirements Management (RM) to be applied to continuous processes within IT, business and product delivery organizational areas. New tools and practices have emerged that emphasize collaboration, integrations and definition. These in turn have generated new demand for dedicated RM tools in both hardware and software development. As Gartner notes:

The delivery of effective software solutions begins with the effective capture of the business objectives, desired business outcomes and software requirements. Too often, requirements appear to be the leading source of delivered defects in software.

In choosing a requirements management tools vendor, Gartner advises companies consider, among other factors:

  • The ability to work in shared versus collaborative environments
  • Integration to other ADLM tools (Agile, test management)
  • Support for your chosen development practice (Waterfall versus Agile)
  • Who will be involved in actively creating, reviewing, approving and consuming requirements?
  • Regulatory and reporting needs for compliance