
How can you distinguish excellent software requirements and software requirements specifications (SRS) from those that could cause problems? In this post, we'll start by discussing several characteristics that individual requirements should exhibit. Then we'll look at the desirable traits a successful SRS should have as a whole.

Characteristics of Effective Requirements

In an ideal world, every individual user, business, and functional requirement would exhibit the qualities described in the following sections.


Complete

Each requirement must fully describe the functionality to be delivered. It must contain all the information necessary for the developer to design and implement that bit of functionality. If you know you’re lacking certain information, use TBD (to be determined) as a standard flag to highlight these gaps. Resolve all TBDs in each portion of the requirements before you proceed with construction of that portion.
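As a lightweight way to enforce this rule, some teams scan their requirements text for TBD markers before starting construction. A minimal sketch in Python (the requirement IDs and statements here are hypothetical):

```python
import re

def find_tbds(requirements):
    """Return (requirement_id, text) pairs that still contain a TBD flag."""
    return [(req_id, text) for req_id, text in requirements.items()
            if re.search(r"\bTBD\b", text)]

# Illustrative requirements set; in practice this would be pulled from
# your SRS document or requirements management tool.
requirements = {
    "REQ-101": "The system shall export reports as PDF.",
    "REQ-102": "The system shall retry failed uploads TBD times.",
}

for req_id, text in find_tbds(requirements):
    print(f"{req_id} has an unresolved TBD: {text}")
```

Running a check like this before each construction increment makes the "resolve all TBDs first" rule mechanical rather than a matter of memory.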

Nothing says you need to make the entire requirements set complete before construction begins. In most cases, you’ll never achieve that goal. However, projects using iterative or incremental development life cycles should have a complete set of requirements for each iteration.

Using minimal requirements specifications runs the risk of having different people fill in the blanks in different ways, based on different assumptions and decisions. Keeping requirement details verbal instead of written also makes it hard for business analysts, developers, and testers to share a common understanding of the requirements set.

RELATED: Move to Jama Connect® — A Modern Requirements Management Alternative to IBM® DOORS®


Correct

Each requirement must accurately describe the functionality to be built.

The reference for correctness is the source of the requirement, such as an actual user or a high-level system requirement. A software requirement that conflicts with its parent system requirement is not correct.

Only user representatives can determine the correctness of user requirements (such as use cases), which is why users or their close surrogates must review the requirements.


Feasible

It must be possible to implement each requirement within the known capabilities and limitations of the system and its operating environment. To avoid specifying unattainable requirements, have a developer work with marketing or the BA throughout the elicitation process.

The developer can provide a reality check on what can and cannot be done technically and what can be done only at excessive cost. Incremental development approaches and proof-of-concept prototypes are ways to evaluate requirement feasibility.


Necessary

Each requirement should document a capability that the stakeholders really need or one that’s required for conformance to an external system requirement or a standard.

Every requirement should originate from a source that has the authority to specify requirements. Trace each requirement back to specific voice-of-the-customer input, such as a use case, a business rule, or some other origin of value.

RELATED: Carnegie Mellon University Software Engineering Program Teaches Modern Software Engineering Using Jama Connect


Prioritized

Assign an implementation priority to each functional requirement, feature, use case, or user story to indicate how essential it is to a particular product release.

If all the requirements are considered equally important, it’s hard for the project manager to respond to budget cuts, schedule overruns, personnel losses, or new requirements added during development. Prioritization is an essential key to successful iterative development.


Unambiguous

All readers of a requirement statement should arrive at a single, consistent interpretation of it, but natural language is highly prone to ambiguity. Write requirements in simple, concise, straightforward language appropriate to the user domain. “Comprehensible” is a requirement quality goal related to “unambiguous”: readers must be able to understand what each requirement is saying. Define all specialized terms, and any terms that might confuse readers, in a glossary.


Verifiable

See whether you can devise a few tests or use other verification approaches, such as inspection or demonstration, to determine whether the product properly implements each requirement.

If a requirement isn’t verifiable, determining whether it was correctly implemented becomes a matter of opinion, not objective analysis. Requirements that are incomplete, inconsistent, infeasible, or ambiguous are also unverifiable.

Characteristics of Effective Software Requirements Specifications (SRS)

It’s not enough to have excellent individual requirement statements. Sets of requirements that are collected into a software requirements specification (SRS) ought to exhibit the characteristics described in the following sections.


Complete

No requirements or necessary information should be absent. Missing requirements are hard to spot because they aren’t there! Focusing on user tasks, rather than on system functions, can help you to prevent incompleteness. I don’t know of any way to be absolutely certain that you haven’t missed a requirement. There’s a chapter of my book, Software Requirements, Third Edition, that offers some suggestions about how to see if you’ve overlooked something important.

WRITE BETTER REQUIREMENTS: Jama Connect® Features in Five: Jama Connect Advisor™


Consistent

Consistent software requirements don’t conflict with other requirements of the same type or with higher-level business, system, or user requirements. Disagreements between requirements must be resolved before development can proceed. If you spot a pair of conflicting requirements, you might not know which one (if either) is correct until you do some research. Recording the originator of each requirement lets you know who to talk to if you discover conflicts in your software requirements specification.


Modifiable

You must be able to revise the SRS when necessary and maintain a history of changes made to each requirement. This dictates that each requirement be uniquely labeled and expressed separately from other requirements so that you can refer to it unambiguously.

Each requirement should appear only once in the SRS. It’s easy to generate inconsistencies by changing only one instance of a duplicated requirement. Consider cross-referencing subsequent instances back to the original statement instead of duplicating the requirement. A table of contents and an index will make the SRS easier to modify. Storing requirements in a database or a commercial requirements management solution makes them into reusable objects.


Traceable

A traceable requirement can be linked backward to its origin and forward to the design elements and source code that implement it and to the test cases that verify the implementation as correct. Traceable requirements are uniquely labeled with persistent identifiers. They are written in a structured, fine-grained way rather than as long narrative paragraphs. Avoid lumping multiple requirements together into a single statement; the individual requirements might trace to different design and code elements.
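The idea of fine-grained, uniquely labeled requirements lends itself to a simple trace matrix. A sketch of one possible shape (all identifiers here are illustrative, not from any particular tool):

```python
# A minimal forward/backward trace matrix: each uniquely labeled
# requirement links back to its origin and forward to design, code,
# and test artifacts.
traces = {
    "REQ-101": {
        "origin": "UC-07",             # backward: the use case it came from
        "design": ["SDD-3.2"],         # forward: design elements
        "code": ["report_export.py"],  # forward: implementing modules
        "tests": ["TC-045", "TC-046"], # forward: verifying test cases
    },
    "REQ-102": {
        "origin": "BR-12",
        "design": [],
        "code": [],
        "tests": [],
    },
}

def untested_requirements(traces):
    """Requirements with no verifying test case yet."""
    return [rid for rid, links in traces.items() if not links["tests"]]

print(untested_requirements(traces))  # prints ['REQ-102']
```

Even a toy structure like this makes gaps visible: a requirement with an empty `tests` list is one you cannot yet verify.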

How Do You Know If Your Requirements and SRS Exhibit These Attributes?

The best way to tell whether your requirements have these desired attributes is to have several project stakeholders carefully review the SRS. Different stakeholders will spot different kinds of problems. For example, analysts and developers can’t accurately judge completeness or correctness, whereas users can’t assess technical feasibility.

You’ll never create an SRS in which all requirements demonstrate all these ideal attributes. However, if you keep these characteristics in mind while you write and review the requirements, you will produce better requirements documents and you will build better products.

To learn more about how to write requirements in a way that all stakeholders have a clear understanding of development needs, visit The Essential Guide to Requirements Management and Traceability.

Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles. Karl Wiegers is an independent consultant and not an employee of Jama Software. He can be reached at ProcessImpact.com.


This post on Software as a Medical Device (SaMD) development is written by Mercedes Massana, the Principal Consultant of MDM Engineering Consultants.

SaMD is software intended to be used for one or more medical purposes that performs those purposes without being embedded in a hardware medical device. These purposes can range from helping to diagnose or treat a disease, to helping in the clinical management of a disease, to providing information that supports its clinical management. SaMD differs from other medical device software in that it operates on a variety of platforms and interconnects with other devices, carrying with it increased cybersecurity risk and a commensurate increase in the use of off-the-shelf software.

On the surface it may appear that the development of SaMD software is no more difficult than the development of Medical Device embedded software, but appearances can be deceiving, and the development of SaMD products can be quite challenging. In order to deal with these challenges, there are four key best practices that should be followed for SaMD software development.  

These practices are:  

  1. Make use of standards and guidance documents  
  2. Apply the right level of rigor 
  3. Understand the difference between Verification and Validation  
  4. Implement a post-market monitoring program

Best Practice #1 – Making Use of Standards and Guidance Documents

Although standards development organizations and regulatory bodies have only started to scratch the surface in creating standards and guidance documents for SaMD, there is a sufficiently detailed body of work available to help development organizations succeed. The most relevant are the documents generated by the International Medical Device Regulators Forum (IMDRF) related to SaMD and IEC 82304, Safety and Security of Health Software Products. The IEC standard points to other well-known standards, such as IEC 62304 (software life cycle processes), IEC 62366 (usability engineering), and ISO 14971 (risk management). Additionally, several FDA guidance documents apply to all medical device software and are useful for SaMD development. These include General Principles of Software Validation; Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices; Off-The-Shelf Software Use in Medical Devices; and the FDA premarket and postmarket cybersecurity guidance, among others.

Best Practice #2 – Applying the Right Level of Rigor

Within the development of SaMD, a clear understanding of the scope and intended use of the product is necessary, and to that end, you need a method to gauge the risks associated with SaMD use. The IMDRF “Software as a Medical Device”: Possible Framework for Risk Categorization and Corresponding Considerations, IEC 62304, and the FDA Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices all provide a method for risk-based classification of SaMD. The rigor applied to the development of SaMD should be commensurate with the level of risk. IEC 62304 uses its safety classification to define the level of rigor of activities performed in the software life cycle. Ideally, the development process is sufficiently flexible to avoid over-engineering or under-engineering the SaMD in question. An adaptable development process requires organizational maturity and experience, so the team performs the right set of activities and understands the value those activities provide.

RELATED: Regulatory Shift for Machine Learning in Software as a Medical Device (SaMD)

Best Practice #3 – Understanding the Differences Between Verification and Validation  

SaMD is a medical product, a system in and of itself. Therefore, SaMD system requirements must be verified to confirm they have been implemented correctly, and the SaMD must be validated against users’ needs to ensure that the product satisfies those needs and that the right product was built for the customer. Validation typically includes human factors testing, clinical evaluation to determine the efficacy of the software for its intended use, and a demonstration that risk controls are effective. This requires more than the standard set of software testing activities, which typically consists of code reviews, unit testing, static analysis, integration testing, and requirements-based testing.

Best Practice #4 – You Are Not Done When the Software is Released  

Your SaMD software has successfully completed verification and validation, has been cleared by regulatory agencies, and is now ready to go live. Time to breathe a sigh of relief and congratulate the team for a job well done. Pop the champagne and throw a party, you have all earned it. Enjoy the festivities, but afterwards turn your attention promptly to monitoring the performance of the SaMD post launch. Rigorous and exhaustive as your testing was, you can never anticipate every possibility. Be ready to respond to identified software defects and emerging cyber threats and deploy patches or software fixes as issues are identified.  The nature of cybersecurity risks is ever-changing, and no system, regardless of how well-designed, or rigorously-tested, is free of software defects. Additionally, customer input can provide opportunities for enhancements and improvement.  However, be aware that proposed changes can impact the intended use of the device, the classification of the SaMD, and can ultimately result in additional regulatory submissions. It is paramount that prospective SaMD developers evaluate changes thoroughly and plan actions related to these changes to avoid surprises that can delay, or outright derail, SaMD implementation. 

In summary, SaMD can provide great benefits to the user; however, to successfully launch SaMD that meets users’ needs, several best practices should be applied during development: making use of standards and guidance documents, applying the appropriate level of rigor, understanding the difference between verification and validation, managing change appropriately, and implementing a good post-market monitoring program. These best practices are indispensable in ensuring a safe and effective SaMD product.

Ever wish you could jump right into a software development project without first creating a product requirements document?

OK, let’s get real here: have you ever not only wished it, but actually done it?

If so, you know the results, and they probably weren’t great. In fact, your project was likely a disaster in terms of time spent, budget wasted, and the overall quality (or lack thereof) of the finished product.

So, skipping the product requirements document really isn’t a viable approach. How, then, can you create a good product requirements document with minimal hassle?

Simply follow these eight steps.

1. Brainstorm Software Requirements

Your first step in writing a software development product requirements document doesn’t even involve writing. Well, it does, but not in the way you might think. You’ll need to call together all your project stakeholders and solicit their input, taking copious notes all the while.

Remember that in the true spirit of a brainstorm, there are no right or wrong answers. Encourage the whole team to contribute generously and focus on recording their ideas. Sure, you’ll get some real outlier ideas, and the team may even go off on tangents. But you’ll get everyone’s needs out in the open, which will ultimately make it easier for you to deliver a product that meets them.

Only after the fact will you begin to separate the wheat from the chaff — and then give structure to the wheat. Which brings us to our next step.

2. Create a Product Requirements Document Outline

Remember back in high school when your English teacher made you write — and submit — an outline for your term paper before you started the actual writing? Turns out she wasn’t crazy. If you can’t summarize your thoughts in an outline, it’ll be a lot tougher to write a coherent final product requirements document.

Taking the input you received during the brainstorming session, you’re now going to create the framework of your software development product requirements document. You don’t have to worry about sounding perfect in an outline — use just enough words to get your point across. But do make sure that each point flows logically to the next.

If you come across a point that doesn’t fit the flow of your document, don’t just assume you’ll fix it when you get to the writing phase; instead, ask yourself if it should be moved to a different part of the document, or if it should be cut entirely.

Learn how better requirements can impact your business by downloading our whitepaper, “The Bottom Line: Better Requirements Add Business Value.”

3. Make Sure that All Software Requirements Are Specific and Testable

A vague product requirements document is little better than none at all. If you give your developers lots of wiggle room by using imprecise language, there’s no telling what you’ll get back in the end.

So, once you’ve completed your outline, take a close look at what it actually specifies about the finished product. The product shouldn’t provide “a number of ways” for the user to complete a task; it should provide, say, two specific ways. The home screen shouldn’t load “instantly”; it should load within six milliseconds.

Of course, creating exact specifications for your product won’t do much good if you can’t test for these specifications. Ask your QA and testing organization how they can enhance the product development process, what kinds of testing technology they can deploy and even what pitfalls they think you may face during development.
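To make the “testable” part concrete, a measurable requirement can be turned directly into an automated check. A sketch in Python, assuming a hypothetical `load_home_screen()` function and a latency budget like the one mentioned above:

```python
import time

def load_home_screen():
    # Stand-in for the real operation under test; the production
    # version would render or fetch the actual home screen.
    return "home"

def test_home_screen_loads_within_budget(budget_ms=6.0):
    """Verify the measurable requirement: load completes within budget_ms."""
    start = time.perf_counter()
    load_home_screen()
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms <= budget_ms, (
        f"Load took {elapsed_ms:.2f} ms (budget {budget_ms} ms)"
    )

test_home_screen_loads_within_budget()
```

Notice that a vague requirement ("loads instantly") cannot be written as an assertion at all; only the specific, numeric version can.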

4. Write a Draft of Your Software Requirements

Hate writing? Don’t worry. Most of the hard work has already been done in the outlining phase. Now that you know exactly what you want your document to say, you just have to say it.

Take your lean and logical outline and turn it into sentence form. As you work, remember that simple, clear language is better than all those vocabulary words you were supposed to learn for the SAT. Your readers will appreciate you getting to the point and stating in plain English what it is that the software should do.

Sometimes, the best writing isn’t writing at all — it’s a picture. Don’t hesitate to use a diagram or graphic to replace a long, tedious paragraph. Again, your readers will appreciate being able to understand your point at a glance rather than spending valuable time reading.

5. Proofread, Edit, and Logic-Check

Sometimes good writing is simply good editing. A software development product requirements document that’s riddled with typos and grammatical errors is far less likely to be taken seriously. But even more significantly, a document that lacks a logical flow and is missing key considerations could bring development grinding to a halt.

Once you have a first draft, get vicious with what you’ve written. Go over it with a highly critical eye. Try to cut out needless sentences, and trim unnecessary clauses and phrases out of overly long sentences. One useful old trick is to read the document out loud. If you hear yourself droning on and on without really saying anything, that’s generally a sign you need to pare down your text.

Learn from the experts on how to conquer the five biggest challenges of requirements by reading our white paper.

6. Conduct Peer Reviews

In your haste to produce a product requirements document, don’t cut corners. You’ll be surprised at what errors extra sets of eyes can find, what perspectives they bring and what potential disasters they prevent.

That’s why you want the most honest and open feedback from stakeholders to strengthen your software requirements. And you also want to give them enough time so they can be thoughtful about what you’ve presented, while still being mindful of the fact you’re under a time crunch.

Hopefully you’re not emailing around versioned documents, and soliciting feedback from stakeholders that way, because that takes forever and invariably someone’s thoughts get missed in the process. And the opinion you lose might just be the one that introduces a tidal wave of risk.

Modern requirements solutions can cut your review times in half, while capturing everyone’s feedback in real time. Not only will you hit your deadline, you won’t need to sit through lengthy stakeholder meetings as they pore through each detail.

7. Rewrite Your Product Requirements Document

Take the feedback you received on your first draft and give your document a thorough reworking. If the changes were significant, consider running your product requirements document past your stakeholders a second time to get their signoff before making it official.

8. Use Your Finished Product Requirements Document as a Template for Next Time

Whew, you made it! But if this process was a success, then it should become your model for all future projects. So, be sure to save your product requirements document as a template that you can use on your next project. Rather than starting from scratch, you’ll be able to go through the different sections of the document and fill in the blanks.

There’s no failsafe plan for coming up with the perfect software development requirements document. But we think these steps will keep you on the right track — which is exactly what your finished document will do for your developers.

Download our white paper, “Writing High Quality Requirements,” to learn more about the ins and outs of creating a quality product requirements document.


Developers often want to freeze software requirements following some initial work and then proceed with development, unencumbered by those pesky changes. This is the classic waterfall paradigm. It doesn’t work well in most situations. It’s far more realistic to define a requirements baseline and then manage changes to that baseline.

What is a Requirements Baseline?

A requirements baseline is a snapshot in time that represents an agreed-upon, reviewed, and approved set of requirements that have been committed to a specific product release.

That “release” could be a complete delivered product or any interim development increment of the product. When stakeholders “sign off” on requirements, what they’re really doing is agreeing and committing to a specific requirements baseline (whether they think of it in those terms or not).

Once the project team establishes a requirements baseline, the team should follow a pragmatic change control process to make good business and technical decisions about adding newly-requested functionality and altering or deleting existing requirements.

A change control process is not about stifling change; it’s about providing decision-makers with the information that will let them make timely and appropriate decisions to modify the planned functionality. That planned functionality is the baseline.

Typically, a baseline is also given a unique name so that all the project participants can refer to it unambiguously. And good configuration management practices allow the team to accurately reconstruct any previous baseline and all its components.

Implementing a Requirements Baseline

Whereas the scope definition distinguishes what’s in from what’s out, the requirements baseline explicitly identifies only those requirement specifications that the project will implement. A baseline is not a tangible item but rather a defined list of items. One possible storage location is a software requirements specification (SRS) document.

If that SRS document contains only—and all—the requirements for a specific product release, the SRS constitutes the requirements baseline for the release. However, the SRS document might include additional, lower-priority requirements that are intended for a later release.

Conversely, a large project might need several software, hardware, and interface requirement specifications to fully define the baseline’s components. The goal is to provide the project stakeholders with a clear understanding of exactly what is intended to go into the upcoming release.

Perhaps you’re storing your requirements in a requirements management solution, rather than in documents. In that case, you can define a baseline as a specific subset of the requirements stored in the database that are planned for a given release.

RELATED: The Gap Between the Increasing Complexity of Products and Requirements Management

Storing requirements in a solution allows you to maintain an aggregated set of both currently committed requirements and planned future requirements. Some commercial requirements management tools include a baselining function to distinguish those requirements (perhaps even down to the specific version of each requirement) that belong to a certain baseline.

Alternatively, you could define a requirement attribute in the solution to hold the release number or another baseline identifier. Moving a requirement from one baseline to another is then a simple matter of changing the value for that requirement attribute.

The attribute approach will work when each requirement belongs to only a single baseline. However, you might well allocate the same requirement (or different versions of the same requirement) to several baselines if you’re concurrently developing multiple versions of your product, such as home and professional versions. Tool support is essential for such complex baseline management.
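The two approaches can be contrasted with a toy model: a single release attribute per requirement versus a set of baseline identifiers per requirement, which handles the multi-version case. All requirement and baseline names below are hypothetical:

```python
# Attribute approach: each requirement carries exactly one baseline.
release_attr = {"REQ-101": "R1.0", "REQ-102": "R2.0"}
release_attr["REQ-102"] = "R1.0"  # moving a requirement = changing the attribute

# Set approach: a requirement may belong to several concurrent baselines,
# e.g. home and professional editions built from shared requirements.
baselines = {
    "REQ-101": {"Home-1.0", "Pro-1.0"},
    "REQ-103": {"Pro-1.0"},
}

def baseline_contents(baselines, name):
    """All requirements committed to the named baseline."""
    return sorted(rid for rid, names in baselines.items() if name in names)

print(baseline_contents(baselines, "Pro-1.0"))  # prints ['REQ-101', 'REQ-103']
```

The attribute approach is simpler to administer, which is why it works well until the one-baseline-per-requirement assumption breaks down; at that point tool support for many-to-many baseline membership earns its keep.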

When following an incremental or iterative development life cycle, the baseline for each iteration will represent just a fraction of the overall system’s functionality.

A small project my team once worked on took this approach. This project worked in three-week release cycles. For each cycle, the BA specified the software requirements that were to be designed, coded, integrated, and verified during the next three weeks. Each requirements baseline was therefore quite small. In a classic agile approach, the product grew incrementally toward full functionality as the developer periodically released useful versions to the users.

RELATED: How to Perform Better Impact Analysis on Upstream and Downstream Relationships

When to Perform a Requirements Baseline

Business analysts sometimes struggle with exactly when to define a requirements baseline. It’s an important decision because establishing the baseline has the following implications:

Formal change control begins. Change requests are made against an established baseline. The baseline, therefore, provides the point of reference for each proposed change. Make sure your change control process and players are in place before you define any project baselines.

Project managers determine the staffing levels and budgets needed. There are five dimensions to a software project that must be managed: features, quality, schedule, staff, and budget. Once the features and quality goals are defined in the baseline, the project manager adjusts the other three dimensions to accomplish the project’s objectives. It can work the other way, too. If staff, budget, and/or schedule are pre-established by external forces, the baseline composition is necessarily constrained to fit inside the project box bounded by those limits.

RELATED: Getting the Most from a Requirements Management Tool

Project managers make schedule commitments. Prior to baselining, requirements are still volatile and uncertain, so estimates are similarly volatile and uncertain. Once a baseline is established, the contents of the release should be sufficiently well understood so that managers can make realistically achievable commitments. The managers still need to anticipate requirements’ growth (per their requirements management plan) by including sensible contingency buffers in their committed schedules.

Baselining requirements too early can push your change process into overdrive. In fact, receiving a storm of change requests after defining a baseline could be a clue that your requirements elicitation activities were incomplete and perhaps ineffective. On the other hand, waiting too long to establish a baseline could be a sign of analysis paralysis: perhaps the BA is trying too hard to perfect the set of requirements before handing them to the development team.

Keep in mind that requirements elicitation attempts to define a set of requirements that is good enough to let the team proceed with construction at an acceptable level of risk. Use the checklist in Table 1 to judge when you’re ready to define a requirements baseline as a solid foundation for continuing the development effort.

Table 1. Factors to Consider Before Defining a Requirements Baseline

Business Rules: Determine whether you’ve identified the business rules that affect the system and whether you’ve specified functionality to enforce or comply with those rules.
Change Control: Make sure a practical change control process is in place for dealing with requirement changes and that the change control board is assembled and chartered. Ensure that the change control tool you plan to use is in place and configured and that the tool users have been trained.
Customer Needs: Check back with your key customer representatives to see whether their needs have changed since you last spoke. Have new business rules come into play? Have existing rules been modified? Have priorities changed? Have new customers with different needs been identified?
Interfaces: See if functionality has been defined to handle all identified external interfaces to users, other software systems, hardware components, and communications services.
Model Validation: Examine any analysis models with the user representatives, perhaps by walking through test cases, to see if a system based on those models would let the users perform their necessary activities.
Prototypes: If you created any prototypes, did appropriate customers evaluate them? Did the BA use the knowledge gained to revise the SRS?
Alignment: Check to see if the defined set of requirements would likely achieve the project’s business objectives. Look for alignment between the business requirements, user requirements, and functional requirements.
Reviews: Have several downstream consumers of the requirements review them. These consumers include designers, programmers, testers, documentation and help writers, human factors specialists, and anyone else who will base their own work on the requirements.
Scope: Confirm that all requirements being considered for the baseline are within the project scope as it is currently defined. The scope might have changed since it was originally defined early in the project.
TBDs: Scan the documents for TBDs (details yet to be determined). The TBDs represent requirements development work remaining to be done.
Templates: Make sure that each section of the SRS document template has been populated. Alternatively, look for an indication that certain sections do not apply to this project. Common oversights are quality requirements, constraints, and assumptions.
User Classes: See whether you’ve received input from appropriate representatives of all the user classes you’ve identified for the product.
Verifiability: Determine how you would judge whether each requirement was properly implemented. User acceptance criteria are helpful for this.


RELATED POST: 8 Do’s and Don’ts for Writing Requirements

You’re never going to get perfect, complete requirements. The BA and project manager must judge whether the requirements are converging toward a product description that will satisfy some defined portion of customer needs and is achievable within the known project constraints.

Establishing a baseline at that point creates a mutual agreement and expectation among the project stakeholders regarding the product they're going to have when they're done. Without such an agreed-upon baseline, there's a good chance someone will be surprised by the outcome of the project.

And software surprises are rarely good news.

To learn more about how to write requirements in a way that all stakeholders have a clear understanding of development needs, download our eBook, Best Practices for Writing Requirements.


Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles. Karl Wiegers is an independent consultant and not an employee of Jama. He can be reached at ProcessImpact.com.

In 1967, computer scientist and programmer Melvin Conway coined the adage that carries his name: “Organizations that design systems are constrained to produce designs that are copies of the communication structures of these organizations.”

In other words, a system will tend to reflect the structure of the organization that designed it. Conway’s law is based on the logic that effective, functional software requires frequent communication between stakeholders. Further, Conway’s law assumes that the structure of a system will reflect the social boundaries and conditions of the organization that created it.

One example of Conway’s law in action, identified back in 1999 by UX expert Nigel Bevan, is corporate website design: Companies tend to create websites with structure and content that mirror the company’s internal concerns — rather than speaking to the needs of the user.

The widely accepted solution to Conway’s law is to create smaller teams focused around single projects so they can iterate rapidly, delivering creative solutions and responding adroitly to changing customer needs. Like anything else, though, this approach has its drawbacks, and being aware of those downsides in advance can help you mitigate their impact.

Here, we’ll unpack the benefits of leveraging smaller teams; assess whether Conway’s law holds up to scrutiny by researchers; and lay out how to balance the efficiency of small, independent teams against organizational cohesion and identity to build better products.

Smaller Teams Can Yield Better Results

Plenty of leading tech companies, including Amazon and Netflix, are structured as multiple (relatively) small teams, each responsible for a small part of the overall organizational ecosystem. These teams own the whole lifecycle of their product, system, or service, giving them much more autonomy than bigger teams with rigid codebases. Multiple smaller teams allow your organization to experiment with best practices and respond to change faster and more efficiently, while ossified, inflexible systems are slow to adapt to meet evolving business needs.

When your organization structure and your software aren’t in alignment, tensions and miscommunication are rife. If this is your situation, look for ways to break up monolithic systems by business function to allow for more fine-grained communication between stakeholders throughout the development lifecycle.

Testing Conway’s Law

In 1967, the Harvard Business Review rejected Conway’s original paper, saying he hadn’t proved his thesis. Nevertheless, software developers eventually came to accept Conway’s law because it was true to their experiences, and by 2008, a team of researchers at MIT and Harvard Business School had begun analyzing different codebases to see if they could prove the hypothesis.

For this study, researchers took multiple examples of software created to serve the same purpose (for example, word processing or financial management). Codebases created by open-source teams were compared with those crafted by more tightly coupled teams. The study found “strong evidence” to support Conway’s law, concluding that “distributed teams tend to develop more modular products.”

In other words, there’s definitely some justification for the idea that smaller teams will work more effectively and produce better results, while bigger groups may lack cohesion and exhibit dysfunction.

Organization First, Team Second

As a recent Forbes article noted, there are potential drawbacks to letting Conway’s law guide the structure of your organization. The thinking goes that “once you entrench small teams in this way, their respect and loyalty for that team often comes to outweigh their allegiance to the organization as a whole… Teams in disparate locations end up forming strong but exclusive identities as individual departments.”

So how do you balance the benefits of small, nimble groups against an organization-wide sense of solidarity, cooperation, and transparency?

Platforms that enable organization-wide collaboration can break down the barriers erected by Conway’s law without robbing small teams of their independence and agility. Josh McKenty, a vice president at Pivotal, argues that using collaborative platforms can neutralize the sense of otherness, of separateness, that can inhibit organization-wide cohesion: “Platforms can allow businesses to cultivate a sense of ‘we’re all in this together,’ in which everyone is respected, treated with mutual regard, and can clean up each other’s messes – regardless of whether they created the mess in the first place,” McKenty told a conference audience in 2017, according to Forbes.

That solidarity is crucial in complex product and systems development, where rapidly shifting requirements, evolving standards, and updated customer specs require consistent and dedicated communication within and across teams. If your teams are forming strong bonds, that’s terrific, but you don’t want those bonds to become exclusionary. If teams are turning into cliques, your organization has lost its internal cohesion.

A collaborative platform that unites disparate teams across functions and locations can help you actualize the benefits of small, focused teams without losing coherence.

To learn more about success strategies for systems engineers and developers, check out our whitepaper, “Product Development Strategies for Systems Engineers.”


Starting a new internship can be intimidating: new responsibilities, new people, new office, and, perhaps most daunting of all, a new codebase. With so many new things, it's easy to become overwhelmed and want to dive straight into the code. I've had the opportunity to hold five software internships over the past few years through the Portland Cooperative Education Program (PCEP), and as a result I've learned enough to compile a list of things to do at the start of an internship. I hope the following tips help you start off on the right foot:

1. Complete Onboarding

When it comes to onboarding, every company does it differently. If you are a part-time intern like I was most of the time, this is a good time to establish the hours you’re going to be working with your team. Make sure to give yourself time to get from school to the office and vice versa. More importantly, make sure you’re not too overloaded to do your schoolwork.

2. Introduce Yourself

I didn’t learn the importance of making a proper introduction to my coworkers right away. I’d spend hours on end banging my head against a wall (metaphorically) because I didn’t want to ask for help from someone I didn’t know. If I had introduced myself and gotten to know them earlier, I would’ve felt a lot more comfortable asking for help. Feel free to invite people for coffee or lunch and take every opportunity you can to go with someone who invites you. You’re being hired on as an intern to work with your team, not to be a lone wolf who tries to solve all their own problems.

3. Settle into Your Environment

I had an internship where I didn't get my computer until two weeks in and had to read out of a textbook. Hopefully this is a rare experience, but if you don't have what you need, don't be shy about asking for it and following up with IT.

In my experience, setting up my dev environment has been tedious. Most companies do not have a well-documented process, so this is where some assistance from a co-worker will be really helpful. Asking them for help shouldn't be a problem now, because you've already introduced yourself. Now is a good time to download any programs you use for productivity and to configure your IDE to adhere to your team's code style.

4. Get the Lay of the Land

Once you're all set up, you can actually start learning. If you don't already know about git (or whichever version control software your team uses), I highly recommend learning it. It's absolutely crucial to figure out how the VCS is used before diving into the code.

You should also get familiar with the programming language(s) that the application is using and the parts of code that your team works with. Watching a few videos and then doing an online tutorial or two helps me get comfortable with the code. Once you’ve got a better grasp on the language, you should take a look at the structure of the code for the application. Some things to look out for are:

  • The folder structure for the project, including
    • Back-end/Server-side code
    • Front-end code and styles
    • Test code
    • API code
    • Database code
  • Which libraries/modules are used
    • These can typically be found in the files that declare dependencies, such as pom.xml, package.json, or a Gemfile

5. Experiment

Look through the application, think of something you’d like to change, and then make the necessary code changes. Figure out how to write tests for your code and look at existing tests as reference. Try to make changes related to each area listed above if your team works in those areas.

Another thing that's proven extremely helpful for me is picking up a small, easy task and going through it with another engineer who has been around for a while. At Jama, we use a technique called pair programming to get the right balance between productivity and sharing domain knowledge. If you'd like to learn more about pair programming, check out my colleague's blog post about it.

6. Contribute!

After you've gotten comfortable with the code and with making changes, you should be ready to start fixing small bugs and adding new features. To get your features and fixes live, you'll need to know the process for putting your code into production. This process typically involves some form of merge request to get your code into the master branch. Merge requests are an excellent way to review your code changes and have another developer look them over to ensure they're high quality.

Above all, remember that the main point of an internship is to learn new things. Don’t worry about contributing a lot until you’re comfortable with making changes on your own. Don’t be afraid to make mistakes; your VCS is very forgiving and your team will be too.

“Gartner clients report that poor requirements are a major cause of rework and friction between business and IT. Broad market adoption of software requirements solutions is low, exacerbating this situation.” This begins the key findings in Gartner’s newest Market Guide for Software Requirements Definition and Management Solutions.
The guide provides key findings, recommendations, market definition and direction, summarily stating:

Requirements management software provides tools and services that aid the definition and management of software requirements and user experience. Application development executives should invest in requirements skills, practices and tools to improve user experience and software quality.

In choosing a requirements management tools vendor, Gartner advises companies consider, among other factors, the ability to:

  • Work in shared (rather than collaborative) environments.
  • Use a true requirements repository (featuring a robust meta-model that enables reuse and impact analysis) rather than simple storage and tagging.
  • Integrate with other ADLM tools in use (including test case management, and agile planning).
  • Support regulatory and reporting needs (for compliance with both internal and external governance processes).

Gartner, Market Guide for Software Requirements Definition and Management Solutions, Thomas E. Murphy, Magnus Revang, Laurie F. Wurster, 24 June 2016 

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

In the beginning, there is a simple code base written by a few developers. The code's deficiencies are easily kept in the heads of the developers creating it, and they most likely know what needs to be fixed and where trouble can be found. Then the code grows, more developers are hired, features are added, and the code base evolves. Suddenly, its authors no longer easily retain the mind-map of the code and its faults, and the code base becomes a mysterious source of bugs and performance problems and exhibits remarkable resistance to change. This is legacy code.

Your code base presents challenges: technical debt accumulates, new features demand that the existing code evolve, performance issues surface, and bugs are discovered. How do you meet these challenges? What proactive steps can you take to make your legacy code more adaptable, performant, testable, and bug-free? Code forensics can help you focus your attention on the areas of your code base that need it most.

Adam Tornhill introduced the idea of code forensics in his book Your Code as a Crime Scene (The Pragmatic Programmers, 2015). I highly recommend his book, and I have applied his ideas and tools to improve the Jama code base. His thesis is that criminal investigators and programmers ask many of the same open-ended questions while examining evidence. By questioning and analyzing our code base, we will not only identify offenders (bad code we need to improve), but also discover ways in which the development process can be improved, in effect eliminating repeat offenders.

For this blog post, I focus on one forensic tool that will help your team find the likely crime scenes in your legacy code. Bugs and tech debt can exist anywhere, but the true hot spots are to be found wherever you find evidence of three things:
• Complexity
• Low or no test coverage
• High rate of change


Complexity

The complexity of a class or method can be measured in several ways, but research shows that simply counting lines of code is good enough: it predicts complexity about as well as more formal metrics (Making Software: What Really Works, chapter 8, "Beyond Lines of Code: Do We Need More Complexity Metrics?" by Israel Herraiz and Ahmed E. Hassan. O'Reilly Media, Inc.).

Another quick measure of complexity: indentation. Which of these blocks of code looks more complex? The sample on the left has deep indentation, representing branching and loops. The sample on the right has several short methods with little indentation, and is less complicated to understand and to modify. When looking for complexity, look for long classes and methods and deep levels of indentation. It's simple, but it's a proven marker of complexity.
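As a rough illustration of these two markers, here is a small Python sketch (the helper names and the four-space indentation assumption are mine, not taken from any particular tool) that scores files by nonblank line count and deepest indentation level:

```python
from pathlib import Path

def complexity_markers(source: str, indent_width: int = 4) -> dict:
    """Rough complexity markers for one file: nonblank line count and deepest indentation."""
    lines = [l for l in source.splitlines() if l.strip()]  # ignore blank lines
    max_depth = 0
    for line in lines:
        expanded = line.expandtabs(indent_width)
        spaces = len(expanded) - len(expanded.lstrip(" "))
        max_depth = max(max_depth, spaces // indent_width)
    return {"lines": len(lines), "max_indent_depth": max_depth}

def rank_files(root: str, pattern: str = "*.java") -> list:
    """Rank files under root by (depth, lines), worst offenders first."""
    results = []
    for path in Path(root).rglob(pattern):
        markers = complexity_markers(path.read_text(errors="ignore"))
        results.append((str(path), markers))
    results.sort(key=lambda item: (item[1]["max_indent_depth"], item[1]["lines"]),
                 reverse=True)
    return results
```

The point is not precision; it is that a dozen lines of scripting get you a ranked list of candidates worth a closer look.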

Test Coverage

Fast-running unit tests covering every line of code you write are a requirement for the successful continuous delivery of high-quality software. It is important to have a rigorous testing discipline like Test-Driven Development; otherwise, testing might be left as a task to be done after the code is written, or not done at all.

The industry average bug rate is 15 to 50 bugs in every 1000 lines of code. Tests do not eliminate all the bugs in your code, but they do ensure you find the majority of them. Your untested legacy code has a high potential bug rate and it is in your best interest to write some tests and find these bugs before your users find them.

High rate of change

A section of code that is under frequent change is signaling something. It may have a high defect rate, requiring frequent bug fixes. It may be highly coupled to all parts of your system and have to change whenever anything in the system changes. Or it may just be the piece of your app that is the focus of new development. Whatever the source of the high rate of change, evidence that a specific section of code is getting modified a lot should draw your investigative attention.

Gathering evidence

How do you find which parts of your system are complex, untested, and undergoing lots of change? You need tools: a smart build system integrated with a code quality analyzer, and a source code repository with an API that allows for scripted analysis of code commits. At Jama, we have had great success using TeamCity coupled with SonarQube as our continuous integration server and code quality analyzer. Our source code repository is git.

Here is an example analysis of complexity and test coverage produced by SonarQube. Each bubble represents a class, and the size of the bubble represents the number of untested lines of code in that class. In other words, the larger the bubble, the more untested lines it has.


In this example, there are several giant bubbles of tech debt and defects floating high on the complexity scale.

Both TeamCity and SonarQube report test coverage per class, so with every build you not only know which code is least tested, but also the overall trend for coverage.

Using these tools, you now know where your complexity and untested code lives, but you need to know which parts of the suspect code are undergoing churn. This is where forensic analysis of your source code repository comes in.

Code repositories like git produce detailed logs, which can be analyzed by scripts. A command-line tool for doing this analysis, which accompanies Adam Tornhill's book, is available on his website. This tool performs complexity analysis as well as change analysis.

When looking at the results of your change analysis, you are searching not only for what is changing the most, but also for what code tends to change together. Classes and modules that frequently appear together in code commits are evidence of a large degree of coupling. Coupling is bad.
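For illustration, here is a minimal Python sketch of this kind of change analysis. It assumes you feed it the output of `git log --name-only --pretty=format:"--"` (so each commit appears as a `--` line followed by its file names); the function names are mine:

```python
from collections import Counter
from itertools import combinations

def parse_commits(log_text: str) -> list:
    """Split `git log --name-only --pretty=format:"--"` output into per-commit file lists."""
    commits, current = [], []
    for line in log_text.splitlines():
        if line.strip() == "--":          # commit separator
            if current:
                commits.append(current)
            current = []
        elif line.strip():                # a changed file path
            current.append(line.strip())
    if current:
        commits.append(current)
    return commits

def change_frequency(commits) -> Counter:
    """How often each file appears across commits: the churn hot spots."""
    return Counter(f for files in commits for f in files)

def co_change(commits) -> Counter:
    """How often each pair of files is committed together: a coupling signal."""
    pairs = Counter()
    for files in commits:
        for a, b in combinations(sorted(set(files)), 2):
            pairs[(a, b)] += 1
    return pairs
```

Sorting `change_frequency` gives you the churn list; sorting `co_change` surfaces the suspiciously inseparable pairs.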

What other forensic tools does your code repository offer? You can analyze commit messages and produce word clouds to see which terms dominate change descriptions. You would prefer to see terms like “added,” “refactored,” “cleaned,” and “removed” over red-flag terms like “fixed,” “bug,” and “broken.” And of course, commit messages dominated by swearing indicate real problems.
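You don't need a word cloud to get the signal. A quick Python tally over commit subjects (say, from `git log --pretty=%s`) works too; note that the term lists below are illustrative choices of mine, not an established taxonomy:

```python
from collections import Counter
import re

HEALTHY = {"added", "refactored", "cleaned", "removed"}
RED_FLAGS = {"fixed", "bug", "broken"}

def message_term_counts(messages) -> dict:
    """Tally healthy vs. red-flag terms across commit messages."""
    words = Counter()
    for msg in messages:
        words.update(re.findall(r"[a-z]+", msg.lower()))
    healthy = sum(words[w] for w in HEALTHY)
    red = sum(words[w] for w in RED_FLAGS)
    return {"healthy": healthy, "red_flags": red, "top_terms": words.most_common(5)}
```

A red-flag count that dwarfs the healthy count over a release is the same warning the word cloud gives you, just in one number.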

Another useful data point is which parts of your codebase are dominated by which developers. If you have classes or modules that are largely written and maintained by one or two developers, you have a potential bus-factor problem and need to spread knowledge of this code to the wider team.
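This, too, falls out of the log. A small sketch: given one author name per commit touching a file (e.g. from `git log --pretty=%an -- path/to/File.java`), compute the top author's share of commits; the 80% risk threshold here is an arbitrary assumption of mine:

```python
from collections import Counter

def bus_factor_risk(author_lines, threshold: float = 0.8) -> dict:
    """Report the top author's share of commits for a file and flag knowledge silos."""
    counts = Counter(a.strip() for a in author_lines if a.strip())
    total = sum(counts.values())
    if total == 0:
        return {"top_author": None, "share": 0.0, "at_risk": False}
    top_author, top_count = counts.most_common(1)[0]
    share = top_count / total
    return {"top_author": top_author, "share": share, "at_risk": share >= threshold}
```

Files flagged `at_risk` are good candidates for pairing sessions or review rotation.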

Pulling it all together

After the above analysis is complete, you have an ordered list of the most untested and most complex code undergoing the highest rate of change. The offenders that appear at the top of the list are the prime candidates for refactoring.

All software systems evolve and change over time, and despite our best efforts, tech debt sneaks in, bugs are created, and complexity increases. Using forensic tools to identify your complex, untested, and frequently changing components lets you focus on the areas at highest risk of failure and, as a bonus, can help you study the way your teams work together.


A few weeks ago, my friend Grace reached out and asked if I could speak about my experience with the DevOps movement from an engineering management perspective. Grace is one of the organizers of the Portland DevOps Groundup meetup group, whose goal is to educate others and discuss topics having to do with DevOps. I agreed to speak as well as host the event at Jama (one of the very cool things we do as an organization is host such community events).

Grace asking me to speak was timely, as I have been doing a lot of thinking lately about the culture of DevOps and how it is applied here at Jama.

The term DevOps was not always widely known; now it is fairly common. With that wide adoption comes misuse and misunderstanding: people use the term for all sorts of things, and it has become a buzzword for catchall job titles. To me, DevOps is all about collaboration, communication, and integration. I titled my talk “DevOps is dead, long live DevOps” on purpose to provoke a reaction (which I definitely got from some of the recruiters in attendance). My point in picking that title was that the term has become diluted and misused, to the point of becoming irrelevant.


I focused my talk on my personal history in software development, coming from an operations background. I'm no expert; this was just me sharing my experiences as a manager of technical people and how I've tried to build highly collaborative teams that enjoy working together and solving tough problems. I really enjoyed being able to share three separate work experiences with a large group of people and discuss how I learned from each job and applied those lessons in an effort to improve the process each time. I spoke at length about my current experience here at Jama and how we are working as a group to better integrate the practices and principles of DevOps into all of engineering, instead of it being a single team called “DevOps” that is tasked with the work. This cultural shift is starting to happen, and that is a good thing for all of Jama engineering.


I spoke for the better part of an hour and received some really thoughtful questions at the end about how people can work to effect change in culture and gain business adoption of these practices. DevOps is in some ways still mysterious to people, or they think of it only in terms of tools and technologies; my hope is that my talk made it less of a mystery and got more people thinking in terms of collaboration, communication, and integration across the company culture.

Jama Debating Scalability

Like many maturing companies, Jama found itself in a situation where its monolithic software architecture prohibited scaling. Scalability here is a catch-all for many quality attributes, such as maintainability across a growing team, performance, and true horizontal scalability. The solution was simple, on paper: our software had to be split up. We are talking late 2013. Micro-services are taking off, and a team starts carving functions out of the monolith into services that can be deployed separately in our emerging SaaS environment. We are a SaaS company, after all. Or we are a SaaS company first. Or, well, we are a SaaS company which deeply cares about those on-premises customers that don't move to the cloud… yet… for a variety of reasons, whether we like it or not.

Planning our Strategy

Will we keep delivering the full monolith to on-premises customers, including the parts we deploy separately in SaaS? That would be a pretty crappy economic proposition for us, as we'd essentially be building, then testing, everything twice. On-premises customers would not see any of the scaling benefits of the services architecture, nor could the engineering team really depart from the monolithic approach that was slowing them down. (On a side note, as a transitional solution we used this approach for a little while, and be assured that there is little to love there.)

Then, will we deliver a monolith to on-premises customers that lacks a growing number of features, keeping those as a value-add in SaaS, perhaps? That works… up to a point. We currently have services like SAML, OAuth, and centralized monitoring in our SaaS environment that aren't available to our on-premises customers. They let us get away with that. But there are only so many services you can carve out before hitting something that's mission-critical to on-premises customers.


2014: Scribbling our options on a whiteboard

The only solution that makes sense: bring the services to the on-premises customers. (For completeness' sake: there was this one time someone proposed not supporting on-premises installations anymore. They were voted off the island.)

So, services are coming to an on-premises near you.

Implications of Services

Huge. The implications are huge, in areas such as the following:

  • Strategy. Since 2010 we have been focusing on our SaaS model and, in turn, driving our customers to our hosted environment. The reality is that our customers are slow to adopt, which requires us to refocus on the on-premises deployment. That is okay, and there's no reason we can't do both, but it's sobering to pull yourself back after so much focus went into “being more SaaS” (which came with high hopes for a gradual transition of (all) customers to the cloud).
  • Architecture. Our SaaS environment has a lot of bells and whistles that make no sense for on-premises customers, and it relies on a plethora of other SaaS providers to do its work; this needs to be scaled down, in a way that keeps the components usable both for on-premises customers and in the SaaS environment.
  • Usability. We come from WAR deployments, where a single WAR archive is distributed and loaded into a standardized application server (specifically Apache Tomcat), which is all relatively easy. We are now moving to a model with multiple distribution artifacts, which then also need to be orchestrated to run together as one Jama application.
  • Culture. There was a lot of established thinking to overcome, in fairly equal parts by ourselves and by our customers. I mean, change: there are plenty of books on change, and on how it's typically resisted.

Within Engineering (which is what I'll continue to focus on), I've been involved in ongoing discussions about a deployment model for services, going back to 2014. One of the early ideas was to bake a scaled-down copy of our SaaS environment into a single virtual machine (and offer some flavors with multiple virtual machines to support scalability). Too many customers outright reject the notion of loading into their environment a virtual machine that is not (fully) under their control. A virtual machine would be unlikely to follow all the IT requirements of our customers, and would lead to a lot of anxiety around security and the ability to administer this alien. So, customers end up running services on their own machines.

That quickly leads to another constraint. The administrators at our customers traditionally needed one skill: being able to manage Apache Tomcat running Jama's web archive (WAR) file. While we have an awesome team of broadly skilled, DevOps-minded engineers working on our SaaS environment, we can't expect such ultra-versatility from every lone Jama administrator in the world. We needed a unified way to deploy our different services. This is an interesting discussion to have at a time when your Engineering team still mostly consists of Java developers, and when DevOps was still an emerging capability (compared to the mindset of marrying development and operations that is now more and more being adopted by Jama Engineering). We had invested in a “services framework,” which was entirely in Java, using the (may I say: amazing) Spring Boot, and “service discovery” was dealt with using configuration files inside the Java artifacts (“how does service A know how and where to call service B?”). It was a culture shift to collectively embrace the notion that a service is not a template for a Java project, but a common language for tying pieces of running code together.

Docker and Replicated

In terms of deployment of services, we discussed contracts for how to start and stop a service (“maybe every service needs a folder with predefined start/stop scripts”). We discussed standardized folder structures for log files and configuration. Were we slowly designing ourselves into Debian packages (dpkg, apt) or RPM (yum) packages, the default distribution mechanisms for the respective Linux distributions? What could Maven do for us here? (Not a whole lot, as it turns out.) And how about this new thing…

This new thing… Docker. It was very new (remember, this was 2014; Docker's initial release was in 2013, and the company changed its name to Docker Inc. only as recently as October 2014). We dismissed it, and kept talking in circles until the subject went away for a while.

Early 2015, coincidentally around the time we created the position of DevOps Manager, we got a bunch of smart people in a room to casually talk about perhaps using Docker for this. There was nothing casual about the meeting, and it turned out we weren't prepared to answer the questions people would have. We were mostly talking from the perspective of the Java developer, with their Java build, trying to produce Docker images at the tail end of the Java build, ready for deployment. We totally overlooked the configuration management involved outside of our world of Java, and the tremendous amount of work there that we weren't seeing. And in retrospect, we must have sounded like the developer stereotype of wanting to play with the cool new technology. We were quickly cornered by what I will now lovingly refer to as an angry mob: “there is not a single problem [in our SaaS environment] that Docker solves for us.” I'm way cool about it now, but that turned out to be by far my worst week at Jama. Things got better. We were able to create some excitement by using Docker to improve the way we were doing continuous automated system testing. We needed some help from the skeptics, which gave them a chance to start adjusting their views. We recruited more DevOps folk, with Docker in mind while hiring. And we did successful deployments with Docker for some of our services. We were adopting this new technology. But more importantly, we were slowly buying into the different paradigm that Docker offers, compared to our traditional deployment tools (WAR files, of course, and we used a lot of Chef).


We were also telling our Product Management organization about what we were learning: how Docker was going to turn deployments into liquid gold, and how containers are different from virtual machines (they are). They started testing these ideas with customers. And toward the second half of 2015, the lights turned green. Or… well… some yellowish, greenish kind of color. We were scared of the big unknowns: would we be able to harden it for security, was it secure, would customers believe it was secure? But also: would it perform as well as we expected? How hard would it be to install?

One of the prominent questions was still the constraint I mentioned earlier: how much complexity are we willing to impose on our customers? Even today, Docker is fairly new, and while there is a growing body of testimony around production deployments, not all of our customers are necessarily on that forefront. First of all, Docker means Linux, whereas we had traditionally also supported Windows-based deployments. (I believe we even supported OS X Server at some point in time.)

Secondly, the scare was that customers would end up managing a complex constellation of Docker containers. We had been using Docker Compose a bit for development purposes by now, which let us at least define the configuration of Docker containers (which I like to refer to as orchestration), and we'd have to write some scripts (a lot?) to do the rest. Around that time, we were introduced to Replicated, which we ran some experiments with, along with a cost-benefit analysis. It let us orchestrate Docker containers and manage the configuration of the deployment, all through a web-based user interface installed on-premises. Not only would it offer a much more user-friendly solution, it would take care of a lot of the orchestration pain, and we decided to go for it.
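For readers unfamiliar with Docker Compose, a compose file declares the containers and their wiring in one place. A minimal hypothetical sketch (the service names, images, and paths are illustrative, not Jama's actual topology):

```yaml
# Hypothetical sketch: service names, images, and paths are illustrative,
# not Jama's actual topology.
version: "2"
services:
  webapp:
    image: example/jama-webapp:latest
    ports:
      - "8080:8080"
    depends_on:
      - search
  search:
    image: example/jama-search:latest
    volumes:
      - ./data/search:/var/lib/search  # persisted state lives on the host
```

This declarative style is what made the scripting question tractable: the orchestration lives in one file instead of a pile of start/stop scripts.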

Past the Prototype

The experiments were over, and I formally rolled onto the actual project on November 11th, 2015. We were full steam ahead with Docker and Replicated. Part of the work was to turn our proof of concept into mature production code. This turned out not to be such a big deal: we know how to write code, and Docker is really straightforward. The other part of the work was to deal with state. Docker containers are typically stateless, which means that any kind of persisted state has to live outside the container. Databases, data files, even log files need to be stored outside of it. For example, you can mount a folder location of the host system into a Docker container, so that the container can read from and write to that location.
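To make that stateless-container point concrete, here is a sketch of mounting a host folder into a container with `docker run`. The host path and password are placeholders, not our actual layout:

```shell
# Persist MySQL data on the host so the container itself stays stateless.
# /opt/jama/data/mysql is a hypothetical host path; adjust to your environment.
docker run -d \
  --name db \
  -v /opt/jama/data/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=changeme \
  mysql:5.6
```

With this layout the container can be destroyed and recreated at will, for instance during an upgrade, while the database files on the host survive untouched.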

Then the realization snuck up on us that customers had been making a lot of customizations to Jama. We had anticipated a few, but it turned out that customers had hacked our application in all sorts of ways. Sometimes as instructed by us, sometimes entirely on their own. It was easy enough to look inside the (exploded) WAR file and make a few changes. They had changed configuration files, JavaScript code, even added completely new Java files. With Docker that would no longer be possible, so we worked through the customizations, coming up with alternative solutions for all the ones we knew of. Some configuration files can again be changed, by storing them outside of the container; some options have been lifted into the user interface that a root user in Jama has for configuring the system, with the values stored in the database; and sometimes we decided that a known customization was undesired, and we chose not to support it. By doing all that, we are resetting and redefining our notion of what is "supported," and hopefully have a better grasp, going forward, on the customizations we do support. And with it, we ended up building much of the configuration management that we had initially underappreciated.

Ready for the Next Chapter

Meanwhile, we are now past an Alpha program and a Beta program, and as I write this we are code complete and in excited anticipation of the General Availability release of Jama 8.0. We have made great strides in Docker-based configuration management and learned a lot, which is now making its way back into our SaaS environment, while the SaaS environment has seen a lot of work on horizontal scalability that will be rolled into our on-premises offering in subsequent releases: the pendulum constantly swinging. While I'm probably more of a back-end developer, and while "installers" probably aren't the sexiest thing to be working on, it was great to work on this project: we are incorporating an amazing technology (Docker), and I'm sure that our solution will be turning some heads!