How can you distinguish excellent software requirements and software requirements specifications (SRS) from those that could cause problems? In this post, we’ll start by discussing several characteristics that individual requirements should exhibit. Then we’ll look at the desirable traits a successful SRS should have as a whole.
Characteristics of Effective Requirements
In an ideal world, every individual user, business, and functional requirement would exhibit the qualities described in the following sections.
Complete
Each requirement must fully describe the functionality to be delivered. It must contain all the information necessary for the developer to design and implement that bit of functionality. If you know you’re lacking certain information, use TBD (to be determined) as a standard flag to highlight these gaps. Resolve all TBDs in each portion of the requirements before you proceed with construction of that portion.
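For instance (an illustrative requirement, not one drawn from a real specification), a flagged gap might read: “The system shall export the audit report in [TBD: confirm the list of supported file formats with the compliance team].” The TBD keeps the missing decision visible so it can be tracked and resolved before construction of that portion begins.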
Nothing says you need to make the entire requirements set complete before construction begins. In most cases, you’ll never achieve that goal. However, projects using iterative or incremental development life cycles should have a complete set of requirements for each iteration.
Using minimal requirements specifications runs the risk of having different people fill in the blanks in different ways, based on different assumptions and decisions. Keeping requirements details verbal instead of written also makes it hard for business analysts, developers, and testers to share a common understanding of the requirements set.
Correct
Each requirement must accurately describe the functionality to be built.
The reference for correctness is the source of the requirement, such as an actual user or a high-level system requirement. A software requirement that conflicts with its parent system requirement is not correct.
Only user representatives can determine the correctness of user requirements (such as use cases), which is why users or their close surrogates must review the requirements.
Feasible
It must be possible to implement each requirement within the known capabilities and limitations of the system and its operating environment. To avoid specifying unattainable requirements, have a developer work with marketing or the BA throughout the elicitation process.
The developer can provide a reality check on what can and cannot be done technically and what can be done only at excessive cost. Incremental development approaches and proof-of-concept prototypes are ways to evaluate requirement feasibility.
Necessary
Each requirement should document a capability that the stakeholders really need or one that’s required for conformance to an external system requirement or a standard.
Every requirement should originate from a source that has the authority to specify requirements. Trace each requirement back to specific voice-of-the-customer input, such as a use case, a business rule, or some other origin of value.
Prioritized
Assign an implementation priority to each functional requirement, feature, use case, or user story to indicate how essential it is to a particular product release.
If all the requirements are considered equally important, it’s hard for the project manager to respond to budget cuts, schedule overruns, personnel losses, or new requirements added during development. Prioritization is an essential key to successful iterative development.
Unambiguous
All readers of a requirement statement should arrive at a single, consistent interpretation of it, but natural language is highly prone to ambiguity. Write requirements in simple, concise, straightforward language appropriate to the user domain. “Comprehensible” is a requirement quality goal related to “unambiguous”: readers must be able to understand what each requirement is saying. Define all specialized terms and those that might confuse readers in a glossary.
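As a quick illustration (the wording and numbers are invented for the example), “The system should respond to searches quickly” can be read a dozen ways, whereas “The system shall return search results within 3 seconds for a catalog of up to 100,000 items” gives every reader the same interpretation and the same test to apply.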
Verifiable
See whether you can devise a few tests or use other verification approaches, such as inspection or demonstration, to determine whether the product properly implements each requirement.
If a requirement isn’t verifiable, determining whether it was correctly implemented becomes a matter of opinion, not objective analysis. Requirements that are incomplete, inconsistent, infeasible, or ambiguous are also unverifiable.
Characteristics of Effective Software Requirements Specifications (SRS)
It’s not enough to have excellent individual requirement statements. Sets of requirements that are collected into a software requirements specification (SRS) ought to exhibit the characteristics described in the following sections.
Complete
No requirements or necessary information should be absent. Missing requirements are hard to spot because they aren’t there! Focusing on user tasks, rather than on system functions, can help you to prevent incompleteness. I don’t know of any way to be absolutely certain that you haven’t missed a requirement. There’s a chapter of my book “Software Requirements, Third Edition” that offers some suggestions about how to see if you’ve overlooked something important.
Consistent
Requirements in an SRS must not conflict with other requirements of the same type or with higher-level business, system, or user requirements. Disagreements between requirements must be resolved before development can proceed. If you spot a pair of conflicting requirements, you might not know which one (if either) is correct until you do some research. Recording the originator of each requirement lets you know who to talk to if you discover conflicts in your software requirements specification.
Modifiable
You must be able to revise the SRS when necessary and maintain a history of changes made to each requirement. This dictates that each requirement be uniquely labeled and expressed separately from other requirements so that you can refer to it unambiguously.
Each requirement should appear only once in the SRS. It’s easy to generate inconsistencies by changing only one instance of a duplicated requirement. Consider cross-referencing subsequent instances back to the original statement instead of duplicating the requirement. A table of contents and an index will make the SRS easier to modify. Storing requirements in a database or a commercial requirements management solution makes them into reusable objects.
Traceable
A traceable requirement can be linked backwards to its origin and forward to the design elements and source code that implement it and to the test cases that verify the implementation as correct. Traceable requirements are uniquely labeled with persistent identifiers. They are written in a structured, fine-grained way as opposed to crafting long narrative paragraphs. Avoid lumping multiple requirements together into a single statement; the individual requirements might trace to different design and code elements.
How Do You Know If Your Requirements and SRS Exhibit These Attributes?
The best way to tell whether your requirements have these desired attributes is to have several project stakeholders carefully review the SRS. Different stakeholders will spot different kinds of problems. For example, analysts and developers can’t accurately judge completeness or correctness, whereas users can’t assess technical feasibility.
You’ll never create an SRS in which all requirements demonstrate all these ideal attributes. However, if you keep these characteristics in mind while you write and review the requirements, you will produce better requirements documents and you will build better products.
Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles. Karl Wiegers is an independent consultant and not an employee of Jama Software. He can be reached at ProcessImpact.com.
This post on Software as a Medical Device (SaMD) development is written by Mercedes Massana, the Principal Consultant of MDM Engineering Consultants.
SaMD is software intended to be used for one or more medical purposes without being embedded in a hardware medical device. These purposes range from helping to diagnose or treat disease, to aiding in the clinical management of a disease, to providing information that supports the clinical management of a disease. SaMD differs from other medical device software in that it operates on a variety of platforms and interconnects with other devices, which brings increased cybersecurity risk and a commensurate increase in the use of off-the-shelf software.
On the surface it may appear that the development of SaMD software is no more difficult than the development of Medical Device embedded software, but appearances can be deceiving, and the development of SaMD products can be quite challenging. In order to deal with these challenges, there are four key best practices that should be followed for SaMD software development.
These practices are:
Make use of standards and guidance documents
Apply the right level of rigor
Understand the difference between Verification and Validation
Implement a post-market monitoring program
Best Practice #1 – Making Use of Standards and Guidance Documents
Although standards development organizations and regulatory bodies have only started to scratch the surface in creating standards and guidance documents to help SaMD development organizations, there is a sufficiently detailed body of work available to help development organizations succeed. The most relevant of these are the documents generated by the International Medical Device Regulators Forum (IMDRF) related to SaMD and IEC 82304 on the safety and security of health software products. The IEC standard points to other well-known standards, such as IEC 62304 (software life cycle processes), IEC 62366 (usability engineering), and ISO 14971 (risk management). Additionally, several FDA guidance documents apply to all medical device software and are useful for the development of SaMD; these include General Principles of Software Validation, Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices, Off-The-Shelf Software Use in Medical Devices, and the FDA premarket and postmarket cybersecurity guidance, as well as other guidance documents.
Best Practice #2 – Applying the Right Level of Rigor
Within the development of SaMD, a clear understanding of the scope and intended use of the product is necessary, and to that end, it is necessary to have a method to gauge the risks associated with SaMD use. The IMDRF “Software as a Medical Device”: Possible Framework for Risk Categorization and Corresponding Considerations, IEC 62304, and the FDA Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices all provide a method for risk-based classification of SaMD. The rigor applied to the development of SaMD should be commensurate with the level of risk. IEC 62304 uses the Safety Classification to define the level of rigor of activities to be performed in the software lifecycle based on the level of risk. Ideally, the development process is sufficiently flexible to avoid over-engineering or under-engineering the SaMD in question. An adaptable development process requires organizational maturity and experience in order to perform the right set of activities and understand the value provided by those activities.
Best Practice #3 – Understanding the Differences Between Verification and Validation
SaMD is a medical product, a system in and of itself. Therefore, SaMD system requirements must be verified to confirm they have been implemented correctly, and the SaMD must be validated against user needs to ensure that the product satisfies those needs and that the right product was built for the customer. Validation typically includes human factors testing, clinical evaluation to determine the efficacy of the software for its intended use, and a demonstration that risk controls are effective. This requires more than the standard set of software testing activities, which typically consists of code reviews, unit testing, static analysis, integration testing, and requirements-based testing.
Best Practice #4 – You Are Not Done When the Software is Released
Your SaMD software has successfully completed verification and validation, has been cleared by regulatory agencies, and is now ready to go live. Time to breathe a sigh of relief and congratulate the team for a job well done. Pop the champagne and throw a party, you have all earned it. Enjoy the festivities, but afterwards turn your attention promptly to monitoring the performance of the SaMD post launch. Rigorous and exhaustive as your testing was, you can never anticipate every possibility. Be ready to respond to identified software defects and emerging cyber threats and deploy patches or software fixes as issues are identified. The nature of cybersecurity risks is ever-changing, and no system, regardless of how well-designed, or rigorously-tested, is free of software defects. Additionally, customer input can provide opportunities for enhancements and improvement. However, be aware that proposed changes can impact the intended use of the device, the classification of the SaMD, and can ultimately result in additional regulatory submissions. It is paramount that prospective SaMD developers evaluate changes thoroughly and plan actions related to these changes to avoid surprises that can delay, or outright derail, SaMD implementation.
In summary, SaMD can provide great benefits to the user; however, to successfully launch SaMD that meets users’ needs, several best practices should be followed during development: making use of standards and guidance documents, applying the appropriate level of rigor, understanding the difference between verification and validation, managing change appropriately, and implementing a good post-market monitoring program. These best practices are indispensable in ensuring a safe and effective SaMD product.
Ever wish you could jump right into a software development project without first creating a product requirements document?
OK, let’s get real here: have you ever not only wished it, but actually done it?
If so, you know the results, and they probably weren’t great. In fact, your project was likely a disaster in terms of time spent, budget wasted, and the overall quality (or lack thereof) of the finished product.
So, skipping the product requirements document really isn’t a viable approach. How, then, can you create a good product requirements document with minimal hassle?
Simply follow these eight steps.
1. Brainstorm Software Requirements
Your first step in writing a software development product requirements document doesn’t even involve writing. Well, it does, but not in the way you might think. You’ll need to call together all your project stakeholders and solicit their input, taking copious notes all the while.
Remember that in the true spirit of a brainstorm, there are no right or wrong answers. Encourage the whole team to contribute generously and focus on recording their ideas. Sure, you’ll get some real outlier ideas, and the team may even go off on tangents. But you’ll get everyone’s needs out in the open, which will ultimately make it easier for you to deliver a product that meets them.
Only after the fact will you begin to separate the wheat from the chaff — and then give structure to the wheat. Which brings us to our next step.
2. Create a Product Requirements Document Outline
Remember back in high school when your English teacher made you write — and submit — an outline for your term paper before you started the actual writing? Turns out she wasn’t crazy. If you can’t summarize your thoughts in an outline, it’ll be a lot tougher to write a coherent final product requirements document.
Taking the input you received during the brainstorming session, you’re now going to create the framework of your software development product requirements document. You don’t have to worry about sounding perfect in an outline — use just enough words to get your point across. But do make sure that each point flows logically to the next.
If you come across a point that doesn’t fit the flow of your document, don’t just assume you’ll fix it when you get to the writing phase; instead, ask yourself if it should be moved to a different part of the document, or if it should be cut entirely.
3. Make Sure that All Software Requirements Are Specific and Testable
A vague product requirements document is little better than none at all. If you give your developers lots of wiggle room by using imprecise language, there’s no telling what you’ll get back in the end.
So, once you’ve completed your outline, take a close look at what it actually specifies about the finished product. The product shouldn’t provide “a number of ways” for the user to complete a task; it should provide, say, two specific ways. The home screen shouldn’t load up “instantly”; it should load within six milliseconds.
Of course, creating exact specifications for your product won’t do much good if you can’t test for these specifications. Ask your QA and testing organization how they can enhance the product development process, what kinds of testing technology they can deploy and even what pitfalls they think you may face during development.
4. Write a Draft of Your Software Requirements
Hate writing? Don’t worry. Most of the hard work has already been done in the outlining phase. Now that you know exactly what you want your document to say, you just have to say it.
Take your lean and logical outline and turn it into sentence form. As you work, remember that simple, clear language is better than all those vocabulary words you were supposed to learn for the SAT. Your readers will appreciate you getting to the point and stating in plain English what it is that the software should do.
Sometimes, the best writing isn’t writing at all — it’s a picture. Don’t hesitate to use a diagram or graphic to replace a long, tedious paragraph. Again, your readers will appreciate being able to understand your point at a glance rather than spending valuable time reading.
5. Proofread, Edit, and Logic-Check
Sometimes good writing is simply good editing. A software development product requirements document that’s riddled with typos and grammatical errors is far less likely to be taken seriously. But even more significantly, a document that lacks a logical flow and is missing key considerations could bring development grinding to a halt.
Once you have a first draft, get vicious with what you’ve written. Go over it with a highly critical eye. Try to cut out needless sentences, and trim unnecessary clauses and phrases out of overly long sentences. One useful old trick is to read the document out loud. If you hear yourself droning on and on without really saying anything, that’s generally a sign you need to pare down your text.
6. Gather Feedback from Stakeholders
In your haste to produce a product requirements document, don’t cut corners. You’ll be surprised at what errors extra sets of eyes can find, what perspectives they bring and what potential disasters they prevent.
That’s why you want the most honest and open feedback from stakeholders to strengthen your software requirements. And you also want to give them enough time so they can be thoughtful about what you’ve presented, while still being mindful of the fact you’re under a time crunch.
Hopefully you’re not emailing around versioned documents, and soliciting feedback from stakeholders that way, because that takes forever and invariably someone’s thoughts get missed in the process. And the opinion you lose might just be the one that introduces a tidal wave of risk.
Modern requirements solutions can cut your review times in half, while capturing everyone’s feedback in real time. Not only will you hit your deadline, you won’t need to sit through lengthy stakeholder meetings as they pore through each detail.
7. Rewrite Your Product Requirements Document
Take the feedback you received on your first draft and give your document a thorough reworking. If the changes were significant, consider running your product requirements document past your stakeholders a second time to get their signoff before making it official.
8. Use Your Finished Product Requirements Document as a Template for Next Time
Whew, you made it! But if this process was a success, then it should become your model for all future projects. So, be sure to save your product requirements document as a template that you can use on your next project. Rather than starting from scratch, you’ll be able to go through the different sections of the document and fill in the blanks.
There’s no failsafe plan for coming up with the perfect software development requirements document. But we think these steps will keep you on the right track — which is exactly what your finished document will do for your developers.
Download our white paper, “Writing High Quality Requirements,” to learn more about the ins and outs of creating a quality product requirements document.
Developers often want to freeze software requirements following some initial work and then proceed with development, unencumbered by those pesky changes. This is the classic waterfall paradigm. It doesn’t work well in most situations. It’s far more realistic to define a requirements baseline and then manage changes to that baseline.
What is a Requirements Baseline?
A requirements baseline is a snapshot in time that represents an agreed-upon, reviewed, and approved set of requirements that have been committed to a specific product release.
That “release” could be a complete delivered product or any interim development increment of the product. When stakeholders “sign off” on requirements, what they’re really doing is agreeing and committing to a specific requirements baseline (whether they think of it in those terms or not).
Once the project team establishes a requirements baseline, the team should follow a pragmatic change control process to make good business and technical decisions about adding newly-requested functionality and altering or deleting existing requirements.
A change control process is not about stifling change; it’s about providing decision-makers with the information that will let them make timely and appropriate decisions to modify the planned functionality. That planned functionality is the baseline.
Typically, a baseline is also given a unique name so that all the project participants can refer to it unambiguously. And good configuration management practices allow the team to reconstruct accurately any previous baseline and all its components.
Implementing a Requirements Baseline
Whereas the scope definition distinguishes what’s in from what’s out, the requirements baseline explicitly identifies only those requirement specifications that the project will implement. A baseline is not a tangible item but rather a defined list of items. One possible storage location is a software requirements specification (SRS) document.
If that SRS document contains only—and all—the requirements for a specific product release, the SRS constitutes the requirements baseline for the release. However, the SRS document might include additional, lower-priority requirements that are intended for a later release.
Conversely, a large project might need several software, hardware, and interface requirement specifications to fully define the baseline’s components. The goal is to provide the project stakeholders with a clear understanding of exactly what is intended to go into the upcoming release.
Perhaps you’re storing your requirements in a requirements management solution, rather than in documents. In that case, you can define a baseline as a specific subset of the requirements stored in the database that are planned for a given release.
Storing requirements in a solution allows you to maintain an aggregated set of both currently committed requirements and planned future requirements. Some commercial requirements management tools include a baselining function to distinguish those requirements (perhaps even down to the specific version of each requirement) that belong to a certain baseline.
Alternatively, you could define a requirement attribute in the solution to hold the release number or another baseline identifier. Moving a requirement from one baseline to another is then a simple matter of changing the value for that requirement attribute.
The attribute approach will work when each requirement belongs to only a single baseline. However, you might well allocate the same requirement (or different versions of the same requirement) to several baselines if you’re concurrently developing multiple versions of your product, such as home and professional versions. Tool support is essential for such complex baseline management.
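As a minimal sketch of the attribute approach (the field names and release labels below are invented for illustration and don’t come from any particular tool), a baseline can be treated as nothing more than a filter over requirement records:
var requirements = [
    { id: 'REQ-101', text: 'The system shall ...', baseline: 'Release 1.0' },
    { id: 'REQ-102', text: 'The system shall ...', baseline: 'Release 1.1' },
    { id: 'REQ-103', text: 'The system shall ...', baseline: 'Release 1.0' }
];
// The baseline for a release is simply the subset of requirements tagged with it;
// moving a requirement between baselines means changing its attribute value.
function baselineFor(release) {
    return requirements.filter(function(req) { return req.baseline === release; });
}
console.log(baselineFor('Release 1.0').map(function(req) { return req.id; })); // ['REQ-101', 'REQ-103']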
When following an incremental or iterative development life cycle, the baseline for each iteration will represent just a fraction of the overall system’s functionality.
A small project my team once worked on took this approach. This project worked in three-week release cycles. For each cycle, the BA specified the software requirements that were to be designed, coded, integrated, and verified during the next three weeks. Each requirements baseline was therefore quite small. In a classic agile approach, the product grew incrementally toward full functionality as the developer periodically released useful versions to the users.
Business analysts sometimes struggle with exactly when to define a requirements baseline. It’s an important decision because establishing the baseline has the following implications:
Formal change control begins. Change requests are made against an established baseline. The baseline, therefore, provides the point of reference for each proposed change. Make sure your change control process and players are in place before you define any project baselines.
Project managers determine the staffing levels and budgets needed. There are five dimensions to a software project that must be managed: features, quality, schedule, staff, and budget. Once the features and quality goals are defined in the baseline, the project manager adjusts the other three dimensions to accomplish the project’s objectives. It can work the other way, too. If staff, budget, and/or schedule are pre-established by external forces, the baseline composition is necessarily constrained to fit inside the project box bounded by those limits.
Project managers make schedule commitments. Prior to baselining, requirements are still volatile and uncertain, so estimates are similarly volatile and uncertain. Once a baseline is established, the contents of the release should be sufficiently well understood so that managers can make realistically achievable commitments. The managers still need to anticipate requirements’ growth (per their requirements management plan) by including sensible contingency buffers in their committed schedules.
Baselining requirements too early can push your change process into overdrive. In fact, receiving a storm of change requests after defining a baseline could be a clue that your requirements elicitation activities were incomplete and perhaps ineffective. On the other hand, waiting too long to establish a baseline could be a sign of analysis paralysis: perhaps the BA is trying too hard to perfect the set of requirements before handing them to the development team.
Keep in mind that requirements elicitation attempts to define a set of requirements that is good enough to let the team proceed with construction at an acceptable level of risk. Use the checklist in Table 1 to judge when you’re ready to define a requirements baseline as a solid foundation for continuing the development effort.
Table 1. Factors to Consider Before Defining a Requirements Baseline
Business Rules
Determine whether you’ve identified the business rules that affect the system and whether you’ve specified functionality to enforce or comply with those rules.
Change Control
Make sure a practical change control process is in place for dealing with requirement changes and that the change control board is assembled and chartered. Ensure that the change control tool you plan to use is in place and configured and that the tool users have been trained.
Customer Perspective
Check back with your key customer representatives to see whether their needs have changed since you last spoke. Have new business rules come into play? Have existing rules been modified? Have priorities changed? Have new customers with different needs been identified?
Interfaces
See if functionality has been defined to handle all identified external interfaces to users, other software systems, hardware components, and communications services.
Model Validation
Examine any analysis models with the user representatives, perhaps by walking through test cases, to see if a system based on those models would let the users perform their necessary activities.
Prototypes
If you created any prototypes, did appropriate customers evaluate them? Did the BA use the knowledge gained to revise the SRS?
Alignment
Check to see if the defined set of requirements would likely achieve the project’s business objectives. Look for alignment between the business requirements, user requirements, and functional requirements.
Reviews
Have several downstream consumers of the requirements review them. These consumers include designers, programmers, testers, documentation and help writers, human factors specialists, and anyone else who will base their own work on the requirements.
Scope
Confirm that all requirements being considered for the baseline are within the project scope as it is currently defined. The scope might have changed since it was originally defined early in the project.
TBDs
Scan the documents for TBDs (details yet to be determined). The TBDs represent requirements development work remaining to be done.
Templates
Make sure that each section of the SRS document template has been populated. Alternatively, look for an indication that certain sections do not apply to this project. Common oversights are quality requirements, constraints, and assumptions.
User Classes
See whether you’ve received input from appropriate representatives of all the user classes you’ve identified for the product.
Verifiability
Determine how you would judge whether each requirement was properly implemented. User acceptance criteria are helpful for this.
You’re never going to get perfect, complete requirements. The BA and project manager must judge whether the requirements are converging toward a product description that will satisfy some defined portion of customer needs and is achievable within the known project constraints.
Establishing a baseline at that point establishes a mutual agreement and expectation among the project stakeholders regarding the product they’re going to have when they’re done. Without such an agreed-upon baseline, there’s a good chance someone will be surprised by the outcome of the project.
And software surprises are rarely good news.
To learn more about how to write requirements in a way that all stakeholders have a clear understanding of development needs, download our eBook, Best Practices for Writing Requirements.
Jama Software has partnered with Karl Wiegers to share licensed content from his books and articles. Karl Wiegers is an independent consultant and not an employee of Jama. He can be reached at ProcessImpact.com.
In 1967, computer scientist and programmer Melvin Conway coined the adage that carries his name: “Organizations that design systems are constrained to produce designs that are copies of the communication structures of these organizations.”
In other words, a system will tend to reflect the structure of the organization that designed it. Conway’s law is based on the logic that effective, functional software requires frequent communication between stakeholders. Further, Conway’s law assumes that the structure of a system will reflect the social boundaries and conditions of the organization that created it.
One example of Conway’s law in action, identified back in 1999 by UX expert Nigel Bevan, is corporate website design: Companies tend to create websites with structure and content that mirror the company’s internal concerns — rather than speaking to the needs of the user.
The widely accepted solution to Conway’s law is to create smaller teams focused around single projects so they can iterate rapidly, delivering creative solutions and responding adroitly to changing customer needs. Like anything else, though, this approach has its drawbacks, and being aware of those downsides in advance can help you mitigate their impact.
Here, we’ll unpack the benefits of leveraging smaller teams; assess whether Conway’s law holds up to scrutiny by researchers; and lay out how to balance the efficiency of small, independent teams against organizational cohesion and identity to build better products.
Smaller Teams Can Yield Better Results
Plenty of leading tech companies, including Amazon and Netflix, are structured as multiple (relatively) small teams, each responsible for a small part of the overall organizational ecosystem. These teams own the whole lifecycle of their product, system, or service, giving them much more autonomy than bigger teams with rigid codebases. Multiple smaller teams allow your organization to experiment with best practices and respond to change faster and more efficiently, while ossified, inflexible systems are slow to adapt to meet evolving business needs.
When your organization structure and your software aren’t in alignment, tensions and miscommunication are rife. If this is your situation, look for ways to break up monolithic systems by business function to allow for more fine-grained communication between stakeholders throughout the development lifecycle.
Testing Conway’s Law
In 1967, the Harvard Business Review rejected Conway’s original paper, saying he hadn’t proved his thesis. Nevertheless, software developers eventually came to accept Conway’s law because it was true to their experiences, and by 2008, a team of researchers at MIT and Harvard Business School had begun analyzing different codebases to see if they could prove the hypothesis.
For this study, researchers took multiple examples of software created to serve the same purpose (for example, word processing or financial management). Codebases created by open-source teams were compared with those crafted by more tightly coupled teams. The study found “strong evidence” to support Conway’s law, concluding that “distributed teams tend to develop more modular products.”
In other words, there’s definitely some justification for the idea that smaller teams will work more effectively and produce better results, while bigger groups may lack cohesion and exhibit dysfunction.
Organization First, Team Second
As a recent Forbes article noted, there are potential drawbacks to letting Conway’s law guide the structure of your organization. The thinking goes that “once you entrench small teams in this way, their respect and loyalty for that team often comes to outweigh their allegiance to the organization as a whole… Teams in disparate locations end up forming strong but exclusive identities as individual departments.”
So how do you balance the benefits of small, nimble groups against an organization-wide sense of solidarity, cooperation, and transparency?
Platforms that enable organization-wide collaboration can break down the barriers erected by Conway’s law without robbing small teams of their independence and agility. Josh McKenty, a vice president at Pivotal, argues that using collaborative platforms can neutralize the sense of otherness, of separateness, that can inhibit organization-wide cohesion: “Platforms can allow businesses to cultivate a sense of ‘we’re all in this together,’ in which everyone is respected, treated with mutual regard, and can clean up each other’s messes – regardless of whether they created the mess in the first place,” McKenty told a conference audience in 2017, according to Forbes.
That solidarity is crucial in complex product and systems development, where rapidly shifting requirements, evolving standards, and updated customer specs require consistent and dedicated communication within and across teams. If your teams are forming strong bonds, that’s terrific, but you don’t want those bonds to become exclusionary. If teams are turning into cliques, your organization has lost its internal cohesion.
A collaborative platform that unites disparate teams across functions and locations can help you actualize the benefits of small, focused teams without losing coherence.
Starting a new internship can be intimidating; new responsibilities, new people, new office, and perhaps the most daunting of all, a new codebase. With so many new things, it’s easy to become overwhelmed and want to dive straight into the code. I’ve had the opportunity to be in five software internships over the past few years through the Portland Cooperative Education Program (PCEP) and as a result, have learned enough to compile a list of things to do at the start of an internship. I hope the following things help you start off on the right foot:
1. Complete Onboarding
When it comes to onboarding, every company does it differently. If you are a part-time intern like I was most of the time, this is a good time to establish the hours you’re going to be working with your team. Make sure to give yourself time to get from school to the office and vice versa. More importantly, make sure you’re not too overloaded to do your schoolwork.
2. Introduce Yourself
I didn’t learn the importance of making a proper introduction to my coworkers right away. I’d spend hours on end banging my head against a wall (metaphorically) because I didn’t want to ask for help from someone I didn’t know. If I had introduced myself and gotten to know them earlier, I would’ve felt a lot more comfortable asking for help. Feel free to invite people for coffee or lunch and take every opportunity you can to go with someone who invites you. You’re being hired on as an intern to work with your team, not to be a lone wolf who tries to solve all their own problems.
3. Settle into Your Environment
I had an internship where I didn’t get my computer until two weeks in and had to read out of a textbook. Hopefully this is a rare experience, but if you don’t have what you need, don’t be shy about asking for it and following up with IT.
In my experience, setting up my dev environment has been tedious. Most companies do not have a well documented process, so this is where some assistance from a co-worker will be really helpful. Asking them for help shouldn’t be a problem now because you’ve already introduced yourself. Now is a good time to download any programs you use for productivity and configure your IDE to adhere to the code style of the team.
4. Get the Lay of the Land
Once you’re all set up, you can actually start learning. If you don’t already know about git (or any other version control software), I highly recommend you learn it. It’s absolutely crucial to figure out how VCS is used before diving into the code.
You should also get familiar with the programming language(s) that the application is using and the parts of code that your team works with. Watching a few videos and then doing an online tutorial or two helps me get comfortable with the code. Once you’ve got a better grasp on the language, you should take a look at the structure of the code for the application. Some things to look out for are:
The folder structure for the project, including
Back-end/Server-side code
Front-end code and styles
Test code
API code
Database code
Which libraries/modules are used
These can typically be found in files that deal with dependencies, such as pom.xml, package.json, Gemfile, etc.
5. Experiment
Look through the application, think of something you’d like to change, and then make the necessary code changes. Figure out how to write tests for your code and look at existing tests as reference. Try to make changes related to each area listed above if your team works in those areas.
Another thing that’s proven to be extremely helpful for me is picking up a small, easy task and going through it with another engineer who has been around for a while. At Jama, we use a technique called pair-programming to get the perfect balance between productivity and sharing domain knowledge. If you’d like to learn more about pair-programming, check out my colleague’s blog post about it.
6. Contribute!
After you’ve gotten comfortable with the code and making changes, you should be ready to start fixing small bugs and adding new features. To get your features and fixes live, you’ll need to know the process for putting your code into production. This process typically involves some form of merge request to get your code into the master branch. Merge requests are an excellent way to review your code changes and have another developer look them over to ensure they’re high quality.
Above all, remember that the main point of an internship is to learn new things. Don’t worry about contributing a lot until you’re comfortable with making changes on your own. Don’t be afraid to make mistakes; your VCS is very forgiving and your team will be too.
“Gartner clients report that poor requirements are a major cause of rework and friction between business and IT. Broad market adoption of software requirements solutions is low, exacerbating this situation.” This begins the key findings in Gartner’s newest Market Guide for Software Requirements Definition and Management Solutions.
The guide provides key findings, recommendations, market definition and direction, summarily stating:
Requirements management software provides tools and services that aid the definition and management of software requirements and user experience. Application development executives should invest in requirements skills, practices and tools to improve user experience and software quality.
In choosing a requirements management tools vendor, Gartner advises companies consider, among other factors, the ability to:
Work in shared (rather than collaborative) environments.
Use a true requirements repository (featuring a robust meta-model that enables reuse and impact analysis) rather than simple storage and tagging.
Integrate with other ADLM tools in use (including test case management, and agile planning).
Support regulatory and reporting needs (for compliance with both internal and external governance processes).
Gartner, Market Guide for Software Requirements Definition and Management Solutions, Thomas E. Murphy, Magnus Revang, Laurie F. Wurster, 24 June 2016
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.
In the beginning, there is a simple code base written by a few developers. The code’s deficiencies are easily kept in the brains of the developers creating it; they most likely know what needs to be fixed and where trouble can be found. Then the code grows, more developers are hired, features are added, and the code base evolves. Suddenly, its authors no longer easily retain the mind-map of the code and its faults, and the code base becomes a mysterious source of bugs and performance problems that exhibits remarkable resistance to change. This is legacy code.
Your code base presents challenges – technical debt accumulates, new features demand the existing code to evolve, performance issues surface, and bugs are discovered. How do you meet these challenges? What proactive steps can you take to make your legacy code more adaptable, performant, testable, and bug free? Code forensics can help you focus your attention on the areas of your code base that need it most.
Adam Tornhill introduced the idea of code forensics in his book Your Code as a Crime Scene (The Pragmatic Programmers, 2015). I highly recommend his book and have applied his ideas and tools to improve the Jama code base. His thesis is that criminal investigators and programmers ask many of the same open-ended questions while examining evidence. By questioning and analyzing our code base, we will not only identify offenders (bad code we need to improve), but also discover ways in which the development process can be improved, in effect eliminating repeat offenders.
For this blog post, I focus on one forensic tool that will help your team find the likely crime scenes in your legacy code. Bugs and tech debt can exist anywhere, but the true hot spots are to be found wherever you find evidence of three things:
• Complexity
• Low or no test coverage
• High rate of change
Complexity
Complexity of a class or method can be measured several ways, but research shows that simply counting the lines of code is good enough and predicts complexity about as well as more formal methods (see Making Software: What Really Works, Chapter 8, “Beyond Lines of Code: Do We Need More Complexity Metrics?” by Israel Herraiz and Ahmed E. Hassan, O’Reilly Media, Inc.).
Another quick measure of complexity: indentation. Which of these blocks of code looks more complex? The sample on the left has deep indentation representing branching and loops. The sample on the right has several short methods with little indentation, and is less complicated to understand and to modify. When looking for complexity, look for long classes and methods and deep levels of indentation. It’s simple, but it’s a proven marker of complexity.
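The original post showed the two samples side by side as an image; as a stand-in, here is a rough sketch of the same contrast (the function and field names are invented for illustration):
// Deeply nested: every level of indentation is another branch or loop to keep in your head
function totalOfLargeActiveOrders(orders) {
    var total = 0;
    for (var i = 0; i < orders.length; i++) {
        if (orders[i].active) {
            for (var j = 0; j < orders[i].items.length; j++) {
                if (orders[i].items[j].price > 100) {
                    total += orders[i].items[j].price;
                }
            }
        }
    }
    return total;
}

// Flatter equivalent: the same logic broken into small, shallow steps
function totalOfLargeActiveOrdersFlat(orders) {
    var items = [];
    orders.forEach(function(order) {
        if (order.active) items = items.concat(order.items);
    });
    return items
        .filter(function(item) { return item.price > 100; })
        .reduce(function(sum, item) { return sum + item.price; }, 0);
}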
Test Coverage
Fast-running unit tests covering every line of code you write are a requirement for the successful continuous delivery of high-quality software. It is important to have a rigorous testing discipline like Test-Driven Development; otherwise, testing might be left as a task to be done after the code is written, or not done at all.
The industry average bug rate is 15 to 50 bugs in every 1000 lines of code. Tests do not eliminate all the bugs in your code, but they do ensure you find the majority of them. Your untested legacy code has a high potential bug rate and it is in your best interest to write some tests and find these bugs before your users find them.
High rate of change
A section of code that is under frequent change is signaling something. It may have a high defect rate requiring frequent bug fixes. It may be highly coupled to all parts of your system and has to change whenever anything in the system changes. Or, it may be just the piece of your app that is the focus of new development. Whatever the source of the high rate of change, evidence of a specific section of code getting modified a lot should draw your investigative attention.
Gathering evidence
How do you find which parts of your system are complex, untested, and undergoing lots of change? You need tools like a smart build system integrated with a code quality analyzer, and a source code repository with an API that allows for scripted analysis of code commits. At Jama, we are very successful using TeamCity coupled with SonarQube as our continuous integration server and code quality analyzer. Our source code repository is git.
SonarQube can chart complexity against test coverage as a bubble plot: each bubble represents a class, and the size of the bubble represents the number of untested lines of code in that class. In other words, the larger the bubble, the more untested lines it has.
In such a chart, several giant bubbles of tech debt and defects floating high on the complexity scale are an immediate signal of where to look.
Both TeamCity and SonarQube report on the test coverage per class, so with every build you not only know what code is the least tested, but you also know the overall trend for coverage.
Using these tools, you now know where your complexity and untested code lives, but you need to know which parts of the suspect code are undergoing churn. This is where forensic analysis of your source code repository comes in.
Code repositories like git produce detailed logs, which can be analyzed by scripts. A command-line tool for doing this analysis is provided by Adam Tornhill to accompany his book and is available on his web site. This tool will do complexity analysis as well as change analysis.
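To get a feel for what change analysis involves before adopting a dedicated tool, a few lines of scripting over the git log will take you surprisingly far. The sketch below is illustrative only (the six-month window, file name, and top-20 cutoff are arbitrary choices, and this is not Tornhill’s tool); it counts how often each file appears in commits, which approximates a change-frequency ranking:
// hotspots.js - rough change-frequency analysis of a git repository
// First capture the history: git log --since="6 months ago" --name-only --pretty=format: > changes.txt
// Then run: node hotspots.js
var fs = require('fs');

var lines = fs.readFileSync('changes.txt', 'utf8').split('\n');
var counts = {};

lines.forEach(function(line) {
    var file = line.trim();
    if (!file) return; // skip the blank separator lines between commits
    counts[file] = (counts[file] || 0) + 1;
});

// Rank files by how often they were touched and print the top 20 "hot" files
Object.keys(counts)
    .sort(function(a, b) { return counts[b] - counts[a]; })
    .slice(0, 20)
    .forEach(function(file) {
        console.log(counts[file] + '\t' + file);
    });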
When looking at the results of your change analysis, you are searching for not only what is changing the most, but also what code tends to change together. Classes and modules that are frequently appearing together in code commits are evidence of a large degree of coupling. Coupling is bad.
What other forensic tools does your code repository offer? You can analyze commit messages and produce word clouds to see what terms are dominating change descriptions. You would prefer to see terms like “added”, “refactored”, “cleaned”, and “removed” to red flag terms like “fixed”, “bug”, and “broken”. And of course commit messages dominated by swearing indicate real problems.
Another useful data point is which parts of your codebase are dominated by which developers. If you have classes or modules that are largely written and maintained by one or two devs, you have potential bus factor issues and need to spread the knowledge of this code to the wider team.
Pulling it all together
After the above analysis is complete, you have an ordered list of the most untested and most complex code undergoing the highest rate of change. The offenders that appear at the top of the list are the prime candidates for refactoring.
All software systems evolve and change over time and despite our best efforts tech debt sneaks in, bugs are created, and complexity increases. Using forensic tools to identify your complex, untested, and changing components lets you focus on those areas at the highest risk for failure and as a bonus can help you study the way your teams are working together.
A few weeks ago I had a friend Grace reach out to me and ask me if I could speak to my experience with the DevOps movement from an engineering management perspective. Grace is one of the organizers of the Portland DevOps Groundup meetup group. Their goal is to educate others and discuss topics having to do with DevOps. I agreed to speak as well as host the event at Jama (one of the very cool things that we do as an organization is to host such community events).
Grace asking me to speak was timely as I have been doing a lot of thinking lately about the culture of DevOps and how it is applied here at Jama.
The term DevOps was once not widely known; now it has become fairly common. With that wide adoption also comes misuse and misunderstanding. People are using the term for all sorts of things, and it has become a buzzword in catchall job titles. To me, DevOps is all about collaboration, communication, and integration. I titled my talk “DevOps is dead, long live DevOps” on purpose to get a reaction from people (and I definitely got one from some of the recruiters in attendance). My point in picking that title was that the term has become diluted and misused and is becoming irrelevant.
I focused my talk on my personal history in software development, coming from an operations background. I’m no expert; this was just me sharing my experiences as a manager of technical people and how I’ve tried to build highly collaborative teams that enjoy working together and solving tough problems. I really enjoyed being able to share three separate work experiences with a large group of people and discuss how I’ve learned from each job and applied those learnings in an effort to improve the process each time. I spoke at length about my most current experience here at Jama and how we are working as a group to better integrate the practices and principles of DevOps into all of engineering, instead of it being a single team called “DevOps” that is tasked with the work. This cultural shift is starting to happen, and that is a good thing for all of Jama engineering.
I spoke for the better part of an hour and received some really thoughtful questions at the end of the talk about how people can work to effect change in culture and gain business adoption of these practices. DevOps is still mysterious to some people, or they think of it only in terms of tools and technologies. My hope is that my talk made it less of a mystery and started more people thinking in terms of collaboration, communication, and integration across the company culture.
“What I cannot create, I do not understand.”
Richard Feynman
Redux is pretty simple. You have action creators, actions, reducers and a store. What’s not so simple is figuring out how to put all of that together the best or most “correct” way. At least this is the problem I had with it. In order to try to gain a better understanding of the philosophy behind Redux, and to gain knowledge I could further share with my team, I decided to rewrite Redux, and to document it.
To rewrite Redux, I used a wonderful article by Lin Clark as a reference point, as well as the Redux codebase itself, and of course, the Redux docs.
You may note I’m using traditional pre-ES6 Javascript throughout this article. It’s because everyone who knows Javascript knows pre-ES6 JS, and I want to make sure I don’t lose anyone because of syntax unfamiliarity.
The Store
Redux, as is the same with any data layer, starts with a place to store information. Redux, by definition of the first principle of Redux, is a singular shared data store, described by its documentation as a “Single source of truth”, so I think I’ll start by making the store a singleton:
var store;
function getInstance() {
if (!store) store = createStore();
return store;
}
function createStore() {
return {};
}
module.exports = getInstance();
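Because the module exports the result of getInstance(), and Node caches modules, every file that requires this store gets the same object, which is a simple way to honor the “single source of truth” principle. For example (illustrative only; the file name is assumed):
var storeA = require('./store');
var storeB = require('./store');
console.log(storeA === storeB); // true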
The dispatcher
The next principle is that the state of the store can only change in one way: through the dispatching of actions. So let’s go ahead and write a dispatcher.
However, in order to update state in this dispatcher, we’re going to have to have state to begin with, so let’s create a simple object that contains our current state.
function createStore() {
var currentState = {};
}
Also, to dispatch an action, we need a reducer to dispatch it to. Let’s create a default one for now. A reducer receives the current state and an action and then returns a new version of the state based on what the action dictates:
function createStore() {
var currentState = {};
var currentReducer = function(state, action) {
return state;
}
}
This is just a default function to keep the app from crashing until we formally assign reducers, so we’re going to go ahead and just return the state as is. Essentially a “noop”.
The store is going to need a way to notify interested parties that an update has been dispatched, so let’s create an array to house subscribers:
function createStore() {
  var currentState = {};
  var currentReducer = function(state, action) {
    return state;
  };
  var subscribers = [];
}
Cool! OK, now we can finally put that dispatcher together. As we said above, actions are handed to reducers along with state, and we get a new state back from the reducer. If we want to retain the original state before the change for comparison purposes, it probably makes sense to temporarily store it.
Since an action is dispatched, we can safely assume the parameter a dispatcher receives is an action.
function createStore() {
  var currentState = {};
  var currentReducer = function(state, action) {
    return state;
  };
  var subscribers = [];

  function dispatch(action) {
    var prevState = currentState;
  }

  return {
    dispatch: dispatch
  };
}
We also have to expose the dispatch function so it can actually be used when the store is imported. Kind of important.
So, we’ve created a reference to the old state. We now have a choice: we could either leave it to reducers to copy the state and return it, or we can do it for them. Since receiving a changed copy of the current state is part of the philosophical basis of Redux, I’m going to go ahead and just hand the reducers a copy to begin with.
function createStore() {
  var currentState = {};
  var currentReducer = function(state, action) {
    return state;
  };
  var subscribers = [];

  function dispatch(action) {
    var prevState = currentState;
    currentState = currentReducer(cloneDeep(currentState), action);
  }

  return {
    dispatch: dispatch
  };
}
We hand a copy of the current state and the action to the currentReducer, which uses the action to figure out what to do with the state. What is returned is a changed version of the copied state, which we then use to update the state. Also, I'm using a generic cloneDeep implementation (in my case, lodash's) to copy the state completely. Simply using Object.assign wouldn't be suitable because it only makes a shallow copy: nested objects referenced by the top-level properties would still be shared with the original state.
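To make the difference concrete, here's a quick illustration (the state shape is made up purely for demonstration, and I'm assuming lodash is available as _):

var _ = require('lodash');

var state = { todos: { items: ['a'] } };

// Object.assign only copies the top level, so nested objects are still shared.
var shallow = Object.assign({}, state);
shallow.todos.items.push('b');
console.log(state.todos.items); // ['a', 'b'] (the original state was mutated)

// cloneDeep copies the whole tree, so the original stays untouched.
var deep = _.cloneDeep(state);
deep.todos.items.push('c');
console.log(state.todos.items); // still ['a', 'b']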
Now that we have this updated state, we need to alert any part of the app that cares. That’s where the subscribers come in. We simply call to each subscribing function and hand them the current state and also the previous state, in case whoever’s subscribed wants to do delta comparisons:
function createStore() {
  var currentState = {};
  var currentReducer = function(state, action) {
    return state;
  };
  var subscribers = [];

  function dispatch(action) {
    var prevState = currentState;
    currentState = currentReducer(cloneDeep(currentState), action);
    subscribers.forEach(function(subscriber) {
      subscriber(currentState, prevState);
    });
  }

  return {
    dispatch: dispatch
  };
}
Of course, none of this really does any good with just that default noop reducer. What we need is the ability to add reducers, as well.
Adding Reducers
In order to develop an appropriate reducer-adding API, let’s revisit what a reducer is, and how we might expect reducers to be used.
In the Three Principles section of Redux’s documentation, we can find this philosophy:
“To specify how the state tree is transformed by actions, you write pure reducers.”
So what we want to accommodate is something that looks like a state tree, but where the properties of the state are assigned functions that purely change their state.
{
  stateProperty1: function(state, action) {
    // does something with state and then returns it
  },
  stateProperty2: function(state, action) {
    // same
  },
  ...
}
Yeah, that looks about right. We want to take this state tree object and run each of its reducer functions every time an action is dispatched.
We have currentReducer defined in the scope, so let's just create a new function and assign it to that variable. This function will take the pure reducers passed in via the state tree object, run each one, and assign each outcome back to its key.
function createStore() {
  var currentReducer = function(state, action) {
    return state;
  };
  ...

  function addReducers(reducers) {
    currentReducer = function(state, action) {
      var cumulativeState = {};
      for (var key in reducers) {
        cumulativeState[key] = reducers[key](state[key], action);
      }
      return cumulativeState;
    };
  }
}
Something to note here: we're only ever handing a subsection of the state to each reducer, keyed by its associated property name. This helps simplify the reducer API and also keeps us from accidentally changing other areas of the global state. Your reducers should only be concerned with their own particular state, but that doesn't preclude your reducers from taking advantage of other properties in the store.
As an example, think of a list of data, let’s say with a name “todoItems”. Now consider ways you might sort that data: by completed tasks, by date created, etc. You can store the way you sort that data into separate reducers (byCompleted and byCreated, for example) that contain ordered lists of IDs from the todoItems data, and associate them when you go to show them in the UI. Using this model, you can even reuse the byCreated property for other types of data aside from todoItems! This is definitely a pattern recommended in the Redux docs.
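As a rough sketch of that pattern (the action name and fields here are hypothetical, just to show the shape):

// Hypothetical reducers: the data lives in todoItems, the ordering lives in byCreated.
var reducers = {
  todoItems: function(state, action) {
    state = state || {};
    if (action.type === 'ADD_TODO') {
      state[action.id] = { text: action.text, completed: false };
    }
    return state;
  },
  byCreated: function(state, action) {
    state = state || [];
    if (action.type === 'ADD_TODO') {
      state.push(action.id); // ordered list of IDs, newest last
    }
    return state;
  }
};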
Now, this is fine if we add just one single set of reducers to the store, but in an app of any substantive size, that simply won’t be the case. So we should be able to accommodate different portions of the app adding their own reducers. And we should also try to be performant about it; that is, we shouldn’t run the same reducers twice.
// State tree 1
{
  visible: function(state, action) {
    // Manage visibility state
  },
  ...
}

// State tree 2
{
  visible: function(state, action) {
    // Manage visibility state (should be the same function as above)
  },
  ...
}
In the above example, you might imagine two separate UI components having, say, a visibility reducer that manages whether something can be seen or not. Why run that same exact reducer twice? The answer is “that would be silly”. We should make sure that we collapse by key name for performance reasons, since all reducers are run each time an action is dispatched.
So, keeping in mind these two important factors (the ability to add reducers ad hoc, and not running duplicate reducers), we arrive at the conclusion that we should add another scoped variable that houses all the reducers added to date.
...
function createStore() {
  ...
  var currentReducerSet = {};

  function addReducers(reducers) {
    currentReducerSet = Object.assign(currentReducerSet, reducers);
    currentReducer = function(state, action) {
      var cumulativeState = {};
      for (var key in currentReducerSet) {
        cumulativeState[key] = currentReducerSet[key](state[key], action);
      }
      return cumulativeState;
    };
  }
  ...
}
...
The currentReducerSet variable is merged with whatever reducers are passed in, and duplicate keys are collapsed. We needn't worry about "losing" a reducer, because any two reducers that share a key name should be the same function. Why is this?
To reiterate, a state tree is a set of key-associated pure reducer functions. A state tree property and a reducer have a 1:1 relationship. There should never be two different reducer functions associated with the same key.
This should hopefully illuminate exactly what is expected of reducers: each one is a sort of behavioral definition of a specific property. If I have a "loading" property, what I'm saying with my reducer is "this loading property should respond to this specific set of actions in these particular ways". I can either directly specify whether something is loading (think of an action named "START_LOADING") or I can use the property to count how many things are currently loading by having it respond to the actions I know are asynchronous, such as "LOAD_REMOTE_ITEMS_BEGIN" and "LOAD_REMOTE_ITEMS_END".
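As a minimal sketch of what such a "loading" reducer might look like (the counter approach is just one option; the action names come from the example above):

function loading(state, action) {
  state = state || 0;
  switch (action.type) {
    case 'LOAD_REMOTE_ITEMS_BEGIN':
      return state + 1; // one more asynchronous operation in flight
    case 'LOAD_REMOTE_ITEMS_END':
      return state - 1; // one finished
    default:
      return state;
  }
}
// Anywhere in the app, state.loading > 0 means "something is loading."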
Let’s fulfill a few more requirements of this API. We need to be able to add and remove subscribers. Easy:
function createStore() {
  var subscribers = [];
  ...

  function subscribe(fn) {
    subscribers.push(fn);
  }

  function unsubscribe(fn) {
    // Only remove if we actually find the function; splice(-1, 1) would drop the last subscriber.
    var index = subscribers.indexOf(fn);
    if (index !== -1) subscribers.splice(index, 1);
  }

  return {
    ...
    subscribe: subscribe,
    unsubscribe: unsubscribe
  };
}
And we need to be able to provide the state when someone asks for it, and to provide it in a safe way, so we're only going to hand out a copy of it. As above, we're using a cloneDeep function to handle this so no one can accidentally mutate the original state: in Javascript, changing an object reached through a reference held in the state object changes the store's state itself.
function createStore() {
  ...
  function getState() {
    return cloneDeep(currentState);
  }

  return {
    ...
    getState: getState
  };
}
And that’s it for creating Redux! At this point, you should have everything you need to be able to have your app handle actions and mutate state in a stable way, the core fundamental ideas behind Redux.
Let’s take a look at the whole thing (with the lodash library):
var _ = require('lodash');

var globalStore;

function getInstance() {
  if (!globalStore) globalStore = createStore();
  return globalStore;
}

function createStore() {
  var currentState = {};
  var subscribers = [];
  var currentReducerSet = {};
  var currentReducer = function(state, action) {
    return state;
  };

  function dispatch(action) {
    var prevState = currentState;
    currentState = currentReducer(_.cloneDeep(currentState), action);
    subscribers.forEach(function(subscriber) {
      subscriber(currentState, prevState);
    });
  }

  function addReducers(reducers) {
    currentReducerSet = _.assign(currentReducerSet, reducers);
    currentReducer = function(state, action) {
      var ret = {};
      _.each(currentReducerSet, function(reducer, key) {
        ret[key] = reducer(state[key], action);
      });
      return ret;
    };
  }

  function subscribe(fn) {
    subscribers.push(fn);
  }

  function unsubscribe(fn) {
    var index = subscribers.indexOf(fn);
    if (index !== -1) subscribers.splice(index, 1);
  }

  function getState() {
    return _.cloneDeep(currentState);
  }

  return {
    addReducers: addReducers,
    dispatch: dispatch,
    subscribe: subscribe,
    unsubscribe: unsubscribe,
    getState: getState
  };
}

module.exports = getInstance();
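Before we step back, here's a quick sketch of how this store could be used on its own; the reducer and action names are hypothetical, and I'm assuming the file above is saved as Store.js:

var store = require('./Store');

// Register a reducer for a "counter" property.
store.addReducers({
  counter: function(state, action) {
    state = state || 0;
    if (action.type === 'INCREMENT') return state + 1;
    return state;
  }
});

// Listen for changes.
store.subscribe(function(currentState, prevState) {
  console.log('counter went from', prevState.counter, 'to', currentState.counter);
});

// Dispatch an action and read the result.
store.dispatch({ type: 'INCREMENT' });
console.log(store.getState().counter); // 1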
So what did we learn by rewriting Redux?
We learned a few valuable things in this experience:
We must protect and stabilize the state of the store. The only way a user should be able to mutate state is through actions.
Reducers are pure functions in a state tree. Your app’s state properties are each represented by a function that provides updates to their state. Each reducer is unique to each state property and vice versa.
The store is singular and contains the entire state of the app. When we use it this way, we can track each and every change to the state of the app.
Reducers can be thought of as behavioral definitions of state tree properties.
Bonus section: a React adapter
Having the store is nice, but you're probably going to want to use it with a framework. React is an obvious choice, as Redux was created as an implementation of the ideas behind Flux, the data-flow architecture Facebook introduced alongside React. So let's do that too!
You know what would be cool? Making it a higher-order component, or HOC as you'll sometimes see them called. We pass an HOC a component, and it creates a new component out of it. HOCs should also be infinitely nestable; that is, they should be able to be nested within each other and still function appropriately. So let's start with that basis:
Note: Going to switch to ES6 now, because it provides us with class extension, which we’ll need to be able to extend React.Component.
import React from 'react';

export default function StoreContainer(Component, reducers) {
  return class extends React.Component { }
}
When we use StoreContainer, we pass in the Component class (created with React.createClass or by extending React.Component) as the first parameter, and then a reducer state tree like the one we created up above:
// Example of StoreContainer usage
import StoreContainer from 'StoreContainer';
import { myReducer1, myReducer2 } from 'MyReducers';

StoreContainer(MyComponent, {
  myReducer1,
  myReducer2
});
Cool. So now we have a class being created and receiving the original component class and an object containing property-mapped reducers.
So, in order to actually make this component work, we’re going to have to do a few bookkeeping tasks:
Get the initial store state
Bind a subscriber to the component’s setState method
Add the reducers to the store
We can bootstrap these tasks in the constructor lifecycle method of the Component. So let’s start with getting the initial state.
...
export default function StoreContainer(Component, reducers) {
  return class extends React.Component {
    constructor(props) {
      // We have to call super to create the initial React
      // component and get a `this` value to work with
      super(props);
      this.state = store.getState();
    }
  }
}
Next, we want to subscribe the component’s setState method to the store. This makes the most sense because setting state on the component will then set off the top-down changes the component will broadcast, as we’d want in the Flux model.
We can’t, however, simply send this.setState to the subscribe method of the store — their parameters don’t line up. The store wants to send new and old state, and the setState method only accepts a function as the second parameter.
So to solve this, we’ll just create a marshalling function to handle it:
...
import store from './Store';

function subscriber(currentState, previousState) {
  this.setState(currentState);
}

export default function StoreContainer(Component, reducers) {
  return class extends React.Component {
    constructor(props) {
      ...
      this.instSubscriber = subscriber.bind(this);
      store.subscribe(this.instSubscriber);
    }
    componentWillUnmount() {
      store.unsubscribe(this.instSubscriber);
    }
  }
}
...
Since the store is a singleton, we can just import that in and call on its API directly.
Why do we have to keep the bound subscriber around? Because binding it returns a new function. When unmounting the component, we want to be able to unsubscribe to keep things clean. We know that the store merely looks for the function reference in its internal subscribers array and removes it, so we need to make sure we keep that reference around so we can get it back when we need to identify and remove it.
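A quick illustration of that bind behavior (nothing store-specific, just plain Javascript):

function subscriber() {}
var a = subscriber.bind(null);
var b = subscriber.bind(null);
console.log(a === b); // false: bind returns a brand-new function every time
// That's why we hold on to the exact bound function we subscribed with,
// so we can hand the same reference back to unsubscribe later.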
One last thing to do in the constructor: add the reducers. This is as simple as passing the reducers we received in the HOC to the store.addReducers method.
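Inside the constructor, it's just one more line (the same line appears in the full listing below):

constructor(props) {
  ...
  store.addReducers(reducers);
}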
So now we're ready to provide the rendering of the component. This is the essence of HOCs: we take the Component we received and render it within the HOC, imbuing it with whatever properties the HOC needs to provide.
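The render method looks like this (it's the same one you'll see in the full listing below):

render() {
  return (<Component {...this.props} {...this.state} />);
}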
We are "spreading" the properties and state of the HOC down to the Component it is wrapping. This effectively ensures that whatever properties we pass to the HOC get down to the component it wraps, a vital feature of infinitely nestable HOCs. It may or may not be wise to place the state as properties on the Component, but it worked well in my testing, and it was nice to be able to access the state through the this.props object of the wrapped Component, as you'd normally expect with a React component that receives data from a parent component.
Here's the whole shebang:
import React from 'react';
import store from './Store';
function subscriber(currentState, previousState) {
  this.setState(currentState);
}

export default function StoreContainer(Component, reducers) {
  return class extends React.Component {
    constructor(props) {
      super(props);
      this.state = store.getState();
      this.instSubscriber = subscriber.bind(this);
      store.subscribe(this.instSubscriber);
      store.addReducers(reducers);
    }
    componentWillUnmount() {
      store.unsubscribe(this.instSubscriber);
    }
    render() {
      return (<Component {...this.props} {...this.state} />);
    }
  }
}
Implementation of using StoreContainer:
import StoreContainer from 'StoreContainer';
import { myReducer } from 'MyReducers';
class MyComponent extends React.Component {
  // My component stuff
}
export default StoreContainer(MyComponent, { myReducer });
Implementation of using the Component that uses StoreContainer (exactly the same as normal):
import MyComponent from 'MyComponent';
import ReactDOM from 'react-dom';

// Render it like any other component:
ReactDOM.render(<MyComponent />, document.body);
But you don't have to bind the data basis of your MyComponent up front, in a long-lived class definition; you can also do it more ephemerally, at the point of use, and perhaps this is wiser for more generalized components:
import StoreContainer from 'StoreContainer';
import { myReducer } from 'MyReducers';
import GeneralizedComponent from 'GeneralizedComponent';
import ReactDOM from 'react-dom';
let StoreContainedGeneralizedComponent = StoreContainer(GeneralizedComponent, { myReducer });
ReactDOM.render(<StoreContainedGeneralizedComponent myProp='foo' />, document.body);
This has the benefit of letting parent components control certain child component properties.
Conclusion
Well, that might have been a bit exhaustive, and we may not have covered everything, but my hope is that by opening up Redux and exposing its innards, as well as providing an implementation of its usage with a popular library, it's a bit clearer how Redux is expected to manage state in a safe manner.
If you’ve found anything erroneous in this article or want to give feedback, please feel free to reach out to me to let me know so I can keep this article informative and up-to-date.