Tag Archive for: Agile

Jama Debating Scalability

Like many maturing companies, Jama found itself in a situation where its monolithic software architecture prohibited scaling. Scalability here is a catch-all for many quality attributes, such as maintainability across a growing team, performance, and true horizontal scalability. The solution was simple, on paper: our software had to be split up. We are talking late 2013, microservices are taking off, and a team starts carving out functions of the monolith into services that could then be deployed separately in our emerging SaaS environment. We are a SaaS company, after all. Or we are a SaaS company first. Or, well, we are a SaaS company which deeply cares about those on-premises customers that don’t move to the cloud… yet… for a variety of reasons, whether we like it or not.

Planning our Strategy

Will we keep on delivering the full monolith to on-premises customers, including those parts we deploy separately in SaaS? That would be a pretty crappy economic proposition for us, as we’d essentially be building, then testing, everything twice. On-premises customers would not see any of the scaling benefits of the services architecture, nor could the engineering team really depart from the monolithic approach that was slowing them down. (On a side note: as a transitional solution we used this approach for a little while, and be assured that there’s little to love there.)

Then, will we deliver a monolith to on-premises customers that lacks a growing number of features, having those as a value-add in SaaS, perhaps? That works… up to a point. We currently have services like SAML, OAuth, and centralized monitoring in our SaaS environment that aren’t available to our on-premises customers. They let us get away with that. But there are only so many services you can carve out before hitting something that’s mission-critical to on-premises customers.

Docker

2014: Scribbling our options on a whiteboard

The only solution that makes sense: bring the services to the on-premises customers. (For completeness’ sake: there was this one time someone proposed not supporting on-premises installations anymore. They were voted off the island.)

So, services are coming to an on-premises near you.

Implications of Services

Huge. The implications are huge, in areas such as the following:

  • Strategy. Since 2010 we have been focusing on our SaaS model and, in turn, driving our customers to our hosted environment. The reality is that our customers are slow to adopt, which requires us to refocus on the on-premises deployment. That is okay, and there’s no reason we can’t do both, but it’s sobering to pull yourself back after so much focus went into “being more SaaS” (which came with the good hopes of a gradual transition of (all) customers to the cloud).
  • Architecture. Our SaaS environment has a lot of bells and whistles that make no sense for on-premises customers, and it relies on a plethora of other SaaS providers to do its work. This needs to be scaled down, in a way that keeps the components usable both for on-premises customers and in the SaaS environment.
  • Usability. We come from WAR deployments, where a single WAR archive is distributed and loaded into a standardized application server (specifically Apache Tomcat), all of which is relatively easy. We are now moving to a model with multiple distribution artifacts, which then also need to be orchestrated to run together as one Jama application.
  • Culture. There is a lot of established thinking that had to be overcome, in fairly equal parts by ourselves and by our customers. I mean, change: there are plenty of books on change, and on how it’s typically resisted.

Within Engineering (which is what I’ll continue to focus on), I’ve been involved in ongoing discussions about a deployment model for services, going back to 2014. One of the early ideas was to just bake a scaled-down copy of our SaaS environment into a single virtual machine. (And expect some flavors with multiple virtual machines to support scalability.) But too many customers outright reject the notion of loading into their environment a virtual machine that is not (fully) under their control. A virtual machine would be unlikely to follow all the IT requirements of our customers, and would lead to a lot of anxiety around security and the ability to administer this alien. So, customers end up running services on their own machines.

That quickly leads to another constraint. The administrators at our customers traditionally needed one skill: being able to manage Apache Tomcat running Jama’s web archive file (WAR). While we have an awesome team of broadly skilled, DevOps-minded engineers working on our SaaS environment, we can’t expect such ultra-versatility from every lone Jama administrator in the world. We needed a unified way to deploy our different services. This is an interesting discussion to have at a time when your Engineering team still mostly consists of Java developers, and when DevOps was still an emerging capability (compared to the mindset of marrying development and operations that is now more and more being adopted by Jama Engineering). We had invested in a “services framework”, which was entirely in Java, using the (may I say: amazing) Spring Boot, and “service discovery” was dealt with using configuration files inside the Java artifacts (“how does service A know how and where to call service B?”). It was a culture shift to collectively embrace the notion that a service is not a template of a Java project, but a common language for tying pieces of running code together.
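To give a feel for that early style of “service discovery”, it amounted to little more than a property baked into a service’s artifact at build time. A minimal sketch (the property name, port, and path here are hypothetical, not Jama’s actual configuration):

```properties
# Hypothetical application.properties packaged inside service A's jar.
# Service B's location is fixed when the artifact is built;
# changing it means rebuilding and redeploying service A.
serviceB.baseUrl=http://localhost:8081/api
```

The limitation is visible at a glance: the wiring between services lives inside the Java artifacts themselves, rather than in the environment where the services run.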

Docker and Replicated

In terms of deployment of services, we discussed contracts for how to start/stop a service (“maybe every service needs a folder with predefined start/stop scripts”). We discussed standardized folder structures for log files and configuration. Were we slowly designing ourselves into Debian packages (dpkg, apt) or RPM (yum) packages, the default distribution mechanisms for the respective Linux distributions? What could Maven do here for us? (Not a whole lot, as it turns out.) And how about this new thing…

This new thing… Docker. It was very new (remember, this was 2014; Docker’s initial release was in 2013, and the company changed its name to Docker Inc. as recently as October 2014). We dismissed it, and kept talking in circles until the subject went away for a while.

Early 2015, coincidentally roughly around the time we created the position of DevOps Manager, we got a bunch of smart people in a room to casually speak about perhaps using Docker for this. There was nothing casual about the meeting, and it turned out that we weren’t prepared to answer the questions that people would have. We were mostly talking from the perspective of the Java developer, with their Java build, trying to produce Docker images at the tail end of that build, ready for deployment. We totally overlooked the configuration management involved outside of our world of Java, and the tremendous amount of work there that we weren’t seeing. And in retrospect, we must have sounded like the developer stereotype of wanting to play with the cool, new technology. We were quickly cornered by what I will now lovingly refer to as an angry mob: “there is not a single problem [in our SaaS environment] that Docker solves for us”. I’m way cool about it now, but that turned out to be my worst week at Jama, by a distance. Things got better. We were able to create some excitement by using Docker to improve the way we were doing continuous automated system testing. We needed some help from the skeptics, which gave them a chance to start adjusting their views. We recruited more DevOps folk, with Docker in mind while hiring. And we did successful deployments with Docker for some of our services. We were adopting this new technology. But more importantly, we were slowly buying into the different paradigm that Docker offers, compared to our traditional deployment tools (WAR files, of course, and we used a lot of Chef).


We were also telling our Product Management organization about what we were learning. How Docker was going to turn deployments into liquid gold. How containers are different from virtual machines (they are). They started testing these ideas with customers. And toward the second half of 2015 the lights turned green. Or… well… some yellowish, greenish kind of color. We were scared of the big unknowns: will we be able to harden it for security, is it secure, will customers believe it is secure? But also: will it perform as well as we expect? How hard will it be to install?

One of the prominent questions was still the constraint that I mentioned earlier: how much complexity are we willing to incur on our customers? Even today, Docker is fairly new, and while there is a growing body of testimony around production deployments, not all of our customers are necessarily at that forefront. First of all, Docker means Linux, whereas we had traditionally also supported Windows-based deployments. (I believe we even supported OS X Server at some point in time.)

Secondly, the scare was that customers would end up managing a complex constellation of Docker containers. We had been using Docker Compose a bit for development purposes by then, and that let us at least define the configuration of Docker containers (which I like to refer to as orchestration), but we’d have to write some scripts (a lot?) to do the rest. Around that time, we were introduced to Replicated, which we did some experiments with, and a cost-benefit analysis. It let us do the orchestration of Docker containers and manage the configuration of the deployment, all through a web-based user interface, installed on-premises. Not only would it offer a much more user-friendly solution, it would take care of a lot of the orchestration pain, and we decided to go for it.
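To give a feel for what “orchestration” means in practice, here is a minimal Docker Compose sketch of two cooperating containers. The service names, image names, and ports are purely illustrative, not our actual artifacts:

```yaml
# Illustrative sketch only: images and ports are hypothetical.
services:
  core:
    image: example/jama-core:latest    # the main web application
    ports:
      - "8080:8080"                    # expose the web UI on the host
  search:
    image: example/jama-search:latest  # a supporting service, reachable
                                       # from "core" by its service name
```

Even in a toy example like this, you can see the administration burden growing with every container added, which is exactly the complexity Replicated hides behind its user interface.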

Past the Prototype

The experiments were over, and I formally rolled onto the actual project on November 11, 2015. We were full steam ahead with Docker and Replicated. Part of the work was to turn our proof of concept into mature production code. This turned out not to be such a big deal. We know how to write code, and Docker is just really straightforward. The other part of the work was to deal with the lack of state. Docker containers are typically stateless, which means that any kind of persisted state has to go outside of the container. Databases, data files, even log files need to be stored outside of the container. For example, you can mount a folder location of the host system into a Docker container, so that the container can read/write that folder location.
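As a sketch of that pattern (the image name and host paths below are hypothetical, not Jama’s actual layout), host-folder mounts are what keep a container disposable:

```shell
# Hypothetical example: persisted state lives on the host,
# so the container can be destroyed and recreated at will.
docker run -d \
  --name jama-core \
  -v /opt/jama/logs:/var/log/jama \
  -v /opt/jama/config:/etc/jama \
  -p 8080:8080 \
  example/jama-core:8.0
```

Each `-v host-path:container-path` flag mounts a host folder into the container; upgrading then becomes replacing the container while the mounted data stays put.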

Then the realization snuck up on us that customers had been making a lot of customizations to Jama. We had anticipated a few, but it turns out that customers have hacked our application in all sorts of ways. Sometimes as instructed by us, sometimes entirely on their own. It was easy enough to look inside the (exploded) WAR file and make a few changes. They had changed configuration files and JavaScript code, and even added completely new Java files. With Docker that would not be possible anymore, so we dealt with many such customizations, coming up with alternative solutions for all the customizations that we knew of. Some configuration files can again be changed, stored outside of the container; some options have been lifted into the user interface that a root user in Jama has for configuring the system, stored in the database; and sometimes we decided that a known customization was undesired, and we chose not to solve it. By doing all that, we are resetting and redefining our notion of what is “supported”, and hopefully have a better grasp, going forward, on the customizations that we support. And with it, we ended up building a lot, a lot, of the configuration management that was initially underappreciated.

Ready for the Next Chapter

Meanwhile, we are now past an Alpha program, a Beta program, and while I’m writing this we are code complete and in excited anticipation of the General Availability release of Jama 8.0. We have made great strides in Docker-based configuration management, and learned a lot, which is now making its way back into our SaaS environment, while the SaaS environment has seen a lot of work on horizontal scalability that will be rolled into our on-premises offering in subsequent releases — the pendulum constantly swinging. While I’m probably more of a back-end developer, and while “installers” probably aren’t the most sexy thing to be working on, it was great to work on this project: we are incorporating an amazing technology (Docker), and I’m sure that our solution will be turning some heads!

On a chance bus ride down MLK to our Jama office a few months ago, I happened to share a seat with a colleague in our Engineering Department, Bryant Syme. He had only been working for Jama for a few months and, to be perfectly honest, I hadn’t spoken to him much yet. We talked a lot about recent events in the office, but also about some of his previous work experiences. That was the first time I had ever heard about Mob Programming and the many potential benefits it can bring to a team of engineers. It planted the seed for me to introduce it to my own team and eventually start evangelizing it to the rest of our department.


What is it?

Mob Programming is a style of pair programming, but with the entire team involved instead of just two developers. Every person involved in the story should be in the Mob Programming session and actively contributing, including Product Managers, DevOps and QA Engineers.

Think of Mob Programming as a tool for getting through larger, more obtuse stories and epics. The team will crowd around a single screen with one person driving and will talk through everything from acceptance criteria and design decisions, to implementation of the code and even test cases.

Mob Programming has many benefits:

  • Shared ownership over decisions.
  • Better quality code.
  • Ability to break through large tasks easily.
  • Team bonding through working together.
  • A great way to teach other team members various skills.

This style of work doesn’t need to be limited to programming. It also works great for any project, from writing a document, to planning future work, to doing performance testing.

The tenets of Mob Programming

The main tenets of mob programming that everyone should follow are:

  • Use one keyboard and screen
  • Use a private room
  • Select a timekeeper to rotate who is on the keyboard every 15 or 30 minutes.
  • Everyone gets time at the keyboard, even non-programmers.
  • Take a story from start to finish, or in other words: from planning to coding, to testing, to done.
  • Take breaks when you want.
  • A session should span an entire workday.

Each of these tenets is flexible and should be discussed with the group before starting. One thing I’ve had a lot of luck with so far is pausing the timer to do whiteboard planning, for instance. We also usually take however much time we need at the beginning of the session to sketch a rough plan of what we are going to do, in order to stay on task as people switch around.

One keyboard and screen

This allows the team to concentrate without the distraction of e-mail, chat applications or other work. Team members may arrive convinced that they will need to work on other activities, since there won’t be enough to help with when they aren’t at the keyboard. I had such an encounter with one of my teammates who was certain that there would not be enough for him to do. You will need to remind them that this is not a normal meeting and that you need their full attention. In the case of my teammate, I conceded that he could bring his PC as long as he kept his attention on the task at hand. He agreed, and ended up being so engaged that he rarely, if ever, looked at his own screen.

One rule you can bend here: research on a single screen can be boring for the team to watch and help with. This is an appropriate time for other team members to use their own PCs to help do research (as long as everyone stays on task).


Use a private room

This moves the team to another space both physically and mentally, and also prevents outside distractions. Other teams should respect that you have shut the doors and should not interrupt you. But if you are interrupted, team members should volunteer to chat with that person outside of the room to allow others to keep working.

Rotate who is on the keyboard every 15 or 30 minutes

Decide on a good time interval at the beginning of the meeting. I recommend 15 or 30 minutes depending on how many people are in the group, but other time increments are also fine. I’ve found that a group of 4 or fewer people works best with 30-minute intervals, whereas 5 or more works best with 15-minute intervals. It’s just enough time to get some work done, but also enough for everyone to rotate through in the larger group.

Bring a timer with a loud alarm. I usually use the Clock app on my iPhone and turn the sound way up. When the alarm goes off, whoever is at the keyboard should immediately take their hands off and let the next person rotate in, even if they were in the middle of typing. The thing to remember here is that it’s not about one person working while the others watch; it’s about everyone working on the same thing. Whoever rotates in should easily be able to pick up where the last person left off.

A clock that resets itself is also ideal, since you don’t want to forget to start the timer.


Everyone gets time at the keyboard, even non-programmers

Whoever is helping should have a chance at the keyboard, even if they are in a QA, PM or DevOps role. Remember that everyone is working on the same task, watching and directing what the driver is doing, so it should not matter much who is at the wheel. It’s okay to be a backseat driver in this situation.

Participation also keeps everyone at full attention! Keeping the same person at the keyboard, or only developers, will become boring for the others in the room if they never get a chance to participate.

Take a story from start to finish

Even when coded, the story isn’t finished; it still needs to be tested! Work on your test cases as a team. Personally, I am a QA engineer, and getting other team members to help make quality test cases is very validating, and makes our testing less of a black box.

Whatever is required to get that story into the “Done” column should be done during this session. In addition to getting higher quality code, test cases and automation, this also tears down a lot of walls between roles. A lot of our developers don’t have much of an idea of what DevOps or QA engineers “do”. This is a perfect chance to get cross-team collaboration and boost how your team works together!

People are allowed to take breaks when they want

Bathroom breaks, coffee breaks, and lunch breaks should not be discouraged, but be warned: people will want to keep working, so mandatory breaks may be needed!

Mob programming can also be exhausting; if someone needs a few minutes to take a breather, they should be allowed to simply leave and come back when needed.

A session should span an entire workday

This one has often been difficult to schedule. So far we have managed to schedule one full day and several half days of mob programming. Most literature I’ve seen on the topic recommends the full day, if possible. If individuals need to leave for meetings or other commitments, there should still be enough people left to absorb their absence.

Conclusion

Mob Programming is a great tool that can be used to effectively chop down and complete large stories and epics. If you are trying this, remember to review the tenets with your group, and stick to one screen and one keyboard as much as possible.

This is also great for bringing other team members up-to-speed with certain design patterns or tools. Someone who never uses the command-line or has never dealt with a certain language before will likely get a chance to learn a lot.

Everyone in the room should be involved; don’t limit it to just programmers, or others will get bored and not be as engaged. Remember to invite everyone in your team to the session, including the Product Managers, QA and DevOps Engineers.

And of course remember to have fun! Odds are your team will have a blast and work just a little better together than before the experience.

Within Jama, we pride ourselves on “drinking our own champagne” — using Jama to build Jama. It’s a rare occurrence in software development that you get to design and build the very product that is core to how you work, one that makes your everyday tasks more pleasurable. Our product development organization uses Jama to manage our requirements, capture product and design decisions, and flesh out product ideas and build backlogs. The communication and collaboration features in Jama are key to keeping our engineering organization aligned with the product and business departments.

While we all work in Jama, our engineering teams use an agile scrum methodology, and their day-to-day sprint planning and execution is managed in JIRA. Core to our workflow is the seamless synchronization between Jama and JIRA. We had used our original JIRA connector for this purpose, but when we launched the Jama Integration Hub, a far superior tool based on the Tasktop Sync product, it naturally became a priority for engineering to make the switch.

Rolling out the Jama Integrations Hub at Jama

When we launched this initiative there was much to learn about the Integrations Hub and how it fit into our agile workflow at Jama, so we decided to take an incremental approach to the change. We have a number of scrum teams, so we set out to migrate them one at a time over the course of a few sprints, allowing us to incorporate learnings from each migration and refine the process and our usage of the Integrations Hub. The nice thing about switching to the new Hub is that you can migrate your teams and workflow away from our original connector incrementally, so the two tools can work side by side until all teams are completely migrated over to the new Hub.

We started out by engaging with one of our own implementation consultants, Matt Mickle, to learn about the Hub and the migration best practices. We mapped out our own agile development workflow, and the Jama- and JIRA-specific workflows that would be integrated. We created a timeline and created documentation that we would refine as we discovered more best practices for our specific needs. We also set up a migration support team and escalation process in the event we ran into any issues post-migration.

With this important up-front planning we knew what a migration looked like, how long it would take and how it would affect the capacity of the team during the migration process. In a truly agile way we had enough to start our incremental approach.

What we learned in setting up the Jama Integrations Hub

After we started the first team migration to the Integrations Hub, we discovered a few issues, which we quickly resolved, and then added new efficiencies to the migration process (more on that below). We also quickly learned that if we wanted to take full advantage of the Integrations Hub’s templates, we needed to clean up our existing JIRA workflows. Up to this point, we had neither a consistent JIRA workflow nor a naming convention. Remedying this as part of the migration process had the added benefit of establishing consistency across our teams, and actually made our own JIRA administration much easier. Once we cleaned up our JIRA workflows we were able to take advantage of the Integrations Hub’s templates, adding a Story Template and a Defect Template, which gave us huge efficiencies. Now migrating a project, or even setting up a new project, is a matter of simply creating mappings from our existing templates.

Another lesson we quickly learned was the timing of all the various steps involved in a migration. Our goal was to have a streamlined process for migrating teams to the Integrations Hub with little downtime. We determined that we could create the Jama and JIRA filters needed for the Integrations Hub’s mappings the day before the migration, saving time waiting for a schema refresh. A huge amount of time saved!

Our next challenge was scaling the migrations and the administration of migrated teams in the Integrations Hub. We already had a few tips from Matt Mickle about consistent naming of mappings in the Hub. We came up with a naming convention that incorporated the team name, the item type, and the type of mapping. That way we could simply look at the mapping names to know what they were, instead of having to open them up to see what they were mapping and which team they belonged to.

A better workflow makes for a better product

Today we have all the scrum teams in engineering migrated over to the Integrations Hub. We have a well-documented Hub set-up process and naming conventions. Our JIRA workflow for migrated projects is now consistent across teams, which we found actually improved our Jama workflow, too.

We also understand the Integrations Hub a lot better now and can take advantage of all the new features with each release from Tasktop. For example, the ability to scope mappings by project, introduced in the Jama Integrations Hub 4.3, is a very useful feature and allows for more flexibility in how we structure projects in Jama.

The biggest win for our team, though, is that now that we have integrated our Jama and JIRA workflows, we have better insight into our product development process via a transparent workflow from product requirements through test results. Moreover, the integration of the data from each tool allows us to seamlessly jump into each system without paying the tax regularly associated with switching between tools, losing context, and waiting for data to synchronize across systems. This is a big time saver for our engineering teams and makes it seem like a single interconnected workflow instead of separate systems that we’re using.

Tips for a successful migration to the Jama Integrations Hub

  • Map out your end-to-end development workflow. While we use JIRA, the Hub also integrates with Version One, Rally, and Team Foundation Server. Include in your mapping how your development tool workflow meets your Jama workflow.
  • Utilize Integration Hub templates. The Hub’s templates are a huge time saver and ensure a consistent process.
  • Use consistent naming in your filters and mappings. This makes managing several teams and mappings much easier and allows you to scale your integrations with your development tool.
  • Make your development tool workflow consistent. Once we did this we found it was much easier to scale our integration and actually helped bring more order and consistency to our own JIRA workflows across teams.
  • Document your migration procedure. We created a checklist to ensure a repeatable and consistent process which could be improved over time.
  • Create your filters ahead of the migration. Creating the needed filters for the Integrations Hub mapping the day before the migration, and then refreshing the Jama and JIRA schemas within the Hub, made our actual migration process much faster.
  • Work with your Jama Customer Success team. For us, inside of Jama, this meant having someone from consulting and support available to assist. For you as a customer, we recommend working with your Jama Customer Success team to quickly deal with any issues as they come up. Once you’re through the first few migrations the process will get faster and easier.

Our team would be interested to hear from you about how you’re managing workflows between teams, whether you’re using Jama or not yet using Jama. Do you have integrations between the various tools that your business, product and engineering teams use?

When you hear the word “workflow” do you get excited, like you can’t wait to spend a day in front of a white board with a cross-departmental team and map out how you will all work together happily ever after?

Or, are you like most people and dread the thought of going through the process of designing and implementing a new workflow, no matter how badly you know your organization needs to change the way it works?

Anyone who’s been through an organizational change in how teams are to work together already knows this process is never easy. Just thinking about this may trigger thoughts of:

  • Heavy, unmaintainable process that doesn’t scale
  • Wasted time going around in circles trying to get buy-in from everyone involved
  • Rigid rules that lead to impersonal interactions
  • And maybe some overwhelming diagrams or documentation just to add salt to the wound

When the Product and Engineering teams here at Jama set out to establish a workflow between our two groups we had some of the same concerns. We were already productive, but we were growing quickly (and still are!) and we knew we needed to establish a formal way of communicating, one that would scale over time.

What makes a good workflow?

So the first question we had to answer for ourselves was, “What makes a good workflow?”

Simply put, the intent of any workflow is to help people better work together.

But people aren’t simple. And moreover, the products they produce aren’t simple, either. As a society we now rely on critical software and technology to run our economy, our military and to save lives. These products are, by definition, complex.

So as we developed our workflow at Jama, we focused on the aspects that we knew would make us work not only faster, but better. Our Engineering and Product teams were already committed to Agile practices for their focus on people first, and even have an in-house coach. What we needed to improve was our communication with the Business, and the alignment of all three groups around common goals: the “why” of what we’re building. So we decided to document and formalize our commitment to Agile by designing a workflow around these tenets:

  • Facilitate fluid communications without a heavy-handed process
  • Enable quick and confident decision-making

Getting Down to the Science (Behavioral Science, that is)

But we didn’t stop there. With my educational background in neuroscience and human behavior, I knew that groups can agree to a new workflow on paper and still fail to maintain it, especially in growth periods, which we were (and are!) definitely in.

We’d need something more than a beautiful flow chart. We needed to know what makes teams adopt new ways of working and stick with them.

I started investigating some of the latest research in how people work in teams and how they adapt to change. I found some gems that applied to workflows, as well as other struggles in my life, because really, what part of life isn’t about dealing with people and adapting to change?

There were two theories that I found particularly applicable to our team:

  • Status Quo Bias
  • Choice Overload

According to The Behavioral Economics Guide 2014, Status Quo Bias refers to the fact that “…people feel greater regret for bad outcomes that result from new actions taken than for bad consequences that are the consequence of inaction.”

People don’t like change. Not surprising. But what this research also says is that people are especially averse to change in work processes, even if what they’re currently doing isn’t working, because they’re worried that if the change leads to failure they’ll be liable and life will be worse than it is today.

Also, people will say that they like choices, but if they’re presented with too many options they become confused and overwhelmed and experience choice overload. They need to have the right amount of information, at the right time, so they can evaluate and make a decision on a purposefully limited set of criteria, with just the right amount, and the right type, of data.

The Agile Workflow Design

Like I said above, a good workflow supports the people within it and improves their working relationships. So the first thing we did was get clear on who makes up a product development team, what each person is responsible for in the process, and how we can support their communication with each other.

click to enlarge


This chart illustrates an example team, across departments, and the people who need to weigh in on strategy, plans and key decisions, and at what point in the product development cycle. This is a good, if simplified, representation of what our customers tell us about their teams’ compositions and dynamics, and also how we work at Jama.

click to enlarge


And this (finally!) is the Agile Workflow Design. If you’re familiar with Agile, you’ll recognize many of the steps. This chart also shows which tool holds which types of data, in our case, Jama and JIRA, indicated in the lower left corner.

Next, I’ll zoom in on the key steps in the Agile Workflow Design. The diagrams below don’t look exactly like the one above, and are designed to illustrate key points in the Workflow.

click to enlarge


Here the Product Manager or Product Owner documents the business need, or why we’re building this feature and who will use it and the value they’ll get out of it. This information is documented in Jama, indicated by the orange text.

click to enlarge


The Product Manager or Owner collaborates with Engineering—in this case, the Lead Developer—to understand the business need and develop Epics. You’ll notice two new things in this diagram. The first is the progress bar above the “Business Need” box, which indicates progress to completion. Right now we haven’t started development, so it’s at “0%.” The second is the blue “Connected Users” icon. Right now there are 3 people involved in this project. Both of these indicators are taken directly from the Jama interface, where you can see current progress and the individual team members involved in a project, which is really useful, especially if you have a global, distributed team or, in our case, a growing company where you may not yet know everyone’s name!

click to enlarge


Now development work has begun. Epics have been broken down into stories in Jama, and that data has been synced, in this case, to JIRA (indicated in the blue text). Progress is tracked in JIRA and that data is synced with Jama. As indicated by the progress bar, we are now 15% complete in this work, and with the introduction of the developers, we now have 5 Connected Users.
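The story sync and progress roll-up described here can be sketched in a few lines. This is a hypothetical illustration, not Jama’s actual integration: the field names, the `ENG` project key, and both helper functions are assumptions, and a real sync would call the two products’ REST APIs rather than work on plain dictionaries.

```python
# Illustrative sketch of a one-way Jama -> JIRA story sync.
# Field names and the project key are assumptions for this example.
import json


def jama_story_to_jira_payload(story, project_key="ENG"):
    """Map a Jama story item (a plain dict here) to a JIRA
    issue-creation payload of the shape JIRA's REST API expects."""
    fields = story.get("fields", {})
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": fields.get("name", ""),
            "description": fields.get("description", ""),
            "issuetype": {"name": "Story"},
        }
    }


def progress_percent(stories):
    """Roll completed-story counts up into a single progress figure,
    like the progress bar shown in the diagrams."""
    if not stories:
        return 0
    done = sum(1 for s in stories if s.get("status") == "Done")
    return round(100 * done / len(stories))


if __name__ == "__main__":
    story = {"id": 101, "fields": {"name": "Login page",
                                   "description": "As a user..."}}
    print(json.dumps(jama_story_to_jira_payload(story), indent=2))
    print(progress_percent([{"status": "Done"},
                            {"status": "In Progress"},
                            {"status": "Done"}]))  # → 67
```

In a real integration, the payload would be POSTed to JIRA and status changes would be pulled back into Jama on a schedule, keeping the progress bar current for everyone connected.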

click to enlarge


The QA team has been testing all along the way, but as we get closer to completion it’s a great time to confirm we’re still on track. We’re logging defects in Jama and syncing their status back to JIRA. We now have 15 Connected Users and we’re 65% complete in this work.

click to enlarge


Testing is complete and we’ve confirmed that this feature meets the business need. We’re now at 100% and ready for release!

And that’s the Agile Workflow Design, explained. Of course, for our team at Jama, arriving at this design was a gradual process that included many conversations and agreements. We’re always improving it, and it’s been worth the effort. Now our teams are more connected to the business need behind our development plans, and other departments, such as Operations and Marketing, have more insight into what we’re building, making it easier for those teams to do their jobs well.

Take This To Your Team

I want to leave you with a few useful things to guide you the next time you find yourself in a process meeting, or designing a workflow:

  • Workflows should connect people first, and data second. After all, it’s people who actually get the work done. It doesn’t mean that codifying a series of steps and procedures is a bad idea, as many people are building very complex things and they may just need to account for twenty well-defined steps in order to get a quality product out the door. What I’m highlighting is the fact that at every touch point where people need to exchange information there’s an opportunity to make that interaction as smooth and productive as possible. When you focus on this you’ll have an easier time getting the group working together in a “flow” as much as possible.
  • Remember the “why” of what you’re building. Stay connected to your original business goals, and build in opportunities to adapt to change as it inevitably arises. Don’t wait for people to protest; Status Quo Bias inhibits that.
  • Avoid Choice Overload by limiting the number of available options when a decision must be made.

Hear more about team collaboration in the recently recorded version of our Agile Workflow Design Webinar. We cover these concepts and more, in-depth, and answer questions from the webinar audience.

About the author: 

Robin Calhoun is a product manager for Jama Software, as well as a certified ScrumMaster. Calhoun uses her education in human behavior and economics to direct product decisions and team management, combining this expertise with the ever-evolving body of knowledge in Agile development practices. Before joining the Jama team, Calhoun was a product manager at Tendril, defining data-driven Energy Service Management products. She holds a degree from Columbia University in Neuroscience and Behavior.

This is the seventh post in a series examining the changes that have occurred since the Agile Manifesto was published and the implications they have on how we might consider the Manifesto today. Find the first post here.

In 2001 the notion was that documentation should be replaced by working software. Of course, back then ‘software’ was a simpler concept. Certainly some of it was very complex, but overall, software products have since grown greatly in complexity. The mindset at the time was often to document everything upfront, then go build; the result was that teams built the wrong thing.

Alistair Cockburn, a signer of the Manifesto, has spoken about the word “comprehensive” and the decision to use it. According to Cockburn, the term was hotly debated. The creators didn’t want people to think that documentation in and of itself was unnecessary, because they did believe it was important. The intent was to call out exhaustive documentation as overkill.

Today we have more complex software. We also have the realities of Minimum Viable Product (MVP) and continuous delivery. The idea of working software is much more of a reality. This does not change the importance of documentation. What does change is the idea of the word ‘document.’ A whiteboard, sticky notes, a wiki, collaboration software: these are all documentation. This is a critical and necessary aspect of the process. The ability to respond to change, to interact, in fact everything the Agile Manifesto believes in, relates to communication and collaboration around something – the ideas, stories, epics, and decisions written and made every day.

Read the next post in this series, “Rethinking the Agile Manifesto: Customer Collaboration and Contract Negotiation.”

This is the fourth post in a series examining the changes that have occurred since the Agile Manifesto was published and the implications they have on how we might consider the Manifesto today. Find the first post here.

In the previous two posts in this series I discussed that the world has changed and software is now everywhere. The third reason for reviewing the Agile Manifesto is simply that complexity has increased, at both the product and the company level. Building products used to be simpler: hardware products could only handle so many lines of code, web applications only had to worry about a couple of browsers and monitor sizes, and there were fewer programming frameworks.

A modern product delivery survey conducted with Forrester found that 55% of companies had over 100 products and 87% of companies had multiple teams working on projects. 70% of companies stated that they release products quarterly or more frequently. 61% of projects had at least four different teams involved, while only 4% of stakeholders were co-located in close proximity. 23% of products now consist of both hardware and software elements.

Today, products are more complex by their very nature, often performing multiple tasks simultaneously, and generally in a smaller form factor. Hardware contains a lot more software; software products need to consider many more devices and situations; and open source has provided the development community with many more libraries, frameworks and languages from which to choose. The range of products available is wider and product development has become more complex. A greater number of platforms must be supported – in 2001 Firefox and Chrome did not exist yet, now there are many browsers, as well as many different devices and platforms on which to browse. That’s not to say there was no complexity back in 2001, but there have been significant added layers of complexity since then.

Many products now have a much greater ‘ecosystem,’ in that they operate in conjunction with other products to enhance functionality. For example, the success of Apple’s iPad was built around its ecosystem: the applications that it enables and which, in turn, enable it with greater functionality. Nest’s vision was not just a home thermostat but the broader ecosystem of home control, which, bigger picture, was what Google was interested in when it acquired Nest.

When we look at the complexity of companies, technology has progressed so much that we now have the ability to communicate across distributed teams in ways that were unavailable in 2001. This enables geographically dispersed teams to work together more efficiently, but it can also amplify the organizational challenges of product delivery. There is less of a need for everyone to be in the same room, and many organizations have product teams both here in the United States and overseas.

Read the next post in the series here.