
In this post, we recap a recent webinar hosted by Jama Software on integrating TestRail and Jama Connect.


As the digital demands of the business continue to escalate, software delivery teams are under extraordinary pressure to deliver more work faster. Speed, however, counts for little if these teams are not delivering a high-quality product of value; rapid turnaround for a customer request is futile if the feature doesn’t work properly or meet their needs.

Fortunately, including quality assurance and test teams in the earliest phases of the software delivery lifecycle has never been easier, making it possible to strike the right balance between speed and quality.

By seamlessly linking requirements to their test cases and test results, product managers and system engineers benefit from real-time visibility into test coverage and automated compliance reporting.

Join Jama Software’s VP of Product Management, Jeremy Johnson, and Tasktop Director of Partner Pre-Sales, Zoe Vickers, for a webinar demonstrating:

  • How linking requirements in Jama Connect to tests that sync directly to and from TestRail enables transparency and cross-team alignment
  • How to correct inefficiencies and speed up time-to-market while enhancing product quality and employee satisfaction

Watch the full webinar to learn more about Optimizing Your QA Process by Integrating TestRail and Jama Connect.


Integrating TestRail and Jama Connect

Excerpt from webinar below:

Jeremy Johnson: I’m going to start by going through the agenda of topics that we have lined up for you today. Most of you are likely already familiar with Jama Connect’s ability to manage requirements, tests, and risks as part of your overall product development life cycle. So we’re going to start this discussion specifically around our perspective on integrations. We’ll have Zoe come in and talk about Tasktop Hub for Jama Connect and tell you a little bit about Tasktop as a company. Then we’ll move into test management challenges that we see when we discuss product development with our customers and prospective customers. And we’ll also touch on the benefits of integrating TestRail and Jama Connect. Zoe will then dive a little bit deeper into the integration flow and some of the benefits and show a live demo connecting TestRail and Jama Connect. And then we’ll recap some key takeaways and get into some questions and answers at the end of the session.

So like I mentioned, I’d like to start here with Jama Software’s perspective on integrations. And the first point to touch on here is that the best practice around product development is to view integrations as a means to achieve live traceability across the systems development life cycle. Many, if not most, of you are familiar with the V model; you see a representation here. But you may not be familiar with live traceability. It’s really the idea that the different pieces of data that impact the product development life cycle need to evolve, need to be very dynamic, throughout the process. Things like verification and validation need to happen much earlier in the process.

So maintaining this live traceability between all of these different product development components, different assets that might be involved, different products that might be involved, is critical to optimizing your product development life cycle and processes. The real key here, of course, is that with this notion of live traceability in place, issues are reduced, and those that do arise are found earlier in the process. You can see here, based on data from industry sources, that finding these issues earlier in the process can save 16 to as much as 110 times the cost of not finding them until later in the process.


RELATED POST: Introducing TestRail for Jama Connect


Now I think everybody at this stage of evolution understands and agrees that this is the best practice. But really, the challenge has been implementing this in the real world. So why is it that companies tend to struggle? The reality is that most companies, probably nearly all companies, don’t have an end-to-end system development process that covers all of these components. They tend to be broken up into silos: different toolsets, maybe even desktop tools and spreadsheets. And all of that variability in the tool chain leads to potential issues. The areas marked with Xs on this representation of the V model are the potential areas where traceability might be broken. That results in significant manual effort, emails, meetings, all of those kinds of things, and maybe a little bit of luck, in trying to prevent the delays, defects, rework, and cost overruns that can come when that traceability is broken.

And Jama Software, and Jama Connect as a product, can certainly resolve some of these components through its inherent capability of tying risk and test information to requirements. But many companies have come to accept this situation as unchangeable: if we don’t have a single platform to do all of this, then we inherently need to manage these things in silos, accept desktop tools and spreadsheets as, on some level, part of the process, and treat those as things we’re simply not going to be able to control and manage. But really, a key to bringing this all together, to achieving this live traceability, is to sync these existing software tools, these best-of-breed tools.

That includes syncing even desktop tools and spreadsheets with requirements. Jama Software is one of the companies that is truly making this possible. So if we look at live traceability and an example of the connected data, the connected components within Jama Connect, you can see how easy it is to define elements and the relationships across tools, maybe even spreadsheets, in this example. This happens to be from our medical device solution, but we have similar, tailored solutions for aerospace and defense, automotive, semiconductor, industrial manufacturing, and various other industries. These components are continually synced with best-of-breed tools. Those tools apply their own specific engineering disciplines while, importantly, linking back to requirements and the other vital components of the product development life cycle within Jama Connect.

So again, you can see some of those connections in this diagram; it’s very common for things like JIRA and Azure DevOps to handle downstream issue identification, task management, some of those things on the execution side. Zoe is going to touch a little bit on how JIRA comes into play in this scenario. We have TestRail, of course, which Zoe will be talking about on the verification, the testing, side. So those will come into play as we get deeper into this discussion. And one of the key ways that we help customers achieve this live traceability is through our strategic partnership with Tasktop. So to introduce you to Tasktop and some of their capabilities, I’ll now pass it over to Zoe.

Zoe Vickers: Thanks, Jeremy. I really appreciate it. Hi everyone. What I want to talk about as a whole is that Tasktop has been a strategic partner of Jama’s for many years. What this partnership really allows all of you Jama customers to do is integrate Jama with over 60 tools in the broader Agile, DevOps, and testing ecosystem. The reason Jama decided to partner with us at Tasktop is that we offer an out-of-the-box, point-and-click configuration for setting up these integrations. What I mean by this is not a heavy services engagement. Each time you add a new project, new fields, or new requirement types that you want to integrate with different tools like TestRail or JIRA, you don’t have to reach back out to Jama or Tasktop.

You are able to scale your integrations on your own, so you can build an enterprise-wide solution within your own teams. One of the things that I want to show on the next slide is all of the different connectors that Tasktop has. I mentioned that we allow you to integrate with over 60 different tools. We also have something called our integration test factory, where internally at Tasktop we run over 500,000 tests a day against every supported version of software that we have.

So again, what we’re able to help you do, and what we’ve helped many, many Jama customers do over the years, is integrate Jama with the full list of tools on the right-hand side. If you’re taking a quick look through that list, [inaudible]. Again, some of the most common ones that we’re seeing specifically with Jama customers: we’re working with Azure DevOps, we’re working with JIRA, we’re working with TestRail, we’re working with Sparx EA. And really, the idea here is to bring as much traceability as possible across your different tool chains. That way, as you go back to some of those traceability diagrams that Jeremy was talking about, you can see where all your [inaudible 00:12:05] teams are doing their work, and you actually have visibility into the up-to-date status at any moment in time.


RELATED POST: Datasheet: Tasktop Integration Hub for Jama Connect


One of the big things that we talk about at Tasktop is: why do you actually need an integration solution? There are some brilliant developers out there who can build an integration solution in-house. A lot of times those integrations work beautifully, but what many customers over the last few years have told me is that they’re hard to scale. And then it becomes a bit more of a problem for them when it comes to error handling or troubleshooting. What we like to say at Tasktop is, first off, integration solutions are needed, whether it’s through mergers or acquisitions or just growing teams as you go from a small to a medium to a large business. You have a lot of duplicate data entry that might be happening in a variety of tools.

That’s true whether you’re communicating between your requirements tools, your agile teams, or your ITSM teams logging tickets. What Tasktop is going to help you do is eliminate a lot of that overhead and duplicate data entry, which is going to help you speed up your delivery times. It’s also going to improve communication between the teams, because they are able to collaborate across different tools without having to exit those applications. So one of the things we’ll be able to talk about today with Tasktop is that you’re no longer going to have to walk across the office, send an email, or send an extra Slack message. You can continue to comment back and forth in Jama, within TestRail, within JIRA. That way your teams truly have end-to-end visibility into who’s working on what without having to log out of their tools of choice.

The main way that Tasktop does this is through something called model-based integration. One of the big questions we like to ask at Tasktop is: is there value in just a simple point-to-point integration? Yes. This webinar today is talking specifically about Jama to TestRail. But we also need to be aware that none of you are going to operate in just the vacuum of those two tools. You probably have an entire ecosystem of 10, 15, 20 tools that your different teams are using. Tasktop is built to be an integration solution that can scale across your entire organization. We are not a plugin to Jama or a plugin to TestRail; we are an independent third-party solution that can be either on-prem or cloud-hosted, and the way we offer this integration is through something we call model-based integration.

What a model is going to do is act as that universal translator to convert and normalize data between systems. What I mean by that is, in that image on the right-hand side, you’re able to see, for example, a variety of different tools that an example customer we’ve worked with has. Specifically, if we look at Jama, they are doing a lot of their defect logging there, as it relates back to their different requirements. So what I’m able to do is take not only Jama, but also the specific work item that they’re working on. There are maybe different bug or defect types, with all the schema that makes up that specific item, and map it into my model. From there, instead of just going over to TestRail, I could integrate that specific information with JIRA, with Azure DevOps, with a problem from ServiceNow.

And I’m also able to then take something like TestRail and map that into the model. So at any point in time when you want to add an additional tool or an additional work item, and you want to bring integration and synchronization to your teams, you can easily map it to the model and reuse either half of any other integration. An example of how this actually works is on this next slide: imagine that you have your two different tools. We have a Jama test case on the left and a TestRail test case on the right. Then you have the model in the center. Think of your model as nothing more than a bucket of fields. Before you hopped on the webinar today, you probably decided, “Hey, I am interested in integration.” That means that, probably on some little scratch piece of paper or in an Excel document, you have noted, “Hey, I want to get information from this tool to this tool.”

The information I care about is something like a status or a priority or the assignee of this item. All that information goes into the model. Think of the model as Tasktop. From there I can easily map from Jama to the model, and you can see that we can have a one-to-one relationship between fields, or a one-to-many. Then I can also map TestRail to the model. And as I do that, as we keep clicking through, you’re going to see these different boxes pop up, which show a Jama collection and a TestRail collection. What that means is, first off, if we look at the black lines: Tasktop is able to route the data anywhere it needs to go, to the correct fields and the correct values, via the model. Once we go into Tasktop later, we’ll talk about how the architecture of Tasktop works, where, as I showed on the previous slide, you can map any tool to the model as one half of an integration.

We see here that half of your integration is going to be called a Jama collection. That means we’re mapping from Jama to the model. And the other half of the integration is going to be from TestRail to the model, and that’s your TestRail collection. The reason this matters to you is that the second question I get from a lot of customers is, “Well, do we have to have all these fields equal? Does the schema in both tools need to match?” And the answer is no. You can start your integration with baby steps. You’re going to very easily be able to say, “This is the state of my Jama, this is the state of my TestRail. Let’s just start flowing information.” Then from there you can determine how to scale.
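To make the “bucket of fields” idea concrete, here is a toy sketch of model-based mapping in Python. It is purely illustrative and not Tasktop’s implementation, which is proprietary and configured point-and-click; every field name below is an assumption made up for the example.

```python
# Toy model-based integration: the model is a neutral "bucket of fields,"
# and each collection maps one tool's schema onto it (one half of an
# integration). All field names here are hypothetical.
MODEL_FIELDS = {"title", "status", "priority", "assignee"}

JAMA_COLLECTION = {          # Jama field -> model field
    "name": "title",
    "testCaseStatus": "status",
    "priority": "priority",
    "assignedTo": "assignee",
}
TESTRAIL_COLLECTION = {      # TestRail field -> model field
    "title": "title",
    "status_id": "status",
    "priority_id": "priority",
    "assignedto_id": "assignee",
}

def to_model(item, collection):
    """Normalize a tool-specific item into model fields."""
    return {model: item[tool] for tool, model in collection.items()
            if tool in item and model in MODEL_FIELDS}

def from_model(model_item, collection):
    """Project model fields back out into a tool-specific schema."""
    return {tool: model_item[model] for tool, model in collection.items()
            if model in model_item}

# Route a Jama test case to TestRail via the model. Adding a third tool later
# means writing one new collection, not another point-to-point mapping.
jama_item = {"name": "Login works", "testCaseStatus": "PASSED", "priority": "High"}
print(from_model(to_model(jama_item, JAMA_COLLECTION), TESTRAIL_COLLECTION))
# -> {'title': 'Login works', 'status_id': 'PASSED', 'priority_id': 'High'}
```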

Watch the full webinar to learn more about Optimizing Your QA Process by Integrating TestRail and Jama Connect.



Testing is a critical phase of quality control, and whether or not an organization includes a well-defined test strategy in its development process can make or break a product’s success in the market.

The right combination of manual and automatic testing will result in a higher-quality product, and that’s ultimately what everyone wants for their users.

With Jama’s Test Management Center, Quality Assurance (QA) teams can design reliable testing strategies resulting in defect-free products that adhere to even the strictest compliance standards.

Constructing robust testing strategies requires broad strategic thinking, and more often than not teams begin formulating testing strategies too late in the game.

By providing QA teams with more visibility earlier in the product development process, you’ll increase quality by anticipating problems before they arise and devoting the right resources to fix them.

A reliable testing strategy typically involves six phases. With the proper implementation, product developers can consistently deliver high-quality products that exceed expectations.

Requirements Analysis

In Jama Connect, every test case can be linked back to a requirement. This allows QA teams to have immediate visibility into test cases while clearly understanding the critical requirements behind each test.

It’s crucial that teams understand specific feature and design expectations and are able to resolve any conflicts stemming from unclear or unspecified requirements. With this unique requirement linkage, teams can spend less time on requirements analysis and more time solidifying their test plans.
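In the Jama Connect UI these links are created interactively, but the same relationship can also be scripted against Jama’s REST API. The sketch below is a minimal, hedged example: the base URL, credentials, item IDs, and relationship type ID are all placeholders to check against your own instance’s API documentation.

```python
# Minimal sketch: create a requirement -> test case relationship over
# Jama Connect's REST API. URL, credentials, and IDs are placeholders.
import requests

JAMA_BASE = "https://yourcompany.jamacloud.com/rest/latest"  # placeholder
AUTH = ("api_user", "api_password")                          # placeholder

def link_requirement_to_test(requirement_id, test_case_id, relationship_type):
    """Create a downstream link so the test case traces back to the requirement."""
    resp = requests.post(
        f"{JAMA_BASE}/relationships",
        json={
            "fromItem": requirement_id,
            "toItem": test_case_id,
            # Relationship type IDs (e.g. "Verified By") are defined per instance.
            "relationshipType": relationship_type,
        },
        auth=AUTH,
    )
    resp.raise_for_status()

# Example: trace test case item 2002 back to requirement item 1001.
# link_requirement_to_test(1001, 2002, relationship_type=5)
```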

Test Plan Creation

Creating the test plan is the most important phase in the testing strategy. This document outlines the entire testing process for a specific product.

Well-executed and documented test plans ensure high-quality products. The success or failure of a product can depend on how well a test plan is carried out.

QA teams want to achieve quality in efficient and risk-free ways, so it’s important that a well-formulated test plan can be reused as a template for additional test plans.

With Jama Connect, teams can reuse the test plans they’ve created in Test Management Center for projects with the same requirements, saving time and increasing confidence in their compliance.

Test Case Creation/Execution

At the completion of your development cycle, it’s time to get testing with a well-documented test plan.

With Jama’s Test Management Center, you can easily create and execute many types of manual tests, including functional tests, non-functional tests and maintenance tests.

This is when all the stakeholders come together to review any product defects — often in the form of technical reviews centrally conducted within the system. Finally, after each test, a test report is generated that details a list of defects identified.

Defect Logging/Fix

It’s important for quality and development teams to work closely together with real-time visibility into defects across all teams.

When QA teams log defects in Jama Connect, those defects will be immediately visible to development teams in ALM tools such as Jira, streamlining the end-to-end, find-fix-resolve process.

While manual testing helps QA teams understand the entire context of a problem from an end-user perspective, Jama’s API makes it easy to link to automation tools such as Jenkins, TeamCity, Selenium and TestRail to run multiple tests in a short period of time.
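As a rough illustration of that hand-off, the sketch below pushes an automated verdict back into a Jama test run over REST, the sort of call a Jenkins or TeamCity job might make after a Selenium suite finishes. It follows the general shape of Jama’s REST API, but the URL, credentials, IDs, and required fields vary by instance and version, so treat everything here as an assumption to verify.

```python
# Hedged sketch: mark a Jama Connect test run passed/failed from a CI job.
# URL, credentials, the run ID, and the exact field set are placeholders.
import requests

JAMA_BASE = "https://yourcompany.jamacloud.com/rest/latest"  # placeholder
AUTH = ("api_user", "api_password")                          # placeholder

def report_result(test_run_id, passed, notes):
    """Record an automated result against an existing Jama test run."""
    resp = requests.put(
        f"{JAMA_BASE}/testruns/{test_run_id}",
        json={"fields": {
            "testRunStatus": "PASSED" if passed else "FAILED",
            "actualResults": notes,
        }},
        auth=AUTH,
    )
    resp.raise_for_status()

# Called once the automated suite finishes, e.g. from Jenkins:
# report_result(3003, passed=True, notes="Selenium regression build #214: 42/42 green")
```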

With Jama’s Test Management Center, organizations are empowered to manage product development and meet compliance standards at a faster pace.

Have the testing phase down to a science? Great! Check out this short webinar to learn more about key metrics for product development success.


Too often products fail due to poorly managed requirements. A requirement is a document that defines what you are looking to achieve or create: it identifies what a product needs to do and what it should look like, and explains its functionality and value. Without clearly defined requirements, you could produce an incomplete or defective product. It’s imperative that the team be able to access, collaborate on, update, and test each requirement through to completion, as requirements naturally change and evolve during the development process.

There are four fundamentals that every team member and stakeholder can benefit from understanding:

  1. Planning good requirements: “What the heck are we building?”

A good requirement should be valuable and actionable; it should define a need as well as provide a pathway to a solution. Everyone on the team should understand what it means. Good requirements need to be concise and specific, and should answer the question “What do we need?” rather than “How do we fulfill a need?” Good requirements ensure that all stakeholders understand their part of the plan; if parts are unclear or misinterpreted, the final product could be defective or fail.

  2. Collaboration and buy-in: “Is everyone in the loop? Do we have approval on the requirements to move forward?”

Trying to get everyone in agreement can cause decisions to be delayed, or worse, not made at all.  Team collaboration can help in receiving support on decisions and in planning good requirements. Collaborative teams continuously share ideas, typically have better communication and tend to support decisions made because there is a shared sense of commitment and understanding of the goals of the project. It’s when developers, testers or other stakeholders feel “out of the loop” that communication issues arise, people get frustrated and projects get delayed.

  3. Traceability & change management: “Wait, do the developers know that changed?”

Traceability is a way to organize, document and keep track of the life of all your requirements from initial idea through to testing. By tracing requirements, you are able to identify the ripple effect changes have, see if a requirement has been completed and whether it’s being tested properly, provide the visibility needed to anticipate issues and ensure continuous quality, and ensure your entire team stays connected both upstream and downstream.  Managing change is important and prevents “scope creep”, or unplanned changes in development that occur when requirements are not clearly captured, understood and communicated. The benefit of good requirements is a clear understanding of the end product and the scope involved.

  4. Quality assurance: “Hello, did anyone test this thing?”

Concise, specific requirements can help you detect and fix problems early, rather than later when they’re much more expensive to fix. In fact, it can cost up to 100 times more to correct a defect late in the development process, after it’s been coded, than to correct it early on while it’s still a requirement. By integrating requirements management into your quality assurance process, you can help your team increase efficiency and eliminate rework.

Requirements management can sound like a complex discipline, but when you boil it down to a simple concept, it’s really about helping teams answer the question, “Does everyone understand what we’re building and why?” When everyone is collaborating and has full context and visibility into the discussions, decisions, and changes involved with the requirements throughout the product development lifecycle, that’s when success happens consistently and you maintain continuous quality. Not to mention the process is smoother, with less friction and frustration along the way for everyone involved. And isn’t that something we’d all benefit from?

Learn more about how to write high quality requirements.

 


Overview

Automation remains one of the most contentious topics in software development. You get three engineers into a room to discuss automation and you will end up with four contradicting absolutes and definitions. So for the purpose of this post, we will place the following limits on the topic:

  • Automation in this post will refer specifically to Automation of Functional Tests for a UI or API.
    • Functional Tests will be defined as tests containing a prescribed set of steps executed via an interface connected to consistent data which produces an identical result each time it’s executed (see the sketch after this list).
  • Failure in this post will be defined as more than 3 months of effort spent creating automation that is not acted upon, or that is determined to be too expensive or buggy to maintain after 12 months and is turned off or ignored.
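For concreteness, here is a small Python example of a functional test in exactly that sense: prescribed steps, executed through an interface (a REST API), against consistent data, with one expected result. The endpoint and payload are hypothetical.

```python
# A functional API test per the definition above: fixed steps, stable
# interface, known data, one expected outcome. Endpoint/payload are made up.
import requests

BASE = "https://staging.example.com/api"  # hypothetical test environment

def test_create_and_fetch_order():
    # Step 1: create an order from consistent, known test data.
    created = requests.post(f"{BASE}/orders", json={"sku": "ABC-123", "qty": 2})
    assert created.status_code == 201
    order_id = created.json()["id"]
    # Step 2: read it back through the same public interface.
    fetched = requests.get(f"{BASE}/orders/{order_id}")
    assert fetched.status_code == 200
    # Step 3: identical input must produce the identical expected result.
    assert fetched.json()["qty"] == 2
```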

This post will cover the three most common reasons automation fails:

  1. Inability to describe a specific business need/objective that automation can solve.
  2. Automation is treated as a time-boxed activity and not a business/development process.
  3. Automation is created by a collective without a strong owner who is responsible for standards.

Wait, Who Are You?

I am Michael Cowan, Senior QA Engineer at Jama Software. Over the past 20 years I have been a tester, developer, engineer, manager and QA architect. I have built automation solutions for Windows, Web, Mobile and APIs. I have orchestrated the creation of truly spectacular monstrosities that have wasted large sums of money/resources as well as designed frameworks that have enabled global teams to work together on different platforms, saving large(r) sums of money.

I have had the amazing opportunity to be the lead designer and implementer of automation for complex systems that handled millions of transactions a day (Comcast), dealt with 20-year-old systems managing millions of dollars (banking), worked in high-security/zero-tolerance environments (Homeland Security) and processed massive big data streams (Facebook partner). I have worked side by side with brilliant people, attended conferences and trainings, as well as given my own talks and lectures.

I have a very “old school” business focused philosophy when it comes to automation. To me it is not a journey or an exploration of cool technology. Automation is a tool to reduce development and operating costs, while freeing up resources to work on more complicated things. I strongly believe that automation is a value add for companies that correctly invest in it, and a time/money sink for companies that let it run wild in their organizations.

Failure Reason #1: Unable to describe a specific business need/objective that automation can solve

The harsh truth is that, by itself, clicking a button on a page (even a really cool/difficult/complex custom button) has no value to the business. The business doesn’t care if you click that button manually with the mouse, execute it with JavaScript, call the business logic via API or directly manipulate the database. What they care about is ensuring that a customer isn’t going to call up after the release to return the product, and that some blogger won’t discover a major issue and drive away investors with a scathing review.

Automation projects fail when they are technical exercises that are not tied to specific business needs. If the ROI (Return on Investment) is not clearly understood, you are unlikely to get the funding to do automation correctly. Instead you will find your efforts rushed to just implement ‘Automation’ and move on. Months later, everyone is confused about why automation hasn’t been completed, why it doesn’t do x, y, and z, and why all the things they assumed would be included were never planned.

Nothing is worse than a team of automation engineers thinking they are making great progress, only to have the business decide to pull apart the team because it doesn’t understand the value. If you are running automation directly tied to an understood business need, then the business leaders will be invested. You will find support because your metrics will clearly show the value being produced.

Another consequence of running automation as an engineering project is making decisions based on technology instead of business need. If you decide upfront to use some open source tool you read about, you will find yourself telling the business what it (you) can’t do. “Well, no, our tool doesn’t hook into our build server, but we can stop writing tests and build a shim.” Pretty soon you are spending all your time making your project more feature-rich instead of creating the test cases the business needs. This is how teams can spend 6-12 months building a handful of simple automation scripts. Even worse, you end up with a large code base that now needs to be maintained. The majority of your time will have been spent building shims, hacks and added complexity that has nothing to do with your business offering or domain.

Mitigation

It’s actually very easy to avoid this pitfall. Don’t start writing tests until you have a plan for what automation will look like when it’s fully implemented. If your team practices continuous integration (running tests as part of the build), don’t start off with a solution that doesn’t have built-in support for your CI/build system. Find an industry-standard tool or technology that meets the business needs, then create a POC (Proof of Concept) that proves your proposed solution integrates correctly and can generate the exact metrics the business needs.

Write a single test to showcase running through your system and generating metrics. Make sure the stakeholders take the time to understand the proposed output and that they know what decisions that information would impact. Get a documented agreement before moving forward and then drive everything you do to produce those metrics. If anything in the business changes, start with reevaluating the metrics and resetting expectations. Once everyone is on the same page start working backwards to update the code. Consistency and accuracy in your reports will be worth more to the business than any cool technical solution or breakthrough that you try to explain upwards.
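As one way to picture that single showcase test, here is a minimal Selenium check written for pytest; running it with `pytest --junitxml=results.xml` emits a standard report that a CI/build server can ingest and that the business metrics can be generated from. The URL and element IDs are hypothetical.

```python
# Minimal showcase test: drive the real system end to end and produce a
# machine-readable result. Run: pytest smoke_test.py --junitxml=results.xml
# The URL and element IDs below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_smoke():
    driver = webdriver.Chrome()
    try:
        driver.get("https://staging.example.com/login")  # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("qa_user")
        driver.find_element(By.ID, "password").send_keys("qa_password")
        driver.find_element(By.ID, "submit").click()
        # The assertion is the business-facing fact the report will carry.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```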

If you are in management, you might consider asking for daily automation results with explanations of all test failures. If the team cannot produce that information, have them stop building test cases until the infrastructure is done.

Key deliverables that should be produced before building out automated scripts:

  • A documented business plan/proposal that clearly lays out the SMART goal you are trying to accomplish.
    • Signed off by the most senior owners of technology in your company.
    • This should be tied to their success.
  • Clear measurements for success, e.g., reduce regression time, increase coverage for critical workflows, etc.
  • The reports and metrics you will need to support the measurement.
    • Your proposal should include a template with sample (fake) data for all reports.
  • A turnkey process that generates those reports and metrics from automation results data (see the sketch after this list).
  • A single automated test that runs a simple scenario on a real system and generates a business report.
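A hedged sketch of that turnkey reporting step: it reads standard JUnit XML (the format pytest and most CI tools emit) and produces a one-line, business-facing summary. The file name and the wording of the report are assumptions; the real template is whatever your stakeholders signed off on.

```python
# Turn raw automation results (JUnit XML) into the agreed business summary.
import xml.etree.ElementTree as ET

def summarize(junit_xml_path):
    root = ET.parse(junit_xml_path).getroot()
    # pytest wraps results in a <testsuites> root; older tools emit <testsuite>.
    suite = root.find("testsuite") if root.tag == "testsuites" else root
    total = int(suite.get("tests", 0))
    failed = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    skipped = int(suite.get("skipped", 0))
    passed = total - failed - skipped
    return (f"Regression result: {passed}/{total} passed, "
            f"{failed} failed, {skipped} skipped "
            f"(runtime {suite.get('time', '?')}s)")

# print(summarize("results.xml"))
# -> e.g. "Regression result: 41/42 passed, 1 failed, 0 skipped (runtime 318.4s)"
```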

Takeaway

The key takeaway is that business and project management skills are critical to the success of any automation initiative. Technical challenges pale in comparison to the issues you will have if you are not aligned with the Business. Don’t start writing code until you have gotten written approval and have a turnkey mechanism to produce the metrics that will satisfy your stakeholders. Remember your project will be judged by the actionable metrics it produced, not by demoing buttons being clicked.

Failure Reason #2: Automation is treated as a time-boxed project and not part of the software development process

Automation is not an 8 week project you can swarm on and then hand off to someone else to ‘maintain’. A common mistake is to take a group of developers to ‘build the automation framework’ and then hand it off to less technical QA to carry forward. Think about that model for your company’s application. Imagine hiring 6 senior consultants to build version 1 in 8 weeks and then handing the entire project off to a junior team to maintain and take forward.

Automation is a software project. It has the same needs for extensibility and maintainability as any other project. As automation is written for legacy and new features, constant work needs to be done to update the framework: new requirements for logging, reporting, or handling new UI functionality. As long as you are making changes to your application, you will be updating the automation. Also keep in mind that most automation frameworks are tied into other systems (like build, metrics, and cloud services), and everything needs to stay in sync as those evolve.

You quickly end up in a situation where junior engineers are in over their heads and either have to stop working on automation until expert resources free up, or they go in and erode the framework with patches and hacks. The end result is conflict, which lowers ROI, generates a perception of complexity and difficulty, and eventually leads to the failure of the project.

Mitigation

Again, this is an easy pitfall to avoid. Your business plan for automation should include long term resources that stay with automation through its initial lifecycle. It’s still beneficial to bring in experts during key parts of framework creation, but the owner(s) of the automation need to be the lead developers. They will build the intimate knowledge required to grow and refactor the framework as tests are automated.

Additionally, leverage industry-standard technologies. Automation is not an area where you want to be an early adopter. If your organization is building a web application, you will want to pick a framework like Selenium instead of something like m-kal/DirtyBoots. A good standard: as a manager, you should be able to search LinkedIn for the core technologies your team is proposing and find a number of people experienced in them. No matter how awesome a mid-level engineer tells you some new technology is, when they leave, the next person will insist on rewriting it.

Takeaway

If you are using standard technologies and industry best practices, you will not need an elite team of devs to build the framework for QA. The complexity of the automation project should remain fairly constant through the life of your company’s application: updates, new features, UI uplifts. The original creators of the framework should be the same ones automating the bulk of the tests. Additional, less experienced scripters can be added to increase velocity, but a consistent core group will produce the best results for the least investment.

Failure Reason #3: Automation is created by a collective without a strong owner who is responsible for standards

Making the automation framework a community project is a very expensive mistake. If your company created a new project initiative with the guideline of “Take 3 months to build a good CRM system in any language that we will use internally” and turned that over to 10 random devs to work on in their spare time, you would expect issues. Automation has the same limitations. A small dedicated team (with members that expect to carry automation forward for at least a year or two) has the time to gather requirements, understand the business needs, build up the infrastructure, and drive the project forward to success. An ad-hoc group with no accountability, especially one whose main members will not be doing the actual creation of tests, is going to struggle.

Everyone wants to work on the fun POC stage of automation, hacking together technologies to do basic testing and reporting. Most QA engineers have some experience from previous projects, and they have their own ideas about what can and can’t work. Without strong leadership, an approved roadmap, and strict quality controls, you will end up with an ad-hoc project that does a variety of cool things but can never quite be tied together to get the information you need for actionable metrics. The team always has low confidence in the tests or their ability to reproduce results reliably, and there is always a reasonable-sounding excuse why. The fun drains away as weeks turn into months, and your team finds other things to focus on while automation stagnates.

Eventually it becomes apparent how little business value was produced for all the effort, how much work remains, and that there is no clear owner to hold accountable or to plan how to maintain and move forward. The champions of the fun part have shifted their attention to the next cool project. Management will end up assigning people to sacrifice their productivity by manually generating reports, cleaning up scripts, and trying to train others to use it. Eventually everyone agrees the process sucks. Then a new idea or technology surfaces and the cycle repeats itself.

Another common mistake is assuming that adding more engineers to aid in writing automation will increase ROI. Remember that ROI is measured against the business objective, not lines of code. Unlike application development, there are few established patterns when it comes to automation. This means two equally skilled automation engineers will write vastly different automated tests for the same features. Remember that adding less experienced engineers requires your experienced engineers to stop building automation and start a training, mentoring, and code-reviewing program. For this program to be successful, every test written needs to be reviewed by 1-2 people to ensure it fits. It will take months until the additional engineers are autonomous and able to contribute without degrading the framework. Additionally, the more complex the application, the more custom systems, features, and controls it can contain, and each of these variations will need the more senior engineers to tackle them first. Even with these efforts, the business has to accept that new automation engineers will not write the best tests; it can take years to build the skills and apply the concepts correctly. This is a large factor in the constant ‘do-overs’ that automation projects suffer from.

I would assert that the ONLY business value from automation comes via the metrics and reports it produces. You could have the best automation in the world, but if it just clicks buttons and never produces an actionable report of its findings, then it has no value. Good automation will be structured in such a way as to produce a comprehensive report that shows test coverage, is easy to understand, and is accurate release to release. Imagine having a large group of sales and marketing people, all working separately to generate their own KPIs from their own data. How cohesive would their reports be? Could the business make informed decisions with KPIs from different groups at different scopes? The skill to structure and create valuable automation is not the same as being able to read the Selenium documentation and click a button on a page.

We should always be working towards an approved business objective. Is the business objective to write test cases as fast as possible, even if they can’t be maintained? Or is it to “automate the regression to free up QA for other tasks”? Shifting your engineers’ time from running manual regressions to babysitting automation does not solve anything (and actually reduces test coverage). In certain cases, slower test case development by a smaller team of experienced engineers is the way to go. As long as you build in redundancy and keep their work open for review and feedback, you will produce value much faster.

Mitigation

Building automation that can generate reliable and actionable metrics is non-trivial and requires a lot of structure, discipline and previous experience. Automation projects should always be championed by 1-2 engineers experienced with setting up automation projects. They should make a compelling case to the business on what they want to build and what value it will bring. Once the business signs off, they should be given the space to build out the initial POC framework and sample test case(s). Once a working prototype is in place, feedback is solicited and then the project moves forward. The core team should be 2-3 engineers who are equals. Once all the critical areas are automated and the framework is hardened, you can begin training up interested individuals by pairing them with an experienced member of the team.

This initial work should be done by a core team of 2-3 engineers. They will be held accountable for its success or failure. It’s critical to make this group show working tests for all the main/critical areas of the product; it’s these initial tests that expose gaps in the framework. Once a working set of automated tests has been completed and tested from kickoff to report delivery, you can discuss training a small group to start building out test cases and moving automation into other teams.

Takeaway

When looking at an automation report, you need to be able to understand, at a glance, what was tested and what wasn’t. When you have questions about failed tests, you need to be able to quickly understand what the test did and what the results were. All tests should have the same scope and voice. Imagine if Feature X has only 5 tests with 100 steps each while Feature Y has 100 tests with 5 steps each: how do you combine those data points to understand the real state of the product? As the group gets larger and larger, it’s harder to maintain a single voice. You will move much faster by allowing your core group to solve these problems before introducing less experienced engineers.

Summary

In this post I discussed the three most common reasons automation fails, ways to avoid them, and how to keep your projects focused on increasing ROI and business value.