
In the beginning, there is a simple code base written by a few developers. Its deficiencies are easily kept in the heads of the developers creating it: they most likely know what needs to be fixed and where trouble can be found. Then the code grows, more developers are hired, features are added, and the code base evolves. Suddenly, its authors no longer retain a mental map of the code and its faults, and the code base becomes a mysterious source of bugs and performance problems, and exhibits remarkable resistance to change. This is legacy code.

Your code base presents challenges: technical debt accumulates, new features demand that existing code evolve, performance issues surface, and bugs are discovered. How do you meet these challenges? What proactive steps can you take to make your legacy code more adaptable, performant, testable, and bug-free? Code forensics can help you focus your attention on the areas of your code base that need it most.

Adam Tornhill introduced the idea of code forensics in his book Your Code as a Crime Scene (The Pragmatic Programmers, 2015). I highly recommend his book and have applied his ideas and tools to improve the Jama code base. His thesis is that criminal investigators and programmers ask many of the same open-ended questions while examining evidence. By questioning and analyzing our code base, we will not only identify offenders (bad code we need to improve), but also discover ways in which the development process can be improved, in effect eliminating repeat offenders.

For this blog post, I focus on one forensic tool that will help your team find the likely crime scenes in your legacy code. Bugs and tech debt can exist anywhere, but the true hot spots are found wherever three kinds of evidence coincide:
• Complexity
• Low or no test coverage
• High rate of change


Complexity

The complexity of a class or method can be measured in several ways, but research shows that simply counting lines of code is good enough: it predicts complexity about as well as more formal metrics (Making Software: What Really Works, chapter 8, “Beyond Lines of Code: Do We Need More Complexity Metrics?” by Israel Herraiz and Ahmed E. Hassan; O’Reilly Media, 2010).
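Counting lines is also trivial to automate. As a minimal sketch (my own illustration, not part of Tornhill's tooling), the helper below ranks source files under a directory by non-blank line count:

```python
from pathlib import Path


def loc(source: str) -> int:
    """Count non-blank lines -- a rough but well-supported complexity proxy."""
    return sum(1 for line in source.splitlines() if line.strip())


def rank_by_loc(root: str, pattern: str = "*.java"):
    """Return (path, loc) pairs for every matching file, largest first."""
    counts = [(str(p), loc(p.read_text(errors="ignore")))
              for p in Path(root).rglob(pattern)]
    return sorted(counts, key=lambda pair: pair[1], reverse=True)
```

Run against your own tree (and file pattern), the head of the list is your first set of suspects.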

Another quick measure of complexity is indentation. Which of these blocks of code looks more complex?

[Figure: two code samples compared side by side]

The sample on the left has deep indentation representing branching and loops. The sample on the right has several short methods with little indentation, and is easier to understand and to modify. When looking for complexity, look for long classes and methods and deep levels of indentation. It’s simple, but it’s a proven marker of complexity.

Test Coverage

Fast-running unit tests covering every line of code you write are a requirement for the successful continuous delivery of high-quality software. It is important to have a rigorous testing discipline such as Test-Driven Development; otherwise, testing might be left as a task to be done after the code is written, or not done at all.

The industry average bug rate is 15 to 50 bugs per 1,000 lines of code. Tests do not eliminate all the bugs in your code, but they do help you find the majority of them. Your untested legacy code has a high potential bug rate, and it is in your best interest to write tests and find these bugs before your users do.

High rate of change

A section of code that is under frequent change is signaling something. It may have a high defect rate requiring frequent bug fixes. It may be highly coupled to all parts of your system and has to change whenever anything in the system changes. Or, it may be just the piece of your app that is the focus of new development. Whatever the source of the high rate of change, evidence of a specific section of code getting modified a lot should draw your investigative attention.

Gathering evidence

How do you find which parts of your system are complex, untested, and undergoing lots of change? You need tools: a build system integrated with a code quality analyzer, and a source code repository with an API that allows for scripted analysis of code commits. At Jama, we have had great success using TeamCity as our continuous integration server coupled with SonarQube as our code quality analyzer. Our source code repository is Git.

Here is an example analysis of complexity and test coverage produced by SonarQube. Each bubble represents a class, and the size of the bubble represents the number of untested lines of code in that class: the larger the bubble, the more untested lines it has.

[Figure: SonarQube bubble chart plotting classes by complexity and untested lines]

In this example, there are several giant bubbles of tech debt and defects floating high on the complexity scale.

Both TeamCity and SonarQube report test coverage per class, so with every build you not only know which code is least tested, but also the overall trend for coverage.

Using these tools, you now know where your complex and untested code lives, but you still need to know which parts of the suspect code are undergoing churn. This is where forensic analysis of your source code repository comes in.

Code repositories like Git produce detailed logs, which can be analyzed by scripts. Adam Tornhill provides a command-line tool (Code Maat) to accompany his book, available on his website; it performs complexity analysis as well as change analysis.
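The core of change analysis is just counting how often each file appears in the log. As an illustrative sketch (not Tornhill's tool), the function below tallies per-file commit frequency from the output of `git log --name-only --pretty=format:__commit__`, where each commit is marked by a `__commit__` line followed by the paths it touched:

```python
from collections import Counter


def change_frequencies(log_text: str, marker: str = "__commit__") -> Counter:
    """Count how many commits touched each file in a name-only git log dump.
    Lines equal to the marker separate commits; other non-blank lines are
    file paths."""
    freq = Counter()
    for line in log_text.splitlines():
        line = line.strip()
        if line and line != marker:
            freq[line] += 1
    return freq
```

`freq.most_common(20)` then gives you the twenty most frequently changed files, ready to cross-reference against your complexity and coverage data.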

When looking at the results of your change analysis, search not only for what is changing the most, but also for what code tends to change together. Classes and modules that frequently appear together in commits are evidence of a high degree of coupling. Coupling is bad.
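Detecting this temporal coupling is a matter of counting file pairs that appear in the same commit. A minimal sketch (my own, assuming commits have already been parsed into lists of changed paths):

```python
from collections import Counter
from itertools import combinations


def co_change_counts(commits):
    """Given commits as collections of file paths changed together, count
    how often each pair of files appears in the same commit. High counts
    suggest hidden coupling between the two files."""
    pairs = Counter()
    for files in commits:
        for a, b in combinations(sorted(set(files)), 2):
            pairs[(a, b)] += 1
    return pairs
```

Pairs with a high count that have no obvious structural relationship are the interesting ones: they change together, but nothing in the design says they should.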

What other forensic tools does your code repository offer? You can analyze commit messages and produce word clouds to see which terms dominate change descriptions. You would prefer to see terms like “added,” “refactored,” “cleaned,” and “removed” rather than red-flag terms like “fixed,” “bug,” and “broken.” And of course, commit messages dominated by swearing indicate real problems.
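The word counts behind such a cloud are simple to produce. As a sketch (the red-flag list here is just the examples above, not a standard vocabulary):

```python
import re
from collections import Counter

RED_FLAGS = {"fixed", "bug", "broken"}


def commit_term_counts(messages):
    """Count lowercase word frequencies across commit messages (word-cloud
    input) and total the hits on red-flag terms."""
    terms = Counter()
    for msg in messages:
        terms.update(re.findall(r"[a-z]+", msg.lower()))
    red = sum(terms[t] for t in RED_FLAGS)
    return terms, red
```

Feed it `git log --pretty=format:%s` output split into lines; a rising red-flag total over time is itself a trend worth tracking.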

Another useful data point is which parts of your code base are dominated by which developers. If you have classes or modules that are largely written and maintained by one or two devs, you have a potential bus-factor problem and need to spread knowledge of that code to the wider team.
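One way to quantify this (again a sketch of my own, assuming the log has been parsed into per-commit (path, author) pairs, e.g. from `git log --pretty=format:%an --name-only`) is to compute each file's top author and that author's share of its commits:

```python
from collections import Counter


def knowledge_owners(file_authors):
    """file_authors: iterable of (path, author) pairs, one per commit that
    touched the file. Returns {path: (top_author, share_of_commits)};
    shares near 1.0 flag potential bus-factor risks."""
    per_file = {}
    for path, author in file_authors:
        per_file.setdefault(path, Counter())[author] += 1
    result = {}
    for path, authors in per_file.items():
        top_author, top_count = authors.most_common(1)[0]
        result[path] = (top_author, top_count / sum(authors.values()))
    return result
```

Files where one developer owns nearly all the history are the first candidates for pairing or review rotation.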

Pulling it all together

After the above analysis is complete, you have an ordered list of the most untested and most complex code undergoing the highest rate of change. The offenders that appear at the top of the list are the prime candidates for refactoring.
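One simple way to build that ordered list (my own weighting, not a formula from the book) is to multiply the three signals together, so a file must be big and volatile and poorly covered to reach the top:

```python
def hotspot_score(loc: int, churn: int, coverage: float) -> float:
    """Combine size (a complexity proxy), commit count, and test coverage
    (0.0-1.0) into one ranking score. The weighting is a judgment call;
    multiplying keeps well-covered or rarely-touched files off the top."""
    return loc * churn * (1.0 - coverage)


def rank_hotspots(metrics):
    """metrics: {path: (loc, churn, coverage)} -> paths, riskiest first."""
    return sorted(metrics,
                  key=lambda path: hotspot_score(*metrics[path]),
                  reverse=True)
```

Plug in the per-file numbers gathered above and refactor from the top of the ranking down.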

All software systems evolve and change over time, and despite our best efforts tech debt sneaks in, bugs are created, and complexity increases. Using forensic tools to identify your complex, untested, and frequently changing components lets you focus on the areas at highest risk of failure and, as a bonus, can help you study the way your teams work together.