Managing cyclomatic complexity to increase maintainability

In a previous post on Cyclomatic Complexity (CC), I discussed two ideas:

  • Total CC only tells you that a lot of logic has been implemented; it is not a qualitative measure.
  • By contrast, average CC per method and per class are examples of qualitative measures that can be derived from the CC metric. For instance, a high average CC per class can indicate poor cohesion, i.e. that a class is performing several unrelated tasks.

and I concluded that in terms of quality, what matters is not the total cyclomatic complexity of a program but a moderate and well-distributed level of CC across its components (packages, classes, methods), i.e. breaking the problem down into manageable parts. When those components are not small enough, maintainability decreases.
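
As a quick refresher before we start, CC counts one path through a method plus one for each decision point. The class below is a hypothetical example (not taken from any real project), annotated with how the count adds up:

```java
// Hypothetical example: cyclomatic complexity is the number of decision
// points (if, while, for, case, &&, ||, ?:) plus one for the method entry.
public class Discount {

    // CC = 1 (entry) + 1 (first if) + 1 (&&) + 1 (second if) = 4
    public static int rate(int amount, boolean loyalCustomer) {
        if (amount > 1000 && loyalCustomer) { // +1 for the if, +1 for the &&
            return 15;
        }
        if (amount > 500) {                   // +1
            return 10;
        }
        return 0;
    }
}
```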

Today, I am going to explain how to use Sonar to identify programs or components that are risky in terms of maintainability, so that they can be fixed. In other words, how to detect whether an application is more of:

  • a monolithic kind of animal (and therefore hard to evolve and prone to side effects when modified)
  • a fairly modular project, following good Object Oriented design (at least in terms of cohesion)

There are three complementary approaches to do so:

1. Detecting overly complex classes or methods

  • The coverage clouds (Quick wins tab) are the tool you need to find the biggest classes:

    Every big class in there should be looked at… you can start using your favorite IDE to break those problems down into manageable parts with well-known refactoring techniques like “Extract Class”, “Extract Subclass”, “Extract Superclass”…

  • Hunting complex methods can be done more systematically by using the Checkstyle rule “Cyclomatic Complexity”, whose parameter sets the maximum CC allowed per method. Then go to the Violations drilldown, category Maintainability, click on Cyclomatic Complexity, and you will find all the modules / packages / classes containing methods that do not respect the rule.
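
To sketch what fixing such a violation can look like (the class and the CC figures below are made up for illustration), “Extract Method” pushes each decision into a small, named helper so that no single method accumulates all the branching:

```java
// Hypothetical before/after sketch of an "Extract Method" refactoring:
// before, cost() carried all the branching itself; after, each helper
// stays at CC = 2 and the top-level method reads like a summary.
public class ShippingCost {

    public static double cost(double weightKg, boolean express) { // CC = 1
        return baseRate(weightKg) + expressSurcharge(express);
    }

    private static double baseRate(double weightKg) {             // CC = 2
        return weightKg <= 1.0 ? 5.0 : 5.0 + (weightKg - 1.0) * 2.0;
    }

    private static double expressSurcharge(boolean express) {     // CC = 2
        return express ? 10.0 : 0.0;
    }
}
```

The total CC of the class barely changes; what improves is its distribution, which is exactly the point made in the introduction.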

2. Monitoring the CC trend over time

Now that you have tidied up your application, you need to monitor those metrics so that they do not slowly slip as development carries on. That is a common tendency when new code is simply added to existing code without keeping refactoring principles in mind. To do so, use the Time Machine:

When you see the CC per method or per class increasing (not the case here), you know it is time to go back to detecting complex classes and methods. You can also take a more proactive approach and buy a copy of Martin Fowler’s “Refactoring” for each member of your team :-).

3. Comparing projects

Let’s now have a look at how to monitor a portfolio of projects and compare their levels of quality (be careful to compare apples with apples in terms of application type).

You can do that by going to the Sonar home page and using the treemap. Since Sonar 1.5, the treemap lets you choose among many combinations for the size and color of applications. One combination that is very useful in our case is the following two qualitative metrics:

  • the average CC per method as the size of the square
  • the code coverage as the color of the square

This not only lets you compare maintainability by looking at the size of the squares, but also shows the quality of the code coverage through the color of each project. In other words, knowing whether it is going to be easy to change the application (average CC per method) matters, but it is not sufficient to evaluate risk: you also want to know the probability of side effects when making a change (code coverage).

What you are hunting here are the big squares with a reddish color!

Be careful, though, to have a close look before drawing any conclusion from a low average CC per method: it could be the result of numerous bean-type classes. Make sure you have gone through the complex classes and methods detection described above, otherwise some big methods could be hiding in there.
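
To see why, here is a hypothetical class profile: one complex method surrounded by trivial accessors drags the average down to a harmless-looking figure.

```java
// Hypothetical illustration of the bean caveat: a class with one CC = 12
// method and nine CC = 1 getters/setters averages out to 2.1, well under
// a typical alert threshold, even though the complex method is still there.
public class AverageCc {

    // Plain arithmetic mean of per-method CC values.
    public static double average(int[] ccPerMethod) {
        int total = 0;
        for (int cc : ccPerMethod) {
            total += cc;
        }
        return (double) total / ccPerMethod.length;
    }
}
```

For example, `AverageCc.average(new int[] {12, 1, 1, 1, 1, 1, 1, 1, 1, 1})` returns 2.1.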

That is it! To go further, I think there are two subjects that deserve to be explored:

  • Create an indicator that mixes CC and unit tests (which we discussed in a previous post). You can take a look at Crap4j if you are interested in this subject.
  • Look at the quality of the CC-per-method distribution, to go beyond the simple average CC per method.
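
As a teaser for the first point, Crap4j combines a method’s complexity with its test coverage; the sketch below simply transcribes its published formula (coverage taken as a fraction in [0, 1]):

```java
// Crap4j's CRAP score: comp(m)^2 * (1 - cov(m))^3 + comp(m), where cov
// is the method's test coverage as a fraction in [0, 1]. High complexity
// is only forgiven when coverage is high.
public class CrapScore {

    public static double crap(int complexity, double coverage) {
        return (double) complexity * complexity
                * Math.pow(1.0 - coverage, 3)
                + complexity;
    }
}
```

A fully covered method scores just its complexity (`crap(5, 1.0)` is 5.0), while the same method with no coverage at all scores 30.0.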

I might discuss those topics later on.

© 2008-2016, SonarSource S.A, Switzerland. All content is copyright protected. SONARQUBE, SONARLINT and SONARSOURCE are
trademarks of SonarSource SA. All other trademarks and copyrights are the property of their respective owners. All rights are expressly reserved.