With version 2.0, Sonar now tackles the seventh and last axis of source code quality: Design & Architecture. The objective of this post is to start discussing what this axis can be used for and why it is so important.
To know whether the design of your software is in good shape, a sense of observation and a good memory can most of the time do the trick. There is no real need to use a tool (whether UML diagrams, Sonar…) or even to look at the source code. If, month after month, your software is able to evolve as quickly as the business requires and can absorb changes at a constant cost over time, then you can confidently conclude that the design of your application is in good shape (and believe me, that is fairly unusual in the software development market!). If not, you should focus some attention on design, as it is not going to get better over time and will become costly in the medium to long term.
To handle upcoming changes fearlessly, it is key that the software design has great modularity, that is to say, that you can replace part of the system with a new piece of code with little pain. True modularity can only be reached in a programming environment that has two main capabilities (two dimensions): the ability to assemble pieces of software and the ability to recursively split a piece of software. These capabilities are necessary but not sufficient.
When used out-of-the-box, Sonar is a code quality radiator accessible by everyone at any time. As with JIRA, Hudson, a post-it dashboard or any other piece of the development toolset, transparency is a key success factor for adoption. So, by default in Sonar, anyone can access any project under continuous inspection and navigate through it.
But of course, there are situations where securing Sonar is necessary. Let’s imagine for two minutes a consulting company that does development for customers and wishes to allow those customers to follow their own projects in Sonar. Since the company has many customers, groups of projects must be isolated so that each customer only has access to their own projects. Prior to Sonar 1.12, this was only possible by running one instance of Sonar per customer.
Since Sonar 1.12, services are available in the web interface to handle this and to cover the following use cases:
Secure a Sonar instance by forcing login prior to access to any page
Make a given project inaccessible to anonymous users
Allow access to source code (Code Viewer) to a given set of people
Restrict access to a project to a given group of people
Define who can administer a project (setting exclusion patterns, tuning plugin configuration for that project, …)
Define who can administer a Sonar instance
All those use cases can be implemented through the Sonar web interface and take effect immediately. The way security is handled in Sonar is pretty classic, as the security policy is based on three concepts: user, group and role (global or per project). Let’s take the example of the “Project roles” page available at project level:
Three roles are available at project level: Administrator, User and Code Viewer. Users and/or groups of users can be associated with each of those roles to grant the required permissions.
Users and groups can first be created through the “Users” and “Groups” services available in the administration section. Here is a screenshot of the “Groups” service:
That was authorization; let’s now talk about authentication. By default, user authentication is done against the Sonar database (user table), but an external authentication engine can also be used: OpenLDAP, Microsoft Active Directory, Apache DS, Atlassian Crowd… Three identity plugins already exist: two open source ones (the LDAP plugin and the Crowd plugin) and a commercial one (the Identity plugin). They all use the public Sonar authentication extension point.
To conclude, since Sonar 1.12 it is possible to easily implement a robust enterprise security policy. This new functionality has no impact whatsoever on Sonar users who do not want to activate security and prefer to keep full transparency.
A change of year always gives teams an opportunity to look back and measure what was accomplished… and then to start thinking about what the new year should be made of. I thought I’d share the output of the Sonar team retrospective.
At the end of 2008, very few people knew Sonar. The platform was backed by a small community of early and eager adopters who supported the product strongly by giving feedback, asking for more functionality, making suggestions and testing new versions. The platform itself was at version 1.5 which, looking back, was the foundation release. Starting from that version, here is what was achieved in a year:
Dynamic development activity on Sonar core, with 7 major releases since 1.5.
The transformation of Sonar from a tool to an extensible platform with more than 20 extension points.
More than 30 open source plugins have been built to extend Sonar core using those APIs, plus more that are not open source.
The number of monthly downloads was multiplied by 10 during the year, from 300 to 3,000.
Sonar has been given a heart called Squid that makes Sonar much more than an integration tool. Several metrics that do not exist elsewhere are calculated by Squid.
More than 4,000 emails exchanged on the mailing lists and 1,000 Jira issues created.
So after all this, what could be an exciting challenge for 2010? We have set ourselves two very ambitious objectives for 2010, which should keep the Sonar community growing:
Design analysis: we like to say that there are seven technical axes of code quality analysis (we call them the seven sins of the developer). Sonar currently covers six of them, and the last one is for us, along with unit tests, the most important: Design & Architecture. Sonar 2.0, planned for February, will start covering this 7th axis with object-oriented metrics like LCOM4, RFC, DIT…, cycle detection and DSM at package and class levels. All this information will of course be provided by Squid. Moreover, an architecture rule engine should appear quickly after Sonar 2.0.
Multi-language support: last but not least, give a real go at other languages. By the end of the year, we expect plugins to be available to properly cover Java, PL/SQL, Flex, C/C++, Cobol, PHP and maybe more :-)
That is part of the program for 2010. I now have to leave you to start working on all this, as I think I will not have much spare time this year!
There have been numerous debates around commented-out lines of code (lines or blocks of code that were commented out at some point) and whether they should be left in the code or taken out. The outcome of those debates is almost systematically that they should be taken out sooner rather than later: in the Sonar team, we generally consider that “later” means right after code check-in.
Here are the main reasons why old commented-out code is an abomination:
It always raises more questions than it answers
Everybody quickly forgets how relevant the commented-out code still is
It is a distraction when reading the code, as it breaks the flow of the eyes
It is a bad SCM engine: Subversion, CVS and Git are far more trustworthy!
Simply understanding why the code was commented out in the first place can take a lot of time
When dealing with source code quality information within a company, Sonar is the perfect reporting tool, as it is accessible to everybody and centralizes the information through its web server. However, in some cases, this information must be delivered to third-party organizations. This situation is common in an enterprise environment, for instance on quality audit projects or outsourced projects. In both cases, a quality measurement deliverable is required. This is the aim of the Sonar PDF plugin.
As described in a previous article, a Sonar plugin is usually a simple jar that must be copied to the /extensions/plugins directory of the Sonar web server in order to be loaded at the next start. That is not the case for Sonar PDF, as it is a Maven plugin. The plugin uses Sonar extension points, especially the web services API, so all the data used by this plugin is retrieved through that API.
The plugin, version 0.2, is currently able to report on:
The technical debt is a well-known concept that was coined by Ward Cunningham in 1992 and that he has recently talked about in this video. Since then, it has been discussed and developed numerous times in blogs and articles. I am not going to describe it in great detail here; I rather recommend that you read what is considered the reference article on the subject, by Martin Fowler. Here is an extract of this article that gives a synthetic view of the metaphor:
In this metaphor, doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design. Although it costs to pay down the principal, we gain by reduced interest payments in the future.
This metaphor seems to be accepted by many developers already, and every day someone tweets about the urgent need to pay back their technical debt. But beyond the concept, when the time comes to evaluate the amount to be repaid, there is simply no literature on how to calculate the debt, or at least approach it. It’s like borrowing money to buy a house but, two years later, having no way to know what the remaining debt is and how much interest is being paid each month :-).
As stated by Martin Fowler, developers are good and sometimes make a deliberate choice to borrow in order to buy time. That’s true when starting a new development, as you know exactly the amount of technical debt… that is to say 0. But when extending or maintaining a legacy application, that’s another story, as nobody knows exactly how bad it is. Furthermore, you might not even be aware that you are borrowing money, for instance when a developer simply does not follow best practices. That is why evaluating the technical debt, even roughly, is very useful.
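The financial analogy can be made concrete with a toy calculation. The numbers and function below are purely illustrative assumptions, not part of any Sonar plugin: a shortcut adds a recurring overhead (the interest) to every future change until a refactoring repays the principal.

```python
def total_effort(changes, base_cost, interest_per_change,
                 refactor_after=None, refactor_cost=0.0):
    """Cumulative effort of `changes` modifications to a code base.

    While the debt is outstanding, each change costs `base_cost` plus
    `interest_per_change`. Optionally, after `refactor_after` changes,
    pay `refactor_cost` once (the principal) and stop paying interest.
    """
    effort = 0.0
    for i in range(changes):
        debt_outstanding = refactor_after is None or i < refactor_after
        effort += base_cost + (interest_per_change if debt_outstanding else 0.0)
        if refactor_after is not None and i == refactor_after - 1:
            effort += refactor_cost  # paying down the principal
    return effort

# Keep paying interest on 20 changes...
keep_paying = total_effort(20, base_cost=1.0, interest_per_change=0.5)
# ...or refactor after 4 changes (costing 3 effort units) and stop paying.
refactor = total_effort(20, base_cost=1.0, interest_per_change=0.5,
                        refactor_after=4, refactor_cost=3.0)
print(keep_paying, refactor)  # 30.0 25.0
```

With these made-up numbers, paying down the principal early wins over the life of the project, which is exactly the trade-off the metaphor describes.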
Before introducing this Sonar plugin, here are a few funny and relevant quotes on the concept:
Maintaining an application without any unit tests is like borrowing money each time you add or change a line of code
Skipping the design phase is like borrowing money to get a very “quick” and “predictable” return on investment
Refactoring is like paying down the principal
Development productivity decreases as interest grows
Managers don’t care about code quality? Just ask them to pay the debt in order to get their attention
Bankruptcy is the logical extension of uncontrolled technical debt… we call it a system rewrite
When discussing source code quality, I like to say that there are seven deadly sins, each one representing a major axis of quality analysis: bad distribution of complexity, duplications, lack of comments, coding rule violations, potential bugs, no unit tests or useless ones, and bad design. As you already know, Sonar currently covers six of them, but the seventh one (bad design) should probably start shaking :-), as it is only a matter of time before it gets covered as well.
From this observation, we decided to build new metrics that reflect how much effort is required to get a perfect score on each axis: in other words, what the cost is of reimbursing each of the debts in the project. By combining the results, we obtain a global indicator that we report in $$ to keep it fun! Along with this indicator comes the breakdown per axis, i.e. how much each axis contributed to the technical debt.
The current version of the plugin is 0.2 and uses the following formula to calculate the debt:
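The exact formula is given above; as a hedged sketch of the general idea, the computation can be pictured like this. The axis names, cost weights and daily rate below are illustrative assumptions, not the plugin's actual values (the post notes that the real cost values are configurable):

```python
# Sketch: each quality axis gets a remediation cost in hours of work,
# the sum is converted to man-days and then to dollars.
# All weights and the daily rate here are made-up illustrative values.
HOURS_TO_FIX = {
    "duplicated_block": 2.0,       # merge one duplicated block
    "violation": 0.1,              # fix one coding-rule violation
    "undocumented_api": 0.2,       # comment one public method or class
    "uncovered_complexity": 0.05,  # cover one untested execution path
}

HOURS_PER_DAY = 8.0
DAILY_RATE = 500.0  # $$ per man-day, illustrative

def technical_debt(metrics):
    """Sum per-axis remediation costs and convert to a $$ amount."""
    hours = sum(HOURS_TO_FIX[axis] * count for axis, count in metrics.items())
    return hours / HOURS_PER_DAY * DAILY_RATE

project = {"duplicated_block": 40, "violation": 1200,
           "undocumented_api": 300, "uncovered_complexity": 2000}
print(f"${technical_debt(project):,.0f}")  # prints $22,500
```

The point of the sketch is that the debt is additive per axis, which is what makes the per-axis breakdown mentioned above possible.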
Beyond the calculation, which is a broad approximation of reality, the technical debt measure is precious because:
it is a consolidated metric on projects, modules…
it can be followed in the TimeMachine (historical data, trends)
it makes it possible to compare projects
it is possible to drill down on it, even down to… the class
As this is a first version, you have probably noticed that we made some choices; however, most of the cost values can be adjusted in the plugin configuration.
The plugin has been installed on Nemo, the public instance of Sonar, which now calculates the debt of more than 80 open source projects. The plugin relies only on the available Sonar extension points and is a good example of advanced metrics that can be computed with Sonar.
I am going to stop here on the technical debt for today, but I would like to simply mention what we plan to add next: interest, debt ratio and project risk profile. I’ll now let you go back to Sonar to install this new plugin, as I am sure you want to know the technical debt of your project…
In the last couple of weeks, we’ve started making short videos on Sonar, each one showing a dedicated feature in 1 or 2 minutes. Those videos are a good starting point for people wanting a rapid but comprehensive view of the Sonar platform. Our animation studios are still pretty young, but here are the first three films: chasing duplications, installing Sonar and TimeMachine. Your feedback to improve the next productions is highly welcome.
In order to make the videos, we tested several tools and decided to buy Screenflow. Of all the tools we tested, Screenflow is really the best: easy to use, fast, with just the effects we were looking for… and all that for $99. Its only weakness: it runs only on Mac. But is that really a weakness ;-)
Amongst Sonar’s built-in strengths, we have mentioned extensibility several times without giving many details. The time has come to discuss it further, as anyone can now easily contribute to the Sonar plugins ecosystem.
We know that the extensibility of a tool is a key aspect of getting it widely adopted. That is why we built Sonar around a very light core that consists mainly of an extension mechanism; everything else in Sonar is a plugin. However, having such a mechanism in place is only one of four steps required to reach extensibility and leverage this capability:
An easy to use API
An active community
A “Getting started” documentation with examples
We believe that today we have a sufficient base on all four points.
For many, it is getting very tempting to switch to Sonar to centralize the quality management of source code and take advantage of its numerous features such as TimeMachine, classes clouds, consolidated dashboards, drill-downs… In Sonar 1.7, we added a very useful feature that we have not discussed much so far: the possibility to re-use reports generated by external quality systems in order to smoothly evaluate Sonar without having to break the legacy quality platform. Today, we’re going to discuss two use cases where this feature can be leveraged.
1. Switching from Maven Site to Sonar
This is a very common situation: you are already managing the quality of your source code through the Maven site, generating sites on 250 projects, for instance, with every quality report activated. Your team uses the Maven site extensively, and switching to Sonar in a big-bang approach is simply not possible.
You have read the post “Maven Site, Sonar or both of them?” on the Sonar blog, but you don’t feel comfortable suddenly asking everybody to switch to Sonar. You realize that you need to run both in parallel for some time. But given that it already takes a long time to generate the sites, doubling this time by also running the analysis in Sonar is not an option.
That is where the “reuseReports” functionality of Sonar 1.7 comes into play: a staged approach is now possible! The principle is fairly simple: it consists of telling Sonar to reuse the reports that have already been generated by the Maven site, namely the ones that are the most hungry in CPU and memory: unit test execution and/or code coverage calculation. This can be achieved by simply adding “-Dsonar.dynamic=reuseReports” to the Sonar Maven command line.
It is then possible to keep both systems running in parallel for some time, at a slightly higher cost, until you decide to make a complete switch to Sonar. Once you have switched off the quality reporting in the Maven site, you can even reference Sonar from the Maven site by using the Sonar Maven report plugin.
2. Using Sonar in its full capability in an ANT environment
If you are using ANT to build your applications, the main weakness of Sonar so far was that it could not display unit test results or code coverage. Now that you have read the first use case, I am sure you know that by using the “-Dsonar.dynamic=reuseReports” parameter, this limitation no longer exists. You simply need to specify where the reports to reuse can be found, using the following properties: sonar.cobertura.reportPath, sonar.clover.reportPath, sonar.surefire.reportsPath…
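To picture how these pieces fit together, here is a small sketch that assembles such a command line. The property names come from the post; the helper function and the report paths are hypothetical examples for an Ant-built project, not paths Sonar mandates:

```python
def sonar_reuse_reports_command(report_paths):
    """Build a Maven command line that tells Sonar to reuse existing reports.

    `report_paths` maps Sonar report-path properties (from the post) to the
    locations where an Ant build dropped its reports (example paths only).
    """
    props = {"sonar.dynamic": "reuseReports", **report_paths}
    flags = " ".join(f"-D{key}={value}" for key, value in sorted(props.items()))
    return f"mvn sonar:sonar {flags}"

cmd = sonar_reuse_reports_command({
    "sonar.surefire.reportsPath": "build/reports/junit",
    "sonar.cobertura.reportPath": "build/reports/cobertura/coverage.xml",
})
print(cmd)
```

The resulting single command line carries both the reuseReports switch and the report locations, which is all the configuration this use case needs.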
With this new functionality, Sonar gives the same level of quality information on ANT projects as on Maven projects.