A code review usually begins with a static analysis of the source code. Various key figures are evaluated on the basis of software metrics. Using SonarQube as an example, these include:
- Coding rules: violations of common programming rules in the source code (bugs, vulnerabilities and code smells).
- Code coverage: test coverage of the source code in percent.
- Test quality: number of test cases run, skipped and failed.
- Technical debt: the estimated effort, in person-days, to fix the code smells.
- Duplications: number of duplicated source code lines.
- Comments: share of comment and documentation lines in the total number of lines.
- Complexity: evaluation of the complexity of classes and methods using common measures.
The SonarQube dashboard provides a quick overview of the current status of the project. In addition to an analysis of the entire source code, the newly added code is always analyzed separately as well.
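For context, a minimal `sonar-project.properties` configuration for such an analysis might look like the following sketch (project key, name and paths are placeholders to be adjusted per project):

```properties
# Placeholders: adjust key, name and paths to your project
sonar.projectKey=my-project
sonar.projectName=My Project
sonar.sources=src
sonar.tests=test
sonar.sourceEncoding=UTF-8
```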
This article takes a closer look at technical debt and complexity.
Technical debt refers to the estimated cost of fixing the violations (code smells) detected by the analysis tool. The code smells are sorted by severity into:
- Blocker issues
- Critical issues
- Major issues
- Minor issues
- Info issues
Violations of the severity levels Blocker and Critical must always be fixed, as they can impair the operation of the application. Technical debt typically includes, for example, the effort to remove code duplicates or to simplify complex methods. Effort to raise the comment and documentation rate or to achieve higher test coverage is not part of the technical debt.
In addition to the absolute technical debt, there is also the technical debt ratio: the technical debt relative to the estimated effort of developing the code base from scratch. Based on this ratio, SonarQube assigns a rating from A to E, the so-called maintainability rating, which measures how maintainable the application is. For a final assessment, however, all key figures must always be considered and interpreted together. Another important key figure is complexity.
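The debt ratio and rating can be sketched as a small calculation. This is an illustrative sketch, assuming SonarQube-like defaults: an estimated development cost of 30 minutes per line of code and rating thresholds of 5%, 10%, 20% and 50% for A, B, C and D (both values are configurable in a real installation):

```python
def maintainability_rating(debt_minutes: float, lines_of_code: int,
                           minutes_per_line: float = 30.0) -> tuple[str, float]:
    """Derive a maintainability rating from the technical debt ratio.

    Assumption: 30 minutes per line as the estimated development cost
    and the 5/10/20/50 percent rating grid, mirroring common defaults.
    """
    development_cost = lines_of_code * minutes_per_line
    ratio = debt_minutes / development_cost
    for grade, threshold in (("A", 0.05), ("B", 0.10), ("C", 0.20), ("D", 0.50)):
        if ratio <= threshold:
            return grade, ratio
    return "E", ratio

# A project of 10,000 lines with 5 person-days (2,400 minutes) of debt
# has a debt ratio of 0.8% and therefore rates an A.
grade, ratio = maintainability_rating(2400, 10_000)
```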
In terms of complexity, a distinction must be made between cyclomatic and cognitive complexity. Cyclomatic complexity was introduced by Thomas J. McCabe in 1976. It comes from graph theory and describes the number of linearly independent paths through a class or method. The metric thus indicates the maximum number of test cases needed to achieve 100% branch coverage. However, cyclomatic complexity says nothing about the comprehensibility of a class. A method consisting of a single extensive switch statement, for example, is often understood by a developer at first glance, yet has a high cyclomatic complexity.
With cyclomatic complexity, such a method would receive a value of 8. In literature and practice, a value of 10 is generally regarded as the upper limit for cyclomatic complexity. Relying on this notion of complexity alone is therefore controversial. For this reason, cognitive complexity is measured in addition. Cognitive complexity is a measure of the comprehensibility of a class.
Here, too, a maximum value of 10 is assumed. The example method receives a cognitive complexity of only 1: the switch interrupts the linear control flow just once, no matter how many cases it contains, and there is no further nesting. The method is therefore considered very easy to understand.
Cognitive complexity is essentially determined by three rules:
- Structures that allow several statements to be combined readably into one block are ignored.
- Each break in the linear flow of the code increases the cognitive complexity by 1.
- Each break inside a nested structure increases the cognitive complexity by an additional 1 per nesting level.
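The rules above can be traced in a short sketch (a hypothetical method, annotated with the increments it would receive):

```python
def count_even(matrix: list[list[int]]) -> int:
    # Cognitive-complexity increments per rule:
    total = 0
    for row in matrix:          # +1 (break in flow)
        for value in row:       # +1 (break in flow) +1 (nesting level 1)
            if value % 2 == 0:  # +1 (break in flow) +2 (nesting level 2)
                total += 1
    return total                # cognitive complexity: 6
```

For comparison, the cyclomatic complexity of this method is only 4 (three decision points plus 1); the deep nesting is what the cognitive metric penalizes.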
For easily maintainable code, the limits of both cyclomatic and cognitive complexity should be observed. This results in compact but understandable code and makes it easier for developers to find their way into the code base.
Technical debt and complexity are very important key figures for a first assessment of an existing code base. Technical debt quickly shows whether a relatively large number of violations of common programming rules were found. High technical debt is often caused by sloppy work, which in everyday project work can have various causes, such as time pressure or shifting priorities. The complexity metrics are a good measure of the comprehensibility of the source code and can therefore also serve as a further indication of how an application is designed.