Thursday, April 26, 2012

Software Review


Since I'm involved in the software review & guideline team at my work, I've spent some time studying the review process, which I want to share with you in this blog.

The benefits of software review:
• Increases software quality and reduces bugs.
• Creates opportunities to learn (for both the code authors and the reviewers) and serves as a means of knowledge transfer to junior developers.
• Fosters communication between developers.
• Various studies have shown that the review process saves costs (e.g. $21 million reported by HP). It's cheaper to fix bugs in the earlier phases (design, development) than in the later phases (QA/test phase, shipped product).
• It's part of best practices/standards, e.g. PSP, CMMI level 3.
• Motivates developers to improve their code quality in order to avoid "bad scores" during review. This ego effect still works even when a random review covers only 30% of the total code.

The disadvantages of review:
• Costs time. Solution: limit the time (e.g. max 1-2 hours).
• Developers have to wait for the reviewers, which might create delays in the pipeline. Solution: the project manager has to include the software review process in the plan, including the time & resources (the reviewers for the review, the developers for the rework).
• The code author feels hurt when someone else points out their mistakes. Solutions: be sensitive/friendly when discussing the findings, make sure both the reviewers & authors agree on the positive benefits, have the reviewers also give positive feedback to the authors, and focus on the code, not the author.
• The developers think that they have better things to do. Solution: support from management (e.g. enforce the review process formally).





An example of a review process, consisting of 3 steps:
1. "Over the shoulder" short session (30 min)
The author guides the reviewer through the code: the entry point, the most important classes and the relationships between them. He also explains the flow, the sequencing & concurrency mechanisms, and the patterns/algorithms used. This session is similar to a pair-programming session. However, we need to be aware of the disadvantages of this method:
• The author has too much control over the scope & pace of the review.
• The reviewer barely has time to check properly.
• The reviewer tends to condone mistakes after hearing the author's explanations.
That's why we need to keep this session short and follow it with a private review session.

2. Private review (30-90 min)
Without the involvement of the author, the reviewers check out the code from SCM (e.g. svn), check some documentation (use case, specs, design), do a fast sanity check, perform some tests/validation (e.g. soapui; a minimal example of such a check is sketched below), check against checklists & specifications, and read some parts of the code.
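A minimal JUnit 4 sketch of the kind of quick sanity check a reviewer could run during the private review, assuming the service under review exchanges XML messages; the sample message here is hypothetical:

import static org.junit.Assert.*;
import org.junit.Test;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;

public class RequestSanityTest {

    // Parse an XML string the same way the service would receive it.
    private Document parse(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    }

    @Test
    public void sampleRequestIsWellFormed() throws Exception {
        Document doc = parse("<order id=\"42\"><item sku=\"A-1\"/></order>");
        assertEquals("order", doc.getDocumentElement().getTagName());
    }
}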

3. Post-review activities:
• The reviewer discusses the findings with the author (30 min)
• Consult the product owner, architect, team lead and project manager regarding the risks, bug priorities & the rework impact on the plan
• Create bug tickets in trac/bugzilla
• Make an appointment for follow-up





Some best practices:

Determine the scope / the part of the project to review, based on risk analysis and the author's error log. E.g. based on his personal log, Bob (the author) knew that he often made mistakes with web security, so he advised Alice (the reviewer) to concentrate on the security issues of his web code.

To improve the process, you need to define metrics in order to measure the effect of the changes. These metrics can be external (e.g. #bugs reported by the QA team, #customer tickets) or internal (e.g. #bugs found, time spent, LOC, defect density, complexity measures). Based on the #bugs found by 2 reviewers, you can estimate the total number of bugs and the review yield, as in the sketch below.
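A minimal sketch of this estimate, assuming the capture-recapture (Lincoln-Petersen) method that Wiegers describes for two independent reviewers; the defect counts are hypothetical:

public class ReviewMetrics {

    // Capture-recapture estimate: if reviewer A finds a defects, reviewer B
    // finds b defects, and c defects are found by both, the estimated total
    // number of defects in the work product is a*b/c.
    static double estimatedTotalDefects(int foundByA, int foundByB, int foundByBoth) {
        return (double) foundByA * foundByB / foundByBoth;
    }

    // Review yield: the percentage of the estimated total that was found.
    static double reviewYieldPercent(int uniqueDefectsFound, double estimatedTotal) {
        return 100.0 * uniqueDefectsFound / estimatedTotal;
    }

    public static void main(String[] args) {
        int a = 12, b = 10, both = 6;                       // hypothetical counts
        int unique = a + b - both;                          // 16 distinct defects found
        double total = estimatedTotalDefects(a, b, both);   // 12*10/6 = 20
        System.out.printf("estimated total: %.0f, yield: %.0f%%%n",
                total, reviewYieldPercent(unique, total));  // prints 20 and 80%
    }
}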

A checklist is the most efficient tool for the reviewer. The checklist should be short (less than one A4 page) and describe the reasons, the risk level (based on the damage/impact, how widespread the mistake is and how easy it is to make) and references for further information.

Perform self-reviews using personal checklists (since everybody has a unique tendency toward certain mistakes).

Take advantage of automated checks (e.g. checkstyle, findbugs, pmd in Java). Some companies include these checks in their continuous integration builds; their metrics can then be shown in a graph to highlight the quality trend (e.g. bug reduction vs. sprint cycles). The sketch below shows the kind of defect such tools catch.
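A minimal, hypothetical example of a defect these tools flag automatically, so no human reviewer has to spend time on it: FindBugs reports comparing Strings with == (its ES_COMPARING_STRINGS_WITH_EQ warning).

public class RoleCheck {

    // BUG: == compares object references, not String contents. It may work
    // by accident for interned literals and then fail for Strings built at
    // runtime. FindBugs flags this as ES_COMPARING_STRINGS_WITH_EQ.
    static boolean isAdminBroken(String role) {
        return role == "admin";
    }

    // The fix: use equals(); calling it on the literal is also null-safe.
    static boolean isAdmin(String role) {
        return "admin".equals(role);
    }
}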

A review meeting is not the most effective way to review: it costs more man-hours. It's better for the reviewer to concentrate on reading the code alone.

Maintain the reviewer's concentration by limiting each review (max 1.5 hours, max 400 LOC (lines of code) per review). Slow down and read the code carefully (max 500 LOC per hour).

Have the code authors annotate the code appropriately (e.g. with the patterns/algorithms used: visitor pattern, quick sort, etc.), as in the sketch below.
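A small sketch of what such an annotation could look like; all class names here are hypothetical:

// The interfaces of a hypothetical order tree:
interface OrderVisitor { void visit(LineItem item); }

interface OrderNode { void accept(OrderVisitor v); }

class LineItem implements OrderNode {
    public void accept(OrderVisitor v) { v.visit(this); }
}

/**
 * Prices each node of an order tree.
 * Pattern: Visitor (GoF) -- keeps the pricing rules out of the node classes.
 * Note for reviewers: totals are summed bottom-up and discounts are applied
 * last, so this visitor must run after the validation pass.
 */
class PricingVisitor implements OrderVisitor {
    public void visit(LineItem item) {
        // pricing logic for a single line item goes here
    }
}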

The code author provides notes for his/her project: the starting point/files to begin with, the important classes, a dependency/class diagram, the patterns used, where to find the documentation (use case, specs, design doc, installation guide) and test cases (e.g. web service request-response examples). This information is useful not only for the reviewers but also in case the author leaves the company (e.g. a temporary external consultant). You can use a trac/wiki as a collection point for this information.

Verify/follow up whether the bugs are really fixed.

Beware not to compare apples with oranges: the #bugs found in a piece of code depends not only on the developer's expertise but also on:
• how complex the problem is
• how many reviewers there are, the time spent and their expertise
• specification & code maturity (development version, beta version, shipped product)
• programming languages
• tools (IDE, validator, etc)

Review-team lead / process owner responsibilities:
• maintain the reviewers' expert knowledge, and arrange training if necessary
• establish and enforce review policies
• lead the writing and implementation of the review process and action plans
• define the metrics, make sure that they're collected and used
• monitor the review practices and evaluate their effectiveness

Process assets:
• process description
• guidelines & checklists
• issue tracking system, e.g. trac/bugzilla

Where to do the review in a typical software process:
[diagram: review points in a typical software process]

To be continued: http://soa-java.blogspot.nl/2012/09/the-review-process.html


Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)




Literature:

11 Best Practices for Peer Code Review
http://support.smartbear.com/resources/cc/11_Best_Practices_for_Peer_Code_Review.pdf

Best Kept Secrets of Peer Code Review by Jason Cohen
Plus: recent, supported by scientific analysis of the literature & field studies, down-to-earth advice (instead of high-in-the-clouds management jargon). Minus: repetitive advertisements for their review tools.


Peer Reviews in Software: A Practical Guide by Karl Wiegers



Seven Truths About Peer Reviews by Karl E. Wiegers
http://www.processimpact.com/articles/seven_truths.html

OWASP code review guide
https://www.owasp.org/images/2/2e/OWASP_Code_Review_Guide-V1_1.pdf
