How to hold a more effective code review

For a lot of programmers, being asked to go to a code review feels like being told by mom to eat our veggies. We’ll complain about it, and even if we do eventually swallow them, we’re determined not to enjoy them.

It’s something I’ve seen over and over again: programmers groaning about having to go to a code review, usually because someone gets some big idea about making things better, and decides this is how you do it. There’s sometimes a little nervous joking at the beginning of the meeting about how nobody really wants to be there. And after it’s done, a lot of us get the distinct feeling that it was a waste of time.

The thing is, code reviews can be a really good thing. And not only that, they don’t have to be a chore. If you do them right, people on the team can start to appreciate them and even – heaven forbid – enjoy them.

So how do we make code reviews more palatable? We need to think about what motivates us as programmers. Programmers love to code. We love building things, and we love solving problems. But we hate anything that seems bureaucratic or tedious, and we definitely hate meetings. Most of all, we hate being in uncomfortable social situations that require us to confront people face-to-face. We’re not alone in that; most people do.

I think that’s a pretty big part of why programmers intuitively dislike code reviews. It’s not that we’re afraid of putting our work out there for our peers to see. That’s actually something we look forward to: we love to show off code that we’ve worked so hard on, and we definitely appreciate the input from the people around us. But it’s almost never the person whose code is being reviewed who groans about it. No, it’s usually the people who are asked to attend the review. And I think I know why: it’s because we don’t like being asked to criticize the work of others, openly and without hesitation, for the good of the team. I think we naturally feel uncomfortable putting someone else in the position of having to openly confront their errors, because it’s so easy to imagine ourselves in that same position.

And that may be the secret to holding code reviews that people actually look forward to attending: make it about helping make the code better, not criticizing it and finding its flaws. If you can come up with a way to avoid bitter arguments about tiny details and stubbornly held opinions, and instead concentrate on helping someone improve his or her code, then people will stop thinking about code reviews as an uncomfortable meeting and start thinking about them as a way to build better software.

That sounds like a tall order. How do you do it?

Well, you start out by thinking about one of the best things that can happen in a code review, something that I’ve seen many times. It usually happens about two thirds of the way through the review, about the time when the first signs of meeting fatigue are starting to set in. Someone points out another problem with the code, and a conversation starts up about an aspect of it that nobody had really thought of. Uh-oh — it’s a bug, and a particularly nasty one at that. Then everyone kind of looks at each other with a weird mixture of relief and disgust, because we found a bug that a) definitely would have made it into production, and b) would have taken hours or days to track down and fix.

I’ve said this many times before: programmers are very practical people, especially when it comes to our own time. If something will save us time down the road, we’ll definitely do it. If you can show a programmer that a tool or technique (like a code review) saves more time than it costs, that’s a great way to get him or her to start thinking positively about it.

That doesn’t change the fact that a lot of people get nervous about code reviews, even people who have done them a bunch of times. So I spent a little time thinking of advice I’ve given about code reviews over the years. Some of this is pretty standard code review stuff, but I think it’s worth repeating because people have so many different views on how to do code reviews effectively. And if you keep these ideas in mind, and get other people on your team thinking the same way, it could definitely help you hold effective code reviews.

So here’s some advice about holding better code reviews — you can think of them as code review tools (or even code review best practices, although I’m not a huge fan of that term) that can make your software better:

  • First things first: there are a lot of different ways to do a code review. Some people like to follow a very strict process. Personally, I like to keep them informal; the more it seems like an everyday conversation, the more work we actually get done.
  • It’s important to keep the meeting to under two hours — any more than that and meeting fatigue sets in. Most code review teams can handle between 200 and 400 lines of code in a two-hour review meeting. (Your mileage may vary.)
  • Don’t forget to distribute the code before the review, and make sure you give everyone enough time to actually look through it. Send around a PDF of the code (a2ps is a good way to make it readable, and it’s got stylesheets for almost every language). Also make sure that everyone has access to the source, and that they know how to build and run it. Sometimes it’s a lot easier to prepare for a code review if you can actually debug your way through the code.
  • Make sure that everyone knows you appreciate their time. It’s easy to forget that, but it helps the team see the review as a useful tool and not just another boring meeting. Remember, you’re pulling half a dozen or more people into a room for two hours, plus preparation time — that’s the equivalent of taking a top developer off of all projects for two days. That’s also why it’s very important to choose a good block of code to review. Choose one that’s inherently risky: a difficult algorithm, code from a library that many other people depend on, an interface a lot of people will use, a particularly nasty bit of spaghetti code.
  • Code reviews and refactoring work really well together. Look for opportunities to extract methods, rename variables, replace literals with constants, and so on (there’s a small before-and-after sketch of this kind of fix right after this list).
  • Pay attention to OOP principles, especially encapsulation. Improving encapsulation is an easy and effective way to prevent bugs; the sketch after this list includes a simple encapsulation fix, too.
  • The code review isn’t just about bringing up issues — it should also be about giving some indication of how to resolve those issues. Ideally, the programmer whose code is being reviewed should be able to read through the results of the review and very quickly implement the fixes, because in the meeting we wrote down exactly what needs to be fixed and how to fix it. (We don’t actually have to write down lines of code, of course — just enough information so it’s clear what to do.) A good way to do this is to make the goal of the meeting to produce a log: a list of fixes that need to be made to the code.
  • Instead of storing the log in a spreadsheet, Word document, or wiki page (I’ve done all three), try having the moderator put the results of the review directly into the code as comments, each of which includes an easily searchable unique string like ‘// %%TODO: CODE REVIEW 8/28/08%%’ so the programmer can find them all. The results of the review meeting can then be e-mailed out as a link to a diff report in the source repository. When the programmer goes back to update the code, he or she can alter the comments so they make sense in context — and the comments can stay in the code, because they’ll make it clear why the code is the way it is to anyone maintaining it in the future. (There’s a sketch of what one of these embedded review comments might look like after this list, too.)
  • A good way to focus the discussion is to guide the conversation away from arguments about coding in general, and towards what to write down in the log to resolve the current issue. Make a good effort to figure out how to resolve the issue: most can be resolved in the meeting. Any issue that can’t be resolved in a reasonable amount of time gets added to the log as an open issue.
  • I’ve always gotten a lot of mileage out of using a moderator. The moderator’s job is to keep track of the discussion, and keep the discussion on track. If people are getting off onto a tangent that won’t benefit the code, or if they’ve gotten into a disagreement where there are merits on both sides of the issue and it clearly won’t be resolved, the moderator should suggest that we log it as an open issue and move on. You can always follow up later and resolve the issue.
  • Some people get very strict about making sure that the moderator stays at arm’s length and doesn’t get involved with the review. Personally, I think code reviews are hard enough without imposing arbitrary rules like that (because we’re laying someone’s code bare and dissecting it, which can be difficult for anyone who’s not used to it). We’re all adults here, and we can trust each other not to “abuse” the moderator role. If the moderator has something to say, he or she should say it. If it’s easier, replace “moderator” with “note-taker” or something like that.
  • Don’t be pedantic, and try to avoid theoretical discussions. It’s really easy to get bogged down in a discussion that doesn’t go anywhere: whether this variable declaration should be here or there, whether this type of structure or that one is slightly more efficient, whether we could do something better in theory if we scrapped a large amount of code and rewrote it. If a discussion isn’t going to directly lead to a change, even if it’s an interesting topic, note it in the log as an open issue and move on. And definitely don’t point out spelling errors. A lot of grate programmers are lousy spelers.
  • Make sure variable names make sense, but don’t worry about naming conventions. Some people love underscores in front of parameters, some people hate them. I’m sure you can come up with at least three different “official” conventions for any programming language. There are few things less useful during a code review than an argument over whether to use PascalCase or camelCase.
  • One way people like to do reviews is to have people “read” the code – interpret it out loud. I’ve had some success with going around the table and having people take turns “reading” each chunk of code. If there’s a chunk that is difficult to “read”, it’s not clear enough and is a good candidate for refactoring.
  • Before you distribute the code to the team, run it through a static code analyzer (like FindBugs or FxCop) and fix issues that are found. There’s no need to waste discussion time on problems that a tool can catch and log for us.
  • I’ve had a lot of success with a kind of review called a “high impact inspection” (a term that was developed at HP about fifteen years ago). Basically, it boils down to having everyone do the code review independently and e-mail their issues to a moderator. The moderator puts the issues into one big list, sends it back out to everyone, and then the review meeting itself is focused on just going through those issues. Jenny and I developed a code review process similar to high impact inspections to let us hold inspections with teams outsourced to India (where time zone differences make it very difficult to regularly schedule two-hour meetings). We ran it a bunch of times, and it worked really well.
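
To make the refactoring and encapsulation points a little more concrete, here’s the kind of small before-and-after fix that often comes out of a review. It’s just an illustrative sketch in Java (the Invoice class and everything in it is made up, not taken from a real project), but it shows extracting a method, replacing literals with named constants, and tightening up encapsulation all at once:

    // Before: a public field, a couple of magic numbers, and the same
    // calculation written out twice.
    class Invoice {
        public double total;   // anyone can set this to anything, including a negative number

        void print() {
            System.out.println("Total with tax: " + (total + total * 0.08));
            System.out.println("Free shipping: " + (total + total * 0.08 > 100));
        }
    }

    // After: the field is private and validated, the literals have names, and
    // the repeated calculation is extracted into a single method.
    class InvoiceAfterReview {
        private static final double TAX_RATE = 0.08;
        private static final double FREE_SHIPPING_THRESHOLD = 100.0;

        private double total;

        void setTotal(double total) {
            if (total < 0) {
                throw new IllegalArgumentException("total can't be negative");
            }
            this.total = total;
        }

        private double totalWithTax() {
            return total * (1 + TAX_RATE);
        }

        void print() {
            System.out.println("Total with tax: " + totalWithTax());
            System.out.println("Free shipping: " + (totalWithTax() > FREE_SHIPPING_THRESHOLD));
        }
    }

Nothing fancy, but it’s exactly the kind of change that’s easy to agree on in the meeting and quick for the programmer to make afterward.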

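And here’s a quick sketch of what one of those embedded review comments might look like. Again, the SettingsStore class and the issue itself are made up for illustration; the point is the searchable tag plus enough detail that the programmer knows exactly what to fix:

    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;

    class SettingsStore {
        private String settings = "";

        // %%TODO: CODE REVIEW 8/28/08%% Open issue from the review: this method
        // swallows the IOException, so the caller never finds out that the save
        // failed. Agreed fix: log the exception and rethrow it (or return a
        // boolean the caller can check).
        public void saveSettings(File file) {
            try {
                FileWriter writer = new FileWriter(file);
                writer.write(settings);
                writer.close();
            } catch (IOException e) {
                // ignored -- see the review comment above
            }
        }
    }
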
When Jenny and I were writing the section on code review techniques in our first book, Applied Software Project Management, we came up with a checklist of things to look for during a code review. That checklist should give you a solid starting point for your own reviews.

Good luck with your code reviews! If you end up with a good code review success (or failure!) story, I’d love to hear about it.

Beautiful Teams coming into the home stretch

We’ve gotten more than a few e-mails from readers wondering where we went. First of all, don’t worry – Jenny and I are still around! We’ve just been working overtime on our latest book, Beautiful Teams (here’s the Amazon page for it). It’s coming out well, and we’re really excited about it.

Beautiful Teams is a new kind of book for us, because it’s the first time we’ve done more editing than writing. We brought in a great team of guest writers and thinkers, people we really respect and whose work we’ve been big fans of over the years: Scott Berkun, Karl Wiegers, Karl Fogel, Johanna Rothman, Barry Boehm, Steve McConnell, Grady Booch, James Grenning, Cory Doctorow and a whole lot more. A good portion of the book will be stories from their careers, which was a big departure for us: it was as much about creative writing (helping our authors make the stories fun to read) as it was about software engineering. And in addition to the stories, Jenny and I have been doing interviews with some of these great thinkers, and I’m extremely happy with how they’re coming out. We’ve put a lot of effort into finding people who are thoughtful and knowledgeable, but also really good at talking about what they do and how they think. All in all, it should be a unique book, and a lot of fun to read.

Oh, there’s one more thing I want to tell you about the project – actually, my favorite part about it. We’re donating almost all of the royalties to PlayPumps International, a truly wonderful charity that digs wells to deliver clean drinking water to rural schools and villages in sub-Saharan Africa. If you haven’t heard of them, take five minutes and watch this piece that Frontline did on them.

I’m sure we’ll give you plenty more details about it as time goes on. In the meantime, hopefully we’ll be able to post a little more here for you!

Unit testing and the narrowly averted Citicorp Center disaster

It was almost a disaster...

I was working on a project earlier today. Now, I almost always do test-driven development, where I’ll build unit tests that verify each class first and then build the code for the class after the tests are done. But once in a while, I’ll do a small, quick-and-dirty project, and I’ll think to myself, “Do I really need to write unit tests?” And then, as I start building it, it’s obvious: yes, I do. It always comes at a point where I’ve added one or two classes, and I realize that I have no idea if those classes actually work. I’ll realize that I’ve written a whole bunch of code, and I haven’t tested any of it. And that starts making me nervous. So I turn around and start writing unit tests for the classes I’ve written so far… and I always find bugs. This time was no exception.
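
If you’ve never worked that way, the rhythm is pretty simple: write a test that describes what the class should do, watch it fail, then write just enough code to make it pass. Here’s a minimal sketch of what that looks like, assuming JUnit 4 (or something like it) is on the classpath; the PriceCalculator class is made up purely for illustration:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // A made-up class, built test-first: the tests below were written before
    // the class, and the class only got enough code to make them pass.
    class PriceCalculator {
        static int totalInCents(int unitPriceInCents, int quantity) {
            return unitPriceInCents * quantity;
        }
    }

    public class PriceCalculatorTest {
        @Test
        public void totalIsUnitPriceTimesQuantity() {
            assertEquals(250, PriceCalculator.totalInCents(50, 5));
        }

        @Test
        public void totalOfZeroItemsIsZero() {
            assertEquals(0, PriceCalculator.totalInCents(50, 0));
        }
    }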

This time, for some reason, that exercise reminded me of the story of the nearly disastrous Citicorp Center building.

Citicorp Center was one of the last skyscrapers built during New York City’s skyscraper and housing boom of the 1960s and 1970s. A lot of New Yorkers today probably don’t realize that it was actually one of the more interesting feats of structural engineering of its time. The building was built on a site occupied by St. Peter’s, a turn-of-the-century Lutheran church that would have to be demolished to make way for the skyscraper. The church agreed to let Citicorp demolish it, on one condition: that it be rebuilt on the same site.

The engineer, Bill LeMessurier, came up with an ingenious plan: put the base of the building up on columns, and cantilever the edge of the building over the church. Take a look at it on Google Maps’ Street View — you can pan up, navigate around, and see just how much of a structural challenge this was.

The building was completed in 1977. A year later, LeMessurier got a call from an engineering student studying the Citicorp building. Joe Morgenstern’s excellent 1995 New Yorker article about the building describes it like this:

The student wondered about the columns–there are four–that held the building up. According to his professor, LeMessurier had put them in the wrong place.

“I was very nice to this young man,” LeMessurier recalls. “But I said, ‘Listen, I want you to tell your teacher that he doesn’t know what the hell he’s talking about, because he doesn’t know the problem that had to be solved.’ I promised to call back after my meeting and explain the whole thing.”

Unfortunately, LeMessurier turned out to be mistaken, and the article describes the problem in all its gory detail. It’s a fascinating story, and I definitely recommend reading it — it’s a great example of how engineering projects can go wrong. It’ll probably seem eerily familiar to most experienced developers: after a project is done, someone uncovers something that seems like a tiny snag, and it turns out to be disastrous and requires a huge amount of rework.

Rework in a building isn’t pretty. In this case, it required a team to go through and weld steel plates over hundreds of bolted joints throughout the building, all of it done over the weekends so nobody would find out and panic.

But what I found especially interesting about the story had to do with testing the building:

On Tuesday morning, August 8th, the public-affairs department of Citibank, Citicorp’s chief subsidiary, put out the long delayed press release. In language as bland as a loan officer’s wardrobe, the three-paragraph document said unnamed “engineers who designed the building” had recommended that “certain of the connections in Citicorp Center’s wind bracing system be strengthened through additional welding.” The engineers, the press release added, “have assured us that there is no danger.” When DeFord expanded on the handout in interviews, he portrayed the bank as a corporate citizen of exemplary caution–“We wear both belts and suspenders here,” he told a reporter for the News–that had decided on the welds as soon as it learned of new data based on dynamic-wind tests conducted at the University of Western Ontario.

There was some truth in all this. During LeMessurier’s recent trip to Canada, one of Alan Davenport’s assistants had mentioned to him that probable wind velocities might be slightly higher, on a statistical basis, than predicted in 1973, during the original tests for Citicorp Center. At the time, LeMessurier viewed this piece of information as one more nail in the coffin of his career, but later, recognizing it as a blessing in disguise, he passed it on to Citicorp as the possible basis of a cover story for the press and for tenants in the building.

Tests were at the center of this whole situation. It turned out that insufficient testing had been done at the beginning of the project, and now more tests were being used to figure out how to handle it. Tests got them into the situation, and tests got them out.

So what does this have to do with software?

I have a hunch that anyone who’s done a lot of test-driven development will see the relevance pretty quickly. The quality of your software — whether it does its job or fails dramatically — depends on the quality of your tests. It’s easy to think that you’ve done enough testing, but once in a while your tests uncover a serious problem that would be painful — even disastrous — to repair. And as LeMessurier found, it’s easy to run tests that give a false sense of security because they’re based on faulty assumptions.

I’ve had arguments many times over my career with various people about how much testing to do. I can’t say that I’ve always handled them perfectly, but I have found a tactic that works. I point to the software and ask which of the features doesn’t have to work properly. But it’s good to remind myself how easy it is to question the importance of tests. It’s so easy, in fact, that I did it myself earlier today. And that’s why it’s important to have examples like Citicorp Center to remind us of how important testing can be.