Proper code review provides much more benefit than cost and is indispensable for quality software. Improper code review is a waste of time (or worse).
Proper code review is done:
- by another programmer
- by someone with some knowledge of the application
- by someone with some knowledge of the environment
- against some code standard
- against standard requirements (APIs, database, etc.)
- with a checklist
- uniformly (no matter who is doing it)
- until it's right
Improper code review is done:
- by a non-programmer
- fully automated
- as a "rubber stamp" on an approval checklist
- against the standard du jour
- according to the whims of the reviewer
- with a deadline for the next step
- as a replacement for testing
I also include junior developers in code reviews, even when they don't know much about the application or environment. It's a good way for them to learn the code base and see what type of defects the senior developers point out.
+1 to that one. Code review should be peer review: it is not about a senior reviewing a junior, it is about a developer reviewing the work of another. The junior's questions may actually be as useful as advice (notably by forcing you to explain your choices).
I like to have one junior and one senior developer, along with the software architect, review another developer's code. This way you get proper code review from the senior, and the junior learns and gets exposure to other people's code. But three reviewers is already too large a team for a review, imo.
Hi,
OP here. Interesting list. I think I would approve of most of your points, but I never worked in (or created) such a "proper" environment in my various teams. Could you share the kind of checklist you use? Or give an idea of the kind of "checks" you have there?
Great list (especially "off the top of your head"). Thanks a lot for sharing. Several of those can actually be implemented in an automated tool (styles, short variables, spacing, even unused variables). What's your take on automated tools? (Disclaimer: my company built one - pulreview.com).
They can be quite irritating for others though, especially if they happen during a loop so that the user will have to click past 40 popups to access something.
Yep, you're definitely right! I'm working on stopping that irritating behavior, adding proper logging, and controlling it via a log level of some sort.
I think a lot of people like using alert() for debugging because it pauses execution. Of course, so do break-points / "debugger" statements.
Git pre-commit hooks for alert are helpful, since the circumstances where an alert is actually wanted in production are few and far between. While you're at it you can add one for whatever set of profane terms, if you roll like that.
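A minimal sketch of such a hook as a Node script (my assumption; the hook path, the .js filter, and the regex are all illustrative, and file names with spaces aren't handled):

```js
#!/usr/bin/env node
// Hypothetical .git/hooks/pre-commit (must be executable):
// reject commits whose staged .js files still contain alert(
const { execSync } = require('child_process');

const staged = execSync('git diff --cached --name-only --diff-filter=ACM')
  .toString()
  .split('\n')
  .filter((f) => f.endsWith('.js'));

let failed = false;
for (const file of staged) {
  // Inspect the staged blob, not the working-tree copy
  const content = execSync(`git show :"${file}"`).toString();
  if (/\balert\s*\(/.test(content)) {
    console.error(`pre-commit: found alert() in ${file}`);
    failed = true;
  }
}
process.exit(failed ? 1 : 0);
```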
This is actually really important. We had this happen with a customer during UAT testing. The project schedule was under pressure, and as a result the customer went nuts (rightly so) and we had to make some concessions to the customer as amends.
It isn't just curse / bad words. Equally bad are comments like:
"this was the customer's stupid idea not ours"
Also, if you really have to output test stuff in the browser, use console.log('message') and not alert('message').
Then for production you can always do this at the top of your main JavaScript include:
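Something like this, presumably (the hostname check is just an illustrative guess at the condition):

```js
// Hypothetical reconstruction: silence console.log outside development
// so stray debug output never reaches users.
if (window.location.hostname !== 'localhost') {
  console.log = function () {};
}
```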
Professionalism is for people who wear shirts with collars and have nice cars and families. You know, the kind of people who fund the kind of place where you can use bad words in code...
When talking about code reviews I am constantly reminded of the following:
1. How hard it is to grok code that I've never seen before
2. How hard it is to grok my own code from 6 months ago
Sometimes it may take me a day or more to fully get my head around a new piece of code I am working with. I could not give a thorough AND fair review of another person's code without internalizing it to a satisfactory degree.
There is also the issue of software as art vs engineering. Quite often I will take issue with the style or approach of another person's coding simply because I would have done it differently had I done it myself. But that does not mean that their code is wrong.
All things considered, I think code reviews can be helpful in certain situations, but I think there are many pitfalls which must be avoided in order for them to justify the cost.
There are also a few benefits of code review that you missed:
1. Reduces bus-factor risk (knowledge is no longer siloed in one person).
2. Increases readability.
Coding in a company is a communication problem. How you solve a problem may be different from how someone else would. Outside of simply catching more mistakes, it's possible that you may learn something new or that the other person can improve their code.
Ideally there should be a team / company coding style simply to ease communication overhead and reduce bikeshedding.
The cost you speak of is directly trading software quality for "speedier" delivery. Heck, this was even posted just yesterday: http://www.sec.gov/litigation/admin/2013/34-70694.pdf. One of the big problems was that no code review was being performed.
Also think of code review as a fantastic learning opportunity for the developers you are reviewing. Practicing an art can only get you so far as an expert, but being critiqued by others can help refine the rough edges you may otherwise ignore throughout your career.
Code reviews don't have to happen in massive context, either; why not follow a practice that every pull request (or whatever your SCM calls it) goes through at least one reviewer before being merged to master? And ensure your pull requests are frequent and preferably 100 lines or less!
A little bit goes a REALLY long way. Don't give in to the fallacy that your code isn't either worth it or is good enough to not be reviewed.
> Sometimes it may take me a day or more to fully get
> my head around a new piece of code I am working with.
Are you trying to perform a design review, or a code review? Code reviews (in my mind, at least) concern themselves with the low-level mechanics of the code ("handle this exception properly", "there's a library method for this logic", "follow the team's coding standards", etc.), whereas design reviews deal with "how is this thing supposed to work?". When reviewing code, I avoid dealing with design issues. If I see design issues, I ask the developer for a design review.
We haven't mentioned timing, but that's the reason I decouple design and code reviews.
Design reviews can be done fairly early in the process before the code is complete. If design changes are needed, there is still plenty of time for them.
Code reviews can't really be done until most of the code is written, but (as you point out) the kinds of problems they find are much easier to fix, so require less time.
I agree with you, and this is the reason why having small features and reviews for all of them helps. Grokking a colleague's code is much simpler if it is a small piece, and when you are reviewing their code regularly (and the other way around).
Our typical PR is 1-2 days of work, so I'm reviewing several per week, sometimes daily.
You are of course right that it should not fall into "you should do it my way". Now, when a colleague says "I would have done it differently", I always ask how and why. I will probably not change my code if it is good (or even good enough), but I will have learned something, or gotten another point of view.
I've found code reviews are beneficial for catching simple bugs; you don't always have to internalize the purpose of the code, just make sure the person dotted their i's, crossed their t's, and freed all their memory pointers... Or just help the junior guys write more efficient code...
Personally I agree with the author that proper [1] code reviews are valuable, however, I think he's skipping over a really critical point:
> We started doing Unit Testing at some point, and it
> quickly became “good practice/mandatory” in our team (I
> think good practices need to be applied by everyone in a
> team, requiring some kind of “collective enforcement”).
I'd love it if the author would elaborate more on how to change team behavior like this, especially when it's a team member (peer) trying to make the change.
I've seen many teams struggle because there isn't agreement over what practices to follow.
1. As an advocate for a practice, ensure you use that practice in all that you do. Advocate for it in every team meeting. Try to stress why it is important, and how it can directly make things better, instead of just "making our code better." Provide research and numbers from prior projects that show how useful such a practice can be.
2. Provide good examples and guides to get the practice started. Some avoid a practice because they don't know the best way to execute it. If it's unit testing, show how one should go about determining edge cases and describe useful unit testing methodologies (mocks, stubs, etc.); see the sketch after this list.
3. Make it as easy as possible. If it's unit testing, integrate a CI server directly with your SCM. If it's code review, adopt a practice that fits inline with your existing workflow and don't make it a ceremony - make it asynchronous.
4. Understand that it will be gradual, but every effort moving forward should push towards integrating the new practice. You can't suddenly have 100% test coverage after introducing unit testing to an existing project, but all new code can include tests, and old code can gain tests as it's refactored, slowly building up coverage.
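For point 2, a minimal sketch of the kind of starter example that helps, assuming Jest as the test runner (applyDiscount and its stubbed rate provider are invented for illustration):

```js
// A pure function with an injectable dependency, so tests can stub it out.
function applyDiscount(price, getRate) {
  if (price < 0) throw new Error('price must be non-negative');
  return price * (1 - getRate());
}

test('applies the stubbed rate', () => {
  const getRate = jest.fn(() => 0.5); // stub: fixed rate, no real lookup
  expect(applyDiscount(100, getRate)).toBe(50);
});

test('rejects negative prices (edge case)', () => {
  expect(() => applyDiscount(-1, () => 0.5)).toThrow();
});
```

A guide like this shows both habits at once: stubbing out a dependency, and naming the edge case each test pins down.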
The why is much more important than the what. The best benefits of code reviews are:
1. Developers educate each other about the code base and about how to be better developers. This means quicker, more stable features, which means the company can start making money sooner.
2. Slows tech debt. Poorly written code stays out of the product. More uptime means more opportunity to make money.
Hi, author here. I agree this may not be easy - but something can always be done. I've been in several situations.
As a team leader facing junior developers, I simply set up rules. I explained them, but I had the power to enforce them myself. I coded with them, and they started coding like me - until they were confident enough to challenge me. I apologize to the "autonomous self-organizing teams" evangelists, but in some situations, giving some direction may be the most efficient way to progress.
As a peer, you just need to find one other person in your team who is willing to play along. Although I understand the value of explaining something, doing it is, for me, much more convincing. If you really think something is a good practice, don't try to convince me if you are not already applying it yourself.
If you are in an Agile/sprint-oriented team, just propose to test it for one sprint; then it will pass the retrospective test or it will not - no one should object to a one-sprint experiment, especially in an agile team. Do the same with your colleagues' ideas, even if you find them silly. It shows goodwill, and you may be surprised sometimes.
Finally, I would not involve non-coding management in the discussion if possible, as it will quickly devolve into "what would it cost". Better to handle this inside the technical team.
Hope it helps, and remember, it's the first step that costs. Find one willing colleague and start!
My co-workers write awful code, including the "architect". I try to review their code, without having a formal process, because I simply do not trust their abilities. It would not be pretty if we did code reviews.
Or it might be better. Once code reviews are happening, people tend to realize that others are reading their code, and it rises to a higher standard.
There's also the added benefit that reviewing your code can teach them better programming practices, and that reviewing their code with them and asking questions about their thought patterns can make them better developers.
It's my opinion that if the code is awful then code reviews become especially helpful and important to bringing everyone up.
It's the whole point for me: even if I prefer managers that I can convince, I just call that "development". Replace "feature is done but needs to be tested or reviewed" with "feature is not done".
That falls under the technical/developer responsibility anyway, for me.
Not just start-ups; larger companies, too. The process may dictate testing, but if, for example, there's an urgent production bug or some enhancement that "needs" to be pushed quickly, testing often falls by the wayside.
I care more about full functional test coverage than code review. The tests must be automated, with regression and functional coverage, not just unit tests. Test results/logs must be output in HTML and stored in a DB backend (now) or Excel (did that in 1997) for ease of analysis. Test failures are highlighted in red, passes in green.
The developers should write new test code for every new feature and run all the existing test code after merging, before check-in. All the projects I worked on that implemented this process were very successful. With this process in place and agreed upon, I care less about code review.
With enough "functional" test coverage, it is easy to do massive refactoring without worrying about anything breaking.
I worked on one project that tried code review for two weeks - other than a few comments about coding style, there was not much gain from the time spent.
Test coverage is pretty orthogonal to code review. You might wind up with a large amount of fully tested, freshly refactored code that does not meet requirements, or that mishandles edge cases the tests missed. I'd recommend at least reviewing the test code in your case.
Besides that, it sounds like a nice codebase to work in. I've had very good test coverage for libraries and backend services with well-defined APIs I've written, but a lot of my career has been spent doing UI work and I've never found an adequate testing tool that can deal with both the complexity and randomness of human interaction and the rate of change of UI presentation.
You can't really regression test UIs, because most UI development intentionally changes the things regression tests look for, so you wind up spending a lot of time updating the regression tests and trying to keep up with UI changes that are being made in a tight code/review/tweak iteration loop.
Code review, done properly, should provide more than just a critique of coding style. It should also catch certain types of bugs before they get to functional testing. Yes, you can catch those same bugs in functional tests, but in the environments where I've worked, there's a lot more overhead (read: time, money) involved with bugs found in functional test than in code review. It can be pretty frustrating to be that far along (functional test) and find a bug that could've easily been found in code review.
A major part of being an effective programmer is the ability to maintain state and call stacks in your head.
Stepping through code is great - as a last resort. But I've found that if I need to run through it with a debugger in order to understand it, it is often overly complex.
As with any generalized rule there are exceptions, but I consider needing to step through code in a debugger to understand its behavior to be the exception, not the norm.
Yes, the diffs often don't show enough context, and the errors may occur outside of the diff, so you should really review the whole changed function even if you don't need the debugger.
They are always attached to a branch though so you can check it out to review (and run/test as appropriate).
Code review is about reviewing the code. That's literally what the name says. CI runs the tests, QA test the final behavior, code review checks the code.
That's nice in theory, but in practice another dev really should be running the code and doing some manual testing. Having full knowledge of the innards allows a competent dev to foresee potential trouble spots in the architecture and hit some edge cases to make sure they don't break.
CI may run the tests, but that assumes your tests have captured every possible edge case. It may be the intention of tests to be comprehensive and cover everything under the sun, but that is rarely the case in reality.
Ditto QA - they should be able to black-box test the software and all of its possible states, but in reality something is going to slip through the net.
Having a dev run the code themselves and poke around in it is just a plain good idea.
> That's nice in theory, but in practice another dev really should be running the code and doing some manual testing.
Why? If they are reviewing the code including the unit tests, shouldn't they instead be suggesting any missing unit tests, which then become permanent and reusable rather than "doing some manual testing"?
> CI may run the tests, but that assumes your tests have captured every possible edge case. It may be the intention of tests to be comprehensive and cover everything under the sun, but that is rarely the case in reality.
Insofar as the other dev doing code review can address this with their own testing, isn't it better for this to be done-once and preserved by adding automated tests rather than done-and-lost by doing manual tests?
It might be a somewhat good idea (in my experience, the scenario you outline is significantly less likely than QA's domain knowledge hitting patterns a developer with limited domain knowledge had not anticipated), but it's not a necessity for code review to be a good idea.
Using pull requests as your code review doesn't force you to not run the code. My team uses forks of upstream repos and all merges upstream happen via a PR that is peer reviewed. It isn't uncommon for us to pull down the changes of a PR locally to "play around" with a new feature or change to existing functionality.
We have tests in place to help stop regressions, but sometimes seeing a change in action can really help to put the corresponding code in context.
Most of my team uses hub, but you can check out a pull request easily enough with git:
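Presumably something along these lines, assuming GitHub-style pull request refs (the PR number here is just a placeholder):

```
# 1234 is a hypothetical PR number
git fetch origin pull/1234/head:pr-1234
git checkout pr-1234
```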
Hi,
I think both are actually useful. The CI is supposed to run the tests (even if I like to run some myself when reviewing), but I agree that starting the application is part of the review - you should not stop at just looking at the code (but just starting the application does not cut it either, for me).
Code reviews are about psychology. When you know your teammates are going to be reviewing your code, you write it differently than you do when you know no one but you will ever look at it.
Static analysis and regression tests are tools to make sure the code isn't broken.
Code reviews are also about distance from the code. When it's your own production, it's easy to miss the forest for the trees. That's why books are generally better when beta readers or reviewers are involved.
Code reviews are also about spreading knowledge (about the subsystems and about choices made in implementation and the reason for them) and increasing the code's bus factor.