The "Heisenberg Developer" name is really excellent. I recall DeMarco saying in his books (so many years ago): do not measure anything unless you have a very specific action you want to take based on the data. Why? Because every measurement alters the way programmers work. The author reports that increasing the granularity of the work (and thereby decreasing the time between measurable status updates) led to a lack of refactoring. I have experienced this myself, and it led to much head scratching on my part.
Which is not to say that one should not measure. I've never actually used Jira, but apart from that, many of the things the author talks about are generally good ideas: reduce story size to a day or so, prioritise every 2 weeks, measure delivery at about the same interval. Don't throw the baby out with the bath water.
I can say with some confidence that where they went wrong was that they tried to estimate the stories, and then they committed the cardinal sin of trying to measure the elapsed time for each story. In the first case, you don't need estimates: each story is a day or so, and that's enough granularity. In the second case, measuring brings no organisational benefit. You can see the average throughput of stories. You can estimate your final delivery (especially if you use a defect growth model to estimate how many requirements you are currently missing before delivery). Measuring the actual time of an individual story adds absolutely nothing to your control of the project.
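To make the throughput point concrete, here is a minimal sketch of that kind of delivery estimate. The requirements-discovery model (exponentially decaying growth in the backlog) is my own stand-in assumption, not anything from the article; all the numbers are hypothetical.

```python
# Sketch: delivery estimate from average story throughput plus a
# simple "requirements you are currently missing" growth model.

def estimate_delivery_weeks(stories_remaining, throughput_per_week,
                            discovery_rate=0.1, decay=0.5):
    """Weeks until the backlog is empty, given steady throughput.

    discovery_rate: fraction of the remaining backlog that surfaces as
    newly discovered work each week; it decays as the product matures.
    """
    remaining = float(stories_remaining)
    rate = discovery_rate
    weeks = 0
    while remaining > 0 and weeks < 1000:   # cap guards a stalled project
        remaining += remaining * rate       # requirements discovered late
        remaining -= throughput_per_week    # stories actually delivered
        rate *= decay                       # discovery tapers off
        weeks += 1
    return weeks

print(estimate_delivery_weeks(40, 5))  # hypothetical: 40 stories, 5/week
```

Note that the only inputs are the backlog size and the team's throughput; no per-story times appear anywhere.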
The main arguments I've heard for measuring are twofold. First, people feel that measurement will allow them to get better at estimating. Again, it is not needed. If you have a granularity of a day or two in your story sizes (and your throughput shows that you are achieving that), then you have nothing to improve. If your throughput is not there, then you need to see what the problem is. It will be one of three things: your stories are too big, your code is too complicated, or your programmers are crap. The first is solved by making your stories smaller, the second by working on technical debt, the third by firing your programmers. I recommend doing everything you can to solve the first, then moving on to the second, and only then resorting to the third.
What measuring will give you is an idea of the variance in the story sizes. This is unimportant. In fact, it is critical not to measure this. All you care about is throughput. Variance is where the developers put in the extra effort that improves your throughput (by dealing with technical debt, improving tests in critical areas, creating efficient build systems, writing documentation, etc.). If you remove the variance (as described by the author), you destroy the developers' ability to optimise throughput.
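A toy illustration of why per-story variance is invisible at the level you actually care about (all numbers hypothetical): two backlogs with the same total effort but very different per-story spread deliver identical throughput.

```python
# Two hypothetical backlogs with the same mean story size (1 day):
uniform = [1.0] * 60                          # every story takes exactly 1 day
varied = [0.5, 0.5, 3.0, 1.0, 0.5, 0.5] * 10  # same total effort, more spread

for name, sizes in [("uniform", uniform), ("varied", varied)]:
    throughput = len(sizes) / sum(sizes)      # stories per day
    print(f"{name}: {throughput:.2f} stories/day")
```

Both backlogs come out at the same stories-per-day figure, so tracking individual story times tells you nothing that throughput doesn't already; the 3-day story in the varied backlog is exactly the slack where refactoring and test improvement happen.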
The second argument for measuring time is to identify stories that are (or were) in trouble. Again, this is unnecessary. First, if a story is giving you a hard enough time that it is taking longer than a day or two, then you already know about it. Asking for help is what standup is for (and really, I hate to have to say this, but that's all standup is for).
If you are not having trouble, but you are spending a few extra days to refactor something, then you already know about it. Everyone else does too because they can see the code that you refactored.
If you are sitting on your ass and wasting your days writing extremely long posts on HN, then you already know about it. And so does everyone else (because you haven't written any code today, and it's not like you asked for help...).
I'm just going to write this one time. If your reason for measuring the time spent on stories is that you secretly think your developers are crap and don't know what they are doing, no amount of measurement is going to solve the problem. Make a decision: fire them, or not. You can't make a crap programmer good by pointing out how much time they spent on a story. They already know it.
> (and really, I hate to have to say this, but that's all standup is for)
Hehe, this earned a +1 from me. I've had standups where the scrum master (who also was a lead developer) spent the first 15 minutes talking about what he did the day before.