
I am a lead developer in a Scrum-driven team.

The way we tend to work in my organisation is this:

Before the start of a sprint, each developer is allocated a percentage reflecting how productive we think they will be during the sprint. For example, a more skilled, more experienced developer will probably be productive for 70-80% of their total time during the sprint. This leaves time for unexpected meetings and bug fixes; I will come to the bug fixes in a moment. We get the estimates for all the tasks signed off and then plan the developers' work.
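To make the arithmetic concrete, here is a minimal sketch of that capacity calculation; the team names, focus percentages, and sprint length below are hypothetical examples, not our actual figures:

```python
# Hypothetical sprint-capacity calculation; all numbers are examples.
SPRINT_DAYS = 10       # a two-week sprint
HOURS_PER_DAY = 8

# Each developer is allocated a productivity percentage for the sprint.
developers = {
    "senior_dev": 0.75,   # skilled/experienced: roughly 70-80% productive
    "mid_dev": 0.60,
    "junior_dev": 0.50,
}

for name, focus in developers.items():
    available_hours = SPRINT_DAYS * HOURS_PER_DAY * focus
    print(f"{name}: {available_hours:.0f} hours for planned work")

# Task estimates are signed off only if they fit within these hours;
# the remaining time absorbs unexpected meetings and bug fixes.
```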

Going into the sprint, the developer carries out their planned work and completes their own testing. If possible, as each block of work is completed, another testing phase takes place, either by the Scrum leader or the product owner (project manager), just to make sure there isn't anything glaringly obvious that needs to be looked at. Anything that comes up in this testing phase goes straight back to the developer who wrote it, to be completed within the sprint. The way we see it, the team has effectively committed to completing the tasks given to us at the beginning of a sprint, so we need to complete them one way or another.

If an urgent bug comes into the team and it has to be done right this minute, then the scrum leader and I will take a view on whether it is possible to get it done without affecting the planned work, depending on how well we are doing. For example, if we are half a day ahead of schedule and the estimate on the bug is half a day, we will do it without changing the planned work. If that's not possible, we go back to the product owner, who decides what has to be pulled out of the sprint.
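As a toy illustration of that decision rule (the function name and return strings are mine, purely for the sketch):

```python
def handle_urgent_bug(buffer_days: float, bug_estimate_days: float) -> str:
    """Decide whether an urgent bug fits without disturbing planned work."""
    if bug_estimate_days <= buffer_days:
        # We are far enough ahead of schedule to absorb the fix.
        return "fix it now; planned work is unaffected"
    # Otherwise the product owner chooses what to pull from the sprint.
    return "escalate to the product owner"

print(handle_urgent_bug(0.5, 0.5))  # half a day ahead, half-day bug -> fix now
print(handle_urgent_bug(0.0, 0.5))  # no buffer -> escalate
```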

If a non-urgent bug is assigned to the team part way through a sprint, the product owner gives it a priority and it remains in our pot. When the product owner comes up with our next set of objectives, he prioritises the bugs and the project work together, and these become our planned items for the next sprint.

The thing to note is that it doesn't matter which project the bug came from. Everything has a priority, and that is what needs to be managed; after all, you only have a certain amount of development resource. Which developer does it depends on several things. You don't always know exactly whose code introduced the bug, especially if it's from a very old project. If the same developer can fix it, there is obviously a time benefit, but that exact developer might not be available. The way we try to work is that any developer should be able to work on any given task. In the real world this isn't always possible, but that is always our end goal.

I realise that I have been beating around the bush here, but in answer to your question about who should do the bug fix, in short this is what I would say:

If the bug is identified during the same sprint in which the work was done, then send it back to the original developer.

If it's urgent, then it has to go to the best person for the task, because it needs to be done as fast as possible. That might not be the person who originally wrote the code; it might be someone with more experience.

If you have prioritised and planned the bug, then you should also have time to work out who is the best person for the job. This would be based on the other work that needs doing, the availability of developers, and your general judgement.

With regards to handovers, these should be fairly minimal. At the end of the day, your developers should be writing code in a way that is clear, clean, and obvious to any developer who has a task that revisits it. It is part of my job to make sure the developers on the team are doing this.

To my mind, part of this falls to the Product Owner: to decide whether some bugs are more important than some cards. If the PO says, "Fix these bugs NOW," then bug fixes should be moved to the top of the list. If there are numerous high-priority bugs, it may be worth having a stabilization sprint where bugs are fixed and no new functionality gets done. I'd be tempted to ask the PO how much time they want spent on bugs, though I'm not sure how practical that is.

The idea of having maintenance developers is nice, but have you considered that there may be some pain in having to merge code changes between what maintenance does and what those developing new functionality do? Yes, this is merely stepping on toes, but I have had some painful merges where two developers spent a day trying to promote code due to the number of changes between a test and a dev environment.

May I suggest having another developer fix the bug, so that someone else picks up how something was coded? Having multiple people work on a feature helps promote collective ownership rather than individual ownership of the code. Also, sometimes someone else may have an easier time with a bug because they have fixed that kind of bug before, though this can lead to a dependency that should be checked regularly.

Why not capture a backlog item called "bug debt" and have the team estimate it each iteration? That item will be used to hold some developer's time to fix it (as in #1).

I'm also a little concerned about the bugs that appear in UAT. Would it be possible to have some of those testing folks on the teams to catch them earlier? This kind of thing is very common in projects where work is thrown over the fence from group to group. The only approach I have seen work is to integrate those other groups into the teams and rethink the testing strategies. Then UAT does what you want it to do: capture usability issues and requirements. You're right that they won't go away completely, but they will be minimized.

Bill: I have often heard of unit tests, but I usually just assume I know what they are. I think I need to make sure I know what you mean here.

Jeff: Unit tests are the lowest level of tests. They test classes, methods, and small assemblies. They look at code to see if it does what the developer wants it to do. Functional tests, on the other hand, look at code to see if it does what the customer or end-user wants. Unit tests are designed to run very fast and look at all the code in a specific section of the application. Test coverage measurements should be made on unit tests.
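To make that concrete, here is a minimal example of what a unit test looks like with Python's unittest module; the apply_discount function is a hypothetical unit under test, not something from the discussion above:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: reduce price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class ApplyDiscountTest(unittest.TestCase):
    # Each test exercises one small behaviour and runs in microseconds.
    def test_typical_discount(self):
        self.assertAlmostEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertAlmostEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```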

If you use a technique called test-first, then no line of code is written except in response to a failing test. That means there is not a lot of code in the system that was produced without a test informing the code's development. This gets interesting, because it means that every line of code has a test associated with it. So if someone changes the code later, the unit tests can show whether anything breaks.
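A minimal sketch of that write-the-test-first cycle, using a hypothetical slugify function as the example:

```python
import unittest

# Step 1 (red): the test is written first and fails, because
# slugify does not exist yet.
class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2 (green): write just enough code to make the test pass.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")

# Step 3: refactor freely; the test will catch any regression.

if __name__ == "__main__":
    unittest.main()
```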

Because the unit tests are so fine-grained, you will know right away what the problem is, without having to debug. Debugging is figuring out what is wrong, and with unit tests you already know what is wrong. So if you make a code change and something breaks, you know the cause immediately. You still need to understand it, but you do not need to spend time finding it.

So the combination of continuous builds and unit tests lets you reach the point where bugs are found right away. In the particular case I was thinking of, we did not need a bug tracking system. In this effort we had twelve developers over eighteen months and only had about five bugs open at any one time. There was no need to prioritize bugs; we said all bugs are important and need to be fixed right away.

Bill: So this makes total sense to me. Why doesn't everyone do it?

Jeff: There are a couple of reasons. One reason is that historically we have considered testing not to be part of development. We have treated testing as something done by QA, often by a different department. Code got thrown over the wall for someone else to deal with, and quite a bit of development was done before testing occurred. Responsibility was passed on rather than shared.

The other reason is that we do not tend to measure debugging effort, so the cost is not apparent. It is almost always much more than people think. When we do not measure the cost of fixing bugs, there is no drive to improve the process. Also, folks just assume there will be lots of bugs; that is just how software is. Well, it does not have to be that way, and I think it is time to change our attitude.

Bill: So is the concept of the immediate fix one of the core practices of Agile development?

Jeff: Let's say that most successful Agile teams just do it. One of the practices in extreme programming (XP) is collective code ownership. In this case an individual does not have ownership of code, so no one can say, "Well, I cannot fix this code because it is someone else's code." In Agile we all own the code, so if I see a bug, I just fix it.

When I teach an Agile or Scrum course, someone will almost always ask a question like, "How do you handle bug fixes in iterations or sprints?" When I ask, "How do you want to handle them?" we get into a pretty interesting discussion. Most people say something similar to, "We should prioritize them with the user stories, size them like we do user stories, and then see what fits into each iteration." I usually smile and ask any developers present if they know ahead of time how long it will take to fix a defect. They ALWAYS say "Sometimes." And THAT is the problem!

How can you actually determine the size of fixing something which is broken in an unknown way? I tell people in my classes that I only know two sizes for defect fixes: 1) trivial, because I already know what's broken and how to fix it, or 2) infinite, because I have no idea what's broken or how to fix it! If those are the only two sizes available to us, how can we possibly put them into iterations effectively?

I have found one effective solution to be the use of Kanban techniques for defect fixing. I don't want to get into what Kanban is or isn't and when it should or shouldn't be used, so I'll just lay out what I have seen be effective for a number of teams:

1. Prioritize the defect list. This is NOT done in the context of user stories, but separately. The
list is prioritized however the Product Owner says it should be prioritized.

2. The team and Product Owner decide on how much effort (time) should be used each iteration
to work on defects. Hopefully this is not a large amount, but it might be for teams which have
large numbers of defects in a legacy system.

3. The team determines when the defect-fixing time occurs and how they do it. Most effective is to put a gate or two in place on the defects (see the sketch after this list). For example, gate 1 may say the developer needs to know within 2 hours whether the defect is going to take more than a day to fix; if so, it is put off until a discussion can take place with the Product Owner. Gate 2 may be that after a day, if the defect is not fixed, another discussion needs to take place. However the gates are set up (if they are), the defects are worked in priority order.

4. Limit the number of bug fixes being worked at one time to a very small number. If you don't do this, you will have each developer working on at least one defect and run the serious risk of none of them getting fixed before the iteration or sprint ends!
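Here is a rough sketch of the gates from step 3 and the WIP limit from step 4; the class, thresholds, and function names are all illustrative assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

GATE_1 = timedelta(hours=2)  # must know by now if the fix exceeds a day
GATE_2 = timedelta(days=1)   # still unfixed after a day -> discuss again
WIP_LIMIT = 2                # very small number of fixes in flight

@dataclass
class Defect:
    title: str
    priority: int        # lower number = higher priority
    started: datetime

def can_start_fix(in_progress: list) -> bool:
    """Step 4: refuse to start another fix past the WIP limit."""
    return len(in_progress) < WIP_LIMIT

def gate_action(defect: Defect, now: datetime, will_exceed_a_day: bool) -> str:
    """Step 3: what the gates call for at this moment."""
    elapsed = now - defect.started
    if elapsed >= GATE_1 and will_exceed_a_day:
        return "put it off; discuss with the Product Owner"
    if elapsed >= GATE_2:
        return "hold a second discussion with the Product Owner"
    return "keep working in priority order"
```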

This four-step approach allows the team to work on defects in priority order while allowing a set amount of time to be spent on them. The amount of time spent can be changed as needed to address the business needs of the organization at any point in time.

The downside is that no one can tell a stakeholder something like "that bug will be fixed by date X" or "we'll knock out X bugs this iteration." Saying anything like that is a lie anyway, so this shouldn't be a big issue. I say these statements are lies under the assumption that the defects are non-trivial.

How else have you managed a defect backlog effectively? I'd love to have more proven techniques for people to experiment with!

Reply to this post:

We record all issues/bugs/problems, call them what you want. However, once recorded they are never deleted, and for good reason (explained below). We use state values including Deferred/Approved (we think we should fix it, just not now), Deferred/No-Fix-Planned (not worth the time/cost/effort to fix), and Rejected/Approved (with a reason). By keeping these, we have proof (to ourselves) that we have seen an issue and made a decision about how to treat it. It also keeps folks from recording the same thing over and over again. Any issue can be reinstated later if it turns out to be bothersome to a customer or the product owner changes his mind. Note that our general policy is to fix what we can during the project in which issues are found, but as you all know, that isn't always possible, and so we use the states mentioned above. Once in a while, we go through the Deferred/Approved list and change some to Deferred/No-Fix-Planned, because it has become apparent that there is no need to fix them.
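For illustration, those states could be captured as a simple enumeration; the state names mirror the post, while the demotion helper is my own hypothetical addition:

```python
from enum import Enum

class IssueState(Enum):
    OPEN = "Open"
    DEFERRED_APPROVED = "Deferred/Approved"              # fix it, just not now
    DEFERRED_NO_FIX_PLANNED = "Deferred/No-Fix-Planned"  # not worth the cost
    REJECTED_APPROVED = "Rejected/Approved"              # rejected, with a reason
    FIXED = "Fixed"

# Issues are never deleted: a deferred issue can be reinstated if a
# customer finds it bothersome, or demoted during a periodic review.
def demote_if_no_longer_needed(state: IssueState) -> IssueState:
    if state is IssueState.DEFERRED_APPROVED:
        return IssueState.DEFERRED_NO_FIX_PLANNED
    return state
```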

Acceptance testing tool: http://fit.c2.com/

I usually recommend tools like FitNesse, Fit, Selenium, Cucumber, Watir and WatiN as starting points.
