Last week I did some pair debugging with a co-worker. We had an automated acceptance test (it took about 10-15 seconds to execute) that showed the unexpected behavior: a collection that should have contained 8 items contained 9.
When you are debugging a problem, your goal is to find out what the hell is going on as fast as you can - to learn as fast as possible. After you know what the problem is and where it is located, you can concentrate on fixing it, making the code beautiful, and writing tests that protect you and others from similar problems.
At first we discussed implementing a "unit" test that would reproduce the problem, but we didn't do it. The reason was that the automated acceptance test was already sufficiently fast, and we wouldn't really learn anything new by transforming the existing test case into a faster format. By sufficiently fast I mean that we spent the major part of our time reading and writing code, thinking, and discussing the problem.
Based on my experience, you shouldn't spend time (while debugging) automating or optimizing a test that is already fast enough for debugging - manual tests might also be OK at this point. If most of the time (for example 95%) is spent thinking about where the problem could be, a test that is 10 times faster might not really speed things up at all (of course it might pay off if making the test is a very cheap operation) - when you optimize the original test case, you are concentrating on already known information and spending time doing it. (The situation is of course very different if the test takes 95% of your time and you have a way to optimize it.)
Instead you should concentrate on learning. This means finding new information that will guide you to the source of the problem. The most important things to find out first are the things that you assume to be true but aren't - test your assumptions.
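A cheap way to test an assumption is to assert it directly at the point where you believe it holds. Here is a minimal sketch in the spirit of the 8-vs-9 items bug above; the `build_items` pipeline and its data are hypothetical, not the actual code we debugged:

```python
def build_items(raw_rows):
    """Hypothetical pipeline step that is supposed to deduplicate rows."""
    seen = set()
    items = []
    for row in raw_rows:
        # Assumption under test: this key is unique per logical item.
        key = row.strip().lower()
        if key not in seen:
            seen.add(key)
            items.append(row)
    return items

rows = ["a", "b", "b ", "c"]  # "b " differs only by trailing whitespace
items = build_items(rows)

# Temporary assertion stating what we *assume* to be true. If it fails,
# we have learned something; if it holds, we can rule this spot out.
assert len(items) == 3, f"expected 3 items, got {len(items)}: {items!r}"
```

If the assertion fires, the message tells you exactly which assumption broke and with what data, which is often faster than stepping through a debugger.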
There are many options that can be used for debugging, such as:
* Additional tests (not part of the original reproducible case)
* Debug prints (I'm working with Python and don't need to do any recompilation)
* Lucky guess fix (some might call this a scientific guess)
* Running code in a debugger
* Running code in a profiler (when we are talking about performance issues)
* Bisect (if the problem is a regression) - this seems to almost automatically solve regression problems
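The bisect option deserves a sketch, because the idea is so simple: binary-search the revision history for the first bad revision, which is what `git bisect` automates for you. The revisions and the `is_bad` predicate below are hypothetical stand-ins (in practice `is_bad` would check out a revision and run your reproducing test):

```python
def first_bad_revision(revisions, is_bad):
    """Binary-search for the first revision where is_bad(rev) is True.

    Assumes revisions are ordered oldest-to-newest and that once the
    regression appears, every later revision is also bad (good -> bad
    exactly once)."""
    lo, hi = 0, len(revisions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid        # regression is at mid or earlier
        else:
            lo = mid + 1    # regression is after mid
    return revisions[lo]

# Usage: pretend the regression appeared in revision 13 of 20.
revs = list(range(1, 21))
print(first_bad_revision(revs, lambda r: r >= 13))  # prints 13
```

With n revisions this takes only about log2(n) test runs, which is why regressions tend to fall so quickly to bisection - even a 10-15 second acceptance test is cheap enough to run twenty times.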