There's a contradiction in the way bespoke software developers often try to ensure the best service for their customers. Born of a desire to do a good job, we want, both organisationally and individually, to recognise our failings, remedy them, and learn from them. That's an impulse I'd expect to hear from almost anyone, applied to almost any facet of life. In this particular context it pertains to software faults.
We like to measure performance. It's genuinely important to know when something went wrong so the process can be improved and the problem prevented from recurring. Often it's an individual error, a mistake we learn from; sometimes it's systemic, and sometimes it's technical. The conflict stems from the tension between analysing and correcting retroactively and meeting the customer's immediate needs. Most often, from the customer's perspective, the scenario is that their product does X and they want it to do Y. That may be a change in requirements, or it may be a bug. When it's a bug, we look into it, reproduce it, and compare the current behaviour against our current understanding of the requirements. We examine how we arrived at that understanding, and if somewhere along the way we got it wrong, we put our hands up and apologetically schedule the remedy as soon as we reasonably can. If it turns out to be a requirement change, it is instead dropped into the regular/agreed/contractual release schedule.
This is reasonable, and it meets our customers' expectations.
As sensible as the procedure is, and as happy with it as our customers generally are, it does both of us a disservice. The customer doesn't really care whether it's an error or not; they just want the software to meet their requirements. In all honesty we care more than they do, because we want to monitor and improve our efforts. It's us that's really stuck in the mud.
If we took a less strict view of what is a change and what is a fault and lumped everything together, we'd still have to assess the requirements the customer presents, but the majority of the expensive analysis procedure could be jettisoned. A more narrative method of performance analysis would dramatically reduce the process; all we need to do is shake off our sense of responsibility and replace it with a sense of achievement. Ultimately, the measure of improvement should be how infrequently the customer needs to ask for a change to the system.