One of my favourite blog writers is Eric Sink. I find his writing style entertaining and informative. I do not find Revision Control Systems the most interesting subject matter. I use one, it works, I am happy. But for Eric, they are his speciality. After all, he owns a company that writes them. In a recent article, he discussed the speed vs. storage-space trade-off. He used a source code file containing a single class, whose latest revision was 400KB in size, as his “guinea pig”, and braced himself for a barrage of comments regarding whether such a file constitutes an example of poor coding.
Certainly, when you take into account things such as the single-responsibility principle, it seems unlikely that a single class should grow to such a size. Such a class would seem a prime target for a future refactoring exercise. Refactoring is a worthy cause; there is no shortage of reading material that carefully constructs solid arguments for why it should be done. But a more worthy cause is making software sell. Pet projects and home hobbies aside, software is of no benefit if no one is using it.
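To make the single-responsibility idea concrete, here is a minimal sketch (hypothetical class names of my own invention, not from Eric's file) of the kind of split a refactoring exercise might produce: a class that both calculates and formats an invoice has two reasons to change, so it is divided into one class per responsibility.

```python
# Hypothetical "before": one class owns both calculation and presentation,
# giving it two independent reasons to change.
class Invoice:
    def __init__(self, line_items):
        self.line_items = line_items  # list of (description, amount) pairs

    def total(self):
        return sum(amount for _, amount in self.line_items)

    def render_text(self):
        lines = [f"{desc}: {amount:.2f}" for desc, amount in self.line_items]
        lines.append(f"TOTAL: {self.total():.2f}")
        return "\n".join(lines)


# Hypothetical "after": each class now has a single reason to change.
class InvoiceCalculator:
    """Responsible only for arithmetic over line items."""

    def total(self, line_items):
        return sum(amount for _, amount in line_items)


class InvoiceTextRenderer:
    """Responsible only for producing the textual representation."""

    def render(self, line_items, total):
        lines = [f"{desc}: {amount:.2f}" for desc, amount in line_items]
        lines.append(f"TOTAL: {total:.2f}")
        return "\n".join(lines)
```

In a real 400KB class the responsibilities would be far more numerous and tangled, which is exactly why the refactoring is tempting, and exactly why it costs time that could otherwise go into saleable features.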
Commercial reality dictates that if a new feature will help sell more copies of the software, then adding the feature is what matters. It is true that the more obfuscated code becomes, the harder it is to expand to incorporate new features. I have heard of software projects grinding to a halt because adding new features simply became too difficult.
Using commercial pressures as an excuse to write sloppy code is not acceptable. I have seen examples of code that look like “the first thing that popped into the developer’s head” committed to the Revision Control System. Often, with very little extra thought (read “time”), a neater, better solution could have been found. This is where task estimation is important. In my experience, programmers will use the full amount of time allocated to perform any task. In all likelihood, they will get to the end of that time-frame, realise they overlooked at least one aspect, and then take longer, but that is not the point I am trying to make here!
If you have allocated “a day” to add a feature, then most often, that is how long it will take. If you had allocated “half a day” for the same feature, I would wager that it would have been added in about that half-day timeframe. Granted, this is not always the case, but experience has shown me how surprisingly often it holds true.
This stems from the fact that if programmers know how long they are expected to take, they will get the feature working first, then tinker with the code until the time has elapsed. Some “tinker time” assists overall code readability. If you are not prepared to add “code refactoring tasks” to your project plan (regardless of the project methodology you use), then allowing a certain slackness in task estimation gives your code a fighting chance of staying relatively neat.
When time pressures arise, neatness and accuracy of code are amongst the early casualties. Unfortunately, this seems unavoidable and is simply the price paid to remain profitable. Whilst I strive to write and maintain neat, manageable, accurate code, I live in the real world and know that regularly revised source code more than two years old (Eric’s example was seven years old) will likely be of the “too long / overly complex” variety. I will not be the one to criticise him for that.