Tag Archives: Quality

Good vs. Better

Despite my desire for an Android phone (puns are never quite as good when intended), I actually quite appreciate Apple's iOS and its consumer-based products. I am not the sort of consumer with an insatiable appetite for the latest piece of technology, and the timing was not right for me to buy a replacement for our “go-anywhere” laptop. When the time comes to replace our current run-of-the-mill laptop, I would seriously consider an Apple iPad as a worthy replacement. Of course, new technology will come along before then, so it is far from a guarantee. But it is hard to see any other manufacturer making the concerted effort to produce a slicker device in the category.

Apple advertising pitches the iPad as a revolution. The revolutionary part is not the form-factor, nor the hardware, but rather the care-factor that went into developing the device. Prior to this device, using a computer was akin to travelling to a foreign country where the residents spoke a foreign language. Sure, you could get around, but it was a slightly difficult experience. If you went to the trouble of learning the environment/language, you got more out of the experience. The iPad is more like travelling to a neighbouring country that speaks the same language. The learning curve is almost non-existent.

This salt-pan-like learning curve enables people to feel an immediate mastery of their environment. It is empowering, which in turn leads to favourable experiences. One of the aspects I admire about the iPad is the quality of the default applications on it. Too often in the computing industry, the term “quality” is equated with “lack of defects”. Quality should extend to usability and fitness for purpose.

The derogatory term “gold-plating” is levelled at some developers. For those unfamiliar with the concept, it basically suggests that a developer spends too much time perfecting some piece of code. After all, “the perfect is the enemy of the good”. The longer your software takes to develop, the more funds it requires. Prior to the software generating income, this is a financial burden. Therefore, it is important to avoid “gold-plating”, especially for “version 1” programs. But avoiding gold-plating is not an excuse to turn in poor work.
Apple have shown that going to the extra effort to produce good quality software can be financially rewarding. If people like your product enough to pay for it, then making software that more people like has an obvious incentive. If you work in a large corporate environment, turning out systems for internal use, the benefit is not as immediately obvious. If you think about the cost to your business in terms of lost productivity and training programs, you can start to appreciate that easier user interfaces save money, even if they do not generate it.
Making better software is hard work. A lot of effort has been expended on producing systems that lead to less defective software. Revision control systems, unit testing frameworks, continuous build tools, static code analysis tools and more all aim to reduce software bugs. Can you name one automated tool that provides any support for developing better user interfaces? I suspect the lack of these tools is due to the level of intelligence required to produce a meaningful analysis of a user interface. Writing software that analyses a UI suggests that the software understands what the UI is. I am no expert in the field of artificial intelligence, but I would suggest we are not quite at that level yet!
Deciding that human intelligence is needed to design user interfaces does not mean that automated tools cannot assist. The “usability” of clickable targets is affected by things such as their size and location. The usefulness of text is reduced by overall length (oh, the irony in this blog post!). Modal dialogues provide “speed-bumps” to the user’s “work-flow”. Such aspects are quantifiable. Maybe tools that measure such things already exist, but they certainly do not attract the attention of the programming masses.
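As a toy sketch of what such a tool might look like: a simple checker that flags the quantifiable issues mentioned above. The widget structure, rule thresholds (44 pixels, 60 characters) and function name are all invented here for illustration, not taken from any real product.

```python
# A toy UI "lint" pass. The 44-pixel minimum target size and the
# 60-character label limit are illustrative thresholds only.

def check_widgets(widgets):
    """Return a list of warnings for quantifiable usability issues."""
    warnings = []
    for w in widgets:
        name = w["name"]
        # Small click targets are measurably harder to hit.
        if w.get("clickable") and (w["width"] < 44 or w["height"] < 44):
            warnings.append(f"{name}: click target smaller than 44x44 px")
        # Overly long text reduces its own usefulness.
        if len(w.get("label", "")) > 60:
            warnings.append(f"{name}: label longer than 60 characters")
        # Modal dialogues are "speed-bumps" in the user's work-flow.
        if w.get("modal"):
            warnings.append(f"{name}: modal dialog interrupts work-flow")
    return warnings

widgets = [
    {"name": "ok_button", "clickable": True, "width": 30, "height": 20,
     "label": "OK"},
    {"name": "confirm_dialog", "modal": True, "label": "Are you sure?"},
]
print(check_widgets(widgets))
```

No intelligence is required for checks like these, which is exactly the point: the judgement stays with the human designer, while the measurable parts are automated.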

Apple is leading the way in terms of “user-focused” software and there is nothing wrong with having a market leader that is doing a “good job”. Here is hoping that others will help raise the overall standard further by continuing to compete with Apple’s products!

Does Technical Debt Matter?

I have some strong views on code quality.  One of my professional goals is to always attempt to improve my coding with the aim of producing better code.  In this day and age, making software “less broken” is about the most I can hope for.  I cannot foresee a time when written software becomes “perfect” / “bug free”.  Maybe it will – I have learnt: never say never…

Anyway, this is an article akin to playing devil’s advocate.  I am not particularly comfortable with what I suggest below. I have written it purely to get people thinking about the time and effort expended writing software.  As always, I encourage your comments – positive or negative.

One of the odd things about the software industry is that code “rots”.  This is somewhat strange.  Source code, written in text files, does not “degrade”.  Unlike organic reproduction, copying a file leads to a perfect reproduction.  If you kept a copy of code written say twenty years ago, it would still be the same today as it was then.  Things change rapidly in the computing industry.  As a result, it is extremely unlikely that you could use that twenty-year-old code on a modern computer.  A different form of “rotting code” exists precisely because the code does change.  Over time, countless little hacks or quirks can be added to an active code base that lead to obfuscation and “unhealthy” code.

The common technique for reducing code-rot is refactoring.  Backed by comprehensive unit tests, refactoring exercises help keep a code base current and ensure that changes made do not lead to regression bugs.  Working on a well-maintained code-base is a more pleasant experience for a developer.  Well-maintained code is easier to extend and developers have less fear of making mistakes.
“Technical debt” is a term coined by Ward Cunningham and refers to the “price paid” for releasing code for the first time.  He argued that a small debt was useful to incur, as it helped speed up the development process.  It was okay to incur some debt as long as it was “paid back quickly” by refactoring the code.

“The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation…”

Productivity graphs show how progress drops away on poorly maintained projects.  The longer the project runs, the harder it becomes to add features or fix bugs without causing other regressions.  The solution to the problem is to avoid as much technical debt as possible. Following best practices will help achieve this goal.  But is this done at too high a cost?  Following best practices adds extra work and thus does slow down development early in the life-cycle of the project.

Not repaying technical debt will grind a software project to a halt.  If you use an analogy of credit card debt equating to technical debt, having the software project grind to a halt is the equivalent of declaring bankruptcy.  Obviously, this is not a great outcome, but it is not the end of the world either.

What if your software has run the term of its natural life?  Your software will have been written to meet a specific need.  Maybe that need is no longer there.  Maybe that need is now perfectly met, or the market has been saturated and sales have evaporated.  Maybe every feature that could be added has been added (including reading mail).  If the project gets “sun-setted”, does it really matter how much technical debt is left in the code base?

Not “doing things the best I can” is something I struggle with.  “Doing the best I can” and “doing things well” do not necessarily mean the same thing.  Obviously, the process of software development happens on a spectrum.  Software tends not to be written “the best way” or “the worst way”, but rather somewhere in between.  If the process your team uses is close enough to the “best way” end of the scale not to be crippled by technical debt, then maybe that is good enough.

Say “No” to Band-aids!

Sooner or later, there will be a need to fix bugs in whatever software you work on.  How long it takes to fix a bug tends to be non-deterministic.  Some bugs will be easy to fix, and others not so.  Unfortunately, bug fixes on commercial software are often done the wrong way – under the guise of being done quickly.  The “band-aid fix” is the wrong way of fixing a problem.  The metaphor of the “band-aid fix” extends beyond the software industry, but I.T. has turned it into a real art-form.

At the heart of a lot of band-aid fixes is the notion that you can fix a problem without really knowing what the problem is.  Commercial reality may well prevent a code base from being perfect, but the more band-aids that are applied to the code, the worse the software becomes to work on.

There may be a genuine need to apply a band-aid fix to code.  When there is a real financial loss or damage to customers’ data, expediting a fix is understandable.  Removing this kludged fix should be given a high priority.  It is important to recognise that the band-aid won’t continue to hold a large wound together!  If you do not remove the band-aid and perform proper surgery, the wound will rot.  Once you allow the code to start “rotting”, it becomes difficult to arrest this negative momentum.  It damages the maintainability of the code and encourages other programmers to act irresponsibly too.  It is difficult to put enough emphasis on this point.

Depending on the culture in the workplace, it can be easy to dismiss fixing “less than ideal” code.  Studies have shown how counter-productive poorly maintained code is to development productivity.  I have yet to work with someone in the software industry who would disagree with that thought, yet barriers are still erected that prevent acting upon it.  There is a vicious circle alive and well in parts of the software industry:

  • Code is recognised to be poorly maintained.
  • Poorly maintained code is recognised to hinder productivity.
  • People are too busy to fix old code.

I cannot believe people do not see the irony in this!  Allowing software to get into this vicious circle is the first mistake.  Programmers need to promote the culture that refactoring is not a luxury, but a necessity.  Allowing some refactoring time on all development work can avoid the problem in the first place.  Digging your software out of the hole created by the vicious circle is altogether a more expensive proposition.  Not refactoring the code at all is even worse!
The idea that refactoring time needs to be allocated alongside development time appears to imply that you will not be able to push out new features as quickly.  At a superficial level, this is true enough, but over the lifetime of the code base the argument does not hold up.  The neater and more maintainable the code base, the quicker it is to work on.  In other words, good code is easier to add features to.

The biggest problem I see with a “band-aid” fix is simply that it is not a fix at all!  It cures a symptom in rather the same way that pain-killers can stop broken legs from hurting.  It masks the issue – but it does not mean you’re right to walk home!  Masking problems in software just makes them harder to track down.  Software can be complex enough for one bug to be at the root of several problems.  If you only mask the problem, you never know where else the bug will show up.

Office Politics

When you work with other people, “office politics” will always be a factor.  I have heard people say that they did not like office politics as if it were something that they could avoid.  I am not talking about the sort of “Office politics” resulting in the metaphorical stabbing of fellow co-workers in the back. It is true that office psychopaths definitely attempt to manipulate co-workers for their own purposes, but a lot of daily interactions can also be seen as a form of “office politics”.

Internal restructuring has seen my role change recently.  I was working on a framework team, providing code (and various other infrastructure) to various teams in my company.  My “customers” were the teams that wrote the applications that sold to the real customers…  That is, the ones that paid money!
Since the restructure, I have been moved onto one of these teams as a senior developer.  Former “customers” are now team-mates.  When I was working on the framework, I had a certain perception of how our code was being used to create the end-product.  Now that I have become exposed to their code base, I have discovered the truth behind how they use the framework code!  The fact that there are differences indicates some degree of a breakdown in communications.  There is nothing catastrophic about what they have done, but it shows a disparity between the directions the framework and end-products were heading.  This difference was due largely to the difference in motivations between the two teams.

The framework was responsible for the core of twelve different applications.  As such, consistency and flexibility in the architecture were highly valuable commodities.  I would not be presumptuous enough to claim that we succeeded in providing the perfect architecture every time, but those were primary goals of the code we wrote.

The end-products have a far more tangible goal: to make money.  I am not in product management, but having developed commercial software for a long time now, “making money” tends to be about writing software that adds features customers want and alleviates the worst of the bugs that have been reported.  In terms of priority, “adding wanted features” is the more important.  Trade shows never focus on showing customers how the software no longer crashes when you do steps X, Y and Z!

In terms of how this affects code in the long-run, there is a natural tendency to leave “good enough” code alone.  Short term deadlines enforce short-term thinking.  Commercial reality allows code to deteriorate in rather the same way that an untended garden becomes an overgrown jungle.  Active pruning and weeding would avoid the problem, if only it were seen as an important goal.
Given that the software still sells and still provides real value to the customers, it can be seen as an unimportant goal.  The fact that new features become difficult to shoe-horn in to the existing code base is seldom given consideration when writing code.  Which brings me back to my original point on office politics.

Now that I am on a new team, I see the “weeds in the garden”.  No individual issue is worthy of much attention, and so the existing team members simply ignore such issues for more important work.  I would highly doubt there is a piece of commercial software being sold that does not have some degree of this occurring in its own code base.  I have worked alongside my new team members for many years.  They know how important code quality is to me and I know that they will expect me to try and improve their code’s overall quality.  Here is where the “office politics” lie.  I could just blunder in and make changes which I believe are for the better.  I have known programmers who would.  Different cultures in different parts of the world would probably react differently to such an “intrusion”.  In Australian culture, this would not go down well, and so it will not be the approach I will take!  I am also someone who is only too painfully aware of my own shortcomings as a programmer.  So tact and a measured approach, remembering one’s own shortcomings, will definitely be the order of the day.  See, even in the most ordinary of jobs, “office politics” will play a role!


It takes a decent amount of time and effort to design a good user-interface.  One of the problems faced when making a user-interface is that it can take an enormous increase in effort to turn an ordinary interface into an extraordinary one.  You may have come across a user interface (be it for a web-site or an application) and been absolutely flummoxed by its operation.  Unfortunately, that does not mean that a great deal of time and effort was not spent trying to simplify it.  (Of course, it may mean that no time and effort were spent trying to get it right!)

There is an extra pressure on designers of external web-sites.  Get it too far wrong and your customers go off to your competitor’s web-site.  In my experience, application developers can get away with worse user-interfaces.  If the program has the features people want, people will make the effort to learn how to use the application.  This should not be seen as an excuse not to care about the user-interface.  There is a saying that if your customers are not aware a feature exists, then it doesn’t.  Unfortunately, most user interfaces end up obscuring some functionality.  In a feature-rich application it becomes increasingly difficult not to do so.

Every time I hear a person talk about “learning” software, I feel that somehow the software has failed.  I would like software to be so intuitive that using it is “natural” – rather than a learned action.  It is probably an unrealistic expectation that all software will be like this, but that does not stop it being a worthy goal to work towards.

When I talk to non-technical people about using software, the thing that becomes apparent is that they all expect to have to learn how to use it.  No-one expects to sit down in front of a new word-processor and just use it to do their job.  One disheartening example came with the release of Microsoft Office 2007.  For me, the ribbon was a huge usability enhancement over the traditional tool-bar and menus approach.  The one resounding criticism I heard of Office 2007 was from existing Office 2003 (and prior) users:

“I used to know where everything was and then they went and changed it all.  Now I have to re-learn where things are”

Microsoft puts a great deal of time and effort into usability.  Hopefully, this means the learning curve for Office 2007 was not as severe as with previous versions.  The ribbon was designed to be “a better way”: a task-oriented user-interface is meant to be superior to a function-oriented one.  People have been “brought up” thinking along the lines of functional software, rather than thinking the computer will aid them in completing their task.  This mind-set will change over time with the widespread adoption of task-oriented user interfaces.

If you ever have to write a user-interface remember this:

  • You either spend your time getting it right, or ask the users to spend their time figuring it out.
  • The world does not need more software that is difficult to use.


Refactoring is not a dirty word.  Time will always apply pressure on a project.  When this occurs, there is a tendency to want to cut corners.  If you are a “doer” coder, then I would expect the first working version of your source code is not particularly neat.  Your task is not over yet!  Make the effort to refine variable, function and class names.  Make sure you identify what is wrong with your code and fix it there and then. 

Unit tests are your friends here.  Well-constructed unit tests help you to refactor fearlessly.  You will know if you break your code when you refactor, as your unit tests will fail.  It is important that you have sufficient coverage with these tests.  If you miss boundary cases, or do not test each code-path, you are leaving yourself exposed to introducing bugs.  Sometimes a few unit tests are worse than having no unit tests, as they can lead to a false sense of security.
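As a minimal illustration of the point about boundary cases (the function and its tests are hypothetical, chosen only for brevity), a handful of assertions covering both boundaries and both out-of-range sides lets you reshape the implementation without fear:

```python
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

def test_clamp():
    # Boundary cases are exactly where thin test suites leave gaps.
    assert clamp(-1, 0, 10) == 0     # below the lower boundary
    assert clamp(0, 0, 10) == 0      # on the lower boundary
    assert clamp(5, 0, 10) == 5      # within range
    assert clamp(10, 0, 10) == 10    # on the upper boundary
    assert clamp(11, 0, 10) == 10    # above the upper boundary

test_clamp()  # raises AssertionError if a refactoring breaks the contract
```

If you later rewrite `clamp` (say, as `max(low, min(value, high))`), the suite tells you immediately whether the behaviour survived. Drop the boundary assertions and that safety net becomes the false sense of security mentioned above.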

Having unit tests can also help you to produce less tightly coupled code.  Such code is generally better abstracted and hence more amenable to reuse.  In fact, the code you write is already used in two places: firstly in your application, and secondly in your unit tests.

Test-driven development is the technique of writing test-cases and then writing the code that passes them.  (There is more to it than that – but the fact that people write whole books on the subject probably tipped you off.)  There is the concept that you write just enough code to pass the unit tests and no more.  It is a great way of ensuring code brevity.  Less code means less chance for bugs to exist.  It also helps to remind you of what you are trying to achieve in the code that you are writing.

Unlike some cases I have seen, refactoring is not “throwing away” a code base and starting again.  That is rewriting!  I have not seen much public endorsement of the rewriting technique for improving a code base.  Refactoring is merely the art of neatening the code.  A compiler will “understand” what it has to do, no matter how nasty (and bug ridden) the code may be.  The purpose of refactoring is to allow another human to understand what the code does.  As humans are (relatively) good at pattern matching, looking for repeated code blocks is a good first step when refactoring.  This code is not necessarily going to be multiple statements, or indeed even one complete statement.  Sometimes this repeated block is worthy of its own function, sometimes it may just be good to assign the result of the code block to a local variable.
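As a small sketch of that first step (the greeting scenario and all names are invented for illustration), here is a repeated block extracted into a function of its own, with its result then held in a local variable:

```python
# Before: the same "full name" expression appears three times.
def greeting_before(person):
    if (person["first"] + " " + person["last"]).strip() == "":
        return "Hello, stranger!"
    return "Hello, " + (person["first"] + " " + person["last"]).strip() + "!"

# After: the repeated block gets a name of its own...
def full_name(person):
    return (person["first"] + " " + person["last"]).strip()

def greeting_after(person):
    name = full_name(person)  # ...and its result lives in a local variable
    if not name:
        return "Hello, stranger!"
    return "Hello, " + name + "!"
```

The behaviour is unchanged, which is the whole point of refactoring; but the intent of the expression is now spelled out for the next human reader, and any future change to how a full name is built happens in exactly one place.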

It is worth spending some time on tidying the code.  I have yet to come across any firm metric to help you determine how much time this should be.  You don’t want to be accused of “playing” with the code, the way a child “plays” with food they don’t want to eat!  When refactoring existing code, it can be worth making multiple check-ins.  If your project uses a continuous integration product and has sufficient test case coverage, this helps prevent you from breaking code.  If you don’t use these tools, multiple check-ins still provide a level of transparency to your work.  It will be easier for someone to understand how the code evolved into its current state.  This may be important in case bugs are introduced.

Remember that the result of this work is not to produce bug-free code, but code where bugs cannot hide!  You are aiming to make it readable for the next person by making the learning curve as gentle as possible.  Why bother making it easy for the next person?  Well, it may just be you!


Doers and Thinkers

It is possible to divide groups of people in a great number of ways.  Left-handers and Right-handers, women and men, so forth and so on.  Sometimes two groups are mutually exclusive and adequate to cover all of a population base, sometimes (even in the above categories) they are not.  I am about to discuss two groups of programmers – but I would fully expect that there are programmers out there who don’t fit either category as precisely as I will describe them.

Category 1, which I shall call “the doers”, includes the majority of programmers I have seen and have read about.  Their personal coding style can be bluntly referred to as a “trial-and-error style approach”.  That actually sounds far too much like a harsh criticism.  The majority of these programmers are not clueless monkeys prodding away at a keyboard.  They have a reasonable degree of knowing what to do, but the detail is lacking.  As they code, details come into focus, nuances are dealt with and problems are overcome.  The beauty of this style of coding is that it is easier to deal with when writing.  There is no need for a detailed analysis of all aspects of the code, which in turn makes it easier to concentrate on a specific part of the code.

Category 2 I shall call “the thinkers”.  They represent a much smaller group of programmers.  Their approach is more of intense thought followed by intense typing – with rarely a backspace key pressed.  I would suspect that this group would have a far better ratio of keys-pressed to source-code output.  These are the true geniuses in the field, but based on my experience, they make up no more than one or two percent. 

In reality, even the “doers” utilise and require a high degree of concentration to perform their task.  The biggest difference tends to be that the first version of working code the “thinkers” write will be neat and orderly, whereas some degree of refactoring will be required for the “doers” to turn in work of a similar quality.

As mentioned in my opening paragraph, I do believe that there are times when you cannot neatly pigeonhole developers into only one of the two categories.  Depending on the task at hand, a thinker may act as a doer or vice versa.

Personally, I place myself in the “doers” group.  I used to feel convinced that the “thinkers” was where I wanted to be as a programmer.  Indeed, I recommend thinking about problems as opposed to blindly trying anything that springs to mind.  These days however, I am more forgiving of my own ability or lack thereof.

The computing industry seems to be becoming aware that most programmers are not “thinker” programmers.  Many agile and XP style approaches to programming rely on this.  Concepts such as pair-programming and test-driven development tend to favour “doers”.  For example, the fact that a “thinker” will visualise the code in their head for a long time prior to writing it down makes it hard for them to participate in pair-programming environments.

The real danger is that thinkers will not be given a working environment that suits their needs.  One size does not fit all.  As the majority of the industry is made up of “doers”, it is important that methodologies are followed that allow them to produce output that a “thinker” would be proud of.  Essentially, that boils down to allowing time for refactoring.  The code is not done the moment it “works”.  Newer methodologies accept this and build it into the process of writing software.  But that is a story for another time.

Identifying what is wrong with code

There are a number of head-nodding moments in Clean Code.  The book mentions many coding guidelines and good-code principles that aim to simplify the code that is written.  Software development is definitely a career where you must continuously learn.  Even so, I’m somewhat disappointed to be this far along in my career before learning the names of certain concepts that until now I have only known as “common sense”.

There have been times when I have identified “bad code” but, pressed to state what was so bad about it, found myself speechless…  This does not make the task of improving the code impossible, but it can leave you unable to justify and/or quantify the changes you make.

Here are some of the more common rules:

One level of abstraction per function.

This rule is not given a “trendy term”.  Unlike the other rules I’ll introduce, it deals with functions rather than classes, but it is still important.  The rule possibly came from the “days of old” when top-down procedural programming was all the rage.  Sans a lengthy example, it is best summarised as: do not mix details with abstract concepts in the one function.  If you stick to the idea that each function should only do one thing, you will go a long way towards avoiding mixing abstract concepts and technical detail.
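A short sketch of the idea (the report-generation scenario is invented for illustration): the first function mixes the high-level steps with string-formatting detail, while the second version keeps each function at a single level of abstraction:

```python
# Mixed levels: the high-level "produce a report" logic sits right next
# to low-level column-padding and number-formatting detail.
def report_mixed(orders):
    lines = ["Order Report"]
    total = 0
    for o in orders:
        total += o["amount"]
        lines.append(o["id"].upper().ljust(10) + ("%8.2f" % o["amount"]))
    lines.append("Total:".ljust(10) + ("%8.2f" % total))
    return "\n".join(lines)

# One level per function: report() now reads like a summary of the task,
# and the formatting detail lives one level down in format_line().
def format_line(label, amount):
    return label.ljust(10) + ("%8.2f" % amount)

def report(orders):
    lines = ["Order Report"]
    for o in orders:
        lines.append(format_line(o["id"].upper(), o["amount"]))
    lines.append(format_line("Total:", sum(o["amount"] for o in orders)))
    return "\n".join(lines)
```

Both produce identical output; the difference is purely in how easily a reader can follow what `report` does without wading through padding widths.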

Single Responsibility Principle

The Single Responsibility Principle (SRP) states: there should never be more than one reason for a class to change.  The Object Mentor web-site has a series of PDFs discussing various aspects of clean code, and has this (in part) to say of SRP:

“If a class has more than one responsibility, then the responsibilities become coupled.  Changes to one responsibility may impair or inhibit the class’ ability to meet the others.  This kind of coupling leads to fragile designs that break in unexpected ways when changed.”

In a rather circular fashion, the definition of a responsibility in SRP is “a reason for change”.  In other words, a class written to obey the SRP has only one responsibility.  If we were to develop an object-oriented “Space Invaders” game, we may end up with a class for the aliens that looks like:

 public class Alien
 {
     public void Move()
     public void Render()
 }

However, this is a violation of SRP.  We now have two reasons for changing this class.  If the AI model changed, the calculation of the Move method may require changes – likewise if there were changes to the Graphics subsystem, the Render method would alter.
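One way to resolve the violation (a sketch of my own, not the book's prescription) is to split the two responsibilities into collaborating classes, so that a change to the AI model and a change to the graphics subsystem each touch only one class:

```python
# Sketch of an SRP-compliant split: movement and rendering each live in
# their own class, and Alien merely coordinates the two.
class MovementModel:
    """Changes here are driven by the AI model only."""
    def next_position(self, x, y):
        return x + 1, y  # placeholder: march one step to the right

class Renderer:
    """Changes here are driven by the graphics subsystem only."""
    def draw(self, x, y):
        return f"alien sprite at ({x}, {y})"  # placeholder for real drawing

class Alien:
    def __init__(self, movement, renderer, x=0, y=0):
        self.movement = movement
        self.renderer = renderer
        self.x, self.y = x, y

    def move(self):
        self.x, self.y = self.movement.next_position(self.x, self.y)

    def render(self):
        return self.renderer.draw(self.x, self.y)

alien = Alien(MovementModel(), Renderer())
alien.move()
print(alien.render())
```

`Alien` still presents the same `move`/`render` interface to the rest of the game, but each collaborator now has exactly one reason to change.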


Law of Demeter

This is also known as the Principle of Least Knowledge.

“The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents).”

Object-oriented notation assists here, by having different scopes for the methods and member variables defined in a class.  Clean Code points out that you will often see private member variables (good so far) exposed through public getter and setter methods (not so good).  This exposes the implementation details of a class, allowing other classes to violate the Law of Demeter.  Better (so the book argues) to expose the data through abstract concepts.

“A class does not simply push its variables out through getters and setters.  Rather it exposes abstract interfaces that allow its users to manipulate the essence of the data, without having to know its implementation.”

The example given talks about a vehicle class that exposes the “percentage of fuel remaining” as opposed to “fuel tank capacity” and “gallons remaining”. 
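A short sketch of that vehicle example (the class and method names are my own invention, not the book's code): the first version leaks the tank's implementation through getters, forcing every caller to do the arithmetic; the second exposes only the abstract concept callers actually need.

```python
# Leaky: callers must know about capacity and gallons to get a percentage,
# coupling them to the tank's implementation.
class FuelTankLeaky:
    def __init__(self, capacity_gallons, gallons_remaining):
        self.capacity_gallons = capacity_gallons
        self.gallons_remaining = gallons_remaining

    def get_capacity(self):
        return self.capacity_gallons

    def get_gallons_remaining(self):
        return self.gallons_remaining

# Abstract: the class answers the question callers actually have, and is
# free to switch to litres (or a sensor reading) without breaking anyone.
class FuelTank:
    def __init__(self, capacity_gallons, gallons_remaining):
        self._capacity = capacity_gallons
        self._remaining = gallons_remaining

    def percent_remaining(self):
        return 100.0 * self._remaining / self._capacity

leaky = FuelTankLeaky(12.0, 3.0)
print(100.0 * leaky.get_gallons_remaining() / leaky.get_capacity())  # caller does the maths
print(FuelTank(12.0, 3.0).percent_remaining())                       # class does the maths
```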


This is by no means a definitive list of ideas presented in the book.  From my experience, violations of these principles represent the vast majority of “ugly code” that I have seen.  Once these coding techniques have been highlighted, it is possible to see where improvements can be made in existing code.  By refactoring code to utilise these principles, you can identify that tangible improvements have been made.

Comments revisited

About a year ago I wrote an article on comments.  That article left me feeling uncomfortable.  In it I made a point that people may think that their code is too simple to need comments.  Indeed, it is not beyond the realms of possibility that this is in fact the case.

I’ve been reading a book called “Clean Code – A Handbook of Agile Software Craftsmanship” by Robert C. “Uncle Bob” Martin.  He devotes a whole chapter to comments – which is highly commendable.  His thoughts on comments can roughly be summarised by the following extract:

The proper use of comments is to compensate for our failure to express ourself in code. … We must have them because we cannot always figure out how to express ourselves without them. …  Every time you express yourself in code, you should pat yourself on the back. Every time you write a comment, you should grimace and feel the failure of your ability of expression.

Despite my earlier post, I have reached the conclusion that I agree with him!  Indeed, the argument “My code is too simple to need comments” should describe the code you are aiming to write.  Being objective in your analysis of your code is the hard part.  Martin’s main objection to comments is what I was alluding to in my earlier post:  they tend to “rot” by becoming less reflective of reality over time if not maintained along with the rest of the code base.

A lot of the early chapters of Clean Code revolve around the “common-sense” aspects of coding. 

  • Variables should be descriptively named (as should functions and classes)

  • Functions should do one thing and one thing only.

  • Functions should be short. (In fact, as Uncle Bob puts it: “Make a function short, then make it shorter”)

The second strategy I recommended for writing comments was to “pretend you were explaining the block of [complex] code to someone else”.  As Martin states, this is a failing of the code.  You would be better off refactoring the code so that it wasn’t so complex.  If you apply the “common-sense” aspects of coding, you will probably find that your complex code violates at least one of the rules.
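As an illustration of refactoring instead of explaining (the discount condition here is invented), a block that would otherwise need a comment can be extracted into a function whose name does the explaining:

```python
# Before: a comment explains a condition that the code itself does not.
def price_before(customer, amount):
    # loyal customers with large orders get a discount
    if customer["years"] >= 5 and amount > 100:
        return amount * 0.9
    return amount

# After: the condition is named, and the comment becomes redundant.
def qualifies_for_loyalty_discount(customer, amount):
    return customer["years"] >= 5 and amount > 100

def price_after(customer, amount):
    if qualifies_for_loyalty_discount(customer, amount):
        return amount * 0.9
    return amount
```

Unlike the comment, the function name cannot silently drift out of date without someone noticing it no longer matches its body.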

I was pleased to see that Martin suggested that comments were okay when written to convey the intent of the author.   This is still my number one use for comments. 
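As an illustration (the scenario and names are hypothetical), here is the kind of intent comment I mean, sketched in Python – the code already says what happens, while the comment records why:

```python
import time


def fetch_with_retry(fetch, attempts=3):
    """Call fetch(), retrying on ConnectionError."""
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            # Back off exponentially between retries: the (hypothetical)
            # upstream service sheds load under bursts, so retrying
            # immediately makes the outage worse.  No renaming or
            # refactoring could convey this "why" in code alone.
            time.sleep(2 ** attempt * 0.01)
```

The mechanics are obvious from the code; the business reason for the back-off is exactly the sort of intent that belongs in a comment.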

It’s been an interesting read so far and I am sure there will be other topics that I wish to reflect on in blog form.  Expect to see more in the coming weeks!


Good Ideas, Gone Wrong

Software is created on the strength of a “good idea”.  From inside the development process, “code quality” is often thought of in terms of the number of bugs present in the code base.  This is a useful metric, but it is not the only thing that determines the overall quality of the product.  How intuitively a feature is implemented is also a factor in code quality.

How intuitive a software feature is has so far managed to avoid quantifiable measurement.  That is, no “1 to 10 scale” that accurately measures how intuitive software is has yet been developed.  This can make it difficult to know whether you have done the right thing once the software has been built.  The phrase “eat your own dog-food” is perhaps the best leveller for a new feature.  Force the developers / UI experts / architects (anyone who influences the look-and-feel of the product) to use their own software and pretty soon the rough edges get smoothed out.

I recently attended the Borland Inprise CodeGear Embarcadero product launch for Delphi 2009.  After many years in the marketing wilderness, it finally looks like they have found their niche market for Delphi and C++Builder.  Their tools are viable options for “native Windows” applications – that is, there is still significant demand for applications that require the “raw power” available using “unmanaged code”.

The additions to the Object Pascal syntax in this newest Delphi edition (anonymous methods, closures and generics support) see it become a really cool language.  It was quite a frank and honest product demonstration from Nick Hodges.  He was obviously keen to show off the latest and greatest features of the product, but on the other hand open and responsive to suggestions and criticisms of the existing product.

One of the criticisms levelled against the current release (Delphi 2007) was about the product documentation (or lack thereof).  Since the advent of Delphi 2005, the compiler has supported an nDoc-style commenting system to allow developers to “roll their own”.  Nick made the passing comment that the developers were not particularly disciplined* about actually writing these comments.  On the surface of it, this seems like an obvious solution to the lack of documentation.

Having since tried to use the feature myself, it becomes apparent why the developers are not bothering…  It’s a great example of a “good feature – but in need of polish”.  Here is my synopsis of what is wrong with it as it stands:

  1. I’m not belittling the efforts of those responsible for nDoc, but its formatting syntax is inferior to Java’s Javadoc system.  In my opinion, nDoc loses out by being fragments of XML, which makes it less “human readable” than the Java equivalent.

  2. One difference between Delphi / Object Pascal and Java or C# is that the declarations of classes and functions are specified in an interface section, separately from their implementation (same source file, just a different section).  The current nDoc parser used for Delphi’s HelpInsight feature (the tool-tips that appear when hovering over functions) only finds comments that appear where the functions / classes are declared.  For publicly accessible functions, this is the interface section.  The most natural place to document a function is where it is implemented.  I can see that this would be technically more difficult for the parser to manage, but that doesn’t give the programmers an excuse to be lazy.

  3. The comments only seem to be picked up in units that are explicitly included in the project.  Despite the fact that the compiler can resolve/match symbols that appear in source files anywhere on the source path, the nDoc comment resolution can’t.  You need to add the unit containing the comments to the project (.dpr / .dproj) file.

  4. The nDoc parser appears to be buggy.  Most notably, the first function in a unit (Object Pascal still allows you to have functions that don’t live in a class) will not be found.  The way around this is to declare either a dummy function or a dummy type.  Neither of these needs its own nDoc comment, but it may help you to remember why you wrote yet another declaration of the integer type…

Nick Hodges invited the audience to contact him if we had ideas on how to improve Delphi.  Now that I’ve had a chance to flesh out what I see as being wrong with HelpInsight, I might just do that!


* I’m paraphrasing here.  He definitely did not use this terminology…