Tag Archives: design

Making an exception

I don’t normally blog about code. I don’t often make my blog entries into the lists that are so popular on the Code Project. Plenty of people do plenty of that already. Today, I am making an exception to these rules, to talk about exceptions… Nothing I am going to present is rocket science. This article is closer to introductory reading for a junior programmer, or to assist someone mentoring one. Although my examples will be in C#, they should apply to any object-oriented language.

Tip 1: Only trap exceptions you are prepared to handle

Here is a bad example that doesn’t do this:

catch (Exception)
{
    // We know we may get a div by 0, so ignore exceptions.
}

Instead, you should write your code as if you are expecting a particular exception.

catch (DivideByZeroException)
{
    // We know we may get a div by 0, so ignore that exception.
}

Essentially, this boils down to: “Don’t provide a general exception trap because something *might* happen. Trap explicit exceptions that we know the program can handle.”
If you are thinking: “But wait! Our program could still fail and we’re not doing anything about it!” you would be right. It is better to fail early and know why you failed than to fail later without a clue. When you “bury” exceptions in a generalised exception trap, you leave yourself open to a “clueless failure”. Imagine that DoSomeFunkyMaths() also writes some data to a file. Now it is possible for I/O exceptions to occur as well as the division by zero. With a general exception trap, you will not know that the write has failed, and when the code subsequently attempts to use the data from the file, you will get unexpected issues.
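To make that concrete, here is a sketch of the idea (assuming DoSomeFunkyMaths() is the method described above):

try
{
    DoSomeFunkyMaths();   // does some division and writes results to a file
}
catch (DivideByZeroException)
{
    // We know how to recover from this one, so it is safe to trap.
}
// Note there is no catch for IOException: if the file write fails, the
// program fails early, right here, instead of limping on with bad data.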

Tip 2: If you are going to raise your own exceptions, make your own exception type.

Again, here is what not to do:

if (ErrorHasOccurred())
    throw new Exception("Something has gone terribly wrong");

The only way to catch this exception is to catch all exceptions. If you haven’t figured out what is wrong with this, reread Tip 1 until it sinks in… While I am at it, try to make your messages a little more helpful than “Something has gone terribly wrong”.
In C#, it isn’t hard to make your own exception class. If you cannot be bothered putting much effort in, here’s a simple example.

public class FunkyMathsException : Exception
{
    public FunkyMathsException(string message)
        : base(message)
    { }
}

This is a perfectly acceptable start for your own exception class. Now, code you write that throws exceptions will use this instead of the base Exception class.

if (ErrorHasOccurred())
    throw new FunkyMathsException("Something has gone terribly wrong");

I still haven’t learnt to put a more meaningful message in. But at least I can now catch a FunkyMathsException where I want to, and leave other exceptions well alone.
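For completeness, the catching side might look something like this (a minimal sketch; writing to the console is just for illustration):

try
{
    DoSomeFunkyMaths();
}
catch (FunkyMathsException ex)
{
    // Only our own exception type is trapped here; a DivideByZeroException
    // or an IOException will still propagate to the caller.
    Console.Error.WriteLine("Funky maths failed: " + ex.Message);
}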

Tip 3: A generalised exception trap does have one perfectly acceptable use.

I am not an expert on all languages, but normally, an unhandled exception will cause a program to terminate. Depending on the circumstances, this may or may not be acceptable.
  • Acceptable: Your program is some sort of stateless service that will be restarted via a script or the operating system should it stop.
  • Unacceptable: Your program is a word processor and the user has not saved their document for the last 30 minutes because they are too busy typing away furiously on their “best-seller”.
If your program falls into the “unacceptable to unexpectedly stop” category, a global exception handler is the way to go. Save / report the error / do what you have to do… Just be careful not to raise another exception from inside the handler. This is serious “infinite loop” / “stack overflow” territory and your program is already on rocky ground.
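In C#, one way to set up such a last-resort handler is via AppDomain.UnhandledException. A minimal console-application sketch (the log file name is purely illustrative):

using System;
using System.IO;

static class Program
{
    static void Main()
    {
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            // Save / report what we can, as simply as possible --
            // raising an exception in here is exactly the
            // "infinite loop" territory described above.
            try
            {
                File.AppendAllText("crash.log", e.ExceptionObject.ToString());
            }
            catch
            {
                // Nothing sensible left to do.
            }
        };

        // ... the rest of the program runs as normal ...
    }
}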

Tip 4: Exceptions do not cross process boundaries.

I do not know how general this tip is. YMMV. From what I have seen, when calling code in a separate library via COM, exceptions do not cross the boundary. The calling code will see some sort of exception, but most of the specifics of the original exception will be lost. It is best to use other means to relay failure messages back from the library code.
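One common workaround is to trap failures at the boundary and hand them back as plain data. A hypothetical sketch (the method name and out-parameter are made up for illustration):

// Library-side entry point that callers on the other side of the
// boundary invoke.
public bool TryDoFunkyMaths(out string errorMessage)
{
    try
    {
        DoSomeFunkyMaths();
        errorMessage = null;
        return true;
    }
    catch (FunkyMathsException ex)
    {
        // Relay the failure as data; the exception object itself
        // would not survive the trip across the boundary.
        errorMessage = ex.Message;
        return false;
    }
}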

Tip 5: Do not raise a new exception in response to an exception

There may be times when you wish to perform some operation when an exception occurs, but not actually deal with the exception. For instance, the calling code may be better placed to deal with a particular exception, but you wish to perform some logging at the time. If you raise a new exception, you will have lost potentially useful debugging information, such as the call-stack of where the original exception occurred. Fortunately, most languages provide the ability to “re-throw” the exception, simply by using the “throw” keyword by itself.

catch (DivideByZeroException)
{
    // Perform the extra work here (logging, for example)...
    throw;  // ...then re-throw, preserving the original call-stack.
}

Tip 6: Exceptions are for exceptions

There is a balancing act between raising/trapping exceptions and testing conditions with an if statement and acting accordingly. Using if statements increases code complexity and takes some processing time every time the statement is evaluated. Using/trapping exceptions may simplify the code, but when an exception is actually raised, it tends to be far more expensive (time-wise). Therefore, exceptions should be reserved for the odd occasion where things haven’t gone according to plan. This point is a rather grey area and open to interpretation. A rough illustration of the trade-off is sketched below.
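Both helpers here are hypothetical, and returning 0 as the fallback is an arbitrary choice for the sketch:

static int DivideWithTest(int dividend, int divisor)
{
    // A cheap test, paid on every call -- appropriate when a zero
    // divisor is a normal, expected input.
    return divisor == 0 ? 0 : dividend / divisor;
}

static int DivideWithTrap(int dividend, int divisor)
{
    // A simpler "happy path", but far more expensive on the occasions
    // when the exception actually fires -- appropriate when a zero
    // divisor should be a rare surprise.
    try
    {
        return dividend / divisor;
    }
    catch (DivideByZeroException)
    {
        return 0;
    }
}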

Good vs. Better

Despite my desire for an Android phone (puns are never quite as good when intended), I actually quite appreciate Apple’s iOS and its consumer-based products. I am not the sort of consumer with an insatiable appetite for the latest piece of technology, and the timing was not right for me to buy a replacement for our “go-anywhere” laptop. When the time comes to replace our current run-of-the-mill laptop, I would seriously consider an Apple iPad as a worthy replacement. Of course, new technology will come along before then, so it is far from a guarantee. But it is hard to see any other manufacturer making the concerted effort to produce a slicker device in the category.

Apple advertising pitches the iPad as a revolution. The revolutionary part is not the form-factor, nor the hardware, but rather the care-factor that went into developing the device. Prior to this device, using a computer was akin to travelling to a foreign country where the residents spoke a foreign language. Sure, you could get around, but it was a slightly difficult experience. If you went to the trouble of learning the environment/language, you got more out of the experience. The iPad is more like travelling to a neighbouring country that speaks the same language. The learning curve is almost non-existent.

This salt-pan-like learning curve enables people to feel an immediate mastery of their environment. It is empowering, which in turn leads to favourable experiences. One of the aspects I admire about the iPad is the quality of its default applications. Too often in the computing industry, the term “quality” is approximated to “lack of defects”. Quality should extend to usability and fitness for purpose.

The derogatory term “gold-plating” is levelled at some developers. For those unfamiliar with the concept, it basically suggests that a developer spends too much time perfecting some piece of code. After all, “the perfect is the enemy of the good”. The longer your software takes to develop, the more funds it requires. Prior to the software generating income, this is a financial burden. Therefore, it is important to avoid “gold-plating” – especially for “version 1” programs – but avoiding gold-plating is not an excuse to turn in poor work.
Apple have shown that going to the extra effort to produce good quality software can be financially rewarding. If people like your product enough to pay for it, then making software that more people like has an obvious incentive. If you work in a large corporate environment, turning out systems for internal use, the benefit is not as immediately obvious. If you think about the cost to your business in terms of lost productivity and training programs, you can start to appreciate that easier user interfaces save money, even if they do not generate it.
Making better software is hard work. A lot of effort has been expended on producing systems that lead to less defective software. Revision Control Systems / Unit testing frameworks / Continuous Build tools / Static Code Analysis tools and more all aim to reduce software bugs. Can you name one automated tool that provides any support for developing better User Interfaces? I suspect the lack of such tools is due to the level of intelligence required to produce a meaningful analysis of a user interface. Writing software that analyses a UI suggests that the software understands what the UI is for. I am no expert in the field of artificial intelligence, but I would suggest we are not quite at that level yet!
Having decided that human intelligence is needed to design User Interfaces does not mean that automated tools cannot assist. The “usability” of clickable targets is affected by things such as their size and location. The usefulness of text is reduced by overall length (oh, the irony in this blog post!). Modal dialogues provide “speed-bumps” in the user’s “work-flow”. Such aspects are quantifiable. Maybe tools that measure such things already exist, but they certainly do not attract the attention of the programming masses.

Apple is leading the way in terms of “user-focused” software and there is nothing wrong with having a market leader that is doing a “good job”. Here is hoping that others will help raise the overall standard further by continuing to compete with Apple’s products!


It takes a decent amount of time and effort to design a good user-interface.  One of the problems faced when making a user-interface is that it can take an enormous increase in effort to make an ordinary interface into an extraordinary one.  You may have come across a user interface (be it for a web-site, or an application) and been absolutely flummoxed by its operation.  Unfortunately, that does not mean that a great deal of time and effort were not spent trying to simplify it.  (Of course it may mean that no time and effort were spent trying to get it right!)

There is an extra pressure on designers of external web-sites.  Get it too far wrong and your customers go off to your competitor’s web-site.  In my experience, application developers can get away with worse user-interfaces.  If the program has the features people want, people will make the effort to learn how to use the application.  This should not be seen as an excuse not to care about the user-interface.  There is a saying that if your customers are not aware a feature exists, then it doesn’t.  Unfortunately, most user interfaces end up obscuring some functionality.  In a feature-rich application it becomes increasingly difficult not to do so.

Every time I hear a person talk about “learning” software, I feel that somehow the software has failed.  I would like software to be so intuitive that using it is “natural” – rather than a learned action.  It is probably an unrealistic expectation that all software will be like this, but that does not stop it being a worthy goal to work towards.

When I talk to non-technical people about using software, the thing that becomes apparent is that they all expect to have to learn how to use it.  No-one expects to sit down in front of a new word-processor and just use it to do their job.  One disheartening example came with the release of Microsoft Office 2007.  For me, the ribbon was a huge step up in usability over the traditional tool-bar and menus approach.  The one resounding criticism I heard of Office 2007 came from existing Office 2003 (and prior) users:

“I used to know where everything was and then they went and changed it all.  Now I have to re-learn where things are”

Microsoft puts a great deal of time and effort into usability.  Hopefully, this means the learning curve for Office 2007 was not as severe as with previous versions.  The ribbon was designed to be “a better way”: a task-oriented user-interface is meant to be superior to a function-oriented one.  People have been “brought up” thinking along the lines of functional software, rather than thinking the computer will aid them in completing their task.  This mind-set will change over time, with the widespread adoption of task-oriented user interfaces.

If you ever have to write a user-interface remember this:

  • You either spend your time getting it right, or ask the users to spend their time figuring it out.
  • The world does not need more software that is difficult to use.

Why are there no more two-strokes?

Traditional two-stroke engines offered a variety of advantages over their four-stroke rivals.  Their power output far exceeds that of similar-capacity four-strokes.  This is partly because they produce power twice as often as a four-stroke, and partly because there are fewer moving parts to create drag and power losses.  This “fewer moving parts” factor was seen as another significant benefit of two-stroke technology: fewer moving parts equates to fewer things to go wrong.

The late ’80s and ’90s could be considered the high point of two-stroke motorcycles.  The premier racing category (now known as MotoGP) featured three classes of bikes with two-stroke engines.  A few manufacturers had small, light-weight, high-powered two-stroke road bikes in their line-ups.  Of course, every silver lining has a dark cloud somewhere.  With two-strokes, this cloud has a blue tinge and a distinct smell about it.  Two-strokes were notorious for being bad polluters and for poor fuel economy.  “Highly tuned” two-strokes were also known for their light-switch power delivery.  “All-or-nothing” power delivery can be intoxicating, but it can also be annoying and downright dangerous on public roads.  When not running at optimum engine speeds, these two-strokes expel a large amount of unburnt hydrocarbons into the atmosphere.  So much so that, even before Al Gore brought climate change to mainstream attention, most people could see that this was “altogether a bad thing”.

Of course, two-stroke technology still exists in all sorts of industries today.  Outboard marine engines often feature it.  But before you start criticising your boat-owning neighbour for his “careless attitude toward the environment”, know that modern outboard two-stroke engines pass the emissions tests required of them.  Direct injection technology ensures that fuel is not wasted and pushed out the exhaust unburnt, as it was in two-strokes of old.  The fuel is only delivered to the combustion chamber when it cannot escape out the exhaust port.  (My apologies to any reader who doesn’t understand the basics of two-stroke combustion engines – hopefully I’ll cover that in an introductory manner at another point in time.)

From what I have read, direct injection two-stroke engine design:

  • Eliminates the “peaky” power delivery.
  • Reduces emissions to comparable levels of a four-stroke engine.
  • Retains its power to weight ratio advantage over four-stroke engine design.

I hear you saying: “Surely they have lost some of the advantages they used to have?  Isn’t there always a compromise?”  Well, as stated earlier, two-strokes of old were mechanically very simple: few moving parts, with very little to go wrong.  Direct injection necessitates that things get a bit more complicated, with fuel pumps, fuel injectors and so on.

I suspect the biggest factor in why we don’t see modern “clean-running” two-stroke motorcycles is the sins of their past.  Any new bike would need to overcome the old stereotype of polluting, thirsty motorcycles.  Given that the technology exists to overcome these issues, it really is a shame the manufacturers have not risen to the challenge of marketing them in a better light.  There still seem to be a lot of sentimental folks in the motorcycling press who would like to see them return, so maybe they will, one day…