Category Archives: Computing

Tales of a lucky escape and a worthwhile purchase

Ker-boom! I have always been a bit blasé about thunderstorms. It is not my fault! I grew up in an area prone to them and have become desensitised to their awesome fury. I know the dangers are real and I respect their power, but I look forward to the light-show more than I fear the possibility of damage and destruction. A resident of Los Angeles probably feels the same way about minor earth tremors – whereas I would be scared witless by them.
A while back I replaced my aging ADSL modem/router in an attempt to fix drop-outs in service that I was experiencing.  The replacement did not really fare any better and eventually I traced the fault back to an under-spec line filter.  I kept the new modem/router as it had features useful for VOIP, but I was slightly annoyed at myself.
Recently, we experienced a heavy electrical storm. Remember how I said I was blasé about storms? Well, when the lightning is so close that there is no distinguishable gap between lightning and thunder, and you can hear arcing on the power lines, even I tend to duck and utter expletives in shock! The roly-poly-cat took fright too, and wouldn’t be comforted by someone who was visibly shaken. In my experience, electrical equipment doesn’t get damaged in storms as often as you might believe. Most equipment just keeps on keeping on! YMMV!
Of course, this time was different. The new modem/router lost the Internet connection. I could still “see” the device, and it could still report line attenuation and signal-to-noise ratios, but using it to access the Internet was beyond its post-lightning-strike capabilities. So, it was time to drag out the old modem, which was now reserved for “emergency backup duties”. Sure enough – it worked, and so did I (I had been working from home at the time).
The silver lining to my grey thundercloud turned out to be that the purchase of the newer modem had been necessary after all. Whilst the new modem had not cured the drop-outs until I upgraded the line filter, the old modem still suffered from them – even with the new filter. So, I declared the new modem a “worthwhile” (if somewhat short-lived) purchase. That night, having soldiered on through numerous drop-outs, I decided to have a closer look at the new modem. It was worth the effort! It turned out that the lightning strike had simply erased the new modem’s settings. Having re-established these, my lucky escape was complete! And everyone lived happily ever after!
Living without surge protectors in an area prone to thunderstorms may seem risky or careless to some people. I do not know much about high voltage, but I do know that the close proximity of unprotected and protected wires on most “domestic” surge protector boards is likely to be insufficient to prevent arcing between them. Still, when a horse points at an open gate and says “next time I’m bolting”, I pay attention. I have since bought myself an eight-point surge protector, complete with phone line and coaxial cable shielding. I will let you know if lightning strikes twice. :-)

Does Technical Debt Matter?

I have some strong views on code quality. One of my professional goals is to always attempt to improve my coding, with the aim of producing better code. In this day and age, making software “less broken” is about the most I can hope for. I cannot foresee a time when written software becomes “perfect” / “bug-free”. Maybe it will – I have learnt never to say never…

Anyway, this is an article akin to playing devil’s advocate.  I am not particularly comfortable with what I suggest below. I have written it purely to get people thinking about the time and effort expended writing software.  As always, I encourage your comments – positive or negative.

One of the odd things about the software industry is that code “rots”. This is somewhat strange. Source code written in text files does not “degrade”. Unlike organic reproduction, copying a file yields a perfect reproduction. If you kept a copy of code written, say, twenty years ago, it would still be the same today as it was then. But things change rapidly in the computing industry, so it is extremely unlikely that you could use that twenty-year-old code on a modern computer. A different form of “rotting code” exists precisely because the code does change. Over time, countless little hacks and quirks can be added to an active code base, leading to obfuscation and “unhealthy” code.

The common technique for reducing code-rot is refactoring. Backed by comprehensive unit tests, refactoring exercises keep a code base current and ensure that the changes made do not introduce regression bugs. Working on a well-maintained code base is a more pleasant experience for a developer. Well-maintained code is easier to extend, and developers have less fear of making mistakes.
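
As a minimal sketch of the idea (the class names are invented for illustration, and JUnit 4 is assumed), the unit test pins down the current behaviour so the implementation beneath it can be reworked without fear:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class InvoiceTest {
        // Pins down the existing behaviour before any refactoring begins.
        @Test
        public void totalIncludesTenPercentTax() {
            assertEquals(110.0, new Invoice(100.0).total(), 0.001);
        }
    }

    class Invoice {
        private final double subtotal;

        Invoice(double subtotal) {
            this.subtotal = subtotal;
        }

        // The body of total() can be restructured freely; so long as the
        // test above stays green, the refactoring has not changed behaviour.
        double total() {
            return subtotal * 1.10;
        }
    }

The arithmetic is trivial on purpose – the point is that the test, not the programmer’s memory, guards the behaviour during the refactoring.
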
“Technical debt” is a term coined by Ward Cunningham and refers to the “price paid” for releasing code for the first time.  He argued that a small debt was useful to incur, as it helped speed up the development process.  It was okay to incur some debt as long as it was “paid back quickly” by refactoring the code.

“The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation…”

Productivity graphs show how progress drops away on poorly maintained projects. The longer the project runs, the harder it becomes to add features or fix bugs without causing other regressions. The solution is to avoid as much technical debt as possible, and following best practices helps achieve that goal. But does this come at too high a cost? Best practices add extra work, and thus slow down development early in the life-cycle of a project.

Not repaying technical debt will grind a software project to a halt. If you extend the analogy of credit card debt equating to technical debt, having the software project grind to a halt is the equivalent of declaring bankruptcy. Obviously, this is not a great outcome, but it is not the end of the world either.

What if your software has run the term of its natural life? Your software will have been written to meet a specific need. Maybe that need is no longer there. Maybe that need is now perfectly met, or the market has been saturated and sales have evaporated. Maybe every feature that could be added has been added (including reading mail). If the project gets “sun-setted”, does it really matter how much technical debt is left in the code base?

Not “doing things the best I can” is something I struggle with. “Doing the best I can” and “doing things well” do not necessarily mean the same thing. Software development happens on a sliding scale: code tends to be written neither “the best way” nor “the worst way”, but somewhere in the middle. If the process your team uses sits close enough to the “best way” end of the scale that the project is not crippled by technical debt, then maybe that is good enough.

Say “No” to Band-aids!

Sooner or later, there will be a need to fix bugs in whatever software you work on.  How long it takes to fix a bug tends to be non-deterministic.  Some bugs will be easy to fix, and others not so.  Unfortunately, bug fixes on commercial software are often done the wrong way – under the guise of being done quickly.  The “band-aid fix” is the wrong way of fixing a problem.  The metaphor of the “band-aid fix” extends beyond the software industry, but I.T. has turned it into a real art-form.

At the heart of a lot of band-aid fixes is the notion that you can fix a problem without really knowing what the problem is.  Commercial reality may well prevent a code base from being perfect, but the more band-aids that are applied to the code, the worse the software becomes to work on.

There may be a genuine need to apply a band-aid fix to code. When there is real financial loss or damage to customers’ data, expediting a fix is understandable. Removing the kludge afterwards should then be given high priority. It is important to recognise that a band-aid won’t continue to hold a large wound together! If you do not remove the band-aid and perform proper surgery, the wound will rot. Once you allow the code to start “rotting”, it becomes difficult to arrest the negative momentum. It damages the maintainability of the code and encourages other programmers to act irresponsibly too. It is hard to place too much emphasis on this point.
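
To put some hypothetical code behind the metaphor (the names and the fault are invented for illustration), compare a band-aid with a real fix of the same divide-by-zero:

    import java.util.List;

    class ScoreReport {
        // The band-aid: catch the symptom and carry on, learning nothing.
        static int averageWithBandAid(List<Integer> scores) {
            try {
                return sum(scores) / scores.size();
            } catch (ArithmeticException e) {
                return 0; // the crash is gone, but so is the evidence
            }
        }

        // The real fix: an empty list should never reach this point, so say
        // so loudly at the boundary instead of masking the divide-by-zero.
        static int average(List<Integer> scores) {
            if (scores.isEmpty()) {
                throw new IllegalArgumentException("average() needs at least one score");
            }
            return sum(scores) / scores.size();
        }

        private static int sum(List<Integer> scores) {
            int total = 0;
            for (int s : scores) {
                total += s;
            }
            return total;
        }
    }

The first version ships quickly; the second one tells you about the caller that is passing an empty list – which is where the real bug lives.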

Depending on the culture in the workplace, it can be easy to dismiss fixing “less than ideal” code. Studies have shown how much poorly maintained code hurts development productivity. I have yet to work with someone in the software industry who would disagree with that thought. Yet barriers are still erected that prevent acting upon it. There is a vicious circle alive and well in parts of the software industry:

  • Code is recognised to be poorly maintained.
  • Poorly maintained code is recognised to hinder productivity.
  • People are too busy (not least because of that lost productivity) to fix the old code.

I cannot believe people do not see the irony in this! Allowing software to get into this vicious circle is the first mistake. Programmers need to promote the culture that refactoring is not a luxury, but a necessity. Allowing some refactoring time on all development work can avoid the problem in the first place. Digging your software out of the hole created by the vicious circle is altogether a more expensive proposition. Not refactoring the code at all is even worse!
The idea that refactoring time needs to be allocated alongside development time appears to imply that you will not be able to push out new features as quickly. At a superficial level, this is true enough. Over the lifetime of the code base, however, the argument does not hold up. The neater and more maintainable the code base, the quicker it is to work on. In other words, good code is easier to add features to.

The biggest problem I see with a “band-aid” fix is simply that it is not a fix at all! It cures a symptom in rather the same way that pain-killers can stop broken legs from hurting. It masks the issue – but it does not mean you’re right to walk home! Masking problems in software just makes them harder to track down. Software can be complex enough for one bug to be at the root of several problems. If you only mask the problem, you never know where else the bug will show up.

Office Politics

When you work with other people, “office politics” will always be a factor. I have heard people say that they did not like office politics, as if it were something they could avoid. I am not talking about the sort of “office politics” that results in the metaphorical stabbing of fellow co-workers in the back. It is true that office psychopaths will attempt to manipulate co-workers for their own purposes, but a lot of daily interactions can also be seen as a form of “office politics”.

Internal restructuring has seen my role change recently. I was working on a framework team, providing code (and various other infrastructure) to various teams in my company. My “customers” were the teams that wrote the applications sold to the real customers… that is, the ones that paid money!
Since the restructure, I have been moved onto one of these teams as a senior developer. Former “customers” are now team-mates. When I was working on the framework, I had a certain perception of how our code was being used to create the end-product. Now that I have been exposed to their code base, I have discovered the truth behind how they use the framework code! The fact that there are differences indicates some degree of communication breakdown. There is nothing catastrophic about what they have done, but it shows a disparity between the directions the framework and the end-products were heading. That disparity was largely down to the two teams’ differing motivations.

The framework was responsible for the core of twelve different applications.  As such, consistency and flexibility in the architecture were highly valuable commodities.  I would not be presumptuous enough to claim that we succeeded in providing the perfect architecture every time, but those were primary goals of the code we wrote.

The end-products have a far more tangible goal: to make money. I am not in product management, but having developed commercial software for a long time now, “making money” tends to be about adding the features that customers want and alleviating the worst of the bugs that have been reported. In terms of priority, “adding wanted features” is the more important. Trade shows never focus on showing customers how the software no longer crashes when you do steps X, Y and Z!

In terms of how this affects code in the long run, there is a natural tendency to leave “good enough” code alone. Short-term deadlines enforce short-term thinking. Commercial reality allows code to deteriorate in rather the same way that an untended garden becomes an overgrown jungle. Active pruning and weeding would avoid the problem, if only it were seen as an important goal.
Given that the software still sells and still provides real value to the customers, it can be seen as an unimportant goal. The fact that new features become difficult to shoe-horn into the existing code base is seldom given consideration when writing code. Which brings me back to my original point on office politics.

Now that I am on a new team, I see the “weeds in the garden”. No individual issue is worthy of much attention, and so the existing team members simply ignore such issues in favour of more important work. I would highly doubt there is a piece of commercial software being sold whose code base does not have some degree of this occurring. I have worked alongside my new team members for many years. They know how important code quality is to me, and I know they will expect me to try to improve their code’s overall quality. Here is where the “office politics” lie. I could just blunder in and make changes that I believe are for the better. I have known programmers who would. Different cultures in different parts of the world would probably react differently to such an “intrusion”. In Australian culture, it would not go down well, so it will not be the approach I take! I am also someone who is only too painfully aware of my own short-comings as a programmer. So tact and a measured approach will definitely be the order of the day. See, even in the most ordinary of jobs, “office politics” will play a role!

A voyage into the unknown

There is nothing like a foreign operating system to remind you how narrow your knowledge of computers may be.  All of my professional computing days and many of my academic ones have been spent on Microsoft platforms.

Although this is not my first foray into the land of Linux, I have recently started some home development projects based on a Linux machine. My chosen “distro” is Ubuntu Jaunty Jackalope. I do not have a good reason for not choosing the “Karmic Koala”; I just didn’t.

Having spent many years in GUI land, there was no way known I was going to start with a “server edition” and command prompt!   I remember the basics for navigating around a Unix system, but there is only so much fun you can have changing directories and listing files found in them!  I need as much “hand-holding” as possible, thank you!
I have a number of reasons for choosing Linux over Windows this time around.

  1. I wanted to see what a modern Linux system was like.
  2. I wanted some experience using Linux.  (Never say “never”!  It may come in handy!)
  3. It was free!  (as in “free beer”)

I have already discovered that “free” as in “free speech” is not always what you will want.  For those of you still in Microsoft land, Ubuntu features a package manager that allows you to install software through a nice GUI.  Simply search a list of applications, choose the one you want and allow the magic to happen!   (Using the power of the Internet to update this list and retrieve the packages)

This is great, but it favours installations that “do not restrict your rights”.  After using Windows and software that features the words “All Rights Reserved”, I don’t really care!  “Free beer” still means more to me than “free speech”.  Well, when it relates to software, at any rate!

I wanted to use the Eclipse IDE on this system, which meant I needed a Java Runtime Environment installed. The package manager defaults to an open-source version. It turns out that this causes Eclipse a few headaches, and it is better to get the “original” from Sun. This too is possible – it is just not the default behaviour of the package manager.

For those of you contemplating trying Linux, Ubuntu does have a classy offering.  As long as you get to the point of having Internet connectivity, you should not get too stuck!  The biggest thing I have noted is how often you will turn to editing configuration files and using a “terminal window” to perform operations on the system.

The Internet appears to have answers to the most commonly asked questions, such as “How do I turn off the annoying system beep?”. I am sure earlier versions featured a way to do this from the desktop, but these days it is time to modify those configuration files! Times like this remind me that there is nothing like a foreign operating system to show you how narrow your knowledge of computers may be. Wish me luck!

Learning software

It takes a decent amount of time and effort to design a good user-interface.  One of the problems faced when making a user-interface is that it can take an enormous increase in effort to make an ordinary interface into an extraordinary one.  You may have come across a user interface (be it for a web-site, or an application) and been absolutely flummoxed by its operation.  Unfortunately, that does not mean that a great deal of time and effort were not spent trying to simplify it.  (Of course it may mean that no time and effort were spent trying to get it right!)

There is an extra pressure on designers of external web-sites.  Get it too far wrong and your customers go off to your competitor’s web-site.  In my experience, application developers can get away with worse user-interfaces.  If the program has the features people want, people will make the effort to learn how to use the application.  This should not be seen as an excuse not to care about the user-interface.  There is a saying that if your customers are not aware a feature exists, then it doesn’t.  Unfortunately, most user interfaces end up obscuring some functionality.  In a feature-rich application it becomes increasingly difficult not to do so.

Every time I hear a person talk about “learning” software, I feel that somehow the software has failed.  I would like software to be so intuitive that using it is “natural” – rather than a learned action.  It is probably an unrealistic expectation that all software will be like this, but that does not stop it being a worthy goal to work towards.

When I talk to non-technical people about using software, the thing that becomes apparent is that they all expect to have to learn how to use it. No-one expects to sit down in front of a new word-processor and just use it to do their job. One disheartening example came with the release of Microsoft Office 2007. For me, the ribbon was a huge usability step up from the traditional tool-bar and menus approach. The one resounding criticism I heard of Office 2007 came from existing Office 2003 (and prior) users:

“I used to know where everything was and then they went and changed it all.  Now I have to re-learn where things are”

Microsoft puts a great deal of time and effort into usability. Hopefully, this means the learning curve for Office 2007 was not as severe as with previous versions. The ribbon was designed to be “a better way”: a task-oriented user-interface is meant to be superior to a function-oriented one. People have been “brought up” thinking in terms of the functions software offers, rather than expecting the computer to aid them in completing their task. This mind-set will change over time, with the widespread adoption of task-oriented user-interfaces.

If you ever have to write a user-interface remember this:

  • You either spend your time getting it right, or ask the users to spend their time figuring it out.
  • The world does not need more software that is difficult to use.

A nerdRider’s guide to teleconferencing

I suspect the IT industry was an early adopter of the “remote office worker”. It is an industry that is reasonably well suited to it. In time, I suspect more and more roles will diversify into ones that can be conducted remotely. One component of office life when working with remote co-workers is the teleconference.
From my perspective, a teleconference typically takes the form of a “traditional” meeting held around a table, with me dialled in via a speakerphone sitting in the middle of it. There was a stage in my career when I used to conduct teleconference meetings for the stakeholders of a software project. So, it is fair to say that I have sat on both sides of the fence (phone?) when it comes to teleconferencing.
There are some fairly obvious rules that should be followed when participating in a teleconference meeting.  For the benefit of my reader(s) I am going to state them here, just in case my definition of obvious does not match someone else’s…

For the local participants:

Project your voice
Yes, this is obvious. Even people who speak quietly know this rule, yet still manage to break it in meetings. Pretend the speakerphone is in fact a little old lady with a cone held up to her ear! If you have something important enough to say in the meeting, then I want to hear it! Say it loudly enough to be heard!

Talk towards the phone
Try to avoid addressing an individual in the meeting room in what could be considered the “traditional manner”. Western culture dictates eye-contact of varying degrees to indicate the intended target. Instead, phrase your question or statement by starting with the person’s name. Until real-time video-conferencing is flawless and universally used, this rule is important. It may help to imagine that your intended target will only hear you if you are facing the speakerphone. The volume of a person’s voice changes markedly depending on how directly they face the speakerphone, so try not to move your head from side to side as you speak.

Do not gesticulate to describe issues
Some people just naturally love to use their hands to describe things. Meetings with many people are rarely technical, so you get some who insist on using hand gestures (not necessarily rude ones) to describe things, such as “I have this much difficulty when I use feature X of your software”. If you are on the other end of the phone, you are left wondering whether they were indicating a small distance between thumb and forefinger, or something more akin to how big the fish that got away was…

Stop people from tapping on the desk
Not so obvious, but it can be a real show-stopper for the person on the other end of the phone.  The large flat area of the desk amplifies any sound made on it.  When the speakerphone rests on the desk, all it picks up is the tapping – often at a deafening volume.

Solicit feedback from the remote parties explicitly
A question such as “Does everybody understand?” is not something that should be asked in a teleconference. In a traditional meeting, a quick scan of faces will give a good idea as to whether everyone is following along. From personal experience, I can tell you the “Does everybody understand?” question is met with a stony silence. If you have multiple remote participants, single them out individually: “Did you follow that, Jack?” – “Yes thanks.” “How about you, Jill?” and so on…

Rules for the remote participants:

Pay attention!
I have found that being a remote attendee in a meeting means you often take a more passive role. Avoid the temptation to do other things whilst “in” the meeting. It is too easy to let the meeting “get away from you”, at which point it ceases to be of any value. The people on the other end of the phone are doing their best to communicate solely through voice; the least you can do is give them your full attention.

Don’t be afraid to ask people to speak up
Yes, you shouldn’t have to, but I bet you will need to!

Have an ergonomic phone
I cannot recommend hands-free headsets highly enough. Holding a phone to your ear for long periods of time is remarkably tiring and off-putting. Just remember to keep obeying the first rule above!

Have a phone with easily adjustable volume
Despite my ranting about vocal projection, the unmistakable truth is that everyone will come across at a different volume. Having a phone that is quickly adjustable, and does not require you to take the speaker away from your ear, is essential.

Understand half-duplex communication
A lot of speakerphones are “half-duplex” to avoid echo and feedback: sound only travels one way at a time. In a nutshell, while you speak, the microphone on the other end cuts out, so you stop hearing what anyone else is saying. Even polite people tend to “interrupt” a person’s conversation from time to time. If you do so from the far end of a half-duplex speakerphone, there is a very good chance you will miss something being said.

Be an active participant
I do not mean that you should talk endlessly. Understand that your lack of physical presence in the meeting room means that people can tend to “forget” you are there. If the meeting “drifts” away from its intended subject, being remote can mean you “lose interest”. Respectfully asking the participants to stay focused on the aim of the meeting can quickly head off the communication breakdown that will otherwise occur.

I think that’s about it!  If you have got any tips, feel free to leave a comment.

Where are the good programmers?

There seems to be a trend amongst programmers who blog. They all tend to say that they write rubbish code. (Some put it more poetically than others…) I think it is a good idea to steer clear of the rock-star programmers, but are all who blog bad at coding? Or are they merely filled with a sense of modesty born of self-preservation? (Needed because the Internet is a big scary place and you can’t hide from the knockers forever.)

From my own perspective: because quality takes time, there is always the sense that with more time I would have done a better job. That is probably true to a certain extent – but there is definitely a point of diminishing returns. That, plus the fact that I have a finite amount of intelligence, means that the quality of my code will probably never exceed a certain level. Someone smarter than me could possibly turn out better code than I could ever hope to. Extra intelligence, however, does not always guarantee better results. “Care” is an attribute that counts for a lot when writing code. “Careless programmers” write rubbish code, and I find that particularly offensive when I know they are better problem solvers, or generally more intelligent, than I am.

Reflecting on my own code at a later date often reveals a painful truth. Yes, I too write some awful code. Even code that I was once quite proud of, I no longer see through rose-coloured glasses. I probably notice this because I am looking at the code from a different perspective. That is impossible to do at the time, as you tend to be so engrossed in the code that it seems simple. (To me, simple code that works is a close approximation of good code.)

Different perspectives on code arise with different usage of the code. Code that sticks to some simple rules lends itself to re-use. Code re-use is something of a holy grail of programming, but for a business it is not as important as having the code you write make money. Joel Spolsky places a strong emphasis on finding good, talented programmers, and judges them as the people who are smart and get things done.

Judged solely on that scale, I have known quite a few programmers who “pass”. But for some, there is a high price to pay, in the form of code maintainability. I willingly concede that for the sake of getting a “version 1.0” code base out the door and selling, making code “good” is a luxury. But carrying on with a relentless drive to push new versions out is counterproductive. Extending and maintaining a bad code base takes more resources, and there have been documented cases where lack of progress due to a bad code base was the eventual undoing of a project.

Maybe this indicates that there are different sorts of “good programmers”: the ones who ensure there is a product to sell, and the ones who ensure that the sins of the past are dealt with in a timely fashion. I suspect software projects need both types to succeed. I also suspect that these two groups annoy each other due to their different outlooks. But that’s a story for another time.

The Network N00b

I am a network n00b.  I remember when networking on Windows (and DOS for that matter!) was fiendishly difficult and I am truly glad those days are behind us.  Although it is not really related, I was reminded of those days recently when I was trying to determine why my home internet connection would sporadically drop out.

I used these drop-outs as motivation to finally replace my ADSL modem/router with a new one. I had wanted one for a while, but could not justify replacing a working unit. The world has finite resources, after all, and we really do not need the extra land-fill! The drop-outs commonly took the form of first losing the VPN connection to work, followed by extraordinarily long times to resolve web addresses.

The only way I had found to correct the problem was to power-cycle the ADSL modem.  Once I had bought and installed the new ADSL modem / router, I was horrified to discover the problem had seemingly become worse!  Now, a power-cycle was not always sufficient to recover from the problem.

Fortunately, diagnosis of the problem became much simpler with the new modem.  Once “the problem” occurred, I discovered that I could still ping the gateway machine, but I could not ping the primary or secondary DNS servers of my ISP.  The new modem has a less cryptic web interface.  This was able to tell me diagnostic information such as line attenuation and signal-to-noise ratios.  (The old one probably could do this, but I had a bad “hunt-to-peck” ratio – clicking on random links before I found the page I was looking for!)
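
For the curious, that diagnosis is easy enough to script. This is only a rough sketch – the addresses are placeholders for my router and my ISP’s DNS servers, and Java’s isReachable() falls back from ICMP to a TCP probe when it lacks the privileges to ping – but it captures the “gateway up, upstream down” pattern I was seeing:

    import java.net.InetAddress;

    public class LinkCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder addresses: the router first, then the ISP's DNS servers.
            String[] hosts = { "192.168.1.1", "203.0.113.1", "203.0.113.2" };
            for (String host : hosts) {
                boolean up = InetAddress.getByName(host).isReachable(2000);
                System.out.println(host + (up ? " is reachable" : " is NOT reachable"));
            }
            // Gateway reachable but DNS servers not: the fault lies upstream
            // of the router, which is what pointed me at the line filter.
        }
    }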

Armed with a few statistics, I turned to the Internet for possible answers (when it was available!). It did not take long to find an answer, and I am annoyed with myself for not starting my troubleshooting there! Recently, our local exchange had been upgraded from ADSL to ADSL2+. I had upgraded the firmware in my old ADSL modem/router for this change, but did not upgrade my line filter. The solution to my problem was to replace my existing splitter box and line filter with a new combined splitter/ADSL2+ filter. Since then, things have been going swimmingly!

In the end, I probably did not need to replace the ADSL modem. But I did want some features that my old modem did not have. Also, the old modem had a quiet, high-pitched whistle, which I am glad to be rid of. You live and learn!