Tag Archives: programming

Dealing with the Apple Push Notification Service

I have recently been working on sending push notifications for an iPhone app. The Ray Wenderlich web site has a (dated, but still great) post to get you started with this, and it includes some PHP code for transmitting the messages to the Apple Push Notification Service (APNS). I won’t rehash what Ali Hafizji went over in his post; rather, I suggest you have a read yourself.

There are free and “nearly free” services that will communicate with APNS on your behalf, but they just leave you talking to their servers rather than directly to Apple’s. At the risk of sounding like a case of “not invented here” syndrome, I wrote my own application.

Apple document the comms format used. It’s a binary message format. “Back in the day” I did more than my fair share of binary comms. Like a lot of my peers, I had my share of writing point-of-sale software. Most peripherals in those days used differing binary formats over RS232 to communicate. Apple’s format differs from how I remember binary protocols working, so I wanted to share some potential pitfalls that I noted. (One of which I fell into; the others I skilfully / luckily? avoided.)

Numbers are stored big-endian.

If you are writing your application on a Windows machine, then your numbers will be stored little-endian. In my case, I was using C#. The BitConverter class provides methods to get the byte array representing the number, as well as a property that you can check to see if you need to reverse the array. I guess with Mono, there’s a chance that your C# code will end up running on an O/S that is big-endian – so it probably pays to check first, before reversing the byte array!
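A minimal sketch of that conversion in C# (the `Endian` class and `ToBigEndian` name are my own illustration, not part of any framework API):

```csharp
using System;

static class Endian
{
    // Returns the bytes of a 32-bit value in big-endian (network) order,
    // regardless of the endianness of the machine we are running on.
    public static byte[] ToBigEndian(int value)
    {
        byte[] bytes = BitConverter.GetBytes(value);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(bytes);   // host stores little-endian, so flip
        return bytes;
    }
}
```

On a typical Windows / x86 machine `IsLittleEndian` is true and the reverse happens; on a big-endian host the bytes are already in the right order and are left alone.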

The frame data is not simply an in-memory version of the third table

This is the trap I fell into – and in retrospect it was a silly mistake, caused by misinterpreting the sentence:

The frame data is made up of a series of items. Each item is made up of the following, in order.

Each item contains an item identifier, an item data length field, and the item data itself. Unlike the binary comms I had done previously, this means length fields appear throughout a transmission. This is not exactly necessary, as there is only one variable-length field. While I consider it “not exactly necessary”, it does lend itself to forward compatibility that would otherwise not be possible.
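As a sketch, writing one item could look like this in C# (the method is my own illustration; the identifier values and field widths are described in Apple’s documentation, which you should treat as authoritative):

```csharp
using System;
using System.IO;

static class ApnsFrame
{
    // Writes a single frame-data item: a one-byte item identifier,
    // a two-byte big-endian item data length, then the item data itself.
    public static void WriteItem(Stream stream, byte itemId, byte[] itemData)
    {
        stream.WriteByte(itemId);
        byte[] length = BitConverter.GetBytes((ushort)itemData.Length);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(length);              // the length is big-endian too
        stream.Write(length, 0, length.Length);
        stream.Write(itemData, 0, itemData.Length);
    }
}
```

The frame is then the concatenation of each item written this way, which is exactly why length fields appear throughout the transmission.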

The APNS servers only respond with errors – sometimes

When I was struggling with malformed frame data, I would often get no error response from the APNS. I don’t know why this is the case. In my experience, if your iDevice does not receive your notification within thirty seconds, you have probably done something wrong! Most of mine were received within three or four seconds…

Not every item-type needs to be in your frame data

Out of interest, I started experimenting with leaving out frame-data items. It appears safe to leave out the “expiration-date” if you wish. I would guess that leaving it out is the same as specifying zero. That is, if the message cannot be delivered straight away, the APNS will not attempt any delayed delivery.

You do not have to reinvent the wheel!

Late in the piece, I came across the PushSharp open source library. Chances are, I will switch over the code I wrote to use this project instead. It supports all major mobile platforms, not just Apple.

Still, I wanted to rattle off what I learnt in building this app, in the hope that it may help someone avoid a gotcha!

Good luck and happy coding!

Mmmm… floor pie….

I bought a Raspberry Pi. I know there are faster, more powerful “system on a chip” computers, but the wealth of knowledge and information on the Pi made it an obvious choice. I bought the “Model B”, which features an Ethernet network port, and lashed out and got a clear plastic case for it. I like to think it gives it a mini-Orac look…

I’m planning on using it to write basic web services to suit my needs. The first one is going to be the “back-end” (or “cloud”, if you will!) for an Android app I have been thinking of.

Any app-store with half a million apps or so in it is bound to have already covered the ideas I am likely to come up with. So, it is strictly a case of “done for the fun” combined with “not invented here” syndrome. Ultimately, I am planning on learning something outside of what I do and know from work.

Like all true home projects, it runs the risks of being abandoned half-way through. However it goes, I’ll endeavour to blog about it as I go…

Revision Control System Etiquette

Develop software for any length of time, and you will reach the conclusion that being able to track changes you make to source code is "a good thing". Put a show-stopping or embarrassing bug in the last round of code changes you made? Being able to revert to an older version of the source code may be the fastest way out of your problem! If it isn't, it may at least help you to understand what you did wrong – by comparing different versions of the source. This isn't rocket science – software development teams have had Revision Control Systems (RCS) to help them do just this for a long time.

If you work as a programmer and you have never heard of RCS, then I suggest you see this thoroughly excellent visual explanation of what one is. Then explain to your boss that you need one. Those of you who are more in the know will realise that the introduction I linked to is a few years old and doesn't attempt to explain what a Distributed Revision Control System is, but for the purpose of this article, it is satisfactory.

Working collaboratively on a project more-or-less makes using some sort of RCS mandatory. Sure – you can survive without it, but it simply makes no sense to do so. One of the highlights of using an RCS is the ability to show how source code evolves. Being able to diff the revisions of the code can be useful to help fellow team members understand how the code came to be the way that it is.

Just because you can see what differences there are in two versions of the file, doesn't mean you will understand what the differences are! It is easily possible to change code sufficiently to make following your changes close to impossible. Even if you are developing in isolation, you should be using an RCS and acting as though someone else will be the next person to work on your code. This is especially important, just in case you are the next person to work on your code! :)

Here are some simple guidelines to try and follow when checking in code changes:

  • Only attempt one change per check-in. If you have several bugs to fix that all relate to the one piece of code, it is tempting to fix them all at the same time. This is discourteous to anyone attempting to understand your changes at a later date.
  • Explain what you are doing. When committing a change to the RCS, you get the opportunity to submit a "check-in comment". In much the same manner as commenting code, these are best used to describe the intent of your revision. Normally, people use the comment field to provide a reference back to a bug-tracking number. This is worthwhile, but it falls short of being a great check-in comment. I am sure that the vast majority of coders can type quickly enough that expanding their check-in comments won't take them too long!
  • Code styling / formatting gets its own check-in. So, you didn't like the order someone put the class methods in. Or, they didn't put spaces between function parameters the way it is meant to be. You feel moved enough to change it yourself. Well, just check it in separately. Again, this goes back to being courteous to someone who may be trying to follow your changes at a later date. Logic changes can easily be buried / hidden by moving the function they occur in. The diff output will show that a function was removed from a certain point in the file and inserted somewhere else. Subtle changes in the function can easily be lost.

All of the above recommendations are aimed at improving the accountability of the code changes you are making. This becomes more apparent when you are working on a team that uses code reviews as part of their development methodology. The relative-worth of code reviews can receive fairly heated debate, but that is a story for another time.

Making an exception

I don’t normally blog about code. I don’t often make my blog entries into the lists that are so popular on the Code Project. Plenty of people do plenty of that already. Today, I am making an exception to these rules, to talk about exceptions… Nothing I am going to present is rocket-science. This article is closer to introductory reading for a junior programmer, or to assist someone mentoring one. Although my examples will be in C#, they should apply to any object-oriented language.

Tip 1: Only trap exceptions you are prepared to handle

Here is a bad example that doesn’t do this:

try
{
    DoSomeFunkyMaths();
}
catch (Exception)
{
    // We know we may get a div by 0, so ignore exceptions.
}

Instead, you should write your code as if you are expecting a particular exception.

try
{
    DoSomeFunkyMaths();
}
catch (DivideByZeroException)
{
    // We know we may get a div by 0, so ignore that exception.
}

Essentially, this boils down to: “Don’t provide a general exception trap because something *might* happen. Trap explicit exceptions that we know the program can handle.”
If you are thinking: “But wait! Our program could still fail and we’re not doing anything about it!” you would be right. It is better to fail early and know why you failed, than to fail later without a clue. When you “bury” exceptions in a generalised exception trap, you leave yourself open to a “clueless failure”. Imagine that DoSomeFunkyMaths() includes some writing of data to file. Now it is possible for I/O exceptions to occur as well as the division by zero. With a general exception trap, you will not know that this has failed and when the code subsequently attempts to use the data from the file, you will get unexpected issues.

Tip 2: If you are going to raise your own exceptions, make your own exception type.

Again, here is not what to do:

if (ErrorHasOccurred())
    throw new Exception("Something has gone terribly wrong");

The only way to catch this exception, is to catch all exceptions. If you haven’t figured out what is wrong with this, reread Tip 1 until it sinks in… While I am at it, try and make your messages a little more helpful than “Something has gone terribly wrong”.
In C#, it isn’t hard to make your own exception class. If you cannot be bothered putting much effort in, here’s a simple example.

public class FunkyMathsException : Exception
{
    public FunkyMathsException(string message)
        : base(message)
    { }
}

This is a perfectly acceptable start for your own exception class. Now, code you write that throws exceptions, will use this, instead of the base exception class.

if (ErrorHasOccurred())
    throw new FunkyMathsException("Something has gone terribly wrong");

I still haven’t learnt to put a more meaningful message in. But, at least I can now catch a FunkyMathsException, where I want to, and leave other exceptions well alone.

Tip 3: A generalised exception trap does have one perfectly acceptable use.

I am not an expert on all languages, but normally, an unhandled exception will cause a program to terminate. Depending on the circumstances, this may or may not be acceptable.
Acceptable: Your program is some sort of stateless service that will be restarted via a script or operating system should it stop.
Unacceptable: Your program is a word processor and the user has not saved their document for the last 30 minutes because they are too busy typing away furiously on their “best-seller”.
If your program falls into the “unacceptable to unexpectedly stop” category, a global exception handler is the way to go. Save / report the error / do what you have to do… Just be careful to try and not raise an exception. This is serious “infinite loop” / “stack overflow” territory and your program is already on rocky ground.
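In C#, for instance, a last-resort handler can be hooked up on the application domain (a minimal sketch; what you save or report inside the handler depends entirely on your application):

```csharp
using System;

static class Program
{
    public static void Main()
    {
        // Invoked for any exception that nothing else has handled.
        AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
        {
            // Save / report / do what you have to do here, but be very
            // careful not to raise another exception while doing so.
            Console.Error.WriteLine("Fatal: " + args.ExceptionObject);
        };

        // ... the rest of the program runs as normal ...
    }
}
```

Keep in mind that by the time this fires, the program is already on rocky ground; the handler is for a graceful exit, not a recovery.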

Tip 4: Exceptions do not cross process boundaries.

I do not know how general this tip is; YMMV. From what I have seen when calling code in a separate library via COM, exceptions do not cross the boundary. The calling code will see some sort of exception raised, but most of the specifics of the original exception will be lost. It is best to use other means to relay failure messages back from the library code.

Tip 5: Do not raise a new exception in response to an exception

There may be times when you wish to perform some operation when an exception occurs, but not actually deal with the exception. For instance, the calling code may be better placed to deal with a particular exception, but you wish to perform some logging action at the time. If you raise a new exception, you will lose potentially useful debugging information, such as the call-stack of where the original exception occurred. Fortunately, most languages provide the ability to “re-throw” the exception, simply by using the “throw” keyword by itself.

catch (DivideByZeroException)
{
    // ...do your logging here...
    throw;  // re-throws the original exception, preserving the call-stack
}

Tip 6: Exceptions are for exceptions

There is a balancing act between raising / trapping exceptions, or testing conditions with an if statement and acting accordingly. Using if statements increases code complexity and will take some processing time, every time the statement is evaluated. Using / trapping exceptions may simplify the code, but where an exception is raised, they tend to be far more expensive (time-wise). Therefore, using exceptions should be something that is done for the odd occasion where things haven’t gone according to plan. This point is a rather grey area and open to interpretation.
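A small C# illustration of the trade-off: user input being non-numeric is routine, not exceptional, so a conditional test reads better (and runs faster) than trapping the FormatException that int.Parse would throw. (The `InputParsing` class is my own example, not a library API.)

```csharp
using System;

static class InputParsing
{
    // Bad input here is expected, so test for it with TryParse
    // rather than catching the FormatException from int.Parse.
    public static int ParseOrDefault(string input, int fallback)
    {
        return int.TryParse(input, out int value) ? value : fallback;
    }
}
```

By contrast, something that “can never happen”, like a corrupt internal data structure, is a reasonable place to let an exception fly.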

Does Technical Debt Matter?

I have some strong views on code quality.  One of my professional goals is to always attempt to improve my coding with the aim of producing better code.  In this day and age, making software “less broken” is about the most I can hope for.  I cannot foresee a time when written software becomes “perfect” / “bug free”.  Maybe it will – I have learnt: never say never…

Anyway, this is an article akin to playing devil’s advocate.  I am not particularly comfortable with what I suggest below. I have written it purely to get people thinking about the time and effort expended writing software.  As always, I encourage your comments – positive or negative.

One of the odd things about the software industry is that code “rots”.  This is somewhat strange.  Source code, written in text files, does not “degrade”.  Unlike organic reproduction, copying a file leads to a perfect reproduction.  If you kept a copy of code written, say, twenty years ago, it would still be the same today as it was then.  Things change rapidly in the computing industry.  As a result, it is extremely unlikely that you could use that twenty-year-old code on a modern computer.  A different form of “rotting code” exists precisely because the code does change.  Over time, countless little hacks or quirks can be added to an active code base that lead to obfuscation and “unhealthy” code.

The common technique to reducing code-rot is refactoring.  By having comprehensive unit tests, refactoring exercises help keep a code base current and ensure that changes made do not lead to regression bugs.  Working on a well-maintained code-base is a more pleasant experience for a developer.  Well-maintained code is easier to extend and developers have less fear of making mistakes.

“Technical debt” is a term coined by Ward Cunningham and refers to the “price paid” for releasing code for the first time.  He argued that a small debt was useful to incur, as it helped speed up the development process.  It was okay to incur some debt as long as it was “paid back quickly” by refactoring the code.

“The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation…”

Productivity graphs show how progress drops away on poorly maintained projects.  The longer the project runs, the harder it becomes to add features or fix bugs without causing other regressions.  The solution to the problem is to avoid as much technical debt as possible. Following best practices will help achieve this goal.  But is this done at too high a cost?  Following best practices adds extra work and thus does slow down development early in the life-cycle of the project.

Not repaying technical debt will grind a software project to a halt.  If you use an analogy of credit card debt equating to technical debt, having the software project grind to a halt is the equivalent of declaring bankruptcy.  Obviously, this is not a great outcome, but it is not the end of the world either.

What if your software has run the term of its natural life?  Your software will have been written to meet a specific need.  Maybe that need is no longer there.  Maybe that need is now perfectly met, or the market has been saturated and sales have evaporated.  Maybe every feature that could be added has been added (including reading mail).  If the project gets “sun-setted”, does it really matter how much technical debt is left in the code base?

Not “doing things the best I can” is something I struggle with.  “Doing the best I can” and “doing things well” do not necessarily mean the same thing.  Obviously, the process of software development happens on a sliding scale.  Software tends not to be written “the best way” or “the worst way”, but rather somewhere in the middle.  If the process your team uses is close enough to the “best way” end of the scale to not be crippled by technical debt, then maybe that is good enough.

Say “No” to Band-aids!

Sooner or later, there will be a need to fix bugs in whatever software you work on.  How long it takes to fix a bug tends to be non-deterministic.  Some bugs will be easy to fix, and others not so.  Unfortunately, bug fixes on commercial software are often done the wrong way – under the guise of being done quickly.  The “band-aid fix” is the wrong way of fixing a problem.  The metaphor of the “band-aid fix” extends beyond the software industry, but I.T. has turned it into a real art-form.

At the heart of a lot of band-aid fixes is the notion that you can fix a problem without really knowing what the problem is.  Commercial reality may well prevent a code base from being perfect, but the more band-aids that are applied to the code, the worse the software becomes to work on.

There may be a genuine need to apply a band-aid fix to code.  When there is a real financial loss or damage to customers’ data, expediting a fix is understandable.  Removing this kludged fix should be given a high priority.  It is important to recognise that the band-aid won’t continue to hold a large wound together!  If you do not remove the band-aid and perform proper surgery, the wound will rot.  Once you allow the code to start “rotting”, it becomes difficult to arrest this negative momentum.  It damages the maintainability of the code and encourages other programmers to act irresponsibly too.  It is difficult to put enough emphasis on this point.

Depending on the culture in the workplace, it can be easy to dismiss fixing “less than ideal” code.  Studies have shown how counter-productive poorly maintained code is to development productivity.  I have yet to work with someone in the software industry who would disagree with that thought.  Yet barriers are still erected that prevent acting upon it.  There is a vicious circle alive and well in parts of the software industry:

  • Code is recognised to be poorly maintained.
  • Poorly maintained code is recognised to hinder productivity.
  • People are too busy to fix old code.

I cannot believe people do not see the irony with this!  Allowing software to get in to this vicious circle is the first mistake.  Programmers need to promote the culture that refactoring is not a luxury, but a necessity.  Allowing some refactoring time on all development work can avoid the problem in the first place.  Digging your software out of the hole created by the vicious circle is altogether a more expensive proposition.  Not refactoring the code at all is even worse!

The idea that refactoring time needs to be allocated alongside development time appears to imply that you will not be able to push out new features as quickly.  At a superficial level, this is true enough.  Over the lifetime of the code base, this argument does not hold up.  The neater and more maintainable the code base, the quicker it is to perform work on.  In other words, good code is easier to add features to.

The biggest problem I see with a “band-aid” fix is simply that it is not a fix at all!  It cures a symptom in rather the same way that pain-killers can stop broken legs from hurting.  It masks the issue – but it does not mean you’re right to walk home!  Masking problems in software just makes them harder to track down.  Software can be complex enough for one bug to be at the root of several problems.  If you only mask the problem, you never know where else the bug will show up.

Office Politics

When you work with other people, “office politics” will always be a factor.  I have heard people say that they did not like office politics as if it were something that they could avoid.  I am not talking about the sort of “Office politics” resulting in the metaphorical stabbing of fellow co-workers in the back. It is true that office psychopaths definitely attempt to manipulate co-workers for their own purposes, but a lot of daily interactions can also be seen as a form of “office politics”.

Internal restructuring has seen my role change recently.  I was working on a framework team, providing code (and various other infrastructure) to various teams in my company.  My “customers” were the teams that wrote the applications that sold to the real customers…  That is, the ones that paid money!

Since the restructure, I have been moved onto one of these teams as a senior developer.  Former “customers” are now team-mates.  When I was working on the framework, I had a certain perception of how our code was being used to create the end-product.  Now that I have become exposed to their code base, I have discovered the truth behind how they use the framework code!  The fact that there are differences indicates some degree of a breakdown in communications.  There is nothing catastrophic about what they have done, but it shows a disparity between the directions the framework and end-products were heading.  This difference was due largely to the difference in motivations between the two teams.

The framework was responsible for the core of twelve different applications.  As such, consistency and flexibility in the architecture were highly valuable commodities.  I would not be presumptuous enough to claim that we succeeded in providing the perfect architecture every time, but those were primary goals of the code we wrote.

The end-products have a far more tangible goal: To make money.  I am not in product management, but having developed commercial software for a long time now, “making money” tends to be about writing software that adds features that customers want and alleviates the worst of the bugs that have been reported.  In terms of priority, “adding wanted features” are more important.  Trade shows never focus on showing customers how the software no longer crashes when you do steps X,Y and Z!

In terms of how this affects code in the long-run, there is a natural tendency to leave “good enough” code alone.  Short term deadlines enforce short-term thinking.  Commercial reality allows code to deteriorate in rather the same way that an untended garden becomes an overgrown jungle.  Active pruning and weeding would avoid the problem, if only it were seen as an important goal.
Given that the software still sells and still provides real value to the customers, it can be seen as an unimportant goal.  The fact that new features become difficult to shoe-horn in to the existing code base is seldom given consideration when writing code.  Which brings me back to my original point on office politics.

Now that I am on a new team, I see the “weeds in the garden”.  No individual issue is worthy of much attention, and so the existing team members simply ignore such issues for more important work.  I would highly doubt there is a piece of commercial software being sold that does not have some degree of this occurring in its own code base.  I have worked alongside my new team members for many years.  They know how important code quality is to me, and I know that they will expect me to try and improve their code’s overall quality.  Here is where the “office politics” lie.  I could just blunder in and make changes which I believe are for the better.  I have known programmers who would.  Different cultures in different parts of the world would probably react differently to such an “intrusion”.  In Australian culture, this would not go down well, and so it will not be the approach I will take!  I’m also someone who is only too painfully aware of their own shortcomings as a programmer.  So, tact and a measured approach, remembering one’s own shortcomings, will definitely be the order of the day.  See, even in the most ordinary of jobs, “office politics” will play a role!


It takes a decent amount of time and effort to design a good user-interface.  One of the problems faced when making a user-interface is that it can take an enormous increase in effort to make an ordinary interface into an extraordinary one.  You may have come across a user interface (be it for a web-site, or an application) and been absolutely flummoxed by its operation.  Unfortunately, that does not mean that a great deal of time and effort were not spent trying to simplify it.  (Of course it may mean that no time and effort were spent trying to get it right!)

There is an extra pressure on designers of external web-sites.  Get it too far wrong and your customers go off to your competitor’s web-site.  In my experience, application developers can get away with worse user-interfaces.  If the program has the features people want, people will make the effort to learn how to use the application.  This should not be seen as an excuse not to care about the user-interface.  There is a saying that if your customers are not aware a feature exists, then it doesn’t.  Unfortunately, most user interfaces end up obscuring some functionality.  In a feature-rich application it becomes increasingly difficult not to do so.

Every time I hear a person talk about “learning” software, I feel that somehow the software has failed.  I would like software to be so intuitive that using it is “natural” – rather than a learned action.  It is probably an unrealistic expectation that all software will be like this, but that does not stop it being a worthy goal to work towards.

When I talk to non-technical people about using software, the thing that becomes apparent is that they all expect to have to learn how to use it.  No-one expects to sit down in front of a new word-processor and just use it to do their job.  One disheartening example came with the release of Microsoft Office 2007.  For me, the ribbon was a huge step up in usability over the traditional tool-bar and menus approach.  The one resounding criticism I heard of Office 2007 was from existing Office 2003 (and prior) users:

“I used to know where everything was and then they went and changed it all.  Now I have to re-learn where things are”

Microsoft puts a great deal of time and effort into usability.  Hopefully, this means the learning curve for Office 2007 was not as severe as with previous versions.  The ribbon was designed to be “a better way”: a task-oriented user-interface is meant to be superior to a function-oriented one.  People have been “brought up” thinking along the lines of functional software, rather than thinking the computer will aid them in completing their task.  This mind-set will change over time, with the widespread adoption of task-oriented user interfaces.

If you ever have to write a user-interface remember this:

  • You either spend your time getting it right, or ask the users to spend their time figuring it out.
  • The world does not need more software that is difficult to use.