Still looking for a fair EULA (Part 2)

Earlier, I talked about the End User Licence Agreements (EULAs) that bind software users to legal restraints – preventing lawsuits against the software manufacturer in the case of failures. In it I stated: “abdicating all responsibility for the software failing upon the user is a cop-out.” In my mind, there are two immediate problems preventing the abolition of this cop-out. The first I shall call:

The tale of a bad Meat Loaf song and an oddly scaled reflection:

I stand by my “cop-out” statement. But on the flip side of the conversation, I also want to state: “people need to take responsibility for their own actions”. By this I mean: where users make errors with computer software due to their own lack of understanding, they do not deserve legal protection. This is the counter-argument that prevents fair EULAs from being a black-and-white issue. Years of pop culture have seen me take the viewpoint that lawyers live to make money by suing those who already have it. If the great mediums of film and television are to be believed (even partially believed), a person’s own stupidity is no defence for the company, organisation or individual being sued. This attribute probably explains the “Objects in mirror are closer than they appear” stickers that appear on car mirrors.

Without any legal protection, the software industry would be forced to produce software that provides absolute protection for the absolutely clueless. I can see the “Are you sure?” dialogues now…
Software that molly-coddles users to the extreme – looking after those who hadn’t figured out that mirrored images may be a little difficult to judge scale by – will likely be tedious and unproductive to use.
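To caricature the point, here is a minimal Python sketch of what such defensively over-confirmed software might look like (the function, prompts and canned answers are entirely my invention, purely illustrative):

```python
def confirm(prompt: str, answer: str) -> bool:
    # In a real application this would pop up a dialogue box;
    # here we just check a canned answer.
    print(prompt)
    return answer.lower() == "y"

def delete_file(answers: list[str]) -> str:
    """One destructive action buried under three layers of 'Are you sure?'."""
    prompts = [
        "Delete this file?",
        "Are you sure?",
        "Are you REALLY sure? This cannot be undone.",
    ]
    for i, prompt in enumerate(prompts):
        answer = answers[i] if i < len(answers) else "n"
        if not confirm(prompt, answer):
            return "cancelled"
    return "deleted"

print(delete_file(["y", "y", "y"]))  # all three hurdles cleared: "deleted"
print(delete_file(["y", "n"]))       # second-guessed at step two: "cancelled"
```

Three confirmations protect the clueless, but for everyone else each extra dialogue is pure friction – which is exactly the tedium I mean.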

For the second “immediate problem” I see, I couldn’t think of a tricky title… So I have simply called it:

The blame game:

A likely area of confusion is figuring out exactly who is liable in the event of an error. Normally, the component blamed for a failure is the one closest to the user. Say, for instance, an error in CAD software means a building is not constructed strongly enough. In this example, the error will likely lie in the calculations performed by the CAD software itself. But this may not be the case:

  • An operating system function may not have returned the correct value.
  • An error in rounding floating point numbers may have caused a value to be “just the wrong side” of the required value.
  • The compiler used to compile the CAD program may have generated the wrong assembly / intermediate-language instructions.
  • The processor itself may have returned the wrong answer for a division (as the Pentium FDIV bug famously did).
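The floating-point case is easy to demonstrate. In this contrived Python sketch (the safety-factor framing is mine, purely illustrative), ten contributions of 0.1 should total exactly 1.0, but the binary representation of 0.1 leaves the sum “just the wrong side” of the threshold:

```python
import math

required = 1.0                 # e.g. a minimum safety factor
computed = sum([0.1] * 10)     # ten equal contributions of 0.1

print(computed)                # 0.9999999999999999 - not quite 1.0
print(computed >= required)    # False: the check fails spuriously

# Comparing with a tolerance distinguishes representation error
# from a genuine shortfall.
print(math.isclose(computed, required))  # True
```

Whether a spurious failure like this counts as a bug in the application, the language runtime or the hardware is exactly the kind of question the blame game turns on.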

Exact determination of who is at fault would no longer be a case for lawyers alone. Evidence to identify the guilty party may prove elusive. Suing the software producer would probably succeed only in tying up the company’s programming resources whilst they ascertain whether they should counter-sue someone else. The only people who benefit from the long proceedings that would likely ensue are those being paid by the hour… To me, it appears that only the lawyers come out of this scenario any better off.

In summary:

The threat of legal repercussions may well force software developers to put more effort into thorough practices that raise the quality of their wares. As you may have noticed, quality software is something I am passionate about. But I am a realist. I don’t think I will live to see the day that all software released is “bug free”. I am convinced that nothing short of starting the computer software industry again would achieve such a goal. The financial, economic and cultural shock required to “stop the industry whilst we get our act together” is not one the rest of the world would be prepared to bear.

One thought on “Still looking for a fair EULA (Part 2)”

  1. I don’t think legal repercussions for bad software would ever have much positive effect. The first problem is that it would immediately introduce a barrier for *new* software. We understand that *testing* software and having bugs reported is a really good way to improve the quality. You can do as much in-house and developer testing as you like, but you *always* miss bugs, and that’s why a staged rollout in the form of beta testers and downstream distros is so valuable for ensuring software quality. People have different environments, different schedulers and different kinds of data, all of which can turn up bugs. You need interested parties to help discover problems. I read a lot about the value of iterative development and frequently meeting with the customer to ensure the product does the right thing. If you are locked in a legal cage where it must be perfect before release, you can end up releasing a bug-free product that doesn’t actually do what people want.

    Maybe code quality laws could be described in terms of test coverage or hours spent on QA. Test coverage is a nice idea, but it is so easy to game – writing useless tests that execute the code but don’t actually verify its operation. Similarly, “hours spent on QA” is easily falsified, and it again comes back to quality vs. quantity.
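    The coverage-gaming the commenter describes is worth seeing concretely. In this hypothetical Python sketch (the function and test names are mine), both tests execute every line of the code under test, so a coverage tool would report 100% for either – but only one of them would ever catch a bug:

    ```python
    def apply_discount(price: float, percent: float) -> float:
        """Hypothetical code under test."""
        return price * (1 - percent / 100)

    def test_discount_gamed():
        # Executes the code, so coverage is 100% - but asserts nothing,
        # so it passes no matter what apply_discount returns.
        apply_discount(100.0, 25.0)

    def test_discount_real():
        # Actually verifies the result.
        assert apply_discount(100.0, 25.0) == 75.0

    test_discount_gamed()
    test_discount_real()
    print("both tests pass")
    ```

    A coverage metric alone cannot tell these two tests apart, which is why coverage makes a poor legal standard for quality.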

    I think the best way to understand the quality of software is to look at its maturity. Software improves over time – people find problems and report them, and they get fixed. I like to think of the dependency stack and how all the core libraries tend to work flawlessly and it’s the applications at the top that exhibit the most problems. The kernel (not including drivers), the standard C library, the core GUI toolkit and the build toolchain are all very high quality, and it’s because they have seen years of use, and more importantly, billions upon billions of executions. Application software, by contrast, is often young and has had relatively little use in the wild. It even makes sense, in a way, that problems at the application level are not as serious as problems at the toolkit level because they affect a far smaller audience of users.

    So my point is that rather than resorting to legal recourse for buggy software, the best thing to do is to have third-party data on the popularity of projects and avoid using buggy software in the first place. You can see this when searching for Firefox extensions on addons.mozilla.org and in Ohloh’s “Projects by Popularity” list (http://www.ohloh.net/projects). There is still room for new projects to gain popularity through other channels, like word of mouth, or for those with different feature sets.
