Tag Archives: programming

Where are the good programmers?

There seems to be a trend amongst programmers who blog: they all tend to say that they write rubbish code.  (Some put it more poetically than others…)  I think it is a good idea to steer clear of rock-star programmers, but is everyone who blogs bad at coding?  Or are they merely filled with a sense of modesty induced by self-preservation?  (Needed because the Internet is a big scary place and you can’t hide from the knockers forever.)

From my own perspective: because quality takes time, there is always the sense that with more time I would have done a better job.  That is probably true to a certain extent – but there is definitely a point of diminishing returns.  That, plus the fact that I have a finite amount of intelligence, means that the quality of my code will probably never exceed a certain level.  Someone smarter than me could possibly turn out better code than I could ever hope to.  Extra intelligence, however, does not always guarantee better results.  “Care” is an attribute that counts for a lot when writing code.  Careless programmers write rubbish code, and I find that particularly offensive if I know they are better problem solvers, or generally more intelligent, than I am.

Reflecting on my own code at a later date often reveals a painful truth.  Yes, I too write some awful code.  Even code that I was once quite proud of, I no longer see through rose-coloured glasses.  I probably notice this due to looking at the code from a different perspective.  This is impossible to do at the time, as you tend to be so engrossed in the code that it seems to be simple.  (To me, simple code that works is a close approximation of good code.)

Different perspectives on code arise from different usages of the code.  Code that sticks to some simple rules lends itself to re-use.  Code re-use is something of a holy grail of programming, but for a business it is not as important as having the code you write make money.  Joel Spolsky places a strong emphasis on finding good, talented programmers, and judges them as the people who are smart and get things done.

Placed solely on this scale, I have known quite a few programmers who “pass”.  But for some, there is a high price to pay, in the form of code maintainability.  I willingly concede that for the sake of getting a “version 1.0” code base out the door and selling, making code “good” is a luxury.  But carrying on with a relentless drive to push new versions out is counterproductive.  Extending and maintaining a bad code base takes more resources and there have been documented cases where lack of progress due to the bad code base is the eventual undoing of a project.

Maybe this indicates that there are different sorts of “good programmers”.  The ones who ensure there is a product to sell and the ones that ensure that sins of the past are dealt with in a timely fashion.   I suspect software projects need both these types of programmers to succeed. I also suspect that these two groups of programmers annoy each other due to their different outlooks.  But that’s a story for another time.

Retrofitting test cases

The main project I work on has an automated unit test application which is built and run as part of the build process.  For our office, the team was an early adopter of the concept, but unfortunately this has not translated into an extensive and well-maintained set of unit tests.  Put politely: it would be good to improve this coverage.  This raises a fairly obvious question: where do you begin?
First of all, it is worth analysing the statement: “It would be good to improve this coverage”.  There are a couple of benefits to having unit tests and these benefits help explain the statement.
Avoidance of regression bugs.  This is reasonably self-evident.  If you have sufficient coverage of your classes, you cannot introduce faulty logic without a unit test failing.  Over the years many quality assurance staff have told me “the earlier you catch a bug, the less expensive it is to fix”.  So many, in fact, that I now believe them.  (In truth, I don’t think I ever doubted the “earlier = cheaper” argument.)  Anyway, if you are running unit tests as part of the build process and building regularly, it stands to reason that any bugs introduced and caught will be fixed cheaply.
Code re-use.  A less obvious benefit is that the code you are testing now has two uses.  One in the application and one in the test-case.  While this may seem a contrived second use, it should help with code abstraction.  The theory goes that the more usages a class has, the less chance it has of being tightly coupled with other classes.  Tightly coupled classes increase code complexity.  The more tightly coupled they are, the more likely a change in one class will introduce a regression bug in another class.
Now that we have defined a couple of benefits that we hope to achieve through the use of unit tests, it helps define where we should begin.  We want the unit tests to reduce instances of regression bugs and improve code abstraction – which is enforced by re-use.  History logs of the revision control system can be studied to show the frequency of changes in a unit.  If a mature project has a hot-spot of continual changes to a given class, then that may well be an indicator of frequent regressions and “hopeful” changes rather than “thoughtful” changes.

There is a very good chance that such a class violates the single responsibility principle, which in turn makes writing unit tests for it an unviable proposition.  Now that we have identified what is wrong, we have a good place to start:

  • Look through the class and identify the different responsibilities the class has.
  • Extract the responsibility that is least entangled throughout the class.
  • Write unit tests for this new class.
  • Rinse, repeat.
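As a sketch of what that extraction might look like (the class and names here are hypothetical, not from any real project): suppose a “hot-spot” class both formats invoices and totals their line items.  The totalling responsibility is the least entangled, so it is extracted first and given its own tests.

```python
import unittest

class InvoiceTotaller:
    """Extracted responsibility: summing line items and applying tax."""

    def __init__(self, tax_rate):
        self.tax_rate = tax_rate

    def total(self, line_amounts):
        # Sum the line items, then apply tax to the subtotal.
        subtotal = sum(line_amounts)
        return round(subtotal * (1 + self.tax_rate), 2)

class InvoiceTotallerTests(unittest.TestCase):
    def test_empty_invoice_totals_zero(self):
        self.assertEqual(InvoiceTotaller(0.1).total([]), 0)

    def test_tax_is_applied_to_subtotal(self):
        # 10 + 20 = 30 subtotal, plus 10% tax = 33.00
        self.assertEqual(InvoiceTotaller(0.1).total([10.0, 20.0]), 33.0)
```

Run with `python -m unittest` as part of the build; any later change that alters the totalling behaviour now fails the build, which is exactly the regression safety-net we were after.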

Once you have extracted a new class and written the unit tests, your changes for it are not necessarily complete.  As you extract more classes, there is a chance that your new class can be further decoupled from the original.  In my experience, the important thing to remember is that removing a responsibility from a class does not immediately decouple the two classes.  Proper decoupling can take several iterations.  As I stated, start with the easy abstractions first.  As classes become less entangled, the harder areas will become easier to deal with.  At least, that’s the theory!

Second place

I have a nasty habit of picking the “second placed” technology.  Fortunately, I was too young to have invested in Betamax, and I do not have a collection of HD-DVDs lying around, but that is the sort of thing I am talking about.  I suspect I have since thrown it out, but at one stage I did own a copy of “OS/2 Warp” on 3.5 inch floppy disks.  (I did purchase this prior to the release of Windows 95, I hasten to add!)

When 3G phones were introduced in Australia, I was an early adopter and bought a Motorola A920.  The thought of application development on a phone was an interesting prospect, but any enthusiasm quickly disappeared with the fact it was a “locked platform” that required certification or a certain level of hacking to put applications on it.  The A920 was something of a flawed gem.  Many of the hardware features of the phone were not supported in the initial release of the firmware.  Bluetooth, the IR receiver and GPS functions were all “locked out”.  If I recall correctly, application development required a commercial C++ compiler as well.  As such, apart from the addition of a File Manager / Explorer, my A920 stayed remarkably “standard”.

These days, phone/PDA hardware is significantly more mature.  The vast bulkiness of the hardware has been lost, replaced by sleek stylish devices.  Of course, there are multiple players on the market at the moment, but two of the biggest contenders at this point in time are the iPhone and the HTC Magic with its Android O/S.  It would be remiss of me not to mention the Windows Mobile or Symbian O/S – so, now I have. :-)  To be fair, there are a large number of devices on these two platforms, but they do not capture the public’s imagination, the way the iPhone does.

I would love to write some small applications for a phone.  Nothing serious – nothing that is going to launch me on a stellar career path to be CEO of the next exciting start-up.  For a PC owning hobbyist, this makes Android an obvious choice.  Applications are written in Java and there are Eclipse plug-ins complete with hardware emulators.  This makes the cost of entry free.

Compare this with the iPhone.  Applications are written in Objective-C.  Whilst I believe the development tools are free, the cost of entry is buying a Mac.  Now, I realise that if you start with a Mac platform, the entry point is approximately the same as for Android development, but unfortunately for me, that is not the case.  Limited introductory reading has also led me to believe that the API provided by Android is superior to the iPhone API, and that getting applications approved by Apple can be problematic if your vision doesn’t align with Apple’s.
Overall, Android looks to be the obvious choice.  There is just one problem: the iPhone is killing Android in the marketplace.  Public awareness is heavily tilted in favour of the iPhone, thanks to aggressive advertising campaigns.  I have heard plenty of people say they wanted to get an iPhone.  I haven’t heard one person say they wanted an HTC Magic.
So which phone would I be buying?  For the time being, neither.  Wanting to write applications and actually getting around to doing so are two different things.  When I had my A920, I discovered it was quite good at doing everything except being a phone.  Now I own an unremarkable 3G phone that was purchased solely because it was the smallest on the market at the time – a reaction to the size of the A920, which was so bulky it was inconvenient to carry.  The A920 was not the only software-extensible consumer electronic device I have owned.  If past experience is anything to go by, customisation of such devices is little more than a pipe dream for me.

Realistically, I should attack this problem the other way around.  That is:  have an Android development environment and write and test applications on it.  If I reach the stage where I have written sufficient applications to justify buying an actual phone, then I shall – as long as I can convince myself I’m happy to buy the second-placed technology.

Fun with Delphi 2009!

All work done on our project is subject to peer review.  Any code submitted to the version control system must have an accompanying “change request”, which has a unique number.  The reviews are done “incrementally”: that is, “diffs” are compared to ensure the changes are correct.  (Or at least, that’s the theory!)

To help facilitate this, a Delphi client application was written to access the information necessary.  The diffs are stored as HTML files (generated by a server side application) which an embedded Web browser control displays.  An external “diff tool” can be used for more powerful operations than the web browser allows.  Although in theory, a normal web-browser could be used to perform the review, the HTML diff files are limited in their user-friendliness and non-trivial changes end up being examined by the external diff tool.

The problem I have is that I work in an office remote from where the “server” is.  Network latency and the low specification of the “server” take the review process to a new level of tedium.  However, as the review tool was written in-house, I had the power to do something about it!  Although I have been using Delphi 2009 since its release, this was the first opportunity I had to put together several of its new language features.

I wrote a simplistic “cache” for the program, which copied the files it needed to reference to a temporary directory on my own machine.  To do this in an unobtrusive manner, the files are copied using a background thread.  The cache keeps a request list, and a list keeping tabs on which files are currently held in the cache.  I utilised closures and anonymous methods to access these lists in a thread-safe manner, and the generic storage classes found in the Delphi libraries for the lists themselves.  As these classes support iterators, I was able to use those too.  (Yes, I realise iterators aren’t “new” to Delphi.)
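Delphi specifics aside, the overall pattern is roughly the following (sketched here in Python with illustrative names – the real tool naturally uses Delphi’s TThread, closures and generic lists): a worker thread drains a request queue, and a lock guards the record of which files have already been cached.

```python
import os
import queue
import shutil
import threading

class FileCache:
    """Copies requested files to a local directory on a background thread."""

    def __init__(self, cache_dir):
        self.cache_dir = cache_dir
        self.requests = queue.Queue()   # pending copy requests
        self.cached = set()             # files already copied
        self.lock = threading.Lock()    # guards self.cached
        threading.Thread(target=self._worker, daemon=True).start()

    def request(self, path):
        """Queue a file for background copying, unless already cached."""
        with self.lock:
            if path in self.cached:
                return
        self.requests.put(path)

    def _worker(self):
        # Drain the request queue, copying each file and recording it.
        while True:
            path = self.requests.get()
            shutil.copy(path, os.path.join(self.cache_dir,
                                           os.path.basename(path)))
            with self.lock:
                self.cached.add(path)
            self.requests.task_done()
```

The caller’s thread only ever touches the queue and the lock-guarded set, so the UI stays responsive while the slow network copies happen in the background.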

I know none of this is a “new trick” to the managed languages such as C# under .Net 2.0 and onward, or later versions of Java.   I was never a C++ developer, but I suspect some of these “new tricks” were always possible with it.  Delphi’s TThread class still seems to me a riskier way of writing multi-threaded code than C#, but it is so cool that an “old favourite” can now play along with some of the newer languages and do so “natively” rather than requiring a virtual machine to do so.

Commercial reality

One of my favourite blog writers is Eric Sink.  I find his writing style entertaining and informative.  I do not find revision control systems the most interesting subject matter: I use one, it works, I am happy.  But for Eric, they are his speciality – after all, he owns a company that writes them.  In a recent article, he discussed the speed vs. storage space trade-off.  As his “guinea pig” he used a source code file containing a single class, 400KB in size at its latest revision, and braced himself for a barrage of comments on whether such a file constitutes an example of poor coding.

Certainly, when you take into account things such as the single responsibility principle, it seems unlikely that a single class should grow to such a size.  It would seem that such a class could be a target for a future refactoring exercise.  Refactoring is a worthy cause.  There is no shortage of reading material that carefully constructs solid arguments for why it should be done.  But a more worthy cause is making software sell.  Pet projects and home hobbies aside, software is of no benefit if no one is using it.

Commercial reality dictates that if a new feature will help sell more copies of the software, then adding the feature is what is important.  It is true that the more obfuscated code becomes, the harder it is to expand to incorporate new features.  I have heard of software projects grinding to a halt because adding new features simply became too difficult to accommodate. 

Using commercial pressures as an excuse to write sloppy code is not acceptable.  I have seen examples where code that looks like “the first thing that popped into the developer’s head” has been committed to the revision control system.  Often, with very little extra thought (read “time”), a neater, better solution could have been found.  This is where task estimation is important.  In my experience, programmers will use the whole amount of time allocated to perform any task.  In all likelihood, they will get to the end of that time-frame, realise they overlooked at least one aspect, and then take longer – but that is not the point I am trying to make here!

If you have allocated “a day” to add a feature, then most often, that is how long it will take to add.  If you had allocated “half a day” for the same feature, then I would wager that the feature would have been added in about that half-day timeframe.  Granted, this is not always the case, but experience has shown me how often this is a surprisingly accurate revelation. 

This stems from the fact that if a programmer knows how long they are expected to take, they will get it working first, then tinker with the code until the time has elapsed.  Some “tinker time” assists in overall code readability.  If you are not prepared to add “code refactoring tasks” to your project plan (regardless of the project methodology you use) then allowing a certain “slackness” in task estimation allows your code a fighting chance of staying relatively neat.

When time pressures arise, neatness and accuracy of code are amongst the early casualties.  Unfortunately, this seems unavoidable and is just the price that is paid to remain profitable.  Whilst I strive to write and maintain neat, manageable, accurate code, I live in the real world and know that regularly revised source code over two years old (Eric’s example was seven years old) will likely be of the “too long / overly complex” variety.  I will not be one to criticise him for that.

Professional Code of Conduct

What stops a bank teller from stealing money from a bank?  These people literally handle lots of cash.  It would seem that handling large amounts of cash can make them blasé about its value.  I am also sure that most bank tellers are ethical people too.  But I doubt that any bank would be happy with just these assurances that money was not going to be stolen by the tellers.  Instead, the processes used by the bank would generate an audit trail, sufficiently detailed that theft by tellers would quickly be discovered. If tellers stealing money is seen as a risk, then the risk is minimised by the processes used.

Not all careers are able to be adequately scrutinised by processes to avoid unethical dealings.  Doctors make the “Declaration of Geneva”, which is the modern version of the Hippocratic oath.  This is a statement regarding the ethical treatment of their patients.  Not being a doctor, I am not sure how seriously they take the oath, but as a patient, I would like to believe that it is a binding principle for them.

Computer programming strikes me as a career that could do with an “oath”.  Some programs have the ability to harm people, harm their finances or otherwise adversely affect human life.  How seriously do the programmers of such systems take their jobs?  Again, I would like to think that they take that responsibility very seriously.  Would taking an oath make a difference here?  Computer programming requires no formal education, and quite often the “rich and famous” leaders of the industry are mavericks without one.  Without formal qualifications, do you know whether or not the programmer has produced good quality work?  Unfortunately, these two aspects (“education” and “quality work”) are only tenuously linked in the computing field.  I use this point as a defence for my claim that we are still very much in the infancy of the computer industry.  In years to come, I hope the two are more closely aligned.

I believe that taking an appropriate oath could be an important aspect of computer programming in years to come.  Such an oath would require several parts to it:

Part 1: To be honest and ethical in code that is written.
This includes things such as not writing Trojan horses, illicit collection of users’ private data and ensuring that computational output given is truthful and accurate to the best of your ability.

Part 2: To follow industry best practices with respect to producing high-quality code.
Whatever I write here will date the article, but things such as writing unit-tests for code and following an appropriate development methodology are along the lines of what I am referring to.

Part 3: To reduce the severity or likelihood of errors when they are discovered in software.
The whole notion of “Bug prioritisation” is based upon fixing the most serious ones first, and leaving more obscure bugs with less impact until later.  This point in an oath is to ensure that bugs are not left behind for the wrong reasons.  (e.g. Because there were “more fun” bits to write)

Part 4: To periodically update technical skills through on-going education.
The intent here is to ensure that code written is done on the strongest and most relevant platform.  There is a balance required here.  On one side the aim is not to produce code that “rots” because it is old and unmaintainable and on the other side, trying to avoid re-writing everything in the latest trendy language.

I suspect that such an oath needs to be complied with “in the spirit that it is written”.  Programmers, by their very nature, are constrained by binary logic.  As such, I could imagine the extreme, nerdy glee some programmers would take in deconstructing such an oath.  In this sense, “deconstructing” is exactly the right term to use: applying a warped perspective to a literal interpretation of an oath would not be constructive for our industry.  You have two choices when constructing something like an oath.  You can write it in straightforward, simple terms and use the phrase “in the spirit that it is written”, or you can write it so precisely that there is no room for interpretation.  Precise wording undermines the effectiveness of a document by turning it into “legalese”.  No one likes reading such a document, and people often hide behind the excuse that they didn’t understand it.

I have never made the “Declaration of Geneva”, but I do understand what it means, and I expect any doctor that treats me to have made it, or a similar oath.  It could have been written with legal precision, but for the benefit of doctors and patients alike, it was not.  Similarly, if the software industry was bound by an oath or affirmation, then it would need to be one that could be understood by both the programmers and the end-users.

Do end-users care about ethical programmers?  I somehow doubt that it is as important to them as an ethical medical practitioner is, but they probably do expect that the software they use has been written by diligent programmers.  If you were to point out that private information on their computer may be accessible to a rogue program, then I expect they would care about the programmers being ethical as well!  Given the all-pervasive nature of software in western society, you could wonder why the general public has not started to care more about such matters.  With “public education”, you could probably raise the level of concern, but such an approach does seem way over the top.  It is probably just as effective for those of us in the industry to care about such matters.


Programming Bloggers’ Lament

Programming blogs try to distil wisdom, pass on advice, and inform their readers of good practices.  The authors of the blogs I read tend to be modest about their own abilities – willing to offer advice, but also willing to stand corrected.  I like this trait: less rock-star, more egoless programmer.

One of the common issues bloggers have is getting their information to the screens of those who need it most.  Almost every programmer I know could name a “hopeless” programmer: someone they have come across during their career who really just cannot code.  These are often the kind of people who should be taking an active interest in learning; but generally, they don’t.  In other words, they do not read the blogs that could possibly help them improve their work.  This is what I refer to as “the Programming Bloggers’ Lament”.

There is a simple explanation as to why these sorts of programmers do nothing to improve their skills: learning would take effort.  These coders may not be particularly lazy people.  There is more to life than work – and to them, programming is just work.  Anyone with a healthy “work-life balance” (as the trendy ones call it) deserves respect.  It is worth noting that the key word is “balance”, which indicates to me that diligence on the “work” side of the equation is still required.

So, how do you reach these sorts of people?  First of all, I am convinced that not every programmer I have met should be one.  As I have suggested before, if patience is your trump card when it comes to programming, then maybe you are in the wrong career.  Presuming that “improving” is a possibility, mentoring and team leadership are the best ways to reach these kinds of programmers.  In a positive atmosphere, everyone will seek to improve their skills.  You really cannot change other people.  They have to want to change before it will happen.  Even then, they have to want to change with a kind of freakish determination before they are likely to do so.  The best you can hope for is to inspire them to be better.  Continual guidance of these people is not ideal, but may be necessary.  Sometimes you will never be able to get them to an end-goal such as “being a great programmer”.  Sometimes the best you can hope for is getting them to a level of acceptable competence and professionalism that allows them to admit when some task is beyond them.

So if you know someone who you wish would take more of an interest in reading programming blogs, you are probably wishing for the wrong thing.  Use the lessons you learn from reading blogs and pass them on to those who need them most.  The bottom line is: if you know someone who will only spend a minimal amount of effort on their work, then that minimal effort is the “price” at which you must deliver the lessons they need to learn.

A cold hard truth about recruitment

Whether intentional or not, I have noticed a trend recently of some prominent computer industry folks trying to spell out to programmers that non-technical skills are important when looking for work.  Normally these tend to be along the lines of “improve your communication skills” or “try not to look like you have Asperger’s”.

Then, from Joel Spolsky, came the first piece of résumé advice I could believe in: “you may want to highlight the Banging Out Code parts of your experience”.  Over the years I have heard numerous recruiters (both agencies and direct employers) say things like “people skills are important” and “some technical deficiencies can be overlooked for the right candidate”.  Baloney!  I have been on both sides of the fence (i.e. looking for work and looking for programmers) and I have never seen any proof of that.  Despite the common sense suggesting that an intelligent, eager person put into a position will succeed, the person with the right set of skills on paper will get the nod.

I guess this is a sign of risk-minimisation.  If the person has the right skills on paper, then the only risk is whether or not they can apply themselves effectively in the environment a company offers.  This is going to be a risk, no matter who you take on, so why not take the approach of choosing the candidate who ticks the most boxes?

The simple reason this approach is used may lie in the fact that there simply needs to be a fast and effective filter.  Too many candidates apply for any and every IT job.  For this reason a brutal skills filter needs to be used to narrow the search. 

If you are looking for employment, take the time to look for the right job rather than applying for every job.  Do not lie to yourself, or to a prospective employer, and do your homework on determining how aligned an employer is with your own goals.

If you are looking to hire a new programmer, take the time to spell out exactly what you need in a candidate.  Make your requirements as clear as possible, listing them in order of importance.  If you list a skill as optional, that is what it is: someone lacking that skill should not be at a major disadvantage to someone who has it.


Refactoring is not a dirty word

Time will always apply pressure on a project.  When this occurs, there is a tendency to want to cut corners.  If you are a “doer” coder, then I would expect the first working version of your source code is not particularly neat.  Your task is not over yet!  Make the effort to refine variable, function and class names.  Make sure you identify what is wrong with your code and fix it there and then.

Unit tests are your friends here.  Well-constructed unit tests help you to refactor fearlessly.  You will know if you break your code when you refactor, as your unit tests will fail.  It is important that you have sufficient coverage with these tests.  If you miss boundary cases, or do not test each code path, you are leaving yourself exposed to introducing bugs.  Sometimes a few unit tests are worse than having no unit tests, as they can lead to a false sense of security.
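To illustrate why boundary cases matter (a contrived example, not from any real project): the tests below pin down the inclusive boundaries of a simple clamping function.  A careless refactor that changed an inclusive comparison to an exclusive one would slip past the “inside range” test but be caught immediately by the boundary tests.

```python
import unittest

def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

class ClampTests(unittest.TestCase):
    def test_inside_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_boundaries_are_inclusive(self):
        # These are the tests a sloppy refactor is most likely to break.
        self.assertEqual(clamp(0, 0, 10), 0)
        self.assertEqual(clamp(10, 0, 10), 10)

    def test_outside_range(self):
        self.assertEqual(clamp(-1, 0, 10), 0)
        self.assertEqual(clamp(11, 0, 10), 10)
```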

Having unit tests can also help you to produce less tightly coupled code.  Such code is generally better abstracted and hence more amenable to reuse.  In fact, the code you write is already used in two places: firstly in your application, and secondly in your unit tests.

Test-driven development is the technique of writing test cases first and then writing the code that passes them.  (There is more to it than that – but the fact that people write whole books on the subject probably tipped you off.)  There is the concept that you write just enough code to pass the unit tests and no more.  It is a great way of ensuring code brevity.  Less code means less chance for bugs to exist.  It also helps to remind you of what you are trying to achieve in the code you are writing.
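A toy illustration of that test-first ordering (entirely hypothetical names): the tests below are written before the function, and the function then contains just enough code to make them pass, and no more.

```python
import unittest

# Step 1: the tests, written before any implementation exists.
class PadNumberTests(unittest.TestCase):
    def test_pads_to_width(self):
        self.assertEqual(pad_number(7, 3), "007")

    def test_wider_numbers_are_left_alone(self):
        self.assertEqual(pad_number(1234, 3), "1234")

# Step 2: just enough code to satisfy the tests above.
def pad_number(n, width):
    return str(n).zfill(width)
```

If a requirement later appears (say, padding negative numbers), a new failing test is written first, and only then is the function extended.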

Unlike some cases I have seen, refactoring is not “throwing away” a code base and starting again.  That is rewriting!  I have not seen much public endorsement of rewriting as a technique for improving a code base.  Refactoring is merely the art of neatening the code.  A compiler will “understand” what it has to do, no matter how nasty (and bug-ridden) the code may be.  The purpose of refactoring is to allow another human to understand what the code does.  As humans are (relatively) good at pattern matching, looking for repeated code blocks is a good first step when refactoring.  The repeated code is not necessarily multiple statements, or indeed even one complete statement.  Sometimes the repeated block is worthy of its own function; sometimes it may be enough to assign the result of the code block to a local variable.
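For example (an invented snippet), a repeated sub-expression that does not justify a whole function can simply be captured in a well-named local variable:

```python
# Before: the same discount calculation appears in three places.
def describe_before(price, rate):
    if price - price * rate > 100:
        return "still expensive: %.2f" % (price - price * rate)
    return "bargain: %.2f" % (price - price * rate)

# After: the repeated expression is assigned to a local variable,
# which both removes the duplication and names the concept.
def describe(price, rate):
    discounted = price - price * rate
    if discounted > 100:
        return "still expensive: %.2f" % discounted
    return "bargain: %.2f" % discounted
```

The behaviour is unchanged, but the next reader sees the word “discounted” instead of having to pattern-match the arithmetic three times.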

It is worth spending some time on tidying the code.  I have yet to come across any firm metric to help you determine how much time this will be.  You don’t want to be accused of “playing” with the code, the way a child “plays” with food they don’t want to eat!  When refactoring existing code, it can be worth multiple check-ins.  If your project uses a continuous integration product and sufficient test case coverage, then this helps prevent you breaking code.  If you don’t use these tools, it still helps provide a level of transparency to your work.  It will be easier for someone to understand how the code evolved into the state it is.  This may be important in case bugs are introduced.

Remember that the result of this work is not to produce bug-free code, but code where bugs cannot hide!  You are aiming to make it readable for the next person by making the learning curve as gentle as possible.  Why bother making it easy for the next person?  Well, it may just be you!


Doers and Thinkers

It is possible to divide groups of people in a great number of ways.  Left-handers and Right-handers, women and men, so forth and so on.  Sometimes two groups are mutually exclusive and adequate to cover all of a population base, sometimes (even in the above categories) they are not.  I am about to discuss two groups of programmers – but I would fully expect that there are programmers out there who don’t fit either category as precisely as I will describe them.

Category 1, which I shall call “the doers”, includes the majority of programmers I have seen and have read about.  Their personal coding style can be bluntly referred to as a “trial-and-error style approach”.  That actually sounds far too much like a harsh criticism.  The majority of these programmers are not clueless monkeys prodding away at a keyboard.  They have a reasonable degree of knowing what to do, but the detail is lacking.  As they code, details come into focus, nuances are dealt with and problems are overcome.  The beauty of this style of coding is that it is easier to deal with when writing.  There is no need for a detailed analysis of all aspects of the code, which in turn makes it easier to concentrate on a specific part of the code.

Category 2 I shall call “the thinkers”.  They represent a much smaller group of programmers.  Their approach is more of intense thought followed by intense typing – with rarely a backspace key pressed.  I would suspect that this group would have a far better ratio of keys-pressed to source-code output.  These are the true geniuses in the field, but based on my experience, they make up no more than one or two percent. 

In reality, even the “doers” utilise and require a high degree of concentration to perform their task.  The biggest difference tends to be that the first version of working code the “thinkers” write will be neat and orderly, whereas some degree of refactoring will be required for the “doers” to turn in work of a similar quality.

As mentioned in my opening paragraph, I do believe that there are times when you will not neatly pigeonhole developers into only one of the two categories.  Depending on the task at hand, a thinker may act as a doer, or vice versa.

Personally, I place myself in the “doers” group.  I used to feel convinced that the “thinkers” group was where I wanted to be as a programmer.  Indeed, I recommend thinking about problems as opposed to blindly trying anything that springs to mind.  These days, however, I am more forgiving of my own ability, or lack thereof.

The computing industry seems to be becoming aware that most programmers are not “thinkers”.  Many agile and XP-style approaches to programming rely on this.  Concepts such as pair programming and test-driven development tend to favour “doers”.  For example, the fact that a “thinker” will visualise the code in their head for a long time prior to writing it down makes it hard for them to participate in pair-programming environments.

The real danger is that thinkers will not be given a working environment that suits their needs.  One size does not fit all.  As the majority of the industry is made up of “doers”, it is important that methodologies are followed that allow them to produce output that a “thinker” would be proud of.  Essentially, that boils down to allowing time for refactoring.  The code is not done the moment it “works”.  Newer methodologies accept this and build it into the process of writing software.  But that’s a story for another time.