"Scoping" out IT project failure...

According to a 2008 Gartner report, 15% of all IT projects failed that year because of high cost variance, while another 18% were deemed unsuccessful because they were substantially late. Taken together, that means roughly 1 in 3 technology projects in 2008 was counted a failure.

Hmmm… IT project failure declared because the estimates for budget and schedule proved to be incorrect.  Since many IT projects produce unique software products (i.e., nothing done in exactly the same way before), is it any wonder that the estimates for scope AND budget AND schedule would be wrong?

Consider what this would mean in other industries if failure were defined as having a high cost variance or being substantially late (whatever “high” and “substantial” mean):

  • Medicine:  when a doctor “estimates” that a full term baby is due on Oct 15 and the baby is born on October 31 (16 days late) – is that baby a failure?
  • Medicine: when an oncologist “estimates” that a patient has 18 months to live and the patient is still alive 5 years later – is that patient a failure?
  • Survival: when the mining accident happened in Chile and experts predicted there would be casualties, yet the rescue took far longer than expected and every miner was saved alive – was that a failure because the time frame exceeded expectations?
  • Hurricanes:  when the scientist in Colorado predicts the number of hurricanes that will form in the Atlantic and how many will make landfall, and the number of storms falls short of his predictions – is that a failed hurricane season?
  • Everyday life: when you go grocery shopping with a list and a preset amount of cash, and you have to make several trips to purchase the items and they cost you more than you anticipated – is your Saturday a failure?
  • Everyday life: when your son makes the basketball team at school and your planned school budget is exceeded after shoes and uniforms – is the school year a failure?

All of these examples, and many more from other industries, illustrate how an “estimate” is simply a best guess based on history and science.  But life doesn’t follow science (truth can be stranger than fiction, but that is another post for another time).  Just as a psychic cannot reliably predict the future (they can only make an educated guess based on intuition and observation), software estimators cannot reliably forecast the life of a project before it begins.  In fact, software estimators often work with even less predictable scope than the examples above.

So, in the context of a software project, what do on-time and on-budget really mean?  Given a set of approximate inputs for the major cost drivers (based on the information at hand), together with historical data for similar projects, an estimate is derived.  Will this estimate be correct?  Never!  It is only ever a best guess given “typical” environmental and situational characteristics – and an optimistic view of what could happen during the project.
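To make that concrete, here is a minimal sketch (with entirely made-up driver values and variance figures – the function names and numbers are illustrative assumptions, not any real estimation model) of why a point estimate derived from cost drivers and history is really just the midpoint of a wide range:

```python
# Toy illustration: a project estimate built from rough cost-driver inputs
# plus a multiplier learned from "similar" historical projects.
# All numbers below are hypothetical.

def estimate_cost(size_points, rate_per_point, historical_multiplier=1.0):
    """Point estimate: size x rate, scaled by a factor drawn from
    comparable past projects (every input is an approximation)."""
    return size_points * rate_per_point * historical_multiplier

def estimate_range(point_estimate, historical_variance=0.35):
    """If history for similar projects shows, say, +/-35% cost variance,
    the honest answer is a range, not a single number."""
    return (point_estimate * (1 - historical_variance),
            point_estimate * (1 + historical_variance))

point = estimate_cost(size_points=400, rate_per_point=500,
                      historical_multiplier=1.2)
low, high = estimate_range(point)
print(f"Point estimate: ${point:,.0f} (realistic range: ${low:,.0f}-${high:,.0f})")
```

Quoting only the point estimate – and then calling the project a “failure” when actual cost lands elsewhere inside that perfectly normal range – is exactly the trap the statistics above fall into.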

If the estimator is gifted with a solid, complete set of requirements for a piece of software similar to a historical one, he or she might come within a reasonable range of accuracy.  Software, however, remains an amorphous product whose requirements are often ill-defined or discovered late in the development life cycle.

IT project failure (in my humble opinion) should be defined based on scope: if the project misses the product requirements or gets them wrong, then the project should be deemed a failure – not whether it cost more than the estimator thought (a reflection of how well the estimator could predict life) or whether the product was delivered late (also a function of how well the estimate mirrored life).  It makes so much more sense to track project delivery based on scope!  But wouldn’t that mean we’d have to concentrate on getting the requirements right and then delivering the product right?  That’s a novel thought.

What do you think?
