What’s the most frustrating problem in IT? It’s us

Why do so many large computing projects end up as costly failures? History shows that we, not our machines, are the problem, and it's time to address this.

The UK’s National Health Service might seem like a local topic for this column, but with 1.7 million employees and a budget of over $150 billion, it is the world’s fifth-largest employer, surpassed only by McDonald's, Walmart, the Chinese Army, and the US Department of Defense. Its successes and failures offer important lessons for organizations of all sizes.

Consider the recent news that an abandoned attempt to upgrade its computer systems will cost over £9.8 billion ($15 billion), described by the Public Accounts Committee as one of the "worst and most expensive contracting fiascos" in public sector history. This won't surprise anyone who has worked on large computing projects. Planning is often inadequate, with projected timelines and budgets based more on wishful thinking than on a solid analysis of needs. Communication breaks down, with peripheral features crowding out core functionality. Meanwhile, the world changes, turning yesterday’s technical marvel into tomorrow’s outdated burden, complete with endless administrative headaches and limited potential for technical growth.

According to a 2011 study of 1,471 Information and Communication Technology (ICT) projects by Alexander Budzier and Bent Flyvbjerg of the Saïd Business School, Oxford, one in every six projects ends up costing at least three times as much as initially estimated. This is about twenty times the rate at which projects in fields like construction run into comparable overruns.

Costly IT failures are an all-too-common part of 21st-century life. However, what's important is not just what went wrong this time, but why the same mistakes keep happening decade after decade. These factors were already evident in one of the first and most famous project management failures in computing history: the IBM 7030 Stretch supercomputer. Begun in 1956, the project aimed to build a machine at least one hundred times more powerful than IBM's previous system, the IBM 704. This goal secured a prestigious contract with the Los Alamos National Laboratory, and in 1960 the machine's price was set at $13.5 million, with negotiations beginning for other orders.

The problem was that when a working version was tested in 1961, it was only 30 times faster than its predecessor. Despite incorporating several innovations that would prove important to later computers, the 7030 fell well short of its target, and IBM did not grasp the scale of the shortfall until it was too late. The company's CEO announced that the price of the nine systems already ordered would be cut by almost $6 million each, to below cost, and that no more machines would be made or sold. Cheaper, more agile competitors filled the gap.

‘Artificial stupidity’

Is there something about information technology that leads to unrealistic expectations and underperformance? Do organizations tend to overlook technology challenges until it's too late?

One answer is the gap between how businesses see problems and how computer systems see them. Take the health service, for example. Moving to a fully electronic system for patient records makes perfect sense, but connecting that goal with the complex, interconnected ways 1.7 million employees currently work is a huge challenge. IBM's task looked simpler on paper: build a machine one hundred times faster than its previous best. Yet turning that idea into reality raised problems that only emerged once new components had been developed, bringing dead ends and frustrations along the way.

All projects face such challenges. With digital systems, the focus is less on the real world and more on an abstract vision of what might be possible. The sky's the limit, and big promises can help win contracts. Yet, there's a natural divide between the real-world complexities of any situation and what it takes to represent these on a screen. Computers rely on models, systems, and simplifications we've created to make ourselves understandable to them. The big risk is that we might not understand ourselves or our situation well enough to explain it to them.

We might believe we have the answers and propose elegant solutions to complex problems, only to find that what we’ve "solved" is far from what we wanted or needed. In almost every large computing project, the attempt to solve a few big problems invites disaster, because countless hidden, conflicting requirements lie waiting to be uncovered.

If there is hope, it lies not in endlessly analyzing the failures we seem doomed to repeat, but in better understanding the flaws that lead us to them. This means recognizing that people often struggle to explain themselves in ways machines can understand.

You might call it artificial stupidity: the tendency to project our hopes and biases onto a digital platform without considering what reality can actually support. We, not our machines, are the problem, and any solution starts with accepting this.

Such humility is hard to promote, especially when it competes with the polish of a slick solution, and mutual incomprehension between managers and technicians long predates the digital age. The alternative, however, is unthinkable: a future of over-promising and under-delivering, and of wondering why our most powerful tools only seem to create more opportunities for mistakes.