There are all kinds of definitions for the term “technology deficit”. Some believe it is the total implied cost to refresh technology components, such as PCs, servers, storage, operating systems, anti-virus tools, database software, middleware applications, networks, network components, and patching from their current state to where they need to be.
In 2022, Deloitte reported that there was a technology deficit “because board members lack the knowledge they need to ask informed questions and ensure technology is being driven by strategy.”
Wikipedia says, “In software development, technical debt (also known as design debt or code debt) is the implied cost of additional rework caused by choosing an easy (limited) solution now instead of using a better approach that would take longer.”
Then, there was the Southwest Airlines incident in late 2022, apparently brought on by an old scheduling system that had finally seen its day and failed.
I keep asking myself: if I were a CIO today and the board asked me to explain technology deficit or technology debt, what would I say? Here are a few thoughts:
1. A technology deficit describes any situation in which your technology is not doing what it is supposed to be doing.
Deficits get created for many reasons. The technology might be out of date and no longer capable of running today’s systems. Or you might have people in your organization (including the board!) who lack working knowledge of technology and cannot evaluate technology strategies or ask informed questions about them. Or you could have the latest technology or develop the latest code, but it isn’t of production-level quality, and it just doesn’t deliver.
In short, there is no singular reason for technology deficits, but the bottom line is that if technology isn’t working for you, you are creating a deficit.
2. Technology assets need to be assessed to see if they are on the plus or the minus side of the technology ledger.
IT should take the time to inventory systems and assets so it can grade them based upon the value they are (or aren’t) producing.
If a 30-year-old hotel reservation system on a mainframe continues to deliver value and has never failed in 30 years (I know of a situation like that), and the system is one of the best in the business, should you sunset it just because it is old? Or do you continue to run it because it is delivering great revenue value?
If a state-of-the-art AI (artificial intelligence) system is sitting idle, do you discard it because it isn’t producing, or do you educate users so that they learn how to use it?
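The grading idea above can be sketched in code. This is a minimal, hypothetical example, assuming made-up asset names and cost figures; a real inventory would pull these numbers from a CMDB or asset-management system. The point it illustrates is that assets land on the plus or minus side of the ledger based on net value delivered, not age.

```python
# A minimal sketch of a technology ledger. All names and figures are
# hypothetical; real data would come from an asset inventory.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    annual_value: float   # revenue or savings the asset delivers
    annual_cost: float    # run, maintenance, and risk costs
    age_years: int

def ledger_side(asset: Asset) -> str:
    """Grade an asset by net value produced, not by age alone."""
    net = asset.annual_value - asset.annual_cost
    return "plus" if net > 0 else "minus"

# A 30-year-old system that still delivers lands on the plus side...
mainframe = Asset("hotel-reservations", annual_value=5_000_000,
                  annual_cost=800_000, age_years=30)
# ...while a brand-new but idle system lands on the minus side.
idle_ai = Asset("ai-platform", annual_value=0,
                annual_cost=250_000, age_years=1)

print(mainframe.name, ledger_side(mainframe))  # hotel-reservations plus
print(idle_ai.name, ledger_side(idle_ai))      # ai-platform minus
```

The design choice here mirrors the questions in the text: age never appears in the grading function, only value and cost.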
3. Technology deficit management is risk management.
There are systems that are risky because they are failing. These systems might be antiquated and past their prime, but the budget might not be there to replace them all at once. If they are mission-critical, like an ERP system that drives an entire company’s daily operations, they are high risk.
This is where risk management comes in because the CIO needs to be clear with both the CEO and the board on just what that risk is.
Many organizations approach high-risk projects like this by first trialing generic, cloud-based versions of the system alongside the customized, internal system that they already have. They might even have a few users or departments work on this new cloud-based system version.
Over time, more departments can be migrated to the new, cloud-based version of the system so the old internal system can finally be retired. A project like this requires patience, because you can’t do any immediate cutovers. This is another way of managing risk.
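The department-by-department migration described above can be sketched as a simple routing rule. This is an illustrative sketch only, with hypothetical department names; in practice the routing would live in middleware, a gateway, or a feature-flag service.

```python
# A minimal sketch of phased cutover routing. Department names are
# hypothetical; the migrated set grows over time until the legacy
# system can be retired.
MIGRATED_DEPARTMENTS = {"accounting", "hr"}

def route_request(department: str) -> str:
    """Send migrated departments to the cloud system; others stay on legacy."""
    if department in MIGRATED_DEPARTMENTS:
        return "cloud"
    return "legacy"

print(route_request("accounting"))  # cloud
print(route_request("operations"))  # legacy
```

Because no department is cut over until it is added to the migrated set, there is never an immediate, all-at-once cutover, which is the risk-management property the text describes.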
A classic use case was the Y2K date conversion project more than two decades ago, when companies around the world had to change all their date fields to prepare for the new century. In most cases, this was a painstakingly slow, expensive, and manual process. No one liked spending millions of dollars to change date fields in old code and databases, but everyone accepted the necessity. Why? Because boards and management teams had been briefed on the risk of not doing it.
4. Reimagine IT’s view of maintenance.
When I was in IT, soon-to-retire programmers and neophytes were always placed on the software maintenance team. If you were a young coder looking to move up, maintenance was the absolute basement.
Today, that notion still holds. Software maintenance is relegated to a back seat in IT, and only receives recognition when a system fails and quickly needs a patch.
An alternate approach is to define software maintenance as a critical risk management function.
For example, you can see in both maintenance and help desk logs which systems are failing most often. Strategically, the software maintenance team should be assigned to these trouble spots so that help desk workloads can be lightened, software can work better, and fewer users get frustrated.
From a risk perspective, you are also reducing the amount of inherent risk from potential system failures that the company experiences.
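The prioritization approach above can be sketched with a few lines of code. This is a hedged illustration with made-up ticket data; a real version would parse actual help desk and maintenance logs. It simply counts incidents per system and ranks the worst offenders first, which is where the maintenance team should be assigned.

```python
# A minimal sketch: rank systems by failure frequency in help desk
# logs. The (system, issue) records below are hypothetical.
from collections import Counter

tickets = [
    ("billing", "crash"), ("billing", "timeout"), ("crm", "login"),
    ("billing", "crash"), ("inventory", "sync"), ("crm", "timeout"),
]

failure_counts = Counter(system for system, _ in tickets)
priority = [system for system, _ in failure_counts.most_common()]
print(priority)  # ['billing', 'crm', 'inventory']
```

Assigning maintenance to the top of this list lightens the help desk workload and reduces inherent failure risk where it is concentrated.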
Software maintenance and risk management are as important today as they’ve ever been. With the introduction of more low-code, no-code, and DevOps-created applications, speed of deployment matters most and the QA process isn’t as robust. This creates its own higher risk of application failure for IT to deal with.