IT Benchmarking challenges – 6 Questions to consider
How do you develop a business relevant benchmark in a rapidly evolving environment? More importantly, have the key IT Benchmarking challenges been identified and adequately covered?
The adoption of industry ‘best practice’ makes good commercial and business sense, as it harvests the cumulative learning of others – specifically, what produced results and what should be avoided.
This approach works well in stable, comparatively slow-moving organisations and industries. However, the effects of technological disruption, innovation and the rapid rate of change can seriously undermine the value of perceived ‘best practice’ – especially if you are serious about innovating. The bottom line is that the assumption that ‘best practice’ is ‘best’ for your organisation needs to be tested.
Evidence-based decision making should (in theory at least) be a core element of effective strategic and operational decision-making processes. One aspect of this evidence lies in attempting to answer the question:
How does my organisation compare with others? Are we ‘better’ or ‘worse’ than others, and in which respects?
Welcome to the world of benchmarking.
IT Benchmarking challenges 101: Benchmarking vs. Balanced Scorecard
Occasionally benchmarking can be confused with the Balanced Scorecard, so I thought it important to differentiate the two:
- Balanced scorecard (BSC) is a widely used management framework for the measurement of organisational performance across a wide range of qualitative and quantitative measures.
- Benchmarking typically refers to the comparative measurement of specific aspects of your organisation (e.g. products, processes, functions, departments, etc.) against one or more external organisations. Multinational organisations often attempt to benchmark similar subsidiaries across the corporation, a practice known as internal benchmarking.
Just like the game of golf, the theory is often vastly different from the practice, and getting the context right will ultimately determine whether your IT benchmarking initiatives are of real value, a distraction or even a risk.
Let’s explore key aspects of benchmarking in more detail by considering the following questions.
Question 1: Do you trust the benchmarked data?
If you are going to base important IT budgetary, operational or strategic decisions on the gaps between your data and externally sourced benchmark data, then understanding the methodology used to derive the benchmark data is important. The degree of scrutiny over the methodology and provenance of the benchmark data should be proportional to your reliance on the data. Even if the methodology for calculating the benchmark is sound, if the integrity and source of the underlying data are not known with certainty, then the output could be, well, plain wrong!
If the benchmark data is going to result in a ‘top-down’ executive mandate for you to meet or exceed the benchmark, then understand what you’re dealing with, and be prepared to identify which specific benchmarks are relevant and which are not.
Question 2: Are you measuring this because everyone else is?
Just because a benchmark has been published by a reputable source does not automatically mean that it applies to your specific situation. IT vendors, consulting firms and IT industry analysts often use comparative metrics and benchmarking to win new business.
Assess benchmarks on their relevance to your organisation.
Question 3: What actions will you take based on the benchmark data?
How will the benchmarking exercise drive your decision-making processes, and subsequent actions? The assumption that ‘it’s worked there, therefore let’s apply it here’ may need to be tested. The rote application of a benchmark to drive hard business decisions should be subject to the appropriate degree of assessment.
Question 4: How perishable is the benchmark?
A benchmark should be treated as a ‘point in time’ assessment. Depending on the particular benchmark, the delay in generating it also needs to be considered in changing environments.
For example, if the underlying data is collected 6 months before the publication of an annual benchmark, and you are still using the data later in the year it could be up to 18 months out of date. Is that a problem?
The question is: in our rapidly changing technology landscape, what has changed since the benchmark assessment? Which benchmark participants have undergone M&A or restructuring, moved everything to the cloud, outsourced their IT, undertaken an aggressive international expansion, or changed their product lines, services or mix of business activities?
Question 5: Can you see the risk of comparing apples with oranges?
In IT benchmarking, comparability is key. Comparing ‘like with like’ with a high degree of confidence can be a formidable challenge for enterprise IT, as each organisation uses technology in a different way. Ask any IT vendor.
Understanding the comparative service levels, as well as the types of services delivered by the IT department, can have a profound impact on the benchmarking exercise. If you run a 24×7 service desk across multiple timezones with an SLA of a 5-minute response time, are the services and SLAs of the benchmarked organisations comparable?
Are your IT uptime SLAs much higher than those of others in the benchmark? (The relationship between SLAs and cost can be nonlinear – e.g. the cost base for ensuring 99.0% versus 99.999% uptime may be substantially different.)
Question 6: What’s in-scope?
If the IT department’s performance is being benchmarked, are the IT investments outside of the IT department (including Shadow IT) included in the benchmark? In many organisations, an increasing proportion of the overall cost of IT is being managed outside of the IT department. If your organisation mandates a fully centralised IT cost model and your benchmark does not, the comparison could be misleading.
The bottom line
The bottom line is that benchmarking, if used appropriately and for the correct business reasons, can be of considerable value. The pitfalls, however, can be significant for the ill-informed. The key is to be informed and to stress-test the underlying hypotheses and assumptions on your benchmarking journey.