Top-down benchmarking

A note by Geoff Horton and Brian Tilley

  1. Summary
Ofgem has produced an initial piece of work on benchmarking, set out in its April letter to distribution network operators (DNOs). This is based on regressing normalised controllable costs and total fault costs against a composite scale variable (CSV).

This paper sets out a number of concerns with the methodology employed and argues that the results in the April letter would not stand up to a rigorous independent assessment. In particular, the regression is likely to be mis-specified in its treatment of mergers and capital costs. When these are specified correctly, or when comparisons are made between the nine groups of distribution companies operating in 2002-03, different results are obtained.

    The treatment of mergers is particularly unfortunate since, by assessing costs on the basis mainly of those of merged companies, it denies Central Networks the opportunity to retain the cost savings for which it paid a £32m "merger tax" in good faith.

The impact of the mis-specifications is an indication of the uncertainty of the benchmarking process. There are, however, many other uncertainties that need to be factored into any conclusions drawn from the results. The raw data have been adjusted in an attempt to make them more comparable and, while this objective is reasonable, the adjustment will introduce measurement error. The functional form of the regression has been arbitrarily chosen to be linear, which may or may not reflect the true relationship between inputs and outputs. Explanatory variables have been omitted; we do not accept that the variables in the CSV are the only principal cost drivers. Regression is inherently subject to error. Deviations of a company's costs from a regression line are at least as likely to be the result of errors in the equation and other omitted variables as of an omitted efficiency term. It is therefore doubtful whether an independent assessment would conclude that there is sufficient evidence to attribute inefficiency to any company.

As regards Central Networks, applying confidence intervals to the regression to reflect this uncertainty shows it to be "efficient". This is reinforced when total costs, rather than opex alone, are taken into account. Even in data envelopment analysis (DEA), where stochastic errors are assumed to be absent, a specification that includes the three output targets separately also shows Central Networks to be efficient when merger adjustments are made, either by adjusting the costs of DNOs that have merged or by running the models on the basis of 9 groups of companies.

    Even if inefficiency can be demonstrated in any company, future allowances should be set on the basis of average costs and their likely scope for reduction. The calculations used for setting the cost of capital are based on the appropriate return to the average company and, given the uncertainties of this benchmarking technique and the fact that the definition of a frontier relies on outlying observations that are most likely to contain errors, the uncertain frontier is not a good guide to the scope for reduction. This method also has better incentive properties for outlying companies.

    If there is no convincing evidence showing companies to be inefficient, future allowances must be set in relation to actual reported costs.

2. Ofgem's work

2.1 Summary of appendix 3

Ofgem has published an update on normalised operating costs and total fault costs for all DNOs, having made adjustments for a number of factors it considers make the costs more comparable across the industry.

Top-down benchmarking has then been performed using these normalised costs. Regression results were published in appendix 3 of an April letter to DNOs based on a composite scale variable (CSV) cost driver. The CSV attributes weights, summing to 1, to network length, customer numbers and units distributed. Figures 1 and 2 show the OLS regression and "corrected regression" using weights of 50% network length, 25% customer numbers and 25% units distributed (Figure 1) and 25% network length, 25% customer numbers and 50% units distributed (Figure 2).
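For concreteness, the relationship being estimated can be written out. This is a sketch of the linear form implied by the April letter; the exact normalisation Ofgem applies to each driver before weighting is not reproduced here and the notation is ours.

\[ CSV_i = w_L L_i + w_C C_i + w_U U_i, \qquad w_L + w_C + w_U = 1 \]

\[ \text{Cost}_i = \alpha + \beta \, CSV_i + \varepsilon_i \]

where L_i, C_i and U_i are (normalised) network length, customer numbers and units distributed for DNO i, and Cost_i is its normalised controllable costs plus total fault costs. Figure 1 corresponds to weights (0.5, 0.25, 0.25) and Figure 2 to weights (0.25, 0.25, 0.5).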

 

Figure 1 OLS regression using 50% weighting for network length

  

Figure 2 OLS regression using 50% weighting for units distributed

These purport to show that, for most DNOs, changing the weights does not affect the distance from the efficient frontier. However, the relationship is probably mis-specified, and the analysis also fails to take proper account of the uncertainties of functional form and other omitted variables, the errors implicit in any regression, and the particular uncertainties attaching to outlier observations.

3. Mis-specification

Ofgem’s regressions present relationships between a measure of output and standard controllable costs but the form of relationship investigated is limited in that it is a simple linear form and omits other explanatory factors. At best this makes judgements derived from the relationships uncertain and, at worst, the equations may be mis-specified and so positively misleading.

This section considers two aspects of the specification: the treatment of mergers and merged companies, and the use of total costs rather than operating costs alone.

It shows that the outcome is significantly affected if one considers total costs or uses data from the eight merged companies (both aspects that Ofgem has said it is presently considering), or if one adjusts the data for merger cost savings.

However, there are many other omitted variables (e.g. wage rates, customer mix, sparsity, voltage levels) and other possible functional forms that are not considered here.

3.1 Mergers and merged companies
3.1.1 Previous understanding about the treatment of cost savings

At the last price control, a broad-brush adjustment was made to pass back to consumers the benefit of mergers that had already taken place. This was based on the assumption that "a sustained reduction of half the fixed costs, such as corporate costs, would arise as a result of distribution companies merging". The policy adopted at the time was to allow companies to "retain the benefit of merger savings during the five years following the merger" and thereafter reduce revenues by £12.5m p.a. (1997/98 prices).

The merger policy statement in May 2002 changed this position. From June 2002 any merger would be subject to a total revenue reduction of £32m (2001/02 prices) over five years, passed back to those customers affected by the transaction. This was based on the view that a merger reduces the number of comparators available to Ofgem and, by implication, may affect the rate of change of the "efficient" frontier.

When E.ON was negotiating with Aquila for the purchase of Midlands Electricity, discussions with Ofgem clearly indicated that merger savings would not be passed back to customers until the fifth anniversary of the transaction. On the basis of this understanding, E.ON agreed to pay the £32m merger tax. E.ON therefore sanctioned the merger in the belief that it would retain any subsequent merger savings from 16th January 2004 to 15th January 2009.

3.1.2 Ofgem policy statement March 2004

Ofgem admits that the experience of mergers between DNOs demonstrates that they are able to reduce certain costs, "in particular fixed costs, such as head office and corporate functions and sharing of best practice". However, Ofgem believes that these savings are also achievable through other corporate structures, such as EME being part of E.ON and Aquila Networks being owned by a larger US group.

This assessment seems improbable.

There are a number of functional areas where synergies could be achieved by merged DNOs in contrast to a single DNO that was a member of a wider group, for example:

A rigorous assessment of cost efficiency using 2002/03 data must recognise the disadvantage that non-merged entities will encounter. This can be addressed either by making merger adjustments or by assessing efficiency using nine groups of companies. If such an approach is not adopted, the analysis will discriminate against non-merged entities, which seems contrary to Ofgem's policy of not dictating corporate structures, and, by assessing costs mainly on the basis of those of merged companies, will deny Central Networks the opportunity to retain the cost savings for which it paid £32m.

3.1.3 Results of merger adjustments

Ofgem has examined the business plan questionnaires submitted by DNOs and has initially calculated a range of fixed costs of between £14m and £22m. The regression in figure 1 shows fixed costs imputed at £22m, equivalent to one-half of the fixed costs of a combined DNO group of two licence entities. However, although Seeboard had merged with EPN and LPN in June 2002, there would not have been sufficient time for the group to extract any merger benefits; Seeboard should therefore be treated separately from the remainder of the EDF group.

An £11m adjustment (one-half of the £22m of fixed costs imputed to each licensee) is therefore made to each of the following DNOs:

Figure 3 below shows the OLS regression and the corrected regression (frontier), which attributes all of the residual to inefficiency. By implication, the corrected regression ignores random variation, including measurement error, in the disturbance term.

Stochastic frontier analysis (SFA) explicitly tests for the presence of inefficiency and measures its contribution to the residuals relative to random error. The corrected regression represents the most extreme definition of the efficient frontier: it assumes that inefficiency is the only omitted variable and that there is no other error. Consequently it maximises the perceived inefficiency of a company, which is not a true representation of comparative performance. However, with a small sample, SFA is likely to conclude that there is zero inefficiency, the opposite extreme to the corrected regression, which assumes that a great degree of inefficiency is present.

This dilemma can be mitigated to some extent by applying confidence intervals to benchmarking. The figures below illustrate this by moving the frontier by the size of the limits at the stated confidence levels.
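The construction can be made explicit as follows. This is an illustrative reading of the corrected OLS (COLS) frontier and of the confidence-interval adjustment described above, not a statement of the exact procedure used for the figures.

\[ \text{OLS:} \quad \text{Cost}_i = \hat{\alpha} + \hat{\beta}\, CSV_i + e_i \]

\[ \text{COLS frontier:} \quad \text{Cost}^F(CSV) = \hat{\alpha} + \min_j e_j + \hat{\beta}\, CSV \]

\[ \text{Confidence-adjusted frontier:} \quad \text{Cost}^F_c(CSV) = \hat{\alpha} + \min_j e_j + t_c\, s_e + \hat{\beta}\, CSV \]

where e_i are the OLS residuals, s_e is the standard error of the regression and t_c the quantile corresponding to confidence level c. The COLS frontier shifts the regression line down by the largest negative residual; the confidence-interval adjustment moves it back towards the regression line by t_c s_e, which is why, at a modest 50% confidence level, the frontier in figure 3 sits almost on the regression line.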

Figure 3 shows that, even if only a modest 50% confidence level is chosen, the frontier moves almost to the regression line and Central Networks' costs are deemed "efficient".

  

Figure 3 Applying 50% confidence intervals to the frontier using 50% weighting for network length and making a £22m merger adjustment

 

Figure 4 Applying 50% confidence intervals to the frontier using 50% weighting for network length and making a £14m merger adjustment

In figure 4 the sensitivity of the adjustment is tested by reducing the fixed costs that could be saved by two DNOs from £22m to £14m, the bottom end of the range published by Ofgem. This also shows that Central Networks is deemed efficient at the 50% level of confidence when a £14m adjustment is applied to the DNOs that had completed their mergers by 31 March 2003.

DEA results that provide further support to this view are given in section 3.2.3.

3.1.4 Results of 9 groups of companies

An alternative methodology for taking account of merger synergies for benchmarking purposes is to model the analysis on 9 groups of companies. This reflects the scale of each group when assessing efficiency and assumes that each group has one set of fixed costs. It also provides a counterweight to the impression that there are 14 genuinely independent data observations. Many of these are produced by separating the costs of merged companies and are not independent, given that the licensees will generally share one management team. Pretending that they are independent artificially boosts the equation statistics and exaggerates the explanatory power.

Figure 5 shows that when the frontier is based on a 50% probability that all variation from the regression is due to inefficiency, Central Networks is deemed "efficient" under a 9 company group regression. Central Networks is also efficient when a DEA frontier is constructed. This confirms the conclusions made in section 3.1.3 when a cost adjustment was made to reflect merger benefits.

Figure 5 Regression based on 9 groups of companies

3.2 Total costs

Basic economic theory suggests that any measure of partial productivity, such as operating costs per unit of output, will depend on the ratio of those costs to the other costs of production. Ofgem’s own consultants, Cambridge Economic Policy Associates (CEPA), pointed out the dangers of failing to consider this point in their paper for Ofgem of October 2003.

3.2.1 Difference from "total costs"

Ofgem has said that it will consider "regression analysis using total costs (opex and capex)" as a "sense check". However, this measure of expenditure does not accord with any normal economic definition of total costs. Total costs are operating costs plus capital costs, in the sense of the cost of the services of the capital stock. Capex, on the other hand, is expenditure on additions to the capital stock.

        There is no reason to believe that capex will be a reasonable proxy for the capital stock. If the stock of capital assets is relatively low, operating expenditure is likely to be relatively higher (because it is substituting for the services of new assets) and there is likely to be a greater need for capital expenditure to increase the capital stock. Thus, Ofgem’s suggested sense check might even increase the original mis-specification.

There is little to recommend a total cost measure defined as the sum of opex and capex other than that it can be measured fairly easily. CEPA argued that the difficulty of producing meaningful estimates of capital costs makes total cost benchmarking difficult to implement. The next section therefore considers such measurement.

3.2.2 Definitions

DNO regulated accounts contain estimates of the regulatory asset base and of depreciation. "Regulatory" capital costs can therefore be estimated as the sum of that depreciation and a cost of capital (e.g. 6.5%) times the asset base.

It is possible that the regulatory estimate may not be a good estimate of the economic value of the services of capital assets because:

        1. The 1990 asset value was based on the flotation price and companies were sold at different discounts to their CCA accounting asset values
        2. Regulatory depreciation has differed from economic depreciation

An alternative "CCA" measure can be constructed starting from 1990 CCA values, adding subsequent investment (all indexed to current prices using the RPI) and subtracting estimates of economic depreciation based on asset lives.
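The two measures of capital costs described above can be summarised as follows (a sketch using our own notation; the exact indexation and depreciation profiles are not reproduced here):

\[ \text{Regulatory capital cost}_t = D^{reg}_t + r \times RAV_t, \qquad \text{e.g. } r = 6.5\% \]

\[ K^{CCA}_t = K^{CCA}_{t-1} \times \frac{RPI_t}{RPI_{t-1}} + I_t - D^{econ}_t, \qquad K^{CCA}_{1990} = \text{1990 CCA value} \]

where D^{reg} is regulatory depreciation, RAV the regulatory asset base, I_t capital expenditure and D^{econ} economic depreciation based on asset lives. On this reading, "CCA" capital costs are D^{econ}_t + r K^{CCA}_t, and CCA totex is opex plus that amount.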

3.2.3 Results

3.2.3.1 OLS

Regressions of opex, regulated totex and CCA totex against the scale variable produce equations with adjusted R squareds of 0.73, 0.76 and 0.85 respectively. Figures 6 and 7 below show (for the first and the last) the actual costs, the regression line and the regression line moved by one standard error. While there are similarities between the graphs, the relative positions of many companies are significantly different. In particular, Southern, the frontier company in the opex relationship, now appears inefficient.

           

Figure 6 Opex regression          Figure 7 CCA totex regression

When the data are adjusted for mergers, the adjusted R squareds are 0.72, 0.74 and 0.84. In all four charts East Midlands and Midlands are the third and fourth observations from the right respectively. They are consistently at or below the line.

           

Figure 8 Opex regression (merger-adjusted)          Figure 9 CCA totex regression (merger-adjusted)

When capital costs are included as an explanatory variable in the opex equations, the coefficient has the expected negative sign although, unlike in regressions using previous estimates of normalised opex, it is insignificant.

3.2.3.2 DEA

Efficiency calculations using data envelopment analysis (DEA) also show changes in efficiency estimates when total costs are used. As is to be expected when using DEA with variable returns to scale, the outlying companies (Eastern and South Wales) are on the frontier in both cases, but the positions of other companies change.

The figures show actuals (points) and frontiers (lines) for both opex and totex CCA with and without merger adjustments.

 

Figure 10 Figure 11

These analyses assume a single output, the weighted CSV. However, including the outputs separately is arguably the correct approach because each has to be achieved separately by each DNO; achieving a different combination with the same weighted average would not satisfy DNOs' licence obligations. If other explanatory factors are included, or if the three outputs are included separately, more companies are on the frontier, including Central Networks East.
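For reference, the input-oriented, variable returns to scale DEA score underlying this kind of comparison can be written as a linear programme. The formulation below is the standard textbook one, shown with a single cost input and the three outputs; it is not necessarily identical to the model used to produce the figures.

\[
\begin{aligned}
\min_{\theta,\ \lambda}\quad & \theta \\
\text{subject to}\quad & \textstyle\sum_j \lambda_j x_j \le \theta\, x_0, \\
& \textstyle\sum_j \lambda_j y_{rj} \ge y_{r0}, \qquad r \in \{\text{units, customers, network length}\}, \\
& \textstyle\sum_j \lambda_j = 1, \qquad \lambda_j \ge 0,
\end{aligned}
\]

where x_j is the opex (or totex) of DNO j, y_{rj} its outputs and the subscript 0 denotes the DNO being assessed; the constraint on the sum of the \lambda_j imposes variable returns to scale and a DNO is efficient when \theta = 1. Requiring the peer combination to match all three outputs separately is more demanding than matching a single weighted CSV, which is why more companies appear on the frontier.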

Even using no merger adjustment, DEA with three outputs shows Central Networks to be efficient, with East on the frontier and West near it, on operating costs alone.

Figure 12 DEA results for opex using a £22m merger adjustment and 3 outputs

 

The position is similar when DEA is conducted on the unadjusted opex of the 9 groups of companies with three outputs.

 

Figure 13 DEA results on opex for 9 companies using 3 outputs (units, customers, length)

 

4. Averages and frontiers

The preceding analysis argues that Ofgem has specified its benchmarking comparisons incorrectly. It also demonstrates that there is considerable uncertainty in the comparisons and that results are affected by the method of comparison used. This section pursues this point and also discusses the nature of an efficiency frontier and its relationship with the cost forecasts to be used in calculating allowed revenue.

4.1 Uncertainty of estimates

There are many sources of uncertainty in top-down benchmarking comparisons.

The base data are themselves uncertain, vary from year to year, stem from different accounting practices and are, in most cases, the result of allocations and internal transfers between entities within larger companies.

Ofgem has made adjustments to the data to attempt to correct biases that may be produced by systematic differences in, say, accounting policy, but the adjustments will themselves involve uncertainty. In some cases the adjustment relates to a cost driver: rather than estimating a general equation using all cost drivers, Ofgem is claiming prior knowledge of the impact of some drivers and confining the regression to the remainder. In reality the adjustment is uncertain and, had a general regression been conducted, the uncertainty would have been indicated by the standard error on that driver's coefficient.
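The point can be put schematically (the notation is illustrative). Suppose z is a driver for which Ofgem adjusts the data before regressing:

\[ \text{Two-step approach:} \quad \text{Cost}_i - \bar{\gamma} z_i = \alpha + \beta\, CSV_i + \varepsilon_i, \qquad \bar{\gamma} \text{ imposed a priori} \]

\[ \text{General regression:} \quad \text{Cost}_i = \alpha + \beta\, CSV_i + \gamma z_i + \varepsilon_i \]

In the general regression the uncertainty about the driver's impact is visible in the standard error of \hat{\gamma}; in the two-step approach the same uncertainty is present but hidden, because \bar{\gamma} is treated as if it were known.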

The decision to make no adjustments in respect of mergers is itself one to which uncertainty attaches.

The choice of the form of benchmarking from the many possibilities available makes the comparison uncertain. The result would have been different had different choices been made in:

Finally, even if there had been no choices made in treatment of data and in the functional form, the results would be uncertain. In most regressions the uncertainty is revealed in the equation statistics, which are functions of the errors of the estimates, the difference between the equation estimate and the observed value. However, in this approach, that difference is taken as a measure of the impact of another explanatory variable, relative efficiency, which is not directly observable and so is not included in the regression.

There is no certain way of deciding the extent to which it is reasonable to assign the equation errors to efficiency rather than to genuine error but the appendix at the end of the note briefly discusses the issue in the context of Ofgem's analysis. There are no definite answers but, given the potential for error in the benchmarking process as a whole, it would seem to require striking results for there to be any confident conclusions on relative efficiency.

4.2 Projections of costs

If no relative efficiency differences can be demonstrated with any degree of confidence, cost projections should be based on the potential for general efficiency gains relative to each company's cost levels.

      If relative efficiency differences can be demonstrated, cost projections might be based on either the average cost for all companies implied by the benchmarking relationship or on the costs incurred by the most efficient companies. It is probably the former that should be the most influential.

4.2.1 The average firm earns the cost of capital

The calculations involved in determining the cost of capital are based on an average over all companies. The averagely efficient company should earn the cost of capital; more efficient firms should earn more; and less efficient firms less.

The argument is sometimes put forward that this is not the case for regulated utilities: because they are not subject to competitive forces, inefficient firms are not driven out of the industry, and only the more efficient firms are comparable with the economy as a whole and should earn the cost of capital.

          However, although there is little competition to retain customers, there is both capital market competition (in which less efficient firms are taken over) and comparative competition and so, in the absence of evidence that efficiency differences are larger in electricity distribution than in other industries, the argument is not persuasive.

          The projected costs to which a normal return should be added are those of an averagely efficient company.

4.2.2 Implications of frontier or quartile-based targets for average productivity growth

The costs of the present frontier might be helpful in projecting the costs of a normally efficient company because the difference between the two represents the scope for improvement. However this presupposes that:

In fact the position of the frontier will be particularly uncertain since it depends on the specification of the relationship and the data of the outlying company or companies. Moreover, in most industries there is a gap between frontier and other companies and best practice is not generally adopted within a short period.

Ofgem’s frontier and highest quartile criteria produce the following productivity growth rates in the baseline in scenario 2:

It seems harder to derive the third result from the other two than to debate it directly. The location of the frontier or upper quartile company, the rate of efficiency growth likely for such companies, and the feasible rate of catch-up are all uncertain. On the other hand, one can debate the likely rate of productivity growth in the sector on the basis of factors such as TFP growth, capital substitution and any likely remaining privatisation impact.

Furthermore, the incentive properties of using the average are preferable, as the work by Frontier Economics showed, since it in effect mimics the role of competitive markets. No individual DNO is discouraged from undertaking feasible cost reduction initiatives, because its future allowances are not materially affected by its own actions.

 

Appendix on stochastic frontier analysis and confidence limits

Some of the problems of the benchmarking techniques used to date by Ofgem could in theory be addressed by adopting a stochastic frontier approach. However this would require further assumptions and data, which there is insufficient time to produce in the present review. Therefore the assessment of the extent to which deviations from the regression line constitute random error or efficiency is somewhat judgmental. The chart below uses the confidence interval of the regression as an indication.
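Before turning to that chart, it is worth setting out the standard stochastic frontier specification, which decomposes the regression error into symmetric noise and a one-sided inefficiency term. The half-normal form below is the most common in the literature; it is a sketch of the approach rather than a model estimated here.

\[ \text{Cost}_i = \alpha + \beta\, CSV_i + v_i + u_i, \qquad v_i \sim N(0, \sigma_v^2), \qquad u_i \ge 0, \ \text{e.g. } u_i \sim \left| N(0, \sigma_u^2) \right| \]

With only 14 (or 9) observations the variance \sigma_u^2 of the one-sided term is very poorly determined, which is why SFA applied to a sample of this size tends to attribute little or none of the residual variation to inefficiency.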

Confidence intervals can be applied equally to the regression and to the corrected OLS frontier. However, for illustrative purposes they have been applied only to the frontier in figure 14. The 50% confidence line for the frontier places it almost on the regression line.

 

 

Figure 14 Applying confidence intervals to the frontier using 50% weighting for network length

Table 1 compares the COLS frontier with an approach of applying 50%, 75% and 95% confidence intervals to the frontier, and with an average cost benchmark, using the baseline scenario of 50% network length, 25% customer numbers and 25% units distributed. Even using Ofgem's flawed specification, a rigorous case can be made for arguing that Central Networks is efficient in delivering its distribution obligations.


                         Inefficiency (£m)
DNO           Costs £m   COLS     50% confidence   Average cost   75% confidence   95% confidence
Midlands        67.2     -17.0        -7.4             -4.3           -4.0              3.2
Northern        41.0      -5.0         4.7              7.7            8.2             15.5
Yorkshire       56.0      -8.1         1.4              4.6            4.9             12.0
Eastern         79.3     -10.2         0.8              2.5            4.7             13.0
London          57.2     -19.4        -9.7             -6.7           -6.2              1.0
Seeboard        66.3     -22.9       -13.4            -10.2           -9.9             -2.8
East Mids       66.0     -12.3        -2.6              0.4            0.9              8.2
Scot Power      63.4     -15.2        -5.7             -2.5           -2.2              4.9
Manweb          53.5     -16.3        -6.6             -3.6           -3.1              4.1
Hydro           35.4      -6.7         3.5              6.0            7.2             14.8
Southern        59.1       0.0        10.0             12.7           13.7             21.2
Norweb          74.2     -25.5       -15.9            -12.8          -12.5             -5.3
SWALEC          35.2      -5.2         4.9              7.5            8.5             16.1
Sweb            51.5     -14.1        -4.4             -1.3           -0.9              6.3

Table 1 Efficiency results using confidence limits