Does Pensions Administration Require Regulatory Oversight?
I read with interest that TPR are seeking to extend their voluntary supervision regime to the “top 75 pension administrators”, with “top” here meaning the largest and most strategically important to the industry as a whole. The announcement by David Fairs included many sensible soundbites about administration historically being a “Cinderella issue” but also being “critical in securing good outcomes for savers”. Absolutely!
So, whilst my immediate reaction of “about bloody time” probably wasn’t the most constructive response, it did get me thinking about why administration is now coming under closer scrutiny and how we can go about constructively evaluating administrators.
There is no doubt that in my 30+ years in the industry, administration has tended to be the poor relation around Trustee tables. Whilst many hours and much money were typically spent looking at funding assumptions and investment strategy, the administrator might get a 30-minute slot at the end of the day to present a ‘stewardship’ report, where the key focus would be SLA performance.
This perceived lack of importance has historically had a detrimental effect on investment in administration and, to some degree, those chickens are now coming home to roost. Just look at the amount of money and person-hours now being spent across the industry on sorting out legacy data issues, for example.
However, a welcome and positive change in view has come about in the last few years as administration is increasingly seen as an ‘enabler’ for so many strategic aspects of good scheme management. Whether it’s member engagement, data projects, de-risking or a final buy-out, having a good administrator in place who can add real value and insight to the Trustee objectives is now seen as vital. Administrators are also increasingly seen as risk managers, particularly with regard to issues such as pension scams and cyber security.
So how do we know what “good” really means and what can we put in place to evaluate administrators on an ongoing basis?
The challenge here is that not all administrators are alike. In fact, whilst at a superficial level one would assume that it’s “just admin” and so it must all be delivered in a pretty similar way, the reality is that there are myriad differences between administrators and the way that they do their jobs.
As an example, how do you compare an administration function that is ‘under one roof’ where all the staff are expected to undertake every aspect of member calculations and customer support, against an administration function split over multiple ‘specialist’ locations, some of which may be overseas? The type of staff required and their training requirements will be just two major differences. The transition path to each administrator is also likely to be different.
This is a simple example which serves to illustrate the reality that every administrator has their own way of ‘delivering the job’, and that is quite often a function of their own scale, scheme/client portfolios and broader business culture. One approach is not necessarily better than another and so to compare them directly in this way does not really achieve much. Moreover, the closer an administrator can stick to their own ‘standard model’ across their entire client base, the lower the risk of things going wrong tends to be.
Perhaps therefore, we should look at the output from administrators rather than how they organise themselves to deliver the service.
By output, I am not suggesting that we fall back on the typical blunt measure of SLA performance. Measuring speed of delivery is no proxy for quality, accuracy or member satisfaction and, as a general observation, most forward-thinking administrators and Trustee Boards have already moved towards more three-dimensional objective setting and measurement criteria in any case.
Most administrators now track key operational measures on a daily basis. Data items such as case volumes, work inventory, end-to-end process times, capacity forecasting, member feedback, re-work statistics, etc. are generally provided straight from workflow tools and core technology applications. Certainly, for the larger 75 or so administrators that TPR are initially bringing into scope, we would expect most of this information to be readily producible at a macro level on a regular basis.
Using this kind of information for trend analysis can provide powerful insight into how an administrator is doing. For example, are work inventories increasing or decreasing? How much re-work is happening? Are case volumes fluctuating and how is this impacting overall performance? What is member feedback telling us?
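To make this more concrete, here is a minimal sketch (in Python, using pandas) of the sort of trend analysis I have in mind. The metric names and figures are entirely hypothetical and are my own illustration rather than anything an administrator or TPR actually produces.

```python
import pandas as pd

# Hypothetical monthly figures an administrator might extract from its
# workflow tooling (illustrative numbers only, not real data).
metrics = pd.DataFrame(
    {
        "month": pd.period_range("2023-01", periods=6, freq="M"),
        "cases_received": [1200, 1150, 1300, 1250, 1400, 1350],
        "work_inventory": [480, 510, 560, 540, 620, 700],
        "rework_cases": [36, 40, 52, 50, 63, 74],
    }
).set_index("month")

# Re-work rate as a proportion of cases received in the month.
metrics["rework_rate"] = metrics["rework_cases"] / metrics["cases_received"]

# Month-on-month change and a 3-month rolling average to smooth out
# noise and show the underlying direction of the backlog.
metrics["inventory_change"] = metrics["work_inventory"].diff()
metrics["inventory_3m_avg"] = metrics["work_inventory"].rolling(3).mean()

print(metrics[["work_inventory", "inventory_change", "inventory_3m_avg", "rework_rate"]])
```

The point is not the tooling; it is that a Trustee Board looking at a rising inventory alongside a rising re-work rate is asking a far better question than one looking only at last month’s SLA score.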
When this kind of performance data is amalgamated with other sources, a more holistic picture of the overall service delivery emerges. For example, DR testing results, CRM effectiveness, data quality reporting, etc. can all be added to the ‘basket’ of monthly, quarterly and annual measures in order to gauge progress. This can then be built into review and planning discussions so that there is an ongoing cycle of continuous improvement.
There are two other important principles in my mind. The first is that trend analysis at a macro level is more useful than direct comparison between administrators at a point in time. A league table as such would be counterproductive in many regards and, because of the underlying variability in operational design (as mentioned earlier), would most likely be meaningless in any case.
Secondly, all this measurement is in danger of missing the key requirement of accuracy. How do we know that, at the point an administrator settles a benefit, it is entirely consistent with the scheme rules? What checks can (and should) an administrator or other scheme advisors undertake on a regular basis to ensure complete accuracy of benefit calculations and payments? This is equally applicable to DC and DB arrangements and so is another area that should be considered as part of the emerging oversight regime, as otherwise we run the risk of creating a framework for review that fails in the most fundamental of ways.