There was no lack of rigour in the production of our latest paper on the costs and charges of fund managers, says Investment Association director of public policy, Dr Jonathan Lipkin.
Our recent paper ‘Investment Costs and Performance’ generated headlines in the UK press. Some commentators were surprised by and critical of the findings, but we stand firmly behind the results and respond to key criticisms in this blog.
The paper was above all an attempt to analyse fund sector performance in the context of all quantifiable charges and costs, and to present data on turnover levels. In other words, we wanted to shed empirical light on an area that is hotly contested. To do this, we partnered with an independent data provider – Fitz Partners – which has crunched through more fund accounts than any other provider we know.
This comprehensive dataset allowed us to analyse a period spanning from the latter part of 2011 to May 2015. We were very clear why this period was chosen and, when we have more data, we will do more. We were also clear that more work is needed to provide granular data, particularly on implicit transaction costs. Our new Disclosure Code will provide this, and the preparatory work is being overseen by an independent advisory board.
The findings are not universally positive on performance, but they do demonstrate widespread outperformance net of investment and all distribution and advice costs – in other words, the total cost of ownership. They also point to lower transaction costs and lower turnover rates than often imagined. This is simply a fact about recent delivery. We are not saying this was true historically, nor that it will be in future years.
Some question our sampling approach and consider the performance results implausible. We reject this completely. In fact, we looked much more widely than our sample, and those results are reported in the paper. They show that performance was even better than in our sample, with funds on average outperforming by 1.3 per cent per annum. Sector return figures calculated independently by Morningstar corroborate our results.
Other comments suggest there was a lack of statistical rigour. We reject this too. Doing something differently from the academic literature does not make the analysis less rigorous. Our statistical approach is robust and gives an accurate picture of the most representative retail investor experience in terms of costs and performance. Questions such as whether the constant (alpha) in a regression is statistically significant, or whether a three- or four-factor model is better, are separate issues.
The differences in benchmark returns do not arise because we used different benchmarks for active and passive funds, as has been suggested. We took the most commonly used primary prospectus benchmark in each sector. For UK All Companies funds, both active and passive, this was the FTSE All-Share index. For greater accuracy, fund and benchmark returns were calculated to match exactly the period over which costs were reported in the accounts, and each fund's accounts cover a different 12-month period, a point explained in some detail in the paper. That is why benchmark returns differ between sets of funds even where the same benchmark is applied.
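To illustrate why this matching produces different benchmark figures, consider the minimal sketch below. The monthly return numbers are invented purely for illustration and are not drawn from the paper or from FTSE data; the point is only that compounding the same index over different 12-month accounting windows yields different benchmark returns.

```python
# Illustrative sketch: compound the same benchmark's monthly returns over each
# fund's own 12-month accounting period. All figures are hypothetical.

def compound(returns):
    """Compound a sequence of periodic returns into a single period return."""
    total = 1.0
    for r in returns:
        total *= 1 + r
    return total - 1

# Hypothetical monthly benchmark returns covering 18 months
monthly = [0.02, -0.01, 0.015, 0.03, -0.02, 0.01,
           0.025, 0.00, -0.015, 0.02, 0.01, 0.03,
           -0.01, 0.02, 0.015, -0.005, 0.01, 0.02]

# Fund A's accounts cover months 1-12, Fund B's cover months 7-18:
# the same index, but different accounting periods, so different benchmark returns.
fund_a_benchmark = compound(monthly[0:12])
fund_b_benchmark = compound(monthly[6:18])
print(f"Fund A benchmark return: {fund_a_benchmark:.2%}")
print(f"Fund B benchmark return: {fund_b_benchmark:.2%}")
```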
Our approach to portfolio turnover rate (PTR) is also being questioned, with claims that we have dismissed the SEC methodology in the past. Not so. We have analysed PTR metrics in detail and, while there are always limitations, the SEC method can tell us something very intuitive about turnover. A figure of 40 per cent indicates that around 40 per cent of the portfolio has changed in a year. The fact that this involves both buying and selling is undeniable and is reflected in the transaction costs. Nothing is being hidden from the reader. But if you ask someone what they think a Ucits turnover figure of 80 per cent means, they are unlikely to say that around 40 per cent of the portfolio has changed.
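A minimal sketch of the arithmetic behind that 80 versus 40 per cent comparison follows. The formulas are deliberately simplified for illustration: the SEC-style rate is taken as the lesser of purchases or sales over average net assets, and the Ucits-style rate as both legs of trading less investor flows over average net assets. The figures are hypothetical and are not the paper's calculation.

```python
# Simplified, illustrative turnover arithmetic only; not the IA's methodology.

def sec_style_ptr(purchases, sales, avg_net_assets):
    """SEC-style rate: lesser of purchases or sales over average net assets."""
    return min(purchases, sales) / avg_net_assets

def ucits_style_ptr(purchases, sales, subscriptions, redemptions, avg_net_assets):
    """Ucits-style rate (simplified): both legs of trading, net of investor flows."""
    return (purchases + sales - subscriptions - redemptions) / avg_net_assets

# A fund that replaces roughly 40% of a £100m portfolio in a year,
# with no investor inflows or outflows:
purchases, sales, assets = 40e6, 40e6, 100e6
print(f"SEC-style rate:   {sec_style_ptr(purchases, sales, assets):.0%}")           # ~40%
print(f"Ucits-style rate: {ucits_style_ptr(purchases, sales, 0, 0, assets):.0%}")   # ~80%
```

On these simplified definitions, the same trading activity reads as 40 per cent under the SEC approach and 80 per cent under the Ucits approach, which is the intuition the paragraph above describes.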
Overall, our message here is simple: we have applied all of the rigour of an academic paper, can answer all of the challenges raised so far and are happy to respond to more. The paper is absolutely open about its intentions and its limitations. Readers are invited to read the paper itself, refer to our published fact-check list addressing other comments or engage with us directly.
We look forward to further discussion and challenge in the months and years ahead. And that discussion will take place with much more data available, based on work that the IA and others are doing to improve the disclosure landscape.