Sometimes missing data tells us a lot more about a question than the data that is presented. To understand this, let's start with an anecdote.
In World War II, the US assembled a group of statisticians to evaluate which parts of its fighter planes needed more armour. As planes came back from sorties, they would count how many bullet holes each part of the plane (engine, fuselage, etc.) had taken.
They found that, on the planes that returned, the engine area had 1.11 bullet holes per square foot, while the fuselage had 1.73 bullet holes per square foot. The common wisdom was to add more armour where the planes were getting hit more.
That's when a brilliant statistician, Abraham Wald, came up with the insight: the extra armour does not go where there are more holes (the fuselage), but where there are fewer holes (the engine). The reasoning was simple: planes were able to survive hits on the fuselage and were returning. The planes that were hit in the engine were not returning at all. The missing planes, and the missing holes, were the key to solving the problem.
So, what is the missing data in the mutual fund star rating show?
Say you are the CEO of one of the rating agencies and your chief statistician or chief information officer comes to you and says, "Hey boss, we have conclusive data that our five-star rated funds outperform our four-star rated funds by 3 percent annually over a five-year holding period".
What would you do? You would, of course, publicise this as much as you could, make the ratings very expensive to buy and earn a lot of money from them. Or maybe open your own hedge fund and invest based on your rating methodology.
All the rating agencies employ a lot of smart statisticians and investment professionals, and if they had the above data they would be publishing it, updating it and talking about it every quarter. It's not that they are not looking for this data; it's just that what they have does not show any outperformance by higher-rated funds. This is the missing data in the mutual fund star rating show. If your ratings are so good, where is the data that five-star funds outperform four-star funds?
So what do we have instead?
Well, Morningstar says: "We have always been very clear that it’s not intended to predict future performance."
Similarly, Value Research Online says: "The assessment does not reflect Value Research's opinion of the future potential of any fund. It only gives a quick summary of how a fund has performed historically relative to its peers."
Both Morningstar and Value Research Online also disclose that star ratings are completely mathematical with no subjective inputs, and represent a fund's risk-adjusted return over the past three years and five years with the best performers receiving five stars.
Hmm... so another fancy way to talk about past performance, as if three-year and five-year returns and the information and Sharpe ratios were not enough. Just like returns, then, maybe the ratings should also come with their own disclosure: "Ratings are subject to market risk and past ratings do not predict future returns."
The data that exists confirms that ratings are not predictive of future outperformance.
The Wall Street Journal, in an investigation titled "The Morningstar Mirage", concluded: "Investors everywhere think a five-star rating from Morningstar means a mutual fund will be a top performer — it doesn't."
A similar study by Advisor Perspectives finds that the probability that a randomly selected five-star fund will outperform a randomly selected four-star fund is only 50.6 percent. Essentially a coin flip.
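The near-coin-flip result is easy to sanity-check with a simulation. The sketch below uses purely hypothetical return distributions (not Advisor Perspectives' actual data or methodology) to show why a tiny edge in average returns translates into a pairwise win rate barely above 50 percent:

```python
# Hypothetical sketch: if five-star funds have only a marginally higher
# expected return than four-star funds, while individual fund returns are
# widely dispersed, a randomly picked five-star fund beats a randomly
# picked four-star fund only slightly more than half the time.
# All numbers below are illustrative assumptions, not study data.
import random

random.seed(42)

MEAN_5STAR = 0.082   # assumed mean annual return of five-star funds
MEAN_4STAR = 0.080   # assumed mean annual return of four-star funds
STDEV = 0.06         # assumed dispersion of individual fund returns
TRIALS = 100_000

wins = 0
for _ in range(TRIALS):
    five = random.gauss(MEAN_5STAR, STDEV)  # one five-star fund's return
    four = random.gauss(MEAN_4STAR, STDEV)  # one four-star fund's return
    if five > four:
        wins += 1

print(f"P(random five-star beats random four-star) ~= {wins / TRIALS:.3f}")
```

With a 0.2 percentage-point edge and 6 percent dispersion, the simulated win rate lands near 51 percent: statistically distinguishable from fair, practically a coin flip.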
If ratings do not predict future returns, and rating agencies universally disclaim that ratings are not predictive, then how did they come to be so important in the mutual fund ecosystem?
The answer lies in human psychology. No one ever fired an advisor or a broker for recommending a five-star fund. The ratings are thus both a safety net for the advisor and a shiny badge to sell to the investor. If a five-star fund underperforms, well, no one saw that coming. But if you picked a lower-rated fund that underperformed, then you are not a good advisor.
From the same Wall Street Journal piece: “Advisers get in trouble when they go against the grain. You isolate yourself more if you sell something else rather than just go with what research recommends.”
Since most advisors go with the grain and recommend five-star funds, these funds see huge inflows, giving the asset management companies (AMCs) behind them reason to unofficially bless the ratings in their own investor materials. And thus the myth of the mutual fund star rating continues.
In part 2: the missing data on advisor outperformance.

Gaurav Rastogi is the CEO of Kuvera.in, a free direct mutual fund investing platform. Gaurav managed a pan-Asia quantitative portfolio for Morgan Stanley before he started Kuvera.