NEW YORK - This week I went to the office for the second time in seven months and met up with two colleagues for a meal. Along with slightly more interesting things, we discussed the most difficult elements of doing our jobs during lockdown.
One of these, we agreed, was communicating with colleagues in a nuanced way. We noted that it is easier to give and receive feedback in person than via email or Zoom. And that loss of tone and nuance matters.
It was a point also made by Merrill Lynch’s top gatekeeper, Anna Snider, at our recent due diligence conference, when discussing the pros and cons of manager meetings via Zoom. As we covered here, Snider argued that insights about PM team dynamics were lost when interviewing via video rather than in person.
Human or machine?
It is a compelling argument for humans, rather than computers, to do due diligence on funds – and one that now has academic research to back it up.
A new paper, ‘What Should Investors Care About? Mutual Fund Ratings by Analysts vs. Machine Learning Technique’, looks at the difference between Morningstar’s analyst ratings (produced by human analysts) and its new-ish quantitative ratings, which seek to emulate the analyst ratings but are produced by a machine-learning model. Both, the paper says, are forward-looking ratings, as opposed to the firm’s better-known, backward-looking, and occasionally maligned star ratings.
The paper ultimately finds that ‘the analyst rating identifies outperforming funds, while the quantitative rating fails to do so.’
While the researchers – Si Cheng, Ruichang Lu, and Xiaojun Zhang – argue that this is ‘mostly due to the selection of analyst coverage’ (the algorithm covers those funds the human analysts choose not to), they also identify skill within the ratings given. They write:
We find that the analyst rating successfully identifies outperforming funds in the univariate portfolio sort. Gold-rated funds and recommended funds (i.e., rated as Bronze or above) outperform the benchmark by 0.91% and 0.53% per year, respectively, while the non-recommended funds do not outperform the benchmark. To put it into perspective, Gold-rated funds outperform traditional 5-Star funds by 100% on a style-adjusted basis.
This was not the case for those funds rated by the machine.
Gold-rated funds based on the quantitative rating deliver an insignificant style-adjusted return of 0.41% per year, while Gold-rated funds based on the analyst rating continue to outperform the benchmark by 1.46% per year during the same period.
The analyst ratings have another edge: nuance. The paper argues that the tone of the reports accompanying Morningstar’s analyst ratings can also help investors pick funds with better future returns – a signal that is stronger when the tone of the report is at odds with the rating (for example, a negative-sounding note paired with a Gold rating, or a glowing note paired with a Negative rating).
The authors write:
We find that the tone in the analyst report provides incremental soft information that augments the known fund characteristics and managerial skill proxies, suggesting that mutual fund analysts play an important role in acquiring and processing information as well as facilitating more efficient capital allocation across mutual funds. The improved information environment could reduce the search cost in the mutual fund industry and, as a result, lead to a more efficient asset management market and financial market.
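The divergence idea can be sketched in a few lines of code: score a report’s tone, then flag funds where the tone points the opposite way from the headline rating. To be clear, everything below – the word lists, the rating scores, the threshold – is an illustrative assumption of mine, not the paper’s actual method (the authors apply textual-analysis techniques to real Morningstar reports).

```python
# Illustrative sketch of a tone-vs-rating divergence flag.
# Word lists and rating scores are hypothetical stand-ins,
# not the dictionaries or methodology used in the paper.

POSITIVE = {"strong", "disciplined", "consistent", "skilled", "outperform"}
NEGATIVE = {"concern", "turnover", "underperform", "risky", "departure"}

# Morningstar's analyst-rating scale, mapped to a crude sign/weight.
RATING_SCORE = {"Gold": 2, "Silver": 1, "Bronze": 0.5, "Neutral": 0, "Negative": -1}


def tone_score(report_text: str) -> float:
    """Net tone: (positive word count - negative word count) / total words."""
    words = [w.strip(".,;:!?") for w in report_text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)


def diverges(rating: str, report_text: str, threshold: float = 0.0) -> bool:
    """Flag when the report's tone runs opposite to the headline rating."""
    tone = tone_score(report_text)
    rating_sign = RATING_SCORE[rating]
    return (rating_sign > 0 and tone < threshold) or (
        rating_sign < 0 and tone > threshold
    )


# A gloomy note attached to a Gold rating trips the flag.
print(diverges("Gold", "Manager departure is a concern; turnover remains risky."))  # True
```

The design choice worth noting is that the flag fires only on disagreement between the two signals; per the paper’s finding, it is precisely this mismatch between soft information (tone) and the hard rating that carries incremental predictive content.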
Amid a sea of papers suggesting investment consultants add no value, and a host of predictions that all analytical jobs will be done by computers, it is a reassuring conclusion for any professional buyer.
There is some irony in the fact that one area where artificial intelligence is gaining ground is in scanning reports for tone and sentiment (something the authors of the aforementioned paper used to analyse the Morningstar notes), and in writing in increasingly human ways. But even then, for reports to contain insight and foresight, such information will have to be gleaned from managers – and this is where humans, with their noses for nuance, have an irreplaceable role (well, for now at least).
You can read the paper in full here.
- Machine Learning Is Cheaper — But Worse — Than Humans at Fund Analysis (II)
- How We Use Machine Learning to Help Investors (Morningstar)
- Selecting Mutual Funds Using Machine Learning Classifiers (Cyril Vanderhaeghen)
That’s it for this week. As ever, please send any links, thoughts, feedback and general miscellany to me at firstname.lastname@example.org.