In the financial services industry, almost every major company that issues publicly traded debt requests a credit rating to signal to investors that its debt is high-quality and unlikely to default. However, the system works such that the companies PAY the rating agencies to rate their debt. Given that companies issue debt frequently and there are multiple rating agencies, each agency is incentivized to give a high rating in order to attract the company back in the future, regardless of the actual creditworthiness of the debt. Given this misaligned system of incentives, I have become curious about the objectivity of rating systems in other industries, particularly the wine industry. Almost every time I walk into a wine store, all the bottles have ratings in the mid-80s or above, even for low-cost, mass-produced wine.
How can it be that on a scale of 0-100, almost all the wines on the shelf are seemingly 85+? Such a high skew would seem to indicate a fundamental flaw in the rating system - it seems unlikely that there are lots of wines rated in the 40s and 50s sitting somewhere unsold or being thrown away. Several articles that compared taste-test results with ratings showed significant inconsistencies in how wines were scored and indicated that "context and expectation influence the perception of taste." I was struck (but not surprised) when reading in the case on DBR that "top management and estate staff remained close to wine magazines, spent time with the media and wine critics and journalists who liked to visit DBR properties." It seems likely that there is an extensive system of lobbying and influence in the wine industry that, while maybe not as overt as in credit ratings, uses the power of brand and reputation to drive high scores from reputable critics. If critics are being invited to these vineyards and hosted by winemakers, psychology and human nature would tell us that they would almost certainly be influenced to give more favorable ratings that ensure they retain their access to "esteemed" vineyards like DBR.
To be clear, I do believe there are wine ratings that are done more objectively but would be very interested in further understanding the relationships between vineyards and critics and whether we can consider these ratings more than just relationship management and marketing.
Additional resources:
https://www.wsj.com/articles/SB10001424052748703683804574533840282653628
Thanks for posting this, Jordan!
Certainly there are perverse incentives around maintaining status as a critic - though theoretically the argument for objectivity would be similar to a market-dynamics argument. That is, if a critic inflates the scores of low-quality wines, their reputation would be tarnished as consumers stop trusting their scores, and they would lose their status as fewer people follow their recommendations. However, I do believe there is a missing link in that argument: consumer knowledge. If consumers aren't informed enough to hold critics accountable and really just use their ratings as a guide, then it would be expected that this feedback loop would fail.
Interestingly, an increase of one point in Robert Parker's Wine Advocate rating "generates ... $3.00 of revenue per bottle, according to one analysis, while a difference of ten points can mean millions of euros for a large-scale producer, and a perfect score of 100 can support a three- or fourfold price increase" (Harvard Business Review). It is notable how little customer feedback is involved in the winemaking process - it's all experts and critics working with each other to help influence the taste of consumers. Hence the reliance of these wineries on creating an impression of authenticity and tying, in consumers' minds, that authenticity to quality. If consumers aren't being educated on more objective means of discriminating between wines they may prefer, then this whole feedback mechanism fails. The New Yorker suggests it may be a good step to change the vocabulary we use in discussing wine, shifting toward chemical compounds and measurable features that correspond to tastes a consumer may prefer. However, this in and of itself does not change the role of the critic.
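To put those quoted figures in perspective, here's a rough back-of-the-envelope sketch. The $3.00-per-point figure comes from the HBR quote above; the bottle volume is a hypothetical assumption purely for illustration, not data from any real producer:

```python
# Rough illustration of how much a rating bump can move revenue.
# REVENUE_PER_POINT is the figure quoted from HBR above; the
# 500,000-bottle volume below is a hypothetical assumption.

REVENUE_PER_POINT = 3.00  # USD of extra revenue per bottle per rating point

def extra_revenue(point_gain: float, bottles_sold: int) -> float:
    """Incremental revenue from a rating increase across a production run."""
    return REVENUE_PER_POINT * point_gain * bottles_sold

# A hypothetical large-scale producer selling 500,000 bottles:
print(extra_revenue(1, 500_000))   # 1500000.0  -- a single point
print(extra_revenue(10, 500_000))  # 15000000.0 -- "ten points can mean millions"
```

Even at a modest hypothetical volume, a one-point swing is seven figures, which makes the incentive to court critics easy to understand.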
In addition to this key failure in the critic feedback loop, there are similar questions about the objectivity and uniformity of the rating systems themselves. While most critics use a 20- or 100-point system, upon further investigation it turns out that some of the biggest critics out there (Robert Parker, Wine Spectator) basically never assign ratings below 50. So there is an inherent skew that can be misleading.
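If scores effectively never fall below 50, the nominal 100-point scale is really a 50-point scale, and it's simple arithmetic to see where a score actually sits. A small sketch, assuming a hard floor of 50 (consistent with the published Parker/Wine Spectator scales linked below):

```python
# If scores only ever land in [50, 100], the nominal 100-point scale
# compresses into a 50-point one. Rescaling shows where a given score
# really sits. The floor of 50 is an assumption based on the published
# scales of the major critics; the function itself is just arithmetic.

FLOOR = 50  # lowest score the major critics effectively hand out

def effective_percent(score: float, floor: float = FLOOR) -> float:
    """Map a nominal score onto the 0-100 range actually in use."""
    return (score - floor) / (100 - floor) * 100

print(effective_percent(85))  # 70.0 -- an "85" is only 70% up the real scale
print(effective_percent(75))  # 50.0 -- a "75" is merely the midpoint
```

Seen this way, the shelf full of mid-80s wines mentioned above looks less like universal excellence and more like a scale whose bottom half has quietly been discarded.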
Now all that being said, I'm not sure that objectivity is the goal of wine criticism. Similar to a movie critic or a restaurant reviewer, I personally do not expect to find a perfectly objective rating from anyone - especially because taste itself is so subjective. I view the role of a critic as helping laypeople to learn how to articulate or appreciate the features of a wine that they enjoy or dislike, to find the commonalities between the wines that consumers enjoy so they can be more informed when making selections, and to act as a guide for those who have similar taste to them in searching through an oversupplied market with seemingly infinite choice. With that view, objectivity in ratings shouldn't be the goal, but then the perverse incentives you mentioned become a risk.
Maybe it's time to rethink what wine criticism is supposed to be! Maybe the flowery language does a disservice or introduces bias into how we taste? Do we need a Rotten Tomatoes for wine? Who knows.
https://www.winespectator.com/articles/scoring-scale
https://www.vivino.com/wine-news/wine-experts/robert-parker
https://hbr.org/2019/03/what-the-u-s-wine-industry-understands-about-connecting-with-customers
https://www.newyorker.com/culture/culture-desk/is-there-a-better-way-to-talk-about-wine
https://www.nytimes.com/2019/06/17/dining/drinks/wine-ratings-criticism.html