September 15, 2011
I’m aware that there are at least three strata of consumers who use wine reviews (and likely many more).
1) People who calibrate their palate to that of a critic so they can make very informed purchase decisions. These people are few and probably most closely aligned with Robert Parker or niche critics like Allen Meadows of Burghound or Charlie Olken of the Connoisseurs’ Guide to California Wine.
2) The broad swath of consumers who use scores, perhaps with some deference to the score-giver, to make retail purchase decisions. For these folks, all other things being equal and balanced against price, a 91 is better than an 88, so they go with the higher score on the shelf-talker.
3) Online armchair wine researchers, an emerging category of users. Searching for a wine presents a sort of blotter file, like the dreaded “permanent record” of school days gone by. These consumers use search to research wines, validate a hunch, sway indecision, and spur action, sometimes in conjunction with #2.
This is linked to, but separate from, a recent working study presented under the banner of the American Association of Wine Economists called “The Buyer’s Dilemma – Whose Rating Should a Wine Drinker Pay Attention to?” For a well-considered post on this topic, see Joe Roberts’ post at 1WineDude.
For my part, I’ve done very little wine reviewing on this site, preferring instead to make any specific wine the context for bigger ideas or points I want to make (no pun intended). However, as I’ve gotten into the groove with my Forbes.com column, which has a much broader audience, a wine-of-the-week column does have merit, and I’ve started reviewing wines with more regularity.
Doing so is fun, but the most that I hope for is to be part of the permanent record noted in item #3. I certainly don’t have visions of, or a desire for, anything more. Just the same, doing any sort of reviewing opens a can of worms, particularly in the case of the 2009 Red Car “Trolley” Sonoma Coast Pinot Noir, a wine I recently reviewed and gave four stars – which equates to a generalized “90–94” score. I don’t give precise numeric ratings, but if I had to, I would have given the Red Car a 92. I liked the wine: it was earthy, nuanced, layered, and balanced, and it required some thought to figure out – all hallmarks of a good wine.
So, consider me SHOCKED when I saw the Wine Spectator review for this very same wine and Jim Laube gave it an 81. I was less shocked, but slightly curious when I saw that Steve Heimoff at Wine Enthusiast gave it an 86 and Stephen Tanzer gave it an 88.
Can you imagine somebody searching online for the Red Car and seeing search results that present a disparate spread along the lines of Spectator’s 81, Heimoff’s 86, Stephen Tanzer’s 88, CellarTracker’s average score of 89 and a score under the Forbes masthead of 90-94?
It would be a real WTF moment, creating more confusion instead of the order the consumer is looking for.
This disparity in scores brings me to my point, which is the point of the Wine Economists’ working paper – whose score should you listen to? Well, Joe Roberts, rightfully so, says listen to your own palate. However, with the proliferation of existing and emerging wine reviewers out there, combined with an ever-burgeoning tsunami of information about wine online, that’s easier said than done. The real need is for meta-aggregation of scores, a sort of super wine review database.
Neil Monnens’ Wine BlueBook represents this on some level: his monthly newsletter aggregates three or more critics’ scores for each wine and assigns it a QPR rating. But this is just the tip of the iceberg compared to where information is going.
Methinks that if a stats wonk can assign a passer rating to NFL quarterbacks and Sagarin ratings to college football teams, there has to be a way to create a meta-rating database based on regression analysis – one that accounts for palate preferences across a wide diversity of reviewers – to create a super score for a wine that acts as the ultimate arbiter. And I won’t be surprised if, in the near future, this emerges.
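To make the idea concrete, here is a minimal sketch of one way such a meta-score could work: normalize each critic’s score against that critic’s own historical average and spread (a z-score), then average the normalized scores and map them back to a common 100-point scale. The Red Car scores come from this post; the historical means and standard deviations for each critic are invented purely for illustration, and a real system would estimate them (and much more) from a large database of reviews.

```python
from statistics import mean

# Each critic's historical mean score and standard deviation.
# These numbers are ASSUMED for illustration, not real data.
critic_history = {
    "Spectator": (88.0, 3.0),
    "Enthusiast": (89.0, 2.5),
    "Tanzer": (89.5, 2.0),
    "CellarTracker": (88.5, 2.8),
}

# Scores for the 2009 Red Car "Trolley" Pinot Noir, as cited in the post.
red_car_scores = {
    "Spectator": 81,
    "Enthusiast": 86,
    "Tanzer": 88,
    "CellarTracker": 89,
}

def meta_score(scores, history, target_mean=90.0, target_sd=2.5):
    """Convert each critic's score to a z-score on that critic's own
    scale, average the z-scores, and map back to a 100-point scale."""
    zs = [(scores[c] - history[c][0]) / history[c][1] for c in scores]
    return target_mean + target_sd * mean(zs)

print(round(meta_score(red_car_scores, critic_history), 1))
```

This is far simpler than a true regression over palate preferences, but it illustrates the core move: an 81 from a critic who rarely scores below 85 should pull the meta-score down harder than an 89 from a crowd-sourced average that hovers near 89 anyway.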
Ultimately, the ongoing debate about wine scores is for naught. The horse has already left the barn. A better conversation might be around shaping the future and the fact that the best answer to, “Whose Rating Should a Wine Drinker Pay Attention to?” might be, “Trust your palate,” but it might also be, “Tune your palate against the database.”