> After all the uproar over Brochet's "The Color of Odors", I did a tasting with a few friends where I chilled red and white wines down to the same temperature, and had them try the wines blindfolded.
> Maybe it was my just my lame friends
Your friends were fine; it was your test that was lame.
Why would you ever serve red wine chilled? That's not how it's consumed by most wine drinkers, so I'm not sure what you were trying to prove, other than trying to "tear down the culture a peg or two"—and by implication your friends who consider themselves wine enthusiasts.
Chilling a wine slows the volatilization of the aromatic compounds and dramatically changes the flavor. As a result it can taste thin or tasteless. No wonder they couldn't tell the difference.
It's great when you want to mask a cheap product, though. The same is true of beer. Think about the quality of a Coors Light and why their ad campaign is all about drinking it ice cold.
There was at least one similar, and slightly more scientific, test done on this. In that version, white wine was dyed red with food coloring. And nobody could tell.
That doesn't prove anything other than intentionally distorting the features that the taster receives causes incorrect classification results. If you hand a sommelier a glass of red wine and say that it's from South Africa but it's actually from Australia, they are going to be way off in their estimations. It's not because there is no discernible difference; it's because the priors are completely shifted when you give them the wrong information.
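To put rough numbers on that prior shift (everything below is invented for illustration - the taste cue, the candidate varieties, and all the probabilities - so it's only a sketch of what "shifting the prior" means, not a model of how a sommelier actually reasons):

    # Likelihood of tasting "big jammy fruit, firm tannin" given each variety (made up)
    likelihood = {"shiraz": 0.6, "pinotage": 0.3, "cabernet sauvignon": 0.1}

    # Priors set by what the taster was told about origin (also made up)
    prior_if_told_australia = {"shiraz": 0.70, "pinotage": 0.05, "cabernet sauvignon": 0.25}
    prior_if_told_south_africa = {"shiraz": 0.15, "pinotage": 0.60, "cabernet sauvignon": 0.25}

    def posterior(prior):
        unnorm = {v: prior[v] * likelihood[v] for v in prior}
        total = sum(unnorm.values())
        return {v: p / total for v, p in unnorm.items()}

    print(posterior(prior_if_told_australia))     # shiraz by a wide margin
    print(posterior(prior_if_told_south_africa))  # same taste cue, pinotage now leads

The taste evidence is identical in both cases; only the stated origin changes, and the most probable answer moves with it.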
In those studies, however, raters profess great confidence, and positively identify the hallmarks of red wine in white wine. Sure, it shifts their priors, but unlike a Bayesian model there's no obvious deviation between the prior and the professed posterior.
That may be the case, but if you have ever trained a classifier you know that a model usually shouldn't be expected to generalize to data unlike anything it has seen before. It doesn't mean that there is no discernible difference between the two classes, just that the model has overfit to a narrow subset of the data, and things that were assumed to be immutable (e.g. the color of the wine) were not actually immutable.
That's exactly it, though. What studies like this indicate is that the discernible differences in wine are generally overwhelmed by such distorting features. And those features don't have much to do with our tastebuds or with features of the wine itself.
(And this also extends to food & drink as a whole, to some degree)
Romantic vineyard lore... signaling one's status by drinking the expensive stuff... signaling one's refinement by having tastes that align with the experts... these are also all distorting features that overwhelm discernible differences in wine.
How well a wine tastes to most people tends to be fixed on those kinds of things rather than on the actual wine.
That doesn't prove anything other than making it popular to admire the Emperor's clothes causes incorrect classification results. If you tell a population that only the best people can see how great the Emperor's clothes are, they're going to be way off in noticing that he's naked.
Yeah, that's the point. In any real, coherent domain, you can tell when someone's lying because your model is coherent and based on something real. When your model is 99% expectation, it can be easily fooled. That doesn't mean your model was good; it means your domain is a fraud.
I think some people take a different lesson from the parable of the Emperor's New Clothes.
Think of it this way. Our senses of taste and smell are very weak feature extractors. The way that trained sommeliers have been taught to distinguish wine given the presence of these weak features is somewhat like a deep decision tree. They ask a series of questions about what they have sensed from the wine that allow them to narrow in on a small set of possibilities, winnowing out wrong answers at each step. They have trained for hundreds or thousands of hours, tasting wine and building up this decision tree through many observations.
If you know how deep decision trees work, you know that they overfit very strongly to the training data. Notably, if you intentionally distort the data that you provide as input, by adding dye to turn white wine red, or by saying that the wine is from South Africa but it's actually from Australia, you cause their decision tree process to go down the entirely wrong path and produce a completely incorrect result. The model does not generalize well.
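A toy sketch of that failure mode (synthetic data and a scikit-learn tree, purely for illustration - not a claim that tasters literally fit decision trees): if color always agrees with the grape during training, the tree keys on color alone, and a dyed white goes straight down the red branch.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 200
    is_red = rng.integers(0, 2, n)                     # 1 = red grape, 0 = white grape
    color = is_red.astype(float)                       # in training, color always matches the grape
    tannin = 0.3 * is_red + rng.normal(0.3, 0.15, n)   # reds tend to be more tannic, with overlap
    acidity = 0.3 * (1 - is_red) + rng.normal(0.3, 0.15, n)

    X = np.column_stack([color, tannin, acidity])
    tree = DecisionTreeClassifier().fit(X, is_red)

    # A wine that is white on the palate (low tannin, higher acidity) but dyed red:
    dyed_white = [[1.0, 0.25, 0.65]]
    print(tree.predict(dyed_white))  # almost certainly [1]: the color split wins

Because color perfectly separates the classes in the training set, the first (and only) split the tree needs is on color, so the palate features never get consulted - which is the "distorted input sends you down the wrong path" point in miniature.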
This is not at all how we would train a wine classifier if we were to design one from scratch, but our brains are not really capable of emulating a logistic regression model or an artificial neural network (ironically). They function much better with a discrete process like a decision tree.
I hope that helps you understand. I know it's much more satisfying to be contrarian and rail against the presumptuous wine snobs who claim to see the Emperor's clothes, but then again it's always easier to be cynical about anything you don't appreciate or understand.
The question we want our model to answer matters. Nobody cares whether a superhuman tasting machine can distinguish a red from a white despite food dyes or other trickery. We care whether WE - human beings - can. We care whether we can predict if a wine will taste good to us. We care if we can predict whether a wine is high or low quality. We care if we fail to recognize a bad wine.
We like to believe we can do these things via our tastebuds. We can't. 90% of so-called experts can't either, reliably.
We CAN predict how people (along with those unreliable experts) will describe and rate wine - based on qualities like color, lore, price, etc. - but not based on taste.
The model for predicting wine quality/affinity is a model based on everything BUT taste. That's what folks - especially those who've bought wholesale into wine culture - have a hard time with.
Sommeliers are more or less like dowsers, astrologers, or tarot card readers.
I suggest you spend some time blind tasting wine with a trained sommelier. You will find very quickly that there is an actual methodology being applied and the results are not just randomly-chosen like tarot card readers. It is perhaps not as precise and scientific as we would like, but it is the best we humans can do with our limited abilities of sensory perception.
>The model for predicting wine quality/affinity is a model based on everything BUT taste.
Frankly, this is reductionist bullshit. But OK, believe what you want to believe.
I understand the concept of deep decision trees. But if you have to put in that much effort even to distinguish them, maybe the distinction is not that important, and the contempt for the snobbery is justified?
Imagine a highly trained medical pathologist, laboring to determine whether the clump of cells from the biopsy is a tumor or not. To the untrained eye, it just looks like any other clump of cells in the image. To the professional, it appears to be a tumor but there is some degree of uncertainty.
Then again, if they have to put in that much effort even to distinguish between the two, maybe the decision is not that important after all.
I get your point, but just because people find something important and interesting and you don't does not necessarily mean that it lacks value. Especially as a member of this community, where one might find, for instance, a heated discussion about the merits of node.js that the vast majority of the world would find mindlessly dull.
But do you see the point? You're consistently surprised by the difference between fields where you have to make an active effort to know you're enjoying something at all versus those where the difference between the real thing and a fraud is unfakeable. In neither node nor cancer research do you have to go through significant effort just to be aware that a relevant difference exists. In neither case is the fan entirely responsible for that awareness, as they are for wine.
If they’re into this connoisseurism, they are. How do they know they “enjoy red wine” if they don’t even know the taste difference and can be fooled by food coloring?
The differences between the two tests are substantive. In the dye test you linked, expectation determines interpretation, similar to how people interpret cheap versus expensive bottles differently. In the chilling test, removing flavor means people can't judge flavor. In one, additional information yields a different, directed result; in the other, less information yields random results.
It's not just wine - different temperatures affect the taste of everything. It's why melted ice cream tastes much sweeter than frozen ice cream, and warm cola tastes much sweeter than cold. It's also why I don't like my food piping hot - I strongly prefer lukewarm; the hotter the food, the less taste is perceived.
I just had a glass of a too chilled red wine... It's warm outside...
There are wines that border on each other and some that are all over the place, but I'd say I could most likely nail a chilled red wine just on the tannins, unless there were some really funny ones in the mix.
I've done blind tests of varieties and districts, and even amateurs can sort out a typical pinot noir, shiraz, and cabernet sauvignon just from the description. But there are always atypical wines that can trick you.