Very nice! I actually used the featured example from Mark Richardson's class notes on Principal Component Analysis (http://people.maths.ox.ac.uk/richardsonm/SignalProcPCA.pdf) in my own teaching. It was astounding how clear it was to some people and how unclear to others.
I did a singular value decomposition on a data set similar to the one Richardson used (except with international data). The original post here looks at the projection to country-coordinates, asking which axes describe the primary differences between countries. My students had no problem with that -- Wales and Northern Ireland are most different, in your example, and 'give' the first principal axis. But then I continued with the foods, as Richardson did (see Figure 4 in the linked file). Many students concluded that people just don't like fresh fruit and do like fresh potatoes. Hm. They didn't conclude that people don't like Wales and do like Northern Ireland; they accurately saw it as an axis. But once we were talking about food instead of countries, students read projection onto the eigenspace as indicating some percentage of approval.
How could we visually display both parts of this principal component analysis to combat this prejudice that sometimes leads us to read left to right as worse to better?
By labelling the axes longitude and latitude ;) Or you could show the mean first and explain that the mean describes which foodstuffs are popular, and PC1 / PC2 refers to deviations from that mean.
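A minimal sketch of that split, using a made-up consumption matrix (the numbers here are placeholders, not the real data):

    import numpy as np

    # Made-up consumption matrix: rows = countries, columns = foods.
    foods = ["fresh fruit", "fresh potatoes", "cereals", "beverages"]
    X = np.array([
        [1100.0,  720.0, 1470.0, 57.0],   # England
        [1030.0,  870.0, 1580.0, 73.0],   # Wales
        [ 890.0, 1030.0, 1460.0, 53.0],   # Northern Ireland
        [ 950.0,  810.0, 1490.0, 47.0],   # Scotland
    ])

    mean = X.mean(axis=0)        # "which foodstuffs are popular"
    deviations = X - mean        # what PC1 / PC2 actually describe

    # Principal axes come from the SVD of the *centered* data.
    U, s, Vt = np.linalg.svd(deviations, full_matrices=False)
    pc1 = Vt[0]

    for food, m, w in zip(foods, mean, pc1):
        print(f"{food:15s}  mean {m:7.1f}   PC1 weight {w:+.2f}")

Showing the mean column next to the PC1 weights makes it hard to misread a negative weight as "nobody likes this food."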
How different is linear regression from PCA? I understand the procedures and methods are completely different, but wouldn't linear regression also give the same solution on these data sets?
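One way to make the question concrete is to fit both to the same 2D toy data; the lines generally differ, since regression minimizes vertical residuals while PCA minimizes orthogonal distance. A quick sketch with made-up data:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = 0.5 * x + rng.normal(scale=1.5, size=200)  # noisy linear relation

    # OLS slope: minimizes vertical residuals of y given x.
    slope_ols = np.polyfit(x, y, 1)[0]

    # First principal axis: minimizes orthogonal distance to the line.
    X = np.column_stack([x, y]) - [x.mean(), y.mean()]
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    slope_pca = Vt[0, 1] / Vt[0, 0]

    print(f"OLS slope: {slope_ols:.2f}, PC1 slope: {slope_pca:.2f}")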
They kind of skipped a step. What you would do is then look at what the first principal axis is. In this particular case, it would be a 17-D vector, each element corresponding to a food type. You would then look at which elements (food types) have the greatest magnitude.
In a toy example, imagine a 5-D case with beer, cereal, fruit, beef, and chicken, and we found that the first principal axis is {0.3, 0.1, -0.5, 0.0, 0.2} (in the same order). Then the variation along that axis would be driven primarily by fruit and beer consumption.
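Picking out the dominant foods is then just sorting the axis by absolute value; a minimal sketch with the toy vector above:

    import numpy as np

    foods = ["beer", "cereal", "fruit", "beef", "chicken"]
    pc1 = np.array([0.3, 0.1, -0.5, 0.0, 0.2])  # first principal axis from the toy example

    # Rank foods by how strongly they load on the first axis.
    for i in np.argsort(-np.abs(pc1)):
        print(f"{foods[i]:8s} {pc1[i]:+.1f}")
    # fruit (-0.5) and beer (+0.3) dominate, so they drive the axis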
I was looking for the same information. The article links to a PDF with a deeper discussion of this case, including an explanation and the weights for each product: http://people.maths.ox.ac.uk/richardsonm/SignalProcPCA.pdf
From Figure 4, after rescaling and rounding the coefficients:
That's the main drawback of PCA - the principal components are linear combinations of all the original features.
An attempt to remedy this is called Sparse PCA (you can look it up on Google Scholar), in which each principal component is a combination of only a few features. This makes it much easier to see which features actually matter.
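A minimal sketch with scikit-learn's SparsePCA, on random stand-in data (the 17 columns are just a nod to the 17 food types):

    import numpy as np
    from sklearn.decomposition import SparsePCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 17))  # stand-in for 100 samples of 17 food features

    # Higher alpha pushes more loadings to exactly zero.
    spca = SparsePCA(n_components=2, alpha=1.0, random_state=0)
    spca.fit(X)

    # Each component now involves only a handful of features.
    for k, comp in enumerate(spca.components_):
        nonzero = np.flatnonzero(comp)
        print(f"PC{k + 1} uses features: {nonzero.tolist()}")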
PCA is a pretty okay method for dimensionality reduction. Latent Dirichlet allocation is pretty good too. It depends on what you're trying to do and how the data is distributed in N-dimensional space.
Ladder of Abstraction Essay: http://worrydream.com/#!2/LadderOfAbstraction
Stop Drawing Dead Fish Video: https://vimeo.com/64895205
This is awesome, thanks for sharing!