Principal component analysis explained visually (setosa.io)
187 points by vicapow on Feb 12, 2015 | 22 comments


Seeing how each visualization adjusts as I change the original dataset is so useful. The technique reminds me of Bret Victor's amazing work.

Ladder of Abstraction Essay: http://worrydream.com/#!2/LadderOfAbstraction

Stop Drawing Dead Fish Video: https://vimeo.com/64895205

This is awesome, thanks for sharing!


Very nice! I actually used the featured example from Mark Richardson's class notes on Principal Component Analysis (http://people.maths.ox.ac.uk/richardsonm/SignalProcPCA.pdf) in teaching. It was astounding how clear it was to some people and how unclear to others.

I did a singular value decomposition on a data set similar to the one Richardson used (except with international data). The original post here looks at the projection to country-coordinates, i.e. which axes describe the primary differences between countries. My students had no problem with that -- Wales and Northern Ireland are most different, in your example, and 'give' the first principal axis. But then I continued to do it with the foods, as Richardson did (look at Figure 4 in the linked file). Students concluded in large numbers that people just don't like fresh fruit and do like fresh potatoes. Hm. They didn't conclude that people don't like Wales and do like Northern Ireland; they accurately saw it as an axis. But once we were talking about food instead of countries, students saw projection onto the eigenspace as indicating some percentage of approval.

How could we visually display both parts of this principal component analysis to combat this prejudice that sometimes leads us to read left to right as worse to better?
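(For concreteness, here's a minimal numpy sketch of getting both projections from a single SVD; the matrix here is a random stand-in for the real countries-by-foods table.)

  import numpy as np

  # stand-in for the real 4-countries x 17-foods table
  rng = np.random.default_rng(0)
  X = rng.random((4, 17))
  Xc = X - X.mean(axis=0)                # centre each food column

  # one SVD yields both projections at once
  U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
  country_scores = U * S                 # countries in PC coordinates
  food_loadings = Vt.T                   # per-food weights on each PC

  # plotting country_scores[:, :2] and food_loadings[:, :2] together
  # (a biplot) presents each PC as an axis of variation, not a ranking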


By labelling the axes longitude and latitude ;) Or you could show the mean first and explain that the mean describes which foodstuffs are popular, and PC1 / PC2 refers to deviations from that mean.


How different is linear regression from PCA? I understand the procedures and methods are completely different, but wouldn't linear regression also give the same solution on these data sets?


PCA minimizes the perpendicular error to the principal axis, whereas regression minimizes the vertical error in the predicted variable. Here's a nice article that goes into more detail: http://www.r-bloggers.com/principal-component-analysis-pca-v...
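A quick way to see the difference on toy data (a numpy sketch with made-up points):

  import numpy as np

  rng = np.random.default_rng(1)
  x = rng.normal(size=200)
  y = 0.5 * x + rng.normal(scale=0.4, size=200)

  # regression minimizes vertical errors in y
  slope_ls = np.polyfit(x, y, 1)[0]

  # PCA minimizes perpendicular distance to the axis:
  # take the top eigenvector of the covariance matrix
  evals, evecs = np.linalg.eigh(np.cov(x, y))
  v = evecs[:, -1]
  slope_pca = v[1] / v[0]

  print(slope_ls, slope_pca)   # the two slopes generally differ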


PCA results in a space, while linear regression results in a function within the same space as the original data.


To begin with they solve different problems:

Linear regression has a notion of input and output. You want to model how the output depends on the input.

PCA does not. You have a point cloud and want to find a compressed representation of it.
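A small sketch of that compression view (numpy, on a synthetic point cloud):

  import numpy as np

  # a 3-D point cloud that really lives near a 2-D plane
  rng = np.random.default_rng(2)
  plane = rng.normal(size=(2, 3))
  points = (rng.normal(size=(500, 2)) @ plane
            + 0.01 * rng.normal(size=(500, 3)))

  mean = points.mean(axis=0)
  U, S, Vt = np.linalg.svd(points - mean, full_matrices=False)

  compressed = (points - mean) @ Vt[:2].T   # 500 x 2 representation
  restored = compressed @ Vt[:2] + mean     # back to 3-D
  print(np.abs(points - restored).max())    # error ~ the noise level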


This is truly fantastic.

Excuse me for being daft, but how do you transform back into 'what does this mean'?

For instance, in ex 3, we see that N. Ireland is an outlier. It wasn't obvious to me that the cause was potatoes and fruit.

How does PCA help you with the fundamental meaning?


They kind of skipped a step. What you would do is then look at what the first principal axis is. In this particular case, it would be a 17-D vector, each element corresponding to a food type. You would then look at which elements (food types) have the greatest magnitude.

In a toy example, imagine we had a 5-D case with beer, cereal, fruit, beef, and chicken, and we found that the first principal axis is {0.3, 0.1, -0.5, 0.0, 0.2} (in the same order). Then the change would be driven primarily by fruit and beer consumption.
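In code, reading off the dominant foods from that hypothetical axis is just a sort by magnitude:

  import numpy as np

  foods = ['beer', 'cereal', 'fruit', 'beef', 'chicken']
  axis1 = np.array([0.3, 0.1, -0.5, 0.0, 0.2])

  # rank foods by the size of their weight on the first axis
  for i in np.argsort(-np.abs(axis1)):
      print(foods[i], axis1[i])
  # fruit (-0.5) and beer (0.3) dominate, with opposite signs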


I was looking for the same information. The article links to a PDF with a deeper discussion of this case, including an explanation and the weight given to each product: http://people.maths.ox.ac.uk/richardsonm/SignalProcPCA.pdf

From Figure 4, after rescaling and rounding the coefficients:

The PC1 that separates N. Ireland is approximately:

  PC1 = FreshPotatoes + SoftDrinks/2 - FreshFruits/2 - AlcoholicDrinks/2

The PC2 that separates the other three countries is approximately:

  PC2 = FreshPotatoes - SoftDrinks


That's the main drawback of PCA - the principal components are linear combinations of all the original features.

An attempt to remedy this is called Sparse PCA (you can look it up on Google Scholar), in which the principal components are combinations of only a few features. This allows you to figure out which features are not important.
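For example, with scikit-learn's SparsePCA (a sketch on made-up data):

  import numpy as np
  from sklearn.decomposition import SparsePCA

  rng = np.random.default_rng(3)
  X = rng.random((100, 17))              # made-up samples x foods

  spca = SparsePCA(n_components=2, alpha=1.0, random_state=0)
  spca.fit(X - X.mean(axis=0))

  # the L1 penalty pushes many weights to exactly zero, so each
  # component implicates only a few features
  print(spca.components_)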


It'd be helpful to learn about eigenanalysis (at least it was for me). PCA leverages ideas about vector spaces that I found eigenanalysis helped clarify.



I think the 3D chart requires a WebGL plug-in... Where can I download one for Chrome 40.x?


PCA is a pretty okay method for dimensionality reduction. Latent Dirichlet allocation is pretty good too. It depends on what you're trying to do and how the data is distributed in N-dimensional space.


don't forget latent semantic indexing :)


This is great. The only thing I'm missing is an explanation of the various methods of rotating the principal axes (varimax, oblimin, etc.).
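(In case it helps, varimax itself fits in a few lines of numpy; this is a sketch of the classic iterative algorithm, not tied to any particular library:)

  import numpy as np

  def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
      # rotate a p x k loading matrix so each component ends up
      # with a few large weights and many near-zero ones
      p, k = loadings.shape
      R = np.eye(k)
      var = 0.0
      for _ in range(max_iter):
          L = loadings @ R
          grad = loadings.T @ (L ** 3 - (gamma / p) * L
                               @ np.diag((L ** 2).sum(axis=0)))
          u, s, vt = np.linalg.svd(grad)
          R = u @ vt
          if s.sum() < var * (1 + tol):
              break
          var = s.sum()
      return loadings @ R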


Hope I'm not late to the party... Does anyone have any implementations to recommend in Python?


This post seems to cover PCA in Python quite well: http://sebastianraschka.com/Articles/2014_pca_step_by_step.h...
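If you just need something off the shelf, scikit-learn's PCA is a few lines (a sketch on random stand-in data):

  import numpy as np
  from sklearn.decomposition import PCA

  X = np.random.default_rng(4).random((50, 10))   # stand-in data

  pca = PCA(n_components=2)
  scores = pca.fit_transform(X)         # samples projected onto PC1/PC2
  print(pca.explained_variance_ratio_)  # variance share per component
  print(pca.components_)                # per-feature weights of each PC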


Strange font, particularly the 's'. It's hard to read. Anyone else having the same experience?


I frequently use this Chrome extension (and even used it on this article, in fact): https://chrome.google.com/webstore/detail/text-fix/ofafkoecd...


Pretty nice page, but it doesn't say much about PCA. "Visualizing PCA output" would be a more appropriate post title.



