His dog won pretty handily in the last election, though. It's not an exact science, but his methodology seemed to eliminate a fair amount of doubt from the equation.
Have you seen the comparisons to the Princeton Election Consortium, which did something similar but without the "special sauce"? Nate did a little worse, which is a bit of evidence that Sam's straighter numbers are better than Nate's judgment.
I think it will be really interesting to see what happens in election forecasting if Trump wins this election -- doubly so if there's no major scandal for Clinton between now and election day.
If the polls stay as they are and the election goes the other way, it means that polling as practiced is somehow fundamentally broken, which would be very surprising.
If there's a gradual shift in the polls, I'm not sure what that would mean. Which scenario did you have in mind?
Nate Silver arguably lost a good deal of "prediction calibration points" by placing a very low chance on Trump getting the nomination in the first place.
And wouldn't the same be true if Trump wins the election? After all, Nate Silver isn't predicting a 0% chance of Trump winning; last time I checked it was 10-15%.
It adds up pretty quickly. For 2008 and 2012 combined, he got only one state wrong. Say there are 10 swing states with close to 50/50 odds, and all the others are fixed. The chance of getting them all right, twice, is 0.5^(2*10) = 0.000000953674316 -- about one in a million.
(That's actually not the best way to evaluate him, since he publishes probability estimates and would be better judged by something like the Brier score -- https://en.wikipedia.org/wiki/Brier_score -- which checks calibration as well, but the coin-flip math is more intuitive.)
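For the curious, here's a minimal Python sketch of both calculations. The coin-flip number is the one from above; the forecast and outcome values in the Brier example are made up purely for illustration, not real 538 numbers:

    # Chance of calling 10 toss-up (50/50) states correctly
    # in two consecutive elections.
    p = 0.5 ** (2 * 10)
    print(p)  # 9.5367431640625e-07, about one in a million

    # Toy Brier score: mean squared error between forecast
    # probabilities and outcomes (1 = happened, 0 = didn't).
    # Lower is better; a perfect forecaster scores 0.
    # These forecast/outcome values are invented for illustration.
    forecasts = [0.9, 0.7, 0.55, 0.2]
    outcomes = [1, 1, 0, 0]
    brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
    print(brier)  # 0.110625

The point of the Brier score is that it rewards saying "60%" about things that happen about 60% of the time, rather than just counting binary hits and misses.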
There aren't ten swing states; it's probably closer to half that. And they aren't 50/50 either. I got 49 states right in the last election just by looking at the public polls in the days leading up to the vote.
538 didn't fail to read data properly re Trump's nomination. Trump was a new phenomenon with no history to build a model from. 538's mistake was guessing instead of admitting they had insufficient data -- which is bad behavior, but not because their preferred methodology is bad.
Bull. Their polling-based model performed okay (although, at least through Super Tuesday when I looked, the polls-plus model didn't outperform the polls-only model, and neither significantly outperformed the RCP weighted average of polls). What failed during the primaries was Nate's "Party Decides" punditry.