
His dog won pretty handily in the last election, though. It's not an exact science, but his methodology seemed to eliminate a fair amount of doubt from the equation.


Have you seen the comparisons to the Princeton Election Consortium, which did something similar but without the "special sauce"? Nate did a little worse, which is a bit of evidence that Sam's straight numbers are better than Nate's judgment.


or random error.


Definitely possible, that's why I said "a bit"


I think it will be really interesting to see what happens in election forecasting if Trump wins this election -- doubly so if there's no major scandal for Clinton between now and election day.


What happens with election forecasting will be the least of our collective problems.


If the polls stay as they are and the election goes the other way, it means that polling as practiced is somehow fundamentally broken, which would be very surprising.

If there's a gradual shift in the polls, I'm not sure what that would mean. Which scenario did you have in mind?


Brexit will likely want to have a word with you.

I have a feeling that polling is now used as a political weapon, not a measure of reality.


Nate Silver arguably lost a good deal of "prediction calibration points" by placing a very low chance on Trump getting the nomination in the first place.


Not at all. A low probability does not make the prediction invalid just because of the result.


And wouldn't the same be true if Trump wins the election? After all, Nate Silver isn't predicting a 0% chance for Trump to win; last time I checked, it was 10-15%.


Well, that's not what I said either.


>doubly so if there's no major scandal for Clinton between now and election day.

Unlikely, considering there's one every other day


Repeating the same thing every other day doesn't count.


That wasn't the feat people make it out to be. In US elections there are only a handful of states that could go either way.


It adds up pretty quickly. For 2008 and 2012 combined, he only got one state wrong. Say there are 10 swing states with close to 50/50 odds, and all the others are fixed. The chance of getting them all right, twice, is 0.5^(2*10) = 0.000000953674316.

(That's actually not the best method to evaluate him: since he provides probability estimates, he could be better evaluated with something like the Brier score, https://en.wikipedia.org/wiki/Brier_score, which checks for calibration as well. But the coin-flip calculation is more intuitive.)
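
To make that concrete, here is a minimal sketch of both calculations in Python. The 0.5^20 figure is the one from the comment above; the per-state forecast and outcome numbers are made up purely for illustration.

    # Chance of calling 10 coin-flip swing states right in two
    # consecutive elections, per the estimate above.
    print(0.5 ** (2 * 10))  # 9.5367431640625e-07

    def brier_score(forecasts, outcomes):
        # Mean squared error between predicted probabilities and 0/1
        # outcomes. Lower is better: a perfect forecaster scores 0,
        # a constant 50% forecast scores 0.25.
        pairs = list(zip(forecasts, outcomes))
        return sum((f - o) ** 2 for f, o in pairs) / len(pairs)

    # Hypothetical per-state win probabilities and actual results
    # (1 = the favored candidate won); illustrative numbers only.
    forecasts = [0.92, 0.80, 0.65, 0.55, 0.30, 0.10]
    outcomes = [1, 1, 1, 0, 0, 0]
    print(brier_score(forecasts, outcomes))  # ~0.095

A forecaster who hedges everything at 50% scores exactly 0.25, so consistently beating that across many states is evidence of real calibration, not just lucky binary calls.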


There aren't ten swing states. It's probably closer to half that. And they aren't 50/50 either. I got 49 right in the last election just by looking at the public polls in the days leading up to the vote.


[flagged]


His name is Nate Silver. Don't be obnoxious.

538 didn't fail to read data properly re Trump's nomination. Trump was a new phenomenon with no history to build a model from. 538's mistake was guessing instead of admitting they had insufficient data -- which is bad behavior, but not because their preferred methodology is bad.


Bull. Their polling-based model performed okay (although, at least through Super Tuesday when I looked at it, his polls-plus model didn't outperform his polls-only model, and neither significantly outperformed the RCP weighted average of polls). What failed during the primaries was Nate's "Party Decides" punditry.


LTCM failed too because it did not consider confidence intervals on its models.



