
Read Weapons of Math Destruction by Cathy O'Neil. The book explores several ways this is already happening. Her main premise is that many data-driven policies contain a feedback loop: you only get outcome data for the things you try, and you only try the things you already expect to succeed. As a result, algorithmic policies tend to reinforce the status quo.

Loan risk algorithms will favor people "similar to" those who have paid back loans before, a sample group biased towards people that banks have already loaned to before. As a result, a lot of the factors are biased towards "from a white upper-middle-class suburban background."

And recidivism estimators, which are used as jail sentencing guidelines in some places.

Screening algorithms for job resumes, and college applications.

Algorithms send police to where crimes are reported. Crimes are reported because the police are there to witness them. The area gets designated a high-crime area. Regular people are arrested more often because regular activity is suspicious in a high-crime area, affecting their future prospects. The higher arrest rate is used to justify this.
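That loop can be made concrete with a toy model (my own sketch, not from the book; the rates and the 1.2 "suspicion" multiplier are made-up illustration values): two districts with identical true crime rates, a slight initial skew in patrols, and a rule that sends police wherever crime was observed last round. The only asymmetry is that ordinary behavior in the more-patrolled district draws extra arrests.

```python
# Deterministic toy model of the predictive-policing feedback loop.
# Both districts have the SAME underlying crime rate; the gap in
# patrols comes entirely from the feedback rule.
true_rate = [0.10, 0.10]
patrol_share = [0.55, 0.45]  # small initial skew toward district 0

for _ in range(20):
    observed = []
    for d in (0, 1):
        # "High-crime area" designation: wherever most patrols go,
        # regular activity looks suspicious and draws more arrests.
        suspicion = 1.2 if patrol_share[d] > 0.5 else 1.0
        observed.append(patrol_share[d] * true_rate[d] * suspicion)
    total = sum(observed)
    # Send police where crimes were observed.
    patrol_share = [o / total for o in observed]

print([round(s, 3) for s in patrol_share])  # → [0.979, 0.021]
```

After twenty rounds, nearly all patrols end up in district 0, even though the true crime rates never differed, and the arrest statistics now "justify" the allocation.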

It's a continuous spectrum rather than a single point. But if I were to pick a single "point" where it became a self-fulfilling prophecy? 1994, due to the widespread passage of three-strikes laws.



Can't this be solved by randomly giving out the wrong prediction and seeing how it turns out? E.g., for 1% of applicants, pretend they have an 800+ credit score, then compare the actual outcome to the "expected" score.
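What's being described is essentially epsilon-greedy exploration from the bandit literature. A minimal sketch, with a hypothetical stand-in scoring model and made-up thresholds:

```python
import random

random.seed(0)

def model_score(applicant):
    # Hypothetical stand-in for a real credit model.
    return applicant["income"] * 0.001

EXPLORE_RATE = 0.01  # the commenter's 1%
APPROVE_THRESHOLD = 50

approved, explored = [], []
for _ in range(10_000):
    applicant = {"income": random.randint(10_000, 120_000)}
    if model_score(applicant) >= APPROVE_THRESHOLD:
        approved.append(applicant)
    elif random.random() < EXPLORE_RATE:
        # Approve a small random slice the model would reject,
        # purely to gather outcome data on that population.
        explored.append(applicant)

print(len(approved), len(explored))
```

Tracking repayment in the `explored` group against the model's prediction would reveal whether the model is systematically wrong about the rejected population, which the next comment objects to on ethical grounds.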


You're playing with people's lives here. That's not funny at all.



