I took two notes from this that I can use in my day job doing this kind of thing:

1) Don't use approaches that are black-box in nature if they will negatively impact people who have a say in their adoption. Stay simple, or create tools that let users play with the parameters to understand how the consideration of different factors creates different outcomes (see the sketch after these two points). Expose them to how it works; don't just ask for trust.

2) The narrative that takes hold around an algorithm supersedes the reality of what the algorithm does and how. Manage the soft side of the rollout just as strictly as the technical side. Just like in startups, sales is just as important as (if not more so than) having a working product.
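
For point 1, here's a minimal sketch of what such a tool could look like, assuming the model is a transparent weighted sum; all the factor names and weights below are made up for illustration:

    # Hypothetical transparent scorer: the model is nothing but the
    # weights and factors shown here, so users can move a weight and
    # watch the outcome change.
    def score(candidate, weights):
        # Per-factor contributions are returned so the breakdown is visible,
        # not just the final number.
        contributions = {f: weights[f] * candidate[f] for f in weights}
        return sum(contributions.values()), contributions

    candidate = {"seniority": 0.6, "utilization": 0.9, "tenure": 0.3}

    # Let a skeptical stakeholder compare two weightings side by side.
    for weights in (
        {"seniority": 0.5, "utilization": 0.3, "tenure": 0.2},
        {"seniority": 0.2, "utilization": 0.6, "tenure": 0.2},
    ):
        total, parts = score(candidate, weights)
        print(f"weights={weights} -> total={total:.2f} breakdown={parts}")

Because every contribution is printed, there's nothing hidden for a narrative to fill in: the user can see exactly why one weighting favors a candidate and another doesn't.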



> Stay simple, or create tools that let users play with the parameters to understand how the consideration of different factors creates different outcomes.

I've found that this works only if the tools can actually be used reliably to arrive at an outcome without supervision. If there are intermediate results that have to be checked against descriptive knowledge that isn't explicitly included in the model, then people will be disappointed when they choose a configuration and learn that it can't work for some reason or other.

Such as, "Oh, the calibration standard doesn't work in that range, I didn't put it in the model because I didn't expect anybody to try it, and it wasn't in the requirements at the time."
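
One way to avoid that disappointment is to encode the descriptive knowledge as explicit validity checks, so the tool refuses an infeasible configuration with a reason instead of letting people discover the problem later. A rough sketch, with invented calibration ranges:

    # Hypothetical: the "descriptive knowledge" (which standard works in
    # which range) lives in the model as data, so the tool can fail loudly
    # with an explanation. The standards and ranges are made up.
    CALIBRATION_RANGES = {"std_A": (0.1, 10.0), "std_B": (5.0, 100.0)}

    def validate(config):
        lo, hi = CALIBRATION_RANGES[config["standard"]]
        if not (lo <= config["target"] <= hi):
            raise ValueError(
                f"{config['standard']} is only calibrated for {lo}-{hi}; "
                f"target {config['target']} is outside that range"
            )

    try:
        validate({"standard": "std_A", "target": 50.0})
    except ValueError as e:
        print(e)  # std_A is only calibrated for 0.1-10.0; ...

Of course this only helps for constraints someone thought to write down, which is exactly the requirements problem described above.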


> Don't use approaches that are black-box in nature if they will negatively impact people who have a say in their adoption.

Shouldn't that be "people who are going to be harmed by it"? I mean, this situation makes it clear that no one likes being run over by a black box. We shouldn't say "well, it's OK if they can't complain" (and the racist-AI parole system example shows this is a real risk [1]).

[1] https://www.wired.com/2017/04/courts-using-ai-sentence-crimi...



