
Cleaning up the bug reporting system is often one of my jobs.

The point of a bug report is to let you know something is going on, to provide context useful in tracking it down, and to track the eventual resolution.

As the years tick by, some bug reports become counterproductive: they are unclear, don't give _any_ diagnostic information, or are no longer relevant because the codebase has shifted.

It's great to say we should spend a day or two trying to reproduce each of these issues, but that's really not the best use of time.

We're not talking about capriciously trashing useful bug reports. We're trying to lower the noise floor so that the actionable bugs stand out and are more likely to be dealt with.



The same way we have text classification for spam in our mail inbox, couldn't someone train a model to classify issues as actionable bugs vs. noise for large projects like the one mentioned in the OP? Data would come from closed issues (a rough sketch follows the list):

- if the issue led to a commit, or a merge is mentioned in the thread, then it is actionable,

- if the issue was closed without any code change, then it is noise.
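
A minimal sketch of that classifier with scikit-learn. load_closed_issues() is a hypothetical loader (e.g. wrapping the GitHub API) that yields (issue text, led-to-code-change) pairs:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # label each closed issue: 1 = actionable (led to a code change), 0 = noise
    texts, labels = [], []
    for text, led_to_change in load_closed_issues():  # hypothetical loader
        texts.append(text)
        labels.append(1 if led_to_change else 0)

    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.2, stratify=labels)

    # bag-of-words features + logistic regression; class_weight="balanced"
    # because most closed issues are likely noise
    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=3),
        LogisticRegression(max_iter=1000, class_weight="balanced"))
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))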


Maybe. But I think this process really helps develop a deeper understanding of the evolution of the project, and it doesn't take that much time.

And I bet your AI isn't going to be able to say 'oh yeah, that's just that thing we fixed in 2.1'.

Seems useful in a second-order capacity, though: process introspection.


You do see a decent number of GitHub bots that will trash issues if they don't comply with some sort of "expected/observed/steps/specs" format, which more or less accomplishes the same thing. The only downside is that you potentially lose issues from less tech-savvy users, but simply being on GitHub probably filters out many of those people anyway.
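
The check such a bot performs can be as simple as requiring the template's section headings in the issue body. A rough sketch; the heading names here are an assumption, since real bots match whatever template the repo defines:

    # required headings from a hypothetical issue template
    REQUIRED_SECTIONS = [
        "### Expected behavior",
        "### Observed behavior",
        "### Steps to reproduce",
        "### Specs",
    ]

    def is_compliant(issue_body: str) -> bool:
        # the issue passes only if every template heading appears in the body
        return all(heading in issue_body for heading in REQUIRED_SECTIONS)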



