
While it's a good idea in principle to publish failures, in practice it's trickier. So a particular model didn't work. Does that mean the model is fundamentally flawed? Or that you weren't smart enough to engineer it just right? Or that you didn't throw enough computing power at it?

In a vast error landscape of non-working models, a working model is extremely rare and provides valuable information about that local optimum.

The only way publishing non-working models would be useful is if the authors were required to do a rigorous analysis of why exactly the model did not work (which is extremely hard given our current state of knowledge, although some people are starting to attempt it).



>> While it's a good idea in principle to publish failures, in practice it's trickier. So a particular model didn't work. Does that mean the model is fundamentally flawed? Or that you weren't smart enough to engineer it just right? Or that you didn't throw enough computing power at it?

And yet the field seems to accept that a research team might train a bunch of competing models on a given dataset, compare them to their favourite model and "show" that theirs performs better - even though there's no way to know whether they simply didn't tune the other models as carefully as their own.
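
A minimal sketch of what a more even-handed comparison could look like (assuming scikit-learn and a toy dataset; the models and search spaces here are purely illustrative): every candidate, the authors' favourite included, gets the same hyperparameter-search budget and the same cross-validation splits, so "ours wins" at least isn't an artifact of unequal tuning effort.

    from scipy.stats import loguniform, randint
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RandomizedSearchCV

    # Hypothetical stand-in dataset; any fixed, shared split would do.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    # Each candidate gets its own search space, but the same trial budget
    # and the same CV protocol as every other candidate.
    candidates = {
        "logreg": (LogisticRegression(max_iter=1000),
                   {"C": loguniform(1e-3, 1e3)}),
        "rf": (RandomForestClassifier(random_state=0),
               {"n_estimators": randint(100, 600), "max_depth": randint(2, 20)}),
        "gbm": (GradientBoostingClassifier(random_state=0),
                {"learning_rate": loguniform(1e-3, 3e-1), "max_depth": randint(2, 6)}),
    }

    BUDGET = 20  # identical number of hyperparameter trials per model

    for name, (model, space) in candidates.items():
        search = RandomizedSearchCV(model, space, n_iter=BUDGET, cv=5,
                                    scoring="accuracy", random_state=0)
        search.fit(X, y)
        print(f"{name}: best CV accuracy = {search.best_score_:.3f}")

Equal trial counts are only a crude proxy for equal effort, but reporting the tuning budget at all is already more than many comparison tables do.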



