
> Let it go. Let it crash. Just assume inputs are non-null (except where null makes sense). Even C crashes safely on null pointer dereference.

If the amount of work done before the crash has some value and is lost by the crash, then crashing just isn't an option.



What everybody is missing here is that null pointers where none was expected are just an instance of wrong code. How do you protect yourself from wrong code? (Hint: not by adding more code!)


Perhaps you're just missing the complexity of the real world?

I'm working on quite big finite element desktop applications, where just reading the data can take up quite a bit of time. The user can then apply all kinds of operations to this data, and if one of these operations fails because of a null pointer, then sure, the code is wrong, but the user still doesn't want to lose all of their previous work and wants to be able to save the changed data.

Sure, there are cases where a failed operation might have corrupted the data, but you just can't tell in every case, and often the data is still valid and only the operation couldn't be applied.

If I've learned anything over the years, it's that there isn't one solution that works for all cases.


So you weren't expecting bad user input? Bad for you!

This is called validation, and in a validation routine you expect invalid data. Check inputs at the trust boundaries, and off you go.
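A minimal sketch of what "check inputs at the trust boundaries" can look like. All names here (`Mesh`, `load_mesh`, `node_count`) are hypothetical, made up for illustration: the point is that one validation routine rejects bad data at the boundary, so everything past it can safely assume non-null, well-formed input.

```python
# Sketch: validate untrusted input once, at the trust boundary.
# Mesh/load_mesh/node_count are hypothetical names for illustration.

class Mesh:
    def __init__(self, nodes):
        self.nodes = nodes

def load_mesh(raw):
    """Trust boundary: reject bad input here, so code beyond this
    point can assume a well-formed, non-None mesh."""
    if raw is None:
        raise ValueError("mesh data is missing")
    if not isinstance(raw, list) or not raw:
        raise ValueError("mesh must be a non-empty list of nodes")
    return Mesh(list(raw))

def node_count(mesh):
    # Internal code: no null checks needed, the boundary guaranteed it.
    return len(mesh.nodes)
```

Internal functions like `node_count` stay free of defensive checks; only the boundary pays that cost.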

Note this is NOT about just null pointers but about integrity in general.


Well, if you want a serious discussion, then don't imply things just to be able to make a point.

But if you just want to win an argument: here you go.


Then you should run the risky operations in a different process. It will not only protect you from the operation crashing your main program, but also from data corruption if you use read-only access for the shared data.
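A minimal Python sketch of that idea, using `multiprocessing` (one possible mechanism, not the only one). The risky operation runs in a child process; since the data handed to the child is a copy, a crash or hang in the child can't corrupt the parent's in-memory state, and the parent just observes the exit code. The operation itself is hypothetical.

```python
# Sketch: isolate a risky operation in a child process so a crash
# (an exception, or even a segfault in native code) can't take down
# the main program or corrupt its data. Names are illustrative.
import multiprocessing as mp
import queue

def risky_operation(data, out):
    # Stand-in for "wrong code": raises TypeError if data contains None.
    out.put(sum(x * 2 for x in data))

def run_isolated(data, timeout=5):
    out = mp.Queue()
    p = mp.Process(target=risky_operation, args=(data, out))
    p.start()
    p.join(timeout)
    if p.is_alive():          # hung: kill it, parent data untouched
        p.terminate()
        p.join()
        return None
    if p.exitcode != 0:       # child crashed; parent carries on
        return None
    try:
        return out.get(timeout=1)
    except queue.Empty:
        return None
```

The parent can then report the failure and still let the user save their previous work, which is exactly the scenario described above.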



