This brings up one of my pet peeves: recursion. Of course there should have been other mitigations in place, but recursion is such a dangerous tool. As far as I'm concerned, its only real purpose is to confuse students in 101 courses.
I assume that they are using .Net, as stack overflow exceptions (SOEs) bring down .Net processes. While that sounds like a strange implementation detail, the philosophy of the .Net team has always been "how do you reasonably recover from a stack overflow?" Even in C++, what happens if, for example, the allocator experiences a stack overflow while deallocating some RAII resource, or a finally block calls a function and allocates stack space, or... you get the idea.
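For anyone who hasn't run into it, a minimal C# sketch of that behavior (not from the article, just an illustration): since .NET Framework 2.0, and on modern .NET as well, a StackOverflowException cannot be caught by user code; the runtime tears the process down first.

```csharp
using System;

class Program
{
    static int Recurse(int depth)
    {
        // No base case on purpose; doing arithmetic after the call also keeps
        // the JIT from turning this into a tail call, so the stack really grows.
        return Recurse(depth + 1) + 1;
    }

    static void Main()
    {
        try
        {
            Recurse(0);
        }
        catch (StackOverflowException)
        {
            // Never reached: the runtime prints "Stack overflow." and kills the
            // process before any user handler gets a chance to run.
            Console.WriteLine("caught");
        }
    }
}
```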
The obvious thing to do here would be to limit recursion depth in the library (which is what safe recursion usage amounts to). BinaryPack does not have a recursion limit option, which makes it unsafe to use on any untrusted data (and that can include data that you produce yourself, as Bunny experienced). Time to open a PR, I guess.
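To make "a recursion limit option" concrete, here is a hypothetical sketch of the idea (this is not BinaryPack's actual API; the Node type and MaxDepth value are made up): the deserializer threads a depth counter through its recursive calls and refuses to descend past a configured maximum, turning a process-killing stack overflow into an ordinary, catchable exception.

```csharp
using System.Collections.Generic;
using System.IO;

class Node
{
    public int Value;
    public List<Node> Children = new List<Node>();
}

static class DepthLimitedReader
{
    // Assumed limit; in a real library this would be a serializer setting.
    const int MaxDepth = 64;

    public static Node ReadNode(BinaryReader reader, int depth = 0)
    {
        if (depth > MaxDepth)
            throw new InvalidDataException(
                $"Nesting exceeds {MaxDepth} levels; refusing to recurse further.");

        var node = new Node { Value = reader.ReadInt32() };

        // A real implementation would also sanity-check this count, since it
        // comes from untrusted input.
        int childCount = reader.ReadInt32();

        for (int i = 0; i < childCount; i++)
            node.Children.Add(ReadNode(reader, depth + 1)); // bounded by the check above

        return node;
    }
}
```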
This applies to JSON, too. I would suggest that OP configure their serializer with a limit[1]:

[1]: https://www.newtonsoft.com/json/help/html/MaxDepth.htm
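For concreteness, a minimal Json.NET example using the MaxDepth setting documented at the link above; the deeply nested payload here is just a stand-in for hostile (or accidentally self-referential) input:

```csharp
using System;
using Newtonsoft.Json;

class Program
{
    static void Main()
    {
        // A pathologically nested document: 10,000 levels of [[[[ ... ]]]].
        string json = new string('[', 10_000) + new string(']', 10_000);

        var settings = new JsonSerializerSettings
        {
            MaxDepth = 64 // reject anything nested deeper than 64 levels
        };

        try
        {
            JsonConvert.DeserializeObject<object>(json, settings);
        }
        catch (JsonReaderException ex)
        {
            // Json.NET stops at the configured depth and throws, instead of
            // recursing until the stack runs out.
            Console.WriteLine(ex.Message);
        }
    }
}
```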
The most effective tools for the job are usually the more dangerous ones. Certainly, you can do anything without recursion, but forcing this makes a lot of problems much harder than they need to be.
> While that sounds like a strange implementation detail, the philosophy of the .Net team has always been "how do you reasonably recover from a stack overflow?"
Can you expand on this or link to any further reading? I just realized that this affects my platform (Go) as well, but I don't understand the reasoning. Why can't stack overflow be treated just like any other exception, unwinding the stack up to the nearest frame that has catch/recover in place (if any)?
The answer lies in trying to figure out how Go would successfully unwind that stack, and it can't: the stack is already exhausted, so when it calls `a` it will simply overflow again. Something that has been discussed is a "StackAboutToOverflowException", but that only kicks the can down the road (unwinding could still cause an overflow).
In truth, the problem exists because implicit calls at the end of methods interact badly with stack overflows, whether those calls come from defer-like functionality, structured exception handling, or destructors.
But doesn’t this apply to “normal” panics as well? When unwinding the stack of a panicking goroutine, any deferred call might panic again, in which case Go keeps walking up the stack with the new panic. In a typical server situation, it will eventually reach some generic “log and don’t crash” function, which is unlikely to panic or overflow.
Perhaps one difference is that, while panics are always avoidable in a recovery function, stack overflows are not (if the stack happens to be deep enough already). Does the argument go "even a seemingly safe recovery function can't be guaranteed to succeed, so prevent the illusion of safety"?
(To be clear: I’m not arguing, just trying to understand.)
I'm not actually sure what Go would do in a double-fault scenario (that's when a panic causes a panic), but assuming it can recover from that:
In the absolute worst case scenario: stack unwinding is itself a piece of code[1]. In order to initiate the stack unwind and deal with SEH/defer/dealloc, the Go runtime would need stack space to run that code. Someone might say, "freeze the stack and do the unwind on a different thread." The problem is that the bit in quotes is, again, at least one stack frame, and it needs stack space to execute.
I just checked the Go source, and it basically uses a linked list of stack frames in the heap[2]. If a stack is about to overflow, it allocates a new stack and continues in that stack. This does have a very minor performance penalty. So you're safe from this edge case :).
> As far as I'm concerned, its only real purpose is to confuse students in 101 courses.
I had a "high school" level programming class with Python before studying CS. I ran into CPython's recursion limit often and wondered why one would use recursion when for loops were a more reliable solution.
Nowadays, my "recursion" is for-looping over an object's children and calling some function on each child object.
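That pattern generalizes nicely: if the structure can be arbitrarily deep, keeping the pending work in an explicit heap-allocated stack does the same traversal without betting the process on call-stack depth. A sketch (in C#, to match the rest of the thread; the Node type stands in for "an object's children"):

```csharp
using System;
using System.Collections.Generic;

class Node
{
    public string Name = "";
    public List<Node> Children = new List<Node>();
}

static class Traversal
{
    // Visit every node without recursion: pending nodes live in a heap-allocated
    // Stack<T>, so depth is limited by memory rather than by stack size.
    public static void Visit(Node root, Action<Node> action)
    {
        var pending = new Stack<Node>();
        pending.Push(root);

        while (pending.Count > 0)
        {
            var node = pending.Pop();
            action(node);

            foreach (var child in node.Children)
                pending.Push(child);
        }
    }
}
```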