
Reads like a sad state of affairs, but the article itself doesn't really explain what the actual concerns were that caused the proposals to be put on ice.

From reading, I mostly get "PTC was un-implemented and put on ice because some browser vendors had issues with it; the alternative proposal, STC, was put on ice because other browser vendors had different issues with it. Then everyone (from the browser vendor side) kind of lost interest."

But what were the actual issues that blocked the two proposals?

Edit: Ah, I'm sorry. The issues with PTC are indeed described, but STC was brought forward specifically to address those reasons. So why wasn't STC implemented then?



[From the article]

Why are browser vendors ignoring PTC? V8 chalks it up to two main reasons:

* It makes it more difficult to understand during debugging how execution arrived at a certain point since the stack contains discontinuities, and

* error.stack contains less information about execution flow which may break telemetry software that collects and analyzes client-side errors.


> It makes it more difficult to understand during debugging how execution arrived at a certain point since the stack contains discontinuities

That's a weird complaint, considering that stacks don't describe "how execution arrived at a certain point". In fact, stacks don't describe the past at all; rather, they describe the future of what's left to do (AKA the "continuation").

For example, consider this code:

    function foo() {
      const bar = someComplexFunction();
      performSomeEffect();
      baz(bar);
    }
If an error occurs somewhere inside `baz`, the stack trace won't mention anything about `someComplexFunction`, or `performSomeEffect`, or the vast majority of "how we arrived at" the call to `baz`. Yet it will tell us exactly what was remaining to do (namely, `baz` and `foo`).
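To make this concrete, here's a runnable variant of the snippet above (the helper bodies are stand-ins invented for illustration). Capturing a stack inside `baz` shows only the pending frames, not the calls that already completed:

```javascript
// Stand-in helpers (hypothetical bodies, just for illustration).
function someComplexFunction() { return 42; }
function performSomeEffect() { /* imagine a logging side effect */ }

function baz(bar) {
  // The captured stack lists the frames still pending (baz, foo, ...).
  // someComplexFunction and performSomeEffect already ran and returned,
  // so they appear nowhere in it.
  return new Error().stack;
}

function foo() {
  const bar = someComplexFunction();
  performSomeEffect();
  return baz(bar);
}

const trace = foo();
console.log(trace);
```

Running this in V8/Node, the printed trace names `baz` and `foo` but says nothing about the two helpers, even though they are most of "how we got here".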

If we eliminate tail calls, stack traces are still an exact description of the continuation. The difference is that "remaining work" doesn't include a bunch of useless identity functions (i.e. redundant stack frames with no further work to do).


For execution, a stack is a continuation. For debugging, we pretend like it's a historical record, and mostly get away with it. Various things break the correspondence slightly. TCO breaks it a lot more.

Debugging is important. It doesn't get enough respect. Stacks are a pretty critical component of debugging, for better or worse.

It would be great if we didn't depend on this fiction quite so much. With native code, there are definitely alternative options now, such as rr[1] and Pernosco[2] where if you want to look back in time—well, you just go back in time. For JavaScript, that's becoming more and more possible with things like Replay[3]. Perhaps before long, the debugging argument will just go away.

[1] https://rr-project.org/

[2] https://pernos.co/

[3] https://www.replay.io/


The hardware stack has always been a crutch that in retrospect was probably a bad idea. We use it for jobs it's not well-suited for (like parameter passing, local variables, and debugging) and it has held back better flow control mechanisms like delimited and first-class continuations. And of course TCO, which wouldn't even be a thing if everybody didn't automatically assume a stack pointer was involved with every call. (Hard to imagine? Yes, but plenty of other flow control models exist.)

Stacks are still useful for low-level jobs like register spilling and interrupt handlers, and they make memory management of such data easy. Nevertheless on modern machines with multicore processors running message-passing programs, the limitations of what can be done in high-level code with a one-dimensional stack pointer should now be obvious.
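For what it's worth, tail calls don't require a hardware stack at all. A trampoline, one of the classic alternatives, makes the "replace the current frame instead of pushing a new one" semantics explicit in plain JS (a minimal sketch; the function names are mine, not from any library):

```javascript
// A minimal trampoline: a "tail call" is represented as a thunk, and a
// driver loop swaps in the next call rather than pushing a frame.
function trampoline(fn, ...args) {
  let result = fn(...args);
  while (typeof result === 'function') {
    result = result(); // each bounce runs in constant stack space
  }
  return result;
}

// Tail-recursive countdown in trampolined style. Written as plain
// recursion, a depth of one million would overflow the stack in any
// engine without PTC; here it runs as a loop.
function countdown(n) {
  return n === 0 ? 'done' : () => countdown(n - 1);
}

console.log(trampoline(countdown, 1000000)); // 'done'
```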


Stack frames also capture locals, and those often provide a lot of information about what just happened. I've had cases before where this was instrumental to figuring out the cause of a bug, and other cases where it likely would have been if TCO hadn't wiped out that information (in C++).


If you're writing imperative code with side effects (or in a mixed style), as much classic JS code is, the existence of foo on the call stack indicates that performSomeEffect has run, so its side effects on global state have to be accounted for when we enter baz.

Is it an ideal style to write code in? No. Does real code have this problem? Yes!


This is such a weird complaint given that Erlang exists, with tail calls, proper async, and more, and is used to build complex software.


The spec for STC has a critique of PTC:

- performance

- developer tools

- Error.stack

- cross-realm tail calls

- developer intent

See: https://github.com/tc39/proposal-ptc-syntax#issues-with-ptc
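For context, STC's distinguishing feature was an explicit marker: tail-call behavior would be opted into syntactically rather than applied implicitly as in PTC. A sketch of the proposed `return continue` form (never implemented by any engine, so this is illustrative only and will not parse today):

```js
// Proposed syntactic tail calls (STC): "return continue" marks an
// intentional tail call, so developers and tools know the frame will
// be elided. Illustrative sketch of unshipped syntax.
function factorial(n, acc = 1) {
  if (n <= 1) return acc;
  return continue factorial(n - 1, n * acc); // explicit tail call
}
```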

Apple's 2016 response as to why they won't implement STC is here: https://github.com/tc39/ecma262/issues/535

- STC is part of the spec and will take too long to change.

- Now that they've implemented support for PTC, they don't want to regress web pages that rely on it.

- They don't want to discourage vendors from implementing PTC by agreeing to STC.

- They don't want to introduce confusion.

Some of these arguments about confusion and delays seem wrong in hindsight, since on every point things would have been better if they'd just agreed to the compromise of STC.

- It would have been part of the spec years ago

- STC would have had a clear way for web pages to know when tail calls could be relied on (and PTC would have been optional)

- Other vendors didn't implement PTC in any case, despite no agreement on STC

- There's even more confusion as things are now


It's in the article?

1. more difficult to understand during debugging

2. less information about execution flow which may break telemetry



