There's something that I'm missing here. Can anyone who is involved with this project clarify this example:
function getFullName( id ){
  return ajax( "/user/" + id + "/name" ) + " " + ajax( "/user/" + id + "/lastname" );
}
I've read through the source and it mentions blocking etc. but I didn't see quite what I was looking for.
So here's the deal: at one point I was working on a laziness library to treat promises as lazy values, and the problem I ran into was that JS doesn't let you overload operators like `+` and `-`, so the API has to export things like `Lazy.plus(p1, p2, p3)`. By itself that's not so bad -- it even makes everything look Lispy in a strangely C-like syntax -- but it was sufficiently heavy that I more or less abandoned the project.
So from my understanding, the `+` can only work if the `ajax()` calls now block the global browser JS thread. Is that true? Does the above function even work synchronously, if I call it?
Of course, to actually use this function we have to use `Syncify.revert` which converts it back into a callback-based function. Does Syncify.revert have to somehow parse the function? How does it "stick its own context" into the function that it's calling without something complicated like dynamic variable scope? Or did you find a way to hack dynamic scope into JS?
Thanks for the example. Just to clarify: it appears that you used syncify.revert(getCallbackText) to create a callback-based function from a synchronous one in order to log the final result.
So it looks like Syncify.js evaluates the parent getCallbackText() function each time one of its child callbackTest("Alice") or callbackTest("Bob") calls fires. Conceptually it replaces invocations of callbackTest() with their return values, so on the final pass there are no more child functions to evaluate and it can return the final value.
If this all sounds correct (and please correct me if I'm wrong), then the tradeoff is some re-run overhead in exchange for skipping promises and yield/generators.
I've only briefly looked through the source, so I'm not sure if they're doing this, but you can sort of intercept operators like `+` by implementing valueOf() on the object. When you perform a primitive operation on an object, the VM calls valueOf() on it to get a primitive value that it can handle. It does not tell you what operation caused valueOf to be called, nor does it let you influence the result (except by the primitive value you choose to return).
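The valueOf() behavior described above can be demonstrated in a few lines (my own sketch, not this library's code):

```javascript
// When an object participates in a primitive operation, the engine calls
// its valueOf() to get a primitive -- but never tells us which operator
// triggered the coercion.
const token = {
  valueOf() {
    return 21; // the primitive we hand back; the operator stays invisible
  }
};

console.log(token + token); // 42 -- both operands coerced via valueOf()
console.log(token * 2);     // 42 -- same coercion for other operators
```

This is why such a hook can make an object usable in arithmetic, but can't let a library customize what `+` itself does.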
I suspect that they're doing something else, and given the Syncify.parallel construct, I don't think they're blocking the whole thread.
It seems incredibly fragile; better to just use some kind of compile-to-JS language. IcedCoffeeScript, ClojureScript, GopherScript and others all make it possible to avoid callbacks in a cleaner way, with fewer restrictions and probably better performance. And Babel has experimental support for ES7 async functions, if you really want to write JavaScript.
Excuse my ignorance, but are patterns like the one mentioned in the example in wide use? Chaining together the results of multiple AJAX calls to form a single response? (response might not be the correct term - a single return value)
Wouldn't you just send a request to the server and have it handle things and spit out the correct response/return value? What's the need for multiple ajax calls?! I know we're obviously not using this to fetch first and last names - what's a realistic example for this usage pattern?
Very common. RESTful architectures always give rise to this type of pattern: for example, you fetch a user id, then you fetch all the blog posts by that user, then you fetch the comments on those posts.
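That user -> posts -> comments chain looks something like this with plain callbacks (a hypothetical sketch; the URLs and the stubbed get() are made up for illustration):

```javascript
// Stubbed callback-style get() so the nesting pattern is visible; a real
// implementation would call back asynchronously after a network request.
function get(url, cb) {
  const fake = {
    "/user?name=alice": { id: 7 },
    "/user/7/posts":    [{ id: 1 }],
    "/post/1/comments": ["first!"]
  };
  cb(fake[url]);
}

// Each request depends on the previous response, so the callbacks nest.
get("/user?name=alice", (user) => {
  get("/user/" + user.id + "/posts", (posts) => {
    get("/post/" + posts[0].id + "/comments", (comments) => {
      console.log(comments); // [ 'first!' ]
    });
  });
});
```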
Very common in poorly designed RESTful architectures. Hypermedia-style APIs take care of this. [0] The performance advantage of lightweight payloads to an SPA gets trashed by network latency otherwise.
I don't think the structure of the response would be the dictating factor for a "poorly designed RESTful architecture". This response format no longer makes things RESTful because it lumps together so many different models. This format is more suited for command/presenter pattern (orchestration).
The idea is to improve code composition by creating synchronicity. With Promises, you generally have to return the promise and then chain a .then() call to do something, which is contagious and creates tons of dependencies on promises.
What you describe is the "old" way to use promises.
Instead, you can yield promises inside generator functions, or better yet use async/await with a transpiler. Both methods give you nice sync-looking code.
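For instance, the README's getFullName could be written with async/await like this (a sketch assuming ajax() returns a Promise; the stub below fakes the two responses):

```javascript
// Stub standing in for a real Promise-returning ajax() helper.
function ajax(url) {
  const fake = {
    "/user/1/name":     "Ada",
    "/user/1/lastname": "Lovelace"
  };
  return Promise.resolve(fake[url]);
}

async function getFullName(id) {
  const first = await ajax("/user/" + id + "/name");
  const last  = await ajax("/user/" + id + "/lastname");
  return first + " " + last; // reads synchronously, runs asynchronously
}

getFullName(1).then(console.log); // "Ada Lovelace"
```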
I guess what you're trying to say is that once you adopt Promises a large portion of your code will have to be tailored to use it.
To which I respond: .. yes, so what? It's better than callbacks. And it's better than some "magic fairydust" that doesn't make it explicit that you're doing something async.
Promises are better than callbacks but far worse (in terms of the complexity they create) than synchronous code. This might be fairy dust, but it crushes promises for reducing developer cognitive load. Having said that, I agree with the person who commented that async/await is a better way to go (although sadly not really an option in CS yet).
CoffeeScript! :-) One of those neat compiles-to-JS languages!
Edit: And the "problem" with async/await and CoffeeScript is that the former is currently available in various JS transpilers but you can't use one of those and CoffeeScript at the same time.
> once you adopt Promises a large portion of your code will have to be tailored to use it.
This isn't a necessity, since you can create a function which both returns a Promise and accepts a callback. Most Promise libraries make this easy to support, which makes it easy to introduce Promises into projects that already use callbacks, and vice-versa.
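A common way to support both styles at once (a general pattern, not taken from this library) is to accept an optional Node-style callback and always return a Promise:

```javascript
// Dual-mode function: callers can use the returned Promise, pass a
// (err, value) callback, or both.
function getAnswer(cb) {
  const p = Promise.resolve(42); // stands in for real async work
  if (typeof cb === "function") {
    p.then((v) => cb(null, v), (err) => cb(err));
  }
  return p;
}

getAnswer().then((v) => console.log("promise:", v)); // promise: 42
getAnswer((err, v) => console.log("callback:", v));  // callback: 42
```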
My guess: the system keeps calling your function, and throws an exception whenever an asynchronous call needs to be made which was not yet handled. This exception is then caught inside the system to mark the asynchronous call as being done, and the next iteration starts.
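That guess can be sketched in a few lines. This is entirely my own reconstruction, not the library's source, and fakeAjax calls back synchronously so the example stays deterministic:

```javascript
// Speculative sketch of a throw-and-rerun mechanism.
const cache = new Map();            // results of async calls seen so far
class Pending {                     // sentinel thrown on a cache miss
  constructor(url) { this.url = url; }
}

function fakeAjax(url, cb) {        // stands in for the real async primitive
  cb("result:" + url);
}

function syncAjax(url) {            // the "synchronous-looking" call
  if (cache.has(url)) return cache.get(url);
  throw new Pending(url);           // not done yet: abort this run
}

function run(fn, done) {
  try {
    done(fn());                     // every result was cached: finished
  } catch (e) {
    if (!(e instanceof Pending)) throw e;
    fakeAjax(e.url, (result) => {
      cache.set(e.url, result);     // one more result available...
      run(fn, done);                // ...so re-run the function from the top
    });
  }
}

let output;
run(() => syncAjax("/a") + " " + syncAjax("/b"), (v) => { output = v; });
console.log(output); // "result:/a result:/b" after three passes
```

Note that this only works if the user function is idempotent, which matches what the author says about side effects.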
I get that it returns a wrapper, but the functions like map and toUpperCase would have to wait for the asynchronous callback somehow. How does this work?
See my response to virulent. I believe it re-runs the parent function multiple times until all of the child ajax functions have evaluated to their return values. This probably explains why the author states that the child functions must be idempotent (edit: and the code readonly in its entirety to prevent side effects).
This is quite an interesting hack, and I enjoy seeing novel solutions to the async problem (especially on the browser side, where ES6 is not universally available yet). But still, I would probably never use this in production, given that I can just transpile ES7 async/await with babel.
Reminded me of streamlinejs (https://github.com/Sage/streamlinejs) but I guess what makes your library better is that there you don't have the compiler overhead they need, right?