Hacker News

> change of data format to internal software representations

There is so much of this. Is there any hope for a world where we can ship working memory without costly XDR? Does anyone have data on the rate of change of time or energy spent on XDR translation?



I have thought a lot about this over the years, and I think one of the issues is that it is hard to specify fixed data formats in most programming languages (in a declarative way), leading to lots of code that decodes byte-by-byte or word-by-word. Since writing all that sucks, many programmers have turned to higher-level data formats such as JSON or protobufs that are decoded by libraries.
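To make the contrast concrete, here is a small sketch in Python of hand-rolled byte-by-byte decoding versus a declarative layout. The record format (u32 id, u32 timestamp, f32 value, little-endian) is made up for illustration, not from any real protocol:

```python
import struct

# Hypothetical 12-byte wire record: u32 id, u32 timestamp, f32 value,
# little-endian. Layout and field names are illustrative only.

# Hand-rolled, byte-by-byte decoding -- tedious and easy to get wrong:
def decode_by_hand(buf: bytes) -> tuple:
    rec_id = buf[0] | buf[1] << 8 | buf[2] << 16 | buf[3] << 24
    ts     = buf[4] | buf[5] << 8 | buf[6] << 16 | buf[7] << 24
    value  = struct.unpack_from("<f", buf, 8)[0]
    return rec_id, ts, value

# Declarative: the fixed layout is specified once as a format string.
RECORD = struct.Struct("<IIf")  # u32, u32, f32, little-endian

def decode_declarative(buf: bytes) -> tuple:
    return RECORD.unpack_from(buf)

buf = RECORD.pack(7, 1700000000, 2.5)
assert decode_by_hand(buf) == decode_declarative(buf)
```

The declarative version is what a language-level feature could generate and check at compile time, instead of leaving each field offset to the programmer.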

I have been working on a new feature in Virgil to make writing explicit byte-by-byte layouts easier and directly supported by the compiler. It's still a work in progress though.


If the heap were treated like a typed database, with transaction semantics, many of our problems would disappear. Layout should not be an implementation detail. I know Wasm won't solve this explicitly, but there's at least a chance while everything is being redefined.


Wouldn’t that move the data transformation costs from I/O time to all the time?


Not if the encoding can simply be consumed in place, needing only a validation pass if it's coming from an untrusted source. Cap'n Proto is pretty close to that for one example.
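A minimal sketch of the "validate once, consume in place" idea, in Python with a made-up length-prefixed array format (not Cap'n Proto's actual encoding): one bounds-checking pass up front, then fields are read directly out of the wire bytes with no intermediate decoded copy.

```python
import struct

# Illustrative wire format: u32 element count, then that many u32 values,
# little-endian. One validation pass checks bounds; afterwards elements
# are read straight from the buffer -- no deserialization step.

HEADER = struct.Struct("<I")  # u32 element count

def validate(buf: memoryview) -> int:
    if len(buf) < HEADER.size:
        raise ValueError("truncated header")
    (count,) = HEADER.unpack_from(buf)
    if len(buf) < HEADER.size + 4 * count:
        raise ValueError("truncated body")
    return count

def element(buf: memoryview, i: int) -> int:
    # Random access straight into the wire bytes.
    return struct.unpack_from("<I", buf, HEADER.size + 4 * i)[0]

wire = memoryview(struct.pack("<IIII", 3, 10, 20, 30))
n = validate(wire)
assert [element(wire, i) for i in range(n)] == [10, 20, 30]
```

The validation cost is paid once per untrusted message; every access after that is just pointer arithmetic, which is roughly what in-place formats like Cap'n Proto are designed to allow.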


> a world where we can ship working memory

Yes, you can have zero serialization latency: https://capnproto.org/



