OK, yes, I get it, every major programming language can implement an emulator for DCPU-16 in a few dozen lines. Can someone hurry up and post the [consults the Trendy Language Calendar] Node implementation so we can call it a day on the DCPU-16 implementations?
(Incidentally, I'm not saying DCPU-16 is uninteresting, and if somebody's got something more interesting than a straight-up implementation I'm still all ears. But... Church-Turing, you know?)
I would be a lot more interested in high-level languages for DCPU-16.
Also, most DCPU-16 implementations I have seen so far are bytecode interpreters. I would love to see an actual JIT-translating emulator that generates x86 or, even better, LLVM code. The performance difference should be considerable.
I was going to write one using DynASM (it would be very short, simple, and fast), but it was unclear to me whether DCPU-16 had any substantial programs written for it where you could show off a speed difference.
I have been looking for an excuse to write an article about how to use DynASM though.
JIT compiling DCPU-16 bytecode using LLVM is something I plan to do later if I get time. I imagine by then someone else will already have done it, but if not, then it's something I still plan to do.
Come on, isn't it nice to see a Rosetta Stone for something that isn't math-oriented?
I think one of Notch's goals with DCPU-16 was bringing back the feeling of programming for computers with very limited specs and little complexity. I think this is proof that his plan is working.
Please forgive my ignorance, but why are these DCPU-16 implementations in <put your favorite language> [in less than X lines] popping up in recent days?
Yes, I think so, because branches skip full instructions, and an instruction may be one to three words long, but exactly how long is unknown until the skipped instruction is decoded.
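If it helps, here is a minimal C sketch of that length computation, assuming the DCPU-16 v1.1 encoding (value fields 0x10-0x17, 0x1e, and 0x1f each pull in an extra word); the names here are made up for illustration, not taken from any particular emulator:

    #include <stdint.h>

    /* Does a 6-bit value field consume an extra word?
       (Per the DCPU-16 v1.1 spec: 0x10-0x17, 0x1e and 0x1f do.) */
    static int value_takes_word(uint16_t v) {
        return (v >= 0x10 && v <= 0x17) || v == 0x1e || v == 0x1f;
    }

    /* Length in words (1 to 3) of the instruction at mem[pc]. */
    static int instruction_length(const uint16_t *mem, uint16_t pc) {
        uint16_t word = mem[pc];
        uint16_t o = word & 0x000f;          /* basic opcode; 0 means non-basic */
        uint16_t a = (word >> 4)  & 0x003f;  /* first value field */
        uint16_t b = (word >> 10) & 0x003f;  /* second value field */
        if (o == 0)                          /* non-basic: only one value field */
            return 1 + value_takes_word(b);
        return 1 + value_takes_word(a) + value_takes_word(b);
    }

    /* So a failed IF* test becomes: pc += instruction_length(mem, pc); */

The point is that skipping isn't a fixed PC bump; the emulator has to do a small decode pass on the skipped instruction first.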
This is true - I actually misread your comment and thought you said instruction pointer instead of stack pointer... It seems to be a bug in the implementation (from my reading of the spec, this should not happen).
>In 1988, a brand new deep sleep cell was released, compatible with all popular 16 bit computers.
So I've assumed from the start that the D is for Deep, because the whole story is about this deep sleep stuff. But yeah, that's really just speculation. I don't think there's any official answer to it.