I wouldn't overlook the distinctions between them. This kind of policy against Jews was a "quota", specifically a "numerus clausus": participation by a particular minority is capped, and once it reaches a set number, further admission is prohibited.
Let's consider what the hard parts of verifying that remote code is safe actually are.
Useful verifiers exist. Coq is a state-of-the-art verifier for a large class of propositions. It hasn't been easy to create, and its design is not frozen.
We need a trustworthy specification of what "safe" remote code means. This seems difficult if we still want to permit all the capabilities we expect remote code to have.
We need to demonstrate, for a nontrivial algorithm, that we can construct code that complies with the specification, together with a proof of compliance. This seems costly but feasible.
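To make "code plus a proof of compliance" concrete, here is a toy sketch in Lean 4 (not Coq, which is what I named above; Lean is just what I'll use to show the shape). The function and its "specification" are deliberately trivial; a real safety spec for remote code would be enormously richer.

```lean
-- Toy illustration only: `double` is the "program", `double_spec` is the
-- specification, and the proof is checked by the machine rather than trusted.
def double (n : Nat) : Nat := n + n

theorem double_spec (n : Nat) : double n = 2 * n := by
  unfold double
  omega   -- linear arithmetic over Nat closes the goal
```

The point is only the shape: the proof object is checked mechanically, so trusting the claim reduces to trusting the checker and the spec.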
Which wiki engine do you have in mind? There certainly is revision control in MediaWiki, Confluence, and others. Granted, I wouldn't use those in place of a source code revision control system such as git; they generally have no "annotate"/"blame" or "pickaxe" features. I would agree with writing something code-friendly in the first place, examples of which would include DocBook and reStructuredText.
I think there's something relevant going on with search optimization, or if you prefer, marketing and advertising. A Wikipedia user who is somewhat active just now is using mentions of the website to boost references to a particular consulting firm. This sort of activity is not much beloved by many habitués of Wikipedia, as you can see in the essay http://en.wikipedia.org/wiki/WP:SPA
Here I'll leave it to others whether this is smart and laudable or sneaky and regrettable. Perhaps it's normal and unremarkable, but I find it interesting. It is only a guess on my part that this is intentional and coordinated.
You can use 7zip to archive, compress, and AES-256 encrypt.
You can also use GnuPG or openssl to encrypt a file or a disk image.
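To make that concrete, here are the sorts of invocations I mean, wrapped in a trivial C driver only so the example compiles as one file; the archive and image names are placeholders, and you'd normally just run these commands from a shell.

```c
/* Sketch only: real tools (7z, gpg, openssl), placeholder file names.
   Each call shells out to the corresponding command-line tool. */
#include <stdlib.h>

int main(void) {
    /* 7-Zip: archive + compress + AES-256 encrypt; -mhe=on also encrypts the file list */
    system("7z a -p -mhe=on backup.7z ./documents");

    /* GnuPG: symmetric AES-256 encryption of a single file or disk image */
    system("gpg --symmetric --cipher-algo AES256 disk.img");

    /* OpenSSL: the same idea via the enc subcommand */
    system("openssl enc -aes-256-cbc -pbkdf2 -salt -in disk.img -out disk.img.enc");

    return 0;
}
```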
Recently I took a look at actual filesystems for cross-platform use. I didn't find a great one, but depending on which OS you want to burden with an extra driver, pretty good ones include exFAT, NTFS, and UDF (specifically version 2.01). Like you, I am curious whether better answers are available.
Again, those don't work for filesystem purposes. One of the nice things about TC is that it's not only cross-platform, but that it functions as a mountable directory on all of those platforms.
I agree. A counterargument is that you're inherently taking advantage of the processor(s) and the operating system, not to mention, ha ha, every single component and factor that you did not personally create, whether or not it is computational in nature...
Another route, my present hobby and something that I know many have done before: Take the Lisp program on page 10 of the LISP 1.5 Programmer's Manual and translate it into C. Write a simple tokenizer/parser (the read function) in C, and translate that into Lisp. Once the C program can run the equivalent Lisp program, we're on our way.
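For a sense of what the C side looks like at the start, here is a minimal sketch of the cell representation with atom/cons constructors and a dotted-pair printer. All the names, the fixed pool size, and the layout are my own placeholder choices, not anything from the manual.

```c
/* Minimal sketch: a fixed pool of cells, atoms carrying a print name,
   and cons. No GC yet -- the pool just fills up, as noted below. */
#include <stdio.h>
#include <stdlib.h>

typedef struct Cell Cell;
struct Cell {
    int is_atom;        /* 1 = atom (name is used), 0 = cons (car/cdr are used) */
    const char *name;   /* atom print name; caller's string must outlive the cell */
    Cell *car, *cdr;
};

#define POOL_SIZE 100000
static Cell pool[POOL_SIZE];
static int next_free = 0;   /* bump allocator */

static Cell *alloc_cell(void) {
    if (next_free >= POOL_SIZE) {
        fprintf(stderr, "cell pool exhausted\n");
        exit(1);
    }
    return &pool[next_free++];
}

Cell *atom(const char *name) {
    Cell *c = alloc_cell();
    c->is_atom = 1; c->name = name; c->car = c->cdr = NULL;
    return c;
}

Cell *cons(Cell *a, Cell *d) {
    Cell *c = alloc_cell();
    c->is_atom = 0; c->name = NULL; c->car = a; c->cdr = d;
    return c;
}

/* Print an S-expression in fully dotted form; NULL stands for NIL here. */
void print_sexp(const Cell *c) {
    if (c == NULL)  { printf("NIL"); return; }
    if (c->is_atom) { printf("%s", c->name); return; }
    printf("(");
    print_sexp(c->car);
    printf(" . ");
    print_sexp(c->cdr);
    printf(")");
}

int main(void) {
    Cell *l = cons(atom("A"), cons(atom("B"), NULL));   /* the list (A B) */
    print_sexp(l);                                       /* prints (A . (B . NIL)) */
    printf("\n");
    return 0;
}
```

The eval/apply/evcon trio would sit on top of this, and read would build these cells from tokens.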
At the outset I have no garbage collection. I just expect the memory space to fill up pretty soon. I could have `cons` compute a hash for each cell to eliminate common subexpressions, but for a while I suppose I won't bother at all. The C program has a loop to handle evcon (did I write cons? oops), eval, and apply as a trio of mutually tail-recursive operations. That's an obvious place to put a stop-the-world mark-and-sweep operation or some such thing.
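And here, self-contained rather than built on the previous sketch, is roughly what I have in mind for the stop-the-world mark-and-sweep pass. The root handling is faked with a single cell; in the real interpreter the roots would be whatever eval, apply, and evcon currently hold.

```c
/* Sketch of a stop-the-world mark-and-sweep over a fixed cell pool.
   Cell layout and root set are placeholders for illustration only. */
#include <stdio.h>
#include <stddef.h>

typedef struct Cell Cell;
struct Cell {
    int marked;
    Cell *car, *cdr;     /* NULL car/cdr stands in for atoms/NIL in this sketch */
    Cell *next_free;     /* free-list link used by sweep */
};

#define POOL_SIZE 1024
static Cell pool[POOL_SIZE];
static Cell *free_list = NULL;

static void mark(Cell *c) {
    /* depth-first mark; plain recursion is fine for a sketch */
    if (c == NULL || c->marked) return;
    c->marked = 1;
    mark(c->car);
    mark(c->cdr);
}

static size_t sweep(void) {
    size_t reclaimed = 0;
    free_list = NULL;
    for (size_t i = 0; i < POOL_SIZE; i++) {
        if (pool[i].marked) {
            pool[i].marked = 0;             /* clear for the next collection */
        } else {
            pool[i].next_free = free_list;  /* return the cell to the free list */
            free_list = &pool[i];
            reclaimed++;
        }
    }
    return reclaimed;
}

int main(void) {
    /* pretend cells 0 -> 1 -> 2 are live and everything else is garbage */
    pool[0].car = &pool[1];
    pool[1].cdr = &pool[2];
    mark(&pool[0]);
    printf("reclaimed %zu of %d cells\n", sweep(), POOL_SIZE);
    return 0;
}
```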
To get the C tools out of the loop eventually, hand-disassemble the binary program (the C compiler's output) just to see what's there. Then write a Lisp program (a compiler) that translates the LISP 1.5 Programmer's Manual program into something pretty similar.
A counter-counterargument is that implementing a language that is semantically distant from assembly language, on top of assembly language, counts as "from scratch", whereas skinning an existing language with a thin interpreter (one that doesn't even scan tokens, collect garbage, or perform any I/O) doesn't count as "from scratch".