I learnt Pitman shorthand (and some Gregg), and taught it to my kids as a "secret code". They use it in class to jot notes the teacher cannot read :)
Both of these make the writing faster, but reading slower. I once spoke to the world champion shorthand writer - I've forgotten her name. She said that even she cannot read shorthand as fast as regular text.
Which made sense before computers, when a stenographer needed to write very quickly, and English takes a long time to write out in full.
But nowadays we need the opposite - a "shorthand" which, once you have learned it, can be consumed quickly.
I know from experience that it takes much less time to read Hebrew than to read the same text in English (even though my mother tongue is English), since the vowels are assumed and abbreviations are extremely common - the actual text is shorter and quicker to read.
I can scan a long article quickly, but I wish there was a way to convert it to a writing system that was quicker to take in.
Use other languages, perhaps? English is a relatively compact language in terms of visual space, but there are even more compact languages. Typical examples include East Asian languages (Chinese, Japanese and Korean) and Nordic languages (Danish, Finnish, Norwegian and Swedish).
I remember seeing a study about the "information density" of different languages and of all the languages covered, English was #2 in terms of information per syllable while Vietnamese was #1.
A shorthand system is free to represent words phonemically instead of orthographically, and most languages have fewer phonemes per word than letters (or strokes/radicals/jamo if you're looking at Asian characters), so it would make sense to just always do that. So maybe Vietnamese would be the most compact if you used a phonemic system, but I actually think it's more complicated than that.
There are a limited number of different types of strokes you can include in a shorthand system before they become too similar to each other, so you are capped in how much information per second can be written regardless of the language. Different languages have different numbers of phonemes (Rotokas has just 11, while Taa has over 100). If you have very few phonemes, you can group clusters together into single strokes, whereas if you have many phonemes then you may need multiple strokes for a single phoneme.
So what you'd really want is the language with the greatest information per phoneme divided by the total number of phonemes - or, put another way, the one that fits best into a .zip file :)
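The .zip intuition is easy to play with: a general-purpose compressor strips out redundancy, so compressed size is a rough proxy for information content. Here's a minimal sketch using Python's `zlib`; the parallel sample sentences are my own picks (not a real corpus), and for strings this short the compressor's fixed overhead dominates, so treat it purely as an illustration of the method.

```python
import zlib

# Rough translations of the same sentence, chosen for illustration only.
samples = {
    "English": "The quick brown fox jumps over the lazy dog.",
    "Hebrew": "השועל החום המהיר קופץ מעל הכלב העצלן.",
    "Japanese": "素早い茶色の狐がのろまな犬を飛び越える。",
}

for lang, text in samples.items():
    raw = text.encode("utf-8")          # compare at the byte level
    packed = zlib.compress(raw, level=9)
    print(f"{lang}: {len(raw)} raw bytes -> {len(packed)} compressed bytes")
```

A meaningful comparison would need a large parallel corpus (so the compressor's header overhead washes out) and ideally a phonemic transcription rather than the written form, since the written forms bundle in orthographic redundancy that a shorthand would discard anyway.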
Japanese isn't compact. It generally uses fewer characters to convey the same meaning (unless the sentence is full of loanwords from English), but the characters are usually written significantly larger than Latin characters, simply because they're so much more complex, so the amount of actual space on the page comes out roughly the same.
Han characters are of course much larger than Latin characters, but the resulting space usage is more nuanced AFAIK. My data about the relative compactness of languages mostly comes from localization research [1], and I believe CJK is generally more compact even when such differences are accounted for (though the actual expansion ratio can vary greatly).
The only thing that was correct in your link regarding Japanese is "varies".
As for Han characters, Chinese uses those. Japanese uses a mixture of kanji (ancient Chinese characters) and native phonetic characters. So the space needed really depends on the content. As I said before, if it's full of English loanwords (or worse, English technical terminology that's been adopted into Japanese), it'll likely be larger, since all that is expressed in katakana (phonetic characters). If it's something that can be written in mostly kanji, then it'll be quite compact.
Korean does not use Han characters at all. It uses Hangul, an artificial phonetic writing system invented about 100 years ago.
> The only thing that was correct in your link is "varies".
Because I didn't bother to link all the guidelines I've found. ;-) Other sources, for example, say that Japanese has a relative contraction of 10% to 55% [1], which is highly variable but still supports my claim. There is also an infographic about specific scenarios [2] with a similar conclusion for all CJK languages. It's possible that some guidelines only report character counts and thus aren't adjusted for visual space, but I see no reason that most guidelines, coming primarily from professional translation companies, would all get that same point wrong.
> So the space needed really depends on the content. As I said before, if it's full of English loanwords (or worse, English technical terminology that's been adopted into Japanese), it'll likely be larger, since all that is expressed in katakana (phonetic characters). If it's something that can be written in mostly kanji, then it'll be quite compact.
You are correct (I do speak Japanese), but again, the general consensus seems to be that average Japanese texts have enough kanji to make up for the loanwords. I'm well aware that the balance has changed greatly in recent decades, though, so I'd be glad to be corrected if there is a well-known analysis.
> Korean does not use Han characters at all. It uses Hangul, an artificial phonetic writing system invented about 100 years ago.
Hangul was invented in the 15th century. Only the specific orthographic rules and the current name "hangul" were established ~100 years ago. It took several more decades (until circa 1990) for Han characters to mostly disappear from written Korean, and I observed the final transition as a native Korean speaker.
Sorry, no, Nordic languages are not compact. They may stuff more words together (e.g. generalforsamling instead of general/annual meeting), but just putting two words together doesn't make it more compact or easier to read.
As I noted in the other comment, there do exist multiple guidelines suggesting a relative text compaction for many Nordic languages when translated from English. The guidelines themselves may be biased toward specific sets of texts, of course. If that's the case, I'd like to hear counterexamples.
> Nordic languages (Danish, Finnish, Norwegian and Swedish)
Do you have a citation for that? I know some Swedish and often need to read documents in it and I don't get the impression that it's any more compact than English.
Try out Forkner shorthand. You can learn it gradually: the first steps are omitting vowels and simplifying some letters, and you then progress to various abbreviations. Ultimately it's based on English cursive, so there's nothing too exotic to learn in terms of orthography - although I guess if you're younger, there's a chance you never learned a cursive style! I'm a novice but feel it doesn't hurt readability that much, and it's quick to learn.
> Which made sense before computers, when a stenographer needed to write very quickly, and English takes a long time to write out in full.
Stenography is where it was at! My mom's second husband worked at the local parliament and had to take notes, in real time, on what politicians were saying, and he used stenography. He'd then hand his stenographed notes to a secretary who'd convert them back to English, which he'd then proofread.
It was French, btw, which is even longer than English (about 20% longer).