I remember, as a schoolboy, hearing about Markov chains and playing with them a bit. Instead of building big n-gram tables, I found a simpler way: start with one n-gram, then scan the source text until you find its next occurrence, and take the letter that follows it. Output that letter, append it to your n-gram while dropping the n-gram's first letter, and repeat until you have a sufficient amount of nonsense text.
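The scan-as-you-go approach above can be sketched in a few lines of Python. This is a minimal illustration, not the original program: the function name `generate`, the parameters `n` and `length`, and the use of a random search offset (so repeated runs diverge) are all my choices, not something from the text.

```python
import random

def generate(text, n=4, length=200):
    """Markov-style nonsense text without n-gram tables: repeatedly
    search the source for the current n-gram and emit the character
    that follows the occurrence found."""
    # Seed the chain with an n-gram taken from a random spot in the source
    start = random.randrange(len(text) - n)
    gram = text[start:start + n]
    out = gram
    while len(out) < length:
        # Look for the n-gram from a random offset for variety;
        # fall back to searching from the beginning if not found there
        i = text.find(gram, random.randrange(len(text)))
        if i == -1:
            i = text.find(gram)
        if i == -1 or i + n >= len(text):
            break  # n-gram absent, or its only match ends the text
        ch = text[i + n]          # the letter right after the occurrence
        out += ch
        gram = gram[1:] + ch      # drop first letter, append the new one
    return out
```

Because every emitted character follows a real occurrence of the current n-gram, every (n+1)-character window of the output is a substring of the source, which is exactly the Markov property the table-based version would give you.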