Such old urban places would just be car-free in the Netherlands (sometimes with limited access for delivery and emergency vehicles), a trend fortunately becoming popular in other European cities now.
The “urban” in the title is a bit misleading; this intersection is definitely more suburban, or on the boundary of an urban center. (Or rather, the author has a different definition of urban - in my definition, cities like den Bosch are really just a small medieval urban core surrounded by continuous medium-density suburban neighborhoods.)
In my experience, cars are discouraged from city centres, but not banned. You can drive your car all around Amsterdam, although you’ll have many one-way streets and parking is going to be very expensive for non-residents… and it’s hard (but not impossible) to find street-level parking. Amsterdam has a number of car parks on the outskirts that are cheap if you can show that you used public transport afterwards.
The result is that people use their car (if they have one, which is still quite common, especially for families) to get out of the city or for big errands, but use a bike or public transport for day-to-day trips.
Actual car free zones exist in cities across Europe but tend to be pretty small and constrained to the hyper centre, like the church square and the major shopping streets. Not that I’m opposed to them being bigger but that seems rare at this point.
I would love to see the diff between the hand-rolled recursive-descent parser and the ANTLR syntax!
I certainly feel the amount of boilerplate in my hand-rolled recursive-descent parser is manageable. Of course it's not as succinct as an EBNF grammar:
- For example, you have to write an actual loop (with "for" and looping conditions) instead of just * for repetition
- The Go formatter demands a newline in most control flows
- Go is also not the most succinct language in general
So you do end up with many more lines of code. But at the end of the day, the structure of each parsing function is remarkably similar to a production rule, and for simpler ones I can mentally map between them pretty easily, with the added benefit of being able to insert code anywhere if I need something beyond old-school context-free parsing.
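To make the comparison concrete, here is a toy sketch (not Elvish's actual code; the grammar and names are made up for illustration) of how one EBNF production maps onto one hand-written Go parsing function, with the explicit for loop standing in for EBNF's repetition:

```go
package main

import "fmt"

// Toy sketch: one EBNF production, one parsing function.
//
//	list = item { "," item }
//	item = any single character other than ","

type parser struct {
	src string
	pos int
}

func (p *parser) eof() bool  { return p.pos >= len(p.src) }
func (p *parser) peek() byte { return p.src[p.pos] }
func (p *parser) next() byte { b := p.src[p.pos]; p.pos++; return b }

// parseItem corresponds to the "item" rule.
func (p *parser) parseItem() string { return string(p.next()) }

// parseList corresponds to `list = item { "," item }`; the for loop is the
// hand-written counterpart of the { ... } repetition.
func (p *parser) parseList() []string {
	items := []string{p.parseItem()}
	for !p.eof() && p.peek() == ',' {
		p.next() // consume ","
		items = append(items, p.parseItem())
	}
	return items
}

func main() {
	p := &parser{src: "a,b,c"}
	fmt.Println(p.parseList()) // [a b c]
}
```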
Right, I may have forgotten to mention that lexerless parsers are somewhat unusual.
I didn't have much time in the talk to go into the reason, so here it is:
- You'll need a more complex lexer to parse a shell-like syntax. For example, one common thing you do with a lexer is get rid of whitespace, but shell syntax is whitespace-sensitive: "a$x" and "a $x" (double quotes not part of the code) are different things; the first is a single word containing a string concatenation, the second is two separate words.
- If your parser backtracks a lot, lexing can improve performance: you're not going back characters, only tokens (and there are fewer tokens than characters). Elvish's parser doesn't backtrack. (It does use lookahead fairly liberally.)
Having a lexerless parser does mean that you have to deal with whitespace everywhere, though, and it can get a bit annoying. But personally I like the conceptual simplicity and not having to deal with silly tokens like LBRACE, LPAREN, PIPE.
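To illustrate both points with a toy sketch (not Elvish's actual parser; everything here is made up): in a lexerless word parser the whitespace handling lives inside the parsing functions themselves, and that inline handling is exactly what makes "a$x" come out as one compound word and "a $x" as two:

```go
package main

import "fmt"

// Toy lexerless parser: a "word" is a run of directly adjacent segments
// (bare text and $variables); whitespace separates words. So "a$x" parses
// as one word with two segments, while "a $x" parses as two words.

type parser struct {
	src string
	pos int
}

func (p *parser) eof() bool  { return p.pos >= len(p.src) }
func (p *parser) peek() byte { return p.src[p.pos] }

// skipSpaces: since there is no lexer throwing whitespace away up front,
// calls like this show up all over a lexerless parser.
func (p *parser) skipSpaces() {
	for !p.eof() && p.peek() == ' ' {
		p.pos++
	}
}

// parseSegment reads either a $variable or a run of bare text.
func (p *parser) parseSegment() string {
	start := p.pos
	if p.peek() == '$' {
		p.pos++
	}
	for !p.eof() && p.peek() != ' ' && p.peek() != '$' {
		p.pos++
	}
	return p.src[start:p.pos]
}

// parseWord keeps consuming segments as long as they are directly adjacent;
// whitespace (not a token, just a character check) ends the word.
func (p *parser) parseWord() []string {
	var segs []string
	for !p.eof() && p.peek() != ' ' {
		segs = append(segs, p.parseSegment())
	}
	return segs
}

func main() {
	for _, src := range []string{"a$x", "a $x"} {
		p := &parser{src: src}
		var words [][]string
		for p.skipSpaces(); !p.eof(); p.skipSpaces() {
			words = append(words, p.parseWord())
		}
		fmt.Printf("%q -> %q\n", src, words)
		// "a$x"  -> [["a" "$x"]]   one word, two segments
		// "a $x" -> [["a"] ["$x"]] two words
	}
}
```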
I have not used parser generators enough to comment about the benefits of using them compared to writing a parser by hand. The handwritten one works well so far :)
That example you gave could certainly be done in Lex/Flex, and I assume in other lexers/tokenizers as well. For instance, you would probably use states and have "$x" in the initial state evaluate to a different token type than "$x" in the string state.
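Roughly that idea, sketched in plain Go rather than actual Flex rules (the state and token names here are just made up for illustration):

```go
package main

import "fmt"

// Sketch of the Lex/Flex "start condition" idea in plain Go: the same "$x"
// text yields a different token type depending on whether the lexer is
// currently inside a double-quoted string.

type token struct {
	typ  string // "VAR", "VAR_IN_STRING", "TEXT" or "QUOTE"
	text string
}

func lex(src string) []token {
	var toks []token
	inString := false // the lexer state
	for i := 0; i < len(src); {
		switch {
		case src[i] == '"':
			toks = append(toks, token{"QUOTE", `"`})
			inString = !inString
			i++
		case src[i] == '$':
			j := i + 1
			for j < len(src) && src[j] != ' ' && src[j] != '"' {
				j++
			}
			typ := "VAR"
			if inString {
				typ = "VAR_IN_STRING"
			}
			toks = append(toks, token{typ, src[i:j]})
			i = j
		default:
			j := i
			for j < len(src) && src[j] != '$' && src[j] != '"' {
				j++
			}
			toks = append(toks, token{"TEXT", src[i:j]})
			i = j
		}
	}
	return toks
}

func main() {
	for _, t := range lex(`a$x "a$x"`) {
		fmt.Printf("%-13s %q\n", t.typ, t.text)
	}
}
```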
But I do get your meaning; I've written a lot of tokenizers by hand as well, and sometimes the hand-written code can be easier to follow. Config files for grammars can get convoluted fast.
Again, I didn't mean it as criticism. But your talk title does start with "How to write a programming language and shell in Go", so given the title I think lexers/tokenizers are worth noting.
Yeah, ultimately there's an element of personal taste at play.
The authoritative tone of "how to write ..." is meant in jest, but obviously by doing that I risk being misunderstood. A more accurate title would be "how I wrote ...", but that's slightly boring, and I was trying hard to get my talk proposal accepted, you see :)
As the sibling comment mentioned, you can find documentation on Elvish itself on the website https://elv.sh. There are tutorials and (not 100% but fairly complete) reference documents.
Do you have a link to a copy of the video with captions? YouTube autogen doesn't cut it unfortunately. Or perhaps a written-form version (slide deck + transcript)?
The remaining 8% mostly falls into the following categories:
- Code that uses OS functionality that is cumbersome to mock in tests
- Code paths that are triggered relatively rarely and I was simply too lazy to add tests for them
Nothing is impossible to cover, but for whatever reason it was too much work for me when I wrote the code.
However, it's worth mentioning that I only settled on the transcript test pattern fairly recently, and if I were to rewrite or refactor some of the untested code today I would add tests for them, because the cost of adding tests has been lowered considerably. So Elvish's test coverage is still increasing slowly as the cost of testing decreases.
There were a lot of aspects of this talk that I thought were really great. The willingness to try something unscripted, diving into the code repo live (e.g. to show where fuzzing is used), and the discussions of the reasoning behind the design choices. Great job @xiaq. This really makes me want to try elvish out, and I usually am quite skeptical of new shells.
haha I can't present nearly as well as yourself but maybe one day.
It's not easy to present though. I know on HN we see a lot of very clever people give some well executed presentations and it's sometimes easy to forget how much preparation and courage it takes to perform like that. And it's great to see how engaged people were with the content too.
Sorry, this is less of a question and more just comment of appreciation.
Did you set your login shell to Elvish? Vim unfortunately relies on your shell being a POSIX shell, but you can fix that with "set shell=/bin/sh" in your rc file.
>The input file can be of any type, but the initial portion of the file intended to be parsed according to the shell grammar [...] shall not contain the NUL character.
You'll notice there are no NUL characters on the first line, and that the subsequent NULs are escaped by a single-quoted string, which is legal. The rules used to be more restrictive, but they relaxed the requirements specifically so I could work on APE. Jilles Tjoelker is one of the heroes who made that possible.
[T]he initial portion doesn't mean the first line; it means the script part of a file consisting of a shell script and a binary payload, separated by `exit', `exec whatever', etc. A good example is the Oracle Developer Studio installation script.
You can write to the Austin Group mailing list and ask for clarification if you want.
That seems to be a bad translation; the original text means something like “sharpen something fully and it won’t last so long” (presumably because it becomes more brittle). The text (like the rest of the Tao Te Ching) is pretty vague and doesn’t actually refer to knives, so it could also be read metaphorically.
I was being silly and pedantic, but that's an interesting distinction!
I imagine Laozi probably knew how to keep knives sharp, so I defer to his wisdom as far as 4th-century Chinese knives go anyway. Perhaps they used a different technique there and then than the kind of knife sharpening I'm familiar with.
> In case there are no jpg files in the working directory, Bash will put the pattern itself (*.jpg) into the $x variable. You need to explicitly check that the file in the variable actually exists before working on it.