The `find` command is not a very good example of the Unix philosophy. http://doc.cat-v.org/unix/find-history provides an interesting snippet of its history, indicating that `find` and some other friends weren't designed by the AT&T research group behind Unix.
> ... take a moment to consider how utterly anachronistic both of the above solutions come across to non-believers in 2012
While this, as the author admits, isn't what he wants to talk about, I want to talk about it for a moment. Apart from `find`, which is indeed archaic (few people seem to use its many built-in features, piping to simpler programs instead--e.g. preferring to pipe to xargs rather than using -exec), I'm struggling to find what's so old-fashioned about the grep example and what a modern, 2012 design should look like. Is it the '-v'? You can type '--invert-match' instead. Is the problem old-fashioned? I don't think so; I still occasionally pipe the results of find through grep -v and then on to something else. Is piping old-fashioned? Again, what's the 2012 version of these things?
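For reference, here is the pipeline under discussion in both its short and long spellings, run against a throwaway directory (the directory and filenames here are invented for the demo):

```shell
# Scratch directory with some made-up files.
rm -rf /tmp/grep_demo && mkdir /tmp/grep_demo && cd /tmp/grep_demo
touch a.txt b.txt ignore_me.txt

# These two are equivalent; -v is just the short spelling:
ls *.txt | grep -v ignore_me.txt
ls *.txt | grep --invert-match ignore_me.txt
```

Both print a.txt and b.txt; whether the long spelling reads as any more "modern" is, I suppose, the question.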
While I don't really like comparing programming languages to spoken languages (much like comparing OS kernels to popcorn kernels), I've liked this quote ever since I first read it:
"Linux supports the notion of a command line or a shell for the same reason
that only children read books with only pictures in them. Language, be it
English or something else, is the only tool flexible enough to accomplish
a sufficiently broad range of tasks." -- Bill Garrett
As for the "meat" of this post, well, we'll see how Light Table turns out. I'm not particularly excited about it but plenty of HNers are.
I don't like the way `find` is being criticized, both in the article and in your comment. I won't go into pointless bickering about why it's a wonderful tool, or "my stuff is better than yours"; let's keep using the tools we're each comfortable with. But do I really need to point out that, once you get past the initial learning curve, `find` is probably one of the most powerful tools, if not the most powerful tool, in the Unix toolbox? If the Unix shell's power comes from piping commands for streams to flow through, I find that in the vast majority of cases `find` is the best way to initiate that stream.
And the reason is that it's reliable (unlike `ls`) and isn't wasteful (unlike the most-misused-tool-of-all `cat`). I don't understand why similarly newbie-unfriendly tools (like `sed` or `awk` for instance) never get as much heat as `find` does from people unfamiliar with it. Is it because the name implies that it should be a simple tool?
As for what's wrong with `ls ... | grep -v ...`: it can break in many ways, for instance if a filename contains a newline character. If you think that never happens, you're making the same mistake that I, and a thousand newcomers before me, made once and never repeated.
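A sketch of that failure mode, in a scratch directory (the filenames are invented): a newline embedded in a name becomes two lines in the pipe, while find's -print0 keeps each name intact.

```shell
# Scratch directory with a made-up pathological filename.
rm -rf /tmp/newline_demo && mkdir /tmp/newline_demo && cd /tmp/newline_demo
f="$(printf 'evil\nname.txt')"   # a name with an embedded newline
touch normal.txt "$f"

# Piped ls emits the embedded newline literally, so line-oriented tools
# see three "files" where only two exist:
ls | wc -l

# find -print0 emits NUL-terminated records instead; counting the NULs
# gives the true file count:
find . -maxdepth 1 -type f -print0 | tr -cd '\0' | wc -c
```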
To quote myself, "...I still occasionally pipe the results of find through grep -v and then on to something else..." I agree `find` is greatly useful for the purpose of finding files (I don't know how Windows sysadmins live without it), and learning its basics is important in one's mastery of the shell, but it's clearly "not like the others". It has useful options to control its behavior, like outputting with null separation (something I wish `ls` had), following symbolic links, searching up to a maxdepth limit, and limiting the type to file or directory. You can make similar justifications for the utility of other `find` features, but nearly all of them, including the ones I mentioned, have an irregular syntax. There are also tools like `xargs` that obsolete some of `find`'s features, are simple to use and easy to understand, and are useful in more contexts than `find`, even if using them incurs some performance hits. (And `locate` obsoletes `find` almost entirely on performance grounds when you need to search big swathes of your system.)
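The find-to-xargs handoff mentioned above can be sketched like this (scratch directory, invented names), with NUL separation so arbitrary filenames survive the pipe:

```shell
rm -rf /tmp/find_demo && mkdir -p /tmp/find_demo/sub && cd /tmp/find_demo
touch top.txt sub/nested.txt

# -maxdepth 1 limits the search depth, -type f keeps only regular files,
# -print0 emits NUL-separated names; xargs -0 reassembles them safely:
find . -maxdepth 1 -type f -print0 | xargs -0 ls
```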
I think Ritchie sums up my feelings for `find` with "...we were somewhat put off by the syntax of their things. However, they were clearly quite useful..." `find` is useful, it's just not Unix-y. Does it still follow some of the Unix philosophy in spirit? Maybe. I think there's also a chance that the Unix philosophy in its purest form is mostly dead in practice, and has been for a while. Just look at the evolution of echo.c: https://gist.github.com/1091803
I'll readily agree that `ls .. | grep -v ..` can break, but I don't find anything archaic or non-Unix-y about it like I do with the `find` version. (Also every time someone uses `cat .. | grep ..`, a kitten dies. :( )
Finally, it's not the newbie-unfriendliness of `find` that bothers me; it's the irregularity and the kitchen sink of features that aren't expressive and don't generalize well, as the other repliers elaborate. In my experience `sed` and `awk` are more regular, more generally useful, and have fewer "intuition traps" than `find`. (As a neat aside to their surface-level expressiveness, both sed and awk are Turing-complete. I think it's interesting that my man page for sed(1) is 267 lines, my man page for gawk(1) is a hefty 1713 lines, and my man page for find(1) is also a hefty 1446 lines.)
I think awk gets a pass because it's a programming language with a handy syntax for running small scripts. Find is in an awkward position of being not quite that general but still very complicated.
The solution to filenames with newlines in them is one garbage-safe script involving 'rm' and a stern email to whoever created the file. Containing complexity instead of letting it leak into every single script is a good thing.
> I think awk gets a pass because it's a programming language with a handy syntax for running small scripts
Even more than that, awk's orientation towards filtering text lines made up of fields makes handling many little pipeline tasks amazingly simple. In combination with being a very nice little general-purpose programming language with good string support, awk is a no-brainer first stop for a lot of things.
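A tiny illustration of that field orientation (the input data is made up): summing a column takes one expression and no explicit parsing.

```shell
# awk splits each line into whitespace-separated fields ($1, $2, ...);
# this sums the second field across all lines.
printf 'alice 3\nbob 4\ncarol 5\n' | awk '{ sum += $2 } END { print sum }'
# prints 12
```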
[Other more recent programming languages generally have many more features, much better implementations etc, but nothing has improved or really even matched awk's smooth integration into the unix pipeline. Perl, for instance, although it's long tried to bill itself as a replacement for awk, and explicitly includes special features for such usage, can be significantly more awkward (er... :) for the sort of tiny little filters awk's really good at.]
Frankly, I'm not converted by this uninspiring call to arms. If you'd like to convert me, offer more than just a "we're doing it wrong, this is the direction we should go".
Instead, offer me a graphical solution to the same problem you posed against the 'nix shell. It doesn't even have to work; a series of photoshopped images would be welcome.
Show me something better, because that 'nix philosophy works very well, whereas its replacements don't (seriously, can anyone think of a way to come up with that list of files without resorting to a shell or programming your own solution?)
Agreed. He should provide a graphical solution to every example given. It's distressing that none are given, not even the ignore-exception piece. That's the brilliance of Bret Victor's essays: he posits the idea, bringing you along in the 'yeah, that's been bugging me, too' fashion, and then proceeds to guide you through working solutions that build on themselves over the course of the piece. If you've been properly dog-fooding, show us the crumbs and the empty bowl to prove it.
Graphical wrappers to plain text are not only 'orders of magnitude harder [to implement]', but that much harder to debug as well. If your GUI client to a remote server beach-balls waiting for a command to complete, is your code wrong, or your connection latent, or your desktop app crashing? An SSH connection to the CLI server can cut through a lot of that pretty quick.
Exactly what I think. I am content with a text interface because honestly I do not think there will be a more intuitive way to exchange ideas than language, a skill we have been honing for thousands of years.
OP, build that better, graphical interface. When you blow by me because you are so much more productive, don't worry, I'll take notice. Until then you can take my text editor when you can pry it from my cold, dead hands.
This doesn't seem to be any simpler. The CLI equivalents take about the same time, but selecting all text files surely takes longer than `ls *.txt`, especially if there are a lot in the same folder (click first, scroll, shift+click last).
If we extend the problem to do something like changing the extension (very common for me) for all selected files, the GUI is useless here. Say I want to turn all *.txt files into *.txt.old for cheap archival. In the GUI, I don't know how to do this. On the CLI, I can simply use `find -exec`, which also lets me make the change recursively.
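One way to spell that archival rename with `find -exec`, shown against a scratch directory with invented names. The sh -c wrapper is a hedge for portability, since not every find implementation substitutes {} inside a larger argument like `{}.old`:

```shell
rm -rf /tmp/rename_demo && mkdir /tmp/rename_demo && cd /tmp/rename_demo
touch a.txt b.txt

# For each match, run mv via a tiny inline script; "$1" is the found path.
find . -maxdepth 1 -name '*.txt' -exec sh -c 'mv "$1" "$1.old"' _ {} \;

ls
```

Drop the -maxdepth 1 and the same line renames recursively, which is exactly where the GUI gives up.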
Sure, this would be trivial to implement (even Eclipse has a dynamic rename for variable names) with an extension, but this has to be done for each and every possible activity.
That does not fulfill the original requirement set forth by the author:
> Here's a simple task: print a list of all the files with a txt extension in the current directory except for ignore_me.txt.
Your solution, and one of the sibling comment solutions, will select all of those files, not print a list of them. Pedantic, perhaps, but the solution outlined by the OP is not mirrored by yours.
This isn't automation; this is nothing but simple file manipulation, which most graphical operating systems are incapable of. It's also a simple display of how graphical UIs fall short of the shell.
I'm going to guess you are making a joke, especially the ctrl+click to deselect. I did not even read the rest of the article; once I got to the word "visual", I did a quick scan for an example, found none, and came here to see what others thought. On a second read, maybe you are not making a joke? While not mentioned, how would you automate that?
Since the times when automation is not necessary greatly outnumber the times when it is, "how would u automate that?" is not exactly a knock-out blow to the pro-GUI position.
In other words, even for people like me who when asked, "how would u automate that?" would reply, "I'd write a shell script or some Emacs Lisp," it is IMHO often better to do it in a GUI when automation is not necessary. (No joking.)
>the times when automation is not necessary greatly outnumber the times when it is
That is only because most things that need automation have already been automated. The advantage of the CLI is that when you need to automate something, the tools are already there, as opposed to needing to find a completely different set of tools, or manually doing the same simple procedure a hundred times.
I am not pro-GUI, just providing a possible solution the OP article didn't mention.
For the automation part, yes, it's going to need scripts. But can your one-liner script handle a file name like this?
-\;?.txt
I believe in using the tools best suited for the task. If you need scripts to do the job, use scripts; if it's just a few simple clicks, I don't want to bother with find, exec, xargs, and pipes here and there.
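For what it's worth, a one-liner can handle the filename above. The trap is the leading dash, which find sidesteps automatically because the paths it emits start with ./ (scratch-directory sketch, name copied from the comment above):

```shell
rm -rf /tmp/tricky_demo && mkdir /tmp/tricky_demo && cd /tmp/tricky_demo
touch -- '-\;?.txt'            # the troublesome name, created literally

# A naive rm fails: the leading '-' is parsed as an option cluster.
rm '-\;?.txt' 2>/dev/null || echo "rm choked on the bare name"

# find is immune: its paths begin with ./, so the dash never reaches rm
# in option position.
find . -maxdepth 1 -name '*.txt' -exec rm {} \;
```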
For many people it's easier to understand complex relationships when they're represented in visual form. However, representing logic in visual form becomes quite painful, because as the complexity grows, so does the visual area occupied. When you actually have cross-referencing of objects of type X which inherit from Y and have an is-a relationship with Z which is a friend of K, it really, really becomes one huge mess.
While I could imagine such a system being used for very trivial programming tasks by "non-programmers", I find it very utopian to even imagine that something as complex as even a primitive compiler could be writ.. described in such a form with ease. The actual hard part is not coding it out (as opposed to sketching it out visually) but managing the complexity by designing it in such a way that it makes sense and works as intended.
I really don't see how visual programming would be helpful with the actual act of designing or even understanding a complex software system with different types, constructs, patterns and their permutations. Perhaps it's just lack of imagination, I sure hope so.
It would have to be layered, like anything. You click on one node in the diagram, and it opens up to be another diagram with data flowing through it. It would be pretty hard to "describe" a primitive compiler in text too if you didn't have some form of subroutine. Also, you want to be able to hide certain relationships temporarily, just like in text you might fold a block of code, or simply ignore the friend declaration.
I've found plotting out call graphs and dependencies with GraphViz to be helpful in decoding and fixing spaghetti code, and as a big-picture documentation tool in general.
GraphViz is a tool that converts a text format to images, not a pointy-clicky WYSIWYG thing, so I'm not sure how much of a defense of visual programming that is.
I have to say that the lack of a good "visual" programming language was what really held me back from learning to code when I was young. I tried to learn programming countless times and always got bored or frustrated because there seemed to be this huge gulf between what I'd be reading about (print "Hello, world!") and what I thought coding was (being able to create applications like those I actually used on a day-to-day basis).
In the end, it was (of all things) Actionscript/Flash that ultimately gave me the visual feedback I needed to make real progress in learning to code. These days, I'd probably recommend something like Processing(.org) to a newcomer.
All this just to say: I'm sympathetic that someone wants a programming language that's more visual and that makes it easier to push pixels. I think it'd go a long way to help teaching programming at the very least.
If there were a visual programming environment that was actually faster, simpler, and more productive to use than typing, fine, I'm sure we'd all use it. I don't understand what his argument is. There ISN'T such an environment, so why is he arguing against the current state-of-the-art (text)?
I choose to avoid syntax highlighting. The first time I saw it was some years ago, and I remember thinking it was interesting that the editor actually "knew" what it was looking at. It actually understood the rules of the language and was able to use them to apply different colors.
However, I also decided at that point that it was not for me. The text already has a certain vibe to it, so there's no need to add any enhancement via the computer. If anything, the highlighting tends to clash with the actual business of programming.
But that's just me, and I'm probably an outlier in this regard.
I'm not sure, but I think Rob Pike recently advocated against syntax highlighting, saying it was distracting, leading to focus on tiny parts instead of reading the program as an integrated whole. Surprisingly, I agree with him. Syntax highlighting is probably a side effect of unbearable complexity. In Pike's context I guess he can avoid it; his culture and work revolve around small, elegant, expressive code.
It IS distracting, if all the default themes built into Sublime are any indication of how it's used now. Everything pops in conflicting colors that would make an interior designer scream in pain. But it can also be used to make the parts you don't care about fade into the background, and better reveal the shape of the code you do. I prefer to purposely downplay keywords (blue) and comments (dark green), and make hard-coded numbers and strings obvious (red). All this on a black background with almost-white text: a drastically reduced palette with plain colors.
I think what the author is missing is that the command line offers the programmer a way into writing code that doesn't require humans to use it. Another way of saying it is that there is a whole class of programs out there that you really DON'T want to have an HCI (human-computer interface) for.
Do i really need all of that tooling when one simple line in my cron will suffice?
Do I really need a huge text editor in order to find and replace words when one sed line will do?
It's a class and style of programming that forces us to be efficient so we can get onto other things.
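The "one sed line" above is the sort of thing meant; a throwaway example (the strings are invented):

```shell
# Replace the first occurrence of "world" on each line with "there".
printf 'hello world\n' | sed 's/world/there/'
# prints: hello there
```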
Ok, so I broadly agree with what the post says, and am pretty enthusiastic about it to boot, but my opinions have some nuances....
I spent about 3 years developing software systems graphically using Simulink. It was a horrible experience from an ergonomic perspective: There was far too much mousing, and it had a really bad impact on my wrists and hands.
On the other hand, I do have a very visual imagination, and I find the ability to view the system and logic that I am working on diagrammatically an enormous boon - to the point where I have thrown together crude dependency analysis tools to let me render dependency graphs with graphviz (to help navigate a hairy, undocumented legacy codebase).
So the key, to my mind, is having a set of graphical tools that enable the developer to visualise (and navigate) source documents in a range of ways, whilst keeping the underlying source as text, and the primary input tool as the keyboard.
This should encourage discoverability in the software tools that we use - which is the main bugbear that I have with Unix (although I am beginning to appreciate it in other ways).
I understood it as a call to think more about the tools we use every day -- not just which editor or anything petty like that, but in a big way: the fact that we still edit files in order to write code, and how little has actually changed, with regard to graphical programs, since the Macintosh was released.
Wow, the commenters here are really defensive about their text editors and CLIs. Yes, they do rule the day and they do a damn good job at that. But aren't you at least curious what else could be possible? Don't be so quick to shoot other ideas down; no one's forcing you to move your cheese!
"But aren't you at lest curious what else could be possible?"
Well, I was, but after all the attempts over the years my curiosity has been sated.
To the extent that you did not know this has been attempted many, many, many(, many, many, many, many...) times, well, that's a measure of how successful the idea has historically proved to be.
Perhaps I too would fall into the "it should all be more visual" local-optimum trap of opinion if I hadn't seen so, so very many tries at the idea.
Mind you, if you produce a working one that actually keeps all of its promises, I will hail your success. I will sing its praises all the harder precisely because I know how hard a problem it is. (Many of your users will take it for granted and think it was easy to build.) But I haven't seen one yet.
Every year I read something like this. The first time I was excited. The second time I was vaguely interested. At this point I consider it a waste of time.
Yes, absolutely I'm curious. I've been dreaming about it since moving the little LOGO turtle triangle around on my green-and-black Apple IIe. But despite that circa-1960s idea, I still use `scp` and grep to get-things-done. Come on, people, it's 1982! We should be dreaming of electric sheep by now!
Code is best written as text because it is logic. Sometimes we can embed data in logic or use logic to create data, and then things can get a bit messy. Sometimes it can help to visualize the logic, e.g. with flowcharts, but I don't personally believe that it is useful to do things primarily this way.
The visual part of a UI is graphical data. For reasonably complex things it makes sense to edit graphical data in a graphical editor and to separate it from code.
It sounds a bit in the article like he's talking about funny esoteric 2D visual programming languages but I think he just wants a nice way to make GUIs and has concluded that HTML is reasonable for doing so.
I'd be interested in a graphical shell for an OS (Linux I guess) written purely in HTML and friends. You'd have completely seamless integration with the web.
These days HTML is the most reasonable approach to anything involving fonts and images and interaction. It's not as beautifully direct as REBOL, and being trapped in a browser is somewhere between limiting and annoying, but the visual toolkit is there, and it's ubiquitous. (For the record, I would have solved the "list all the filenames..." problem by generating HTML, but firing up a browser to display the result is a heavy-handed solution.)
I know it's Microsoft-only, but I think XAML is an attempt to solve exactly this problem. It's a declarative UI description language, but without some of the legacy baggage of HTML.
The technique is not exactly Microsoft-only. Using some sort of XML to represent graphics seems pretty common.
I suspect that XAML is somewhat related to Adobe's MXML. Both seem pretty similar in concept to Mozilla's XUL, or even GLADE XML. Recent versions of HTML, with behavior defined in something like jQuery seem to be reaching for this kind of thing as well.
They all boil down to a representation of a tree in memory used to sort out what goes where on a screen.
Also, not all text-based UX is a command line. Russ Cox has an interesting, if not exactly short, demonstration of the ACME editor: http://research.swtch.com/acme
For an article like this, I do not think it is pedantic to note that you do not "print a list of all the files with a txt extension in the current directory except for ignore_me.txt" by using the command "ls *.txt | grep -v ignore_me.txt". What if I have the file do_not_ignore_me.txt in that directory?
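To make the failure concrete, along with one possible fix: grep's -x (match only whole lines) and -F (treat the pattern as a literal string, so the dot isn't a wildcard) flags handle both problems. Scratch directory, invented names:

```shell
rm -rf /tmp/anchor_demo && mkdir /tmp/anchor_demo && cd /tmp/anchor_demo
touch a.txt ignore_me.txt do_not_ignore_me.txt

# The original pipeline wrongly drops do_not_ignore_me.txt as well:
ls *.txt | grep -v ignore_me.txt

# -x anchors the match to the whole line; -F disables regex metacharacters:
ls *.txt | grep -vxF ignore_me.txt
```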
Thus, I suppose my message is that the Unix command line is even harder to use properly than he states. But in my opinion, the initial hurdles in learning it eventually pay off over time.
I'm surprised no one has mentioned LabVIEW (http://en.wikipedia.org/wiki/LabVIEW), a visual programming environment used by many scientists to control data-acquisition systems.
Although the "language" makes creation of functional user interfaces fairly simple, I find the system quite unwieldy as soon as program complexity increases to even a moderate amount.
It's such a long blur of characters.... where are the pretty pictures? Maybe a few process-flow diagrams and an interactive UML concept piece? I only read blogs for the visuals.[1]
The shell is very nice, but work like http://xiki.org/, and ideas about using the DOM as a user-space graphical substrate (domus, catchy, right?), lead me to agree with his underlying point.
I like how the very example he uses is something that is a huge hassle to do in any visual UI as soon as the number of files in question is more than, say, 300. Partisans of visual programming have managed to make easy tasks trivial, but I have yet to see a visual toolset that does not make slightly complex tasks tedious and difficult tasks impossible.
Just try porting any Visual Studio project to a different platform and see how, instead of a sed one-liner to edit your makefile, you need to open 72,000 dialogs to change the name of one header file.
Right tool for the right job. If I want to select all the pictures of my niece out of a folder of bad snapshots, then a GUI is a nice tool; the command terminal just can't browse images very well (at all). But if I want to grab the source files from a legacy application that used a deprecated function, where the files span several hundred directories, make a minor text replacement, and then run my tests to make sure everything still compiles and works, the GUI comes up short in a serious way.
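That second workflow is a short pipeline. A hedged sketch against a scratch tree (the directory layout and the function names are invented; note that `sed -i` with no suffix is the GNU spelling, BSD sed wants `-i ''`):

```shell
# Build a tiny fake source tree to operate on.
rm -rf /tmp/replace_demo && mkdir -p /tmp/replace_demo/src/a /tmp/replace_demo/src/b
cd /tmp/replace_demo
printf 'call old_fn();\n' > src/a/x.c
printf 'nothing here\n'   > src/b/y.c

# grep -rl lists only the files that mention the deprecated call;
# xargs feeds just those files to sed for the in-place replacement.
grep -rl 'old_fn' src/ | xargs sed -i 's/old_fn/new_fn/g'
```

In real use you would follow this with your build and test command; only the files that actually matched are touched, however many hundred directories they span.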