Public FTP servers were where I downloaded most of the software for my computers, back in the 90s. There's nothing really like it anymore - you can't have anonymous sftp.
But perhaps we don't care anymore. The web is gradually consuming all that came before it.
> But perhaps we don't care anymore. The web is gradually consuming all that came before it.
It's partly the web/HTTP eating everything, but it's also that FTP is legitimately a bad protocol and is less tolerant of horrid shit going on in the layers below it (like NAT) than HTTP is.
I think my favorite "feature" of the FTP protocol has to be ASCII mangling, wherein the FTP server tries to mess around with line endings and text encoding mid-transfer. It's so bad that vsftpd, one of the better FTP servers for Linux systems, pretends to support it but silently refuses to perform the translation.
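For the curious, the mangling is opt-in via the TYPE command; a made-up exchange (reply text varies by server):

    TYPE A                 # "ASCII" mode: the server may rewrite line endings mid-transfer
    200 Switching to ASCII mode.
    RETR notes.txt
    TYPE I                 # "image"/binary mode: bytes pass through untouched
    200 Switching to Binary mode.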
I wrote a custom FTP server once (it was database-backed instead of filesystem-backed - e.g. you could do searches by creating a directory in the Search directory) and I added insulting error messages if a client tried to exercise one of the more antiquated features of the spec (e.g. EBCDIC mode).
> There's nothing really like it anymore - you can't have anonymous sftp.
Strictly speaking there's nothing stopping someone from writing an anonymous sftp server that lets anyone log in as a 'guest' user or similar - it's just that nobody has (as far as I'm aware).
"Unauthenticated SSH" is basically what the git:// protocol is. I wonder if you could use git-daemon(1) to serve things other than git repos? Or you could just convert whatever you want to serve into a git repo, I guess.
You could, but since git isn't designed for handling large binary files the performance will be poor. That's why there are large file support plugins like (the aptly named) Git LFS[0] and git-annex[1].
IPFS requires a stateful thick client with a bunch of index data, no? Would it be efficient to, say, build a Debian installer CD that goes out and downloads packages from an IPFS mirror? Because that's the kind of use-case anonymous FTP is for.
Many many years ago I was on the team that managed the compute cluster for the CMS detector at the LHC (Fermilab Tier-1).
When we would perform a rolling reinstall of the entire worker cluster (~5500 1U pizza box servers), we would use a custom installer that utilized BitTorrent to retrieve the necessary RPMs (Scientific Linux) instead of HTTP; the more workers reinstalling at once, the faster each worker would reinstall (I hand wave away the complexities of job management for this discussion).
I'm not super familiar with IPFS (I've only played with it a bit to see if I could use it to backup the Internet Archive in a distributed manner), but I'm fairly confident based on my limited trials that yes, you could build a Debian installer CD to fetch the required packages from an IPFS mirror. No need to even have the file index locally. You simply need a known source of the file index to retrieve, and the ability to retrieve it securely.
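Something like this, as a rough sketch (the paths and CID are placeholders):

    # publish a mirror directory; ipfs prints a content hash (CID) for the root
    ipfs add -r /srv/debian-mirror
    # any node can then fetch the whole tree by that hash - no account, no local index
    ipfs get <root-CID> -o /mnt/target/mirror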
You have to be really careful though because the default is to give users shell access. If you think you can limit that by forcing users to run some command you'll run into trouble because the user can specify environment variables.
By default the user is also allowed to set up tunneling, which would let anonymous users originate traffic from your network address.
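For what it's worth, sshd can be locked down against both of those; a minimal sshd_config sketch (the 'anonymous' user and chroot path are hypothetical):

    Match User anonymous
        ForceCommand internal-sftp      # in-process SFTP server, no shell is ever started
        ChrootDirectory /srv/anon       # must be root-owned; confines the session to this tree
        AllowTcpForwarding no           # no tunneling through your network address
        PermitTunnel no
        X11Forwarding no
        PermitEmptyPasswords yes        # the "anonymous login" part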
> There's nothing really like it anymore - you can't have anonymous sftp
Nonsense. http is exactly like anonymous ftp and it does a much better job of it. Pretty much every anonymous ftp site started also serving their files via http decades ago -- which is why ftp is no longer needed.
Case in point: Debian makes all these files available over http. This isn't going away.
It really isn't as convenient if you have to download lots of files at one time though. FTP has mget. That's probably why FTP lives on for scientific data (NCBI, ENSEMBL, etc). Yes, you could use some tool like wget or curl to spider through a bunch of http links, but that's more work.
Not quite: ftp CLIENTS have mget. The ftp protocol has absolutely no awareness of mget. In fact, ftp is terrible at downloading more than one file at a time because it has no concept of pipelining and keepalive, both things that http supports.
With a nice multi protocol client like lftp, http directory indexes work just like an ftp server:
lftp has a ton of features: background jobs, tab completion, caching of directory contents, multiple connections, parallel fetching of a SINGLE file using multiple connections.
Yes, it looks like '/usr/bin/ftp' from 1970, but it's far far far more advanced than that.
(where 'x' is an asterisk, but HN's formatting eats it)
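A plausible reconstruction of the kind of command being described (host and path are made up):

    lftp -c 'open http://mirror.example.org/pub/; mget *'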
More work, in the sense that it's more command line options to remember, I agree, but otherwise it's easier to integrate in scripts and much more flexible than mget.
(I don't miss FTP for the sysadmin side of maintaining those servers.)
A public-facing httpd that uses the default apache2 directory index can, of course, be configured to allow anonymous access, with a log level that is neither more nor less detailed than an anonymous ftpd circa 1999.
> Public FTP servers were where I downloaded most of the software for my computers, back in the 90s. There's nothing really like it anymore
Modulo UI details, the common download-only public side of public FTP servers is a pretty similar experience to a barebones file-download web site. Anonymous file-download web sites are, to put it mildly, not rare.
In the "transition" from FTP to HTTP, the level of abstraction in popular use has shifted out of the protocol and into resources (mime-types) [1], rel types [2], server logic [3][4], and client logic [5].
In the past, I've said that this extensible nature of HTTP+HTML is what made them so successful [6], but once specialized protocols began to falter, tunneling other semantics over HTTP became not just a nicety, but also a necessity (for a diverse set of reasons, like being blocked at a middlebox, being accessible from the browser where most people spend their time, etc).
Apache works better for this than FTP. I use it all the time: just configure it to serve indexes. Apache lets you configure the index to include CSS, fancy icons, custom sorting, and other stuff. All over HTTPS.
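Roughly this kind of thing, as a sketch (paths are made up):

    # mod_autoindex: styled, anonymous directory listings over HTTPS
    <Directory "/srv/pub">
        Options +Indexes
        IndexOptions FancyIndexing HTMLTable NameWidth=*
        IndexStyleSheet "/assets/listing.css"
        HeaderName /assets/HEADER.html
        ReadmeName /assets/README.html
        IndexOrderDefault Descending Date
        Require all granted
    </Directory>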
> The web is gradually consuming all that came before it.
It's about cost, too. HTTP can be cached very efficiently, but FTP not at all. If I were the operator in charge and I had the choice between next-to-free caching by nearly anything, be it a Squid proxy, apt-cache or nexus, or no caching and having to maintain expensive servers, I'd choose HTTP.
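For apt specifically, pointing clients at a caching proxy is a one-liner (host is hypothetical; 3142 is apt-cacher-ng's default port):

    # /etc/apt/apt.conf.d/01proxy
    Acquire::http::Proxy "http://apt-cache.internal:3142";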
> FTP is not nearly as trivial, plus it's a stupid, broken protocol that deserves to die.
I agree with you, but FTP has one very valid use case left: easy file sharing, especially for shared hosting. FTP clients are native to nearly every popular OS, from Android to Windows (the only exceptions I know of are Win Mobile and iOS), and there's a lot of ecosystem built around FTP.
There are SCP and SFTP, but they don't really have any kind of widespread usage in the non-professional world.
> [Has] one very valid use case left: easy file sharing, especially for shared hosting.
Nope. Nope. Nope. Not easy. Not secure. Not user friendly. Not anything good. Have an iPhone and need to FTP something? Don't have installation rights on your Windows workstation and need to FTP something? Unpleasant if not confusing as all hell.
Dropbox or a Dropbox-like program is significantly easier to get people on board with.
Any "ecosystem" built around FTP is rotten to the core. Blow it up and get rid of it as soon as you can.
Some vendors insist on using FTP because reasons, but those reasons are always laziness. I can't be the only one that would prefer they use ssh/scp/rsync with actual keys so I can be certain the entity uploading a file is actually them and not some random dude who sniffed the plain-text password off the wire.
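Something as simple as this, with a key you actually gave them (host and paths are made up):

    # vendor pushes over ssh with a dedicated key instead of a sniffable password
    rsync -av -e "ssh -i ~/.ssh/vendor_upload_key" ./export/ upload@drop.example.com:/incoming/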
Windows has first-class SMB support (obviously), but Samba gives Linux and BSD support that, in modern desktop environments, is exactly as good. Mobile devices don't tend to have OS-level support for it, but there are very good libraries to enable individual apps to speak the protocols (look at VLC's mobile apps).
Even Apple has given up on their own file-sharing protocol (AFP) in favor of macOS machines just speaking SMB to one-another.
Yes, it's not workable over the public Internet. Neither is FTP, any more. If you've got a server somewhere far away, and want all your devices to put files on it, you're presumably versed in configuring servers, so go ahead and set up a WebDAV server on that box. Everything speaks that.
Uh, hell no. Never ever would I expose an SMB server to the Internet. SMB is really picky when the link has packet loss or latency issues, plus there are the countless SMB-based security issues.
> Even Apple has given up on their own file-sharing protocol (AFP) in favor of macOS machines just speaking SMB to one-another.
Is there a way to tune SMB to work better over low bandwidth / high latency links? The last time I tried it through a VPN it was working at less than 10kb/s
But we're talking about picking a thing to replace FTP for the use-cases people were already using FTP for. It doesn't matter if it doesn't do something FTP already doesn't do, because presumably you were already not relying on that thing getting done.
FTP is used to exchange files, a task that HTTP/HTTPS and/or email and/or IM and/or XMPP and/or Skype and/or Slack and/or a hundred other services can do just as well if not better.
...But it does work on iOS. It's just not built in. For example, Transmit for iOS supports FTP, and includes a document provider extension so you can directly access files on FTP servers from any app that uses the standard document picker.
The post I replied to implies that iOS is (somehow) "artificially limited" to be unable to access FTP - or at least I interpreted it that way.
FWIW, I'm not convinced that "web-based" is a better alternative for read/write file access, assuming you mean file manager webapps. No OS can integrate those into the native file picker, so you can't avoid the inefficiency of manually uploading files after changing them. WebDAV works pretty well though, if that counts...
It's just needlessly exclusionary. One of the greatest things about "the web" is it's pretty accessible by anyone with a browser that's at least semi-mostly-standards-compliant.
Have you looked at the spec? If you do, then you'll understand.
Imagine a file transfer protocol that defines the command to list files in a folder, but does not specify the format of the response other than that it should be human-readable.
See the LIST and NLST commands in https://www.ietf.org/rfc/rfc959.txt, for example. No way to get a standard list of files with sizes and modification dates. Yay!
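To illustrate with a made-up exchange: RFC 959's LIST gives you free-form text, and it took a later extension (MLSD, RFC 3659) to get machine-readable listings, which plenty of servers still don't implement:

    LIST
    -rw-r--r--   1 ftp  ftp   1048576 Jan 01  1999 readme.txt    # format is whatever "ls -l" looks like on that server
    MLSD
    type=file;size=1048576;modify=19990101000000; readme.txt     # standardized facts, finally parseable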
Oh, and the data connection is made from the server to the client. That works wonders with today's firewalls.
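The active-vs-passive dance, for anyone who hasn't had to punch those firewall holes (addresses are made up):

    PORT 192,168,0,10,19,137     # active mode: "server, connect back to 192.168.0.10 port 5001" (19*256+137)
    PASV                         # passive mode: the server opens a port instead and tells the client where
    227 Entering Passive Mode (203,0,113,5,195,149)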
It was an ok spec when it was invented, but today it's very painful to operate.
> It's about cost, too. HTTP can be cached very efficiently, but FTP not at all.
It's ironic that you mention cost and caching, but a lot of services used for software distribution of one kind or another (e.g. Github releases) are following the "HTTPS everywhere" mantra, and HTTPS can't be cached anywhere other than at the client.
> and HTTPS can't be cached anywhere other than at the client.
No. Nexus, for example, can certainly cache apt, and so can Squid if you provision it with a certificate that's trusted by the client.
Also, Cloudflare supports HTTPS caching if you supply them with the certificate. If you pay them enough and host a special server that handles the initial crypto handshake, you don't even have to hand over your cert/privkey to them (e.g. as required by law for banks, healthcare stuff, etc.).
To clarify: what I meant is that HTTPS can't be cached by third parties. If I want to run a local cache of anything served over HTTP, it's as easy as spinning up a Squid instance. With resources served over HTTPS I can't do that.
Well, there is WebDAV. At least Windows and OS X support it (Windows from Explorer, OS X from Finder), no idea about mainstream Linux/Android/iOS support though. Also, no idea if WebDAV can deal with Unix or Windows permissions, but I did not have that problem when I set up a WebDAV server a year ago.
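Roughly what that looks like with Apache's bundled mod_dav (paths and auth file are hypothetical):

    DavLockDB /var/lib/apache2/davlockdb
    Alias /dav /srv/dav
    <Directory "/srv/dav">
        Dav On
        AuthType Basic
        AuthName "WebDAV share"
        AuthUserFile /etc/apache2/dav.passwd
        Require valid-user
    </Directory>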
IIRC WebDAV uses GET for retrieval, so the read parts can be cached by an intermediate proxy and the write part be relayed to the server.
As someone who once tried to write a WebDAV server, I cannot in good conscience recommend it. It's a bizarre extension of HTTP that should not exist.
Out of curiosity: why did you try to write your own WebDAV server? Apache ships a pretty much works-OOTB implementation - the only thing I never managed to get working was to assign uploaded files the UID/GID of the user who authenticated via HTTP auth to an LDAP server.
More specifically, a CalDAV server, which is a bizarre extension of WebDAV that shouldn't exist. We wanted one to connect to our internal identity server. That project was abandoned.
Actually I thought about Gopher (I even have my own client - http://runtimeterror.com/tools/gopher/ - although it only does text) since it basically behaves as FTP++ with abstract names (sadly, most modern gopherholes treat it as hypertext lite by abusing the information nodes).
Gopher generally avoids most of FTP's pitfalls and it is dead easy to implement.
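For a sense of how simple: a Gopher menu is just one line per item - a type character, then tab-separated display string, selector, host and port - terminated by a lone dot (host is made up):

    1Software archive	/pub	gopher.example.org	70
    0About this server	/about.txt	gopher.example.org	70
    9install-disk.iso	/pub/install-disk.iso	gopher.example.org	70
    .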
edit: thinking about it, I'm not sure I agree with the anonymous part, considering the swarm can be monitored. The access log is essentially publicly distributed.
Neither is FTP, really: the user's IP is still logged somewhere; you just use a common user (anonymous) shared with everyone else. The modern name of such a feature would probably be something like 'No registration required'.
It goes to show how much the meaning of the word 'anonymous' changed over the last 30 years.
Not quite; the "currently accessing" list is public. While it is of course possible to make an access log from this with continuous monitoring, it's not possible to arbitrarily query historical data.