
Public FTP servers were where I downloaded most of the software for my computers, back in the 90s. There's nothing really like it anymore - you can't have anonymous sftp.

But perhaps we don't care anymore. The web is gradually consuming all that came before it.



> But perhaps we don't care anymore. The web is gradually consuming all that came before it.

It is partly the web/HTTP eating everything but also that FTP is legitimately a bad protocol and is less tolerant of horrid shit going on in layers below it (like NAT) than HTTP is.


I think my favorite "feature" of the FTP protocol has to be ASCII mangling, wherein the FTP server tries to mess around with line endings and text encoding mid-transfer. It's so bad that vsftpd, one of the better FTP servers for Linux systems, pretends to support it but silently refuses to perform the translation.

http://sdocs.readthedocs.io/en/master/sdocs/ftpserver/vsftpd...


I wrote a custom FTP server once (it was database-backed instead of filesystem-backed - e.g. you could do searches by creating a directory in the Search directory) and I added in insulting error messages if a client tried to exercise one of the more antiquated features of the spec (e.g. EBCDIC mode)


>There's nothing really like it anymore - you can't have anonymous sftp.

Strictly speaking there's nothing stopping someone from writing an anonymous sftp server that lets anyone log in as a 'guest' user or similar - it's just that nobody has (as far as I'm aware).


"Unauthenticated SSH" is basically what the git:// protocol is. I wonder if you could use git-daemon(1) to serve things other than git repos? Or you could just convert whatever you want to serve into a git repo, I guess.


You could, but since git isn't designed for handling large binary files the performance will be poor. That's why there are large file support plugins like (the aptly named) Git LFS[0] and git-annex[1].

[0] https://git-lfs.github.com

[1] https://git-annex.branchable.com/
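For the record, a minimal sketch of the Git LFS side (the *.iso pattern is just an example):

  git lfs install            # set up the clean/smudge filters once per machine
  git lfs track "*.iso"      # matching files are stored as small pointer files
  git add .gitattributes big.iso
  git commit -m "add image via LFS"

The actual blobs then live on an LFS endpoint rather than in the object database, which is what keeps clones fast.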


> "Unauthenticated SSH" is basically what the git:// protocol is.

git:// is usually unencrypted (people have run it over TLS, but not commonly).



Please use IPFS instead.


IPFS requires a stateful thick client with a bunch of index data, no? Would it be efficient to, say, build a Debian installer CD that goes out and downloads packages from an IPFS mirror? Because that's the kind of use-case anonymous FTP is for.


Many many years ago I was on the team that managed the compute cluster for the CMS detector at the LHC (Fermilab Tier-1).

When we would perform a rolling reinstall of the entire worker cluster (~5500 1U pizza box servers), we would use a custom installer that would utilize Bittorrent to retrieve the necessary RPMs (Scientific Linux) instead of HTTP; the more workers reinstalling at once, the faster each worker would reinstall (I hand wave away the complexities of job management for this discussion).

I'm not super familiar with IPFS (I've only played with it a bit to see if I could use it to backup the Internet Archive in a distributed manner), but I'm fairly confident based on my limited trials that yes, you could build a Debian installer CD to fetch the required packages from an IPFS mirror. No need to even have the file index locally. You simply need a known source of the file index to retrieve, and the ability to retrieve it securely.
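Roughly, with the ipfs CLI it would look something like this (the CID and package path are placeholders; a real installer would pin a known, signed index):

  # on the mirror: publish the tree and note the root CID it prints
  ipfs add -r /srv/debian-mirror

  # on the installer: fetch packages by path under that CID
  ipfs get /ipfs/<root-CID>/pool/main/h/hello/hello_2.10-1_amd64.deb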


And a day or two to wait for the installer to finish fetching packages...


Such pessimism! Bittorrent started with only a few nodes too. It is now a majority of internet traffic.


Isn't IPFS not anonymous either? I thought you needed I2P for that.


Anonymous within this context refers to unauthenticated clients, not communication privacy.


With this caveat: https://blog.filippo.io/ssh-whoami-filippo-io/

(Granted, you can stop this client-side, but it's worth noting that your SSH client will generally identify you to a server on connect.)


That ssh server seems to be down.


Some public roguelike servers use SSH with a single account and posted private key in place of telnet.


You have to be really careful though because the default is to give users shell access. If you think you can limit that by forcing users to run some command you'll run into trouble because the user can specify environment variables.

The user also by default gets allowed to set up tunneling which would allow anonymous users to use your network address.
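For reference, OpenSSH can lock most of that down per key or per user; a sketch, assuming a shared 'guest' account and a game binary path that's only an example:

  # ~/.ssh/authorized_keys for the shared account
  restrict,pty,command="/usr/games/nethack" ssh-ed25519 AAAA... guest-key

  # or in sshd_config
  Match User guest
      ForceCommand /usr/games/nethack
      AllowTcpForwarding no
      X11Forwarding no
      PermitTunnel no

'restrict' turns off forwarding, PTY allocation and ~/.ssh/rc in one go; 'pty' then re-enables the terminal the game needs.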


> There's nothing really like it anymore - you can't have anonymous sftp

Nonsense. http is exactly like anonymous ftp and it does a much better job of it. Pretty much every anonymous ftp site started also serving their files via http decades ago -- which is why ftp is no longer needed.

Case in point: Debian makes all these files available over http. This isn't going away.

ftp://ftp.debian.org/debian/

http://ftp.debian.org/debian/


It really isn't as convenient if you have to download lots of files at one time though. FTP has mget. That's probably why FTP lives on for scientific data (NCBI, ENSEMBL, etc). Yes, you could use some tool like wget or curl to spider through a bunch of http links, but that's more work.


> FTP has mget

Not quite, ftp CLIENTS have mget. The ftp protocol has absolutely no awareness of mget. In fact, ftp is terrible at downloading more than one file at a time because it has no concept of pipelining and keepalive, both things that http supports.

With a nice multi protocol client like lftp, http directory indexes work just like an ftp server:

  $ lftp http://http.debian.net/debian/
  cd: received redirection to `http://cdn-fastly.deb.debian.org/debian/'
  cd ok, cwd=/debian
  lftp cdn-fastly.deb.debian.org:/debian> ls
  drwxr-xr-x  --  /
  -rw-r--r--         1.0K  2017-01-14 10:44  README
  -rw-r--r--         1.3K  2010-06-26 09:52  README.CD-manufacture
  -rw-r--r--         2.5K  2017-01-14 10:44  README.html
  -rw-r--r--          291  2017-03-04 20:08  README.mirrors.html
  -rw-r--r--           86  2017-03-04 20:08  README.mirrors.txt
  ..[snip]..
  lftp cdn-fastly.deb.debian.org:/debian> mget README*
  5315 bytes transferred
  Total 5 files transferred
  lftp cdn-fastly.deb.debian.org:/debian>


For anyone else wondering how to recursively download with this:

    lftp cdn-fastly.deb.debian.org:/debian> mirror doc
    Total: 2 directories, 43 files, 0 symlinks                             
    New: 43 files, 0 symlinks
    1031755 bytes transferred in 1 second (678.7 KiB/s)
    lftp cdn-fastly.deb.debian.org:/debian> 

Warning, -R means reverse (upload!), not recursive. ;)


Wow, I had no idea lftp had that feature. That's super cool.


lftp has a ton of features: background jobs, tab completion, caching of directory contents, multiple connections, parallel fetching of a SINGLE file using multiple connections.

Yes, it looks like '/usr/bin/ftp' from 1970, but it's far far far more advanced than that.


It would make more sense to offer rsync (unauthenticated), to ensure integrity of what's transferred.

  rsync -r rsync://... ./
will retrieve everything in a directory.


I suppose you can sometimes do `mget *.csv` on FTP (if the client supports it?), but with wget you can do:

  wget -r -A '*.csv' https://example.org/

More work, in the sense that it's more command line options to remember, I agree, but otherwise it's easier to integrate in scripts and much more flexible than mget.

(I don't miss FTP for the sysadmin side of maintaining those servers.)


Download managers such as DownThemAll! [1] for Firefox are more convenient (and useful) than mget in an ftp client.

[1]: http://www.downthemall.net/


> there's nothing really like it anymore

a public-facing httpd that uses the default apache2 directory index can, of course, be configured to allow anonymous access, with a log level that is neither more nor less detailed than an anonymous ftpd circa 1999.


> Public FTP servers were where I downloaded most of the software for my computers, back in the 90s. There's nothing really like it anymore

Modulo UI details, the common download-only public side of public FTP servers is a pretty similar experience to a pretty barebones file download web site. Anonymous file download web sites are, to put it mildly, not rare.


... until you want to offer a directory structure with multiple files


In the "transition" from FTP to HTTP, the level of abstraction in popular use has shifted out of the protocol and into resources (mime-types) [1], rel types [2], server logic [3][4], and client logic [5].

In the past, I've said that this extensible nature of HTTP+HTML is what made them so successful [6], but once specialized protocols began to falter, tunneling other semantics over HTTP became not just a nicety, but also a necessity (for a diverse set of reasons, like being blocked at a middlebox, being accessible from the browser where most people spend their time, etc).

[1] http://www.iana.org/assignments/media-types/

[2] https://www.iana.org/assignments/link-relations/

[3] https://wiki.apache.org/httpd/DirectoryListings

[4] http://nginx.org/en/docs/http/ngx_http_autoindex_module.html

[5] http://stackoverflow.com/a/28380690

[6] https://news.ycombinator.com/item?id=12440783


So, configure your HTTP daemon to serve directory indexes, and point it at the root of whatever tree you want to serve.


Apache works better for this than FTP. I use it all the time: just configure it to serve indexes. Apache lets you configure the index to include CSS, fancy icons, custom sorting, and other stuff. All over HTTPS.

What's not to like?
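For anyone curious, a minimal sketch of that with mod_autoindex on Apache 2.4 (the directory and stylesheet are made up):

  <Directory "/srv/pub">
      Options +Indexes
      IndexOptions FancyIndexing HTMLTable NameWidth=*
      IndexStyleSheet "/autoindex.css"
      Require all granted
  </Directory>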


Because then I need to use a browser plugin to download all the files in a directory.


True. Or, you know, wget.


> The web is gradually consuming all that came before it.

It's about cost, too. HTTP can be cached very efficiently, but FTP not at all. If I were the operator in charge and I had the choice between next-to-free caching by nearly anything, be it a Squid proxy, apt-cache or nexus, or no caching and having to maintain expensive servers, I'd choose HTTP.


Exactly. You can offload HTTP to anyone, an HTTP proxy is absolutely trivial to set up, and an HTTP cache is even easier.

FTP is not nearly as trivial, plus it's a stupid, broken protocol that deserves to die. The whole thing is a giant bag of hurt.
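To illustrate how little it takes, a bare-bones caching front for a mirror with nginx might look something like this (sizes, times and the upstream are arbitrary; the proxy_cache_path line goes in the http block):

  proxy_cache_path /var/cache/nginx/debian keys_zone=debmirror:10m max_size=50g inactive=14d;

  server {
      listen 80;
      location / {
          proxy_pass        http://ftp.debian.org/debian/;
          proxy_cache       debmirror;
          proxy_cache_valid 200 1d;
      }
  }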


> FTP is not nearly as trivial, plus it's a stupid, broken protocol that deserves to die.

I agree with you, but FTP has one very valid use case left: easy file sharing, especially for shared hosting. FTP clients are native to every popular OS, from Android to Windows (only exceptions I know are Win Mobile and iOS), and there's a lot of ecosystem built around FTP.

There is SCP and SFTP but they don't really have any kind of widespread usage in the non-professional world.


> [Has] one very valid use case left: easy file sharing, especially for shared hosting.

Nope. Nope. Nope. Not easy. Not secure. Not user friendly. Not anything good. Have an iPhone and need to FTP something? Don't have installation rights on your Windows workstation and need to FTP something? Unpleasant if not confusing as all hell.

Dropbox or a Dropbox-like program is significantly easier to get people on board with.

Any "ecosystem" built around FTP is rotten to the core. Blow it up and get rid of it as soon as you can.

Some vendors insist on using FTP because reasons, but those reasons are always laziness. I can't be the only one that would prefer they use ssh/scp/rsync with actual keys so I can be certain the entity uploading a file is actually them and not some random dude who sniffed the plain-text password off the wire.


Huh?

Windows explorer has FTP built into the shell. As does every other desktop OS.

If you care about file integrity, you want to verify the signatures on the binary out of band anyway.

Dropbox is blocked in most commercial networks and offers no assurance of anything.


Yes, I'm sure walking your accounting department through how to use the command-line FTP tool in Windows is going to work out fantastically well.

Versus drag and drop file to this page. There you go. It's uploading.

That's why I said Dropbox or a Dropbox-like service, of which there are hundreds. Microsoft SharePoint is but one example.


Not that shell. The UI shell. Explorer. You can browse FTP sites just like network file systems, including drag and drop.


Go to Start->Run.

Type ftp://ftp-site/path/to/file. Drag and drop to your heart's content.


When I worked with FTP (with accounting departments, ironically), the advice was not to do that because the client often corrupted files.


The protocol you want is SMB.

Windows has first-class support (obviously); but Samba gives Linux and BSD support that, in modern Desktop Environments, is exactly as good. Mobile devices don't tend to have OS-level support for it, but there are very good libraries to enable individual apps to speak the protocols (look at VLC's mobile apps.)

Even Apple has given up on their own file-sharing protocol (AFP) in favor of macOS machines just speaking SMB to one-another.

Yes, it's not workable over the public Internet. Neither is FTP, any more. If you've got a server somewhere far away, and want all your devices to put files on it, you're presumably versed with configuring servers, so go ahead and set up a WebDAV server on that box. Everything speaks that.


> The protocol you want is SMB.

Uh, hell no. Never ever would I expose an SMB server to the Internet. SMB is really picky when the link has packet loss or latency issues, plus there are the countless SMB-based security issues.

> Even Apple has given up on their own file-sharing protocol (AFP) in favor of macOS machines just speaking SMB to one-another.

Time Machine still depends on AFP.


> Time Machine still depends on AFP.

No, it doesn't:

https://developer.apple.com/library/content/releasenotes/Net...


Is there a way to tune SMB to work better over low bandwidth / high latency links? The last time I tried it through a VPN it was working at less than 10kb/s


AFAIK no; other than increasing the frame size, CIFS is more or less an inverse latency gauge.

I wonder if GP is pulling our legs about WebDAV. Yuck.


> Mobile devices don't tend to have OS-level support for it...

Yeah, nope. Dead out of the gate. Thanks for playing.


Mobile devices don't tend to have OS-level support for anything, though.

The GP comment:

> FTP clients are native to every popular OS, from Android to Windows (only exceptions I know are Win Mobile and iOS)

To rephrase: only 1/3 of mobile OSes support FTP.


Anything web-based works fine on the phone. FTP and SMB do not.

Not working on iOS is like saying "Oh, this road doesn't work with German made cars. That's not a big deal, is it?"


But we're talking about picking a thing to replace FTP for the use-cases people were already using FTP for. It doesn't matter if it doesn't do something FTP already doesn't do, because presumably you were already not relying on that thing getting done.


FTP is used to exchange files, a task that HTTP/HTTPS and/or email and/or IM and/or XMPP and/or Skype and/or Slack and/or a hundred other services can do just as well if not better.


More like German made cars have been artificially limited so they can't use certain roads.


...But it does work on iOS. It's just not built in. For example, Transmit for iOS supports FTP, and includes a document provider extension so you can directly access files on FTP servers from any app that uses the standard document picker.

https://panic.com/transmit-ios/

Searching the App Store I also see some apps for SMB, but I don't know whether they have document provider extensions.


You can also send faxes from iOS. What's your point?


The post I replied to implies that iOS is (somehow) "artificially limited" to be unable to access FTP - or at least I interpreted it that way.

FWIW, I'm not convinced that "web-based" is a better alternative for read/write file access, assuming you mean file manager webapps. No OS can integrate those into the native file picker, so you can't avoid the inefficiency of manually uploading files after changing them. WebDAV works pretty well though, if that counts...


It's just needlessly exclusionary. One of the greatest things about "the web" is it's pretty accessible by anyone with a browser that's at least semi-mostly-standards-compliant.


No one uses Windows mobile, that comment was hardly fair. Pretty sure Android is 80%+ share.


Even mobile Windows/iOS/Android have support for FTP and its variants via third-party software, just not natively.


Why is the protocol broken? Is there something that doesn't work? Perhaps you mean to say it's a complicated protocol ill-suited for modern times?


Have you looked at the spec? If you do, then you'll understand.

Imagine a file transfer protocol that defines the command to list files in a folder, but does not specify the format of the response other than that it should be human-readable.

See the LIST and NLST commands in https://www.ietf.org/rfc/rfc959.txt, for example. There's no way to get a standardized list of files with sizes and modification dates. Yay!

Oh, and the data connection is made from the server to the client. That works wonders with today's firewalls.

It was an ok spec when it was invented, but today it's very painful to operate.
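To make the data-connection point concrete, here's roughly what an active-mode listing looks like on the wire (addresses and responses are illustrative):

  USER anonymous
  331 Please specify the password.
  PASS guest@example.org
  230 Login successful.
  PORT 192,0,2,55,195,149      <- "connect back to 192.0.2.55, port 50069"
  200 PORT command successful.
  LIST
  150 Here comes the directory listing.
  226 Directory send OK.

The listing itself arrives on a brand-new TCP connection that the server opens toward the client, in whatever format the server feels like, which is exactly what NAT and stateful firewalls choke on.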



> It's about cost, too. HTTP can be cached very efficiently, but FTP not at all.

It's ironic that you mention cost and caching, but a lot of services used for software distribution of one kind or another (e.g. GitHub releases) are following the "HTTPS everywhere" mantra, and HTTPS can't be cached anywhere other than at the client.


> and HTTPS can't be cached anywhere other than at the client.

No. Nexus, for example, can certainly cache apt repositories, and so can Squid if you provision it with a certificate that's trusted by the client.

Also, Cloudflare supports HTTPS caching if you supply them with the certificate, and if you pay them enough and host some special server that handles the initial crypto handshake you don't even have to hand over your cert/privkey to them (e.g. required by law for banks, healthcare stuff etc)


To clarify; what I meant is that HTTPS can't be cached by third parties. If I want to run a local cache of anything served over HTTP it's as easy as spinning up a Squid instance. With resources served over HTTPS I can't do that.


GitHub probably trusts its CDN with its TLS key.

For Debian, packages are secured by PGP signatures in combination with checksums, so it's not relevant for them. The Debian repos are often HTTP-only.


What we need is FTP over HTTP.


Well, there is WebDAV. At least Windows and OS X support it (Windows from Explorer, OS X from Finder), no idea about mainstream Linux/Android/iOS support though. Also, no idea if WebDAV can deal with Unix or Windows permissions, but I did not have that problem when I set up a WebDAV server a year ago.

IIRC WebDAV uses GET for retrieval, so the read parts can be cached by an intermediate proxy and the write part be relayed to the server.


As someone who once tried to write a WebDAV server, I cannot in good conscience recommend it. It's a bizarre extension of HTTP that should not exist.


Out of curiosity: why did you try to write your own WebDAV server? Apache ships a pretty much works-OOTB implementation - the only thing I never managed to get working was to assign uploaded files the UID/GID of the user who authenticated via HTTP auth to an LDAP server.


More specifically a CalDAV server which is a bizarre extension of WebDAV that shouldn't exist. We wanted one to connect to our internal identity server. That project was abandoned.


Related: "I Hope WebDAV Dies"[1]

1. https://news.ycombinator.com/item?id=10213657


Linux has great WebDAV support with the FUSE filesystem davfs2. I used it with my box.net account and it worked fine.

http://savannah.nongnu.org/projects/davfs2
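The mount is about as short as it gets (URL and mount point are placeholders; davfs2 prompts for credentials or reads them from /etc/davfs2/secrets):

  sudo mount -t davfs https://dav.example.com/remote/ /mnt/dav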


What about Gopher? Will nobody think of Gopher?


Actually I thought about Gopher (I even have my own client - http://runtimeterror.com/tools/gopher/ - although it only does text) since it basically behaves as FTP++ with abstract names (sadly most modern gopherholes treat it as hypertext lite by abusing the information nodes).

Gopher generally avoids most of FTP's pitfalls and it is dead easy to implement.


As a side note, there was a nice podcast about Gopher a few days ago at Techstuff [1]

[1] http://shows.howstuffworks.com/techstuff/what-was-gopher.htm


At least there's still Jigdo.


lftp (https://lftp.yar.ru/) works over HTTP (and more), and works just the way you would expect it to, i.e.:

  lftp http://ftp.debian.org/debian/


There's Bittorrent. It's anonymous and not web based.


and it does checksums for you.

edit: thinking about it, I'm not sure I agree with the anonymous part, considering the swarm can be monitored; the access log is essentially publicly distributed.


Neither is FTP, really; the user's IP is still logged somewhere, you just use a common user (anonymous) shared with everyone else. The modern name for such a feature would probably be something like 'no registration required'. It goes to show how much the meaning of the word 'anonymous' has changed over the last 30 years.


Not quite; the "currently accessing" list is public. While it is of course possible to make an access log from this with continuous monitoring, it's not possible to arbitrarily query historical data.


I wrote a simple SFTP server a while back that advertises no auth methods; plain SSH tools log right in without prompting for a password.

code: https://pypi.python.org/pypi/noauthsftp

handy for moving files around a network


FTP is an old protocol; it was good for its time, but HTTP is just better now.

Even though you can't have anonymous SFTP, you can have anonymous FTPS.



