"you can't have more than 64,000 objects in a folder in S3 - even though S3 doesn't have folders."
Is this for real, or are these stories made up? All documentation I've read about S3 suggests that it does not have any file count limitations. The timeline of Togetherville suggests that this story took place between 2008 and 2010. Did S3 have a limit back then that they lifted?
There has never been a limit on the number of objects you can store in a single S3 bucket. Some of our customers have millions of objects in a single bucket.
When you store objects at this scale, you need to make sure that you have a good distribution of keys across the namespace, and you should think twice before writing code that lists the entire bucket. In most use cases at this scale, metadata and indexing are handled by something other than S3.
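For illustration, one common way to get that key distribution (recommended in older S3 performance guidance, when partitioning was sensitive to key prefixes) is to prepend a short hash-derived prefix to each key. The helper below is a hypothetical sketch, not anything from the thread:

```python
import hashlib

def distributed_key(original_key: str, prefix_len: int = 4) -> str:
    """Prepend a hash-derived prefix so keys spread evenly across the keyspace.

    Hypothetical helper: sequential keys like uploads/photo-000001.jpg would
    otherwise cluster under one prefix; hashing scatters them.
    """
    digest = hashlib.md5(original_key.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}/{original_key}"

# Two sequential upload keys land under unrelated prefixes:
print(distributed_key("uploads/photo-000001.jpg"))
print(distributed_key("uploads/photo-000002.jpg"))
```

The trade-off is that hashed prefixes make ordered listing by original key name impossible, which is another reason to keep an index outside S3.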
The problem was with their temporary storage on a local filesystem, not S3. I'm sure there's some kind of limit to what S3 will allow you to store based on how they distribute data to servers, but 64k isn't it.
This was likely a limit in the local operating system/filesystem rather than in S3, especially if something like S3FS was mapping S3 onto a local directory tree. For example, ext3 caps subdirectories at roughly 32,000 per directory, and ext4 at 64,000 unless the dir_nlink feature is enabled, which lines up suspiciously well with the 64,000 figure in the story.
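If that hypothesis is right, the symptom is easy to confirm on the affected machine. A quick check on a Linux box (the `/tmp` path here is just an example, not the directory from the story) might look like:

```shell
# Inode usage per filesystem: IUse% at 100% means no new files can be
# created even if `df -h` still shows free space.
df -i /tmp

# Count entries in a suspect staging directory without sorting
# (-f skips the sort, which matters when the directory is huge).
ls -f /tmp | wc -l
```

A directory hitting a hard entry or link-count ceiling would fail with errors like `EMLINK` or `ENOSPC` on create, while S3 itself would keep accepting PUTs.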