Yes. Oracle has done this with ASM (Automatic Storage Management), for example, to eliminate filesystem overhead, implement fault tolerance, and provide a SQL-based storage management interface.
I once made a 'file system' to mount cpio archives (read-only) in an embedded system. Cpio is an extremely simple format to generate and edit (in code), and mounting it directly was very effective.
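If anyone wants to play with this: the "newc" (SVR4) cpio variant is just a 6-byte magic followed by fixed-width ASCII-hex header fields, so walking an archive by hand takes a few dozen lines. A minimal, untested sketch (error handling pared down, archive assumed fully loaded in memory):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <stdlib.h>

/* Each header field is fixed-width ASCII hex with no terminator. */
static uint32_t hexfield(const char *p, int width) {
    char tmp[9] = {0};
    memcpy(tmp, p, width);
    return (uint32_t)strtoul(tmp, NULL, 16);
}

static void walk_cpio(const char *buf, size_t len) {
    size_t off = 0;
    while (off + 110 <= len) {                /* newc header is 110 bytes */
        const char *h = buf + off;
        if (memcmp(h, "070701", 6) != 0)      /* "newc" magic */
            break;
        uint32_t filesize = hexfield(h + 6 + 6 * 8, 8);   /* 7th field  */
        uint32_t namesize = hexfield(h + 6 + 11 * 8, 8);  /* 12th field */
        if (off + 110 + namesize > len)
            break;
        const char *name = h + 110;           /* NUL included in namesize */
        if (strcmp(name, "TRAILER!!!") == 0)  /* end-of-archive marker */
            break;
        /* Header + name is padded to a 4-byte boundary, then the file
         * data follows, itself padded to a 4-byte boundary. */
        size_t data_off = (off + 110 + namesize + 3) & ~(size_t)3;
        printf("%s (%u bytes at offset %zu)\n", name, filesize, data_off);
        off = (data_off + filesize + 3) & ~(size_t)3;
    }
}
```

The padding rules are the only fiddly part; everything else is literally printf/scanf-able, which is why generating these archives in code is so painless.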
I suspect operating on block storage directly may be both easier and more reliable for databases, since something like 75% of the complication in writing transactional I/O software is working around the kernel's behavior.
The kernel's fsyncing behavior is one thing, but just relying on a massive amount of fragile C code running in the kernel is a significant liability in itself. Especially if your software is a centralized database: a kernel crash or panic brings down everything.
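To make the fsync point concrete, here's roughly the "fsync dance" a transactional store has to get right just to durably create a single file under POSIX (paths are illustrative):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

static void die(const char *msg) { perror(msg); exit(EXIT_FAILURE); }

int main(void) {
    const char *path = "/data/db/wal.0001";   /* hypothetical WAL segment */
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
    if (fd < 0) die("open");

    const char buf[] = "record";
    if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) die("write");

    /* Flush the file's data and metadata to stable storage. If fsync
     * fails, the page cache may already have dropped the dirty pages,
     * so retrying is NOT safe -- the only sound response is to treat
     * the write as lost (the "fsyncgate" problem). */
    if (fsync(fd) < 0) die("fsync file");
    if (close(fd) < 0) die("close");

    /* The directory entry is separate metadata: without an fsync of
     * the parent directory, the file can vanish after a crash even
     * though its contents were synced. */
    int dfd = open("/data/db", O_RDONLY | O_DIRECTORY);
    if (dfd < 0) die("open dir");
    if (fsync(dfd) < 0) die("fsync dir");
    close(dfd);
    return 0;
}
```

And even this is the easy case; rename-based atomic replacement, error semantics that differ per filesystem, and write reordering all add further workarounds on top.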
Yes, and also the traditional answer was that the kernel handles weird and complicated hardware and can talk to RAID controllers properly. But nowadays hardware has much less variance, and RAID is rare (and arguably unnecessary for a direct-I/O database).
I think it'd be viable for an enterprise-y database to do I/O directly over NVMe. Imagine the efficiency and throughput gains you could get from a database that (1) has a unified view of memory allocation in the system and (2) directly performs its page-level I/O on the storage devices.
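For a rough idea of what (2) looks like at the syscall level on Linux, here's a minimal O_DIRECT read sketch. The device path and 4096-byte page size are assumptions; a real engine would query the device's logical block size and probably drive this through io_uring instead of blocking pread:

```c
#define _GNU_SOURCE          /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SZ 4096

int main(void) {
    /* O_DIRECT bypasses the page cache, but in exchange the buffer,
     * offset, and length must all be block-aligned. */
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* The buffer itself must be aligned, so plain malloc won't do. */
    void *page;
    if (posix_memalign(&page, PAGE_SZ, PAGE_SZ) != 0) {
        perror("posix_memalign"); return 1;
    }

    /* Read one page at an aligned offset; the kernel DMAs straight
     * into our buffer with no page-cache copy in between. */
    ssize_t n = pread(fd, page, PAGE_SZ, 0);
    if (n != PAGE_SZ) { perror("pread"); return 1; }

    printf("read %zd bytes directly from the device\n", n);
    free(page);
    close(fd);
    return 0;
}
```

At that point the database's buffer pool really is the only cache in the system, which is exactly what gives you the unified view of memory in (1).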
Wow, this comment just made me fall down a rabbit hole. I've only just surfaced. The Kaitai project actually comes with some pre-defined bindings for cpio, which meant I was up and running very quickly.