
Why wouldn't they work on improving NTFS performance instead?


Much of the cost is imposed by NTFS's semantics and security model; there's no silver-bullet fix that wouldn't turn it into a different filesystem with different semantics and/or a different on-disk representation.

At one point they planned to replace it with a database filesystem, but it proved too complicated and was abandoned. That was probably the end of replacement work on NTFS. https://en.wikipedia.org/wiki/WinFS


WinFS was never designed as a replacement, since it rode on top of NTFS. It was a fancy SQL Server database that exposed files as .NET objects for API interaction.

Sans the API piece, think of it as storing blobs in SQL Server, much like SharePoint does.

I was lucky enough to play around with beta 1. Not much you could do with it, though.


> Why wouldn't they work on improving NTFS performance instead?

There are other old-school techniques that are far easier to implement and maintain, such as using RAM drives/partitions. Expensing 32 GB of RAM is simpler than maintaining weird NTFS configurations.
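On Linux the same idea is a one-liner with tmpfs (on Windows you'd reach for a third-party RAM-disk driver instead). A minimal sketch, assuming a 32 GB machine and a hypothetical `/mnt/rambuild` mount point:

```shell
# Back the build directory with RAM instead of the on-disk filesystem.
# Requires root; contents vanish on unmount or reboot, so only put
# regenerable build artifacts here.
mkdir -p /mnt/rambuild
mount -t tmpfs -o size=32g tmpfs /mnt/rambuild
```

Pointing the build's output directory at that mount takes filesystem metadata overhead largely out of the picture without touching NTFS (or ext4) tuning at all.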

Splitting your project into submodules/subpackages also helps amortize the impact of long build times. You can run multiple builds in parallel and then have a final build step to aggregate them all. Everyone can live with builds that take 5 minutes instead of 30.
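That fan-out/fan-in pattern can be sketched with plain shell job control (the module names and the `build` function body here are stand-ins for your real build commands):

```shell
# Stand-in for a real submodule build (e.g. a gradle/cargo/msbuild invocation).
build() {
  echo "building $1"
  sleep 1
}

# Fan out: kick off each submodule build as a background job...
for mod in core ui api; do
  build "$mod" &
done

# ...fan in: block until every background build has finished.
wait

# Final aggregation step runs only after all submodules are done.
echo "aggregating artifacts"
```

Most build tools offer the same thing natively (e.g. `make -j`), but the shape is always the same: independent pieces in parallel, one cheap serial step at the end.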


Even Microsoft doesn't touch the NTFS code (if they haven't lost it). All the new filesystem features, such as the newer compression methods, are implemented in layers above NTFS.


Because they likely (a) aren't allowed to, and/or (b) don't have the filesystem-code experience or understanding.




