rdc12's comments

The drive can infer that an LBA hasn't been mapped, since it won't be present in the FTL; there is no need for the OS to inform the drive of this.


How could the drive know? TRIM commands are the only way to "free" a sector/block for writing. The drive might have been written to during factory testing, so a block might not be empty even before first use.
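For context, the OS-side half of this on Linux is the trim itself; a minimal sketch, assuming a mounted filesystem on a TRIM-capable device (the mount point is just an example, and both commands need root):

```shell
# One-shot: tell the drive which blocks the filesystem no longer uses.
sudo fstrim -v /
# Or enable periodic trims instead of running it by hand:
sudo systemctl enable --now fstrim.timer
```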


You could use some sort of disk quota system to make the filesystem artificially smaller than it actually is (trim after applying this change), or simply ensure that you don't exceed 80%-90% used space.
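A minimal sketch of the second approach, checking used space against a threshold; the mount point and the 90% cutoff are example values, not anything beyond the rule of thumb above:

```shell
#!/bin/sh
# Warn when a filesystem crosses a chosen used-space threshold.
MOUNT=/
THRESHOLD=90
# df's pcent column looks like " 42%"; strip everything but the digits.
USED=$(df --output=pcent "$MOUNT" | tail -1 | tr -dc '0-9')
if [ "$USED" -ge "$THRESHOLD" ]; then
    echo "warning: $MOUNT is ${USED}% full"
else
    echo "ok: $MOUNT is ${USED}% full"
fi
```

Run from cron, this gives an early nudge before the drive fills up enough to hurt write performance.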

It is also worth noting that many SSDs are over-provisioned by the manufacturer anyway; on those drives, manual over-provisioning might achieve very little.


The SSD maintains a translation table for all the virtual addresses exposed by the drive, mapping them to the underlying physical flash addresses. Any physical address not in that table is unallocated, and the drive can use it freely.
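A toy model of that lookup, with the table as a two-column "LBA, physical page" list (all numbers made up for illustration):

```shell
#!/bin/sh
# Toy FTL: each line maps a logical block address to a physical flash page.
ftl='0 17
1 4
7 23'

for lba in 1 3; do
    phys=$(printf '%s\n' "$ftl" | awk -v l="$lba" '$1 == l { print $2 }')
    if [ -n "$phys" ]; then
        echo "LBA $lba -> physical page $phys (live data, must be preserved)"
    else
        echo "LBA $lba unmapped (drive is free to reuse the space)"
    fi
done
```

The real table lives in the drive's firmware, of course; the point is just that "not in the table" is itself the free-space signal.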


So over-provisioning has to be done before any writes to the drive? What if I want to over-provision a used drive? Discard all blocks first?


With most SSDs, there's no special explicit step necessary to overprovision a device. Just trim/unmap/discard a range of logical block addresses, and then never touch them again. The drive won't have any live data to preserve for those LBAs after they've been wiped by the trim operation, and the total amount of live data it is tracking will stay well below the advertised capacity of the drive.

The easiest way to achieve this is to create a partition with no filesystem and use blkdiscard or similar to trim the LBAs corresponding to that partition.
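As a sketch, assuming you've already carved out a spare partition (/dev/sda3 here is purely a placeholder, and blkdiscard destroys everything on the range it's given):

```shell
# /dev/sda3 stands in for a partition created solely to be the
# overprovisioned range. This discards every LBA on it.
sudo blkdiscard /dev/sda3
# Leave the partition unformatted and unmounted from now on; the drive
# no longer has live data to track for those LBAs.
```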


My old T420 couldn't sleep overnight in Windows 7 (something would trigger a resume), but would sleep fine under Linux and FreeBSD.


Not sure if this is a more or less extreme option, but you could put the TV in a Faraday cage, or build a Faraday cage into your walls.


Directly, no; but if you moved the data to a new dataset with a command that preserves timestamps (rsync -a or zfs send/recv), that would work, and it could be run from a cronjob.

Compression settings are set at the dataset level, so applying this to only some files in a dataset isn't practical.
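A sketch of the timestamp-preserving copy, using temporary directories as stand-ins for the old and new datasets (cp -a is used here so it runs anywhere; rsync -a preserves timestamps the same way):

```shell
#!/bin/sh
# Stand-ins for the old and new datasets; real paths would be ZFS mounts.
src=$(mktemp -d)
dst=$(mktemp -d)

echo data > "$src/file"
touch -d '2020-01-01 00:00:00' "$src/file"   # give the file a known mtime

cp -a "$src/." "$dst/"    # -a preserves timestamps; rsync -a does too
stat -c '%y %n' "$src/file" "$dst/file"      # both lines show the same mtime

rm -rf "$src" "$dst"
```

Scheduled from cron it might look like `0 3 * * * rsync -a /pool/old/ /pool/new/`, with both paths hypothetical.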


From what I have heard, the amount of work needed to port native encryption to the original FreeBSD port of ZFS was actually what led to this project.


You could also hook up a GPIO pin to the wires for the power-on button.
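A sketch using libgpiod's v1 CLI; the chip name and line number are pure placeholders that depend on the board, and many power headers are active-low, so check the pinout before wiring anything:

```shell
# Hypothetical chip/line (gpiochip0, line 17) — consult your board's pinout.
# Drives the line low for ~200 ms, mimicking a momentary press of an
# active-low power button, then releases it.
gpioset --mode=time --usec=200000 gpiochip0 17=0
```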


Are the design files available for the rack-mounted version (in particular the clip/holder parts)?

I've been quite keen to build something similar (albeit with fewer nodes) for a little while. I find the homebrew blade-style cluster amusing for some reason.


I can think of an example in New Zealand where an old law (sedition) was essentially ignored for several decades. A couple of years after its first use in a modern setting (early 2000s), it was repealed.

