Dear Slashdot comment flood: I didn't write the summary in the Slashdot story. The submitter did. I know the disks themselves don't handle an fsync().
I know fsync() only tells the operating system to flush. This script tests whether fsync works end-to-end. A database in userspace can only fsync... it can't send special IDE or SCSI "flush your buffers" commands to the disks. So that's what I care about and what I want to test: that a database can be Durable.
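To make "Durable" concrete, this is the entire durability lever a userspace database has (a bare-bones sketch, not any particular database's code): write the data, then fsync() and don't acknowledge the commit until fsync() returns success.

/* Bare-bones durability sketch: append a commit record, then fsync().
 * If every layer below is honest, the record is on the platters when
 * fsync() returns.  If some layer fakes it, this code has no way to
 * know -- which is exactly what the script tests. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *rec = "commit record\n";
    int fd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (write(fd, rec, strlen(rec)) != (ssize_t)strlen(rec)) {
        perror("write");
        return 1;
    }

    /* Ask the OS to push the data all the way down and not return
     * until it's done.  This is the only durability knob userspace
     * gets. */
    if (fsync(fd) != 0) {
        perror("fsync");
        return 1;
    }

    close(fd);
    return 0;
}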
The problem is manufacturers shipping SCSI disks with a dangerous option (hardware write-caching) enabled by default. It makes sense for consumer ATA stuff, but for SCSI disks that already have reliable TCQ, there's much less point. And any respectable raid card should just disable write-back caching on the disks if the raid card has its own nvram-backed cache, but LSI doesn't anymore (they used to, but stopped).
And I'm glad Linux is finally starting to tell the disks to flush on an fsync. But months ago I was stuck with databases that couldn't survive power outages and needed a way to test whether everything from the filesystem to the block device driver to the disks themselves was doing what the database expected when it did an fsync. That's no single component's job alone, so I needed to test everything together.
Remember my disk-checker program I wrote about before? I'd never released it because it was too hard to use, but now it's dead simple, so here it is:
Run it and be amazed at how much your disks/raid/OS lie. ("lie" = an fsync doesn't work)
It seems everything from PATA consumer disks to high-end server-class SCSI disks lie like crazy. Yes, that includes SATA there in the middle. I'll discuss fixing your storage components in a second.
In a nutshell, run it like this:
Tester machine (the machine that won't crash):
$ diskchecker.pl -l
And then just let it chill. (the -l is for "listen") This program will listen (on port 5400 if no port number follows -l) and will write one tiny file per host to /tmp/. It can be run as any user.
Machine being tested (the machine you're going to pull the power cable on):
$ diskchecker.pl -s TESTERMACHINE create test_file 500
That creates a 500 MB file named "test_file" and reports everything it's about to do, and everything it has done, to TESTERMACHINE (which can be an IP or hostname).
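If you're curious what the test is really exercising, here's a rough sketch of the idea in C (this is not diskchecker.pl's actual code or wire protocol, just the shape of it): before each write the client tells the listener what it's about to write, then writes and fsync()s, then reports the block as durable. The listener appends those reports to its per-host file in /tmp, so after the plug gets pulled there's a surviving record of every block that was claimed durable; anything claimed but missing from the file means some layer under fsync() lied.

/* Hypothetical client loop (not the real script): write random 4KB
 * blocks into the test file, fsync() each one, and report to the
 * listener before and after.  Runs until you pull the plug. */
#define _XOPEN_SOURCE 700
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define BLOCK 4096
#define FILESIZE (500L * 1024 * 1024)   /* the 500 MB test_file */

static void report(int sock, const char *msg)
{
    if (send(sock, msg, strlen(msg), 0) < 0) {
        perror("send");
        exit(1);
    }
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s tester-ip test-file\n", argv[0]);
        return 1;
    }

    /* Connect to the tester machine on port 5400, like the listener. */
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5400);
    inet_pton(AF_INET, argv[1], &addr.sin_addr);
    if (connect(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }

    int fd = open(argv[2], O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char block[BLOCK], msg[128];
    for (long seq = 0; ; seq++) {
        long offset = (random() % (FILESIZE / BLOCK)) * BLOCK;
        memset(block, 'A' + (seq % 26), sizeof block);

        /* 1. Tell the tester what we're about to write. */
        snprintf(msg, sizeof msg, "writing seq=%ld offset=%ld\n", seq, offset);
        report(sock, msg);

        /* 2. Write it and fsync(). */
        if (pwrite(fd, block, sizeof block, offset) != BLOCK) { perror("pwrite"); return 1; }
        if (fsync(fd) != 0) { perror("fsync"); return 1; }

        /* 3. Only now claim durability.  If power dies after this
         * message but the block isn't in the file after reboot,
         * something faked the fsync. */
        snprintf(msg, sizeof msg, "durable seq=%ld offset=%ld\n", seq, offset);
        report(sock, msg);
    }
}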
Now, pull the power cable on the machine being tested. Don't turn it off nicely. Don't just control-C the program. Wait a couple seconds, then plug your testee machine back in and reboot it. When it's back up, do:
$ diskchecker.pl -s TESTERMACHINE verify test_file
If the server process is still running, the machine you just killed will connect to the server and get back the information about what's supposed to be where. The client will then verify it and produce an error report.
What you should see is:
Total errors: 0.
But you probably won't. You'll probably see an error count and a histogram of errors per seconds-before-crash. Most RAID cards lie (especially LSI ones), some OSes lie (rare), and most disks lie (doesn't matter how expensive or cheap they are). They lie because their competitors do and they figure it's more important to look competitive because the magazines only print speed numbers, not reliability stats. They must figure people who care about their disks working know how to test/fix their disks.
Ways to maybe fix your disk:
hdparm -W 0 /dev/hda -- worked on a crap office PATA disk (and it failed otherwise)
scsirastools -- needed on lots of SCSI disks. You'll probably have to remove your SCSI disks from your RAID card and fix the disks directly, since RAID cards very often won't disable it for you.
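(If you just want to check first, running hdparm -W /dev/hda with no value argument should print whether write-caching is currently on, so you can confirm the change stuck after setting it to 0.)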
Isn't this... caching? By pulling the plug, you kill whatever data is pending write in the drive's volatile RAM cache, and that's what they have UPSes and stupidly expensive battery-backup-capable RAID controllers for.
Re-read my post. I'm testing whether the entire storage stack respects the fsync() system call. All the way from the OS to the drivers to the raid array to the disks themselves.
The fsync() system call says: "Stop caching, it's very important that everything I've given you now must be on disk, and don't return to me with an answer until it has."
So this program tests that your fsync() works as advertised and that no part of the storage stack is faking the fsync. (it's usually the disks themselves, against spec, and unbeknownst to the operating system, which thinks the disks are behaving)
Otherwise caching's just fine and it's done all over. My complaint is when it's done when you tell it not to.
Sweet, Brad. Perfect timing. I've got an ATA-over-Ethernet evaluation unit from Coraid on its way. I'm planning several tests on this equipment for a demo and presentation up here in June (background: see my rant and LUG discussion). It'll be fun to add this test to the mix (and pull the plug at different places in the connection to see which part does most of the lying). Thankee much.
Some Windows-related details of disk caching that I've found out about (and may be useful to those of you using Windows).
Under Windows NT (that's 2k, XP, 2k3 as well - remember those are NT 5.x), you can disable disk caching by going to the properties for the hard disk and then to the Settings/Properties/Policies tab (it's named differently under different versions but does the same thing). By default the write cache is enabled, except on disks containing the Active Directory. It is also disabled by default for removable disks (USB, memory cards (but not all!), iPods, etc.) and apparently anything it thinks is SCSI (this includes some IDE/SATA controllers). I think Windows tells the disk itself to explicitly disable the disk cache as well.
When using CreateFile(), you can set the FILE_FLAG_WRITE_THROUGH flag, which tells Windows to push writes through its cache to the device instead of letting them sit there, or you can set FILE_FLAG_NO_BUFFERING, which bypasses Windows' caching entirely but has some restrictions (buffers have to be aligned and reads/writes done in multiples of the sector size). These are set in the dwFlagsAndAttributes parameter.
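A bare-bones C sketch of those flags in action (just an illustration; the filename is made up). FlushFileBuffers() is roughly the Windows counterpart of fsync():

/* Open with write-through so writes don't linger in Windows' cache,
 * write a record, then call FlushFileBuffers() to ask Windows to push
 * everything to the device.  As with fsync(), whether it truly hits
 * the platters still depends on the drive's write cache behaving. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    HANDLE h = CreateFileA(
        "journal.log",
        GENERIC_WRITE,
        0,                      /* no sharing */
        NULL,
        OPEN_ALWAYS,
        FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
        NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    const char *rec = "commit record\r\n";
    DWORD written = 0;
    if (!WriteFile(h, rec, (DWORD)strlen(rec), &written, NULL)) {
        fprintf(stderr, "WriteFile failed: %lu\n", GetLastError());
        CloseHandle(h);
        return 1;
    }

    if (!FlushFileBuffers(h)) {
        fprintf(stderr, "FlushFileBuffers failed: %lu\n", GetLastError());
        CloseHandle(h);
        return 1;
    }

    CloseHandle(h);
    return 0;
}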
so the scsirastools are used to disable write caching on the drives themselves? Going to run this later today with a freshly minted dell running freebsd 5.4, I wonder if there are any similar tools for it.
I'll talk to my boss and see if he'll allow me to create a small space on our webserver for a "name and shame" type thing that you're talking about (if no one else has done it, that is). It might not be a Wiki (just a static webpage at first) but it might be a good excuse for me to work on getting the company Wiki up and running 80).
S. Garcia SLM Industries ~NOSPAM~ steven &DOT& garcia %AT% slmindustries ~DOT~ com ^SPAMISEVIL^
It is strange that after finding out that fsync() never functions the way you expect, you didn't doubt your assumptions or your code! Your assumption is wrong! fsync() is not required to flush the hard-disk cache. Its function is just to flush all the buffers inside the OS and the device driver to the hard disk, and it does that perfectly. As you see, these are two different things: flushing to the device and flushing to the disk. fsync() flushes to the device, but you expect it to flush to the disk (which it does not). Flushing to the disk usually can only be done by a system-level program (like a device driver), not a user program.
Yes, but that's all a userspace program like a database can do. And while I know that fsync() says it can't promise it makes it to disk if write-caching is enabled, this tool is a great way to see if your disk's write-caching is indeed on.
The old version of this script used the raw(8) interface to bypass all filesystems and kernel buffers, but it produced the same results as the fsync version, so I made it just use fsync so it'd be easier to use. (you don't need to have a spare block device handy on the disk(s) being tested....)
I needed this two months ago! I suffered massive filesystem corruption due to exactly this problem. Well, this and the fact that ext3 still sucks. And why doesn't Red Hat do reiserfs anyway?