Sure, everybody mounts their filesystems async, but that mount option only affects part of the I/O path, and doesn't affect flushes that end up calling sync.
Depending on the application, it might be beneficial to mount the filesystem sync on top of the async block device.
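For what it's worth, that arrangement needs nothing special on the fs side; it's just an ordinary sync mount over the abd device. A sketch via mount(2), with made-up device and mountpoint names:

#include <sys/mount.h>

/* Mount the fs sync on top of the async block device: MS_SYNCHRONOUS
 * makes every file write synchronous at the fs layer, and abd
 * underneath absorbs the latency. Names are illustrative. */
int mount_sync_on_abd(void)
{
    return mount("/dev/abd0", "/mnt/data", "ext3", MS_SYNCHRONOUS, "");
}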
Here's how I see abd working:
-- you configure a maximum amount of unpageable memory to use for the driver's caches / data structures
-- syncs are ignored: always lie and report success.
-- writes are queued in memory, and an in-memory mapping from each written sector to its new value is stored.
-- another thread performs the writes asynchronously, removing each write from the queue and mapping once it completes (see the sketch after this list).
-- reads check the mapping first, falling back to passing the request to the lower-layer block device
-- means of checking the number of uncommitted writes via ioctls/sysfs
-- means of using ioctl/sysfs to tell the block device to stop accepting new async writes and take all writes sync, accepting them at a slow rate or not at all (configurable), while the driver works on flushing all queued writes (see the flush-mode sketch below)
-- driver could issue tons of writes at a time, so the lower-level block device could do nice write-out scheduling. (Note: learn more about TCQ, aio)
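Here's a minimal userspace sketch of the queue/mapping/flusher design above (all names hypothetical; malloc and I/O error handling omitted; a real driver would live in the kernel block layer and use a hash table rather than a linear scan):

#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define SECTOR_SIZE 512

struct pending_write {
    uint64_t sector;                  /* key into the in-memory mapping */
    unsigned char data[SECTOR_SIZE];  /* new contents for that sector */
    struct pending_write *next;
};

/* One FIFO list doubles as the write queue and the sector mapping. */
static struct pending_write *queue_head, *queue_tail;
static unsigned long n_uncommitted;   /* exposed via ioctl/sysfs in a real driver */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;
static int backing_fd;                /* the lower-layer block device */

/* Sync request: always lie and report success immediately. */
int abd_sync(void) { return 0; }

/* Write path: stash the new sector contents in memory and return. */
void abd_write(uint64_t sector, const void *buf)
{
    struct pending_write *w = malloc(sizeof *w);
    w->sector = sector;
    memcpy(w->data, buf, SECTOR_SIZE);
    w->next = NULL;
    pthread_mutex_lock(&lock);
    if (queue_tail) queue_tail->next = w; else queue_head = w;
    queue_tail = w;
    n_uncommitted++;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

/* Read path: serve the newest uncommitted value if one exists,
 * otherwise fall through to the backing device. */
void abd_read(uint64_t sector, void *buf)
{
    struct pending_write *hit = NULL;
    pthread_mutex_lock(&lock);
    for (struct pending_write *w = queue_head; w; w = w->next)
        if (w->sector == sector)
            hit = w;                  /* last match is the newest write */
    if (hit)
        memcpy(buf, hit->data, SECTOR_SIZE);
    pthread_mutex_unlock(&lock);
    if (!hit)
        (void)pread(backing_fd, buf, SECTOR_SIZE, (off_t)sector * SECTOR_SIZE);
}

/* Flusher thread: commit the oldest write, then unlink it. The entry
 * stays visible to readers until the pwrite completes, so a read
 * mid-flush never falls through to stale data on the backing device. */
void *abd_flusher(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!queue_head)
            pthread_cond_wait(&nonempty, &lock);
        struct pending_write *w = queue_head;
        pthread_mutex_unlock(&lock);
        (void)pwrite(backing_fd, w->data, SECTOR_SIZE, (off_t)w->sector * SECTOR_SIZE);
        pthread_mutex_lock(&lock);
        queue_head = w->next;         /* remove only after the commit */
        if (!queue_head) queue_tail = NULL;
        n_uncommitted--;
        pthread_mutex_unlock(&lock);
        free(w);
    }
}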
Anybody know of any async block device driver tied into UPS notification daemons?
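I don't know of one, but wiring it up could be as small as a hook the UPS daemon runs on power loss, poking the flush-mode knob from the list above. The sysfs path and semantics here are entirely made up:

#include <stdio.h>

/* Hypothetical on-battery hook: flip the (made-up) sysfs attribute so
 * abd stops accepting new async writes and drains its queue before
 * the UPS runs out. */
int abd_enter_flush_mode(void)
{
    FILE *f = fopen("/sys/block/abd0/abd/flush_mode", "w");
    if (!f)
        return -1;
    fputs("1\n", f);    /* 1 = reject new async writes, flush everything */
    return fclose(f);
}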
In my ideal world I'd layer block devices like so:
Application (database)
async block device
memcache block device
raw block device (locally attached RAID driver)
Instant writes, really fast reads (with big cache), reliable storage. No super-pricey "appliances".