May 27th, 2004


pig latin: perl vs. C++

I went to help out at my littlest brother's high-school programming class today. Freshman intro to programming... they learn C++. Painful, but I can't think what the best language to learn on would be. I suppose simplified C++ isn't so bad... mix of high-level and seeing how the machine actually works, kinda.

Anyway, their project was to write an English to Pig Latin converter. I explained how to set up one function w/ a little state machine to tokenize the sentence into words, another function to pigify each word once its boundaries are found (including "qu" support and the -ay vs. -way suffixes), etc. Also showed them how to use the debugger. (Visual Studio)
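
(For flavor, here's a rough C sketch of what that pigify step can look like -- my own names and my own reading of the rules, not the class's code:)

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Move the leading consonant cluster (treating a trailing "qu" as part
   of it) to the end and add "ay"; vowel-initial words just get "way". */
static void pigify(const char *word, char *out, size_t outlen)
{
    size_t i = 0, len = strlen(word);

    while (i < len && !strchr("aeiouAEIOU", word[i]))
        i++;                                    /* skip leading consonants */

    if (i > 0 && i < len &&
        tolower((unsigned char)word[i - 1]) == 'q' && word[i] == 'u')
        i++;                                    /* "queen" -> "eenquay" */

    if (i == 0)
        snprintf(out, outlen, "%sway", word);   /* "apple" -> "appleway" */
    else
        snprintf(out, outlen, "%s%.*say", word + i, (int)i, word);
}

int main(void)
{
    const char *words[] = { "pig", "latin", "queen", "apple" };
    char buf[64];
    for (size_t i = 0; i < sizeof(words) / sizeof(words[0]); i++) {
        pigify(words[i], buf, sizeof(buf));
        printf("%s -> %s\n", words[i], buf);
    }
    return 0;
}

Feed it the words the tokenizer spits out and that's most of the assignment.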

So after their 3 pages of code (small monitors) I showed them how it's just one line in Perl and walked them through each character of the Perl version briefly. *evil grin*

today's rage #1: portable advisory file locking

So at work we made this little network lock daemon, and a client that can take out locks using all of the daemons (or however many of them are alive). Tiny, tiny, works great.

So then we're like, "Oh, we need a dumb fallback method for people on single hosts. Let's just do flock or fcntl or lockf or something.... that'll be easy...."

Yeah, right.

Either we're all crazy, or flock/fcntl/lockf are a total pain in the ass. Our lock stress tester (10 forked children fighting for a few seconds over a small number of locks) works perfectly with our network lock client/daemon..... the synchronized code (an O_CREAT|O_EXCL open + unlink) never fails.
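
The synchronized bit is basically this shape (a sketch of the idea, not the literal tester code; the path is made up):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* While we believe we hold lock N, exclusively create a sentinel file
   for N.  If that open ever fails with EEXIST, two processes were
   inside the same lock at once and the locking layer is broken. */
static int critical_section(int n)
{
    char path[64];
    snprintf(path, sizeof(path), "/tmp/locktest.%d", n);   /* made-up path */

    int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0644);
    if (fd < 0) {
        perror("mutual exclusion violated");
        return -1;
    }

    usleep(1000);       /* hold the "resource" briefly */

    close(fd);
    unlink(path);       /* next holder gets to create it again */
    return 0;
}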

But once we switch to using local-machine locking, reliability goes to hell.

What's wrong here?
http://www.danga.com/temp/flock-test.c

It seems fcntl and flock locks each get released at times I don't expect, and I can't get it right with either. It only works if I comment out the line after BIZARRE in the source above (i.e. never unlinking the file I flock).

Where's the race?
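
One classic culprit with the flock-a-path-then-unlink pattern (no idea if it's exactly what's biting here): between a waiter's open() and its flock(), the holder unlinks the file and a third process creates a fresh one under the same name, so two processes end up holding "exclusive" locks on two different inodes that share one path. The usual defensive acquire loop looks something like this (a sketch, not the code from flock-test.c):

#include <sys/file.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Lock the fd, then confirm the path still names the inode we locked.
   The matching release just closes the fd and never unlinks the file. */
static int acquire(const char *path)
{
    for (;;) {
        int fd = open(path, O_CREAT | O_RDWR, 0644);
        if (fd < 0)
            return -1;

        if (flock(fd, LOCK_EX) < 0) {
            close(fd);
            return -1;
        }

        struct stat st_fd, st_path;
        if (fstat(fd, &st_fd) == 0 && stat(path, &st_path) == 0 &&
            st_fd.st_dev == st_path.st_dev && st_fd.st_ino == st_path.st_ino)
            return fd;      /* we locked the file the name still points at */

        close(fd);          /* lost a race with an unlinker; retry */
    }
}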

today's rage #2: NFS "nohide" export option

Second frustrating item of the day:

$ man 5 exports
....
nohide
This option is based on the option of the same name provided in IRIX
NFS.  Normally, if a server exports two filesystems one of which is
mounted on the other, then the client will have to mount both
filesystems explicitly to get access to them.  If it just mounts the
parent, it will see an empty directory at the place where the other
filesystem is mounted.  That filesystem is "hidden".

Setting the nohide option on a filesystem causes it not to be hidden,
and an appropriately authorised client will be able to move from the
parent to that filesystem without noticing the change.

However, some NFS clients do not cope well with this situation as, for
instance, it is then possible for two files in the one apparent
filesystem to have the same inode number.

The nohide option is currently only effective on single host exports.
It does not work reliably with netgroup, subnet, or wildcard exports.

This option can be very useful in some situations, but it should be
used with due care, and only after confirming that the client system
copes with the situation effectively.

The option can be explicitly disabled with hide.

Why can't it work with a wildcard? :-(

Instead, I tried the ultra-ghetto solution of remounting the nohidden merged filesystem locally (explicitly exporting all the filesystems back to 127.0.0.1), and then re-exporting that merged mount.... no go. But I'm kinda glad, as that'd be ugly.

So instead do I need each web node to mount every disk? Lame. I'd prefer one mount per storage host. I suppose the mounting will be automated anyway (go whitaker!), but I'd rather each web node have one mount per storage server than 14. (There are 14 logical devices per storage node.)

Alternatively, I can have d*w exports per storage node (d == disks, w == webnodes), but that's lame too.
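
To make the d*w thing concrete, roughly what the two options look like in /etc/exports (paths and host names made up):

# what I'd like: one wildcard export per storage node, children nohide
/vol          *.example.com(rw)
/vol/disk01   *.example.com(rw,nohide)     # but nohide isn't reliable here
# ... disk02 through disk14 ...

# what the single-host restriction forces: every disk listed for every web node
/vol          web01(rw) web02(rw) web03(rw)
/vol/disk01   web01(rw,nohide) web02(rw,nohide) web03(rw,nohide)
# ... times 14 disks, times however many web nodes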

Is there some way to create a merged filesystem (yet with 14 distinct filesystems backing it) which I can then export, without having to futz with nohide?