brad's life
[ website | bradfitz.com ]

gearman demo; distributed map in Perl [Apr. 13th, 2005|11:08 pm]
Brad Fitzpatrick

We've been talking about writing a distributed job system for a while now. I'm pretty aware of what's out there, but you know me. (hate everything, picky, etc....)

So Whitaker and I spec'ed this out last week and I got to hacking on it today. Even with on/off interrupted hacking time all day, it works:

-- Multiple job servers track clients and workers. (scales out)

-- Clients hash requests to the "right" job server, or whoever's alive. There is no single right job server, but a job server can merge requests if both clients asked for it, so it's beneficial to map the same jobs to the same job servers. That is, if two callers both ask for job "slow_operation_and_big_result" to be done, it's only done once, even if the second client comes in 2 seconds into the computation.

-- Workers on connect announce what job types they're willing to do. They can also un-announce ("unown") job types later. (A rough sketch of the worker and client sides follows this list.)

-- Workers poll all job servers with a "grab_job" operation. If there are no jobs, workers announce they're going to sleep and select on a "noop" wake-up packet, which a server sends when it gets something the worker can do. (A job server can't just hand out the job, lest there be races between multiple job servers, and you want low latency: it'd be bad for two job servers to send a request to the same worker while another worker sat idle... so all the job servers can do is wake up workers.)

-- Clients can submit lots of jobs, get handles for them, and wait for their results (and status updates) in parallel

-- A client can submit a "background" job, where a handle is returned but the client isn't subscribed to status notifications. The client just wants it done sometime soon; it isn't sticking around for the result.

-- The job server makes no promises about things eventually getting done, durability, etc. That's all handled at different layers. (For instance: for non-background jobs, the client module can be told that the job is idempotent, so on failure it should be retried and the failure hidden from the client.)
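
To make the worker/client split concrete, here's a rough sketch of what the two sides could look like in Perl. The module and method names (Gearman::Worker, Gearman::Client, register_function, new_task_set, and so on) are assumptions made for illustration; the post doesn't pin down the API.

# Sketch only: the module and method names below are guesses at the
# shape of the API described above, not a published interface.
use strict;

# --- worker process: announce abilities, then loop grabbing jobs ---
use Gearman::Worker;
my $worker = Gearman::Worker->new;
$worker->job_servers("sammy", "kenny");
$worker->register_function("slow_operation_and_big_result" => sub {
    my $job = shift;
    return scalar reverse $job->arg;   # stand-in for the slow work; the arg is an opaque blob
});
$worker->work while 1;   # grab_job from each server, or sleep until a "noop" wake-up

# --- client process: submit jobs, wait on results and status in parallel ---
use Gearman::Client;
my $client = Gearman::Client->new;
$client->job_servers("sammy", "kenny");
my $taskset = $client->new_task_set;
$taskset->add_task("slow_operation_and_big_result", "some args", {
    on_complete => sub { print "got: ${ $_[0] }\n" },
});
$taskset->wait;   # block until everything submitted above finishes

# fire-and-forget "background" job: a handle comes back, no status subscription
$client->dispatch_background("slow_operation_and_big_result", "some args");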

Anyway... I didn't want to go into all those details, because they're poorly explained. But it works, and one worker process I just wrote registers with the job server and announces it can help do a distributed map for Perl, using Storable.pm's CODE serialization (via B::Deparse).
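
The coderef part is stock Storable: with $Storable::Deparse and $Storable::Eval turned on, freeze() runs CODE refs through B::Deparse and thaw() rebuilds them with eval. A minimal round trip, leaving out all the dmap plumbing (this isn't DMap.pm's actual code):

use strict;
use Storable qw(freeze thaw);

$Storable::Deparse = 1;   # freeze CODE refs as their B::Deparse'd source
$Storable::Eval    = 1;   # rebuild them with eval on thaw

my $code = sub { "$_ = " . `hostname` };

# client side: one blob holding the code plus the items to map over
my $blob = freeze([ $code, [ 1 .. 10 ] ]);

# worker side: thaw and apply, just like map
my ($sub, $items) = @{ thaw($blob) };
my @results = map { $sub->() } @$items;
print @results;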

Observe:
sammy:server $ cat dmap.pl
#!/usr/bin/perl

use strict;
use DMap;
DMap::set_job_servers("sammy", "kenny");

my @foo = dmap { "$_ = " . `hostname` } (1..10);

print "dmap says:\n @foo";

sammy:server $ ./dmap.pl
dmap says:
 1 = sammy
 2 = papag
 3 = sammy
 4 = papag
 5 = sammy
 6 = papag
 7 = sammy
 8 = papag
 9 = sammy
 10 = papag
So it's like Perl's map, but the computations are spread out all over, and recombined in order.
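
For a sense of how little machinery that takes, here's a guess at what a dmap() built on the client sketch above could look like: one task per element, with each result slotted back in by index so the output keeps the original order. DMap.pm's actual internals aren't shown in the post, so the details (chunking, callback shapes) are illustrative only.

use strict;
use Storable qw(freeze thaw);
use Gearman::Client;          # assumed name, as in the earlier sketch

$Storable::Deparse = 1;
$Storable::Eval    = 1;

sub dmap (&@) {               # prototype so "dmap { ... } @list" parses like map
    my ($code, @items) = @_;

    my $client = Gearman::Client->new;
    $client->job_servers("sammy", "kenny");

    my @results;
    my $taskset = $client->new_task_set;
    for my $i (0 .. $#items) {
        my $arg = freeze([ $code, $items[$i] ]);
        $taskset->add_task("dmap", \$arg, {
            # write each result back by index, so order is preserved
            on_complete => sub { $results[$i] = ${ $_[0] } },
        });
    }
    $taskset->wait;           # the jobs run in parallel across the workers
    return @results;
}

The matching dmap-worker.pl would be the thaw-and-run half: register "dmap", thaw the blob, run the coderef against the item, and hand back the result.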

I can't wait for this to be production-quality so we can do things like parallel DB queries. (I especially love the request merging.)

Comments:
From: xaosenkosmos
2005-04-14 06:20 am (UTC)

Rambly pedanticism

This all sounds really neat, but I'm perplexed by the kenny/papag dichotomy in the test output. I think I know what's up there, since there's a nutjob machine at ibiblio that really thinks it's one hostname, but that hostname doesn't exist outside that one machine (yay jumping through RealServer licensing hoops!).

It still made me do a triple-take of "Wait, Brad's not the kind of guy to hard-code magic values like that. Well, maybe he is, but not in something this elegant-sounding, right? Oh, it must be weirdness."
From: brad
2005-04-14 06:24 am (UTC)

Re: Rambly pedanticism

I was hoping somebody would get confused by that:

sammy/kenny: are the job servers.

sammy/papag: are where the workers happened to be running.
From: xaosenkosmos
2005-04-14 06:32 am (UTC)

Re: Rambly pedanticism

Oooh... cleverness. This gets even better =)
From: evan
2005-04-14 06:23 am (UTC)
That is, if two callers both ask for job "slow_operation_and_big_result" to be done, it's only done once, even if the second client comes in 2 seconds into the computation.

What if slow_operation_and_big_result mutates stuff? Shouldn't running it twice cause it to run, well, twice? (Imagine a query like "get_current_user_count" that should be increasing with each successive call.)


Someday I'll recruit you and you'll get to see the cool infrastructure we have for these sorts of things.
From: evan
2005-04-14 06:24 am (UTC)
Oh and: that looks cool! I can't wait to see the final result.
From: brad
2005-04-14 06:25 am (UTC)
What if slow_operation_and_big_result mutates stuff?

Then that job shouldn't be registered with merging enabled. (currently the client is the one that requests merging or not, but it should really be on job register....)
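
(For what it's worth, in the client sketch above that per-request opt-in might look like a coalescing key passed with the task; the method and option names here are guesses, not from the post:)

use strict;
use Gearman::Client;   # assumed name, as in the sketches above

my $client = Gearman::Client->new;
$client->job_servers("sammy", "kenny");

# Guessed knob: requests carrying the same coalescing key while the job is
# already in flight get merged into that one run instead of starting another.
my $result = $client->do_task("slow_operation_and_big_result", "args", {
    uniq => "slow_operation_and_big_result",
});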
From: jamesd
2005-04-16 03:11 am (UTC)
Let Jeff Dean talk about the cool Google stuff, of which there is much.
From: evan
2005-04-16 06:57 am (UTC)
Jeff only gets to talk about the public bits. :)
From: jamesd
2005-04-17 11:18 pm (UTC)
Those public bits are jealous-making enough. :)
From: (Anonymous)
2005-04-14 06:23 am (UTC)

MapReduce

Did you just make a new version of Google's MapReduce (http://labs.google.com/papers/mapreduce-osdi04.pdf)? Or does it do something more?
From: brad
2005-04-14 06:27 am (UTC)

Re: MapReduce

Kinda? The DMap part resembles MapReduce, but DMap is just one application atop Gearman:

$ wc -l DMap.pm dmap.pl dmap-worker.pl
  77 DMap.pm
  13 dmap.pl
  73 dmap-worker.pl
 163 total

And not a very big application at that.
From: lithiana
2005-04-14 04:49 pm (UTC)
What exactly is a 'job'? Do you send Perl coderefs over the network? (I didn't even know that's possible.)
From: brad
2005-04-14 05:02 pm (UTC)
The workers register some string as a job (say: "send_mail"), and if callers request that job be run, the arguments (a big blob) are passed to the worker running send_mail, and it's that job's job to decode the args.

So I made a job called "dmap" which, yes, serializes the coderef, and then deserializes it on the workers.
From: taral
2005-04-14 09:46 pm (UTC)
What does this do?

$x = 1;
dmap { $_ + $x } (1..10);
From: brad
2005-04-14 10:03 pm (UTC)
Not work I'm sure.
From: taral
2005-04-17 04:56 am (UTC)
Why not? Lexical closures are the bomb.