Work - brad's life — LiveJournal
Brad Fitzpatrick

Work [Apr. 26th, 2004|10:20 pm]
Brad Fitzpatrick
Whitaker and I profiled the hot code paths on the site today, starting with some simple SQL queries. Since we log the codepath name, CPU consumption, and wall time of each request, we can just ask the database, "What codepaths take the most CPU, on average?" Then we make a test wrapper to hit that codepath hundreds of times and profile. In the end we made some of our most popular codepaths about 25% faster.
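
The query is roughly this shape; the table and column names below are made up for illustration, not our real schema:

    use strict;
    use DBI;

    # Connect to the log database (DSN and credentials are placeholders):
    my $dbh = DBI->connect("DBI:mysql:database=requestlog;host=dbhost",
                           "user", "pass", { RaiseError => 1 });

    # Which codepaths take the most CPU, on average?
    my $rows = $dbh->selectall_arrayref(q{
        SELECT codepath, AVG(cpu_secs) AS avg_cpu, COUNT(*) AS hits
          FROM request_log
         GROUP BY codepath
         ORDER BY avg_cpu DESC
         LIMIT 20
    });

    printf "%-40s %10.4f %8d\n", @$_ for @$rows;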

We do this every time we start to run out of CPU. We also ordered 8 new dual Xeon 3.06 GHz (1MB cache) machines with 2GB of memory. This time, instead of diskless 1Us, we're getting 1Us with two SATA hot-swap bays on the front, but without disks. When we fill up our 8TB of disk with FotoBilder images, we'll add 4TB of 250GB disks to these 8 new machines (two disks per machine: 8 × 2 × 250GB = 4TB).

Speaking of which, I need to finish MogileFS. A PHP/Python(?) user wants to use it too, so I'm moving more logic from the library into the daemon, making the library really lightweight so bindings are simple. Fortunately I'll be able to reuse large parts of Perlbal (my load balancer) inside the MogileFS TCP listener.
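
The point is that a binding only has to open a socket and speak a simple line-based protocol; something like this sketch, where the command format is a placeholder rather than the final protocol:

    use strict;
    use IO::Socket::INET;

    # One request/response round trip; the daemon does the heavy lifting.
    sub mogile_request {
        my ($host, $cmd, %args) = @_;
        my $sock = IO::Socket::INET->new(PeerAddr => $host, Timeout => 2)
            or return undef;
        my $argstr = join '&', map { "$_=$args{$_}" } sort keys %args;
        print $sock "$cmd $argstr\r\n";
        my $resp = <$sock>;              # single-line response
        chomp $resp if defined $resp;
        return $resp;
    }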

The MogileFS daemon will run and maintain at least 3 forked processes (a rough sketch follows the list):

-- TCP listener (async, event-based, answering client requests)
-- Replicator
-- Lazy deleter
-- ...
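
In rough Perl terms, the parent just forks a child per role and respawns any that die; run_worker() below is a stub standing in for the real per-role loops:

    use strict;

    my %kids;                                     # pid => role
    my @roles = qw(listener replicator deleter);

    sub run_worker { my ($role) = @_; sleep 5 }   # stub for the real loops

    sub spawn {
        my ($role) = @_;
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {                          # child: do this role's work
            run_worker($role);
            exit 0;
        }
        $kids{$pid} = $role;                      # parent: track children
    }

    spawn($_) for @roles;

    # Reap dead children and restart their role so nothing stays down.
    while ((my $pid = wait()) > 0) {
        my $role = delete $kids{$pid};
        spawn($role) if defined $role;
    }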

And the daemon will run on multiple machines, so there's no single point of failure. The client will be configured to know about all of them, and will just pick one at random until one responds quickly.
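
Client-side, that's nothing more than trying the hosts in random order with a short connect timeout; roughly:

    use strict;
    use IO::Socket::INET;
    use List::Util qw(shuffle);

    # Return a socket to the first tracker that answers quickly.
    sub connect_any {
        my (@hosts) = @_;                    # e.g. ("10.0.0.1:7001", ...)
        for my $host (shuffle @hosts) {
            my $sock = IO::Socket::INET->new(PeerAddr => $host,
                                             Timeout  => 1);
            return $sock if $sock;           # first quick responder wins
        }
        return undef;                        # everything's down
    }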

I would've worked on the above today, but I got stuck doing the profiling. Damn sysadmin stuff. :-)

Comments:
From: jeffr
2004-04-26 11:14 pm (UTC)
What sorts of optimizations did you do? Are they mostly in queries or code? What actually takes up the most CPU in running LiveJournal? The databases? I'm curious about the ratio of frontend webserver CPU power vs. backend DB resources.
From: whitaker
2004-04-27 12:01 am (UTC)
It was all in code. We were trying to reduce web slave CPU, since that was the bottleneck, not DB I/O or CPU.
From: brad
2004-04-27 12:12 am (UTC)
Since our traffic's always increasing, it's a constant battle between I/O problems on databases and running out of CPU on web nodes. (CPU is never a problem on our database machines.)

Lately the database/memcached situation has gotten quite a bit better, so the last few rounds have been running out of CPU on web nodes. Over time the CPU killer has changed, since we obviously tune the hell out of things whenever any one area stands out.

At one point it was the "HTML cleaner" code, validating/cleaning people's HTML. I remember one of the easiest fixes for that: bypass the whole cleaning process (which involves an HTML parser) if the entry contains no < character, then make a minor pass to put in HTML breaks in place of linebreaks where needed. That's barely on the radar anymore, and I'm thinking of putting the cleaned data in memcached rather than computing it every time, but I'm not sure whether fetching it from memcached would cost less CPU than re-computing it.
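
The bypass is about as simple as it sounds; here run_html_cleaner() is a stub standing in for the real parser-based path:

    use strict;

    sub run_html_cleaner { return $_[0] }      # stub for the real parser

    sub clean_entry {
        my ($text) = @_;
        if (index($text, '<') == -1) {         # no tags: nothing to parse
            $text =~ s/\r?\n/<br \/>\n/g;      # just add HTML linebreaks
            return $text;
        }
        return run_html_cleaner($text);        # full parser path
    }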

That's one of the current slow areas: our Perl memcached client is pretty CPU-heavy. We need to write parts of it in C.

BML (our templating/web framework) has gone through a dozen tuning cycles, including today, when I converted a lot of object member accesses from hash-table lookups to array-index lookups (by way of Perl's "fields" module, which does the substitution at compile time and also verifies field identifiers at compile time).
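
A toy version of the pattern, with made-up class and field names (the typed lexical is what lets the compiler turn the hash lookups into array indexing and reject unknown fields):

    package LJ::Example;                # illustrative only, not real LJ code
    use strict;
    use fields qw(dbh user flags);

    sub new {
        my LJ::Example $self = shift;   # typed lexical enables the checks
        $self = fields::new($self) unless ref $self;
        $self->{user}  = shift;         # becomes an array index at compile time
        $self->{flags} = 0;
        # $self->{usre} = 1;            # typo: would die at compile time
        return $self;
    }

    1;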

Before today, BML compiled all code blocks each time in their own package, then wiped the package at the end of execution. Now each page's code blocks are compiled and cached in their own namespace forever, and only scalars left behind by non-strict code are cleaned up, not the compiled code. That helped quite a bit. We'd had the ability to pre-compile certain code blocks in the past (which we used for larger code), but we never used it much because it was tedious. Now it just happens for everything.
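
The caching itself is nothing fancy; schematically, with invented names:

    use strict;

    # Compile each code block once, keyed by its namespace, and keep
    # the code ref around forever instead of wiping it per request.
    my %block_cache;                    # package name => code ref

    sub run_code_block {
        my ($pkg, $source) = @_;
        unless ($block_cache{$pkg}) {
            $block_cache{$pkg} = eval "package $pkg; sub { $source }";
            die "compile error in $pkg: $@" if $@ || !$block_cache{$pkg};
        }
        return $block_cache{$pkg}->();
    }

    print run_code_block("BMLBlock::demo", "return 2 + 2"), "\n";   # 4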

Also applied the fields-module change today to one of our non-memcached caching modules that we hit a lot. Helped a percent or so.

I'm trying to think of other things we've done in the past, but nothing stands out. It's just a lot of looking at the top-20 worst functions and reducing their runtime or call count. I've learned a lot about how Perl works internally, which helps me write fast Perl. We do write some things in C, though, using Inline.pm, which lets you mix C and Perl together. On first run it builds a module and links it; subsequent runs just load it as if it were a normal module. (It goes so far as to include a full C grammar, and finds the function declarations to generate Perl XS wrappers for them.)
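
Here's the flavor of it; the function below is a toy, not from our codebase, but the mechanics are what Inline.pm actually does: compile and cache on first run, plain load afterward:

    use strict;
    use Inline C => q{
        int count_lt(char *s) {
            /* count '<' characters: the sort of hot inner loop
               worth pushing down into C */
            int n = 0;
            while (*s) { if (*s++ == '<') n++; }
            return n;
        }
    };

    print count_lt("a <b> <i>c</i>"), "\n";    # prints 3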

I think I'm running out of comment area, but I have a possible contract idea for you. I'll mail ya.
From: (Anonymous)
2004-04-26 11:31 pm (UTC)
I miss you.
From: lisa
2004-04-27 12:07 am (UTC)
It's not your fault.
From: graceadieu
2004-04-27 07:45 am (UTC)

:)

I had a dream last night that I actually understood some of the stuff Brad writes in his journal! (It was scary!) :D