Brad Fitzpatrick (brad) wrote,

Random geeky ideas

Geeky Idea #1
You've got a web farm and a load balancer.

Each node runs so many httpd processes, has so much memory, and has so many processors. Running too many httpds can starve the CPU or memory. Running too few results in underutilization.

Further, you can control how many connections the load balancer tries to give to nodes at any given time. If you set it exactly the same as the httpd count, there are periods where the node is idle. If you set it too high, you increase throughput, but also increase latency.
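One back-of-the-envelope way to see that trade-off is Little's law (in-flight requests ≈ throughput × latency). The numbers below are invented, and the model only describes a node that's already saturated:

```python
# Little's law: in-flight requests ≈ throughput (req/s) × latency (s).
# Hypothetical numbers: a node running 30 httpds at ~50 ms per request
# can clear about 600 req/s. Pointing more load-balancer connections at
# it than that just queues requests and inflates observed latency.
httpds, service_time = 30, 0.05
throughput = httpds / service_time            # ~600 req/s at saturation
for lb_conns in (30, 60, 120):
    latency = lb_conns / throughput           # seconds, once saturated
    print(lb_conns, round(latency * 1000), "ms")
```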

Tuning all these things is a bitch, and often comes down to, "I'll take these 4 equivalent machines, change one variable, and observe for a while." Then move everything closer to whichever one did best, and change a new variable.

I realized that what I was doing was just a genetic algorithm... make random changes, judge the results by a given heuristic (latency and throughput), and breed from there.

It'd be interesting to automate this problem. The first step would have to involve a way to update Apache's MaxClients at run-time, without restarting.
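Here's a toy sketch of what that automation might look like. Everything in it is hypothetical: measure() is a fake model of a node standing in for real observations, the fitness weights are arbitrary, and nothing actually touches Apache or a load balancer.

```python
import random

# Toy genetic-algorithm tuner for (MaxClients, load-balancer connection
# limit) pairs. All names and numbers here are made up for illustration.

def measure(max_clients, lb_conns):
    """Stand-in for observing a node: returns (throughput, latency)."""
    capacity = 80                              # hypothetical saturation point
    load = min(max_clients, lb_conns)
    throughput = min(load, capacity) * 10.0    # req/s, made-up model
    latency = 0.05 + max(0, load - capacity) * 0.01
    return throughput, latency

def fitness(cfg):
    throughput, latency = measure(*cfg)
    return throughput - 500.0 * latency        # arbitrary weighting

def mutate(cfg):
    mc, lb = cfg
    return (max(1, mc + random.randint(-10, 10)),
            max(1, lb + random.randint(-5, 5)))

def evolve(population, generations=50):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]   # keep the best half
        children = []
        for _ in parents:
            a, b = random.sample(parents, 2)           # crossover + mutation
            children.append(mutate((random.choice((a[0], b[0])),
                                    random.choice((a[1], b[1])))))
        population = parents + children
    return max(population, key=fitness)

seed = [(random.randint(10, 300), random.randint(5, 150)) for _ in range(8)]
print(evolve(seed))   # converges toward configs whose effective
                      # concurrency sits near the toy capacity of 80
```

The real version would push each candidate pair out to a node, let it serve live traffic for a while, and read latency and throughput back from the logs instead of from a toy model.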

And just so you don't think I'm stupid and/or on crack, I'm not the only one with this problem. From Slashdot's FAQ:
Slashcode itself is based on Apache, mod_perl and MySQL. The MySQL and Apache configs are still being tweaked -- part of the trick is to keep the MaxClients setting in httpd.conf on each web server low enough to not overwhelm the connection limits of database, which in turn depends on the process limits of the kernel, which can all be tweaked until a state of perfect zen balance has been achieved ... this is one of the trickier parts. Run 'ab' (the apache bench tool) with a few different settings, then tweak SQL a bit. Repeat. Tweak httpd a bit. Repeat. Drink coffee. Repeat until dead. And every time you add or change hardware, you start over!
Yeah, that's another limit I didn't mention.... killing the database(s).

Geeky Idea #2
Compressing pages from the webserver to the client cuts bandwidth a ton... over 50%, even. When you're paying an expensive car's cost in bandwidth each month, 50% matters. The down-side... it increases CPU consumption a bit.

Easy solution: buy more machines to compensate. It's more than cost effective.

However, it'd be fun, if not incredibly practical, to do compression quickly...

I forget the details, but gzip uses a dictionary, which is computed and updated as the input is compressed. If you could compress a bunch of sample text ahead of time, find a "good enough" dictionary, and always use that, you could pre-compress common segments of output. I could make BML (my templating system) cache gzipped templates.
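zlib does expose preset-dictionary support (deflateSetDictionary), so the experiment is easy to sketch. The dictionary and page below are made-up stand-ins for boilerplate shared by most rendered templates, and note the decompressing side needs the exact same dictionary, which plain gzip Content-Encoding gives you no way to ship to a browser:

```python
import zlib

# Preset-dictionary sketch using zlib's zdict parameter
# (deflateSetDictionary underneath). Dictionary and page are invented.
dictionary = (b'<html><head><title></title></head><body>'
              b'<table cellpadding="0" cellspacing="0" width="100%"><tr><td>')

page = (b'<html><head><title>my journal</title></head><body>'
        b'<table cellpadding="0" cellspacing="0" width="100%"><tr><td>'
        b'hello from the web farm</td></tr></table></body></html>')

plain_c = zlib.compressobj(level=6)
plain = plain_c.compress(page) + plain_c.flush()

primed_c = zlib.compressobj(level=6, zdict=dictionary)
primed = primed_c.compress(page) + primed_c.flush()

print(len(page), len(plain), len(primed))   # primed should come out smaller

# Whoever decompresses needs the same dictionary:
d = zlib.decompressobj(zdict=dictionary)
assert d.decompress(primed) + d.flush() == page
```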

Compression would be less than ideal, but CPU would be reduced. Thing is, I don't think it'd be enough to be worthwhile. But it'd make for interesting research, if I were, like, a grad student or something.
Tags: mysql, tech