LJ Talk activity (or, ejabberd vs djabberd) [Oct. 16th, 2006|12:40 pm]
Brad Fitzpatrick

I've been watching our LJ Talk ganglia stats and comparing them to the Jabber.org status page (jabber.org runs ejabberd).

Our memory usage, even with a known memory leak, is way better: ejabberd seems to take 184 kB per connection, while djabberd is currently using 34 kB per connection (and that includes the leaked data; right after startup it's closer to 5 kB per connection).

In a week or two, at our current rate of growth, it looks like our connected clients will overtake jabber.org's too. They currently peak at ~10,000 users. Our peak, currently at 4,000 users, keeps climbing each day, up from just 1,000 a few days ago.

At least it's really easy now to track down memory leaks, using Devel::Gladiator, $^P |= 0x200 (which makes Perl name anonymous subs after where they were compiled), and Devel::Peek::CvGV...
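For the curious: the __ANON__[file:line] names in the dump below come from that $^P bit, which has to be set before the subs in question are compiled. A minimal, core-modules-only illustration (using B::svref_2object to read the name; Devel::Peek::CvGV gets at the same glob):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The 0x200 bit must be set before the anonymous sub is *compiled*,
# hence the BEGIN block.
BEGIN { $^P |= 0x200 }

use B ();   # core module; B::svref_2object exposes the CV's glob

my $leaky = sub { return 42 };

# With the flag set, the CV's glob is named __ANON__[file:line]
# instead of the bare __ANON__, so a leak report can say exactly
# which closure is piling up.
my $name = B::svref_2object($leaky)->GV->NAME;
print "$name\n";
```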

All objects in memory ...
djabberd@lj-jabber1:~/DJabberd$ echo "gladiator" | nc localhost 5200 | head -50
 1087578 606945 SCALAR
 536548 327556 REF
 258889 176113 ARRAY
 142478 82245 HASH
 58433 45139 CODE
 34086 19638 DJabberd::XMLElement
 12023 4177 IO::Handle
 8583 6951 DJabberd::Callback
 8493 6887 *DJabberd::VHost::__ANON__[lib/DJabberd/VHost.pm:257]
 8493 6887 *DJabberd::VHost::__ANON__[lib/DJabberd/VHost.pm:226]
 8493 6887 *DJabberd::VHost::__ANON__[lib/DJabberd/VHost.pm:242]
 8317 4882 DJabberd::StreamVersion
 8195 4806 DJabberd::Connection::ClientIn
 8054 10   GLOB
 4518 464  DJabberd::JID
 4283 3463 *DJabberd::Connection::ClientIn::__ANON__[lib/DJabberd/Connection/ClientIn.pm:250]
 4283 3463 *DJabberd::Connection::ClientIn::__ANON__[lib/DJabberd/Connection/ClientIn.pm:251]
 4282 3463 DJabberd::Callback-switch_incoming_client
 4272 3451 DJabberd::IQ
 4208 3423 *DJabberd::IQ::__ANON__[lib/DJabberd/IQ.pm:519]
 4208 3423 *DJabberd::IQ::__ANON__[lib/DJabberd/IQ.pm:526]
 4208 3423 *DJabberd::IQ::__ANON__[lib/DJabberd/IQ.pm:530]
 4208 3423 *DJabberd::IQ::__ANON__[lib/DJabberd/IQ.pm:501]
 4208 3423 *DJabberd::IQ::__ANON__[lib/DJabberd/IQ.pm:522]
 4207 3423 DJabberd::IQ-set-{jabber:iq:auth}query
 4207 3423 DJabberd::Callback-CheckDigest
 4085 1456 IO::Socket::INET
 3857 1332 DJabberd::Presence
  532 -915 DJabberd::Subscription
  532 -915 DJabberd::RosterItem
  483 188  DJabberd::Queue::ServerOut
  325 107  DJabberd::IPEndPoint
  151 107  DJabberd::DNS
  143 -17  Danga::Socket::Timer
  122 77   DJabberd::Connection::ServerIn
   92 64   *DJabberd::Stanza::DialbackResult::__ANON__[lib/DJabberd/Stanza/DialbackResult.pm:47]
   92 64   *DJabberd::Stanza::DialbackResult::__ANON__[lib/DJabberd/Stanza/DialbackResult.pm:61]
   92 64   *DJabberd::Stanza::DialbackResult::__ANON__[lib/DJabberd/Stanza/DialbackResult.pm:43]
   91 64   DJabberd::Stanza::DialbackResult
   88 29   *XML::SAX::Base::__ANON__[/usr/share/perl5/XML/SAX/Base.pm:1114]
   88 29   *XML::SAX::Base::__ANON__[/usr/share/perl5/XML/SAX/Base.pm:400]
   87 29   XML::SAX::Base::NoHandler
   87 28   *XML::SAX::Base::__ANON__[/usr/share/perl5/XML/SAX/Base.pm:987]
   87 29   XML::LibXML::ParserContext
   87 28   *XML::SAX::Base::__ANON__[/usr/share/perl5/XML/SAX/Base.pm:264]
   87 28   *XML::SAX::Base::__ANON__[/usr/share/perl5/XML/SAX/Base.pm:1941]
   87 29   DJabberd::XMLParser
   87 29   XML::LibXML
   87 30   *XML::SAX::Base::__ANON__[/usr/share/perl5/XML/SAX/Base.pm:57]

So that'll give me something to do on the plane, too.
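A stripped-down sketch of how a census like the one above can be produced with Devel::Gladiator's documented walk_arena() — this is a hypothetical reconstruction, not the Admin.pm code, and it only computes the totals (the second column above, which can go negative, looks like a change-since-last-sample count that this sketch doesn't track):

```perl
#!/usr/bin/perl
use strict;
use warnings;

use Devel::Gladiator qw(walk_arena);   # CPAN module
use Scalar::Util qw(blessed reftype);

# walk_arena() returns an arrayref holding a reference to every live SV.
my $all = walk_arena();

my %count;
for my $sv (@$all) {
    # blessed() gives the class for objects; reftype() gives the raw
    # type (SCALAR, ARRAY, HASH, CODE, ...) for everything else.
    my $type = blessed($sv) || reftype($sv);
    $count{$type}++ if defined $type;
}

# Important: drop our own references so the walk doesn't itself leak.
@$all = ();

for my $type (sort { $count{$b} <=> $count{$a} } keys %count) {
    printf "%8d %s\n", $count{$type}, $type;
}
```

Devel::Gladiator also ships arena_ref_counts() and arena_table(), which produce essentially this table directly.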

From: scosol
2006-10-16 09:35 pm (UTC)
i remember the last time i took a large dump on a plane-
(Reply) (Thread)
From: (Anonymous)
2006-10-16 09:49 pm (UTC)


any chance you could post the code that generates that display? the use of Devel::Peek::CvGV in particular is pretty undocumented.
(Reply) (Thread)
From: evan
2006-10-16 10:21 pm (UTC)
I expect the rate of growth to rapidly decline after the initial news announcement reaches everyone, but on the other hand LJ news tends to propagate virally. And this is a great achievement anyway!
(Reply) (Thread)
From: evan
2006-10-16 10:22 pm (UTC)
Google servers also let you do runtime memory profiling via a special request. They output this neato directed graph of where the allocations are coming from. Maybe seeing it will give you some ideas:
(Reply) (Parent) (Thread)
From: crucially
2006-10-16 11:41 pm (UTC)
We have seen the exact same growth for 4 days in a row now.
(Reply) (Parent) (Thread)
From: brad
2006-10-17 10:49 am (UTC)
Everything looks linear if you look close enough.
(Reply) (Parent) (Thread)
From: avatraxiom
2006-10-17 04:17 am (UTC)
Wow, is that leak-tracing code somewhere in the current djabberd SVN? I looked around but I couldn't find it. I'd love to see it.

(Reply) (Thread)
From: brad
2006-10-17 10:49 am (UTC)
Yup, it's in there. Search for "gladiator". It's in DJabberd/Connection/Admin.pm.
(Reply) (Parent) (Thread)
From: jmason
2006-10-17 10:18 am (UTC)

very nice

[I've already posted a comment, but accidentally did it as "Anonymous", so it's been screened I think]

Any chance we could see the code that produces that output? It's very nice, light-years ahead of simply using Devel::Peek::Dump -- and some of the components are a little under-documented, the use of Devel::Peek::CvGV() in particular.

I'd love to know how you go from an RV, to determining that it's a DJabberd::XMLParser object, for example... the only way I can see is to call Devel::Peek::Dump and parse out the STASH line, ick.
(Reply) (Thread)
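[For what it's worth, the core Scalar::Util module answers that RV-to-class question directly, without parsing Devel::Peek::Dump's STASH line: blessed() returns the class of a blessed reference (undef otherwise), and reftype() the underlying type. The DJabberd::XMLParser here is just a stand-in class name:]

```perl
#!/usr/bin/perl
use strict;
use warnings;

use Scalar::Util qw(blessed reftype);

my $plain = \"just a scalar";
my $obj   = bless {}, 'DJabberd::XMLParser';   # stand-in for a real parser object

print blessed($plain) // 'not blessed', "\n";  # prints "not blessed"
print blessed($obj), "\n";                     # prints "DJabberd::XMLParser"
print reftype($obj), "\n";                     # prints "HASH" (the underlying type)
```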
From: brad
2006-10-17 10:51 am (UTC)

Re: very nice

(Reply) (Parent) (Thread)
From: jmason
2006-10-17 11:12 am (UTC)

Re: very nice

nice use of closures -- and _in_sub_process(): what an excellent hack! I'll remember that one ;)

thanks for this. I'm going to see if I can bodge it into SpamAssassin, and if it'll produce useful output there. It may not, since we have a pretty simple memory profile, but worth a try...
(Reply) (Parent) (Thread)