Currently my client machine (an old, weak, diskless P3) PXE boots off my server (a beefy new machine) and mounts its root and home filesystems from the server via NFS.
I realized today that my local network (fast ethernet) should be more than adequate to push X data around, and even if the latency is comparable to the NFS latency, I'd be better off running all my applications on the server and using the client merely as an X terminal.
So I did some tests:
$ xhost +10.0.0.81
$ ssh 10.0.0.81
$ DISPLAY=10.0.0.10:0 mozilla-firefox &
Nice and fast! Firefox started much more quickly than it does from my client, so the win is clearly there: the combination of much more CPU and no NFS latency pays off.
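(Side note: ssh's built-in X forwarding should also work and would avoid the xhost step, since ssh sets DISPLAY for you inside the session. It tunnels the X traffic through ssh, though, so I don't know how the speed compares. Something like:

$ ssh -X 10.0.0.81
$ mozilla-firefox &

but I used the direct connection above for the test.)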
Next step: how to do XDMCP? Normally I just type "startx" and have a .xinitrc file, so I don't use xdm/gdm/kdm, but I hear the best things about gdm.
So I start gdm on both the client and the server, with the server configured to allow XDMCP connections, including indirect requests.
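For reference, on the server that amounted to roughly this in gdm.conf (which lives under /etc/gdm/ on my Debian box; the path and defaults may vary):

[xdmcp]
Enable=true
HonorIndirect=true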
Now I start gdm on the client and select "XDMCP chooser...", and my server appears in the list. I click it, and I just get the classic empty X screen: the checkered black-and-white background and the X-shaped cursor.
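(From what I've read, a direct XDMCP query from the client, bypassing the chooser, would be something like

$ X :1 -query 10.0.0.81

in case that makes the problem easier to isolate.)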
I've read a bit about /etc/gdm/PostSession/display, but I don't know what to put there or what format it is. And it looks like Debian puts a lot of crap in all the /etc/gdm/*/Default files, which I understand would be overridden if I put my own thing in the per-display */display files.
Anybody know the proper way to set this up? I use GNOME.
The ghetto way is to keep using startx and make my client's .xinitrc be:
exec ssh server gnome-wrapper
Where gnome-wrapper is just:
#!/bin/sh
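# run the whole GNOME session on the server, pointed back at the client's display (10.0.0.10)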
DISPLAY=10.0.0.10:0 exec gnome-session
But I'd prefer to learn the proper way to do XDMCP.
Thanks!