Remote Window Managers
It's often frustrating or time-consuming to run an xterm on a remote host just to fork your programs from that remote machine. Why not run your window manager there even though you're not at its console? The window manager is just another X application, after all, isn't it?
Fire off your local X server:
xinit /usr/bin/xterm -- :1 &
yields a vanilla X session with merely an xterm running and no window manager. Now you need to grant the remote host access to this X session. If your network is insecure, you can tunnel the connection through SSH, though there's a distinct performance hit:
ssh -fY remotehost /usr/bin/wmaker
If your network is secure, you can instead "xhost +remotehost" and spray X traffic directly to your server:
ssh -f remotehost /usr/bin/wmaker -display localmachine:1
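For the direct option, the new X server has to be told to accept connections from remotehost before you run the command above. A minimal sketch, assuming the server started by the xinit command is running on display :1 and "remotehost" is a placeholder for your machine's name:

DISPLAY=:1 xhost +remotehost

Keep in mind that xhost grants access to every user on remotehost, so this really is only for trusted networks.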
The first option, if your remote SSH server supports X11 forwarding, defines a DISPLAY on the remote side and tunnels the X traffic back over the SSH connection. The second option lets remotehost send X data directly to your local display: WindowMaker runs on the remote machine but displays locally. Either way, all your desktop actions are now done on the remote machine, not locally.
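Putting the pieces together, here is a minimal sketch of the tunneled variant as one script; the display number :1 and the host name remotehost are assumptions you'd adjust for your own setup:

#!/bin/sh
# Start a bare X session (just an xterm) on display :1.
xinit /usr/bin/xterm -- :1 &
# Give the new server a moment to come up.
sleep 2
# Point at the new display, then launch WindowMaker on the remote
# host with its X traffic tunneled back over the SSH connection.
DISPLAY=:1 ssh -fY remotehost /usr/bin/wmaker

Because ssh forwards X11 to whatever your local DISPLAY points at, setting DISPLAY=:1 before the ssh command is what routes the window manager onto the new server rather than your existing desktop.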
Our special thanks to Bill from Washington state for this tech tip.