When the server boots, it runs a VNC server for each user listed in the /etc/sysconfig/vncservers file. As it starts, each VNC server reads the .xsession file in that user's home directory and uses it to launch the user's preferred window manager. The VNC server then simply sits in RAM, waiting for a connection, either a local one or one from across the network.
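As a concrete illustration, here is a minimal sketch of the two files involved. The display number, username and window manager are placeholders, and the exact format of /etc/sysconfig/vncservers can vary between Red Hat releases, so check the comments in your own copy of the file:

    # /etc/sysconfig/vncservers
    # Format: "displaynumber:username [displaynumber:username ...]"
    VNCSERVERS="1:alice"

    #!/bin/sh
    # ~/.xsession -- run by the VNC server to start the desktop.
    # Must be executable (chmod +x ~/.xsession). fvwm2 is only an
    # example; substitute your preferred window manager or session
    # (gnome-session, startkde and so on).
    exec fvwm2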
The display manager also starts up at boot time, which presents you with a graphical login banner.
A user who does not have the VNC server configured, and who does not have the proper .xsession file in his or her home directory, will get a normal X desktop when logging in to the display manager. A user with both pieces configured will get the VNC desktop at login and also will be able to reach that same desktop from anywhere on the network. Doing so securely, however, is a tale for the next article.
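Reaching the desktop from elsewhere takes nothing more than a VNC viewer. The hostname below is hypothetical; the :1 is the display number assigned in /etc/sysconfig/vncservers:

    $ vncviewer workstation.example.com:1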
The setup I've described here has many advantages. For me, the primary advantage is the ability to access my desktop's state from any computer at work, which is nice if I'm on the other side of the site with a half hour between meetings, and I want to check my e-mail or my calendar.
A major drawback of using VNC as your default desktop is that you sacrifice graphical performance. For example, playing movies on your VNC desktop is slow, due to long redraw delays, and most fast-action games suffer as well. When the VNC server runs, it acts as the X server for all the applications you run, but it is not accelerated in any way; even if your real X server is accelerated, you won't be able to take advantage of it. (You can log out, log in as a different user, play the movie or the game, log out when finished, then log in again as your normal user. Your original desktop will be sitting there just as you left it.)
This configuration doesn't scale well for multiple users. You could define many VNC sessions in the /etc/sysconfig/vncservers file to start up at boot. But all of these VNC desktops will be idle until they are used. And for each VNC desktop, a VNC server is run, a window manager is run and, in the case of GNOME or KDE, many auxiliary applications are run. All of these take up RAM, and they may compete with each other for resources like the sound card. Similar commercial solutions like Citrix MetaFrame and Microsoft Terminal Server call for seriously powerful computers to support many users, and I'd guess this solution would work just as well on such hardware.
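For the record, defining multiple sessions is as simple as listing more display:username pairs (the usernames here are hypothetical), but remember that each pair costs you a full Xvnc process plus a complete desktop session in RAM:

    # /etc/sysconfig/vncservers
    VNCSERVERS="1:alice 2:bob 3:carol"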
One alternative solution is to use XDMCP, which is how the traditional X world provides remote X access (à la X terminals). But in doing so you lose statefulness because the desktop is restarted every time a connection is made to the server, and you lose the ability to share the same desktop both remotely and locally. See the documentation for your display manager (xdm, gdm or kdm) or www.linuxdoc.org/HOWTO/XDMCP-HOWTO for more information.
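If you want to experiment with XDMCP, the sketch below shows the general shape for gdm; the configuration file's location varies by distribution (/etc/X11/gdm/gdm.conf on Red Hat systems of this era), and xdm and kdm have their own equivalents:

    # In gdm.conf, enable the XDMCP listener:
    [xdmcp]
    Enable=true

    # Then, from a remote machine, start a local X server that
    # queries the display manager for a login screen
    # (server.example.com is a placeholder):
    $ X :1 -query server.example.com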
Another solution is to run VNC out of inetd/xinetd, using the -inetd option. Doing so, however, starts a fresh VNC server for every connection, disallows multiple connections to one desktop and shuts the server down when the original connection exits. So it loses both statefulness and the ability to share the desktop remotely and locally. See the VNC server documentation for more information.
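For completeness, a typical xinetd entry looks something like the following sketch. The port, geometry and depth are arbitrary choices, and the -query localhost argument assumes XDMCP is enabled on the local display manager (as described above), so that each incoming connection is greeted with a graphical login prompt:

    # /etc/xinetd.d/vncserver (hypothetical filename)
    service vncserver
    {
        type        = UNLISTED
        port        = 5950
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = nobody
        server      = /usr/bin/Xvnc
        server_args = -inetd -query localhost -once -geometry 1024x768 -depth 16
    }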
Another option is x0rfbserver, an application that runs inside your normal X desktop and relays the contents of that desktop to a VNC client. It works well and lets you take full advantage of any accelerated video card your X server supports. It also demands less RAM than running an X server plus a VNC server, because it needs only your existing X server and the fairly small x0rfbserver process itself. But it requires that you always keep your X desktop running on the console, so it will not scale to multiple users. See www.hexonet.de/software.en for more information.
Jeremy D. Impson is a senior associate research scientist at Lockheed Martin Systems Integration in Owego, New York. There he's a member of The Center for Mobile Communications and Nomadic Computing, where he uses open-source software to develop mobile computing systems. He can be reached at firstname.lastname@example.org.