Managing Multiple Cooks
Viens ici, François! Have a look at this. What do you mean, what is it? It is our new intranet. What do you think of it? Mon Dieu! You cannot tell me because you do not know what an intranet is? I am certainly glad our guests have not yet arrived, François. You know, working here at Chez Marcel, they naturally assume that you are an expert in these things.
An intranet, François, can be thought of as your own private Internet, a networked environment where users can share information, participate in discussions and use networking technologies to make getting to that information as easy as possible. It can be many things, really—a database providing access to corporate documents, an information center for job postings, a discussion board or a place to find the results of the latest hockey pool, non? An intranet is a place to share information. It can take many forms, but the essence of an intranet is a high-tech bulletin board, one that lets you post simple notices as well as entire multipage documents. Unlike that corkboard in most company lunchrooms, a good web-based intranet has virtually unlimited room.
François, why are you not looking at me when I talk to you? Ah! Mes amis. Welcome to my restaurant. François! To the cellar pour du vin. Vite! Bring back the 1990 Vosne-Romanée Les Beaux Monts. Nothing like a good Burgundy to discuss networks, non? Merci, François.
Please sit, mes amis. Before you arrived, I was showing François a special feature from this very restaurant. Although we love to bring you delicacies from around the world, sometimes it is our kitchens that do the creating, non? Sally Tomasevic, master chef, has written an intranet package we call Grand Salmar Station. What is really wonderful about this package is that it is virtually self-administering. A common problem with intranet solutions is that they require a technical person to oversee the project. At the very least, someone has to write HTML, maintain the structure and deal with dated information. Unfortunately, sometimes that person (and their dedication) can be hard to come by.
Ah, François, you have returned. Merci. Please pour for our guests. What if I told you, mes amis, that you could deploy an intranet, turn it over to your users and let them maintain it without having to train them? They would not need to know any HTML, and they would not have to code a single line. Chef Sally has created just such a package. It gets even better: Grand Salmar Station will automatically create and maintain all links for you and will even expire old postings or dated information without your lifting a finger. Any newly added item will magically appear on the intranet's What's New page to highlight it. Normally, such adding and deleting requires user intervention, and links must be verified and re-created. No problem with this intranet; it does it all for you.
It is a bulletin board, a flexible news center, an internet reference list, an office directory and a document management system, all in one. Best of all, Grand Salmar Station is freely distributed under the GPL.
For this recipe, you will need the following ingredients: a Linux system (but that goes without saying, non?), an Apache web server, Perl, PostgreSQL and the latest Grand Salmar Station source.
Grand Salmar Station is written completely in Perl and uses PostgreSQL as its database. You may recall PostgreSQL from past visits to the restaurant. Our menu has featured applications based on this excellent database, so you may already be quite familiar with it. It is possible that you even have it running as part of your day-to-day processes. In case you do not, I will give you a quick introduction. For the others, may I recommend a little brie while you wait? You will find cooking instructions for the intranet a little further on.
PostgreSQL is an advanced multiuser, relational database management system (RDBMS) distributed freely along with its source code. Originally written in 1985 and since worked on by many developers worldwide, PostgreSQL is fast, powerful, supports most (if not all) SQL standards and, best of all, it is free. You probably don't even have to go looking for PostgreSQL, since it is usually packaged as part of most major Linux distributions. Look on your distribution CD; it is probably already there. In fact, on some systems, PostgreSQL is part of the default install, making it that much easier.
Here at Chez Marcel, we enjoy cooking with open source, non? And we love working from source. So, for this recipe, we will be building PostgreSQL from the freshest of ingredients. The latest version is available by visiting the site at http://www.postgresql.org/, or as I mentioned mere moments ago, simply take it from your own distribution CD.
After downloading the latest bundle, I extracted it to a temporary location, changed directory to the source directory and compiled. Here are the steps:
tar -xzvf postgresql-7.0.3.tar.gz
cd postgresql-7.0.3/src
./configure
make
make install
The distribution directory (postgresql-7.0.3 in this case) has a nice INSTALL file that you might want to take a moment to read since there are some options related to the configure script that you might find useful. For instance, by default, PostgreSQL will install in the /usr/local/pgsql directory, and you may find that location less than palatable. Tastes vary.
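If the default location does not suit you, the configure script's --prefix option lets you choose another. As a hypothetical illustration (the /opt/pgsql path here is purely an example, not a recommendation from the INSTALL file), the build steps would become:

```shell
# Build and install PostgreSQL under /opt/pgsql
# instead of the default /usr/local/pgsql
./configure --prefix=/opt/pgsql
make
make install
```

Remember that every later path in this recipe (the lib directory, the data directory and the pg_ctl commands) would then change to match your chosen prefix.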
When the compile is done, PostgreSQL will have to know how to find its libraries. You can always modify your LD_LIBRARY_PATH environment variable to include /usr/local/pgsql/lib, but it's probably easier to add the path to the /etc/ld.so.conf file. This is a text file that tells the system where to search for libraries. Because it is straight text, just add the path to your libraries, then run this command as root:
ldconfig
If you decided to install a PostgreSQL binary from your CD, a postgres user will have been created as part of the installation. Otherwise, create a postgres user with its home directory set to the PostgreSQL install directory. Then, assign a password to the user and log in as postgres. If you built your database along with me, you will be in the /usr/local/pgsql directory. The next step is to create a data directory:
mkdir data
To initialize the database for the first time, use the following command:
$ bin/initdb -D /usr/local/pgsql/data
That is, of course, the data directory that you just finished creating, n'est-ce pas?
You will see a number of messages going by as PostgreSQL reports on what it is doing. Not quite enough time for a full meal, but perhaps a single escargot and a sip from your wineglass, non? Meanwhile, some default permissions will be set. In addition, a default database will be created along with PostgreSQL's own database (pg_database) for user and other database information. Several views are then created, after which you should get a message like this:
Success. You can now start the database server using:
/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data
or
/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data start
(If you installed a binary package instead, the message will show that package's paths, such as /usr/bin and /var/lib/pgsql.)
Either will work, but the second is a better choice because it launches the process into the background for you. You will also want to add this to your startup for system boot.
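Putting the steps above together, the whole initialization sequence for a source build under /usr/local/pgsql might look something like this. Treat it as a sketch rather than a recipe: useradd options and the location of the boot-time startup script (rc.local here) vary between distributions.

```shell
# As root: create the postgres account with the install
# directory as its home, then give it a password
useradd -d /usr/local/pgsql postgres
passwd postgres

# As postgres: create and initialize the data directory
su - postgres
mkdir data
bin/initdb -D /usr/local/pgsql/data

# Start the server; pg_ctl sends it into the background
bin/pg_ctl -D /usr/local/pgsql/data start

# As root again: have PostgreSQL start at boot by appending
# the pg_ctl line to a startup script such as /etc/rc.d/rc.local
echo 'su - postgres -c "/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data start"' \
    >> /etc/rc.d/rc.local
```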
Now, it is time to create some users that will have access to your database. If you are installing as root (for access to Perl directories, CGI directories, etc.), then root will have to be added, as will the user “nobody”. This is often the user that your httpd server runs as. Some have a user called “www” to run web services. I will use “nobody” as mine, and you may use whatever your server is configured for. Start by logging in as your postgres user, and execute the following commands to add users “root” and “nobody” to your PostgreSQL system:
$ createuser root
You'll be prompted for root's UID (accept the default) and whether user root is allowed to create databases. Answer “y”. When asked whether root is allowed to create users, I answered “y” again. Now, do the same thing for user nobody. The only difference is that I answered “n” to the question of whether nobody is allowed to create databases as well. Depending on which version of PostgreSQL you are using, the question of whether a user is allowed to create other users may be worded this way:
Is user whoever a superuser?
The answer is still “n” or “no”.
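If you would rather skip the interactive questions, the same unprivileged account can be created with a single SQL statement through psql; the NOCREATEDB and NOCREATEUSER clauses correspond to answering “n” at the prompts. This assumes the server is already running and you are logged in as the postgres user (on PostgreSQL versions of this vintage, template1 is the default database to connect to):

```shell
# Connect to template1 and create the web-server account
# with no database-creation or user-creation privileges
psql template1 -c "CREATE USER nobody NOCREATEDB NOCREATEUSER;"
```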