MTool: Performance Monitoring for Multi-platform Systems
A three-tier architecture turned out to be a good solution. We defined three different levels:
Platform-dependent daemons
A middle-tier Java server
Java client applets (the GUI)
To monitor specific systems, you must be connected to the network; more precisely, you connect to the middle-tier Java server. The client side presents only a GUI (graphical user interface). Since it is written in Java, a Java-capable browser is all you need. Once you connect to the Java server, the GUI code is transferred to your local machine, where it is interpreted by the browser's built-in Java interpreter (in Netscape, for example).
The central point of our system is the middle-tier Java server. It not only holds the Java classes to be transferred to the clients, but also takes care of the monitoring itself. The Java server runs continuously but consumes few system resources. It connects to the monitored computers and asks them for new data. Communication is handled via sockets using a very simple protocol we defined for data transfer.
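The article does not specify the wire format of this simple protocol. As a hedged illustration only, a reply could be as terse as one line of whitespace-separated key=value pairs per resource, which the server parses like this (the class, method, and field names here are our assumptions, not MTool's actual code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MToolProtocol {
    // Parse one hypothetical reply line, e.g. "cpu user=12 nice=0 idle=80 sys=8",
    // into a map of counter names to values. The leading token names the resource.
    public static Map<String, Integer> parseLine(String line) {
        Map<String, Integer> values = new LinkedHashMap<>();
        String[] tokens = line.trim().split("\\s+");
        for (int i = 1; i < tokens.length; i++) {      // skip the resource name
            String[] kv = tokens[i].split("=", 2);
            values.put(kv[0], Integer.parseInt(kv[1]));
        }
        return values;
    }

    public static void main(String[] args) {
        Map<String, Integer> cpu = parseLine("cpu user=12 nice=0 idle=80 sys=8");
        System.out.println(cpu);  // {user=12, nice=0, idle=80, sys=8}
    }
}
```

A line-oriented text protocol like this keeps both the daemon and the server trivial to implement and debug over a plain socket.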
On the client side, one runs a startup applet to bring up the basic window, which lists all the computers to be monitored (see Figure 2). Once we choose a computer's name from the list, a second window appears containing all the information about the observed system. We use a tabbed panel to display the different types of data; we chose a tabbed panel to ease upgrades: if we add new system information, there is no problem with the graphical interface, as we simply add a new panel.
At this time we can provide the following system resources:
CPU information (user, nice, idle, sys, average)
Disk and memory information (free, used, swap)
The Java server is designed to run on an independent computer, but it can also run on a client or on one of the monitored computers. Socket communication with the monitored computers is brief, lasting only as long as the data transfer. We can choose how often the Java server requests system-resource data; at each interval, a "ping" is sent to all monitored computers. On receiving the ping, mtoold is awakened by inetd, reads the system-resource information, and forwards it to the middle-tier Java server. It then "sleeps" until the server's next ping, so the platform-dependent mtoold uses system resources for only a very short time.
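The server's poll loop, as described above, wakes at the configured interval and pings every monitored host. A minimal sketch of that loop, assuming a hypothetical pingHost method standing in for the real socket exchange with mtoold, could use a ScheduledExecutorService:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MToolPoller {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Ping every monitored host once per interval; on the remote side,
    // inetd starts mtoold, which replies with its current readings.
    public void start(List<String> hosts, long intervalSeconds) {
        scheduler.scheduleAtFixedRate(() -> {
            for (String host : hosts) {
                pingHost(host);  // hypothetical: open socket, request data, close
            }
        }, 0, intervalSeconds, TimeUnit.SECONDS);
    }

    protected void pingHost(String host) {
        // In the real server this would connect to mtoold and read one reply.
        System.out.println("polling " + host);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}

class MToolPollerDemo {
    public static void main(String[] args) throws InterruptedException {
        MToolPoller poller = new MToolPoller();
        poller.start(List.of("cebelica.uni-mb.si", "grom.uni-mb.si"), 4);
        Thread.sleep(100);  // the first round fires immediately (initial delay 0)
        poller.stop();
    }
}
```

Because each exchange is short-lived, a single scheduler thread is enough even for a sizable host list.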
For its operation, the Java server is supported by a single configuration file, named MTool.conf. The first part of this file contains the list of computers to be monitored; next comes the monitoring interval; and last is the list of client computers authorized to communicate with the Java server and thus monitor the selected systems. We can use IP addresses, IP host names, or both. As the sample configuration file below shows, one can also use the asterisk wildcard in defining the domain name or the subnetwork address.
# First, the hosts to be monitored
cebelica.uni-mb.si
mravljica.uni-mb.si
cmrlj.uni-mb.si
strela.fcs.uni-mb.si
grom.uni-mb.si
18.104.22.168
# Second, monitoring time interval in seconds
4
# Last, clients with access to monitored hosts
*.hermes.si
22.214.171.124
126.96.36.199
130.15.40.*
193.246.15.*
164.8.253.*
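Checking a connecting client against this access list, with its asterisk wildcard in either the host-name or the address position, comes down to a simple glob match. One way to sketch it (the class and method names are ours; the article does not show MTool's actual access-check code) is to translate each entry into a regular expression:

```java
import java.util.List;
import java.util.regex.Pattern;

public class MToolAccess {
    // Translate one access-list entry such as "*.hermes.si" or "130.15.40.*"
    // into a regex: '.' is made literal, '*' matches any run of characters.
    static Pattern entryToPattern(String entry) {
        String regex = entry.replace(".", "\\.").replace("*", ".*");
        return Pattern.compile("^" + regex + "$");
    }

    // A client may connect if any entry matches its host name or address.
    public static boolean allowed(String client, List<String> accessList) {
        return accessList.stream()
                .anyMatch(e -> entryToPattern(e).matcher(client).matches());
    }

    public static void main(String[] args) {
        List<String> acl = List.of("*.hermes.si", "130.15.40.*", "22.214.171.124");
        System.out.println(allowed("www.hermes.si", acl));  // true
        System.out.println(allowed("130.15.40.7", acl));    // true
        System.out.println(allowed("10.0.0.1", acl));       // false
    }
}
```

Note that the dots are escaped before the asterisks are expanded, so "130.15.40.*" matches only addresses in that subnetwork and not, say, "130x15x40x7".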