Introduction To Sybase, Part 3
Like any other part of a computer system, databases need to be backed up. One approach would be to shut down the database server and back up the database device files. This can work, but Sybase warns that a backup made this way is not guaranteed to restore properly. Each database must keep its data, its logs and its sequence of transactions in sync; if any of these disagree, Sybase cannot use the database. Fortunately, Sybase provides a Backup Server to help you back up your databases, and it can run while the database server is still up. Your bookstore can stay open 24 hours a day, 7 days a week.
You can back up your databases and logs to either disk or tape using the Sybase Backup Server. To back up to tape, you'll first need to configure your tape device; Chapter 18 of the Sybase System Administration Guide explains how.
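As a rough sketch of what a backup looks like, the following isql session dumps the pubs2 example database first to a disk file and then to a tape device. The file path, device path and logical device name here are assumptions for illustration; your server and site layout will differ.

```sql
-- Dump pubs2 to a disk file (path is hypothetical)
dump database pubs2 to "/sybase/dumps/pubs2.dmp"
go

-- For tape, register a dump device once, then dump to it
-- ("/dev/nst0" and the name pubs2tape are assumptions)
sp_addumpdevice "tape", pubs2tape, "/dev/nst0"
go
dump database pubs2 to pubs2tape
go
```

The dump commands are sent to the Backup Server, which is why the database can stay online while the dump runs.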
Depending on the size of your databases, I recommend backing up the logs at least once during the day and the databases every night. If your databases are huge, that may not be possible; back up the logs as often as you can. Think of each log backup as the set of changes made to the database since the previous backup. If the database becomes corrupted, you'll need to restore the last database backup, then apply every log backup made since, in order.
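The daytime log backups and the recovery sequence described above might look like this in isql (the dump file names are hypothetical):

```sql
-- During the day, back up just the transaction log
dump transaction pubs2 to "/sybase/dumps/pubs2_log1.dmp"
go

-- To recover: load the last full database dump, then each
-- log dump in the order it was taken, then bring the
-- database back online
load database pubs2 from "/sybase/dumps/pubs2.dmp"
go
load transaction pubs2 from "/sybase/dumps/pubs2_log1.dmp"
go
online database pubs2
go
```

Note that the log dumps must be loaded in exactly the order they were taken; skipping one breaks the transaction sequence and the load will fail.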
If you save logs to disk, I highly recommend putting them on a separate drive from the one holding your database. On one of our production database servers, we back up the logs to disk at noon each day, then FTP them to another system. That system has 20 1GB drives, so the databases, the logs and the log backups all sit on different drives. For best performance, keep everything on separate small drives; the system can read and write to multiple drives faster than to a single large drive.
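A noon log dump like the one described above is easy to automate with cron. The crontab entry and script below are only a sketch: the script path, server name, password handling and dump location are all assumptions you would adapt to your site.

```shell
# Hypothetical crontab entry for the sybase user:
# dump the transaction log every day at noon
0 12 * * * /sybase/scripts/dumplog.sh

# /sybase/scripts/dumplog.sh -- hypothetical script run by cron
isql -Usa -SSYBASE -P`cat /sybase/.pw` <<EOF
dump transaction pubs2 to "/logdumps/pubs2_log.dmp"
go
EOF
```

A second cron job (or a few more lines in the script) could then FTP the dump file to another machine, as we do on our production server.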
Each time you boot your Linux system, the fsck program is run against your file systems to make sure all the directories and data are consistent. A file system can become inconsistent, causing a loss of data.
The Sybase database server has a similar command: dbcc (database consistency checker). It verifies that all data and page allocations are correct and accounted for. These checks should be run regularly on your databases, and they can be scheduled just as you schedule your backups.
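A typical consistency run against the pubs2 example database might look like the following isql session; which checks you run, and how often, depends on the size of your databases and your maintenance window.

```sql
-- Check page linkage and data for every table in pubs2
dbcc checkdb(pubs2)
go
-- Check that all pages are correctly allocated
dbcc checkalloc(pubs2)
go
-- Check the consistency of the system tables
dbcc checkcatalog(pubs2)
go
```

Like the backups, these commands can be placed in a script and run from cron during off-peak hours.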
You can download the entire application from the Linux Journal FTP server at ftp://ftp.linuxjournal.com/pub/lj/listings/issue64/; to save space, it isn't presented here. There are two tar files: 3250src.tgz, which contains the source code for all the database objects, and 3250cgi.tgz, which contains the CGI scripts and other files the application needs. Log in as the sybase user and unpack the tar files, reading the README files before installing. You'll need to install the pubs2 example database shipped with the server, and the CGI files should be placed in the /cgi-bin directory of your web server. Sybase and Sybperl must both be installed to use this application; that installation is covered in the first two parts of this article.
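An install session might look roughly like this. The exact file layout is described in the README files, so treat the copy step and paths here as assumptions; the installpubs2 script is the one shipped with the server for loading the pubs2 example database.

```shell
# As the sybase user, unpack both tar files
$ tar xzf 3250src.tgz
$ tar xzf 3250cgi.tgz

# Install the pubs2 example database shipped with the server
$ isql -Usa -P -i $SYBASE/scripts/installpubs2

# Place the CGI files in the web server's cgi-bin
# (destination path assumes a default Red Hat 5.x layout)
$ cp -r store store.inc /home/httpd/cgi-bin/
```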
This isn't an article about web development, so I won't go into great detail about the web aspects of the application. It also isn't a web application beauty contest. You will see I didn't spend much time making it look nice. I'm sure you can find the time to make the application look the way you like it.
If you installed the application in the default location, you can start a web browser on your system and go to http://localhost/cgi-bin/store/store/ to run the application. If you receive a “File Not Found” error, the web server cannot locate your CGI script. On a default Red Hat 5.x system, the CGI Perl script should be located at /home/httpd/cgi-bin/store/store.
If you receive an “Internal Server Error” message, your Perl interpreter probably isn't located at /usr/bin/perl. Edit the file /home/httpd/cgi-bin/store/store and change the path on the first (#!) line to point to your Perl executable. One way to diagnose the problem is to run the Perl script at the command line: it should print HTML. If it prints an error message instead, resolve that problem first.
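The command-line check described above might look like this (the paths assume the default Red Hat 5.x install location):

```shell
# Run the script by hand to see any error directly
$ cd /home/httpd/cgi-bin/store
$ ./store

# If the shebang path is wrong, find your perl
# and fix the first line of the script to match
$ which perl
$ head -1 store
```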
If you receive the store menu, click on Search for Books. If you receive an error message instead, your client cannot connect to the database. Look at the file /home/httpd/cgi-bin/store/store.inc; you may need to change the name of the database server. Make sure your database server is up and running, and that you can connect to it using the user name and password in the store.inc file. Use the isql program at the command line to test this.
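A quick connectivity test with isql looks like this; substitute the user name, password and server name from your store.inc file (the values shown are placeholders):

```shell
$ isql -Usa -Pmypassword -SSYBASE
1> use pubs2
2> go
1> select count(*) from titles
2> go
1> quit
```

If this fails, the CGI application will fail the same way, so fix the connection details in store.inc (or start the server) before debugging anything else.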
Each screen of the application has a subroutine written in Perl. The subroutine is named after the last part of the URL. To search for books, the URL would be http://localhost/cgi-bin/store/store/search/. The subroutine responsible for this is called search.
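The dispatch idea can be sketched roughly as follows. This is not the application's actual code: the subroutine table, the PATH_INFO parsing and the fallback to a menu screen are all assumptions made for illustration.

```perl
#!/usr/bin/perl
# Rough sketch of per-screen dispatch: the last piece of
# the URL selects the Perl subroutine of the same name.
use strict;

my %screens = (
    menu   => \&menu,
    search => \&search,
);

# For /cgi-bin/store/store/search, PATH_INFO is "/search";
# grab the last path component
my ($screen) = ($ENV{PATH_INFO} || '') =~ m{([^/]+)/?$};
$screen = 'menu' unless $screen && $screens{$screen};
$screens{$screen}->();

sub menu   { print "Content-type: text/html\n\n<h1>Store Menu</h1>\n" }
sub search { print "Content-type: text/html\n\n<h1>Search for Books</h1>\n" }
```

Adding a screen then means adding one subroutine and one entry to the table, with the URL naming convention doing the rest.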