Now let's add our first customer with the command
nosql edit customer.rdb
The default editor is vi, but you can use your favorite editor by setting the EDITOR environment variable. The following screen is presented to the user:
CODE
NAME
PHONE
EMAIL

Just fill in the fields with some information, remembering to separate values from field names with a tab. Do not delete the first and last blank lines; this is not a bug: it's the way NoSQL handles lists, a little feature I prefer to let you discover later in this article.
CODE	ACM001
NAME	Bugs Bunny
PHONE	1-800-CATCH-ME
EMAIL	firstname.lastname@example.org

Now that we have filled in the form, just write it (ESC, then :wq!) and the command will check that the format is correct and write the table to disk. Wow, we have a real table and real data!
Since we are curious, let's take a look at the actual file on disk.
CODE	NAME	PHONE	EMAIL
------	----------	--------------	------------------------------
ACM001	Bugs Bunny	1-800-CATCH-ME	firstname.lastname@example.org
First of all, it is important to note that all columns are tab-separated: please keep this in mind when you want some external program to update the table, otherwise you will break the table's integrity.
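If an external script does touch the table, the safest habit is to emit tabs explicitly and to tell awk that the separator is a tab on both input and output. A minimal sketch, assuming the customer.rdb file from above; the ACM003 record is made up for illustration:

```shell
# Append a record with explicit \t separators; echo with plain spaces would
# silently split one field ("Elmer Fudd") into two columns.
printf 'ACM003\tElmer Fudd\t1-800-555-0199\telmer@example.com\n' >> customer.rdb

# When rewriting lines, set both the input separator (-F) and the output
# separator (OFS) to a tab, or awk will rejoin fields with single spaces.
awk -F '\t' 'BEGIN { OFS = "\t" } { print $1, $2 }' customer.rdb
```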
The first line is called the headline and contains the column names; the second is the dashline and separates the headline from the body. Together, these two lines form the table header. The rest is called the table body and contains the actual data.
A number of commands have been built to display these parts, and they are simply calls to ordinary UNIX utilities:
nosql body: displays the table body (same as: tail +3 < table)
nosql dashline: displays the table dash line (same as: sed -n 2p < table)
nosql header: displays the full table header (same as: head -2 < table)
nosql headline: displays the table headline (same as: head -1 < table)
nosql see: displays the TAB character as ^I and newline as $, making it much easier to see what's wrong in a broken table (same as: cat -vte < table)
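These equivalences are easy to check against a throw-away table; no NoSQL installation is needed. A sketch (demo.rdb is a made-up two-column table; tail -n +3 is the modern spelling of tail +3):

```shell
# Build a minimal table: headline, dashline, then two body records.
printf 'CODE\tNAME\n'          >  demo.rdb
printf -- '----\t----\n'       >> demo.rdb
printf 'A1\tBugs\nA2\tDaffy\n' >> demo.rdb

head -1    < demo.rdb   # headline: the column names
sed -n 2p  < demo.rdb   # dashline
head -2    < demo.rdb   # full header (headline + dashline)
tail -n +3 < demo.rdb   # body: the actual records
cat -vte   < demo.rdb   # tabs become ^I, line ends become $
```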
Once again, this shows how powerful the UNIX OS is on its own, and how handy it can be for add-on packages such as NoSQL to tap into this power without having to re-invent the wheel.
A fun way to fill the table is by using environment variables. You can export the variables any way you like (for example, using UNCGI in a CGI environment), naming them after the columns and giving them the desired values as follows:
export CODE="ACM002"
export NAME="Daffy Duck"
export PHONE="1-800-COOK-ME"
export EMAIL="email@example.com"
Then issue the command:
nosql lock customer.rdb; env | nosql shelltotable |\
nosql column CODE NAME PHONE EMAIL |\
nosql merge CODE NAME PHONE EMAIL customer.rdb |\
nosql write -s customer.rdb; nosql unlock customer.rdb

and the work is done. A bit cryptic? Yes, but that's the power of NoSQL: it all can be done in a single shell command. Let's explain it:
nosql lock customer.rdb: locks the table and ensures no one else can write to the table while we do.
env: prints the environment variables.
nosql shelltotable: reads all variables from the pipe and writes a single record table containing all values to STDOUT.
nosql column CODE NAME PHONE EMAIL: reads the NoSQL table containing the environment variables from the pipe, selects columns CODE, NAME, PHONE and EMAIL, in that order, and writes the result to STDOUT.
nosql merge CODE NAME PHONE EMAIL customer.rdb: reads the two tables to be merged, one from the pipe (STDIN) and the other from the file, writing the merged table to STDOUT. The resulting table has two records: the existing one and the new one produced by the steps above.
nosql write -s customer.rdb: reads the merged table from the pipe and writes it to disk as customer.rdb. We already explained what the -s switch means.
nosql unlock customer.rdb: unlocks the table.
CODE    NAME        PHONE           EMAIL
------  ----------  --------------  ------------------------------
ACM001  Bugs Bunny  1-800-CATCH-ME  firstname.lastname@example.org
ACM002  Daffy Duck  1-800-COOK-ME   email@example.com
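The lock/merge/write discipline above can be imitated with stock UNIX tools if you want to see what each stage contributes. This is a rough sketch only, not what NoSQL actually does internally: mkdir stands in for nosql lock (directory creation is atomic), sort -m for nosql merge, and an atomic rename for nosql write -s; all file names here are made up.

```shell
# Acquire the lock: mkdir fails if the directory already exists, so only
# one writer can hold customer.rdb.lck at a time.
until mkdir customer.rdb.lck 2>/dev/null; do sleep 1; done

printf 'ACM001\tBugs Bunny\n' > customer.body  # existing body (header omitted)
printf 'ACM002\tDaffy Duck\n' > new.tmp        # the new single-record table

sort -m customer.body new.tmp > merged.tmp     # merge two sorted record streams
mv merged.tmp customer.body                    # atomic replace, like write -s

rmdir customer.rdb.lck                         # release the lock
```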