Now let's add our first customer with the command
nosql edit customer.rdb
The default editor is vi, but you can use your favorite editor by setting the EDITOR environment variable. The following screen is presented to the user:
CODE
NAME
PHONE
EMAIL

Just fill in the fields with some information, remembering to separate values from field names with a tab. Do not delete the first and last blank lines; this is not a bug: it's the way NoSQL handles lists. But I prefer to let you discover this little feature later in this article.
CODE    ACM001
NAME    Bugs Bunny
PHONE   1-800-CATCH-ME
EMAIL   email@example.com

Now that we have filled in the form, just write it (ESC, then :wq!) and the command will check that the format is correct and write it to disk. Wow, we have a real table and real data!
Since we are curious, let's take a look at the actual file on disk.
CODE    NAME        PHONE           EMAIL
------  ----------  --------------  -----------------
ACM001  Bugs Bunny  1-800-CATCH-ME  email@example.com
First of all, it is important to note that all columns are tab-separated: please keep this in mind when you want some external program to update the table, otherwise you will break the table's integrity.
The first line is called the headline and contains the column names; the second is the dashline and separates the headline from the body: together they form the table header. The rest is called the table body and contains the actual data.
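Because of this layout, an external tool that reads the table must split fields on tabs (never on generic whitespace, or a multi-word value like "Bugs Bunny" falls apart) and skip the two header lines. A minimal sketch with awk, using a sample file built inline for illustration:

```shell
# Build a tiny sample table (two header lines, one record):
printf 'NAME\tPHONE\n----------\t--------------\nBugs Bunny\t1-800-CATCH-ME\n' > sample.rdb

# Split on tabs with -F'\t' and skip the header with NR > 2:
awk -F'\t' 'NR > 2 { print $1 }' sample.rdb   # prints: Bugs Bunny
```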
A number of commands have been built to display these parts, and they are simply wrappers around ordinary UNIX utilities:
nosql body: displays the table body (same as: tail +3 < table)
nosql dashline: displays the table dash line (same as: sed -n 2p < table)
nosql header: displays the full table header (same as: head -2 < table)
nosql headline: displays the table headline (same as: head -1 < table)
nosql see: displays the TAB character as ^I and newline as $, making it much easier to see what's wrong in a broken table (same as: cat -vte < table)
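The plain-UNIX equivalents listed above can be tried directly, here against a small sample table built inline for illustration:

```shell
# A two-column sample table in NoSQL's headline/dashline/body layout:
printf 'CODE\tNAME\n------\t----------\nACM001\tBugs Bunny\n' > sample.rdb

head -1 < sample.rdb     # headline: column names only
sed -n 2p < sample.rdb   # dashline
head -2 < sample.rdb     # full header (headline + dashline)
tail -n +3 < sample.rdb  # body; "tail +3" is the older syntax for the same thing
cat -vte < sample.rdb    # tabs shown as ^I, line ends as $
```

Note that modern GNU tail rejects the historical `tail +3` form, hence `tail -n +3` here.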
Once again, this shows how powerful the UNIX OS is on its own, and how handy it can be for add-on packages such as NoSQL to tap into this power without having to re-invent the wheel.
A fun way to fill the table is to use environment variables. You can export the variables in any way you like (e.g., using UNCGI in a CGI environment), naming them after the columns and giving them the desired values:
export CODE="ACM002"
export NAME="Daffy Duck"
export PHONE="1-800-COOK-ME"
export EMAIL="firstname.lastname@example.org"
Then issue the command:
nosql lock customer.rdb; env | nosql shelltotable |\
nosql column CODE NAME PHONE EMAIL |\
nosql merge CODE NAME PHONE EMAIL customer.rdb |\
nosql write -s customer.rdb; nosql unlock customer.rdb

and the work is done. A bit cryptic? Yes, but that's the power of NoSQL: everything can be done in a single shell command. Let's explain it:
nosql lock customer.rdb: locks the table, ensuring no one else can write to it at the same time we do.
env: prints the environment variables.
nosql shelltotable: reads all variables from the pipe and writes a single-record table containing all their values to STDOUT.
nosql column CODE NAME PHONE EMAIL: reads the NoSQL table containing the environment variables from the pipe, selects columns CODE, NAME, PHONE and EMAIL, in that order, and writes the result to STDOUT.
nosql merge CODE NAME PHONE EMAIL customer.rdb: reads the two tables to merge, one from the pipe (STDIN) and the other from file, writing the merged table to STDOUT. The resulting table has two records: the existing one and the new one built by the steps above.
nosql write -s customer.rdb: reads the merged table produced by the above command and writes it to disk as customer.rdb. We already explained what the -s switch means.
nosql unlock customer.rdb: unlocks the table.
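The first half of the pipeline can be sketched in plain shell: take selected environment variables and emit a one-record table in column order. This is an illustration only; the real shelltotable and column operators handle arbitrary variables and columns robustly.

```shell
# The variables we exported earlier (subset for brevity):
export CODE="ACM002" NAME="Daffy Duck"

# Emit headline, dashline, then one tab-separated record --
# roughly what "env | nosql shelltotable | nosql column CODE NAME" yields:
printf 'CODE\tNAME\n------\t----------\n%s\t%s\n' "$CODE" "$NAME"
```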
CODE    NAME        PHONE           EMAIL
------  ----------  --------------  ------------------------------
ACM001  Bugs Bunny  1-800-CATCH-ME  email@example.com
ACM002  Daffy Duck  1-800-COOK-ME   firstname.lastname@example.org
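The lock/unlock pair in the pipeline above matters whenever several processes may rewrite the table. If you want to see the idea behind it, the same mutual exclusion can be sketched with mkdir, which creates a directory atomically on POSIX filesystems (lock path here is an assumption for illustration; use nosql lock in practice):

```shell
lockdir=customer.rdb.lock

# Acquire: mkdir fails if the directory already exists, so only one
# process at a time can get past this loop.
until mkdir "$lockdir" 2>/dev/null; do
    sleep 1     # someone else holds the lock; wait and retry
done

echo "lock held: safe to rewrite the table"

# Release: remove the directory so other writers can proceed.
rmdir "$lockdir"
```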