This message is for LJ's At the Forge columnist Reuven Lerner: I've read both the Ruby beginning books and looked all over the place, and I've not been able to find anything that takes you from a nice beginner setup with a single database table to something real—with a “well-designed” database—which means lots of tables. I know there's an official name for it, but I don't remember it—basically, no data is duplicated anywhere, you just have “pointers” (IDs) everywhere.
The reason I care is that I'm trying to do just that—the application is an on-line user-based help system (which doesn't even come close to describing it). The short version is that I've got user tables, user e-mail tables, other sorts of user identification (which can be one entry or many more), lists of things users are interested in (like hobbies), when they last verified their e-mail addresses, and on and on. Once I get past a single table though, I can find no help anywhere.
Please point me to a book or something that goes beyond a one- or two-table database into the “real world”. This is for a nonpaying hobby. I've been fooling with this now for more than ten years (I started with Perl, but gave up on it once I had all the database entry done and had to start working on the database searching).
Reuven M. Lerner replies: Unfortunately, you're right. Most tutorials (including my own!) have one table, or maybe two, and don't go much beyond that. It's hard enough to write something understandable that fits into a normal-sized article. Several tables would require even more time and space, making things more complicated still.
One solution is just to extrapolate a bit from those examples. An association always connects exactly two tables, and it works the same way for any pair of tables, so you simply repeat the pattern for however many tables you have.
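To make that concrete, here is a minimal plain-Ruby sketch (no Rails; the table and column names are invented for illustration) of the “pointers everywhere” idea you describe: every e-mail row stores only a user_id, never a copy of the user's data, and an association is just a lookup by that id:

```ruby
# Each "table" is an array of rows; rows reference one another only by
# id, never by duplicated data -- the essence of a normalized schema.
users  = [ { id: 1, name: "alice" }, { id: 2, name: "bob" } ]
emails = [ { id: 10, user_id: 1, address: "alice@example.com" },
           { id: 11, user_id: 1, address: "a2@example.com"    },
           { id: 12, user_id: 2, address: "bob@example.com"   } ]

# A "has_many" association is nothing more than a lookup by foreign key:
def emails_for(user, emails)
  emails.select { |e| e[:user_id] == user[:id] }
end

alice = users.first
puts emails_for(alice, emails).map { |e| e[:address] }  # alice's two addresses
```

In Rails proper, the same shape is declared once per pair of tables (User has_many :emails, Email belongs_to :user), and the pattern simply repeats for hobbies, identifiers, verification timestamps and so on.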
But I believe, based on what you've written here, that you're looking for something more concrete—something you can really sink your teeth into to understand the techniques.
One of my favorite Rails books is Enterprise Rails, written by Dan Chak and published by O'Reilly. His examples may err in the other direction, using a variety of advanced techniques that are overkill for your simple application or for simply seeing how many tables fit together. But several of the examples involve multiple tables joined in different ways to demonstrate a variety of Rails techniques.
However, if you're really looking to understand how a number of tables might fit together, look at one of the Rails-based open-source applications available on the Internet. There are a number of social-networking platforms (such as Insoshi and LovedByLess), at least one e-commerce system (Spree), and at least one content-management system (Radiant). You can download, explore and try to understand the code. In your particular case, it sounds like the most interesting part of these applications will be the models and the associations among them, but there are lots of other parts to a Rails application, and looking at these open-source applications can help you better understand those too. I hope this is helpful! Please let me know if you have any further questions.
I have been a reader and subscriber to Linux Journal for a number of years. Almost every month I learn something new that can be applied immediately. With the help of the excellent Linux Journal articles over the years, I have had the opportunity to install database servers, backup servers, network monitoring systems and PBX systems.
This month, I was trying out a Xen server with the intention of installing SOGo. As I was preparing to deploy my first appliance, I followed the commands on page 74 of the January 2010 issue [“Simple Virtual Appliances with Linux and Xen” by Matthew Hoskins] and promptly wiped out my previous work. The tar command should read:
tar -cvzf appliance-base.img.tar.gz appliance-base.img appliance-base.cfg

and not:

tar -cvzf appliance-base.img appliance-base.cfg
Keep up the good work.
Matthew Hoskins replies: Rob, my sincerest apologies. I don't know how that typo crept in there. You are correct. Thanks for your feedback.
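The typo is destructive because, with the -f flag, tar treats the very next argument as the archive to create, so the mistaken command overwrites the disk image itself. A scratch-directory demonstration (paths invented):

```shell
# With -f, tar takes the NEXT argument as the archive to create, so the
# erroneous command silently replaces appliance-base.img with a gzipped
# archive of appliance-base.cfg -- exactly how the reader's work was lost.
mkdir -p /tmp/tar-demo && cd /tmp/tar-demo
echo "disk image data" > appliance-base.img
echo "config data"     > appliance-base.cfg

tar -cvzf appliance-base.img appliance-base.cfg   # clobbers the image!

tar -tzf appliance-base.img   # the "image" is now just an archive of the .cfg
```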
As a bash teacher at Marseille University, I like to read Dave Taylor's columns. In the February 2010 issue, on page 24, there is a line that says trap "..." 0 1 9 15. If you try trap -l to get the list of UNIX signals, you'll see that 0 is not a signal (though it has a special meaning for kill: kill -0 pid succeeds if pid exists, without sending any signal). Moreover, 9 stands for SIGKILL, which cannot be caught. Finally, a reasonable choice is: 1 2 3 15.
Dave Taylor replies: Sacré bleu! You're right, there is no trap 0 signal to catch, and you can't catch SIGKILL. Thanks!
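For readers following along, a minimal sketch of the corrected line (the cleanup function and temporary filename are invented for illustration):

```shell
#!/bin/bash
# Catch HUP (1), INT (2), QUIT (3) and TERM (15). Note that 0 is bash's
# EXIT pseudo-condition rather than a real signal, and SIGKILL (9) can
# never be caught by any process.
cleanup() {
    rm -f "/tmp/myscript.$$"
    echo "cleaned up"
}
trap cleanup 1 2 3 15

# The signal names are accepted too, and read better:
trap cleanup HUP INT QUIT TERM
```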
Daniel Bartholomew's review of the Always Innovating Touch Book in the
February 2010 issue mentioned that the software wasn't very good. I was
wondering if there was a way to install Ubuntu or Windows XP instead?
Daniel Bartholomew replies: The Touch Book is built around an ARM Cortex-A8 CPU from Texas Instruments. Because of this, your choices are limited. Windows, for example, does not have an ARM-compatible version. There are some choices available though. Ubuntu has an ARM port, and both Android and Chrome OS run on ARM processors. There are also other Linux distributions that run on ARM processors and probably could be made to run on the Touch Book. The Touch Book Wiki has the best information on the various distributions you can run on the Touch Book: www.alwaysinnovating.com/wiki.
Always Innovating also seems to have recognized that its Linux OS is not the greatest, because the latest version of the Touch Book OS (at the time of this writing, it's version 2010-01.b) has Ubuntu and Android included as boot-time options, with Chrome OS promised in a future version.
The Ubuntu boot option boots you into a vanilla Ubuntu Xfce desktop environment. In my limited testing, it appears to work well enough, but I wish they had used the Ubuntu Netbook Remix or the MID edition, as what you get isn't optimized for the touchscreen, and there doesn't appear to be an on-screen keyboard either. The Android boot option isn't fully functional yet. It boots, and you get to the desktop, but you can't do much else. For example, the two hardware buttons aren't mapped to any of the standard Android hardware buttons (home, menu and back), and there doesn't appear to be anything set up to emulate them, which makes Android unusable for now.
The default Touch Book OS has improved during the past few months, but it still has too many issues for me to recommend it, unless you like getting up close and personal with your hardware and software.
Kyle Rankin is usually spot on with his Hack and / column, but he may have confused readers with his explanation of CPU load and the output of the w/uptime command on Linux [March 2010 issue, “Linux Troubleshooting, Part I: High Loads”]. Contrary to what he says, w does not show the number of processes waiting for the CPU to become available.
On Linux, the load average includes both processes that are ready to run and processes waiting on I/O. Later in the article, he talks about CPU- and I/O-bound load situations and is correct about how they can be monitored. It's just a bad summary that might confuse folks. That is why you can have a responsive system even though w reports a load of 40.
I used to administer SunOS/Solaris servers, and on those, the load average was genuinely the number of processes ready to use the CPU. I got confused myself when I started working on Linux, because the semantics of the command were different.
Kyle Rankin replies: Thanks for the clarification! In trying to explain the idea of load in a simple way, I definitely left out the more complete definition. Here it is from the uptime man page: “System load average is the average number of processes that are either in a runnable or uninterruptable state. A process in a runnable state is either using the CPU or waiting to use the CPU. A process in an uninterruptable state is waiting for some I/O access, eg waiting for disk.”
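On any Linux box, you can watch that definition at work directly; the ps invocation below is one way (of several) to spot processes in uninterruptable sleep:

```shell
# The load figures come from /proc/loadavg: the 1-, 5- and 15-minute
# averages count processes that are runnable (state R) or in
# uninterruptable sleep (state D, typically waiting on disk I/O).
cat /proc/loadavg          # e.g. 0.42 0.35 0.30 1/123 4567

# List processes currently in the D state -- these inflate the load
# figure without consuming any CPU:
ps -eo state=,pid=,comm= | awk '$1 == "D"'
```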
In the March 2010 issue, Paul wrote a very interesting letter about filling the /etc/hosts file with IPs of important DNS names to avoid any man-in-the-middle spoofing or phishing attacks that could be used on free/public LANs. I use OpenDNS for all my computers, mainly because they offer a faster service than my ISP (in the UK) as well as running an anti-phishing database service to protect the rest of the family.
I was wondering if setting the computer to use OpenDNS instead of the default (and the possibly corrupt) DNS servers issued by the router/DHCP server is as safe as inserting IPs into the hosts file?
Is it possible for a compromised DNS server (say, inside a router) to
intercept DNS queries destined for an external IP address and return false
address data to the original node?
Sadly, while pointing at OpenDNS means a compromised DNS server can no longer poison your results, a compromised router certainly can. And in that instance, it doesn't matter whether we're talking about DNS or just traffic in general; an untrusted network is quite untrustworthy!
As an example, the router in your hypothetical network could simply reply while pretending to be the OpenDNS servers. Because the router carries all your traffic, and plain DNS gives you no way to verify where a reply really came from, it's easy to spoof. In fact, that's why so many people at coffee shops immediately start a VPN session: the encryption guarantees you're connecting to the proper endpoint.
So in the end, the only way to be safe behind an untrusted router is to use some sort of VPN. Hope that helps!—Ed.
Concerning John Knight's description of wxGuitar in his New Projects column
in the March 2010 issue:
H is the German name for the note we know in English as B, and B to a German means our English B-flat. So wxGuitar probably comes from Germany or another country with a German-speaking musical heritage.
Bach's B minor Mass is, in German, in the key of h-Moll.
The company I work for takes legal compliance with licenses seriously. This presents a difficulty when using free Linux distros, because the clearest statement they seem to make (if you're lucky) is that the licenses of the software in the distro are compatible. Since distros typically contain thousands of packages, each with its own license, it is quite expensive for a company to check that it can comply with the terms of each (a simple example: that no package says it may not be used for commercial purposes). It could tie up a legal department for weeks.
IANAL, but even Red Hat's licenses look slightly tricky. They say that the core stuff is all GPL2, but they also say that it contains many components, each with its own license. I guess Red Hat doesn't distribute OOo (now GPL3). Red Hat also has a set of 16 third-party licenses, one of which (Monotype) says you may take only a single copy for backup. So, let's hope no one has multiple level 0 system backups! Another, the “Macromedia” (aka Adobe) license, would be even harder to comply with. If you install Adobe Reader on two servers (one for failover), you're in breach.
I just used Red Hat as an example, because you would think commercial distros would have the clearest statement of a user's legal obligations, but even their licenses would take a while to check properly. (Did I mention that the “Macromedia” license links to a further set of dozens of licenses for other Adobe software that may be relevant?)
This seems like a crazy situation. Surely it makes sense for the legal position of each distro to be clearly set out and summarized for companies that want to use it in good faith. Instead, every user seems to be expected to duplicate the effort of checking for the typical problematic restrictions (such as “not for commercial use” or NAP).
Of course, the same situation applies to other software collections, like
the wonderful Cygwin. I'm writing to you to bring the matter to the
community's attention, and in the hope that the situation is not really as
impossible as it seems.
I know exactly what you mean, and our community of Linux users knows all too painfully how controversial licensing, even open-source licensing, can be. The GPL itself, as you mention, is confusing, with its multiple versions. I'm not a lawyer either, but I fear this won't end any time soon. When companies like Adobe try to stretch their comfort level and delve into open source, they do so cautiously, so that their intellectual property isn't stolen. Quite frankly, I understand their concerns, and I applaud them for making any movement into open source at all.
Perhaps in the future, licensing will be less complex, as time proves open source is a “safe” environment to work in and still make money. Until that time, it is a complicated mess to say the least.—Ed.
Have a photo you'd like to share with LJ readers? Send your submission to email@example.com. If we run yours in the magazine, we'll send you a free T-shirt.