I ran into problems when I first tried to compile the fltk component (hence, yesterday I was going to cover only the console program). I'm not sure exactly what fixed it, but I think it was downloading fltk 1.3 manually from the fltk Web site, then compiling and installing it separately. If you manage to get it compiled, you can run the GUI program from the fltk directory by entering:

$ ./Nut
Note the capital letter above—it's the differentiator between the GUI and command-line programs.
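For reference, the manual fltk build mentioned above is the usual configure/make routine. The steps below are only a sketch: the tarball name is an example, and your version number and download location may differ.

```shell
# Unpack the fltk 1.3 source tarball (filename is an example)
tar -xzf fltk-1.3.x-source.tar.gz
cd fltk-1.3.x

# Configure and build
./configure
make

# Install system-wide (needs root or sudo)
sudo make install
```

With fltk installed this way, re-running NUT's own build should pick it up and produce the Nut GUI binary.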
If you'd like quick access to NUT, copy the executables into bin folders. If you're still in the fltk directory, change back into the main directory of the nut folder:
$ cd ..
Next, enter these commands as root (or with sudo):
# mv nut /usr/local/bin/
# mv nut.1 /usr/local/man/man1/
# mv fltk/Nut /usr/local/bin
Now you can run the command-line version with:

$ nut

or the GUI with:

$ Nut
Unfortunately, the long installation instructions haven't left me much room to cover the actual usage of NUT, but thankfully, things are pretty simple to use.
The console version uses a series of number-driven menus to navigate between functions and foods. For instance, option 1 is for recording meals, followed immediately by a prompt for the date, the meal number and, finally, the name of the food.
Entering the name of the food needn't be precise, as NUT's main strength is its database. Long lists of premade choices exist, and each choice has detailed information regarding a food's nutritional value, such as protein, carbohydrates, specific vitamins and so on.
Head back into the main menu, and more options exist, such as an analysis of your meals and food suggestions, trend plotting and so on, but most people will want to look at options 4 and 6. Here you can browse the extensive database, comparing nutritional values of all sorts of food and drink to your heart's content. The entries are extensive—everything from Red Bull to bearded seal meat.
As for the GUI, I'm not 100% sure, but it appears to have more options than the console version, such as reset controls and the ability to control various ratios. Perhaps I missed them in the console version, but either way, there's definitely more on the screen, more of the time. Plus, everything is broken down into tabs, making the whole process more intuitive, saving the user from navigating endless submenus.
All in all, this is a very clever program despite the currently long-winded installation process. Once those issues are ironed out, NUT will be a seriously nifty nutrition program.
Read More: http://nut.sourceforge.net
John Knight is the New Projects columnist for Linux Journal.