If you're watching your weight, monitoring your health and dietary habits, or simply unconvinced by flashy food labels that don't tell the whole story, this is the project for you. According to the Web site:
I have written open-source free nutrition software, NUT, which records what you eat and analyzes your meals for nutrient levels in terms of the "Daily Value", or DV, which is the standard for food labeling in the US. The program uses the free food composition database from the USDA. This free nutritional analysis software was written for UNIX systems (I use Linux), but it can be compiled on just about any system with a C compiler. (To get a free C compiler, Windows people might look at Cygwin or MinGW, and Mac people might look at xcode.) By experimenting with NUT, you can find the optimal level of the various nutrients and how to implement this with foods available to you. NUT can help reconstruct the lost instruction manual to your care and feeding, because, when the authorities and crackpots disagree on the proper human diet, you can design an experiment using the food composition tables to discover the truth!
NUT has an extensive database of food statistics, worth the price of admission alone (console version pictured).
The NUT GUI makes using this program much less tiresome and displays other forms of information simultaneously. Here are the stats for bearded seal oil.
One of the main reasons for using NUT is recording your daily meals and then running detailed analysis against them.
I'm not sure about other distributions, but binaries are available for Debian and Ubuntu. I'm going with the usual source option here. Grab the latest source tarball, extract it, and open a terminal in the new folder. At the time of this writing, NUT didn't have an install script, so you'll need to do a number of steps manually. Assuming the /usr/local folders are fine for installation, issue the following commands as root:
# mkdir /usr/local/lib/nut/
# mv raw.data/* /usr/local/lib/nut/
If your distro uses sudo (such as Ubuntu), simply prefix those commands with the sudo command.
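On Ubuntu, for instance, that would be:

$ sudo mkdir /usr/local/lib/nut/
$ sudo mv raw.data/* /usr/local/lib/nut/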
Once this step is out of the way, compile the program with:

$ make
If the compiling goes well, you should be able to use the console program immediately. Simply enter the command:

$ ./nut
This runs the console program, which I look at in the next section. As for the GUI program, that needs to be compiled separately.
Change into the fltk directory by entering:
$ cd fltk
And again, enter the command:

$ make
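If that build succeeds too, the resulting GUI binary sits right there in the fltk directory. I wouldn't swear to the exact binary name, so check the directory listing, but launching it should look something like this:

$ ./Nut    # substitute whatever binary name make actually produced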
John Knight is the New Projects columnist for Linux Journal.