Linux Finance Programs Review
The next program is QHacc version 0.4.3. (Note: QHacc's authors have released version 0.5, but too late for me to include in this article.) This program requires the Qt toolkit, which I found quite easy to install. I compiled Qt from source and followed the installation instructions, then compiled and installed QHacc without incident.
QHacc provides a simple two-paned layout. The left pane contains a list of accounts and balances, and the right pane contains the ledger for the selected account (see Figure 3).
Transaction entry is a little different from the other programs. Selecting “new” in the ledger brings up a transaction window where you enter the check number, date, payee, amount and memo. QHacc inserts the next available check number and provides an auto-complete feature for the payee. The transaction is entered into the ledger by pressing the ENTER key, while pressing the ESC key cancels it. Withdrawals must be preceded by a minus sign, because QHacc does not provide separate credit and debit text boxes.
QHacc also provides a mechanism for memorizing transactions. After entering the transaction that you want to memorize, right-click on it in the ledger and select “Memorize” from the pop-up menu. To insert a memorized transaction, right-click on an empty ledger line, go to the memorized item in the pop-up menu and select the transaction you wish to insert.
QHacc can be set up for single- or double-entry bookkeeping. If you want to use categories for keeping track of your transactions, you must use double-entry bookkeeping. You must also use double-entry accounting to automatically update account balances when transferring money between them, otherwise you have to enter the transfer in both accounts. If you elect to use double-entry bookkeeping, you can also split a transaction among several accounts.
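The mechanics behind double-entry bookkeeping can be sketched in a few lines of code (this is an illustration of the concept, not QHacc's actual implementation): every transaction is a set of splits whose amounts sum to zero, so a transfer or a categorized split updates all the affected accounts in one step.

```python
# Illustrative sketch of double-entry bookkeeping (not QHacc's code):
# a transaction is a list of (account, amount) splits summing to zero.
from collections import defaultdict

balances = defaultdict(float)

def post(splits):
    """Post a balanced transaction to the account balances."""
    assert abs(sum(amount for _, amount in splits)) < 1e-9, \
        "splits must balance"
    for account, amount in splits:
        balances[account] += amount

# Transferring $200 from Checking to Savings is ONE transaction with
# two splits -- no need to enter it in each ledger separately.
post([("Checking", -200.00), ("Savings", 200.00)])

# A split transaction divides one payment among several categories.
post([("Checking", -75.00),
      ("Expenses:Groceries", 50.00),
      ("Expenses:Household", 25.00)])
```

Because both sides of the transfer live in the same transaction, neither ledger can fall out of sync with the other, which is exactly the benefit QHacc's double-entry mode provides.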
QHacc provides a simple graphing function that shows the net total of transactions by week. According to the companion TODO document, more graphs will be added in the future.
Account reconciliation is the same as in the other programs. Remember to enter any interest payments or service charges before using it. Enter the starting and ending balances from your bank statement, then select entries to clear.
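The arithmetic behind reconciliation is simple enough to show directly (a hedged sketch, not any program's actual code): the entries you mark as cleared must account for the difference between the statement's starting and ending balances.

```python
# Sketch of the reconciliation check: cleared entries must explain
# the change from the statement's starting to its ending balance.
def reconciled(start_balance, end_balance, cleared_amounts):
    """True when the selected (cleared) entries balance the statement."""
    return abs(start_balance + sum(cleared_amounts) - end_balance) < 0.005

# Statement started at $1,000.00 and ended at $1,034.50; we cleared
# a $50.00 deposit, a $20.00 check and $4.50 of interest.
print(reconciled(1000.00, 1034.50, [50.00, -20.00, 4.50]))  # True
```

This is also why the interest and service-charge entries must go in first: without them, the cleared entries cannot sum to the statement difference.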
At version 0.4.3, QHacc is the youngest of the programs I looked at. I did find one problem: if I entered 00 for the year, QHacc used 1900. Also, QHacc does not offer the ability to import QIF files.
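The year bug is the classic two-digit-year mistake: adding 1900 to whatever the user typed. A minimal sketch of the bug and the usual pivot-window fix (the function names here are mine, for illustration only):

```python
# Illustration of the two-digit-year bug and a common fix.
def naive_year(yy):
    # QHacc-style behavior: 00 becomes 1900.
    return 1900 + yy

def pivot_year(yy, pivot=70):
    # Two-digit years below the pivot are taken as 20xx, the rest 19xx.
    return 2000 + yy if yy < pivot else 1900 + yy

print(naive_year(0))   # 1900 -- the bug described above
print(pivot_year(0))   # 2000
print(pivot_year(99))  # 1999
```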
GnuCash is the most ambitious financial program being developed at this time. It offers the greatest variety of account types, sub-accounts and stock price retrieval. This program was the most difficult to compile and get working because it depends on quite a few other programs, libraries and Perl modules. I looked at both the stable version (1.2.5) and the current development version (1.3.6).
Before attempting to use either of these versions, read the documentation closely to determine which additional programs, libraries and Perl modules you will require. Version 1.2.5 requires Motif or LessTif and version 1.3.6 uses GNOME and the GTK. I had better luck installing them on a Red Hat 6.1 system than I did on a SuSE 6.1 system.
GnuCash offers a slightly different interface than the other programs I tested. Its main window displays a list of accounts with balances, and a new ledger window is opened for each account. This allows you to view and edit more than one account at the same time (see Figure 4).
GnuCash offers more types of accounts than the other programs (see Figure 5). An account can be identified as a bank account, cash, asset, credit card, stock, liability, mutual fund, currency, income, expense or equity. Accounts can be children of other accounts, allowing you to create portfolios of funds. The ledger windows change slightly depending upon the type of account you are working with.
The ledger windows let you display single- or multi-line entries and sort transactions by date, check number, transaction amount, memo or description. Unfortunately, they do not remember your display selections after you close them.
Keyboard entry leaves a bit to be desired in version 1.2.5. While you can use the TAB key to move through the fields, in single-line mode you cannot tab over to the payment or deposit fields because the focus jumps from the account field to the “Record” button. Version 1.3.6 puts the command buttons above the ledger, fixes the tab movement function and accepts a transaction when you press the ENTER key. Neither version automatically increments check numbers in the ledger window.
I had some problems importing my QIF file from Quicken 99. GnuCash version 1.2.5 read my Quicken file and did a good job of creating my chart of accounts, but all the ledger entries from the QIF file had a date of 12/31/1969. Version 1.3.6 could not even read my QIF file, quitting with the message “wrong argument in position 1”.
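That 12/31/1969 date is the telltale sign of a Unix timestamp of zero (the epoch, midnight January 1, 1970 UTC) rendered in a US time zone: the importer evidently failed to parse the QIF dates and fell back to zero. A hedged sketch of parsing a QIF "D" (date) record, which Quicken writes in forms such as D3/15'99 or D3/15/1999 (the pivot-year handling here is an assumption, not GnuCash's actual logic):

```python
# Sketch of parsing a QIF date record; a failed parse left at
# timestamp 0 is what displays as 12/31/1969 in US time zones.
import re
from datetime import date

def parse_qif_date(line):
    """Parse a QIF date record like "D3/15'99"; return a date or None."""
    m = re.match(r"D(\d{1,2})/(\d{1,2})[/'](\d{2,4})", line.strip())
    if not m:
        return None  # caller must handle this, not silently use 0
    month, day, year = (int(g) for g in m.groups())
    if year < 100:
        # Assumed pivot: two-digit years below 70 are 20xx.
        year += 2000 if year < 70 else 1900
    return date(year, month, day)

print(parse_qif_date("D3/15'99"))   # 1999-03-15
print(parse_qif_date("D12/31/00"))  # 2000-12-31
```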
GnuCash offers reports but no graphs at this time. The reports included in version 1.2.5 are Balance Sheet, Profit and Loss and Portfolio Valuation. Version 1.3.6 offers these reports plus additional ones, such as a budget report, but there is no way to create a budget from within GnuCash at this time.
While certainly the most ambitious program of the group, GnuCash was also the most difficult to install, although to its credit the documentation does state which programs and libraries are required. The dependence on so many external programs and the difficulty of importing QIF files are the main problems with GnuCash.