Use curl to Monitor Your Vonage Phone Bill

If you're a Vonage user and you'd like to keep tabs on your bill as the month progresses, the script described here can help. The script uses curl to log in to your Vonage account and download the web page with your current balance. The balance is then extracted using grep and sed.

Downloading web pages with curl is fairly easy; it gets a bit tricky, though, when you need to log in to a web site before you can get to the page you want to download. The basic sequence of steps for getting to a page behind a login page using curl is:

  • Determine the login URL (the page with the login form on it).
  • Open the login page in your browser.
  • Use the browser's "View Source" option to look at the HTML and locate the login form. Make note of the URL that the form posts to and the names of the fields that get posted. Generally there are only two fields, username and password (although they might have different names), but there may also be some hidden fields that you need to include (as well as understand what they are and what an appropriate value to send is). A quick way to pick these out of a saved copy of the page is sketched after this list.
  • Retrieve the login page with curl. Note that it may not actually be necessary to retrieve this page, but if the page that the form posts to expects some cookies to be set, you may need to retrieve this first page so that you can create and save the cookies (which curl does for you).
  • Invoke curl a second time and post the login data.
  • Now invoke curl a third time passing the URL of the actual page that you want to retrieve.
  • If the site has a logout link you can optionally also use curl a fourth time to retrieve the logout page to ensure that the session is closed.
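
For example, once you have a local copy of the login page (saved as login.html here, a hypothetical file name), a couple of grep commands can pull out the form's action URL and field names. The exact patterns depend on the site's markup, so treat this as a rough sketch:

# Hypothetical: show the form's action URL and the names of its input fields
grep -o '<form[^>]*action="[^"]*"' login.html
grep -o '<input[^>]*name="[^"]*"' login.html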

The curl command below retrieves the Vonage login page:

curl --silent --cookie-jar $cookie_jar \
    --output $web_page-1 \
    http://www.vonage.com/?login

Note the --cookie-jar option: this stores any cookies set by the website in the specified file. The file in which to store the retrieved page is specified by the --output option.
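
The cookie jar is a plain text file in the Netscape cookie file format, so you can inspect it to confirm that a session cookie was actually saved. The cookie name and value below are made up:

$ cat cookies.tmp
# Netscape HTTP Cookie File
.vonage.com	TRUE	/	FALSE	0	JSESSIONID	0123456789abcdef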

The curl command below now posts the login data required by the login form:

curl --silent --cookie $cookie_jar --cookie-jar $cookie_jar \
    --location \
    --data "username=$username&password=$password" \
    --output $web_page-2 \
    https://secure.vonage.com/vonage-web/public/login.htm

Notice here that in addition to the --cookie-jar option we also specify the --cookie option. This tells curl to use the cookie jar that we created in the first invocation as input for this invocation. We also specify the --location option so that any redirects sent by the server are followed. The actual data to post is specified with the --data option. The values before the equals signs in the data are the field names from the login form; the values after are the appropriate field values to post.
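
One caveat: --data sends the string exactly as given, so a username or password containing characters such as & or = would corrupt the posted data. curl's --data-urlencode option URL-encodes each value for you; a variant of the command using it:

curl --silent --cookie $cookie_jar --cookie-jar $cookie_jar \
    --location \
    --data-urlencode "username=$username" \
    --data-urlencode "password=$password" \
    --output $web_page-2 \
    https://secure.vonage.com/vonage-web/public/login.htm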

The following two curl commands now retrieve the billing page and logout from Vonage:

curl --silent --cookie $cookie_jar --cookie-jar $cookie_jar \
    --output $web_page-3 \
    https://secure.vonage.com/webaccount/billing/index.htm

curl --silent --cookie $cookie_jar --cookie-jar $cookie_jar \
    --location \
    --output $web_page-4 \
    https://secure.vonage.com/webaccount/public/logoff.htm

All that's left now is to extract the account balance from the billing page (the third page that we retrieved). After looking at the returned HTML, I was able to see where the data I wanted was located and determine a way to filter out all the extraneous information using grep and sed:

echo Phone bill: $(grep 'td_value_total_amount' $web_page-3 | sed -e 's/.*>\$//' -e 's/<.*//')
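
To see why this works, consider roughly what the matching line of HTML looks like (the markup below is a hypothetical reconstruction; the real page may differ). The first sed expression deletes everything up to and including the dollar sign, and the second deletes the trailing tag:

$ echo '<td class="td_value_total_amount">$12.50</td>' | \
    sed -e 's/.*>\$//' -e 's/<.*//'
12.50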

The following shows a sample run of the script:

$ sh check-vonage.sh


Phone bill: 12.50

The entire script follows:

#!/bin/bash

cookie_jar=cookies.tmp
web_page=vonage.tmp
username=USERNAME
password=PASSWORD

trap "rm -f $cookie_jar $web_page-*" EXIT

curl --silent --cookie-jar $cookie_jar \
    --output $web_page-1 \
    http://www.vonage.com/?login

curl --silent --cookie $cookie_jar --cookie-jar $cookie_jar \
    --location \
    --data "username=$username&password=$password" \
    --output $web_page-2 \
    https://secure.vonage.com/vonage-web/public/login.htm

curl --silent --cookie $cookie_jar --cookie-jar $cookie_jar \
    --output $web_page-3 \
    https://secure.vonage.com/webaccount/billing/index.htm

curl --silent --cookie $cookie_jar --cookie-jar $cookie_jar \
    --location \
    --output $web_page-4 \
    https://secure.vonage.com/webaccount/public/logoff.htm

echo
echo
echo Phone bill: $(grep 'td_value_total_amount' $web_page-3 | sed -e 's/.*>\$//' -e 's/<.*//')
Attachment: check-vonage.sh_.txt (843 bytes)
______________________

Mitch Frazier is an Associate Editor for Linux Journal.

Comments

tip to get the --data string

Anonymous

Good article; here's a tip to get the --data string.

1. Use lynx to download the source code of the login page:

lynx --source "http://loginpage.com" > some_file

(you might have to edit lynx.cfg to accept self-signed SSL certs)

2. Open the saved login file with vim, change method=post to method=get, and save the file (a sed one-liner for this step is sketched below).

3. Use your web browser to open the edited login file, fill in the fields, and hit the submit button. The string for the --data option will be in the address bar of your web browser after the ?.

Using this method saves you from missing hidden fields.
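
Step 2 can also be done non-interactively. A minimal sketch with GNU sed, assuming the form uses a lowercase method="post" attribute (on BSD sed, use -i '' instead of -i):

sed -i 's/method="post"/method="get"/' some_file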

Interesting idea. Please consider using mktemp

R A Lichtensteiger

Very nice example of using curl to cull information from a web site. I do have one quibble -- instead of defining your variables like this:

cookie_jar=cookies.tmp
web_page=vonage.tmp

I recommend you use mktemp(1) to create a randomized directory and put your files in there. This avoids predictable file names containing potentially sensitive information:

ME=`basename $0`
TMPDIR=`mktemp -qt -d ${ME}.XXXXXX`
if [ $? -ne 0 ]
then
    echo "${ME} Error: unable to create temp directory"
    exit 1
fi

COOKIEJAR=${TMPDIR}/cookies
PAGEBASE=${TMPDIR}/vonage-out
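
If you adopt this approach, the script's cleanup trap should remove the whole directory rather than the individual files. A minimal sketch:

trap "rm -rf ${TMPDIR}" EXIT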
