SQL Comes to Nmap: Power and Convenience

When you're using Nmap to check the security of many hosts, put MySQL to work keeping track of trends and changes.

to receive a single line of output. (We added the order by clause for future use.) The open_ports column, when viewed along with the date/time and runid columns, sketches a rough trend of open ports over a period of time.

The targets table captures information on each target nmapsql encounters, one row per unique IP address. This is the only place where the hostname (if resolvable) and the OS guessed by Nmap are recorded. Let's find out what the table knows about our target:

mysql> select * from targets
    -> where ip = '192.168.10.44';

Notice that the os_guessed field now contains Linux Kernel 2.4.0-2.5.20 and the hostname column is set to ophelia.hasnains.com (I like Shakespeare's tragic heroines).

Now that we have all the bits and pieces, let's construct a single query that puts all the information about our target host in one place:

mysql> select r.runid, r.d, r.t, t.ip, t.host,
    -> t.os_guessed, p.port, p.protocol, p.service,
    -> p.state, p.fullversion from runlist r,
    -> targets t, portstat p
    -> where r.runid = 100 and p.target_ip = t.ip
    -> and p.runid = r.runid
    -> order by r.runid, r.d, r.t, t.ip;

We're not showing the output here for brevity, but you can try it on your own. A report writer could group the results by target; for fancier output, we need heavier artillery, such as PHP or Perl. One of the most useful reports identifies the change in open ports for each target. For instance, say our target has closed 111/TCP but opened 23/TCP. In that scenario, the open_ports column in hoststats would still show four ports even though the details have changed, but a custom program easily could pick out the differences to report.
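A rough sketch of such a diff in plain SQL might look like the following, where runids 100 and 101 are merely stand-ins for two consecutive runs of your choosing:

-- Hypothetical example: runids 100 and 101 stand for two consecutive runs.
-- List ports open for the target in the newer run but not in the older one;
-- swap the two runids to see ports that have since closed.
select p1.port, p1.protocol, p1.service
from portstat p1
left join portstat p2
    on  p2.runid     = 100
    and p2.target_ip = p1.target_ip
    and p2.port      = p1.port
    and p2.protocol  = p1.protocol
    and p2.state     = 'open'
where p1.runid = 101
  and p1.target_ip = '192.168.10.44'
  and p1.state = 'open'
  and p2.port is null;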

Useful Queries

The most common query would be to find out what ports are open for a given target, and that can be accomplished with:

mysql> select d, t, port, protocol, state,
    -> fullversion from portstat
    -> where target_ip =  '192.168.10.44'
    -> order by d,t,port;

Another common query is whether a given port was open on a target at some time in the past: “Did we have SSH open on 192.168.10.44 two weeks ago?” Assuming nmapsql runs routinely from crontab, the answer comes from the following query:

mysql> select d, t, target_ip, port, protocol,
    -> service, state, fullversion from portstat
    -> where port = 22 and protocol = "tcp"
    -> and state = "open"
    -> and d = date_sub( curdate(), interval 14 day)
    -> order by d, runid, target_ip;

Obviously, you could have more than one instance of nmapsql running on a given day, hence the order by clause. If you were using a third-party tool, such as PHP or Perl, to generate the result set, you could consult the runlist table to find the runid for the exact time frame you need and query for results of that runid for your target of choice.
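A rough sketch of that two-step approach, with a purely hypothetical date and runid, might look like this:

-- Hypothetical example: find the run(s) logged on a particular day.
select runid, d, t from runlist
where d = '2004-05-01';

-- Suppose the lookup above returned runid 97; plug it into portstat:
select target_ip, port, protocol, service, state, fullversion
from portstat
where runid = 97
  and target_ip = '192.168.10.44';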

Another useful query identifies the targets in a given network that have a given port open: “How many hosts in the 192.168.10/24 subnet have 80/TCP open?” The following query produces the answer:

mysql> select runid, d, t, target_ip, port,
    -> protocol, state from portstat
    -> where port = 80 and protocol = 'tcp'
    -> and state = 'open'
    -> and target_ip like '192.168.10.%';

Text matching doesn't quite lend itself to subnet matching, but you get the general idea.
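One possible refinement, assuming target_ip is stored as dotted-quad text, is to let MySQL's INET_ATON() do the subnet math instead of LIKE:

-- inet_aton() turns a dotted-quad string into a number, so a /24
-- can be matched as a numeric range rather than with a text pattern.
select runid, d, t, target_ip, port,
       protocol, state from portstat
where port = 80 and protocol = 'tcp'
  and state = 'open'
  and inet_aton(target_ip) between inet_aton('192.168.10.0')
                               and inet_aton('192.168.10.255');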

Using a Different Database

In many cases, such as when a consultant moves from one network to another, it's desirable to change the name of the database, perhaps to the customer's name, so that data from multiple sites doesn't get jumbled together. At the time of this writing, the way to do that is to update the db=nmaplog entry in the ~/nmapsql.rc file, which nmapsql reads to pick up database access information.

To change the database in use, replace nmaplog in ~/nmapsql.rc with the appropriate name, and make sure the user specified in ~/nmapsql.rc has permissions on that database. Then load the database schema into the new database. Assuming the new database is called newnmap, the following line would load the schema:


$ mysql newnmap < nmaplog.sql
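
Before that load will work, the new database has to exist and the user named in ~/nmapsql.rc needs rights on it. A minimal sketch, with nmapuser standing in for whatever account your ~/nmapsql.rc actually specifies, would be:

-- 'nmapuser' is a placeholder for the account listed in ~/nmapsql.rc.
create database newnmap;
grant all privileges on newnmap.* to 'nmapuser'@'localhost';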

I don't recommend using different databases, however. It's far easier to unload the data to a disk file and then load a blank schema into the nmaplog database. The following lines would accomplish this:


$ mysqldump nmaplog > newnmap.sql
$ mysql nmaplog < nmaplog.sql

Depending on how your database permissions are set up, you might have to specify the MySQL user name and password for the above commands to work.
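For example, assuming a hypothetical account named nmapuser, the commands become:

# nmapuser is a placeholder; substitute your own MySQL account.
$ mysqldump -u nmapuser -p nmaplog > newnmap.sql
$ mysql -u nmapuser -p nmaplog < nmaplog.sql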
