SQL Comes to Nmap: Power and Convenience
to receive a single line of output. (We added the order by clause for future use.) The open_ports column, when viewed along with the date/time and runid columns, sketches a rough trend of open ports over a period of time.
The targets table captures information on each target nmapsql encounters, one row per unique IP address. This is the only place where the hostname (if resolvable) and the OS guessed by Nmap are recorded. Let's find out what it knows about our target:
mysql> select * from targets
    -> where ip = '192.168.10.44';
Notice that the OS_guessed field now contains Linux Kernel 2.4.0-2.5.20 and the hostname column is set to ophelia.hasnains.com (I like Shakespeare's tragic heroines).
Now that we basically have all the bits and pieces, let's construct a single query to put all the information in one place for our target host:
mysql> select r.runid, r.d, r.t, t.ip, t.host,
    -> t.os_guessed, p.port, p.protocol, p.service,
    -> p.state, p.fullversion from runlist r,
    -> targets t, portstat p
    -> where r.runid = 100 and p.target_ip = t.ip
    -> and p.runid = r.runid
    -> order by r.runid, r.d, r.t, t.ip;
We're not showing the output here for brevity, but you can try it on your own. We could use a report writer to group the results by target. For fancier output, we need heavier artillery, such as PHP or Perl. One of the most useful reports identifies the change in open ports for each target. For instance, say our target has closed 111/TCP but opened 23/TCP. In that scenario, the open_ports column in hoststats still would show four ports even though the details have changed, but a custom program easily could pick out the difference(s) to report.
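Such a diff program can be short. Here's a minimal sketch of the idea: fetch each run's set of open ports for a target and compare the sets. The portstat column names follow the article's queries, but the sample data and the sqlite3 in-memory database (standing in for MySQL) are illustrative assumptions.

```python
# Sketch: diff the open ports of one target between two scan runs.
# Assumes a portstat table shaped like nmapsql's (runid, target_ip,
# port, protocol, state); sqlite3 stands in for MySQL here.
import sqlite3

def open_ports(cur, runid, ip):
    """Return the set of (port, protocol) pairs open for ip in a run."""
    cur.execute(
        "select port, protocol from portstat "
        "where runid = ? and target_ip = ? and state = 'open'",
        (runid, ip))
    return set(cur.fetchall())

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table portstat "
            "(runid int, target_ip text, port int, protocol text, state text)")
# Invented sample data: run 99 had 111/TCP open, run 100 has 23/TCP instead.
cur.executemany("insert into portstat values (?,?,?,?,?)", [
    (99,  "192.168.10.44", 111, "tcp", "open"),
    (99,  "192.168.10.44", 22,  "tcp", "open"),
    (100, "192.168.10.44", 23,  "tcp", "open"),
    (100, "192.168.10.44", 22,  "tcp", "open"),
])
old = open_ports(cur, 99,  "192.168.10.44")
new = open_ports(cur, 100, "192.168.10.44")
print("closed since last run:", sorted(old - new))  # [(111, 'tcp')]
print("opened since last run:", sorted(new - old))  # [(23, 'tcp')]
```

The same two-query-and-set-difference approach works unchanged against the real MySQL database with a driver such as Perl's DBI or PHP's mysqli.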
The most common query would be to find out what ports are open for a given target, and that can be accomplished with:
mysql> select d, t, port, protocol, state,
    -> fullversion from portstat
    -> where target_ip = '192.168.10.44'
    -> order by d, t, port;
Another common query is whether a given port was open on a target at some time in the past: "Did we have SSH open on 192.168.10.44 two weeks ago?" Assuming nmapsql runs routinely from crontab, the answer would be in the following query:
mysql> select d, t, target_ip, port, protocol,
    -> service, state, fullversion from portstat
    -> where port = 22 and protocol = 'tcp'
    -> and state = 'open'
    -> and d = date_sub( curdate(), interval 14 day)
    -> order by d, runid, target_ip;
Obviously, you could have more than one instance of nmapsql running on a given day, hence the order by clause. If you were using a third-party tool, such as PHP or Perl, to generate the result set, you could consult the runlist table to find the runid for the exact time frame you need and query for results of that runid for your target of choice.
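The two-step lookup described above, finding the runid(s) for a time frame in runlist and then pulling that run's rows from portstat, can be sketched as follows. Table and column names follow the article's queries; the dates and sample rows are invented, and sqlite3 stands in for MySQL.

```python
# Sketch: resolve a time frame to runid(s) via runlist, then query
# portstat for just those runs.  Schema follows the article; the
# sample data is illustrative only.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table runlist (runid int, d text, t text)")
cur.execute("create table portstat "
            "(runid int, target_ip text, port int, protocol text, state text)")
cur.execute("insert into runlist values (100, '2004-08-15', '02:00:00')")
cur.execute("insert into portstat values "
            "(100, '192.168.10.44', 22, 'tcp', 'open')")

# Step 1: which run(s) fall in the time frame of interest?
cur.execute("select runid from runlist "
            "where d between '2004-08-01' and '2004-08-31'")
runids = [row[0] for row in cur.fetchall()]

# Step 2: query portstat for those runs and the target of choice.
for runid in runids:
    cur.execute("select port, protocol, state from portstat "
                "where runid = ? and target_ip = '192.168.10.44'", (runid,))
    print(runid, cur.fetchall())
```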
Another useful query identifies the total number of targets in a given network with a given port open: "How many hosts in the 192.168.10/24 subnet have 80/TCP open?" The following query produces the desired list:
mysql> select runid, d, t, target_ip, port,
    -> protocol, state from portstat
    -> where port = 80 and protocol = 'tcp'
    -> and state = 'open'
    -> and target_ip like '192.168.10.%';
Text matching doesn't quite lend itself to subnet matching, but you get the general idea.
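Pattern matching on the dotted-quad string works only for subnets that fall on an octet boundary; comparing addresses numerically handles any prefix length. A minimal sketch using Python's standard ipaddress module (the target list here is invented):

```python
# Sketch: true subnet membership instead of text LIKE matching.
import ipaddress

net = ipaddress.ip_network("192.168.10.0/24")
targets = ["192.168.10.44", "192.168.11.7", "192.168.10.200"]
in_subnet = [ip for ip in targets if ipaddress.ip_address(ip) in net]
print(in_subnet)  # ['192.168.10.44', '192.168.10.200']
```

Within MySQL itself, the INET_ATON() function offers a similar numeric comparison, for example `where inet_aton(target_ip) between inet_aton('192.168.10.0') and inet_aton('192.168.10.255')`.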
In many cases, such as when a consultant goes from one network to another, it's desirable to be able to change the name of the database, perhaps to the customer's name, so that the data from multiple places doesn't get jumbled together. At the time of this writing, the way to do that is to update the db=nmaplog item in the ~/nmapsql.rc file, which nmapsql reads for database access information.
In order to change the database in use, replace nmaplog in ~/nmapsql.rc with the appropriate name, and then make sure the user specified in ~/nmapsql.rc has permissions on that database. Then, load the database schema into the new database. Assuming the new database is called newnmap, the following line would load the schema:
$ mysql newnmap < nmaplog.sql
I don't recommend using different databases, however. It's far easier to unload the data to a disk file and then load a blank schema into the nmaplog database. The following lines would accomplish this:
$ mysqldump nmaplog > newnmap.sql
$ mysql nmaplog < nmaplog.sql
Depending on how your database permissions are set up, you might have to specify the MySQL user name and password for the above commands to work.