SQL Comes to Nmap: Power and Convenience
nmapsql's usefulness is hard to appreciate when it is run infrequently against one or two targets; it really shines in large environments with multiple subnets and dozens of targets. The simplest deployment, of course, is where nmapsql and the MySQL server reside on the same host, such as a laptop a consultant carries from network to network. Because most networks are firewalled and use RFC 1918 addressing, duplicate IP addresses in the targets table are very likely when a single laptop roves between environments. In these cases, you should unload the data and use a fresh database for each new environment.
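If you do start fresh for each engagement, a quick way to preserve the old results is to copy them into a per-client table before clearing the live one. The snippet below is only a sketch: the targets table comes from nmapsql, but the archive table name is an assumption, and your installation may have additional tables that need the same treatment.

    -- Archive the current results under a per-client name
    -- (targets_clientA is hypothetical), then empty the live
    -- table before scanning the next environment.
    CREATE TABLE targets_clientA AS SELECT * FROM targets;
    TRUNCATE TABLE targets;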
nmapsql lends itself to other deployment scenarios: mid-sized environments where multiple scanners on different subnets log back to a single MySQL server, and large environments where multiple self-contained systems (MySQL and nmapsql on the same box) do their local scanning and logging. In both of these environments, duplicate RFC 1918 addresses are unlikely. However, because of the lag between scanning/logging locally and collecting the results on the central server, the central data isn't real time. These are two situations where the scanner ID is useful for separating data.
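For the central-server scenario, a couple of queries illustrate how the scanner ID keeps overlapping address space sorted out. This is only a sketch that assumes the targets table carries scanner_id and ip columns; check the actual nmapsql schema before relying on these names.

    -- Results reported by one particular scanner (ID 3 is arbitrary).
    SELECT * FROM targets WHERE scanner_id = 3;

    -- Addresses reported by more than one scanner, which in a
    -- multi-site setup usually means duplicated RFC 1918 space
    -- rather than the same physical host.
    SELECT ip, COUNT(DISTINCT scanner_id) AS scanners
      FROM targets
     GROUP BY ip
    HAVING scanners > 1;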
Security practitioners—and I must admit, some black hats—appreciate nmapsql's functionality, as it fills a real need. The project's immediate goals are to allow users to set nmapsql-specific options from inside nmapfe, the Nmap front end, and to build a reporting front end in PHP so end users do not have to enter queries manually in MySQL. Both of these are currently under development.
Looking further, there are plans to integrate the results of Nessus vulnerability scans into the same database, creating a single console for port scan and vulnerability assessment results. Toward that goal, nmapsql's Web site currently offers a simple parser that loads result files created by the Nessus client.
Hasnain Atique (firstname.lastname@example.org) lives in sunny Singapore with his wife and three-year-old daughter. When he's not watching Harry Potter with his daughter, he tries to be the lord of the pings and occasionally succeeds.