SQL Comes to Nmap: Power and Convenience
nmapsql's usefulness is hard to appreciate when it's run infrequently on one or two targets. It's in large environments, with multiple subnets and dozens of targets, that nmapsql really shines. The simplest deployment, of course, is where nmapsql and the MySQL server reside on the same host, such as a laptop a consultant carries from network to network. Because most networks are firewalled and use RFC 1918 addressing, duplicate IP addresses in the targets table are a real possibility when a single laptop roves between environments. In these cases, you should unload the data and start with a fresh database for each new environment.
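One way to do that is with the standard MySQL command-line tools. The following is only a sketch: the database name used here (nmaplog) is an assumption, so substitute whatever database your nmapsql installation actually logs to, and re-create the nmapsql tables afterward from the schema that ships with the project.

    mysqldump nmaplog > clientA-nmaplog.sql   # archive the previous environment's results
    mysqladmin drop nmaplog                   # remove the old database
    mysqladmin create nmaplog                 # start the new engagement with an empty one
    # then re-create the nmapsql tables, e.g. from the project's schema script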
nmapsql lends itself to other deployment scenarios as well: mid-sized environments, where multiple scanners on different subnets log back to a single MySQL server, and large environments, where multiple self-contained systems (MySQL and nmapsql on the same box) do their local scanning and logging. In both of these environments, duplicate RFC 1918 addresses are unlikely. However, because of the lag between scanning and logging locally and collecting the results on the central server, the data isn't real time. These are two situations where the scanner ID is useful for separating data.
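Once the data is in one place, pulling a single scanner's results is a simple query. This is only a sketch: the scanner_id column name and the nmaplog database name are assumptions, so adjust them to match your nmapsql schema.

    # list every target logged by scanner number 2
    mysql -e "SELECT * FROM targets WHERE scanner_id = 2;" nmaplog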
Security practitioners—and I must admit, some black hats—appreciate nmapsql's functionality, as it fulfills a great need. The project's immediate goals are to allow users to set nmapsql-specific options from inside nmapfe, the Nmap front end, and to build a reporting front end with PHP so end users do not have to enter queries manually in MySQL. Both of these currently are under development.
Looking further ahead, there are plans to integrate the results of Nessus vulnerability scans into the same database, creating a single console for port scan and vulnerability assessment results. Toward that goal, nmapsql's Web site currently offers a simple parser that loads result files created by the Nessus client.
Hasnain Atique (firstname.lastname@example.org) lives in sunny Singapore with his wife and three-year-old daughter. When he's not watching Harry Potter with his daughter, he tries to be the lord of the pings and occasionally succeeds.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality is why UNIX system administrators always seem to have the right tool for the job.
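That .log example takes only one line of shell; the path and search string here are just placeholders for illustration:

    # find every .log file under /home and print matching lines with their filenames
    find /home -name '*.log' -exec grep -H 'connection refused' {} +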
Cron has traditionally been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Doing for User Space What We Did for Kernel Space
- Tech Tip: Really Simple HTTP Server with Python
- SuperTuxKart 0.9.2 Released
- Parsing an RSS News Feed with a Bash Script
- Rogue Wave Software's Zend Server
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide