Publisher No Starch Press touts Justin Seitz's new book Gray Hat Python as “the first Python book written for security analysts”. Subtitled “Python Programming for Hackers and Reverse Engineers”, the book explains the intricacies of using Python to assist in security analysis tasks, teaching readers how to design debuggers, create powerful fuzzers, utilize open-source libraries to automate tedious tasks, interface with security tools and more. Gray Hat Python, says No Starch, covers everything from the nuts and bolts of how to use the language for basic code and DLL injection to using Python to analyze binaries and disassemble software. More than anything, however, the book reveals how superior the Python language is when it comes to hacking, reverse engineering, malware analysis and software testing.
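To give a flavor of the kind of task the book covers, a bare-bones mutation fuzzer fits in a few lines of standard-library Python. This sketch is illustrative only, not taken from the book; the function name, seed input and flip rate are all arbitrary choices:

```python
import random

def mutate(data, flip_rate=0.01):
    """Return a copy of data with roughly flip_rate of its bytes
    replaced by random values -- the core of a mutation fuzzer."""
    out = bytearray(data)
    for i in range(len(out)):
        if random.random() < flip_rate:
            out[i] = random.randrange(256)
    return bytes(out)

# Feed mutated variants of a known-good input to the target parser
seed = b"GET /index.html HTTP/1.0\r\n\r\n"
testcase = mutate(seed)
```

A real fuzzer would loop this, deliver each test case to the target program and watch for crashes, but the mutation step above is the essential idea.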
The gist behind Black Duck Software's new Black Duck Suite is to give development organizations a comprehensive management platform for taking advantage of open-source components while addressing the associated management, compliance and security challenges. Black Duck says that its new product brings “new levels of automation and efficiency” to these tasks and “enables developers to focus on creating innovative business value instead of 're-inventing the wheel'”. The Black Duck Suite is a unified framework of the company's Code Center, Export and Protex enterprise products, plus an SDK with a Web services API that integrates with other tools and environments. Key product features include a searchable internal catalog, a customizable approval workflow and a comprehensive KnowledgeBase of open-source information.
Rounding out the trio of memorable color + object company names is BlueStripe Software, which recently released version 2.0 of FactFinder, a platform for staging, deploying and managing business-critical applications. Now available for Red Hat Enterprise Linux, FactFinder provides “unsurpassed intelligence into the performance and behavior” of applications, allowing users to understand their structure and relationships to one another, manage them efficiently, identify performance issues and perform triage to resolve them. Key new features include automatic discovery and mapping, health and performance measurement, and service-level-driven triage.
Please send information about releases of Linux-related products to email@example.com or New Products c/o Linux Journal, PO Box 980985, Houston, TX 77098. Submissions are edited for length and content.
James Gray is Products Editor for Linux Journal.