Creating Software-backed iSCSI Targets in Red Hat Enterprise Linux 6
Studying for certification exams can be an adventure — even more so when the certification exam is a hands-on, performance-based exam. The quandary most people I know fall into is that, to study effectively for such an exam, you need access to a lab environment with elements that may be beyond the reach of the average Linux enthusiast. One such element is iSCSI.
A good example of this comes from the Red Hat website, which mentions on their Red Hat Certified Engineer (RHCE) exam objectives page:
Unless I'm reading too much into this statement (and I don't think I am...), I interpret this objective as stating that I need to know how to configure the iSCSI Initiator side and not the iSCSI Target side. Keep in mind that the Initiator is the host that is accessing the storage device (like a client), as opposed to the Target, which is the host that is sharing out an iSCSI device. The Target could be a complicated network-attached storage device, or something as simple as a Linux host.
Therefore, for our lab setup to be useful, we will need access to a host with a configured iSCSI Target. Let's take a look at one way we can make this happen.
In my lab setup, I have a host named server1.example.com, with an IP address of 192.168.1.10, which runs Red Hat Enterprise Linux Server v6.0 (x86_64). This will be my iSCSI Target. A second host, named client1.example.com (192.168.1.11), which is also running RHEL 6, will be the iSCSI Initiator.
I will use the Open-iSCSI project software (a high-performance, transport-independent, multi-platform implementation of RFC 3720 iSCSI) included with RHEL 6.
Creating the iSCSI Target
First, you will need to create a software-backed iSCSI target. Install the scsi-target-utils package and its dependencies:
From the RHEL 6 DVD, I installed the following packages on server1 using the yum localinstall command:
Also from the RHEL 6 DVD, I installed the following packages on client1 using the yum localinstall command:
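The full package lists aren't reproduced here, but assuming the standard RHEL 6 package names (scsi-target-utils on the Target, iscsi-initiator-utils on the Initiator), the installs look roughly like this — the media path and version wildcards are illustrative, so adjust them for your setup:

```shell
# On server1 (the Target): the target daemon and tools
# (path is illustrative; point it at your mounted RHEL 6 DVD)
yum localinstall /media/dvd/Packages/scsi-target-utils-*.el6.x86_64.rpm

# On client1 (the Initiator): the Open-iSCSI initiator tools
yum localinstall /media/dvd/Packages/iscsi-initiator-utils-*.el6.x86_64.rpm
```

If your hosts have access to a configured yum repository, a plain `yum install` of the same package names works too.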
Start the tgtd service
The tgtd service hosts SCSI targets and uses the iSCSI protocol to enable communication between targets and initiators. Start the tgtd service and make it persistent across reboots with the chkconfig command:
# service tgtd start
# chkconfig tgtd on
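As a quick sanity check (my own addition, not part of the original steps), you can confirm that the daemon came up and is listening on the default iSCSI port, TCP 3260:

```shell
# Verify the service is running
service tgtd status

# tgtd should be listening on TCP port 3260
netstat -tlnp | grep 3260
```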
Now we will need some type of storage device to use as a target. Any accessible storage device will do, but to keep things simple, here are two options we can use:
Option #1: Create an LVM volume
LVM volumes are useful for iSCSI backing images. LVM snapshots and resizing can be very handy features for managing iSCSI storage. In the following example, /dev/md1 is tagged as a physical volume. A volume group named virtstore is created using /dev/md1. Finally, we create a logical volume named virtimage1, using 20G of space from the virtstore volume group:
# pvcreate /dev/md1
# vgcreate virtstore /dev/md1
# lvcreate -L 20G -n virtimage1 virtstore
Option #2: Create file-based images
File-based storage is sufficient for testing or experimentation, but is not recommended for production environments or any significant I/O activity. This optional procedure creates a file-based image, named virtimage2.img, for an iSCSI target.
Create a new directory to store the image file:
# mkdir -p /var/lib/tgtd/virtualization
Create an image named virtimage2.img with a size of 10G.
# dd if=/dev/zero of=/var/lib/tgtd/virtualization/virtimage2.img bs=1M seek=10000 count=0
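The seek=10000 count=0 trick writes no data at all: dd simply seeks 10,000 MiB past the start of the file and truncates it there, producing a sparse file. You can verify that the apparent size and the actual disk usage differ:

```shell
# Create the sparse image (writes no blocks; ~10G apparent size)
dd if=/dev/zero of=virtimage2.img bs=1M seek=10000 count=0

# Apparent size is ~10G...
ls -lh virtimage2.img

# ...but almost no disk space is actually allocated
du -h virtimage2.img
```

This means the image only consumes real disk space as the initiator writes data to it.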
Configure the correct SELinux context for the new image and directory.
# restorecon -R /var/lib/tgtd
Create the targets
Targets can be created by adding an XML-style entry to the /etc/tgt/targets.conf file, using your favorite text editor. The target attribute requires an iSCSI Qualified Name (IQN), in the format:

iqn.yyyy-mm.reversed.domain.name:OptionalIdentifierText

where:

- yyyy-mm represents the 4-digit year and 2-digit month the device was started (for example, 2011-07);
- reversed.domain.name is the host's domain name in reverse. For example, server1.example.com, in an IQN, becomes com.example.server1; and
- OptionalIdentifierText is any text string, without spaces, that helps the administrator identify which device is which.
This example creates iSCSI targets for both types of images created in the optional steps on server1.example.com, with an optional identifier of trial. Add the following to the /etc/tgt/targets.conf file:

<target iqn.2011-07.com.example.server1:trial>
    backing-store /dev/virtstore/virtimage1                     # LUN 1
    backing-store /var/lib/tgtd/virtualization/virtimage2.img   # LUN 2
</target>
LUN 0 will appear as a device of type "controller".
Ensure that the /etc/tgt/targets.conf file contains the default-driver iscsi line to set the driver type as iSCSI (the driver uses iSCSI by default, but make sure that line is not commented out).
Be aware that this example creates a globally accessible target without access control. This is fine for a lab environment, but not for a production environment! Refer to the scsi-target-utils documentation and example files for information on implementing secure access.
Now we need to restart the tgtd service to reload the configuration changes:
# service tgtd restart
If your host is using iptables, open port 3260 for iSCSI access:
# iptables -I INPUT -p tcp -m tcp --dport 3260 -j ACCEPT
# service iptables save
# service iptables restart
Verify the new iSCSI Targets
View the new targets with the tgt-admin command to ensure the setup was successful:
# tgt-admin --show
Target 1: iqn.2011-07.com.example.server1:trial
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB
            Removable media: No
            Backing store type: rdwr
            Backing store path: None
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 20000 MB
            Removable media: No
            Backing store type: rdwr
            Backing store path: /dev/virtstore/virtimage1
        LUN: 2
            Type: disk
            SCSI ID: IET 00010002
            SCSI SN: beaf12
            Size: 10000 MB
            Removable media: No
            Backing store type: rdwr
            Backing store path: /var/lib/tgtd/virtualization/virtimage2.img
    Account information:
    ACL information:
        ALL
Notice that the ACL list is set to ALL, which allows all systems on the local network to access this device. I recommend always setting host-access ACLs for production environments. For our lab, however, not having an ACL is fine.
Testing the targets
Now we can test whether the new iSCSI device is discoverable from client1.example.com:
# service iscsid start
# chkconfig iscsid on
# iscsiadm --mode discovery --type sendtargets --portal 192.168.1.10
So far so good! You can view more information about your iSCSI Target with the following command:
# iscsiadm -m node -T iqn.2011-07.com.example.server1:trial -p 192.168.1.10
Now we can log into the iSCSI target:
# iscsiadm -m node -T iqn.2011-07.com.example.server1:trial -p 192.168.1.10 -l
Logging in to [iface: default, target: iqn.2011-07.com.example.server1:trial, portal: 192.168.1.10,3260]
Login to [iface: default, target: iqn.2011-07.com.example.server1:trial, portal: 192.168.1.10,3260] successful.
Success! The target is now persistent: it has been added to a node database in /var/lib/iscsi, which the iscsid service reads at system startup to re-attach your iSCSI Targets after reboots.
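Once logged in, the target's LUNs appear to client1 as ordinary local SCSI disks. Device names vary from system to system, so /dev/sdb below is only an example — check the kernel log or the partition table to find the new devices before formatting anything:

```shell
# Identify the newly attached disks (names will vary by system)
cat /proc/partitions
dmesg | grep -i "scsi"

# Example only: format and mount one of the new LUNs,
# assuming /dev/sdb is the 20G iSCSI-backed disk
mkfs.ext4 /dev/sdb
mkdir -p /mnt/iscsi
mount /dev/sdb /mnt/iscsi
```

From here the iSCSI disk behaves like any other block device, so you can also partition it or put it under LVM control.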
To disconnect the iSCSI Target, you will need to log out:
# iscsiadm -m node -T iqn.2011-07.com.example.server1:trial -p 192.168.1.10 -u
Logging out of session [sid: 2, target: iqn.2011-07.com.example.server1:trial, portal: 192.168.1.10,3260]
Logout of [sid: 2, target: iqn.2011-07.com.example.server1:trial, portal: 192.168.1.10,3260] successful.
You could also log out of all attached targets by using this command:
# iscsiadm -m node -U all
For more information, refer to the man pages or the documentation in /usr/share/doc. We now have a working iSCSI environment for our lab.
There are other features of this setup that I haven't even touched, such as access control and iSNS, but those are beyond the scope of what appears to be required for RHCE-level certification.
Since CentOS 6.0 has been released, and is 100% binary-compatible with what its developers refer to as "TUV" or "The Upstream Vendor", I am willing to speculate that these instructions will work just fine on CentOS. But don't take my word for it: install CentOS 6 on a test machine (or test virtual machine) and try it yourself!
Pete Vargas Mas is an avid indoorsman and a Linux consultant. Pete is an RHCE and an MCSA, which so far has not caused any eddies in the space-time continuum. He spends most of his time these days herding hundreds of Linux servers.