Creating Software-backed iSCSI Targets in Red Hat Enterprise Linux 6
Studying for certification exams can be an adventure, even more so when the exam is hands-on and performance-based. The quandary most people I know fall into is that to study effectively for such an exam, you need access to a lab environment with elements that may be beyond the reach of the average Linux enthusiast. One such element is iSCSI.
A good example of this comes from the Red Hat website, which mentions on their Red Hat Certified Engineer (RHCE) exam objectives page:
Configure a system as an iSCSI initiator that persistently mounts an iSCSI target
Unless I'm reading too much into this statement (and I don't think I am...), I interpret this objective as stating that I need to know how to configure the iSCSI Initiator side and not the iSCSI Target side. Keep in mind that the Initiator is the host that accesses the storage device (like a client), as opposed to the Target, which is the host that shares out an iSCSI device. The Target could be a complicated network-attached storage device, or something as simple as a Linux host.
Therefore, for our lab setup to be useful, we will need access to a host with a configured iSCSI Target. Let's take a look at one way we can make this happen.
The Environment
In my lab setup, I have a host named server1.example.com, with an IP address of 192.168.1.10, which runs Red Hat Enterprise Linux Server v6.0 (x86_64). This will be my iSCSI Target. A second host, named client1.example.com (192.168.1.11), which is also running RHEL 6, will be the iSCSI Initiator.
I will use the Open-iSCSI project software (a high-performance, transport-independent, multi-platform implementation of RFC 3720 iSCSI) included with RHEL 6.
Creating the iSCSI Target
First, you will need to create a software-backed iSCSI target. On the Target host, install the scsi-target-utils package and its dependencies.
From the RHEL 6 DVD, I installed the following packages on server1 using the yum localinstall command:
Packages/scsi-target-utils-1.0.4-3.el6.x86_64.rpm
Packages/perl-Config-General-2.44-1.el6.noarch.rpm
Also from the RHEL 6 DVD, I installed the following package on client1 using the yum localinstall command:
Packages/iscsi-initiator-utils-6.2.0.872-10.el6.x86_64.rpm
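This step isn't strictly necessary, but a quick rpm query is an easy way to confirm that both packages landed where they should. On server1:
# rpm -q scsi-target-utils
And on client1:
# rpm -q iscsi-initiator-utils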
Start the tgtd service
The tgtd service hosts SCSI targets and uses the iSCSI protocol to enable communications between targets and initiators. Start the tgtd service and make the service persistent across reboots with the chkconfig command:
# service tgtd start
# chkconfig tgtd on
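If you want to confirm that the daemon is running and will come back after a reboot (optional, but a good habit to build before an exam), check the service status and its runlevel configuration:
# service tgtd status
# chkconfig --list tgtd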
Now we will need some type of storage device to use as a target. Any accessible storage device will do, but to keep things simple, here are two options we can use:
Option #1: Create an LVM volume
LVM volumes are useful for iSCSI backing images. LVM snapshots and resizing can be very handy features for managing iSCSI storage. In the following example, /dev/md1 is tagged as a physical volume. A volume group named virtstore is created using /dev/md1. Finally, we create a logical volume named virtimage1, using 20G of space from the virtstore volume group:
# pvcreate /dev/md1
# vgcreate virtstore /dev/md1
# lvcreate -L 20G -n virtimage1 virtstore
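To verify the new logical volume (optional), list the virtstore volume group with lvs; you should see virtimage1 at 20G:
# lvs virtstore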
Option #2: Create file-based images
File-based storage is sufficient for testing or experimentation, but it is not recommended for production environments or any significant I/O activity. This optional procedure creates a file-based image, named virtimage2.img, for an iSCSI target.
Create a new directory to store the image file:
# mkdir -p /var/lib/tgtd/virtualization
Create a sparse image named virtimage2.img with a size of 10G:
# dd if=/dev/zero of=/var/lib/tgtd/virtualization/virtimage2.img bs=1M seek=10000 count=0
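Because count=0 with a large seek value creates a sparse file, the image takes up almost no real disk space until data is written to it. You can compare the apparent size with the actual allocation using ls (the -s flag shows allocated blocks):
# ls -lhs /var/lib/tgtd/virtualization/virtimage2.img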
Configure the correct SELinux context for the new image and directory.
# restorecon -R /var/lib/tgtd
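If you want to double-check the result, list the directory and image with the -Z flag and confirm that the image carries the same SELinux type as its parent directory:
# ls -ldZ /var/lib/tgtd/virtualization
# ls -Z /var/lib/tgtd/virtualization/virtimage2.img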
Create the targets
Targets can be created by adding an XML-style entry to the /etc/tgt/targets.conf file, using your favorite text editor. The target attribute requires an iSCSI Qualified Name (IQN), in the format:
iqn.yyyy-mm.reversed.domain.name:OptionalIdentifierText
where:
yyyy-mm represents a 4-digit year and 2-digit month during which your naming authority owned the domain name used in the IQN (for example: 2011-07);
reversed.domain.name is the host's domain name in reverse. For example, server1.example.com, in an IQN, becomes com.example.server1; and
OptionalIdentifierText is any text string, without spaces, that helps the administrator identify which device is which.
This example creates a single iSCSI target on server1.example.com that exposes both of the backing images created in the options above, using the optional identifier trial. Add the following to the /etc/tgt/targets.conf file:
<target iqn.2011-07.com.example.server1:trial>
    backing-store /dev/virtstore/virtimage1                    # LUN 1
    backing-store /var/lib/tgtd/virtualization/virtimage2.img  # LUN 2
    write-cache off
</target>
LUN 0 will appear as a device of type "controller".
Ensure that the /etc/tgt/targets.conf file contains the default-driver iscsi line to set the driver type as iSCSI (the driver uses iSCSI by default, but make sure that line is not commented out).
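A quick grep will tell you whether that line is active in your copy of the file (this assumes the stock configuration file shipped with scsi-target-utils):
# grep "^default-driver" /etc/tgt/targets.conf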
Be aware that this example creates a globally accessible target without access control. This is fine for a lab environment, but not for a production environment! Refer to the scsi-target-utils documentation and example files for information on implementing secure access.
Now we need to restart the tgtd service to reload the configuration changes:
# service tgtd restart
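As an alternative to a full restart, the tgt-admin utility can re-read targets.conf and apply the changes for you; I'm offering this only as a convenience, so check the tgt-admin man page on your system before relying on it:
# tgt-admin --update ALL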
IPTables Configuration
If your host is using iptables, open port 3260 for iSCSI access:
# iptables -I INPUT -p tcp -m tcp --dport 3260 -j ACCEPT
# service iptables save
# service iptables restart
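To confirm that the firewall rule is in place and that tgtd is actually listening on port 3260 (optional):
# iptables -L INPUT -n | grep 3260
# netstat -tln | grep 3260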
Verify the new iSCSI Targets
View the new targets to ensure the setup was successful by using the tgt-admin command:
# tgt-admin --show
Target 1: iqn.2011-07.com.example.server1:trial
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: None
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 20000 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: /dev/virtstore/virtimage1
        LUN: 2
            Type: disk
            SCSI ID: IET 00010002
            SCSI SN: beaf12
            Size: 10000 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: /var/lib/tgtd/virtualization/virtimage2.img
    Account information:
    ACL information:
        ALL
Notice that the ACL list is set to ALL. This allows any system that can reach the target to access this device. I recommend you always set host access ACLs for production environments. However, for our lab, not having an ACL is fine.
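If you do want to lock the target down later, targets.conf supports per-target restrictions. As a minimal sketch (192.168.1.11 is just our lab initiator; adjust the address for your own network), you could limit access to a single initiator like this:
<target iqn.2011-07.com.example.server1:trial>
    backing-store /dev/virtstore/virtimage1
    backing-store /var/lib/tgtd/virtualization/virtimage2.img
    write-cache off
    initiator-address 192.168.1.11
</target>
Remember to restart tgtd (or run tgt-admin --update) after any change to targets.conf.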
Testing the targets
Now we can test whether the new iSCSI device is discoverable from client1.example.com:
# service iscsid start
# chkconfig iscsid on
# iscsiadm --mode discovery --type sendtargets --portal 192.168.1.10
192.168.1.10:3260,1 iqn.2011-07.com.example.server1:trial
#
So far so good! You can view more information about your iSCSI Target with the following command:
# iscsiadm -m node -T iqn.2011-07.com.example.server1:trial -p 192.168.1.10
Now we can log into the iSCSI target:
# iscsiadm -m node -T iqn.2011-07.com.example.server1:trial -p 192.168.1.10 -l
Logging in to [iface: default, target: iqn.2011-07.com.example.server1:trial, portal: 192.168.1.10,3260]
Login to [iface: default, target: iqn.2011-07.com.example.server1:trial, portal: 192.168.1.10,3260] successful.
#
Success! The target is now set up to be accessed across reboots (persistently), and it has been added to the node database in /var/lib/iscsi. The iscsid service consults this database at system startup and re-attaches your iSCSI Targets.
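At this point the LUNs should show up on client1 as ordinary local SCSI disks. You can confirm this with an active-session listing and a quick look at the partition table (the device names, such as sdb and sdc, will vary depending on your hardware):
# iscsiadm -m session
# cat /proc/partitions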
To disconnect the iSCSI Target, you will need to log out:
# iscsiadm -m node -T iqn.2011-07.com.example.server1:trial -p 192.168.1.10 -u
Logging out of session [sid: 2, target: iqn.2011-07.com.example.server1:trial, portal: 192.168.1.10,3260]
Logout of [sid: 2, target: iqn.2011-07.com.example.server1:trial, portal: 192.168.1.10,3260] successful.
You could also log out of all attached targets by using this command:
# iscsiadm -m node -U all
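If you also want to remove the persistent record so the target is no longer re-attached at boot, you can delete the node entry from the /var/lib/iscsi database. Treat this as an optional cleanup step and double-check the iscsiadm man page before relying on it:
# iscsiadm -m node -T iqn.2011-07.com.example.server1:trial -p 192.168.1.10 -o delete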
For more information, refer to the man pages or the documentation in /usr/share/doc. We now have a working iSCSI lab environment.
There are other features of this setup that I haven't even touched, such as access control and iSNS, but those are beyond the scope of what appears to be required for RHCE-level certification.
Since CentOS 6.0 has been released, and is 100% binary compatible with what they refer to as "TUV" or "The Upstream Vendor", I am willing to speculate that these instructions will work just fine on CentOS. But don't take my word for it: install CentOS 6 on your test machine (or test virtual machine) and try it yourself!