Creating Software-backed iSCSI Targets in Red Hat Enterprise Linux 6

Studying for certification exams can be an adventure, even more so when the exam is a hands-on, performance-based one. The quandary most people I know fall into is that to study effectively for such an exam, you need access to a lab environment with elements that may be beyond the scope of the average Linux enthusiast. One such element is iSCSI.

A good example of this comes from the Red Hat website, which mentions on their Red Hat Certified Engineer (RHCE) exam objectives page:

Configure a system as an iSCSI initiator that persistently mounts an iSCSI target

Unless I'm reading too much into this statement (and I don't think I am...), I interpret this objective as stating that I need to know how to configure the iSCSI Initiator side and not the iSCSI Target side. Keep in mind that the Initiator is the host that is accessing the storage device (like a client), as opposed to the Target, which is the host that is sharing out an iSCSI device. The Target could be a complicated network-attached storage device, or something as simple as a Linux host.

Therefore, for our lab setup to be useful, we will need access to a host with a configured iSCSI Target. Let's take a look at one way we can make this happen.

The Environment

In my lab setup, I have a host named server1.example.com, with an IP address of 192.168.1.10, running Red Hat Enterprise Linux Server 6.0 (x86_64). This will be my iSCSI Target. A second host, named client1.example.com (192.168.1.11), also running RHEL 6, will be the iSCSI initiator.

I will use the Open-iSCSI project software (a high-performance, transport-independent, multi-platform implementation of RFC 3720 iSCSI) included with RHEL 6.

Creating the iSCSI Target

First, install the necessary packages. The Target host needs the scsi-target-utils package and its dependencies; the initiator host needs iscsi-initiator-utils.

From the RHEL 6 DVD, I installed the following packages on server1 using the yum localinstall command:

   Packages/scsi-target-utils-1.0.4-3.el6.x86_64.rpm
   Packages/perl-Config-General-2.44-1.el6.noarch.rpm

Also from the RHEL 6 DVD, I installed the following package on client1 using the yum localinstall command:

   Packages/iscsi-initiator-utils-6.2.0.872-10.el6.x86_64.rpm
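
If you are installing straight from the DVD, the commands look roughly like this (assuming the disc is mounted at /mnt/dvd, which is my placeholder; substitute your own mount point). On server1:

# yum localinstall /mnt/dvd/Packages/scsi-target-utils-1.0.4-3.el6.x86_64.rpm \
      /mnt/dvd/Packages/perl-Config-General-2.44-1.el6.noarch.rpm

And on client1:

# yum localinstall /mnt/dvd/Packages/iscsi-initiator-utils-6.2.0.872-10.el6.x86_64.rpm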

Start the tgtd service

The tgtd service hosts SCSI targets and uses the iSCSI protocol to enable communication between targets and initiators. Start the tgtd service and make it persistent across reboots with the chkconfig command:

# service tgtd start
# chkconfig tgtd on

Now we will need some type of storage to back the target. Any accessible storage device will do, but to keep things simple, here are two options we can use:

Option #1: Create an LVM volume

LVM volumes are useful for iSCSI backing images. LVM snapshots and resizing can be very handy features for managing iSCSI storage. In the following example, /dev/md1 is tagged as a physical volume. A volume group named virtstore is created using /dev/md1. Finally, we create a logical volume named virtimage1, using 20G of space from the virtstore volume group:

# pvcreate /dev/md1
# vgcreate virtstore /dev/md1
# lvcreate -L 20G -n virtimage1 virtstore
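
To confirm that the logical volume exists, run lvs (or lvdisplay); the device path we will hand to tgtd later is /dev/virtstore/virtimage1:

# lvs virtstore
# lvdisplay /dev/virtstore/virtimage1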

Option #2: Create file-based images

File-based storage is sufficient for testing or experimentation, but it is not recommended for production environments or any significant I/O activity. This procedure creates a file-based image, named virtimage2.img, for an iSCSI target.

Create a new directory to store the image file:

# mkdir -p /var/lib/tgtd/virtualization

Create an image named virtimage2.img with a size of 10G.

# dd if=/dev/zero of=/var/lib/tgtd/virtualization/virtimage2.img bs=1M seek=10000 count=0
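
Because count=0 is combined with seek=10000, dd creates a sparse file: it has an apparent size of roughly 10 GB but consumes almost no disk space until the initiator actually writes data. You can compare the apparent size with the space actually allocated:

# ls -lh /var/lib/tgtd/virtualization/virtimage2.img
# du -h /var/lib/tgtd/virtualization/virtimage2.img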

Configure the correct SELinux context for the new image and directory.

# restorecon -R /var/lib/tgtd

Create the targets

Targets are created by adding an XML-style entry to the /etc/tgt/targets.conf file, using your favorite text editor. The target entry requires an iSCSI Qualified Name (IQN), in the format:

   iqn.yyyy-mm.reversed.domain.name:OptionalIdentifierText

where:

   yyyy-mm is the 4-digit year and 2-digit month; strictly speaking this should be the date your naming authority registered its domain, but in practice the month the target was set up is often used (for example: 2011-07);

   reversed.domain.name is the host's domain name in reverse. For example, server1.example.com, in an IQN, becomes com.example.server1; and

   OptionalIdentifierText is any text string, without spaces, that helps the administrator identify which device is which.

This example creates a single iSCSI target on server1.example.com that exposes both backing stores created above, with the optional identifier trial. Add the following to the /etc/tgt/targets.conf file:

<target iqn.2011-07.com.example.server1:trial>
    backing-store /dev/virtstore/virtimage1                     # LUN 1
    backing-store /var/lib/tgtd/virtualization/virtimage2.img   # LUN 2
    write-cache off
</target>

Note that tgtd always presents LUN 0 as a device of type "controller"; the two backing stores above become LUN 1 and LUN 2.

Ensure that the /etc/tgt/targets.conf file contains the default-driver iscsi line to set the driver type as iSCSI (the driver uses iSCSI by default, but make sure that line is not commented out).
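
In other words, the top of the file should include an uncommented line that reads:

   default-driver iscsi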

Be aware that this example creates a globally accessible target without access control. This is fine for a lab environment, but not for a production environment! Refer to the scsi-target-utils documentation and example files for information on implementing secure access.
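
If you do want to lock a target down later, tgtd supports per-target restrictions in the same file. As a rough sketch only (192.168.1.11 is client1 from our lab; the CHAP username and password are made-up examples):

<target iqn.2011-07.com.example.server1:trial>
    backing-store /dev/virtstore/virtimage1
    # allow only our lab initiator to connect
    initiator-address 192.168.1.11
    # require CHAP authentication from initiators (example credentials)
    incominguser iscsiuser SomeSecretPassword
    write-cache off
</target>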

Now we need to restart the tgtd service to reload the configuration changes:

# service tgtd restart

IPTables Configuration

If your host is using iptables, open port 3260 for iSCSI access:

# iptables -I INPUT -p tcp -m tcp --dport 3260 -j ACCEPT
# service iptables save
# service iptables restart
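
As a quick sanity check, confirm that the rule is in place and that tgtd is listening on TCP port 3260:

# iptables -L INPUT -n | grep 3260
# netstat -tlnp | grep 3260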

Verify the new iSCSI Targets

View the new targets to ensure the setup was successful, by using the tgt-admin command:

# tgt-admin --show

  Target 1: iqn.2011-07.com.example.server1:trial
      System information:
          Driver: iscsi
          State: ready
      I_T nexus information:
      LUN information:
          LUN: 0
              Type: controller
              SCSI ID: IET     00010000
              SCSI SN: beaf10
              Size: 0 MB
              Online: Yes
              Removable media: No
              Backing store type: rdwr
              Backing store path: None
          LUN: 1
              Type: disk
              SCSI ID: IET     00010001
              SCSI SN: beaf11
              Size: 20000 MB
              Online: Yes
              Removable media: No
              Backing store type: rdwr
              Backing store path: /dev/virtstore/virtimage1
          LUN: 2
              Type: disk
              SCSI ID: IET     00010002
              SCSI SN: beaf12
              Size: 10000 MB
              Online: Yes
              Removable media: No
              Backing store type: rdwr
              Backing store path: /var/lib/tgtd/virtualization/virtimage2.img
      Account information:
      ACL information:
          ALL

Notice that the ACL list is set to ALL. This allows any initiator that can reach the host to access the device. I recommend you always set host access ACLs in production environments; for our lab, however, not having an ACL is fine.

Testing the targets

Now we can test whether the new iSCSI device is discoverable from client1.example.com:

# service iscsid start
# chkconfig iscsid on
# iscsiadm --mode discovery --type sendtargets --portal 192.168.1.10
192.168.1.10:3260,1 iqn.2011-07.com.example.server1:trial
#

So far so good! You can view more information about your iSCSI Target with the following command:

# iscsiadm -m node -T iqn.2011-07.com.example.server1:trial -p 192.168.1.10

Now we can log into the iSCSI target:

# iscsiadm -m node -T iqn.2011-07.com.example.server1:trial -p 192.168.1.10 -l
Logging in to [iface: default, target: iqn.2011-07.com.example.server1:trial, portal: 192.168.1.10,3260]
Login to [iface: default, target: iqn.2011-07.com.example.server1:trial, portal: 192.168.1.10,3260] successful.
#

Success! The target is now recorded in the node database under /var/lib/iscsi and is set to be logged in to automatically, so it will be re-attached (persistently) across reboots. The iscsid service reads this database at system startup and re-attaches your iSCSI Targets.
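
That automatic re-attach behavior is controlled by the node.startup setting stored in the node record. If you ever need to check or change it for this target, iscsiadm can update the record directly; for example, to make sure it is set to automatic:

# iscsiadm -m node -T iqn.2011-07.com.example.server1:trial -p 192.168.1.10 \
      -o update -n node.startup -v automatic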

To disconnect the iSCSI Target, you will need to log out:

# iscsiadm -m node -T iqn.2011-07.com.example.server1:trial -p 192.168.1.10 -u
Logging out of session [sid: 2, target: iqn.2011-07.com.example.server1:trial, portal: 192.168.1.10,3260]
Logout of [sid: 2, target: iqn.2011-07.com.example.server1:trial, portal: 192.168.1.10,3260] successful.

You could also log out of all attached targets by using this command:

# iscsiadm -m node -U all

For more information, refer to the man pages or the documentation in /usr/share/doc. We now have a working iSCSI environment for our lab.

There are other features of this setup that I haven't even touched, such as access control and iSNS, but those are beyond the scope of what appears to be required for RHCE-level certification.

Since CentOS 6.0 has been released, and is 100% binary compatible with what its developers refer to as "TUV", or "The Upstream Vendor", I am willing to speculate that these instructions will work just fine on CentOS. But don't take my word for it: install CentOS 6 on your test machine (or test virtual machine) and try it yourself!

______________________

Pete Vargas Mas is an avid indoorsman and a Linux Consultant in the Washington DC Metro area. Pete is a RHCE and a MCITP, which so far has not caused any eddies in the space-time continuum. He spends most of his time these days herding 529 Linux servers.

Comments


iSCSI target

Posted by Craig Sayler

SCSI-related events might occur at a number of points while the system starts:

1. The init script in the initrd will log in to iSCSI targets used for / (if any). This is done using the iscsistart utility (which can do this without requiring iscsid to run).

2. When the root filesystem has been mounted and the various service initscripts get run, the iscsid initscript will get called. This script will then start iscsid if any iSCSI targets are used for /, or if any targets in the iSCSI database are marked to be logged in to automatically.

3. After the classic network service script has been run (or would have been run if enabled), the iscsi initscript will run. If the network is accessible, this will log in to any targets in the iSCSI database which are marked to be logged in to automatically. If the network is not accessible, this script will exit quietly.

4. When using NetworkManager to access the network (instead of the classic network service script), NetworkManager will call the iscsi initscript. See: /etc/NetworkManager/dispatcher.d/04-iscsi

Because NetworkManager is installed in /usr, you cannot use it to configure network access if /usr is on network-attached storage such as an iSCSI target.

Craig Sayler is a Sr. Linux/Unix, HPC, Virtualization Computer Scientist @ NASA

The other point to note is

Posted by Anonymous

The other point to note is that you should mount the disk or LV with the option _netdev in /etc/fstab.

e.g.
UUID=54e1bd41-68d0-4804-94f8-1b255e53a88d /mountpoint ext4 _netdev 0 0
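
A related note: on RHEL 6, _netdev entries are mounted by the netfs init script after the network and iSCSI services come up, so it is worth making sure that service is enabled:

# chkconfig netfs on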

Yes, that's a good idea,

Posted by Pete Vargas Mas

Yes, that's a good idea, since the drive is network attached, and if it doesn't come online before the server checks filesystem integrity during boot-up, you're going to find yourself at an emergency login prompt. However, you could also use a UDEV rule to ensure your iSCSI device always comes up with the same device name, and that the system knows that it's a network attached disk. I believe Ubuntu may be a good example of this. A UDEV discussion might be a good idea for a future article! Also, this is the sort of thing to try in your lab to see what happens...
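
For anyone curious, here is a rough, untested sketch of what such a rule might look like; the rule file name, the match on ID_SERIAL_SHORT, and the serial value (beaf11, taken from the tgt-admin output above) are assumptions to adapt, and udevadm info --query=all --name=/dev/sdX will show the attributes your device actually exposes:

# /etc/udev/rules.d/75-iscsi-names.rules (hypothetical example)
# Give the iSCSI LUN a stable symlink, /dev/iscsi/virtimage1, keyed off its SCSI serial
KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="beaf11", SYMLINK+="iscsi/virtimage1"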


great article...but....

Posted by Anonymous

....how do you use these disks? more specifically use them as a physical volume in lvm? :)

Once the iSCSI Target is

Posted by Pete Vargas Mas

Once the iSCSI Target is attached to your system, it will appear to be a regular SCSI hard drive, with a device name of /dev/sd*. To use it in an LVM config, treat it like any other disk. Partition it and tag it as a physical volume, add it to a volume group, and create logical volumes as needed.
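
As a quick sketch, assuming the new disk shows up on the initiator as /dev/sdb (check dmesg or /var/log/messages after logging in to confirm the actual device name; the volume group and logical volume names here are just examples):

# pvcreate /dev/sdb
# vgcreate iscsivg /dev/sdb
# lvcreate -L 5G -n lablv iscsivg
# mkfs.ext4 /dev/iscsivg/lablv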

Keep in mind that the iSCSI Target is network attached, so there will be latency issues associated with the speed of the network. In most setups that I have seen, a dedicated, private, Gig-E LAN is set up just for storage traffic (an iSCSI Storage Area Network, or SAN).

Of course in our lab setup, this won't be an issue. But you will get the most satisfaction out of iSCSI by using Gigabit Ethernet links.

