With an iSCSI target we can provide access to disk storage on a server over the network to a client running an iSCSI initiator. The iSCSI initiator is then able to use the storage from the iSCSI target server as if it were a local disk.
Here we cover how you can set up both an iSCSI target and an iSCSI initiator in Linux and connect them together.
Example Environment
In this example we will be working with two different Linux servers, both of which are running CentOS 7.
- Client: 192.168.1.100: This Linux system acts as the iSCSI initiator; it will connect to the iSCSI target on the server over the network.
- Server: 192.168.1.200: This Linux system acts as the iSCSI target server; it provides the disk space that will be accessible over the network to the client.
Configure iSCSI Target
First we’ll start by configuring the iSCSI target on our server, which will be offering its disk space over the network to our client which is the iSCSI initiator.
We want to install the ‘targetcli’ package on the server. This provides a shell for viewing and modifying the target configuration, allowing us to export local storage resources such as files, volumes or RAM disks to external systems. The layout is tree based, and navigation works in a similar way to moving through the file system, with commands such as ‘cd’ and ‘ls’ available.
yum install targetcli -y
Once installed we want to start the target service, and enable it so that it automatically starts up on system boot.
systemctl start target
systemctl enable target
For further information on basic service management with systemctl, see our guide here.
Now we can run the targetcli command, followed by ls from within the targetcli prompt to get an idea of what is available.
[root@server ~]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / .................................... [...]
  o- backstores ......................... [...]
  | o- block ............. [Storage Objects: 0]
  | o- fileio ............ [Storage Objects: 0]
  | o- pscsi ............. [Storage Objects: 0]
  | o- ramdisk ........... [Storage Objects: 0]
  o- iscsi ....................... [Targets: 0]
  o- loopback .................... [Targets: 0]
The targetcli command offers tab completion, so if you get stuck just press tab a couple of times to view available options. These will change depending on what level of the hierarchy you are in, and just like the file system you can always go up with ‘cd ..’.
Creating a Backstore
The first section listed is the backstores. Backstores provide different ways of locally storing the data that will be exported to an external system. The available options are block, fileio, pscsi and ramdisk. In our example we’ll be demonstrating both the block and fileio options, as these are quite common.
A block backstore is simply a Linux block device such as a hard drive like /dev/sdc.
A fileio backstore is a file on the file system that has been created with a predefined size; generally the performance of a single file is not as good as that of a block backstore.
To create a backstore, type the backstores command followed by the type that you want to create, such as fileio. If you get stuck, put the tab completion to work. Here we are creating the testfile fileio backstore, which will make a 500MB /tmp/fileio file on disk. Setting write_back=false disables write-back caching in favour of write-through, which decreases performance but reduces the risk of data loss; a better option in a production environment.
/> backstores/fileio create testfile /tmp/fileio 500M write_back=false
Created fileio testfile with size 524288000
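As a quick sanity check you can confirm the backing file from another shell on the server; this simply lists the /tmp/fileio file created above:

[root@server ~]# ls -lh /tmp/fileio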
If you’re using a block device rather than a file, the command would look like this. In this example we are using the disk /dev/sdc as our backstore.
/> backstores/block create name=block dev=/dev/sdc
Created block storage object block using /dev/sdc.
Once complete, if you issue the ‘ls’ command again the backstores should be listed.
/> ls
o- / ...................................................... [...]
  o- backstores ........................................... [...]
  | o- block ............................... [Storage Objects: 1]
  | | o- block ....... [/dev/sdc (1.0GiB) write-thru deactivated]
  | o- fileio .............................. [Storage Objects: 1]
  | | o- testfile .. [/tmp/fileio (500.0MiB) write-thru deactivated]
  | o- pscsi ............................... [Storage Objects: 0]
  | o- ramdisk ............................. [Storage Objects: 0]
  o- iscsi ......................................... [Targets: 0]
  o- loopback ...................................... [Targets: 0]
Create the iSCSI Target and Portal
Next we want to create the actual iSCSI target itself. Start by moving into the iSCSI path as shown below; you don’t need to prepend the ‘cd’ command to access it, though that works as well.
/> iscsi/
Once in here, we can create the iSCSI target with a specific IQN (iqn.2016-01.com.example) and iSCSI target name (target).
/iscsi> create iqn.2016-01.com.example:target
Created target iqn.2016-01.com.example:target.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
Alternatively, you can just enter ‘create’ by itself and a default IQN and target name will be generated automatically; you don’t have to pick these manually, but the option is nice to have.
Now if we run ‘ls’ again we should see our iSCSI target listed.
/iscsi> ls
o- iscsi ............................. [Targets: 1]
  o- iqn.2016-01.com.example:target ..... [TPGs: 1]
    o- tpg1 ................ [no-gen-acls, no-auth]
      o- acls ........................... [ACLs: 0]
      o- luns ........................... [LUNs: 0]
      o- portals ..................... [Portals: 1]
        o- 0.0.0.0:3260 ...................... [OK]
As we can see here, a portal has already been created. As of RHEL 7.1, once a target has been created a default portal is also configured, listening on TCP port 3260 on 0.0.0.0 (all IP addresses).
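If you don’t want the portal listening on all IP addresses, you can delete the default portal and create one bound to a specific address instead. A minimal sketch from within the tpg1 node (which we move into in the next step), assuming the 192.168.1.200 server address from our example environment:

/iscsi/iqn.20...e:target/tpg1> portals/ delete 0.0.0.0 3260
/iscsi/iqn.20...e:target/tpg1> portals/ create 192.168.1.200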
Create a LUN
Next we want to make a LUN with our previously defined backstore.
First move into the target portal group (TPG) that was just created.
/iscsi> iqn.2016-01.com.example:target/tpg1/
After this we can create the LUN, specifying any backstore that we have previously created. By default a LUN created in this way will have read-write permissions applied. Here we create a LUN for both our fileio and block backstores.
/iscsi/iqn.20...e:target/tpg1> luns/ create /backstores/fileio/testfile
Created LUN 0.
/iscsi/iqn.20...e:target/tpg1> luns/ create /backstores/block/block
Created LUN 1.
Now if we execute ‘ls’ we should see both of our LUNs present.
/iscsi/iqn.20...e:target/tpg1> ls
o- tpg1 .................... [no-gen-acls, no-auth]
  o- acls ............................... [ACLs: 0]
  o- luns ............................... [LUNs: 2]
  | o- lun0 ....... [fileio/testfile (/tmp/fileio)]
  | o- lun1 .............. [block/block (/dev/sdc)]
  o- portals ......................... [Portals: 1]
    o- 0.0.0.0:3260 .......................... [OK]
Create an ACL
Now we need to configure an access control list (ACL) to define the initiators that are actually allowed to connect to the iSCSI target.
To do this, go to the client system that will be our iSCSI initiator and get the contents of the /etc/iscsi/initiatorname.iscsi file. You can edit this file if you want, otherwise the default is fine. This will work as long as the ACL we configure on the iSCSI target server matches the contents of this file on the iSCSI initiator client.
[root@client ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:5bf95f78165
Now that we have the name of our initiator from the client system, we enter the ACL section on the server within the TPG.
/iscsi/iqn.20...e:target/tpg1> acls/
Now create the ACL by specifying the initiator name found on the client system.
/iscsi/iqn.20...get/tpg1/acls> create iqn.1994-05.com.redhat:5bf95f78165
Created Node ACL for iqn.1994-05.com.redhat:5bf95f78165
Created mapped LUN 1.
Created mapped LUN 0.
Note that all LUNs that have been created within this iSCSI target will automatically be mapped to the ACL.
/iscsi/iqn.20...get/tpg1/acls> ls
o- acls ........................................ [ACLs: 1]
  o- iqn.1994-05.com.redhat:5bf95f78165 .. [Mapped LUNs: 2]
    o- mapped_lun0 .......... [lun0 fileio/testfile (rw)]
    o- mapped_lun1 ............... [lun1 block/block (rw)]
That’s all of the configuration required. We can use ‘cd’ to go up to the root of the iSCSI target and then use ‘ls’ to view all of the configuration.
/iscsi/iqn.20...get/tpg1/acls> cd ../..
/iscsi/iqn.20...xample:target> ls
o- iqn.2016-01.com.example:target .......................... [TPGs: 1]
  o- tpg1 ..................................... [no-gen-acls, no-auth]
    o- acls ................................................ [ACLs: 1]
    | o- iqn.1994-05.com.redhat:5bf95f78165 ......... [Mapped LUNs: 2]
    |   o- mapped_lun0 ................... [lun0 fileio/testfile (rw)]
    |   o- mapped_lun1 ....................... [lun1 block/block (rw)]
    o- luns ................................................ [LUNs: 2]
    | o- lun0 ........................ [fileio/testfile (/tmp/fileio)]
    | o- lun1 ............................... [block/block (/dev/sdc)]
    o- portals .......................................... [Portals: 1]
      o- 0.0.0.0:3260 ........................................... [OK]
Saving Changes
To save the configuration, simply exit; this will write everything to the /etc/target/saveconfig.json file as shown below.
/iscsi/iqn.20...xample:target> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
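If you want to save at any point without exiting, targetcli also provides an explicit saveconfig command that writes to the same file:

/> saveconfig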
iSCSI Firewall Rules
Once complete the target server should now be listening on TCP port 3260 as shown below.
[root@server ~]# netstat -antup | grep 3260
tcp   0   0 0.0.0.0:3260   0.0.0.0:*   LISTEN   -
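Alternatively, if netstat is not available on your system (it is part of the deprecated net-tools package), ss can perform the same check:

[root@server ~]# ss -tnlp | grep 3260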
Now we need to allow traffic through firewalld on this port.
[root@server ~]# firewall-cmd --permanent --add-port=3260/tcp
success
[root@server ~]# firewall-cmd --reload
success
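To double check that the rule is in place after the reload, list the ports currently allowed, which should now include 3260/tcp:

[root@server ~]# firewall-cmd --list-ports
3260/tcp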
Our iSCSI target is now ready to accept connections from the iSCSI initiator on our client system (iqn.1994-05.com.redhat:5bf95f78165).
Configure iSCSI Initiator
Now that the iSCSI target has been set up, we can proceed with configuring the iSCSI initiator on the client side to connect to it.
The initiator needs the iscsi-initiator-utils package installed prior to connecting; install it first as shown below.
yum install iscsi-initiator-utils -y
Next be sure to start and enable both iscsid and iscsi. Note that you will likely need to restart these if you edit the IQN of the initiator later.
systemctl enable iscsid iscsi
systemctl start iscsid iscsi
We will be connecting with the initiator name specified in the /etc/iscsi/initiatorname.iscsi file. If you modify this, you will also need to update the ACL on the iSCSI target, as the name needs to be the same on both sides.
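For example, if you do change the IQN in this file, the new name only takes effect once the services have been restarted; a quick sketch:

[root@client ~]# vi /etc/iscsi/initiatorname.iscsi
[root@client ~]# systemctl restart iscsid iscsi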
Next we can perform a discovery against the IP address of the target server to see what iSCSI targets are on offer. In this instance 192.168.1.200 is our iSCSI target server.
[root@client ~]# iscsiadm --mode discovery --type sendtargets --portal 192.168.1.200
192.168.1.200:3260,1 iqn.2016-01.com.example:target
From the client system we can see the available target; next we want to log in to it in order to use it.
[root@client ~]# iscsiadm -m node -T iqn.2016-01.com.example:target -l
Logging in to [iface: default, target: iqn.2016-01.com.example:target, portal: 192.168.1.200,3260] (multiple)
Login to [iface: default, target: iqn.2016-01.com.example:target, portal: 192.168.1.200,3260] successful.
From the client we can view all active iSCSI sessions as shown below.
[root@client mnt]# iscsiadm -m session -P 0
tcp: [1] 192.168.1.200:3260,1 iqn.2016-01.com.example:target (non-flash)
We can also change -P 0 to 1, 2 or 3 for increasing levels of information.
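For instance, -P 3 also prints the attached SCSI devices for each session, which is useful for matching an iSCSI session to the local disk names used below:

[root@client ~]# iscsiadm -m session -P 3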
The fileio and block disks shared from the iSCSI target are now available to the iSCSI initiator, as shown below. In this case local disk /dev/sdb is our fileio file over on the target server in /tmp/fileio, while local disk /dev/sdc is the block disk /dev/sdc over on the target server.
[root@client ~]# lsblk --scsi
NAME HCTL       TYPE VENDOR   MODEL            REV  TRAN
sda  2:0:0:0    disk VMware,  VMware Virtual S 1.0  spi
sdb  3:0:0:0    disk LIO-ORG  testfile         4.0  iscsi
sdc  3:0:0:1    disk LIO-ORG  block            4.0  iscsi
sr0  1:0:0:0    rom  NECVMWar VMware IDE CDR10 1.00 ata
Both of these disks are now usable as if they were normal locally attached disks to the client system.
[root@client ~]# fdisk -l

Disk /dev/sdb: 524 MB, 524288000 bytes, 409600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 8388608 bytes

Disk /dev/sdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes
We can partition or put a file system onto them as if they were local disks.
[root@client ~]# mkfs.xfs /dev/sdb
meta-data=/dev/sdb               isize=256    agcount=4, agsize=12800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=51200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=853, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@client ~]# mkfs.xfs /dev/sdc
...
From there we can mount them anywhere as required. Here we mount them to /mnt and /mnt2 for testing and see that they are available for use.
[root@client ~]# mount /dev/sdb /mnt
[root@client ~]# mkdir /mnt2
[root@client ~]# mount /dev/sdc /mnt2
[root@client ~]# df -h | grep mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        497M   11M  486M   6% /mnt
/dev/sdc       1014M   33M  982M   4% /mnt2
We could then add these into /etc/fstab to mount them automatically during system boot.
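As a minimal sketch, entries along these lines could be used; the devices and mount points are just the ones from this example, and in practice UUID= identifiers from the blkid command are safer than /dev/sdX names, which can change between boots. The _netdev option is important, as it tells the system to wait for the network (and the iSCSI login) before attempting the mount.

# Example /etc/fstab entries for the iSCSI backed disks (illustrative only)
/dev/sdb    /mnt     xfs    _netdev    0 0
/dev/sdc    /mnt2    xfs    _netdev    0 0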
To log out of the iSCSI target, first unmount the disks.
[root@client /]# umount /mnt
[root@client /]# umount /mnt2
Then perform the actual logout; after this we confirm that there are no active sessions.
[root@client ~]# iscsiadm -m node -u
Logging out of session [sid: 1, target: iqn.2016-01.com.example:target, portal: 192.168.1.200,3260]
Logout of [sid: 1, target: iqn.2016-01.com.example:target, portal: 192.168.1.200,3260] successful.

[root@client ~]# iscsiadm -m session -P 0
iscsiadm: No active sessions.
At this point if we reboot the client system, it will automatically log back in to the iSCSI target, so if you did set up auto mounting via /etc/fstab the disks should mount properly. If we then reboot the iSCSI target server, it should automatically start the target service, making the iSCSI target available on system boot.
Extra Tips
Well, that’s quite a bit to remember! Luckily almost all of this information is available in the targetcli and iscsiadm manual pages, so keep these in mind if you get stuck.
man targetcli
man iscsiadm
Summary
We have now shown you how to first configure the iSCSI target on the server that will be exporting its storage over the network, and then set up an iSCSI initiator on a client system to connect to the target server. Once connected, the iSCSI initiator client is able to make use of the disks exported on the iSCSI target server as though they were locally attached. This includes performing operations such as partitioning or creating a file system and then mounting the disks and accessing them over the network to read and write data via iSCSI.
This post is part of our Red Hat Certified Engineer (RHCE) exam study guide series. For more RHCE related posts and information check out our full RHCE study guide.
One final tip: in order for the auto mounting via /etc/fstab to persist across reboots, the node.startup value for this target should be set to automatic instead of manual. Depending on your setup, you might have to set this in /etc/iscsi/nodes/IQN/IP/default, /etc/iscsi/iscsid.conf, or both.
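Rather than editing those files by hand, the iscsiadm command can update the stored record for you; a sketch using the target and portal from this example:

[root@client ~]# iscsiadm -m node -T iqn.2016-01.com.example:target -p 192.168.1.200 -o update -n node.startup -v automatic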