ISCSI Initiator with Linux

iSCSI with jumbo-frames and multipathing

Configure networking

LACP-bonding for the frontend

/etc/netplan/bond0.yaml

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
      optional: true
    eno2:
      dhcp4: false
      dhcp6: false
      optional: true
  bonds:
    bond0:
     interfaces:
     - eno1
     - eno2
     parameters:
       lacp-rate: slow
       mode: 802.3ad
       transmit-hash-policy: layer2
     addresses:
     - 10.71.112.135/16
     gateway4: 10.71.101.1 
     nameservers:
       addresses:
       - 10.71.111.11
       - 10.71.111.12
       search:
       - domain.de

Two dedicated 10GE interfaces with jumbo-frames for the backend

/etc/netplan/iscsi.yaml

network:
  version: 2
  renderer: networkd
  ethernets:
    enp132s0f0:
      dhcp4: false
      dhcp6: false
      mtu: 9000
      addresses:
        - 10.250.71.32/24
      set-name: iscsi0
      match:
        macaddress: a0:36:9f:d4:cd:1a
    enp132s0f1:
      dhcp4: false
      dhcp6: false
      mtu: 9000
      addresses:
        - 10.251.71.32/24
      set-name: iscsi1
      match:
        macaddress: a0:36:9f:d4:cd:18
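
The match/macaddress stanzas above have to reference the real MAC addresses of the two 10GE ports. If you are unsure which MAC belongs to which port, a quick way to list them before writing the file (an extra helper step, not part of the original walkthrough):

# ip -br link | awk '{print $1, $3}'     # interface name and MAC address per port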

Apply the parameters and check settings

# netplan apply
# ip a sh iscsi0
7: iscsi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether a0:36:9f:d4:cd:1a brd ff:ff:ff:ff:ff:ff
inet 10.250.71.32/24 brd 10.250.71.255 scope global iscsi0
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd1a/64 scope link
valid_lft forever preferred_lft forever

# ip a sh iscsi1
5: iscsi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether a0:36:9f:d4:cd:18 brd ff:ff:ff:ff:ff:ff
inet 10.251.71.32/24 brd 10.251.71.255 scope global iscsi1
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fed4:cd18/64 scope link
valid_lft forever preferred_lft forever

# ip a sh bond0
12: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 32:2d:f2:c0:e2:3f brd ff:ff:ff:ff:ff:ff
inet 10.71.112.135/16 brd 10.71.255.255 scope global bond0
valid_lft forever preferred_lft forever
inet6 fe80::302d:f2ff:fed0:e23f/64 scope link
valid_lft forever preferred_lft forever
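
ip a only confirms that bond0 is up; it does not tell you whether the switch actually negotiated LACP. Since netplan/networkd builds the bond on the in-kernel bonding driver, the negotiation state can be read from /proc (an additional check, not part of the original steps):

# grep -E "Bonding Mode|Partner Mac Address" /proc/net/bonding/bond0    # a Partner Mac Address of 00:00:00:00:00:00 usually means the switch did not answer LACP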

Check if all components are configured right for jumbo-frames

# ping -c 3 -M do -s 8972 -I iscsi0 10.250.71.1
PING 10.250.71.1 (10.250.71.1) from 10.250.71.32 iscsi0: 8972(9000) bytes of data.
8980 bytes from 10.250.71.1: icmp_seq=1 ttl=64 time=0.227 ms
8980 bytes from 10.250.71.1: icmp_seq=2 ttl=64 time=0.187 ms
8980 bytes from 10.250.71.1: icmp_seq=3 ttl=64 time=0.198 ms
--- 10.250.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2045ms
rtt min/avg/max/mdev = 0.187/0.204/0.227/0.016 ms

# ping -c 3 -M do -s 8972 -I iscsi1 10.251.71.1
PING 10.251.71.1 (10.251.71.1) from 10.251.71.32 iscsi1: 8972(9000) bytes of data.
8980 bytes from 10.251.71.1: icmp_seq=1 ttl=64 time=0.202 ms
8980 bytes from 10.251.71.1: icmp_seq=2 ttl=64 time=0.195 ms
8980 bytes from 10.251.71.1: icmp_seq=3 ttl=64 time=0.191 ms
--- 10.251.71.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.191/0.196/0.202/0.004 ms

If this does not succeed, one of the switches along the path or the iSCSI storage is missing its jumbo-frame settings.
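
To narrow down where jumbo-frames are being dropped, it can help to probe a few payload sizes: everything up to 8972 bytes (8972 + 28 bytes of ICMP/IP header = 9000) should pass, 8973 should fail. A small sketch, assuming the same interface and portal address as above:

# for s in 1472 4000 8000 8972 8973; do ping -c1 -M do -s $s -I iscsi0 10.250.71.1 >/dev/null 2>&1 && echo "$s bytes: ok" || echo "$s bytes: dropped or too big"; done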

Configure iSCSI

Setup initiator iqn

Generate a new IQN:

# /sbin/iscsi-iname

The generated name goes into /etc/iscsi/initiatorname.iscsi:

/etc/iscsi/initiatorname.iscsi

## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator.  The InitiatorName must be unique
## for each iSCSI initiator.  Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4efdaa48c123
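
If you want to replace the distribution-generated name in one step, something like the following works (NEW_IQN is just a shell variable used for this sketch; restart iscsid afterwards so the daemon picks up the new name):

# NEW_IQN=$(/sbin/iscsi-iname)
# sed -i "s/^InitiatorName=.*/InitiatorName=${NEW_IQN}/" /etc/iscsi/initiatorname.iscsi
# systemctl restart iscsid.service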

Setup iSCSI-Interfaces

# iscsiadm -m iface -I iscsi0 -o new
# iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
# iscsiadm -m iface -I iscsi0
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi0
iface.net_ifacename = iscsi0
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
# iscsiadm -m iface -I iscsi1 -o new
# iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v iscsi1
# iscsiadm -m iface -I iscsi1
# BEGIN RECORD 2.0-874
iface.iscsi_ifacename = iscsi1
iface.net_ifacename = iscsi1
iface.ipaddress = <empty>
iface.hwaddress = <empty>
iface.transport_name = tcp
...
# END RECORD
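
Both records are bound to the netplan-renamed interfaces via iface.net_ifacename. Alternatively, a record can be pinned to the NIC's MAC address instead, which survives interface renames (shown here with the MACs from the netplan file; pick one binding per record, not both):

# iscsiadm -m iface -I iscsi0 --op=update -n iface.hwaddress -v a0:36:9f:d4:cd:1a
# iscsiadm -m iface -I iscsi1 --op=update -n iface.hwaddress -v a0:36:9f:d4:cd:18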

Discover LUNs that are offered by the storage

# iscsiadm -m discovery -t st -p 10.250.71.1
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: cannot make connection to 10.250.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1
# iscsiadm -m discovery -t st -p 10.251.71.1
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: cannot make connection to 10.251.71.1: No route to host
iscsiadm: connection login retries (reopen_max) 5 exceeded
10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1
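
The repeated "No route to host" messages are harmless in this setup: with two iface records defined, iscsiadm tries each portal through both interfaces, and each portal is only reachable from its own subnet. To keep discovery quiet and create node records only for the matching interface, discovery can optionally be restricted with -I (the plain commands above work as well):

# iscsiadm -m discovery -t st -p 10.250.71.1 -I iscsi0
# iscsiadm -m discovery -t st -p 10.251.71.1 -I iscsi1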


Login to discovered LUNs

# iscsiadm -m node -T  iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 --login
Logging in to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] (multiple)
Login to [iface: iscsi0, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1, portal: 10.250.71.1,3260] successful.

# iscsiadm -m node -T  iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 --login
Logging in to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] (multiple)
Login to [iface: iscsi1, target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1, portal: 10.251.71.1,3260] successful.
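
With only these two node records present, both logins can also be done in one step (equivalent to the two explicit commands above):

# iscsiadm -m node --loginall=all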

Take a look at the running session

# iscsiadm -m session -P 1
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
        Current Portal: 10.250.71.1:3260,1
        Persistent Portal: 10.250.71.1:3260,1
                **********
                Interface:
                **********
                Iface Name: iscsi0
                Iface Transport: tcp
                Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
                Iface IPaddress: 10.250.71.32
                Iface HWaddress: <empty>
                Iface Netdev: iscsi0
                SID: 1
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE
Target: iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
        Current Portal: 10.251.71.1:3260,2
        Persistent Portal: 10.251.71.1:3260,2
                **********
                Interface:
                **********
                Iface Name: iscsi1
                Iface Transport: tcp
                Iface Initiatorname: iqn.1993-08.org.debian:01:4efdaa48c123
                Iface IPaddress: 10.251.71.32
                Iface HWaddress: <empty>
                Iface Netdev: iscsi1
                SID: 2
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE


Check that the sessions are still OK after a restart of iscsid.service

# systemctl status iscsid.service
# systemctl restart iscsid.service
# systemctl status iscsid.service
# iscsiadm -m session -o show
tcp: [1] 10.251.71.1:3260,2 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 (non-flash)
tcp: [2] 10.250.71.1:3260,1 iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 (non-flash)
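
To see which SCSI disks belong to which session after the restart, the more verbose print level also lists the attached devices:

# iscsiadm -m session -P 3 | grep -E "^Target|Attached scsi disk"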

Enable automatic startup of connection

# iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
# iscsiadm -m node --op=update -n node.startup -v automatic
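
A quick way to verify that both startup values were written, using the same grep pattern as the timeout check below:

# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 | grep startup    # node.startup and node.conn[0].startup should now read 'automatic'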

Check timeout parameter

# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1 |  grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120

# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1 |  grep node.session.timeo.replacement_timeout
node.session.timeo.replacement_timeout = 120

Adjust timeout values to your needs

# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20000:10.250.71.1  -o update -n  node.session.timeo.replacement_timeout -v 10
# iscsiadm -m node -T iqn.2006-08.com.huawei:oceanstor:210028def5f846b5::20001:10.251.71.1  -o update -n  node.session.timeo.replacement_timeout -v 10
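
With more than a handful of targets, looping over all node records is less error-prone than repeating the command per IQN. A sketch using the record list printed by iscsiadm -m node (the second column is the target name):

# for T in $(iscsiadm -m node | awk '{print $2}'); do iscsiadm -m node -T "$T" -o update -n node.session.timeo.replacement_timeout -v 10; done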

Configure multipathing

List SCSI devices

# lsscsi
[0:2:0:0]    disk    DELL     PERC H730 Mini   4.30  /dev/sda   <--- this is our internal disk / raid
[11:0:0:0]   cd/dvd  HL-DT-ST DVD+-RW GTA0N    A3C0  /dev/sr0
[12:0:0:1]   disk    HUAWEI   XSG1             4305  /dev/sdb   <--- this is our iSCSI-storage
[13:0:0:1]   disk    HUAWEI   XSG1             4305  /dev/sdc   <--- this is our iSCSI-storage

Get wwids for devices

# /lib/udev/scsi_id --whitelisted --device=/dev/sda
361866da075bdee001f9a2ede2705b9ba
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3628dee5100f846b5243be07d00000004
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3628dee5100f846b5243be07d00000004
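
sdb and sdc report the same WWID because they are two paths to the same LUN; that shared WWID is what multipathd uses to group them, while the local RAID's distinct WWID is what gets blacklisted below. To print all disks with their WWIDs in one go:

# for d in /dev/sd?; do printf '%-10s ' "$d"; /lib/udev/scsi_id --whitelisted --device="$d"; done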

Setup multipathing configuration

/etc/multipath.conf

defaults {
    user_friendly_names yes
}
devices {
     device {
               vendor                  "HUAWEI"
               product                 "XSG1"
               path_grouping_policy    failover
               path_checker            tur
               prio                    const
               path_selector           "round-robin 0"
               failback                immediate
               no_path_retry           15
               dev_loss_tmo            30
               fast_io_fail_tmo        5
     }
}
blacklist {
  # devnode "^sd[a]$"
  # I highly recommend you blacklist by wwid instead of device name
     
  # blacklist /dev/sda by wwid
  wwid 361866da075bdee001f9a2ede2705b9ba
}
multipaths {
  multipath {
    wwid 3628dee5100f846b5243be07d00000004
    # alias here can be anything descriptive for your LUN
    alias data
  }
}
# systemctl restart multipathd.service
# journalctl -lfu multipathd.service
# multipathd show config | less
(search for the HUAWEI section and check that the parameters from above are active)

Startup multipathing

From the multipath(8) man page:

-r Force a reload of all existing multipath maps. This command is delegated to the
multipathd daemon if it's running. In this case, other command line switches of the multipath
command have no effect.
-ll Show ("list") the current multipath topology from all available information (sysfs, the
device mapper, path checkers ...).
# multipath -r
# multipath -ll
data (3628dee5100f846b5243be07d00000004) dm-0 HUAWEI,XSG1
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 12:0:0:1 sdb 8:16 active ready running
  `- 13:0:0:1 sdc 8:32 active ready running
# ls -al /dev/mapper/data
lrwxrwxrwx 1 root root 7 Okt 18 14:46 /dev/mapper/data -> ../dm-0
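
The mount unit below assumes an XFS filesystem already exists on the multipath device and that the mount point is present. On a freshly provisioned LUN you would create both first; mkfs is destructive, so double-check the device node before running it:

# mkfs.xfs /dev/mapper/data
# mkdir -p /data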

Create a systemd unit to mount it at the right time during boot

# systemctl edit --force --full data.mount

/etc/systemd/system/data.mount

[Unit]
Before=remote-fs.target
After=iscsi.service
Requires=iscsi.service
After=blockdev@dev-mapper-data.target

[Mount]
Where=/data
What=/dev/mapper/data
Type=xfs
Options=defaults

Enable your unit for the next reboot and start it now

# systemctl enable data.mount
# systemctl start  data.mount

Check for success

# df -h /dev/mapper/data
Filesystem       Size Used Avail Use% Mounted on
/dev/mapper/data  10T  72G   10T   1% /data
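
Finally, on a test system it is worth provoking a path failure once to see multipathing actually take over; this briefly degrades redundancy, so do not do it casually in production:

# ip link set iscsi0 down      # take one backend path away
# multipath -ll data           # one path should now show 'failed faulty'; I/O continues on the other
# ip link set iscsi0 up        # the failed path should return to 'active ready' once iscsid reconnects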

Further reading

External link collection:

https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server.pdf
https://www.suse.com/support/kb/doc/?id=000019648
https://ubuntu.com/server/docs/service-iscsi