
We are running 11a with IBM SVC storage integration and storage snapshots. On our Linux proxies we regularly find leftover multipath devices. In the case below, the only device that should be mapped at this time is the 5 GB disk (mpatha, used for testing). No backup is running anymore, so none of the 10 TB disks should be there. The number of leftover devices varies from host to host; some hosts don't have any.

 

# multipath -l
mpathsqu (3600507680c80802f3000000000037e70) dm-41 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpathsqh (3600507680c80802f3000000000037e82) dm-15 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpathsqt (3600507680c80802f3000000000037e77) dm-39 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpathsqs (3600507680c80802f3000000000037e6b) dm-37 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpathsqr (3600507680c80802f3000000000037e67) dm-35 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpatha (3600507680c80802f30000000000274b6) dm-6 IBM,2145
size=5.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| |- 1:0:0:0 sdg 8:96 active undef running
| |- 5:0:0:0 sdc 8:32 active undef running
| |- 1:0:1:0 sdh 8:112 active undef running
| `- 5:0:9:0 sdt 65:48 active undef running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 1:0:7:0 sdk 8:160 active undef running
|- 5:0:5:0 sdq 65:0 active undef running
|- 1:0:8:0 sdl 8:176 active undef running
`- 5:0:6:0 sdr 65:16 active undef running
mpathsqq (3600507680c80802f3000000000037e7a) dm-33 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpathsqp (3600507680c80802f3000000000037e7f) dm-31 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpathsqo (3600507680c80802f3000000000037e79) dm-29 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpathsqn (3600507680c80802f3000000000037e83) dm-27 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpathsqm (3600507680c80802f3000000000037e75) dm-25 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpathsql (3600507680c80802f3000000000037e7d) dm-23 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpathsqk (3600507680c80802f3000000000037e81) dm-21 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpathsqw (3600507680c80802f3000000000037e6e) dm-45 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpathsqj (3600507680c80802f3000000000037e84) dm-19 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpathsqv (3600507680c80802f3000000000037e78) dm-43 ##,##
size=10T features='0' hwhandler='0' wp=rw
mpathsqi (3600507680c80802f3000000000037e7e) dm-17 ##,##
size=10T features='0' hwhandler='0' wp=rw


# lsscsi | grep IBM
[1:0:0:0] disk IBM 2145 0000 /dev/sdg
[1:0:1:0] disk IBM 2145 0000 /dev/sdh
[1:0:4:16] disk IBM 2145 0000 /dev/sdi
[1:0:5:16] disk IBM 2145 0000 /dev/sdj
[1:0:7:0] disk IBM 2145 0000 /dev/sdk
[1:0:8:0] disk IBM 2145 0000 /dev/sdl
[1:0:9:16] disk IBM 2145 0000 /dev/sdm
[1:0:11:16] disk IBM 2145 0000 /dev/sdn
[5:0:0:0] disk IBM 2145 0000 /dev/sdc
[5:0:3:16] disk IBM 2145 0000 /dev/sdo
[5:0:4:16] disk IBM 2145 0000 /dev/sdp
[5:0:5:0] disk IBM 2145 0000 /dev/sdq
[5:0:6:0] disk IBM 2145 0000 /dev/sdr
[5:0:8:16] disk IBM 2145 0000 /dev/sds
[5:0:9:0] disk IBM 2145 0000 /dev/sdt
[5:0:10:16] disk IBM 2145 0000 /dev/sdu



# fdisk -l /dev/sdj
fdisk: cannot open /dev/sdj: No such file or directory
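
As a manual workaround (just a sketch; the map name and SCSI address below are taken from the output above and should be double-checked as unused before deleting anything), such a stale map can be flushed and its orphaned SCSI devices dropped by hand:

# flush one leftover map by name (only if it is really unused)
multipath -f mpathsqu

# or flush all maps that are currently not in use
multipath -F

# remove the stale SCSI device behind the unreachable /dev/sdj (1:0:5:16 above)
echo 1 > /sys/class/scsi_device/1:0:5:16/device/delete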

 

The same on another proxy.

 

# multipath -ll
mpathoib (3600507680c80802f3000000000036fd4) dm-7 IBM,2145
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:8 sde 8:64 active ready running
| |- 5:0:0:8 sdl 8:176 active ready running
| |- 1:0:1:8 sdg 8:96 active ready running
| `- 5:0:2:8 sdn 8:208 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:6:8 sdi 8:128 active ready running
|- 5:0:6:8 sdp 8:240 active ready running
|- 1:0:7:8 sdk 8:160 active ready running
`- 5:0:8:8 sdr 65:16 active ready running
mpatha (3600507680c80802f30000000000274b6) dm-6 IBM,2145
size=5.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:0 sdd 8:48 active ready running
| |- 5:0:0:0 sda 8:0 active ready running
| |- 1:0:1:0 sdf 8:80 active ready running
| `- 5:0:2:0 sdm 8:192 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:6:0 sdh 8:112 active ready running
|- 5:0:6:0 sdo 8:224 active ready running
|- 1:0:7:0 sdj 8:144 active ready running
`- 5:0:8:0 sdq 65:0 active ready running

# lsscsi |grep IBM
[1:0:0:0] disk IBM 2145 0000 /dev/sdd
[1:0:0:8] disk IBM 2145 0000 /dev/sde
[1:0:1:0] disk IBM 2145 0000 /dev/sdf
[1:0:1:8] disk IBM 2145 0000 /dev/sdg
[1:0:6:0] disk IBM 2145 0000 /dev/sdh
[1:0:6:8] disk IBM 2145 0000 /dev/sdi
[1:0:7:0] disk IBM 2145 0000 /dev/sdj
[1:0:7:8] disk IBM 2145 0000 /dev/sdk
[5:0:0:0] disk IBM 2145 0000 /dev/sda
[5:0:0:8] disk IBM 2145 0000 /dev/sdl
[5:0:2:0] disk IBM 2145 0000 /dev/sdm
[5:0:2:8] disk IBM 2145 0000 /dev/sdn
[5:0:6:0] disk IBM 2145 0000 /dev/sdo
[5:0:6:8] disk IBM 2145 0000 /dev/sdp
[5:0:8:0] disk IBM 2145 0000 /dev/sdq
[5:0:8:8] disk IBM 2145 0000 /dev/sdr


# fdisk -l /dev/sdn
Disk /dev/sdn: 10 TiB, 10995116277760 bytes, 21474836480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes
Disklabel type: gpt
Disk identifier: DBF187B5-3EA8-4145-86B0-E0D5880A0F9B

Device Start End Sectors Size Type
/dev/sdn1 2048 21474836446 21474834399 10T VMware VMFS
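
Since this 10 TB LUN is still fully readable, it seems the snapshot volume may still be mapped to the proxy on the SVC side. One way to check (a sketch; the host object name is just a placeholder) is the SVC CLI:

# on the SVC: list the volumes currently mapped to the proxy's host object
svcinfo lshostvdiskmap <proxy_host_object>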



 

Here is a host where everything looks perfectly fine. Only the permanently mapped 5 GB test LUN is visible.

 

# multipath -l
mpatha (3600507680c80802f30000000000274b6) dm-6 IBM,2145
size=5.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| |- 1:0:0:0 sdd 8:48 active undef running
| |- 5:0:7:0 sdj 8:144 active undef running
| |- 1:0:1:0 sde 8:64 active undef running
| `- 5:0:0:0 sdh 8:112 active undef running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 1:0:6:0 sdf 8:80 active undef running
|- 5:0:4:0 sdi 8:128 active undef running
|- 1:0:7:0 sdg 8:96 active undef running
`- 5:0:2:0 sda 8:0 active undef running

 

 

Any ideas? 

Hi Ralf, I am curious how you are connecting to the SVC. Is it iSCSI or FC?


To be honest, I have not had a chance to try Linux proxies with FC myself. Did you open a support case with logs so the team can check it out?

Are you on the latest patch?


If you are on Red Hat, I guess you have support from RH? Opening a ticket with their support could be useful, especially since it is an integration on RHEL 8.

I had some trouble in the past on RHEL 7, and RH support helped us correct the problem.


It's FC.

 

multipathd> show paths
hcil dev dev_t pri dm_st chk_st dev_st next_check
1:0:0:0 sdd 8:48 50 active ready running XXX....... 39/120
1:0:1:0 sde 8:64 50 active ready running XXXXXXXXX. 119/120
1:0:10:26 sdn 8:208 10 failed faulty running X......... 16/120
1:0:11:26 sdo 8:224 10 failed faulty running .......... 6/120
1:0:2:16 sdf 8:80 50 failed faulty running XXX....... 10/30
1:0:3:26 sdg 8:96 50 failed faulty running X......... 19/120
1:0:4:16 sdh 8:112 50 failed faulty running X......... 16/120
1:0:5:26 sdi 8:128 50 failed faulty running X......... 20/120
1:0:6:0 sdj 8:144 10 active ready running XXXXXXX... 89/120
1:0:7:0 sdk 8:160 10 active ready running .......... 6/120
1:0:9:16 sdm 8:192 10 failed faulty running X......... 16/120
1:0:8:16 sdl 8:176 10 failed faulty running XX........ 30/120
5:0:0:0 sda 8:0 50 active ready running XXX....... 46/120
5:0:1:0 sdp 8:240 50 active ready running XXXX...... 50/120
5:0:11:26 sdz 65:144 10 failed faulty running X......... 16/120
5:0:9:26 sdx 65:112 10 failed faulty running .......... 3/120
5:0:2:16 sdq 65:0 50 failed faulty running X......... 13/120
5:0:3:16 sdr 65:16 50 failed faulty running .......... 6/120
5:0:4:26 sds 65:32 50 failed faulty running X......... 23/120
5:0:5:26 sdt 65:48 50 failed faulty running XXXXX..... 17/30
5:0:6:0 sdu 65:64 10 active ready running XXX....... 43/120
5:0:8:0 sdw 65:96 10 active ready running XXXXX..... 69/120
5:0:7:16 sdv 65:80 10 failed faulty running XX........ 25/120
5:0:10:16 sdy 65:128 10 failed faulty running X......... 14/120



multipathd> show topology
create: mpatha (3600507680c80802f30000000000274b6) dm-6 IBM,2145
size=5.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:0 sdd 8:48 active ready running
| |- 5:0:0:0 sda 8:0 active ready running
| |- 1:0:1:0 sde 8:64 active ready running
| `- 5:0:1:0 sdp 8:240 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:6:0 sdj 8:144 active ready running
|- 5:0:6:0 sdu 65:64 active ready running
|- 1:0:7:0 sdk 8:160 active ready running
`- 5:0:8:0 sdw 65:96 active ready running
create: mpathrff (3600507680c80802f3000000000037006) dm-7 IBM,2145
size=10T features='0' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=0 status=enabled
| |- 1:0:3:26 sdg 8:96 failed faulty running
| |- 5:0:4:26 sds 65:32 failed faulty running
| |- 1:0:5:26 sdi 8:128 failed faulty running
| `- 5:0:5:26 sdt 65:48 failed faulty running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 1:0:10:26 sdn 8:208 failed faulty running
|- 5:0:11:26 sdz 65:144 failed faulty running
|- 1:0:11:26 sdo 8:224 failed faulty running
`- 5:0:9:26 sdx 65:112 failed faulty running
create: mpathrev (3600507680c80802f3000000000037000) dm-8 IBM,2145
size=10T features='0' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=0 status=enabled
| |- 1:0:2:16 sdf 8:80 failed faulty running
| |- 5:0:2:16 sdq 65:0 failed faulty running
| |- 1:0:4:16 sdh 8:112 failed faulty running
| `- 5:0:3:16 sdr 65:16 failed faulty running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 1:0:9:16 sdm 8:192 failed faulty running
|- 5:0:7:16 sdv 65:80 failed faulty running
|- 1:0:8:16 sdl 8:176 failed faulty running
`- 5:0:10:16 sdy 65:128 failed faulty running
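
The two leftover 10 TB maps here (mpathrff and mpathrev) consist only of failed paths, so assuming they are indeed orphaned snapshot LUNs, a rough manual cleanup (a sketch; verify first that the maps are unused) looks like this:

# flush the orphaned maps
multipath -f mpathrff
multipath -f mpathrev

# delete every SCSI device that multipathd reports as failed
# (in 'show paths' above, column 1 is the H:C:T:L address, column 5 the dm_st)
multipathd show paths | awk '$5 == "failed" {print $1}' | while read hcil; do
    echo 1 > "/sys/class/scsi_device/$hcil/device/delete"
done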


 

 

RHEL 8 multipath.conf

 

 

defaults {
    find_multipaths yes
    user_friendly_names yes
    path_selector "round-robin 0"
    path_grouping_policy multibus
    prio alua
    polling_interval 30
    getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
}

devices {
    device {
        vendor "IBM"
        product "2145"
        path_grouping_policy group_by_prio
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        #features "1 queue_if_no_path"
        prio alua
        path_checker tur
        failback immediate
        no_path_retry "5"
        rr_weight uniform
        rr_min_io_rq 1
        dev_loss_tmo 120
    }
}

blacklist {
    device {
        vendor "HP"
        product "LOGICAL VOLUME"
    }
}
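
For completeness, after any change to /etc/multipath.conf the running daemon has to re-read it; I do that like this (the grep pattern is only for illustration):

# reload the configuration without restarting multipathd
multipathd reconfigure

# check the effective settings for the IBM 2145 device entry
multipathd show config | grep -A 15 '"2145"'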

 


Latest patch. I have not opened a case yet, as Linux knowledge is not the strength of Veeam support, and it may be more of a Linux topic than a Veeam one. When I had problems with the storage integration in the past, Veeam pointed at IBM, so this will be a painful process anyway.

