I want to use iSCSI on Kubernetes 1.8.2, but the kubelet log shows an error:
Nov 14 16:41:09 server-222 kubelet: E1114 16:41:09.553260 1188 iscsi_util.go:233] iscsi: failed to rescan session with error: iscsiadm: No session found.
Nov 14 16:41:09 server-222 kubelet: (exit status 21)
Nov 14 16:41:09 server-222 kernel: scsi host18: iSCSI Initiator over TCP/IP
Nov 14 16:41:09 server-222 kubelet: E1114 16:41:09.632195 1188 iscsi_util.go:293] iscsi: failed to get any path for iscsi disk, last err seen:
Nov 14 16:41:09 server-222 kubelet: iscsi: failed to attach disk: Error: iscsiadm: Could not login to [iface: default, target: iqn.2017-11.cn.falseuser:storage.target00, portal: 172.22.117.221,3260].
Nov 14 16:41:09 server-222 kubelet: iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
Nov 14 16:41:09 server-222 kubelet: iscsiadm: Could not log into all portals
Nov 14 16:41:09 server-222 kubelet: Logging in to [iface: default, target: iqn.2017-11.cn.falseuser:storage.target00, portal: 172.22.117.221,3260] (multiple)
Nov 14 16:41:09 server-222 kubelet: (exit status 19)
Nov 14 16:41:09 server-222 kubelet: E1114 16:41:09.632435 1188 nestedpendingoperations.go:264] Operation for ""kubernetes.io/iscsi/172.22.117.221:3260:iqn.2017-11.cn.falseuser:storage.target00:0"" failed. No retries permitted until 2017-11-14 16:41:41.632354441 +0800 CST (durationBeforeRetry 32s). Error: MountVolume.WaitForAttach failed for volume "is" (UniqueName: "kubernetes.io/iscsi/172.22.117.221:3260:iqn.2017-11.cn.falseuser:storage.target00:0") pod "pod-volume" (UID: "336d1e55-c917-11e7-8221-0e67df33d01b") : failed to get any path for iscsi disk, last err seen:
Nov 14 16:41:09 server-222 kubelet: iscsi: failed to attach disk: Error: iscsiadm: Could not login to [iface: default, target: iqn.2017-11.cn.falseuser:storage.target00, portal: 172.22.117.221,3260].
Nov 14 16:41:09 server-222 kubelet: iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
Nov 14 16:41:09 server-222 kubelet: iscsiadm: Could not log into all portals
Nov 14 16:41:09 server-222 kubelet: Logging in to [iface: default, target: iqn.2017-11.cn.falseuser:storage.target00, portal: 172.22.117.221,3260] (multiple)
Nov 14 16:41:09 server-222 kubelet: (exit status 19)
My iSCSI volume config YAML is:

- name: is
  iscsi:
    targetPortal: 172.22.117.221:3260
    iqn: iqn.2017-11.cn.falseuser:storage.target00
    lun: 0
    fsType: ext4
    readOnly: false
    chapAuthDiscovery: true
    chapAuthSession: true
    secretRef:
      name: chap-secret
I tested connecting to the iSCSI target manually on the node, and it works; but on k8s it fails.
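The secretRef above points at a chap-secret that must exist in the same namespace as the pod. A sketch of what it could look like for the kubernetes.io/iscsi volume plugin (the key names are the ones the in-tree iSCSI plugin looks up; the credential values here are placeholders and must match what the target expects):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: chap-secret
type: "kubernetes.io/iscsi-chap"
stringData:
  # used when chapAuthDiscovery: true
  discovery.sendtargets.auth.username: someuser
  discovery.sendtargets.auth.password: somepassword
  # used when chapAuthSession: true
  node.session.auth.username: someuser
  node.session.auth.password: somepassword
```

If either credential pair is wrong or missing, the login fails with exactly the non-retryable error 19 shown in the kubelet log above.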
Arch Linux
#1 2012-04-12 15:27:16
[SOLVED] iscsi target lio (targetcli_fb) problem
I configured targetcli (not using authentication):
tpg1> set parameter AuthMethod=None
set attribute authentication=0
/etc/rc.d/target runs well.
But when the initiator connects the target, target reports the error:
iSCSI Initiator Node: iqn.xxxx-xxxx-xxxx is not authorized to access iSCSI target portal group: 1
uname -a
Linux 3.2.13-1-ARCH
what else settings? Thanks.
This is the saveconfig.json:
{
  "fabric_modules": [],
  "storage_objects": [
    {
      "attributes": {
        "block_size": 512,
        "emulate_dpo": 0,
        "emulate_fua_read": 0,
        "emulate_fua_write": 1,
        "emulate_rest_reord": 0,
        "emulate_tas": 1,
        "emulate_tpu": 0,
        "emulate_tpws": 0,
        "emulate_ua_intlck_ctrl": 0,
        "emulate_write_cache": 0,
        "enforce_pr_isids": 1,
        "is_nonrot": 0,
        "max_sectors": 1024,
        "max_unmap_block_desc_count": 0,
        "max_unmap_lba_count": 0,
        "optimal_sectors": 1024,
        "queue_depth": 32,
        "unmap_granularity": 0,
        "unmap_granularity_alignment": 0
      },
      "buffered_mode": true,
      "dev": "/home/faicker/test.img",
      "name": "disk1",
      "plugin": "fileio",
      "size": 10737418240,
      "wwn": "b83cf4a1-5e30-4df4-a5a2-ac7163d78a4d"
    }
  ],
  "targets": [
    {
      "fabric": "iscsi",
      "tpgs": [
        {
          "attributes": {
            "authentication": 0,
            "cache_dynamic_acls": 0,
            "default_cmdsn_depth": 16,
            "demo_mode_write_protect": 1,
            "generate_node_acls": 0,
            "login_timeout": 15,
            "netif_timeout": 2,
            "prod_mode_write_protect": 0
          },
          "enable": 1,
          "luns": [
            {
              "index": 0,
              "storage_object": "/backstores/fileio/disk1"
            }
          ],
          "node_acls": [],
          "parameters": {
            "AuthMethod": "None",
            "DataDigest": "CRC32C,None",
            "DataPDUInOrder": "Yes",
            "DataSequenceInOrder": "Yes",
            "DefaultTime2Retain": "20",
            "DefaultTime2Wait": "2",
            "ErrorRecoveryLevel": "0",
            "FirstBurstLength": "65536",
            "HeaderDigest": "CRC32C,None",
            "IFMarkInt": "2048~65535",
            "IFMarker": "No",
            "ImmediateData": "Yes",
            "InitialR2T": "Yes",
            "MaxBurstLength": "262144",
            "MaxConnections": "1",
            "MaxOutstandingR2T": "1",
            "MaxRecvDataSegmentLength": "8192",
            "OFMarkInt": "2048~65535",
            "OFMarker": "No",
            "TargetAlias": "LIO Target"
          },
          "portals": [
            {
              "ip_address": "0.0.0.0",
              "port": 3260
            }
          ],
          "tag": 1
        }
      ],
      "wwn": "iqn.2003-01.org.linux-iscsi.u205.x8664:sn.5f9e1e0139f2"
    }
  ]
}
Last edited by faicker (2012-04-16 15:27:21)
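In the config above, authentication is disabled, but node_acls is empty while generate_node_acls is 0 — which matches the "not authorized to access iSCSI target portal group" error: with dynamic ACL generation off, only explicitly listed initiator IQNs may log in. A likely fix (a sketch based on standard LIO TPG attributes; the thread itself doesn't state the final solution) is to enable demo mode on the TPG:

```
# In targetcli, at the target's tpg1 level:
set attribute generate_node_acls=1       # auto-generate an ACL for any initiator
set attribute cache_dynamic_acls=1
set attribute demo_mode_write_protect=0  # otherwise the LUN is exported read-only
```

Alternatively, keep generate_node_acls=0 and create an explicit ACL for the initiator's IQN under tpg1/acls.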
Thread: Can’t login to iSCSI target
[solved] Can’t login to iSCSI target
Hi.
To speed up transfers between an iSCSI target (Synology DS1511+) and a workstation (HP Z800), I purchased an Intel Ethernet card. The NAS and the computer are directly connected.
The problem is that now I can't log in to the NAS anymore. I used this guide in the past to configure the iSCSI initiator.
Step 1: iscsiadm -m discovery -t st -p 192.168.1.2
What happened? I tried to fix the problem by removing the contents of /etc/iscsi/ifaces/ and /etc/iscsi/nodes/ but it didn’t help.
I have no authentication set on the iSCSI target.
I’m using Ubuntu 12.04 LTS, 64-bit.
The solution was to disable Header Digest and Data Digest on the target.
Thanks for any advice,
zee
Last edited by zee; September 25th, 2012 at 06:42 AM. Reason: Found solution
Re: Can’t login to iSCSI target
I’m having the same problem.
Even with the doc from version 12.04:
https://help.ubuntu.com/12.04/server. initiator.html
And the NAS is working properly, because I already did this in the past with an earlier Ubuntu version.
Without CHAP credentials it's OK and working fine.
I tried your solution (disable Header Digest and Data Digest on the target) and nothing changed.
But I want to use CHAP, and I want to do my login over the console.
I don't want to put the username and password in "/etc/iscsi/iscsid.conf".
Any tricks for this?
Any tricks for this?
Last edited by JDubois450; January 29th, 2013 at 06:10 AM .
Русские Блоги (Russian Blogs)
iSCSI security password authentication
To enable iSCSI username/password authentication, the first file to modify is /etc/iet/initiators.allow.
1. Modify /etc/iet/initiators.allow
The IP-based access control configured earlier should be removed. This also shows that IP-based access control and password authentication cannot coexist!
2. Configure the iSCSI target server:
IncomingUser disuser dispass123456
# This IncomingUser is used for discovery authentication; it is global and applies to all clients.
IncomingUser windows windows123456
# Because this IncomingUser is configured between the Target and the Lun, it is only valid for that shared disk/partition.
Lun 0 Path=/dev/sdc,Type=fileio,ScsiId=xyz,ScsiSN=xyz
IncomingUser linux linux
Lun 0 Path=/dev/sdb,Type=fileio,ScsiId=xyz,ScsiSN=xyz
3. Configure the iscsi-initiator client:
# The lines above are the username and password the client uses during discovery
# This is the client's login credential
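The two comments above have lost the configuration lines they describe. In open-iscsi these settings live in /etc/iscsi/iscsid.conf; a sketch using the credentials configured on the target in step 2 (the option names are standard open-iscsi settings; which LUN's credentials you use here is up to you):

```
# CHAP credentials used during discovery (matches the global IncomingUser)
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = disuser
discovery.sendtargets.auth.password = dispass123456

# CHAP credentials used at session login (matches the per-target IncomingUser for sdb)
node.session.auth.authmethod = CHAP
node.session.auth.username = linux
node.session.auth.password = linux
```

With only the linux/linux session credentials configured, logging in to the sdb target should succeed while sdc fails — exactly the behavior shown in the output below.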
Run the discovery operation on the Linux client:
[[email protected] iscsi]# iscsiadm -m discovery -t sendtargets -p 192.168.10.50
Two disks can be seen here, which indicates that the disk configuration is correct.
Restart the iscsi service on the Linux client:
[[email protected] iscsi]# service iscsi restart
Stopping iSCSI daemon:
iscsid dead but pid file exists
[ OK ]off network shutdown. Starting iSCSI daemon: [ OK ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2013-09.com.xfzhou.Target:sdc, portal: 192.168.10.50,3260]
Logging in to [iface: default, target: iqn.2013-09.com.xfzhou.Target:sdb, portal: 192.168.10.50,3260]
iscsiadm: Could not login to [iface: default, target: iqn.2013-09.com.xfzhou.Target:sdc, portal: 192.168.10.50,3260]:
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
Login to [iface: default, target: iqn.2013-09.com.xfzhou.Target:sdb, portal: 192.168.10.50,3260]: successful
iscsiadm: Could not log into all portals. Err 19.
The output above also shows the settings are correct: login to sdb succeeded, while sdc failed as expected (the client's credentials are only valid for sdb).
You can also see the new shared disk using fdisk on the Linux client:
Disk /dev/sdg: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdg1 1 10240 10485744 83 Linux
4. Windows client configuration:
First add a new discovery entry, as follows:
In the advanced mode, you can enter the discovery password, as follows:
Note that Windows imposes a length requirement on this password: it must be more than 12 characters. The password policy on my Windows machine is actually disabled, so this should not be a system issue; and my Linux client can log in with a password shorter than 12 characters, so it must be a quirk of the Microsoft software. If the password is shorter than 12 characters, a prompt appears:
Check the system log as follows:
Once discovery is configured, you can see the two shared disks on the Targets tab:
Select the disk you have login permission for, i.e. sdc, click "Log On", then click "Advanced" to enter the login authentication information:
The same length requirement applies to this password too. Finally, you can see that the shared disk has been connected.
This completes password authentication!
iSCSI
Introduction
This page summarises configuring iSCSI on Debian 6 and 7. Thanks to HowtoForge’s excellent Using iSCSI On Debian Squeeze (Initiator And Target) from which most of the information was learned.
Overview
iSCSI allows a server to provide a virtual block device over a network to a client. The virtual block device can then be treated like a real block device – for example it can be partitioned and file systems created in the partitions.
In iSCSI terminology the server is a «target» and the client is an «initiator». On this page they are called server/target and client/initiator.
Links
Planning
- A user name and password (these are free choices, used only for the iSCSI configuration; there may be a 16-character limit on the password).
- A server/target computer:
- root access.
- The IP address. If there is more than one, the one that will be used by the client/initiator to access it.
- A local block device to be made available to the initiator (client) via iSCSI. May be a file, a HDD (whole device or partition), an LVM volume or a RAID device.
- A client/initiator computer
- root access.
Setting up the server/target
Installation
aptitude -y install iscsitarget iscsitarget-dkms
Configuration
Optionally backup the configuration files that will be changed: /etc/default/iscsitarget and /etc/iet/ietd.conf.
sed -i 's/ISCSITARGET_ENABLE=false/ISCSITARGET_ENABLE=true/' /etc/default/iscsitarget
The next step sets up serving a single LVM volume, /dev/vg0/lv0. Change the user, password and local_device values to suit; the user and password values are needed again when configuring the client/initiator. The other strings are arbitrary and could also be changed.
user=someone
password=secret
local_device=/dev/vg0/lv0
oIFS=$IFS; IFS=.; array=($(hostname --long)); IFS=$oIFS
for ((i=${#array[*]}; i>0; i--)); do backwards_fqdn+=.${array[$i-1]}; done
( echo "Target iqn.$(date +%Y-%m)$backwards_fqdn:storage.lun0"
  echo "    IncomingUser $user $password"
  echo "    OutgoingUser"
  echo "    Lun 0 Path=$local_device,Type=fileio"
  echo "    Alias LUN0"
) > /etc/iet/ietd.conf
It can be useful to know the Target value just created when configuring the client/initiator. It can be displayed with
head -1 /etc/iet/ietd.conf
Further devices can be added by editing /etc/iet/ietd.conf, replicating and modifying the first stanza.
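For reference, with the sample values above (user someone, password secret, device /dev/vg0/lv0) and a hypothetical host host.example.com in April 2012, the generated /etc/iet/ietd.conf stanza would look roughly like:

```
Target iqn.2012-04.com.example.host:storage.lun0
    IncomingUser someone secret
    OutgoingUser
    Lun 0 Path=/dev/vg0/lv0,Type=fileio
    Alias LUN0
```

Note the reversed FQDN in the IQN — that is what the backwards_fqdn loop in the script builds.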
Setting up the client/initiator
Installation
aptitude -y install open-iscsi
Configuration
Optionally backup the configuration file that will be changed: /etc/iscsi/iscsid.conf.
sed -i 's/node.startup = manual/node.startup = automatic/' /etc/iscsi/iscsid.conf
In the next step, the iSCSI daemon is used to generate an initial configuration. Change the target_ip value to suit. Starting the daemon will generate error messages because there's no configuration yet.
target_ip=192.168.10.27
/etc/init.d/open-iscsi restart
iscsiadm -m discovery -t st -p $target_ip
This should create a sub-directory of /etc/iscsi/nodes/ with the same name as the Target created when configuring the server/target.
Within that sub-directory there should be a further sub-directory with name beginning with the server/target’s IP address.
Note: if the server/target has two IP addresses (both accessible by the client/initiator?) there will be two such sub-sub-directories. It may be possible to configure a client/initiator to work this way, but initial explorations did not identify how to do so. In this case, delete the sub-sub-directory for the IP address you do not want to use.
In the next step, the user name and password are added to the configuration.
Change to the new /etc/iscsi/nodes/ / directory. In the commands below, the sed command should be on a single line.
user=someone
password=secret
sed -i "s/^node.session.auth.authmethod = None$/node.session.auth.authmethod = CHAP\nnode.session.auth.username = $user\nnode.session.auth.password = $password/" default
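If the sed command worked, the node's default file should now contain (with the sample credentials from above):

```
node.session.auth.authmethod = CHAP
node.session.auth.username = someone
node.session.auth.password = secret
```

These must match the IncomingUser line in the server's /etc/iet/ietd.conf; a mismatch produces the error-19 login failure discussed under "Error messages" below.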
Testing
The output should include:
Login to [iface: default, target: , portal: ,
and a new /dev/sd[a-z]+ device file should have appeared.
Using the iSCSI-provided block device
The new /dev/sd[a-z]+ block device can be configured as desired.
If it is configured with file system(s) that should be mounted at boot, there are two solutions depending on whether the file systems should be fsck'ed.
/etc/fstab (fsck not possible)
/etc/fstab is used in the usual way with some special considerations:
- LABEL or UUID must be used in case the /dev/sd[a-z]+ name assigned by udev changes from boot to boot.
- The options must include _netdev. This ensures that mounting is deferred until the networking daemons (including open-iscsi) are running.
- The sixth field (fs_passno) must be set to 0. This disables fsck when fstab is processed, necessary because the devices are not created until later, when the open-iscsi boot script runs.
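Combining these considerations, a sketch of such an /etc/fstab entry, reusing the UUID and mount point from the /etc/fstab-iscsi sample below:

```
UUID=ff17c31e-eaff-4b49-b5f9-39ec81892e70  /mnt/hd/iSCSI  jfs  defaults,_netdev  0  0
```

The _netdev option defers the mount until networking (and open-iscsi) is up, and the trailing 0 disables fsck as required.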
/etc/fstab-iscsi (fsck possible)
Create /etc/fstab-iscsi, based on this sample. The UUID can be found using the blkid command while the iSCSI-backed device is present:
# This is the configuration file for /etc/init.d/mountscsi.sh
# This file follows the same format as /etc/fstab.
# First column: it is strongly recommended that UUIDs or LABELs are used.
# dump column: values may be omitted; if they are present they are ignored.
# pass column: 0 disables fsck; any other value (conventionally 1) enables it.
UUID=ff17c31e-eaff-4b49-b5f9-39ec81892e70 /mnt/hd/iSCSI jfs defaults 1
Install /etc/init.d/mountscsi.sh by creating the file with read and execute permission for root and writeable only by root (sorry about the formatting; apparently I can’t drive Confluence WIKI) .
… and creating the required symlinks:
update-rc.d mountscsi.sh defaults
That will generate two «do not match LSB Default» warnings which can be ignored.
Normal operations
In normal operations the client/initiator should be shut down before the server/target. Doing otherwise will result in a delayed shutdown by the client/initiator.
Issue investigation
How to identify which /dev/sd[a-z]+ are iSCSI devices
The easiest way is to list /dev/disk/by-path/:
ls -l /dev/disk/by-path/ | grep 'ip-.*iqn.'
If lshw is installed, more information is available by
lshw -class disk -class storage
hdparm doesn't work on iSCSI devices. When smartctl was tried, the server/target logged a kernel "abort task" for the iSCSI target.
Error messages
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
As the message suggests, this is an authentication failure. Check user name and password consistency between the server/target and the client/initiator.
I have configured an Openfiler SAN for shared storage. A Linux server is the iSCSI initiator. Whenever I start the service manually I get a login error, although the shared disks are visible on the Linux initiator. Here is the output:
service iscsi start
Starting iscsi: iscsiadm: Could not login to [iface: default, target: iqn.2006-01.com.openfiler:normal1, portal: 192.168.1.195,3260].
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
iscsiadm: Could not login to [iface: default, target: iqn.2006-01.com.openfiler:normal2, portal: 192.168.1.195,3260].
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
iscsiadm: Could not login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm3, portal: 192.168.1.195,3260].
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
iscsiadm: Could not login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs, portal: 192.168.1.195,3260].
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
iscsiadm: Could not login to [iface: default, target: iqn.2006-01.com.openfiler:normal3, portal: 192.168.1.195,3260].
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
iscsiadm: Could not login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm5, portal: 192.168.1.195,3260].
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
iscsiadm: Could not login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm4, portal: 192.168.1.195,3260].
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
iscsiadm: Could not login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm2, portal: 192.168.1.195,3260].
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
iscsiadm: Could not login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm1, portal: 192.168.1.195,3260].
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
iscsiadm: Could not log into all portals
[ OK ]
I can see the disks on the linux initiator :
[root@linux1 by-path]# iscsiadm -m discovery -t sendtargets -p 192.168.2.195
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm5
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm4
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm3
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs
[root@linux1 by-path]#
How do I solve this?
[solved] Can’t login to iSCSI target
Hi.
To speed up transfers between an iSCSI target (Synology DS1511+) and a workstation (HP Z800) I purchased an Intel Ethernet card. The NAS and computer are directly connected.
The problem is that now I can't log in to the NAS anymore. I used this guide in the past to configure the iSCSI initiator.
Step 1: iscsiadm -m discovery -t st -p 192.168.1.2
Code:
root@erwin:~# iscsiadm -m discovery -t sendtargets -p 192.168.1.2
192.168.1.101:3260,0 iqn.2000-01.com.synology:syn-data
192.168.1.2:3260,0 iqn.2000-01.com.synology:syn-data
Step 2: iscsiadm -m node
Code:
root@erwin:~# iscsiadm -m node
192.168.1.101:3260,0 iqn.2000-01.com.synology:syn-data
192.168.1.2:3260,0 iqn.2000-01.com.synology:syn-data
Step 3: do the login
Code:
root@erwin:~# iscsiadm --mode node --targetname iqn.2000-01.com.synology:syn-data --portal 192.168.1.2:3260 --login
Logging in to [iface: iscsi-1, target: iqn.2000-01.com.synology:syn-data, portal: 192.168.1.2,3260]
iscsiadm: Could not login to [iface: iscsi-1, target: iqn.2000-01.com.synology:syn-data, portal: 192.168.1.2,3260]:
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
What happened? I tried to fix the problem by removing the contents of /etc/iscsi/ifaces/ and /etc/iscsi/nodes/ but it didn't help.
I have no authentication set on the iSCSI target. I'm using Ubuntu 12.04 LTS, 64-bit.
The solution was to disable Header Digest and Data Digest on the target.
Thanks for any advice,
zee
Last edited by zee; September 25th, 2012 at 06:42 AM. Reason: Found solution
Re: Can’t login to iSCSI target
Hi,
I'm having the same problem.
Even with the doc from version 12.04:
https://help.ubuntu.com/12.04/server…initiator.html
And the NAS is working properly, because I already did this in the past with an earlier Ubuntu version.
Without CHAP credentials it's OK and working fine.
I tried your solution (disable Header Digest and Data Digest on the target) and nothing changed.
But I want to use CHAP, and I want to do my login over the console. I don't want to put the username and password in "/etc/iscsi/iscsid.conf".
Any tricks for this?
Thx
Last edited by JDubois450; January 29th, 2013 at 06:10 AM.
Thread starter: Sixthmoon
Start date: Dec 26, 2014
#1
I recently upgraded from 9.2.1.8 to 9.3.
After that the iscsi service would not start. I found bug #7077 and updated /usr/local/libexec/nas/generate_ctl_conf.py
I previously had an iscsi target that was working. Now I am getting an authentication error. While troubleshooting, I have removed CHAP authentication, but I am still getting the error.
I am wondering if the typo fix has anything to do with the problem I am seeing.
Anyone have tips to troubleshoot the problem?
Thx,
#2
The error I am getting is: iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
dlavigne (Guest)
#3
Which initiator software? Anything in the logs of the inititator?
#4
I did get past the issue.
The initiator is open-iscsi on Centos 6.
It must have been a configuration error on my part. Not sure exactly what the problem was because I removed and recreated the Initiators, Targets, and Extents and all is working normally. I also deleted the session records on the test client machine.
The default option for persistent volumes on k3s is local-path, which provisions (on-demand) the storage on the node's local disk. This has the unfortunate side-effect that the container is now tied to that particular node.
To get around this, I thought I’d take a look at iSCSI. I’ve got an
unused Synology DS211 NAS that can act as an iSCSI target, so I put a
disk in it, installed the latest DSM version and got started.
I mean: how hard could it be?
Setting up the iSCSI Target
Setting up the iSCSI target is relatively simple:
- Log into the DS211.
- Open the main menu and choose “iSCSI Manager”.
- This is renamed to “SAN Manager” in DSM 7.x, and things have moved around a bit.
- On the “Target” page, click “Create”.
- Give it a sensible name. Since I’m just testing, I called it “testing”. I also edited the IQN, replacing “Target-1” with “testing”.
- I did not enable CHAP. This is all on a local, trusted, network, and I didn’t want to deal with auth at this point.
- Click “Next”.
- Select “Create a new iSCSI LUN”. A LUN (Logical Unit Number) is just a fancy name for a volume, effectively.
- It needs a name, I named it “testing-LUN-1”.
- It seems like you can have multiple LUNs per target, but I’ve not tried that. Presumably it allows you to expand the disk later.
- I’ve only got one disk (and one volume) in my DS211, so the default location is the only option.
- The default capacity is 1GB. This is fine for a quick test.
- You can choose between “Thick” and “Thin” provisioning.
- The help text is a little vague about this: “better performance” vs. “flexible storage allocation”, but what I think it means is: “pre-allocated” vs. “allocated on demand”.
- The first will take a chunk of space on the NAS, even if you’re not using all of the space inside the LUN.
- The second only grows when the LUN grows, but could fail (catastrophically?) if you run out of space on the NAS.
Note that a LUN can only be used by one initiator at a
time.
There are cluster-aware filesystems that get around this with fancy
locking or other schemes, but that’s out of scope here.
See also:
- Synology KB: How to start using the iSCSI target service on Synology NAS
- TechRepublic: How to integrate a Synology NAS in your VMware Lab
- ServeTheHome: How to Setup an iSCSI Target Using a Synology DS1812+ NAS
Install the open-iscsi package on your cluster nodes:
sudo apt install open-iscsi # on all cluster nodes
I didn’t do this until later, and I think it caused me some problems.
Aside: you can probably use node labels and selectors so that iSCSI-using containers are only scheduled on nodes that have open-iscsi installed.
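A sketch of that aside, assuming a hypothetical label name (iscsi=true) applied to the prepared nodes with `kubectl label node <node-name> iscsi=true`:

```yaml
# Deployment fragment: only schedule on nodes carrying the (hypothetical) iscsi=true label
spec:
  template:
    spec:
      nodeSelector:
        iscsi: "true"
```

Pods with this nodeSelector stay Pending rather than failing at mount time if no labeled node is available.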
Mount the volume
The deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing
  labels:
    app: testing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testing
  template:
    metadata:
      labels:
        app: testing
      name: testing
    spec:
      containers:
        - name: ubuntu
          image: ubuntu:latest
          command: ["/bin/sleep", "7d"]
          volumeMounts:
            - name: testing-vol
              mountPath: /var/lib/testing
      volumes:
        - name: testing-vol
          iscsi:
            targetPortal: 192.168.28.124:3260
            iqn: iqn.2000-01.com.synology:ds211.testing.25e6c0dc53
            lun: 1
            readOnly: false
Note the IP address for the targetPortal, rather than a host name. This is important (and also annoying). See rancher#12433.
I added the volume to the deployment, as this page suggests, but I suspect – based on the k8s docs – that you can use a persistent volume (PV) and persistent volume claim (PVC) instead.
I don’t know why you’d choose one over the other at this point.
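A sketch of the PV/PVC variant, reusing the target details from the deployment above (untested; storageClassName is left empty so the claim binds to this pre-created PV instead of triggering the local-path provisioner):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: testing-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  iscsi:
    targetPortal: 192.168.28.124:3260
    iqn: iqn.2000-01.com.synology:ds211.testing.25e6c0dc53
    lun: 1
    fsType: ext4
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testing-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
```

The pod's volumes section then becomes `persistentVolumeClaim: {claimName: testing-pvc}` instead of the inline iscsi block, which keeps the target details out of the deployment.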
So it worked then?
No. At this point, I ran into a bunch of problems.
The first one is that I hadn't installed the open-iscsi package yet. I'm so used to everything just retrying that I got a bit careless about doing things in the "right" order. I don't know whether this was the cause of my later problems, but … maybe?
The main problem was that my container refused to mount the volume. It
kept reporting the following:
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
Running iscsid in debug mode gave me a little more:
iscsid: conn 0 login rejected: initiator error - target not found (02/03)
…but ultimately I have no idea what was wrong. Eventually, I created a
1GB volume on the NAS and attempted to mount it on the node, rather than
in a container:
$ sudo iscsiadm -m discovery -t sendtargets -p 192.168.28.124:3260
...
192.168.28.124:3260,1 iqn.2000-01.com.synology:ds211.testing.25e6c0dc53
192.168.28.124:3260,1 iqn.2000-01.com.synology:ds211.tmp.25e6c0dc53
...
$ sudo iscsiadm -m node \
    --targetname iqn.2000-01.com.synology:ds211.tmp.25e6c0dc53 \
    --portal 192.168.28.124:3260 --login
At that point, it all started working and the container was able to mount the volume correctly. ¯\_(ツ)_/¯
Coming back to this later, I suspect that the problem is caused by
overlapping pod lifetimes. Kubernetes prefers to bring up a new pod
before tearing down an old one. This effectively means that more than
one container is accessing the iSCSI LUN at once, and we know that’s a
bad thing. I’ll need to play with it some more to confirm that, though.
Is the volume persistent?
Yes.
I logged into the container:
$ kubectl exec --stdin --tty testing-5d4458cc68-jffcx -- /bin/bash
# touch /var/lib/testing/kilroy-was-here
Then I deleted the pod and waited for the deployment to recreate it (on
a different node). The file was still there.
Can I use iSCSI on the node?
Yeah, it’s just standard Linux stuff. Once you’ve logged in…
$ sudo iscsiadm -m node \
    --targetname iqn.2000-01.com.synology:ds211.tmp.25e6c0dc53 \
    --portal 192.168.28.124:3260 --login
…you have a new block device…
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 57.3G 0 disk
├─sda1 8:1 1 256M 0 part /boot
└─sda2 8:2 1 57.1G 0 part /
sdb 8:16 0 1G 0 disk
sdb is the "tmp" volume. At this point you can partition, format, and mount it as normal.
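For example (a sketch; destructive, and it assumes sdb really is the freshly attached, still-empty iSCSI disk — check lsblk first):

```shell
sudo mkfs.ext4 /dev/sdb        # create a filesystem on the new block device
sudo mkdir -p /mnt/iscsi       # hypothetical mount point
sudo mount /dev/sdb /mnt/iscsi
```

If you want it mounted at boot, remember the _netdev mount option so the mount waits for the iSCSI login, as discussed in the Debian section above.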