Recently, I wrote some code, based on paramiko, that connects to a workstation under different usernames (using a private key).
I never had any issues with it, but today I get this: SSHException: Error reading SSH protocol banner
This is strange because it happens randomly, on any connection. Is there any way to fix it?
asked Sep 1, 2014 at 15:36
It depends on what you mean by "fix". The underlying cause, as pointed out in the comments, is congestion or a lack of resources. In that way, it's similar to some HTTP status codes. That's the usual cause; it could also be that the ssh server is returning the wrong header data.
429 Too Many Requests tells the client to apply rate limiting, and some APIs return 503 in a similar way when you exceed your quota. The idea is to try again later, with a delay.
You can handle this exception in your code, wait a little while, and try again. You can also edit your transport.py file to set the banner timeout to something higher. If you have an application where it doesn't matter how quickly the server responds, you could set this to 60 seconds.
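A minimal retry sketch along those lines (host, credentials, and the retry counts are assumptions, not from the question):
import time
import paramiko

def connect_with_retry(host, username, key_filename, retries=3, delay=5):
    # Banner errors are often transient congestion, so retry with a pause.
    for attempt in range(retries):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        try:
            client.connect(host, username=username,
                           key_filename=key_filename, banner_timeout=60)
            return client
        except paramiko.ssh_exception.SSHException:
            client.close()
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)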
EDIT:
Editing your transport file is no longer needed, as per Greg's answer. When you call connect, you can pass a banner_timeout (which solves this issue), a timeout (for the underlying TCP connection), and an auth_timeout (for waiting on the authentication response). Greg's answer has a code example with banner_timeout that you can lift directly.
answered Mar 24, 2015 at 4:57
TinBane
Adding to TinBane's answer, which suggests editing transport.py: you don't have to do that anymore.
Since Paramiko v1.15.0, released in 2015 (this PR, to be precise), you can configure that value when creating the Paramiko connection, like this:
from paramiko import SSHClient

client = SSHClient()
client.connect('ssh.example.com', banner_timeout=200)
In the current version of Paramiko as of this writing, v2.7.1, there are two more timeouts you can configure when calling the connect method, for these 3 in total (source):
banner_timeout — an optional timeout (in seconds) to wait for the SSH banner to be presented.
timeout — an optional timeout (in seconds) for the TCP connect.
auth_timeout — an optional timeout (in seconds) to wait for an authentication response.
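For reference, a hedged example passing all three to connect (the host and the values are placeholders):
client.connect(
    'ssh.example.com',
    timeout=10,          # TCP connect timeout
    banner_timeout=200,  # wait for the SSH banner
    auth_timeout=30,     # wait for the authentication response
)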
answered Dec 23, 2019 at 10:37
Greg Dubicki
Changing the timeout value (as TinBane mentioned) in the transport.py file from 15 to something higher resolved the issue only partially. That value is at line #484:
self.banner_timeout = 200 # It was 15
However, to resolve it permanently, I added a static line to transport.py declaring the new higher value in the _check_banner(self): function.
Here is specifically the change:
- It was like this:
def _check_banner(self):
for i in range(100):
if i == 0:
timeout = self.banner_timeout
else:
timeout = 2
- After the permanent change, it became like this:
def _check_banner(self):
for i in range(100):
if i == 0:
timeout = self.banner_timeout
timeout = 200 # <<<< Here is the explicit declaration
else:
timeout = 2
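If you would rather not patch the installed file at all, the same attribute can be set on a live Transport object at runtime; a sketch, with a placeholder hostname:
import socket
import paramiko

sock = socket.create_connection(('host.example.com', 22))
transport = paramiko.Transport(sock)
transport.banner_timeout = 200  # the value _check_banner reads on its first pass
transport.start_client()        # negotiates the session, reading the banner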
answered Nov 25, 2019 at 21:03
paramiko also seems to raise this error when I pass a non-existent filename to the key_filename kwarg. I'm sure there are other situations where this exception is raised nonsensically.
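A small guard for that case (the key location is an assumption) fails loudly instead of surfacing a misleading banner error:
import os
import paramiko

key_path = os.path.expanduser('~/.ssh/id_rsa')  # hypothetical key location
if not os.path.isfile(key_path):
    raise FileNotFoundError('SSH key not found: ' + key_path)

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('ssh.example.com', key_filename=key_path)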
answered Jun 16, 2021 at 21:53
jberryman
I had this issue with 12 parallel connections (12 threads) via a single bastion.
As I had to solve it "quick and dirty", I added a sleep between deployments.
for target in targets:
deployer.deploy_target(target, asynchronous=True)
Changed to:
for target in targets:
deployer.deploy_target(target, asynchronous=True)
time.sleep(5)
This works for me.
I also added a banner_timeout, as suggested above, to make it more reliable:
client.connect(bastion_address, banner_timeout=60)
answered Oct 5, 2021 at 9:02
Horey
I'm very new to this, so I doubt I'm really qualified to answer anyone's questions; however, I may be able to offer a simple solution to this issue.
After I migrated from one machine to another, all my scripts that had worked perfectly before the move stopped working with the following error:
Exception: Error reading SSH protocol banner
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/paramiko/transport.py", line 2211, in _check_banner
I tried, as many people suggested above, manually amending the transport.py file, but all that changed for me was that it took 60 seconds to time out rather than the default 15.
Anyway, I noticed that my new machine was running a slightly older version of paramiko, so I simply upgraded it and everything worked:
pip3 install -U paramiko
answered Aug 27, 2021 at 10:34
Well, I was also getting this with one of our Juniper devices. The timeout didn't help at all. When I used PyEZ with this, it created multiple ssh/netconf sessions with the Juniper box. Once I changed "set system services ssh connection-limit 10" from 5, it started working.
answered Apr 14, 2022 at 11:55
In my case, to speed up the download rate, I created a multiprocessing Pool of 10 processes, so 10 paramiko SSH connections were active and downloading data at the same time.
When I increased this number to 20, I started getting this error.
So it was probably congestion: too many connections active on one machine.
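A sketch of capping the concurrency at the level that worked (the pool size and the worker body are assumptions):
from multiprocessing import Pool

MAX_CONNECTIONS = 10  # stay below the level that triggered banner errors

def download(host):
    # Open one paramiko connection here, fetch the data, close it.
    pass

if __name__ == '__main__':
    with Pool(processes=MAX_CONNECTIONS) as pool:
        pool.map(download, ['host1', 'host2', 'host3'])  # placeholder hosts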
answered Jun 7, 2022 at 8:43
anicicn
@ktbyers, I gave your solution a try, but it doesn't seem to solve my problem. Thanks to pkapp on IRC, I was able to debug a bit further what's going on.
I started by activating the debug logs but paramiko isn’t very chatty about what it does under the hood unfortunately.
import logging
logging.basicConfig(level=logging.DEBUG)
These are the only things paramiko sends me back before throwing the traceback at me.
DEBUG:paramiko.transport:starting thread (client mode): 0xb4c74668
DEBUG:paramiko.transport:Local version/idstring: SSH-2.0-paramiko_1.16.0
ERROR:paramiko.transport:Exception: Error reading SSH protocol banner
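For more verbose transport logs, paramiko can also write its own log file; a minimal sketch (the filename is arbitrary):
import logging
import paramiko

# Writes detailed transport-level messages (key exchange, banner reads,
# channel events) to a file for post-mortem inspection.
paramiko.util.log_to_file('paramiko.log', level=logging.DEBUG)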
I also learned that the %h %p tokens won't be automatically substituted by paramiko when passed as part of a string to a ProxyCommand. (Even though it does seem to be working on my system; that may be the problem.) Also, the nc approach looks like it works better than the OpenSSH -W flag. So my actual ProxyCommand then looked like this:
cmd = "ssh {}@{} nc {} 22".format(host_cfg.get('user'), host_cfg.get('hostname'), destination_ip)
# cmd is now "ssh root@jump_ip nc dest_ip 22" where jump_ip and dest_ip are valid IPs
sock = ProxyCommand(cmd)
Still getting the same error though, so it didn't come from there. I added a time.sleep right after the call to ProxyCommand, then checked my proxy's logs and the subprocess's stdout and stderr like this:
sock = ProxyCommand(cmd)
print(sock.process.poll())
print(sock.process.stdout.read())
print(sock.process.stderr.read())
This code yields the following output:
None
b'SSH-2.0-OpenSSH_6.7p1 Debian-5+deb8u1\r\n'
b''
While the time.sleep or the reading of stdout/stderr is active, the connection on my proxy stays open. (Of course, I removed these before going any further, because I'm not supposed to read directly from the process.) What I don't understand is why that _check_banner function fails although the stdout of the socket clearly begins with SSH-…
On the other hand, as soon as client.connect(...) is called, the connection is immediately destroyed on my proxy. I now need a way to investigate why the connection fails this way.
For those who wants more information, here is the line in paramiko that causes that error : transport.py:1858
(Thanks again pkapp for all the help on IRC o/)
Bug Description
Revision history for this message
Huh, I see this in the n-net logs:
2014-07-27 20:11:47.967 DEBUG nova.network.manager [req-802f7e4b-3989-4343-94d0-849cefdb64aa TestVolumeBootPattern-32554776 TestVolumeBootPattern-422744072] [instance: 5ba6082f-5742-447a-9d56-bb52ae8634fb] Allocated fixed ip None on network 27dd907f-ec5f-4e9e-b369-a5a3b6bd13fa allocate_fixed_ip /opt/stack/new/nova/nova/network/manager.py:925
Notice the None, that seems odd…
I do see this later:
2014-07-27 20:12:16.240 DEBUG nova.network.manager [req-94127694-71f3-46d2-a62c-118a4d1556cb TestVolumeBootPattern-32554776 TestVolumeBootPattern-422744072] [instance: 5ba6082f-5742-447a-9d56-bb52ae8634fb] Network deallocation for instance deallocate_for_instance /opt/stack/new/nova/nova/network/manager.py:561
2014-07-27 20:12:16.279 DEBUG nova.network.manager [req-94127694-71f3-46d2-a62c-118a4d1556cb TestVolumeBootPattern-32554776 TestVolumeBootPattern-422744072] [instance: 5ba6082f-5742-447a-9d56-bb52ae8634fb] Deallocate fixed ip 10.1.0.3 deallocate_fixed_ip /opt/stack/new/nova/nova/network/manager.py:946
So when was the fixed IP actually allocated, or is that just a logging bug?
Revision history for this message
Maybe bug 1349590 is related, that’s a nova-network issue with floating IPs.
Revision history for this message
Reviewed: https://review.openstack.org/110384
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=a4c580ff03f4abb03970dd6de315ca0ba6849617
Submitter: Jenkins
Branch: master
commit a4c580ff03f4abb03970dd6de315ca0ba6849617
Author: Matt Riedemann <email address hidden>
Date: Tue Jul 29 10:18:13 2014 -0700
Add trace logging to allocate_fixed_ip
The address is being logged as None in some cases
that are failing in grenade jobs so this adds more
trace logging to the base network manager’s
allocate_fixed_ip method so we can see which paths
are being taken in the code and what the outputs
are.
Change-Id: I37de4b3bbb9e51b57eb4d048e05fc00382eed23d
Related-Bug: #1349617
Revision history for this message
I hit a similar, though slightly different, issue in http://logs.openstack.org/53/76053/16/check/check-grenade-dsvm-partial-ncpu/5a53b07/console.html#_2014-08-18_16_36_31_962 . It seems that sometimes it fails to connect, and sometimes it fails to get the banner.
2014-08-18 16:36:31.962 | 2014-08-18 16:33:03,400 8863 INFO [tempest.common.ssh] Creating ssh connection to ‘172.24.4.1’ as ‘cirros’ with public key authentication
2014-08-18 16:36:31.962 | 2014-08-18 16:33:03,412 8863 INFO [paramiko.transport] Connected (version 2.0, client OpenSSH_6.6.1p1)
2014-08-18 16:36:31.962 | 2014-08-18 16:33:03,589 8863 INFO [paramiko.transport] Authentication (publickey) failed.
2014-08-18 16:36:31.962 | 2014-08-18 16:33:03,591 8863 WARNING [tempest.common.ssh] Failed to establish authenticated ssh connection to cirros@172.24.4.1 (Authentication failed.). Number attempts: 1. Retry after 2 seconds.
2014-08-18 16:36:31.962 | 2014-08-18 16:33:06,101 8863 INFO [paramiko.transport] Connected (version 2.0, client OpenSSH_6.6.1p1)
2014-08-18 16:36:31.962 | 2014-08-18 16:33:06,273 8863 INFO [paramiko.transport] Authentication (publickey) failed.
2014-08-18 16:36:31.962 | 2014-08-18 16:33:06,276 8863 WARNING [tempest.common.ssh] Failed to establish authenticated ssh connection to cirros@172.24.4.1 (Authentication failed.). Number attempts: 2. Retry after 3 seconds.
2014-08-18 16:36:31.962 | 2014-08-18 16:33:09,786 8863 INFO [paramiko.transport] Connected (version 2.0, client OpenSSH_6.6.1p1)
2014-08-18 16:36:31.962 | 2014-08-18 16:33:09,961 8863 INFO [paramiko.transport] Authentication (publickey) failed.
2014-08-18 16:36:31.963 | 2014-08-18 16:33:09,963 8863 WARNING [tempest.common.ssh] Failed to establish authenticated ssh connection to cirros@172.24.4.1 (Authentication failed.). Number attempts: 3. Retry after 4 seconds.
2014-08-18 16:36:31.963 | 2014-08-18 16:33:14,475 8863 INFO [paramiko.transport] Connected (version 2.0, client OpenSSH_6.6.1p1)
2014-08-18 16:36:31.963 | 2014-08-18 16:33:14,645 8863 INFO [paramiko.transport] Authentication (publickey) failed.
2014-08-18 16:36:31.963 | 2014-08-18 16:33:14,649 8863 WARNING [tempest.common.ssh] Failed to establish authenticated ssh connection to cirros@172.24.4.1 (Authentication failed.). Number attempts: 4. Retry after 5 seconds.
2014-08-18 16:36:31.963 | 2014-08-18 16:33:20,161 8863 INFO [paramiko.transport] Connected (version 2.0, client OpenSSH_6.6.1p1)
2014-08-18 16:36:31.963 | 2014-08-18 16:33:20,331 8863 INFO [paramiko.transport] Authentication (publickey) failed.
2014-08-18 16:36:31.963 | 2014-08-18 16:33:20,335 8863 WARNING [tempest.common.ssh] Failed to establish authenticated ssh connection to cirros@172.24.4.1 (Authentication failed.). Number attempts: 5. Retry after 6 seconds.
2014-08-18 16:36:31.963 | 2014-08-18 16:33:26,847 8863 INFO [paramiko.transport] Connected (version 2.0, client OpenSSH_6.6.1p1)
2014-08-18 16:36:31.963 | 2014-08-18 16:33:27,018 8863 INFO [paramiko.transport] Authentication (publickey) failed.
2014-08-18 16:36:31.964 | 2014-08-18 16:33:27,020 8863 WARNING [tem…
Changed in neutron:
importance: Undecided → High
assignee: nobody → Salvatore Orlando (salvatore-orlando)
importance: High → Critical
milestone: none → juno-3
Changed in neutron:
importance: Critical → High
Revision history for this message
Revision history for this message
Just noticed similar SSH timeouts with the "check-grenade-dsvm-partial-ncpu" test job [1], from the test tempest/scenario/test_snapshot_pattern.py:
————-
.
.
2014-08-27 08:28:47.776 | 2014-08-27 08:28:41,120 9490 INFO [tempest.common.debug] Host ns list[]
2014-08-27 08:28:47.777 | 2014-08-27 08:28:41,121 9490 ERROR [tempest.scenario.test_snapshot_pattern] Initializing SSH connection failed
2014-08-27 08:28:47.777 | 2014-08-27 08:28:41.121 9490 TRACE tempest.scenario.test_snapshot_pattern Traceback (most recent call last):
2014-08-27 08:28:47.777 | 2014-08-27 08:28:41.121 9490 TRACE tempest.scenario.test_snapshot_pattern File "tempest/scenario/test_snapshot_pattern.py", line 52, in _ssh_to_server
2014-08-27 08:28:47.777 | 2014-08-27 08:28:41.121 9490 TRACE tempest.scenario.test_snapshot_pattern return self.get_remote_client(server_or_ip)
2014-08-27 08:28:47.778 | 2014-08-27 08:28:41.121 9490 TRACE tempest.scenario.test_snapshot_pattern File "tempest/scenario/manager.py", line 332, in get_remote_client
2014-08-27 08:28:47.778 | 2014-08-27 08:28:41.121 9490 TRACE tempest.scenario.test_snapshot_pattern linux_client.validate_authentication()
2014-08-27 08:28:47.778 | 2014-08-27 08:28:41.121 9490 TRACE tempest.scenario.test_snapshot_pattern File "tempest/common/utils/linux/remote_client.py", line 54, in validate_authentication
2014-08-27 08:28:47.779 | 2014-08-27 08:28:41.121 9490 TRACE tempest.scenario.test_snapshot_pattern self.ssh_client.test_connection_auth()
2014-08-27 08:28:47.779 | 2014-08-27 08:28:41.121 9490 TRACE tempest.scenario.test_snapshot_pattern File "tempest/common/ssh.py", line 151, in test_connection_auth
2014-08-27 08:28:47.779 | 2014-08-27 08:28:41.121 9490 TRACE tempest.scenario.test_snapshot_pattern connection = self._get_ssh_connection()
2014-08-27 08:28:47.780 | 2014-08-27 08:28:41.121 9490 TRACE tempest.scenario.test_snapshot_pattern File "tempest/common/ssh.py", line 88, in _get_ssh_connection
2014-08-27 08:28:47.780 | 2014-08-27 08:28:41.121 9490 TRACE tempest.scenario.test_snapshot_pattern password=self.password)
2014-08-27 08:28:47.780 | 2014-08-27 08:28:41.121 9490 TRACE tempest.scenario.test_snapshot_pattern SSHTimeout: Connection to the 172.24.4.1 via SSH timed out.
2014-08-27 08:28:47.781 | 2014-08-27 08:28:41.121 9490 TRACE tempest.scenario.test_snapshot_pattern User: cirros, Password: None
2014-08-27 08:28:47.781 | 2014-08-27 08:28:41.121 9490 TRACE tempest.scenario.test_snapshot_pattern
.
.
————-
[1] http://logs.openstack.org/04/117104/2/check/check-grenade-dsvm-partial-ncpu/d3829fe/console.html
Revision history for this message
Here is what I've gathered so far. I looked through a few failed builds and focused on one [0] that uses the metadata service rather than config drive, as it gives more clues.
1. The messages about "userdata" in the guest console don't seem related to the failure; the guest console only shows up in the logs if the build fails. I think it always says "/run/cirros/datasource/data/user-data was not '#!' or executable" or "no userdata for datasource" if no userdata is being used, and none is. The ssh keys are part of the metadata in these tests, not the userdata portion of the metadata.
2. In the metadata service log [1], there are zero calls to e.g. "GET /2009-04-04/meta-data/user-data HTTP/1.1", further supporting that userdata is not involved.
3. SSH keys are added to the metadata in nova/api/metadata.py by nova itself, so it appears unlikely there is anything wrong there; at least I didn't see anything unusual. The key is created by a POST to nova [2], and nova creates the key. The key content then appears several times in the log messages of the metadata service (it looks fine, uncorrupted).
4. The error "Exception: Error reading SSH protocol banner[Errno 104] Connection reset by peer" implies a corruption of some kind (communication otherwise wasn't a problem; there is a route), which seems consistent with too low an MTU and data occasionally getting truncated. In the log [3], the attempt to connect begins with connection refused (before sshd starts), then changes to authentication failure (likely before the guest has tried to pull the key from the metadata service), then changes to the ssh protocol banner read error. Which sounds like the key was retrieved but is corrupted (truncated?).
5. A web search for the same error turned up others having problems with the MTU setting in the guest, where they can ping but not ssh with a key pair, on openstack [4] and cirros [5].
Is it at all possible that there's an issue with the MTU of the guest sometimes? It would explain the randomness and the protocol banner errors if data is getting truncated sometimes. I'm not sure where to go from here; I didn't think anything like this would show up in the guest kernel logs.
[0] http://logs.openstack.org/38/115938/6/check/check-tempest-dsvm-neutron-pg-full-2/8833a83
[1] http://logs.openstack.org/38/115938/6/check/check-tempest-dsvm-neutron-pg-full-2/8833a83/logs/screen-q-meta.txt.gz
[2] http://logs.openstack.org/38/115938/6/check/check-tempest-dsvm-neutron-pg-full-2/8833a83/console.html#_2014-08-28_18_39_33_546
[3] http://logs.openstack.org/38/115938/6/check/check-tempest-dsvm-neutron-pg-full-2/8833a83/console.html#_2014-08-28_18_39_33_659
[4] https://ask.openstack.org/en/question/32958/unable-to-ssh-with-key-pair/
[5] https://bugs.launchpad.net/cirros/+bug/1301958
summary:
- test_volume_boot_pattern fails in grenade with "SSHException: Error reading SSH protocol banner[Errno 104] Connection reset by peer"
+ test_volume_boot_pattern fails with "SSHException: Error reading SSH protocol banner[Errno 104] Connection reset by peer"
Changed in nova:
status: New → Confirmed
importance: Undecided → Critical
Revision history for this message
I think we should focus on two aspects:
1) Ping works; otherwise we wouldn't even get to the SSH test.
2) SSH connections always show authentication failures before the 'SSH protocol banner' errors.
I don't know about the MTU possibility, but I wouldn't expect it to happen on single-host tests.
Revision history for this message
I was thinking the auth failure might happen before the guest reads the public key from metadata; then, after it reads a corrupted key, it keeps sending back truncated or otherwise invalid data in response to the SSH connection request. I read more about the paramiko error "Error reading SSH protocol banner[Errno 104]" and it can also mean the remote host didn't send a banner at all (not responding at all, as Salvatore mentioned in comment #10).
I combed the logs some more and didn't find anything useful, so I'm now going to try to reproduce the issue locally using devstack. I'd like to see the logs inside the guest (sshd logs, etc.) after this happens. Which makes me wonder whether we could add something to tempest to mount the guest disk when an ssh failure like this happens and capture some of the guest logs for debugging.
Revision history for this message
Melanie,
we have been discussing this issue in openstack-qa.
Since we too have been unable to find any evidence of issues with user data, we're going to validate the MTU hypothesis you made.
I'm going to push a patch to match the gate's MTU to cirros' MTU.
On the other hand, a newly patched cirros build with the fix for the bug you pointed out will be released soon.
Revision history for this message
Salvatore,
Okay. I agree MTU seems unlikely to be the issue but I’m glad if we can rule it out for sure.
Do you think we could do a verbose ssh in the tempest test (like ssh -vvv) to see the details of the exchange when the failure happens?
Revision history for this message
I don't think paramiko allows us to do that, and bypassing paramiko in tempest would be too much code churn, I think.
I will try to reproduce it in a local environment; it should not be too hard, as I can also intercept this failure on the VMware NSX-CI.
Revision history for this message
Revision history for this message
Thanks for the pointer, melanie. I'll first see locally how hard it would be and whether it requires changes on the infrastructure side. This is debugging info worth having (unlike the pedantic namespace info we dump, which I never find useful).
Revision history for this message
Cool. I’m trying some things locally in tempest too to see what happens when I call the log_to_file function. If I get something working in tempest, I’ll put up a patch (if you haven’t already found a way).
summary:
- test_volume_boot_pattern fails with "SSHException: Error reading SSH protocol banner[Errno 104] Connection reset by peer"
+ SSHException: Error reading SSH protocol banner[Errno 104] Connection reset by peer
Changed in neutron:
milestone: juno-3 → juno-rc1
Revision history for this message
Thanks melanie, that's good stuff to have.
I have a few local repro environments where I'm running a tweaked tempest that does not destroy the VM to which the SSH connection failed.
Revision history for this message
I reproduced the failure, and I can confirm there is no authorized_keys file in the failing instance.
To reproduce the failure, it is sufficient to start an instance with 4 cores and 8GB of memory, launch devstack with a localrc very similar to that of the full neutron test, and then keep running scenario tests.
A tweak that stops tempest from removing the instance where ssh fails helps a lot: http://paste.openstack.org/show/105982/
Revision history for this message
Awesome Salvatore, thanks for sharing that patch.
So it’s running the latest Cirros 0.3.2 which I see fixed some bugs related to getting metadata [1]. Do you see anything interesting in /var/log/cloud-init.log in the VM?
[1] https://launchpad.net/cirros/trunk/0.3.2
Changed in tempest:
assignee: nobody → Salvatore Orlando (salvatore-orlando)
Revision history for this message
So this is what I found out.
Instance log from a failing instance [1]. The important bit there is "cirros-apply-local already run per instance", and not "no userdata for datasource" as initially thought. That was just me being stupid and thinking the public key was part of user data. That was really silly.
"cirros-apply-local already run per instance" seems to appear in the console log for all SSH protocol banner failures [2]. The presence of duplicates makes it difficult to prove a correlation with the SSH protocol banner failures.
However, the key point is that local testing revealed that when the SSH connection fails, there is no authorized_keys file in /home/cirros/.ssh. This obviously explains the authentication failure. Whether the subsequent SSH protocol banner errors are due to the cited MTU problems or something else has yet to be clarified.
What is certain is that cirros processes the data source containing the public SSH key before starting sshd, so the auth failures cannot be due to the init process not yet being complete.
The cirros initialization process executes a set of steps on a per-instance basis. These steps include setting public ssh keys.
"On an instance basis" means that these steps are not executed on every boot, but once per instance.
cirros-apply local [3] is the step which processes, among other things, ssh public keys.
It is called by the cirros-per script [4], which at the end of its execution writes a marker file [5]. The cirros-per process will terminate if the marker file is already present when it runs [6].
During the failing test, the following was observed:
from the console log:
[ 3.696172] rtc_cmos 00:01: setting system clock to 2014-09-04 19:05:27 UTC (1409857527)
from the cirros-apply marker directory:
$ ls -le /var/lib/cirros/sem/
total 3
-rw-r--r-- 1 root root 35 Thu Sep 4 13:06:28 2014 instance.197ce1ac-e2df-4d3a-b392-4803383ddf74.check-version
-rw-r--r-- 1 root root 22 Thu Sep 4 13:05:07 2014 instance.197ce1ac-e2df-4d3a-b392-4803383ddf74.cirros-apply-local
-rw-r--r-- 1 root root 24 Thu Sep 4 13:06:31 2014 instance.197ce1ac-e2df-4d3a-b392-4803383ddf74.userdata
As cirros defaults to MDT (UTC-6), this means the apply-local marker was written BEFORE this instance boot.
This is consistent with the situation we're seeing, where the failure always occurs after events such as resize or stop.
The ssh public key should be applied on the first boot of the VM. When it's restarted, the process is skipped, as the key should already be there. Unfortunately the key isn't there, which is a bit of a mystery, especially since the instance is powered off gracefully thanks to [7].
Nevertheless, when an instance receives a shutdown signal, it sends a TERM signal to all processes. That means the apply-local step spawned by cirros-per at [4] can be killed before it actually writes the key.
However, even though cirros-per retrieves the return code, it writes the marker in any case [5].
This creates the conditions for a situation where the marker can be present without the apply-local phase having actually completed. As a result it is possible to have guests without SSH …
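A minimal sketch of the ordering that would avoid this race, in Python rather than the actual cirros shell scripts (the paths and the command are illustrative): write the marker only after the step succeeds, so a SIGTERM that kills the step mid-run leaves it eligible to retry on the next boot instead of being skipped forever.
import pathlib
import subprocess

# Illustrative marker path, modeled on /var/lib/cirros/sem/
marker = pathlib.Path('/var/lib/cirros/sem/instance.example.cirros-apply-local')

def run_once(cmd):
    # Run a per-instance init step at most once, marking it done only
    # if it actually finished; an interrupted run is retried next boot.
    if marker.exists():
        return
    result = subprocess.run(cmd)
    if result.returncode == 0:
        marker.touch()

# run_once(['sh', '/etc/init.d/cirros-apply', 'local'])  # hypothetical call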
Revision history for this message
Changed in tempest:
assignee: Salvatore Orlando (salvatore-orlando) → Joe Gordon (jogo)
status: New → In Progress
assignee: Joe Gordon (jogo) → Matthew Treinish (treinish)
Revision history for this message
Changed in neutron:
status: New → Incomplete
Changed in nova:
status: Confirmed → Incomplete
Changed in grenade:
status: New → Incomplete
Changed in tempest:
assignee: Matthew Treinish (treinish) → Joe Gordon (jogo)
Revision history for this message
Reviewed: https://review.openstack.org/119268
Committed: https://git.openstack.org/cgit/openstack/tempest/commit/?id=cd879c5287f4c260b1ec29e593dcad3efcfe5af7
Submitter: Jenkins
Branch: master
commit cd879c5287f4c260b1ec29e593dcad3efcfe5af7
Author: Matthew Treinish <email address hidden>
Date: Thu Sep 4 20:41:48 2014 -0400
Verify network connectivity before state check
This commit adds an initial ssh connection after bringing a server up
in setUp. This should ensure that the image has a chance to initialize
prior to messing with its state. The tests here are meant to verify that
after performing a nova operation on a running instance, network
connectivity is retained. However, it is never checked that we can
connect to the server in the first place. A probable cause for the
constant ssh failures in these tests is that the server hasn't had a
chance to finish its cloud-init (or cirros-init) stage when we're
stopping it; this should also fix those issues.
Change-Id: I126fd4943582c4b759b3cc5a67babaa8d062fb4d
Partial-Bug: #1349617
Revision history for this message
No failures in neutron jobs since the patch merged (11 hours now);
3 failures in grenade-partial-ncpu (in gate).
The patch was not expected to fix the grenade job. If I'm not mistaken, this job runs icehouse n-cpu on the 'new' part of grenade, and therefore the failure might occur because the instance is being abruptly shut down and then resumed.
Changed in neutron:
milestone: juno-rc1 → none
assignee: Salvatore Orlando (salvatore-orlando) → nobody
Revision history for this message
Hi Irena,
Do you remember why default vnic_type was not set in neutron when you were working on adding vnic_type into the port binding? Is there any reason not to do that? As you know, nova depends on this information to determine if sr-iov port should be allocated. Just want to check with you for the fix to 1370077.
Thanks,
Robert
Revision history for this message
Hi Robert,
vnic_type was added to neutron to be used with ML2.
You can also see it in the blueprint description: https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
I second Salvatore’s suggestion to default nova to VNIC_NORMAL, if binding:vnic_type is not specified by neutron.
Cheers,
Irena
Revision history for this message
Hi Irena,
I was thinking about doing it from the Nova side as well. In that case, I will close 1370077 and create one on the Nova side.
—Robert
Changed in tempest:
assignee: Joe Gordon (jogo) → nobody
status: In Progress → New
Changed in nova:
milestone: none → juno-rc1
Revision history for this message
Unclear if this is fixed or not; there was a single hit in the check queue on September 15th. No hits in the gate queue in over a week.
Changed in nova:
importance: Critical → Undecided
Revision history for this message
Changed in tempest:
status: New → Confirmed
assignee: nobody → Matthew Treinish (treinish)
status: Confirmed → Fix Committed
Changed in nova:
milestone: juno-rc1 → none
Revision history for this message
affects: tempest
status: Fix Released
Changed in tempest:
importance: Undecided → Critical
status: Fix Committed → Fix Released
Revision history for this message
Reviewed: https://review.openstack.org/137096
Committed: https://git.openstack.org/cgit/openstack/tempest/commit/?id=1fd223e750048f8f39dea2f1b3fc6c73ff0b27d1
Submitter: Jenkins
Branch: master
commit 1fd223e750048f8f39dea2f1b3fc6c73ff0b27d1
Author: Matt Riedemann <email address hidden>
Date: Tue Nov 25 07:16:09 2014 -0800
Skip test_volume_boot_pattern until bug 1373513 is fixed
Between the races to delete a volume and hitting timeouts because things
are hanging with lvm in Cinder and the various SSH timeouts, this test
is a constant burden.
The SSH problems have been around for a long time and don’t seem to be
getting any new attention.
The Cinder volume delete hangs have also been around for a while now and
don’t seem to be getting much serious attention, so until the Cinder
volume delete hangs are fixed (or at least getting some serious
attention), let’s just skip this test scenario.
Related-Bug: #1373513
Related-Bug: #1370496
Related-Bug: #1349617
Change-Id: Idb50bcdbc9683d322e9292abf50404e885a11a8e
Revision history for this message
I’m seeing a problem that appears to map to this bug, and I’m unclear whether that’s expected (i.e. because there are parts of this bug for which fixes have not yet propagated everywhere), or if my problem should be reported as new.
Specifically, in the check-tempest-dsvm-docker check for https://review.openstack.org/#/c/146914/, I’m seeing:
2015-01-13 21:38:10.693 | Traceback (most recent call last):
2015-01-13 21:38:10.693 | File "tempest/test.py", line 112, in wrapper
2015-01-13 21:38:10.693 | return f(self, *func_args, **func_kwargs)
2015-01-13 21:38:10.693 | File "tempest/scenario/test_snapshot_pattern.py", line 72, in test_snapshot_pattern
2015-01-13 21:38:10.693 | self._write_timestamp(fip_for_server['ip'])
2015-01-13 21:38:10.693 | File "tempest/scenario/test_snapshot_pattern.py", line 51, in _write_timestamp
2015-01-13 21:38:10.693 | ssh_client = self.get_remote_client(server_or_ip)
2015-01-13 21:38:10.693 | File "tempest/scenario/manager.py", line 317, in get_remote_client
2015-01-13 21:38:10.694 | linux_client.validate_authentication()
2015-01-13 21:38:10.694 | File "tempest/common/utils/linux/remote_client.py", line 55, in validate_authentication
2015-01-13 21:38:10.694 | self.ssh_client.test_connection_auth()
2015-01-13 21:38:10.694 | File "tempest/common/ssh.py", line 151, in test_connection_auth
2015-01-13 21:38:10.694 | connection = self._get_ssh_connection()
2015-01-13 21:38:10.694 | File "tempest/common/ssh.py", line 88, in _get_ssh_connection
2015-01-13 21:38:10.694 | password=self.password)
2015-01-13 21:38:10.694 | SSHTimeout: Connection to the 172.24.4.1 via SSH timed out.
2015-01-13 21:38:10.694 | User: cirros, Password: None
Searching maps that symptom to https://bugs.launchpad.net/grenade/+bug/1362554, which is a duplicate of this one.
Please can you advise whether this is expected, or something new?
Thanks — Neil
Revision history for this message
I got the same issue in my OpenStack CI. Please advise. Thanks.
Revision history for this message
Looking through the comments, I am unsure whether there really is a bug in cirros involved here, or whether the issue was only triggered by the instance being stopped too quickly during cloud-init.
Changed in cirros:
status: New → Incomplete
Changed in neutron:
importance: High → Undecided
Changed in grenade:
status: Incomplete → Invalid
Revision history for this message
Changed in nova:
status: Incomplete → Fix Released
assignee: nobody → Augustina Ragwitz (auggy)
no longer affects: cirros
Revision history for this message
This bug is > 180 days without activity. We are unsetting assignee and milestone and setting status to Incomplete in order to allow its expiry in 60 days.
If the bug is still valid, then update the bug status.
#1 Oct. 30, 2015 22:29:58
Can't figure out exceptions
Good day!
I can't figure out how to catch exceptions.
The code:
def ssh_connect(host, username, password):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        ssh.connect(host, port=22, username=username, password=password)
    except paramiko.ssh_exception.BadAuthenticationType:
        print(logbadhost + host + " does not accept passwords")
        logger.log(3, logbadhost + host + " does not accept passwords")
        return 2
    except paramiko.ssh_exception.AuthenticationException:
        logger.log(2, logbadpass + username + "@" + host + ":" + password)
        return 1
    except paramiko.ssh_exception.SSHException:
        logger.log(3, logbadhost + host + " is broken somehow")
        return 2
    except socket.timeout:
        logger.log(3, logbadhost + host + " is down")
        return 2
    except socket.error as e:
        # print(e)
        return
    except ConnectionRefusedError as e:
        # print(e)
        logger.log(3, logbadhost + host + " is broken somehow")
        return 2
    except EOFError as e:
        # print(e)
        logger.log(3, logbadhost + host + " is broken somehow")
        return 2
    except:
        return 2
    else:
        print(loggoodpass + username + "@" + host + ":" + password)
        logger.log(4, loggoodpass + username + "@" + host + ":" + password)
        return 0
    finally:
        ssh.close()
I wrapped everything in try, but it still prints:
Exception: Error reading SSH protocol banner Bad file descriptor
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/paramiko/transport.py", line 1707, in _check_banner
buf = self.packetizer.readline(timeout)
File "/usr/local/lib/python3.4/dist-packages/paramiko/packet.py", line 281, in readline
buf += self._read_timeout(timeout)
File "/usr/local/lib/python3.4/dist-packages/paramiko/packet.py", line 434, in _read_timeout
x = self.__socket.recv(128)
OSError: Bad file descriptor

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/paramiko/transport.py", line 1584, in run
self._check_banner()
File "/usr/local/lib/python3.4/dist-packages/paramiko/transport.py", line 1711, in _check_banner
raise SSHException('Error reading SSH protocol banner' + str(e))
paramiko.ssh_exception.SSHException: Error reading SSH protocol banner Bad file descriptor
Thanks in advance for your help!
#2 Oct. 31, 2015 01:44:24
The traceback doesn't show the line of your own code where the error occurred. Could the error be happening outside the try block? Because everything looks correct.
_________________________________________________________________________________
a useful blog about python: john16blog.blogspot.com
#3 Oct. 31, 2015 09:03:58
As far as I understand, the problem lies in the paramiko library. It is used only in this function, and the whole function is inside the try. If I try to connect to the host with the built-in ssh client, I get a "connection reset" error.
There are also errors like this:
Exception: Error reading SSH protocol banner Connection reset by peer
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/paramiko/transport.py", line 1707, in _check_banner
buf = self.packetizer.readline(timeout)
File "/usr/local/lib/python3.4/dist-packages/paramiko/packet.py", line 281, in readline
buf += self._read_timeout(timeout)
File "/usr/local/lib/python3.4/dist-packages/paramiko/packet.py", line 434, in _read_timeout
x = self.__socket.recv(128)
ConnectionResetError: Connection reset by peer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/paramiko/transport.py", line 1584, in run
self._check_banner()
File "/usr/local/lib/python3.4/dist-packages/paramiko/transport.py", line 1711, in _check_banner
raise SSHException('Error reading SSH protocol banner' + str(e))
paramiko.ssh_exception.SSHException: Error reading SSH protocol banner Connection reset by peer
But how can this happen? Aren't all the paramiko calls wrapped in try?
#4 Oct. 31, 2015 12:05:44
I managed to reproduce the error by pointing the script at an ftp server's port. Here's the situation: the try/except does its job, and the function returns its exit code. However, something unusual happens from the interpreter's point of view: while one exception is being handled, another exception is raised. The interpreter reports this to the standard error stream, and that is what you are seeing. So everything is actually in order, and you can carry on executing your code.
If you absolutely must get rid of that output, that's a separate question.
_________________________________________________________________________________
a useful blog about python: john16blog.blogspot.com
#5 Oct. 31, 2015 21:41:00
Is it possible to somehow get that exception, so I know the port was specified incorrectly? It would also be nice to get rid of the output (not just silence it, but actually intercept it somehow).
Edited by devnull01 (Oct. 31, 2015 21:41:13)
#6 Nov. 1, 2015 03:48:33
devnull01:
> it would be nice to get rid of the output (not just silence it, but actually intercept it somehow)
You can manipulate the output streams inside the program. Any file-like object should do. On linux, for example, it is perfectly valid to do this:
>>> import sys
>>> _s = sys.stderr
>>> sys.stderr = open('/dev/null', 'w')
>>>
>>> print('qwe')
qwe
>>> 1/0
>>>
>>> sys.stderr = _s
>>>
>>> 1/0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero
>>>
devnull01:
> Is it possible to somehow get that exception, so I know the port was specified incorrectly?
For working with exceptions there is the traceback module; it has all the functionality you need.
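A small self-contained sketch of that idea (host, port, and credentials are placeholders; connecting to an ftp port reproduces the banner error as described above):
import traceback
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    ssh.connect('127.0.0.1', port=21, username='user', password='pass')
except paramiko.ssh_exception.SSHException:
    # format_exc() returns the whole chained traceback as a string,
    # including the underlying OSError/ConnectionResetError cause.
    print(traceback.format_exc())
finally:
    ssh.close()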
_________________________________________________________________________________
a useful blog about python: john16blog.blogspot.com
#7 Nov. 1, 2015 16:19:55
Thank you very much!
To reach the remote host we need to authenticate on jumphost1 and then on jumphost2; for this we try to create a tunnel, as shown in the Python script below.
My main goal for this connection is to run a script and redirect its output to the same location where the script lives. The script is on the local machine, from which the pyc file will create the tunnel and connect to the remote machine.
Added information: both jump hosts use an ssh key generated with a passphrase, so it will ask for the password.
[root@centseven ~]# cat pyc
import paramiko
from sshtunnel import SSHTunnelForwarder
with SSHTunnelForwarder(
('1.5.18.1', 22),
ssh_username='user',
ssh_pkey="/root/.ssh/id_rsa",
ssh_private_key_password="userpass",
remote_bind_address=("1.15.18.1", 22),
local_bind_address=('127.0.0.1', 1111)
) as tunnel:
client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname=127.0.0.1, port=1111, username=root, password=remotepass)
# do some operations with client session
stdin, stdout, stderr = client.exec_command("./script >> output.txt")
print stdout.channel.recv_exit_status() # status is 0
client.close()
print('FINISH!')
The current error with the suggested change: it now asks me for the password, and after I enter it, it throws the error below.
# python pyc
Enter passphrase for key '/root/.ssh/id_rsa':
2017-05-14 23:44:34,322| ERROR | Secsh channel 0 open FAILED: open failed: Administratively prohibited
2017-05-14 23:44:34,337| ERROR | Could not establish connection from ('127.0.0.1', 1111) to remote side of the tunnel
2017-05-14 23:44:34,338| ERROR | Exception: Error reading SSH protocol banner
2017-05-14 23:44:34,339| ERROR | Traceback (most recent call last):
2017-05-14 23:44:34,339| ERROR | File "/root/.pyenv/versions/ansible2/lib/python2.7/site-packages/paramiko/transport.py", line 1740, in run
2017-05-14 23:44:34,339| ERROR | self._check_banner()
2017-05-14 23:44:34,339| ERROR | File "/root/.pyenv/versions/ansible2/lib/python2.7/site-packages/paramiko/transport.py", line 1888, in _check_banner
2017-05-14 23:44:34,340| ERROR | raise SSHException('Error reading SSH protocol banner' + str(e))
2017-05-14 23:44:34,340| ERROR | SSHException: Error reading SSH protocol banner
2017-05-14 23:44:34,340| ERROR |
Traceback (most recent call last):
File "pyc", line 16, in <module>
client.connect(hostname="127.0.0.1",port=1111,username="root",password="nasadmin")
File "/root/.pyenv/versions/ansible2/lib/python2.7/site-packages/paramiko/client.py", line 338, in connect
t.start_client()
File "/root/.pyenv/versions/ansible2/lib/python2.7/site-packages/paramiko/transport.py", line 493, in start_client
raise e
paramiko.ssh_exception.SSHException: Error reading SSH protocol banner
Edit 1
python stack.py
Enter passphrase for key '/root/.ssh/id_rsa': 2017-05-15 00:14:24,437| ERROR | Exception: Error reading SSH protocol banner
2017-05-15 00:14:24,439| ERROR | Traceback (most recent call last):
2017-05-15 00:14:24,439| ERROR | File "/root/.pyenv/versions/ansible2/lib/python2.7/site-packages/paramiko/transport.py", line 1740, in run
2017-05-15 00:14:24,440| ERROR | self._check_banner()
2017-05-15 00:14:24,440| ERROR | File "/root/.pyenv/versions/ansible2/lib/python2.7/site-packages/paramiko/transport.py", line 1888, in _check_banner
2017-05-15 00:14:24,440| ERROR | raise SSHException('Error reading SSH protocol banner' + str(e))
2017-05-15 00:14:24,440| ERROR | SSHException: Error reading SSH protocol banner
2017-05-15 00:14:24,440| ERROR |
2017-05-15 00:14:24,442| ERROR | Could not connect to gateway remotehost:22 : Error reading SSH protocol banner
Traceback (most recent call last):
File "stack.py", line 9, in <module>
remote_bind_address=("remotehost", 22)
File "/root/.pyenv/versions/ansible2/lib/python2.7/site-packages/sshtunnel.py", line 1482, in __enter__
self.start()
File "/root/.pyenv/versions/ansible2/lib/python2.7/site-packages/sshtunnel.py", line 1224, in start
reason='Could not establish session to SSH gateway')
File "/root/.pyenv/versions/ansible2/lib/python2.7/site-packages/sshtunnel.py", line 1036, in _raise
raise exception(reason)
sshtunnel.BaseSSHTunnelForwarderError: Could not establish session to SSH gateway
.ssh/config
## lo8
Host jump1-*
User user
IdentityFile ~/.ssh/id_rsa
ForwardAgent yes
ServerAliveInterval 60
ServerAliveCountMax 12
Host jump01-temporary
Hostname HostIP
Port 2222
Host jump02
Hostname HostIP
Port 2222
Host jump01
Hostname HostIP
Port 22
ProxyCommand ssh -W %h:%p jump01
Host jump02
Hostname HostIP
Port 22
ProxyCommand ssh -W %h:%p jump02
Host Remote host
Hotname HostIP
There are 2 jump servers we need to connect through: local machine -> Jump 1 -> Jump 2 -> Remote Host.
2 Answers
Best answer
For the Exception: change
client.connect(hostname=127.0.0.1, port=1111, username=root, password=nasadmin)
to
client.connect(hostname="127.0.0.1", port=1111, username="root", password="nasadmin")
These are strings, not variables.
Update
Your code works after that fix with the default ssh settings on centos6.9, so I think the remaining problem is the "administratively prohibited" ssh error: when I set AllowTcpForwarding no in /etc/ssh/sshd_config and restarted sshd, the same error appeared:
2017-05-17 16:11:09,475| ERROR | Secsh channel 0 open FAILED: open failed: Administratively prohibited
2017-05-17 16:11:09,478| ERROR | Could not establish connection from ('127.0.0.1', 3333) to remote side of the tunnel
2017-05-17 16:11:09,479| ERROR | Exception: Error reading SSH protocol banner
2017-05-17 16:11:09,481| ERROR | Traceback (most recent call last):
2017-05-17 16:11:09,481| ERROR | File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 1723, in run
2017-05-17 16:11:09,481| ERROR | self._check_banner()
2017-05-17 16:11:09,481| ERROR | File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 1871, in _check_banner
2017-05-17 16:11:09,482| ERROR | raise SSHException('Error reading SSH protocol banner' + str(e))
2017-05-17 16:11:09,482| ERROR | SSHException: Error reading SSH protocol banner
2017-05-17 16:11:09,482| ERROR |
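The direction of the fix implied here (an assumption about your gateway's sshd configuration, not something taken from the logs) is to allow TCP forwarding on the jump host:
# /etc/ssh/sshd_config on the jump host; restart sshd after changing it
AllowTcpForwarding yes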
For more details, see ssh-tunneling-error-channel-1-open-fail-administratively-prohibited-open
Good luck!
Cheney
17 May 2017 at 08:59
Try this:
import paramiko
from sshtunnel import SSHTunnelForwarder
with SSHTunnelForwarder(
('1.5.18.1', 22),
ssh_username='user',
ssh_pkey="/root/.ssh/id_rsa",
ssh_private_key_password="userpass",
remote_bind_address=("1.15.18.1", 22)
) as tunnel:
client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname=tunnel.local_bind_host, port=tunnel.local_bind_port, username="root", password="remotepass")
# do some operations with client session
stdin, stdout, stderr = client.exec_command("./script >> output.txt")
print stdout.channel.recv_exit_status() # status is 0
client.close()
print('FINISH!')
Manu Singhal
16 May 2017 at 10:21
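Neither answer chains through both jump hosts as the question describes; a sketch of that approach (hosts, ports, usernames, and key paths are placeholders) nests two SSHTunnelForwarder instances:
import paramiko
from sshtunnel import SSHTunnelForwarder

with SSHTunnelForwarder(
    ('jump1.example.com', 22),                     # first hop
    ssh_username='user',
    ssh_pkey='/root/.ssh/id_rsa',
    remote_bind_address=('jump2.example.com', 22),
) as t1:
    with SSHTunnelForwarder(
        ('127.0.0.1', t1.local_bind_port),         # reach jump2 through tunnel 1
        ssh_username='user',
        ssh_pkey='/root/.ssh/id_rsa',
        remote_bind_address=('remote.example.com', 22),
    ) as t2:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect('127.0.0.1', port=t2.local_bind_port,
                       username='root', password='remotepass')
        # run commands over the chained tunnel here
        client.close()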