URLError: urlopen error [Errno 104] Connection reset by peer

Hi, I quite often get the following error when uploading files: urllib2.URLError. What can I do to troubleshoot the issue? Thanks

Here is the full error stack.

It consistently happens after ~2 min on different connections (home and work) and different computers.

I usually manage to upload one file a day; after that I get these errors.
The files I am uploading are about 5 GB in size.
Is there a daily upload cap?

Getting upload URL...
Traceback (most recent call last):
  File "/home/someuser/bin/b2", line 949, in <module>
    main()
  File "/home/someuser/bin/b2", line 942, in main
    upload_file(args)
  File "/home/someuser/bin/b2", line 702, in upload_file
    response = post_file(url, headers, local_file)
  File "/home/someuser/bin/b2", line 313, in post_file
    with OpenUrl(url, data_file, headers) as response_file:
  File "/home/someuser/bin/b2", line 276, in __enter__
    self.file = urllib2.urlopen(request)
  File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib64/python2.7/urllib2.py", line 431, in open
    response = self._open(req, data)
  File "/usr/lib64/python2.7/urllib2.py", line 449, in _open
    '_open', req)
  File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 1240, in https_open
    context=self._context)
  File "/usr/lib64/python2.7/urllib2.py", line 1197, in do_open
    raise URLError(err)
urllib2.URLError: <urlopen error [Errno 104] Connection reset by peer>
Getting upload URL...
(the same traceback as above repeats)

thx

Contents

  1. URLError: in 01_the_machine_learning_landscape.ipynb #345
  2. Connection reset by peer #8

URLError: in 01_the_machine_learning_landscape.ipynb #345

When I run the code in the cell, I encounter an error like this:

/anaconda3/envs/tf2/lib/python3.7/http/client.py in request(self, method, url, body, headers, encode_chunked)
   1261         """Send a complete request to the server."""
-> 1262         self._send_request(method, url, body, headers, encode_chunked)
   1263

/anaconda3/envs/tf2/lib/python3.7/http/client.py in _send_request(self, method, url, body, headers, encode_chunked)
   1307             body = _encode(body, 'body')
-> 1308         self.endheaders(body, encode_chunked=encode_chunked)
   1309

/anaconda3/envs/tf2/lib/python3.7/http/client.py in endheaders(self, message_body, encode_chunked)
   1256             raise CannotSendHeader()
-> 1257         self._send_output(message_body, encode_chunked=encode_chunked)
   1258

/anaconda3/envs/tf2/lib/python3.7/http/client.py in _send_output(self, message_body, encode_chunked)
   1027         del self._buffer[:]
-> 1028         self.send(msg)
   1029

/anaconda3/envs/tf2/lib/python3.7/http/client.py in send(self, data)
    967         if self.auto_open:
--> 968             self.connect()
    969         else:

/anaconda3/envs/tf2/lib/python3.7/http/client.py in connect(self)
   1431         self.sock = self._context.wrap_socket(self.sock,
-> 1432                                               server_hostname=server_hostname)
   1433

/anaconda3/envs/tf2/lib/python3.7/ssl.py in wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session)
    422             context=self,
--> 423             session=session
    424         )

/anaconda3/envs/tf2/lib/python3.7/ssl.py in _create(cls, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, context, session)
    869                 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets")
--> 870                 self.do_handshake()
    871         except (OSError, ValueError):

/anaconda3/envs/tf2/lib/python3.7/ssl.py in do_handshake(self, block)
   1138                 self.settimeout(None)
-> 1139             self._sslobj.do_handshake()
   1140         finally:

ConnectionResetError: [Errno 104] Connection reset by peer

During handling of the above exception, another exception occurred:

URLError                                  Traceback (most recent call last)
<ipython-input> in <module>()
      6     print("Downloading", filename)
      7     url = DOWNLOAD_ROOT + "datasets/lifesat/" + filename
----> 8     urllib.request.urlretrieve(url, datapath + filename)

/anaconda3/envs/tf2/lib/python3.7/urllib/request.py in urlretrieve(url, filename, reporthook, data)
    245     url_type, path = splittype(url)
    246
--> 247     with contextlib.closing(urlopen(url, data)) as fp:
    248         headers = fp.info()
    249

/anaconda3/envs/tf2/lib/python3.7/urllib/request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
    220     else:
    221         opener = _opener
--> 222     return opener.open(url, data, timeout)
    223
    224 def install_opener(opener):

/anaconda3/envs/tf2/lib/python3.7/urllib/request.py in open(self, fullurl, data, timeout)
    523             req = meth(req)
    524
--> 525         response = self._open(req, data)
    526
    527         # post-process response

/anaconda3/envs/tf2/lib/python3.7/urllib/request.py in _open(self, req, data)
    541         protocol = req.type
    542         result = self._call_chain(self.handle_open, protocol, protocol +
--> 543                                   '_open', req)
    544         if result:
    545             return result

/anaconda3/envs/tf2/lib/python3.7/urllib/request.py in _call_chain(self, chain, kind, meth_name, *args)
    501         for handler in handlers:
    502             func = getattr(handler, meth_name)
--> 503             result = func(*args)
    504             if result is not None:
    505                 return result

/anaconda3/envs/tf2/lib/python3.7/urllib/request.py in https_open(self, req)
   1391     def https_open(self, req):
   1392         return self.do_open(http.client.HTTPSConnection, req,
-> 1393                             context=self._context, check_hostname=self._check_hostname)
   1394
   1395     https_request = AbstractHTTPHandler.do_request_

/anaconda3/envs/tf2/lib/python3.7/urllib/request.py in do_open(self, http_class, req, **http_conn_args)
   1350                           encode_chunked=req.has_header('Transfer-encoding'))
   1351         except OSError as err: # timeout error
-> 1352             raise URLError(err)
   1353         r = h.getresponse()
   1354     except:

URLError: <urlopen error [Errno 104] Connection reset by peer>



Connection reset by peer #8

I am trying to upload a (large) sequence and getting intermittent ‘connection reset by peer’ errors. The full output is in this gist. After the last error, the script never returns and it looks like the sequence did not get uploaded.


Looking in your manual upload folder, it seems like most of the images got uploaded. However, since the AWS S3 server timed out at some point, I suspect we do not close the image file before pushing it to the server. Sweden is asleep now, so I am going to hand this over to @jesolem when he wakes up tomorrow. Please be patient until then. I think in the end it is a matter of handling AWS timeouts gracefully.
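
(For what it's worth, a minimal sketch of that suspected fix, with a hypothetical helper name, not the actual uploader code: reading the image inside a with block guarantees the file is closed even if the request to S3 later times out.)

import urllib2

def post_image(url, path, headers):
    # Hypothetical helper: the with block closes the image file as
    # soon as its bytes are read, even if the upload then fails.
    with open(path, 'rb') as f:
        data = f.read()
    request = urllib2.Request(url, data, headers)
    return urllib2.urlopen(request)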

@mvexel Can you try it again now? I just committed a version which should handle timeouts better and retry.
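
For reference, the retry logic might look roughly like the sketch below (hypothetical names; the actual commit may differ): wrap urllib2.urlopen and retry a few times with a short pause between attempts.

import time
import urllib2

def urlopen_with_retry(request, max_tries=5, pause=1.0):
    # URLError wraps socket-level failures such as [Errno 104]
    # Connection reset by peer, so one except clause covers both
    # timeouts and resets.
    for attempt in range(max_tries):
        try:
            return urllib2.urlopen(request)
        except urllib2.URLError as e:
            if attempt == max_tries - 1:
                raise  # out of retries; surface the last error
            time.sleep(pause)  # brief pause before trying again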

I throttled my connection with ipfw to try to reproduce but couldn’t. Let me know if you still have issues.

Well, still some problem with "Connection reset by peer". A larger sequence (more pictures) fails with at least one error; a short sequence is usually OK. After the last "Success: xxx.jpg" message the script stops (and doesn't ask "Finalize upload?").

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/home/kayle/OSM/mapillary/mapillary_tools/python/upload.py", line 184, in run
    upload_file(filepath, **self.params)
  File "/home/kayle/OSM/mapillary/mapillary_tools/python/upload.py", line 120, in upload_file
    response = urllib2.urlopen(request)
  File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 404, in open
    response = self._open(req, data)
  File "/usr/lib/python2.7/urllib2.py", line 422, in _open
    '_open', req)
  File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 1214, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.7/urllib2.py", line 1184, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno 104] Connection reset by peer>

@kaylesk This issue is for upload_with_authentication.py, not upload.py. There is no "finalize upload" or verification needed for photos taken with the app; they already belong to a sequence, etc.

I used upload_with_authentication.py and got this error — http://freemap.kayle.sk/error.txt

Running upload_with_authentication.py from my local disk on a stable network has worked for me. But running on an external disk has not worked once. Any time the network is unstable, I get an error or two (like above), and after that the script freezes and no further uploads are attempted. I think my issue is not the internal/external disk, but some kind of timeout related to the large dataset size.

Now I received an email from Richard Weait claiming the same thing. Here are his comments:

This seems trickier than I thought. I haven’t been able to reproduce. Will keep looking.

@craigtaverner Do you get the confirmation at the end to work?

As Craig said above, I’ve been having issues with a large set of images as well.

Each "broken pipe" appeared to block that thread, and with many images, eventually all of my upload threads stopped. So the uploads never completed.

I've tried to make the except in L133 (upload.py) less selective so it will retry on any URLError; it seems like it only caught socket timeouts. I've also added a few prints so I can watch the progress, and I reduced to one thread. So far, the current upload has had three retries and is continuing.
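
To make that concrete, here is a sketch of the kind of worker loop that change produces (hypothetical names, not the actual upload.py code): catching urllib2.URLError instead of only socket.timeout keeps the thread alive through connection resets.

import socket
import urllib2

def upload_worker(queue, max_tries=4):
    # Originally only socket.timeout was caught, so a connection
    # reset raised URLError and silently killed the worker thread.
    while not queue.empty():
        request = queue.get()
        for attempt in range(max_tries):
            try:
                urllib2.urlopen(request)
                break  # success; move on to the next file
            except (urllib2.URLError, socket.timeout) as e:
                print 'retry %d after: %s' % (attempt + 1, e)
        queue.task_done()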

@jesolem what confirmation? Right now it blocks during the uploads and no final message is delivered. Are you talking about the other bug with the ‘DONE’ directory? That seems to be fixed, but not related to this current issue.

I'm testing upload.py (planning to take pics on holiday and transfer via PC) and I got a similar error:

Posting here because it’s similar, if needed I move to another issue.

@sabas Yes, that looks like the same issue. Thanks!

@rweait might be on to something: catch any URLError and retry. I can't reproduce this, so it would be great if you could let me know whether that solves it.

Hey, besides the numerous "Connection reset by peer" messages that also appear in my upload sessions of 1000 images per call, the script returns, for example, "HTTP error: HTTP Error 400: Bad Request on VIRB0886.JPG" multiple times during the upload. No traceback is given, and I'm not sure whether the script retries the upload. I cannot keep all those images until someone fixes the script, so any images that aren't uploaded successfully after two runs of the program will be lost.

Just chiming in that I am still getting some of these errors (https://gist.github.com/5d27579a151f8ac664d9) using the latest build, but not enough for me to worry a great deal about.

Actually, I think it is preventing batches from completing when one of the uploads fails. I have uploaded several batches now where the script seems to be stuck waiting for one upload to succeed.

Adding my own experience: I get a lot of broken pipe errors with upload_with_authentication.py.

I think the problem is that we live too far away (network-wise) from s3-eu-west-1. This is how a traceroute to mapillary.uploads.manual.images.s3-eu-west-1.amazonaws.com looks from Vietnam:

I have a 45/45 Mbit fiber connection, but the international link can be unreliable, with both high latency and packet loss, so @jesolem I don't think throttling with ipfw is enough to reproduce the problem.

I think @maning is also from my part of the world, and @mvexel and @rweait are about half as far away network-wise (the connection from SE Asia to the EU goes through the US).


Remember this problem: [Errno 104] Connection reset by peer

Today I had a task at work: a database table holds almost 30,000 URL records, each one an image. I need to request each URL, fetch the image, and save it locally. At first I wrote it like this (pseudocode):

import requests

for url in urls:
    try:
        # fetch the image bytes and hand them to the (pseudocode) saver
        r = requests.get(url).content
        save_image(r)
    except Exception as e:
        print str(e)

However, when I ran it on the server, every few requests it reported errors like the following:

HTTPConnectionPool(host='wx.qlogo.cn', port=80): Max retries exceeded with url: /mmopen/aTVWntpJLCAr2pichIUx8XMevb3SEbktTuLkxJLHWVTwGfkprKZ7rkEYDrKRr5icyDGIvU4iasoyRrqsffbe3UUQXT5EfMEbYKg/0 (Caused by <class 'socket.error'>: [Errno 104] Connection reset by peer)

This reminded me of an earlier pass over the Hacker News API: when requesting records one by one on my machine, I used multiple processes to speed things up, and this error could occur there too. Back then I just logged the error and moved on. This time, since I needed every image, I googled the likely cause: probably I was sending requests too frequently, and the server was closing some of the connections.
So I did roughly the following, and it did solve the problem:

import time
import requests

for url in urls:
    for i in range(10):
        try:
            r = requests.get(url).content
        except Exception as e:
            if i >= 9:
                do_some_log()    # give up after 10 attempts and log the failure
            else:
                time.sleep(0.5)  # back off before the next retry
        else:
            time.sleep(0.1)      # small delay between successful requests
            save_image(r)        # only save when the download succeeded
            break

The code is very simple, but it illustrates the general approach: adding a delay between requests avoids most of the resets, and any request that is still reset is retried up to 10 times before giving up (and being logged). In practice, with a 0.1 s delay the number of resets dropped significantly, no request needed more than 3 retries, and in the end every image was downloaded successfully.
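
For completeness, recent versions of requests can do the same thing without a hand-written loop, using urllib3's Retry helper mounted on a session (a sketch; import paths vary slightly across requests/urllib3 versions):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Up to 10 retries with a growing back-off, mirroring the manual
# sleep-and-retry loop above.
retry = Retry(total=10, backoff_factor=0.5)
session.mount('http://', HTTPAdapter(max_retries=retry))
session.mount('https://', HTTPAdapter(max_retries=retry))

# usage: content = session.get(url).content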
