New connection error in Python

I have Python 3 running on a Linux server. I need to install some libraries (obviously), so I'm trying:

pip3 install numpy

This results in the following error:

Collecting numpy
  Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f7542572828>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /simple/numpy/
  Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f7542572eb8>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /simple/numpy/
  Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f7542572be0>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /simple/numpy/
  Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f7542572d30>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /simple/numpy/
  Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f7542572860>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /simple/numpy/
  Could not find a version that satisfies the requirement numpy (from versions: )
No matching distribution found for numpy

Questions:

  1. What could be the problem? Why is this error being raised?
  2. How can this be avoided in the future?

Feel free to ask for more details.

UPDATE:
I tried ping google.com and got the error:

ping: google.com: Name or service not known

But when I tried ping 8.8.8.8, I got:

PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=118 time=10.8 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=118 time=10.6 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=118 time=10.6 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=118 time=10.6 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=7 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=8 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=9 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=10 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=11 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=12 ttl=118 time=10.8 ms
64 bytes from 8.8.8.8: icmp_seq=13 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=14 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=15 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=16 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=17 ttl=118 time=10.6 ms
64 bytes from 8.8.8.8: icmp_seq=18 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=19 ttl=118 time=10.6 ms
64 bytes from 8.8.8.8: icmp_seq=20 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=21 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=22 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=23 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=24 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=25 ttl=118 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=26 ttl=118 time=10.7 ms
^C
--- 8.8.8.8 ping statistics ---
26 packets transmitted, 26 received, 0% packet loss, time 25046ms
rtt min/avg/max/mdev = 10.655/10.731/10.827/0.073 ms

A problem with DNS maybe? What should I do?
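One quick way to confirm the DNS suspicion from Python itself is to attempt a name lookup directly (a minimal sketch; pypi.org is just a convenient well-known host):

import socket

try:
    # This performs the same name resolution that pip needs
    print(socket.getaddrinfo("pypi.org", 443))
except socket.gaierror as exc:
    # [Errno -2] Name or service not known ends up here when DNS is broken
    print("DNS resolution failed:", exc)

If this fails while ping 8.8.8.8 succeeds, the machine's resolver configuration (for example /etc/resolv.conf) is the likely culprit.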

Contents

  1. urllib3.exceptions.NewConnectionError: Failed to establish a new connection: [Errno 113] No route to host #836
  2. Comments
  3. Fix “Max retries exceeded with URL” error in Python requests library
  4. “Max retries exceeded with URL” debugging
  5. Double-check the URL
  6. Unstable internet connection / server overload
  7. Increase request timeout
  8. Apply backoff factor

urllib3.exceptions.NewConnectionError: Failed to establish a new connection: [Errno 113] No route to host #836

I often get this error, but the script works:

GET http://x.x.x.x:9200/_nodes/_all/http [status:N/A request:2.992s]
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 171, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw)
  File "/usr/lib/python3.6/site-packages/urllib3/util/connection.py", line 79, in create_connection
    raise err
  File "/usr/lib/python3.6/site-packages/urllib3/util/connection.py", line 69, in create_connection
    sock.connect(sa)
OSError: [Errno 113] No route to host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/elasticsearch/connection/http_urllib3.py", line 172, in perform_request
    response = self.pool.urlopen(method, url, body, retries=Retry(False), headers=request_headers, **kw)
  File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 638, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/lib/python3.6/site-packages/urllib3/util/retry.py", line 343, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/lib/python3.6/site-packages/urllib3/packages/six.py", line 686, in reraise
    raise value
  File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 354, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib64/python3.6/http/client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.6/http/client.py", line 964, in send
    self.connect()
  File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 196, in connect
    conn = self._new_conn()
  File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 180, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f9e5db5d208>: Failed to establish a new connection: [Errno 113] No route to host

It sounds like there might be some problem in the connection between your Python program and your Elasticsearch cluster.

Unfortunately I can’t offer any advice here because there’s not enough information.

How many nodes are in your cluster? Do you have them sitting behind a proxy or not? Are you using X-Pack or not? Are you using a hosted service or hosting it yourself?

There are a lot of factors going on here that can cause HTTP connection errors.

Any more info would be helpful.

@laurentvv thanks for that. This is helpful.

My next question: you say you have 3 nodes, 3 VMs. Where are those VMs running? Are they all on one computer? On 3 computers? In a data center? On your desk?

On a related note:

If you use curl instead of the Python client, do you experience similar errors?
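Along the same lines, you can also take elasticsearch-py out of the equation from within Python. A hypothetical check (substitute one of your real node addresses for x.x.x.x):

import requests

# Hit the same endpoint directly through requests.
# If this fails too, the problem is the network, not elasticsearch-py.
r = requests.get("http://x.x.x.x:9200/_nodes/_all/http", timeout=5)
print(r.status_code)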

The error you're getting is the result of a network connection issue, a firewall rule, or something similar.
It is actually not an error related to:

  1. Elasticsearch
  2. Elasticsearch-py (this library)

This error is coming from urllib3, and you're getting it because Python cannot make a connection to the HTTP host it's trying to reach.

If we take Elasticsearch and Elasticsearch-py out of the picture, this error happens because a Python process using urllib3 cannot connect to a networked host.

This is why I asked all those questions: somewhere you have a network configuration that's preventing the Python process from connecting to a host in your cluster. It could be anything from the VM firewall rules, the host firewall rules, the data center firewall rules, routers, etc.

Fix “Max retries exceeded with URL” error in Python requests library

Python is a simple, minimalistic, easy-to-comprehend programming language that is globally accepted and universally used today. Its simple, easy-to-learn syntax can sometimes lead Python developers, especially those newer to the language, to miss some of its subtleties and underestimate the power of the language.

One of the most common error messages that new developers encounter when using the requests library in Python is "Max retries exceeded with URL" (besides timeout errors). While it seems simple, this somewhat vague error message can sometimes leave even advanced Python developers scratching their heads for a good few hours.

This article will show you what causes the "Max retries exceeded with URL" error and a few ways to debug it.

“Max retries exceeded with URL” debugging

"Max retries exceeded with URL" is a common error that you will encounter when using the requests library to make a request. It indicates that the request could not be completed successfully. Usually, the verbose error message looks like the output below:
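(A representative example; the host, URL and errno are hypothetical and will differ in your case:)

Traceback (most recent call last):
  ...
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='example.com', port=443): Max retries exceeded with url: /get (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f0123456789>: Failed to establish a new connection: [Errno 61] Connection refused'))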

Sometimes the error message may look slightly different, like below:
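(Again representative only; an SSL variant, with a hypothetical host:)

requests.exceptions.SSLError: HTTPSConnectionPool(host='example.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate')))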

The error message usually begins with requests.exceptions.ConnectionError, which tells us that something went wrong while requests was trying to connect. Sometimes the exception is requests.exceptions.SSLError, which is obviously an SSL-related problem.

The exception is then followed by a more detailed string about the error, which could be Failed to establish a new connection: [Errno 61] Connection refused, [Errno 11001] getaddrinfo failed, [Errno 10054] An existing connection was forcibly closed by the remote host, or [Errno -2] Name or service not known. These messages are produced by the underlying system library that requests calls internally. Based on these texts, we can further isolate and fix the problem.

Double-check the URL

There is a possibility that your requested URL is wrong. It may be malformed or lead to a non-existent endpoint. In practice, this is often the case for Python beginners. Seasoned developers can also run into it, especially when the URL is parsed from a webpage, where it can turn out to be a relative or schemeless URL.

One way to further debug this is to prepare the URL in advance, then print it before actually making a connection.
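A minimal sketch of that idea (base_url and endpoint are hypothetical values):

from urllib.parse import urljoin

import requests

base_url = "https://api.example.com"  # hypothetical
endpoint = "/v1/items"                # hypothetical

url = urljoin(base_url, endpoint)
print("Requesting:", url)  # verify the final URL before sending anything
response = requests.get(url)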

Unstable internet connection / server overload

The underlying problem may be related to your own connection or to the server you're trying to reach. An unstable internet connection may cause packet loss between network hops, leading to failed connections. At other times the server has received so many requests that it cannot process them all, and your requests won't receive a response.

In this case, you can try increasing the number of retry attempts and disabling keep-alive connections to see if the problem goes away, as sketched below. The time spent on each request will certainly increase, but that's a trade-off you must accept. Better yet, find a more reliable internet connection.
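A sketch of both ideas, assuming a plain GET (the retry count and URL are illustrative):

import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# Retry failed connection attempts up to 5 times before giving up
session.mount("https://", HTTPAdapter(max_retries=5))
# Ask the server to close the connection after each response
# instead of keeping it alive
session.headers["Connection"] = "close"

response = session.get("https://example.com/api")  # hypothetical URL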

Increase request timeout

Another way to avoid the "Max retries exceeded with URL" error, especially when the server is busy handling a huge number of connections, is to increase the amount of time the requests library waits for a response from the server. In other words, you wait longer for a response but increase the chance of the request finishing successfully. This method can also be applied when the server is in a location far away from yours.

To increase the request timeout, simply pass the time value in seconds to the get or post method:
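For example (the URL and the value are illustrative):

import requests

# Wait up to 10 seconds for the server before giving up
response = requests.get("https://example.com/api", timeout=10)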

You can also pass a tuple to timeout with the first element being a connect timeout (the time it allows for the client to establish a connection to the server), and the second being a read timeout (the time it will wait on a response once your client has established a connection).

If the request establishes a connection within 2 seconds and receives data within 5 seconds of the connection being established, then the response will be returned as it was before. If the request times out, then the function will raise a Timeout exception:
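A short sketch of that behaviour (hypothetical URL):

import requests

try:
    # 2 seconds to establish the connection, 5 seconds to receive data
    response = requests.get("https://example.com/api", timeout=(2, 5))
except requests.exceptions.Timeout:
    print("The request timed out")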

Apply backoff factor

backoff_factor is an argument of urllib3, the library that requests relies on to establish network connections. Below is an example where we use backoff_factor to slow down requests to the server whenever there is a failure.
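(A sketch with illustrative values and a hypothetical URL:)

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry_strategy = Retry(
    total=5,                                # up to 5 retries in total
    backoff_factor=0.1,                     # sleep 0.0s, 0.2s, 0.4s, ... between retries
    status_forcelist=[500, 502, 503, 504],  # also retry on these status codes
)

session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry_strategy))
session.mount("http://", HTTPAdapter(max_retries=retry_strategy))

response = session.get("https://example.com/api")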

According to the urllib3 documentation, backoff_factor is the base value that the library uses to calculate the sleep interval between retries. Specifically, urllib3 will sleep for {backoff factor} * (2 ** ({number of total retries} - 1)) seconds after every failed connection attempt.

For example, if the backoff_factor is 0.1, then sleep() will sleep for 0.0s, 0.2s, 0.4s, … between retries. By default, backoff is disabled (set to 0). The sketch above also forces a retry if the returned status code is 500, 502, 503 or 504.

You can customize Retry to have even more granular control over retries; a combined sketch follows the list. Other notable options are:

  • total – Total number of retries to allow.
  • connect – How many connection-related errors to retry on.
  • read – How many times to retry on read errors.
  • redirect – How many redirects to perform.
  • method_whitelist – Set of uppercased HTTP method verbs that we should retry on.
  • status_forcelist – A set of HTTP status codes that we should force a retry on.
  • backoff_factor – A backoff factor to apply between attempts.
  • raise_on_redirect – Whether, if the number of redirects is exhausted, to raise a MaxRetryError , or to return a response with a response code in the 3xx range.
  • raise_on_status – Similar meaning to raise_on_redirect: whether we should raise an exception, or return a response, if status falls in status_forcelist range and retries have been exhausted.
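A combined sketch using several of these options (the values are illustrative; note that method_whitelist was renamed to allowed_methods in urllib3 1.26+):

from urllib3.util.retry import Retry

retry = Retry(
    total=10,                          # overall cap on retries
    connect=3,                         # connection-related errors
    read=3,                            # read errors
    redirect=5,                        # redirects to follow
    method_whitelist=["GET", "HEAD"],  # only retry idempotent methods
    status_forcelist=[500, 502, 503, 504],
    backoff_factor=0.5,
    raise_on_redirect=False,           # return the 3xx response instead of raising
    raise_on_status=False,             # return the response instead of raising
)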

We hope this article helped you successfully debug the "Max retries exceeded with URL" error in the Python requests library, as well as avoid it in the future. We've also written a few other guides for fixing common Python errors, such as Timeout in Python requests, Python Unresolved Import in VSCode, or "IndexError: List Index Out of Range" in Python. If you have any suggestions, please feel free to leave a comment below.

This part of the documentation covers all the interfaces of Requests. For
parts where Requests depends on external libraries, we document the most
important right here and provide links to the canonical documentation.

Main Interface¶

All of Requests’ functionality can be accessed by these 7 methods.
They all return an instance of the Response object.

requests.request(method, url, **kwargs)[source]

Constructs and sends a Request.

Parameters:
  • method – method for the new Request object.
  • url – URL for the new Request object.
  • params – (optional) Dictionary, list of tuples or bytes to send
    in the body of the Request.
  • data – (optional) Dictionary, list of tuples, bytes, or file-like
    object to send in the body of the Request.
  • json – (optional) A JSON serializable Python object to send in the body of the Request.
  • headers – (optional) Dictionary of HTTP Headers to send with the Request.
  • cookies – (optional) Dict or CookieJar object to send with the Request.
  • files – (optional) Dictionary of 'name': file-like-objects (or {'name': file-tuple}) for multipart encoding upload.
    file-tuple can be a 2-tuple ('filename', fileobj), 3-tuple ('filename', fileobj, 'content_type')
    or a 4-tuple ('filename', fileobj, 'content_type', custom_headers), where 'content-type' is a string
    defining the content type of the given file and custom_headers a dict-like object containing additional headers
    to add for the file.
  • auth – (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
  • timeout (float or tuple) – (optional) How many seconds to wait for the server to send data
    before giving up, as a float, or a (connect timeout, read
    timeout)
    tuple.
  • allow_redirects (bool) – (optional) Boolean. Enable/disable GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to True.
  • proxies – (optional) Dictionary mapping protocol to the URL of the proxy.
  • verify – (optional) Either a boolean, in which case it controls whether we verify
    the server’s TLS certificate, or a string, in which case it must be a path
    to a CA bundle to use. Defaults to True.
  • stream – (optional) if False, the response content will be immediately downloaded.
  • cert – (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
Returns:

Response object

Return type:

requests.Response

Usage:

>>> import requests
>>> req = requests.request('GET', 'https://httpbin.org/get')
<Response [200]>
requests.head(url, **kwargs)[source]

Sends a HEAD request.

Parameters:
  • url – URL for the new Request object.
  • **kwargs – Optional arguments that request takes.
Returns:

Response object

Return type:

requests.Response

requests.get(url, params=None, **kwargs)[source]

Sends a GET request.

Parameters:
  • url – URL for the new Request object.
  • params – (optional) Dictionary, list of tuples or bytes to send
    in the body of the Request.
  • **kwargs – Optional arguments that request takes.
Returns:

Response object

Return type:

requests.Response

requests.post(url, data=None, json=None, **kwargs)[source]

Sends a POST request.

Parameters:
  • url – URL for the new Request object.
  • data – (optional) Dictionary, list of tuples, bytes, or file-like
    object to send in the body of the Request.
  • json – (optional) json data to send in the body of the Request.
  • **kwargs – Optional arguments that request takes.
Returns:

Response object

Return type:

requests.Response

requests.put(url, data=None, **kwargs)[source]

Sends a PUT request.

Parameters:
  • url – URL for the new Request object.
  • data – (optional) Dictionary, list of tuples, bytes, or file-like
    object to send in the body of the Request.
  • json – (optional) json data to send in the body of the Request.
  • **kwargs – Optional arguments that request takes.
Returns:

Response object

Return type:

requests.Response

requests.patch(url, data=None, **kwargs)[source]

Sends a PATCH request.

Parameters:
  • url – URL for the new Request object.
  • data – (optional) Dictionary, list of tuples, bytes, or file-like
    object to send in the body of the Request.
  • json – (optional) json data to send in the body of the Request.
  • **kwargs – Optional arguments that request takes.
Returns:

Response object

Return type:

requests.Response

requests.delete(url, **kwargs)[source]

Sends a DELETE request.

Parameters:
  • url – URL for the new Request object.
  • **kwargs – Optional arguments that request takes.
Returns:

Response object

Return type:

requests.Response

Exceptions¶

exception requests.RequestException(*args, **kwargs)[source]

There was an ambiguous exception that occurred while handling your
request.

exception requests.ConnectionError(*args, **kwargs)[source]

A Connection error occurred.

exception requests.HTTPError(*args, **kwargs)[source]

An HTTP error occurred.

exception requests.URLRequired(*args, **kwargs)[source]

A valid URL is required to make a request.

exception requests.TooManyRedirects(*args, **kwargs)[source]

Too many redirects.

exception requests.ConnectTimeout(*args, **kwargs)[source]

The request timed out while trying to connect to the remote server.

Requests that produced this error are safe to retry.

exception requests.ReadTimeout(*args, **kwargs)[source]

The server did not send any data in the allotted amount of time.

exception requests.Timeout(*args, **kwargs)[source]

The request timed out.

Catching this error will catch both
ConnectTimeout and
ReadTimeout errors.

Request Sessions¶

class requests.Session[source]

A Requests session.

Provides cookie persistence, connection-pooling, and configuration.

Basic Usage:

>>> import requests
>>> s = requests.Session()
>>> s.get('https://httpbin.org/get')
<Response [200]>

Or as a context manager:

>>> with requests.Session() as s:
>>>     s.get('https://httpbin.org/get')
<Response [200]>
auth = None

Default Authentication tuple or object to attach to
Request.

cert = None

SSL client certificate default, if String, path to ssl client
cert file (.pem). If Tuple, ('cert', 'key') pair.

close()[source]

Closes all adapters and as such the session

cookies = None

A CookieJar containing all currently outstanding cookies set on this
session. By default it is a
RequestsCookieJar, but
may be any other cookielib.CookieJar compatible object.

delete(url, **kwargs)[source]

Sends a DELETE request. Returns Response object.

Parameters:
  • url – URL for the new Request object.
  • **kwargs – Optional arguments that request takes.
Return type:

requests.Response

get(url, **kwargs)[source]

Sends a GET request. Returns Response object.

Parameters:
  • url – URL for the new Request object.
  • **kwargs – Optional arguments that request takes.
Return type:

requests.Response

get_adapter(url)[source]

Returns the appropriate connection adapter for the given URL.

Return type: requests.adapters.BaseAdapter
get_redirect_target(resp)

Receives a Response. Returns a redirect URI or None

head(url, **kwargs)[source]

Sends a HEAD request. Returns Response object.

Parameters:
  • url – URL for the new Request object.
  • **kwargs – Optional arguments that request takes.
Return type:

requests.Response

headers = None

A case-insensitive dictionary of headers to be sent on each
Request sent from this
Session.

hooks = None

Event-handling hooks.

max_redirects = None

Maximum number of redirects allowed. If the request exceeds this
limit, a TooManyRedirects exception is raised.
This defaults to requests.models.DEFAULT_REDIRECT_LIMIT, which is
30.

merge_environment_settings(url, proxies, stream, verify, cert)[source]

Check the environment and merge it with some settings.

Return type: dict
mount(prefix, adapter)[source]

Registers a connection adapter to a prefix.

Adapters are sorted in descending order by prefix length.

options(url, **kwargs)[source]

Sends an OPTIONS request. Returns Response object.

Parameters:
  • url – URL for the new Request object.
  • **kwargs – Optional arguments that request takes.
Return type:

requests.Response

params = None

Dictionary of querystring data to attach to each
Request. The dictionary values may be lists for
representing multivalued query parameters.

patch(url, data=None, **kwargs)[source]

Sends a PATCH request. Returns Response object.

Parameters:
  • url – URL for the new Request object.
  • data – (optional) Dictionary, list of tuples, bytes, or file-like
    object to send in the body of the Request.
  • **kwargs – Optional arguments that request takes.
Return type:

requests.Response

post(url, data=None, json=None, **kwargs)[source]

Sends a POST request. Returns Response object.

Parameters:
  • url – URL for the new Request object.
  • data – (optional) Dictionary, list of tuples, bytes, or file-like
    object to send in the body of the Request.
  • json – (optional) json to send in the body of the Request.
  • **kwargs – Optional arguments that request takes.
Return type:

requests.Response

prepare_request(request)[source]

Constructs a PreparedRequest for
transmission and returns it. The PreparedRequest has settings
merged from the Request instance and those of the
Session.

Parameters: request – Request instance to prepare with this
session’s settings.
Return type: requests.PreparedRequest
proxies = None

Dictionary mapping protocol or protocol and host to the URL of the proxy
(e.g. {'http': 'foo.bar:3128', 'http://host.name': 'foo.bar:4012'}) to
be used on each Request.

put(url, data=None, **kwargs)[source]

Sends a PUT request. Returns Response object.

Parameters:
  • url – URL for the new Request object.
  • data – (optional) Dictionary, list of tuples, bytes, or file-like
    object to send in the body of the Request.
  • **kwargs – Optional arguments that request takes.
Return type:

requests.Response

rebuild_auth(prepared_request, response)

When being redirected we may want to strip authentication from the
request to avoid leaking credentials. This method intelligently removes
and reapplies authentication where possible to avoid credential loss.

rebuild_method(prepared_request, response)

When being redirected we may want to change the method of the request
based on certain specs or browser behavior.

rebuild_proxies(prepared_request, proxies)

This method re-evaluates the proxy configuration by considering the
environment variables. If we are redirected to a URL covered by
NO_PROXY, we strip the proxy configuration. Otherwise, we set missing
proxy keys for this URL (in case they were stripped by a previous
redirect).

This method also replaces the Proxy-Authorization header where
necessary.

Return type: dict
request(method, url, params=None, data=None, headers=None, cookies=None, files=None, auth=None, timeout=None, allow_redirects=True, proxies=None, hooks=None, stream=None, verify=None, cert=None, json=None)[source]

Constructs a Request, prepares it and sends it.
Returns Response object.

Parameters:
  • method – method for the new Request object.
  • url – URL for the new Request object.
  • params – (optional) Dictionary or bytes to be sent in the query
    string for the Request.
  • data – (optional) Dictionary, list of tuples, bytes, or file-like
    object to send in the body of the Request.
  • json – (optional) json to send in the body of the
    Request.
  • headers – (optional) Dictionary of HTTP Headers to send with the
    Request.
  • cookies – (optional) Dict or CookieJar object to send with the
    Request.
  • files – (optional) Dictionary of 'filename': file-like-objects
    for multipart encoding upload.
  • auth – (optional) Auth tuple or callable to enable
    Basic/Digest/Custom HTTP Auth.
  • timeout (float or tuple) – (optional) How long to wait for the server to send
    data before giving up, as a float, or a (connect timeout,
    read timeout)
    tuple.
  • allow_redirects (bool) – (optional) Set to True by default.
  • proxies – (optional) Dictionary mapping protocol or protocol and
    hostname to the URL of the proxy.
  • stream – (optional) whether to immediately download the response
    content. Defaults to False.
  • verify – (optional) Either a boolean, in which case it controls whether we verify
    the server’s TLS certificate, or a string, in which case it must be a path
    to a CA bundle to use. Defaults to True.
  • cert – (optional) if String, path to ssl client cert file (.pem).
    If Tuple, ('cert', 'key') pair.
Return type:

requests.Response

resolve_redirects(resp, req, stream=False, timeout=None, verify=True, cert=None, proxies=None, yield_requests=False, **adapter_kwargs)

Receives a Response. Returns a generator of Responses or Requests.

send(request, **kwargs)[source]

Send a given PreparedRequest.

Return type: requests.Response
should_strip_auth(old_url, new_url)

Decide whether Authorization header should be removed when redirecting

stream = None

Stream response content default.

trust_env = None

Trust environment settings for proxy configuration, default
authentication and similar.

verify = None

SSL Verification default.

Lower-Level Classes¶

class requests.Request(method=None, url=None, headers=None, files=None, data=None, params=None, auth=None, cookies=None, hooks=None, json=None)[source]

A user-created Request object.

Used to prepare a PreparedRequest, which is sent to the server.

Parameters:
  • method – HTTP method to use.
  • url – URL to send.
  • headers – dictionary of headers to send.
  • files – dictionary of {filename: fileobject} files to multipart upload.
  • data – the body to attach to the request. If a dictionary or
    list of tuples [(key, value)] is provided, form-encoding will
    take place.
  • json – json for the body to attach to the request (if files or data is not specified).
  • params – URL parameters to append to the URL. If a dictionary or
    list of tuples [(key, value)] is provided, form-encoding will
    take place.
  • auth – Auth handler or (user, pass) tuple.
  • cookies – dictionary or CookieJar of cookies to attach to this request.
  • hooks – dictionary of callback hooks, for internal usage.

Usage:

>>> import requests
>>> req = requests.Request('GET', 'https://httpbin.org/get')
>>> req.prepare()
<PreparedRequest [GET]>
deregister_hook(event, hook)

Deregister a previously registered hook.
Returns True if the hook existed, False if not.

prepare()[source]

Constructs a PreparedRequest for transmission and returns it.

register_hook(event, hook)

Properly register a hook.

class requests.Response[source]

The Response object, which contains a
server’s response to an HTTP request.

apparent_encoding

The apparent encoding, provided by the chardet library.

close()[source]

Releases the connection back to the pool. Once this method has been
called the underlying raw object must not be accessed again.

Note: Should not normally need to be called explicitly.

content

Content of the response, in bytes.

cookies = None

A CookieJar of Cookies the server sent back.

elapsed = None

The amount of time elapsed between sending the request
and the arrival of the response (as a timedelta).
This property specifically measures the time taken between sending
the first byte of the request and finishing parsing the headers. It
is therefore unaffected by consuming the response content or the
value of the stream keyword argument.

encoding = None

Encoding to decode with when accessing r.text.

headers = None

Case-insensitive Dictionary of Response Headers.
For example, headers['content-encoding'] will return the
value of a 'Content-Encoding' response header.

history = None

A list of Response objects from
the history of the Request. Any redirect responses will end
up here. The list is sorted from the oldest to the most recent request.

is_permanent_redirect

True if this Response is one of the permanent versions of redirect.

is_redirect

True if this Response is a well-formed HTTP redirect that could have
been processed automatically (by Session.resolve_redirects).

iter_content(chunk_size=1, decode_unicode=False)[source]

Iterates over the response data. When stream=True is set on the
request, this avoids reading the content at once into memory for
large responses. The chunk size is the number of bytes it should
read into memory. This is not necessarily the length of each item
returned as decoding can take place.

chunk_size must be of type int or None. A value of None will
function differently depending on the value of stream.
stream=True will read data as it arrives in whatever size the
chunks are received. If stream=False, data is returned as
a single chunk.

If decode_unicode is True, content will be decoded using the best
available encoding based on the response.

iter_lines(chunk_size=512, decode_unicode=False, delimiter=None)[source]

Iterates over the response data, one line at a time. When
stream=True is set on the request, this avoids reading the
content at once into memory for large responses.

Note

This method is not reentrant safe.

json(**kwargs)[source]

Returns the json-encoded content of a response, if any.

Parameters: **kwargs – Optional arguments that json.loads takes.
Raises: ValueError – If the response body does not contain valid json.
links

Returns the parsed header links of the response, if any.

next

Returns a PreparedRequest for the next request in a redirect chain, if there is one.

ok

Returns True if status_code is less than 400, False if not.

This attribute checks if the status code of the response is between
400 and 600 to see if there was a client error or a server error. If
the status code is between 200 and 400, this will return True. This
is not a check to see if the response code is 200 OK.

raise_for_status()[source]

Raises stored HTTPError, if one occurred.

reason = None

Textual reason of responded HTTP Status, e.g. “Not Found” or “OK”.

request = None

The PreparedRequest object to which this
is a response.

status_code = None

Integer Code of responded HTTP Status, e.g. 404 or 200.

text

Content of the response, in unicode.

If Response.encoding is None, encoding will be guessed using
chardet.

The encoding of the response content is determined based solely on HTTP
headers, following RFC 2616 to the letter. If you can take advantage of
non-HTTP knowledge to make a better guess at the encoding, you should
set r.encoding appropriately before accessing this property.

url = None

Final URL location of Response.

Lower-Lower-Level Classes¶

class requests.PreparedRequest[source]

The fully mutable PreparedRequest object,
containing the exact bytes that will be sent to the server.

Generated from either a Request object or manually.

Usage:

>>> import requests
>>> req = requests.Request('GET', 'https://httpbin.org/get')
>>> r = req.prepare()
<PreparedRequest [GET]>

>>> s = requests.Session()
>>> s.send(r)
<Response [200]>
body = None

request body to send to the server.

deregister_hook(event, hook)

Deregister a previously registered hook.
Returns True if the hook existed, False if not.

headers = None

dictionary of HTTP headers.

hooks = None

dictionary of callback hooks, for internal usage.

method = None

HTTP verb to send to the server.

path_url

Build the path URL to use.

prepare(method=None, url=None, headers=None, files=None, data=None, params=None, auth=None, cookies=None, hooks=None, json=None)[source]

Prepares the entire request with the given parameters.

prepare_auth(auth, url='')[source]

Prepares the given HTTP auth data.

prepare_body(data, files, json=None)[source]

Prepares the given HTTP body data.

prepare_content_length(body)[source]

Prepare Content-Length header based on request method and body

prepare_cookies(cookies)[source]

Prepares the given HTTP cookie data.

This function eventually generates a Cookie header from the
given cookies using cookielib. Due to cookielib’s design, the header
will not be regenerated if it already exists, meaning this function
can only be called once for the life of the
PreparedRequest object. Any subsequent calls
to prepare_cookies will have no actual effect, unless the “Cookie”
header is removed beforehand.

prepare_headers(headers)[source]

Prepares the given HTTP headers.

prepare_hooks(hooks)[source]

Prepares the given hooks.

prepare_method(method)[source]

Prepares the given HTTP method.

prepare_url(url, params)[source]

Prepares the given HTTP URL.

register_hook(event, hook)

Properly register a hook.

url = None

HTTP URL to send the request to.

class requests.adapters.BaseAdapter[source]

The Base Transport Adapter

close()[source]

Cleans up adapter specific items.

send(request, stream=False, timeout=None, verify=True, cert=None, proxies=None)[source]

Sends PreparedRequest object. Returns Response object.

Parameters:
  • request – The PreparedRequest being sent.
  • stream – (optional) Whether to stream the request content.
  • timeout (float or tuple) – (optional) How long to wait for the server to send
    data before giving up, as a float, or a (connect timeout,
    read timeout)
    tuple.
  • verify – (optional) Either a boolean, in which case it controls whether we verify
    the server’s TLS certificate, or a string, in which case it must be a path
    to a CA bundle to use
  • cert – (optional) Any user-provided SSL certificate to be trusted.
  • proxies – (optional) The proxies dictionary to apply to the request.
class requests.adapters.HTTPAdapter(pool_connections=10, pool_maxsize=10, max_retries=0, pool_block=False)[source]

The built-in HTTP Adapter for urllib3.

Provides a general-case interface for Requests sessions to contact HTTP and
HTTPS urls by implementing the Transport Adapter interface. This class will
usually be created by the Session class under the
covers.

Parameters:
  • pool_connections – The number of urllib3 connection pools to cache.
  • pool_maxsize – The maximum number of connections to save in the pool.
  • max_retries – The maximum number of retries each connection
    should attempt. Note, this applies only to failed DNS lookups, socket
    connections and connection timeouts, never to requests where data has
    made it to the server. By default, Requests does not retry failed
    connections. If you need granular control over the conditions under
    which we retry a request, import urllib3’s Retry class and pass
    that instead.
  • pool_block – Whether the connection pool should block for connections.

Usage:

>>> import requests
>>> s = requests.Session()
>>> a = requests.adapters.HTTPAdapter(max_retries=3)
>>> s.mount('http://', a)

add_headers(request, **kwargs)[source]

Add any headers needed by the connection. As of v2.0 this does
nothing by default, but is left for overriding by users that subclass
the HTTPAdapter.

This should not be called from user code, and is only exposed for use
when subclassing the
HTTPAdapter.

Parameters:
  • request – The PreparedRequest to add headers to.
  • kwargs – The keyword arguments from the call to send().
build_response(req, resp)[source]

Builds a Response object from a urllib3
response. This should not be called from user code, and is only exposed
for use when subclassing the
HTTPAdapter

Parameters:
  • req – The PreparedRequest used to generate the response.
  • resp – The urllib3 response object.
Return type:

requests.Response

cert_verify(conn, url, verify, cert)[source]

Verify a SSL certificate. This method should not be called from user
code, and is only exposed for use when subclassing the
HTTPAdapter.

Parameters:
  • conn – The urllib3 connection object associated with the cert.
  • url – The requested URL.
  • verify – Either a boolean, in which case it controls whether we verify
    the server’s TLS certificate, or a string, in which case it must be a path
    to a CA bundle to use
  • cert – The SSL certificate to verify.
close()[source]

Disposes of any internal state.

Currently, this closes the PoolManager and any active ProxyManager,
which closes any pooled connections.

get_connection(url, proxies=None)[source]

Returns a urllib3 connection for the given URL. This should not be
called from user code, and is only exposed for use when subclassing the
HTTPAdapter.

Parameters:
  • url – The URL to connect to.
  • proxies – (optional) A Requests-style dictionary of proxies used on this request.
Return type:

urllib3.ConnectionPool

init_poolmanager(connections, maxsize, block=False, **pool_kwargs)[source]

Initializes a urllib3 PoolManager.

This method should not be called from user code, and is only
exposed for use when subclassing the
HTTPAdapter.

Parameters:
  • connections – The number of urllib3 connection pools to cache.
  • maxsize – The maximum number of connections to save in the pool.
  • block – Block when no free connections are available.
  • pool_kwargs – Extra keyword arguments used to initialize the Pool Manager.

proxy_headers(proxy)[source]

Returns a dictionary of the headers to add to any request sent
through a proxy. This works with urllib3 magic to ensure that they are
correctly sent to the proxy, rather than in a tunnelled request if
CONNECT is being used.

This should not be called from user code, and is only exposed for use
when subclassing the
HTTPAdapter.

Parameters: proxy – The url of the proxy being used for this request.
Return type: dict
proxy_manager_for(proxy, **proxy_kwargs)[source]

Return urllib3 ProxyManager for the given proxy.

This method should not be called from user code, and is only
exposed for use when subclassing the
HTTPAdapter.

Parameters:
  • proxy – The proxy to return a urllib3 ProxyManager for.
  • proxy_kwargs – Extra keyword arguments used to configure the Proxy Manager.
Returns:

ProxyManager

Return type:

urllib3.ProxyManager

request_url(request, proxies)[source]

Obtain the url to use when making the final request.

If the message is being sent through a HTTP proxy, the full URL has to
be used. Otherwise, we should only use the path portion of the URL.

This should not be called from user code, and is only exposed for use
when subclassing the
HTTPAdapter.

Parameters:
  • request – The PreparedRequest being sent.
  • proxies – A dictionary of schemes or schemes and hosts to proxy URLs.
Return type:

str

send(request, stream=False, timeout=None, verify=True, cert=None, proxies=None)[source]

Sends PreparedRequest object. Returns Response object.

Parameters:
  • request – The PreparedRequest being sent.
  • stream – (optional) Whether to stream the request content.
  • timeout (float or tuple or urllib3 Timeout object) – (optional) How long to wait for the server to send
    data before giving up, as a float, or a (connect timeout,
    read timeout)
    tuple.
  • verify – (optional) Either a boolean, in which case it controls whether
    we verify the server’s TLS certificate, or a string, in which case it
    must be a path to a CA bundle to use
  • cert – (optional) Any user-provided SSL certificate to be trusted.
  • proxies – (optional) The proxies dictionary to apply to the request.
Return type:

requests.Response

Authentication¶

class requests.auth.AuthBase[source]

Base class that all auth implementations derive from

class requests.auth.HTTPBasicAuth(username, password)[source]

Attaches HTTP Basic Authentication to the given Request object.

class requests.auth.HTTPDigestAuth(username, password)[source]

Attaches HTTP Digest Authentication to the given Request object.

Encodings¶

requests.utils.get_encodings_from_content(content)[source]

Returns encodings from given content string.

Parameters: content – bytestring to extract encodings from.

Returns encodings from given HTTP Header Dict.

Parameters: headers – dictionary to extract encoding from.
Return type: str
requests.utils.get_unicode_from_response(r)[source]

Returns the requested content back in unicode.

Parameters: r – Response object to get unicode content from.

Tried:

  1. charset from content-type
  2. fall back and replace all unicode characters
Return type: str

Cookies¶

requests.utils.dict_from_cookiejar(cj)[source]

Returns a key/value dictionary from a CookieJar.

Parameters: cj – CookieJar object to extract cookies from.
Return type: dict
requests.utils.add_dict_to_cookiejar(cj, cookie_dict)[source]

Returns a CookieJar from a key/value dictionary.

Parameters:
  • cj – CookieJar to insert cookies into.
  • cookie_dict – Dict of key/values to insert into CookieJar.
Return type:

CookieJar

requests.cookies.cookiejar_from_dict(cookie_dict, cookiejar=None, overwrite=True)[source]

Returns a CookieJar from a key/value dictionary.

Parameters:
  • cookie_dict – Dict of key/values to insert into CookieJar.
  • cookiejar – (optional) A cookiejar to add the cookies to.
  • overwrite – (optional) If False, will not replace cookies
    already in the jar with new ones.
Return type:

CookieJar

class requests.cookies.RequestsCookieJar(policy=None)[source]

Compatibility class; is a cookielib.CookieJar, but exposes a dict
interface.

This is the CookieJar we create by default for requests and sessions that
don’t specify one, since some clients may expect response.cookies and
session.cookies to support dict operations.

Requests does not use the dict interface internally; it’s just for
compatibility with external client code. All requests code should work
out of the box with externally provided instances of CookieJar, e.g.
LWPCookieJar and FileCookieJar.

Unlike a regular CookieJar, this class is pickleable.

Warning

dictionary operations that are normally O(1) may be O(n).

add_cookie_header(request)

Add correct Cookie: header to request (urllib.request.Request object).

The Cookie2 header is also added unless policy.hide_cookie2 is true.

clear(domain=None, path=None, name=None)

Clear some cookies.

Invoking this method without arguments will clear all cookies. If
given a single argument, only cookies belonging to that domain will be
removed. If given two arguments, cookies belonging to the specified
path within that domain are removed. If given three arguments, then
the cookie with the specified name, path and domain is removed.

Raises KeyError if no matching cookie exists.

clear_expired_cookies()

Discard all expired cookies.

You probably don’t need to call this method: expired cookies are never
sent back to the server (provided you’re using DefaultCookiePolicy),
this method is called by CookieJar itself every so often, and the
.save() method won’t save expired cookies anyway (unless you ask
otherwise by passing a true ignore_expires argument).

clear_session_cookies()

Discard all session cookies.

Note that the .save() method won’t save session cookies anyway, unless
you ask otherwise by passing a true ignore_discard argument.

copy()[source]

Return a copy of this RequestsCookieJar.

extract_cookies(response, request)

Extract cookies from response, where allowable given the request.

get(name, default=None, domain=None, path=None)[source]

Dict-like get() that also supports optional domain and path args in
order to resolve naming collisions from using one cookie jar over
multiple domains.

Warning

operation is O(n), not O(1).

get_dict(domain=None, path=None)[source]

Takes as an argument an optional domain and path and returns a plain
old Python dict of name-value pairs of cookies that meet the
requirements.

Return type: dict
get_policy()[source]

Return the CookiePolicy instance used.

items()[source]

Dict-like items() that returns a list of name-value tuples from the
jar. Allows client-code to call dict(RequestsCookieJar) and get a
vanilla python dict of key value pairs.

See also

keys() and values().

iteritems()[source]

Dict-like iteritems() that returns an iterator of name-value tuples
from the jar.

See also

iterkeys() and itervalues().

iterkeys()[source]

Dict-like iterkeys() that returns an iterator of names of cookies
from the jar.

See also

itervalues() and iteritems().

itervalues()[source]

Dict-like itervalues() that returns an iterator of values of cookies
from the jar.

See also

iterkeys() and iteritems().

keys()[source]

Dict-like keys() that returns a list of names of cookies from the
jar.

See also

values() and items().

list_domains()[source]

Utility method to list all the domains in the jar.

list_paths()[source]

Utility method to list all the paths in the jar.

make_cookies(response, request)

Return sequence of Cookie objects extracted from response object.

multiple_domains()[source]

Returns True if there are multiple domains in the jar.
Returns False otherwise.

Return type: bool
pop(k[, d]) → v, remove specified key and return the corresponding value.¶

If key is not found, d is returned if given, otherwise KeyError is raised.

popitem() → (k, v), remove and return some (key, value) pair¶

as a 2-tuple; but raise KeyError if D is empty.

set(name, value, **kwargs)[source]

Dict-like set() that also supports optional domain and path args in
order to resolve naming collisions from using one cookie jar over
multiple domains.

set_cookie(cookie, *args, **kwargs)[source]

Set a cookie, without checking whether or not it should be set.

set_cookie_if_ok(cookie, request)

Set a cookie if policy says it’s OK to do so.

setdefault(k[, d]) → D.get(k,d), also set D[k]=d if k not in D¶

update(other)[source]

Updates this jar with cookies from another CookieJar or dict-like

values()[source]

Dict-like values() that returns a list of values of cookies from the
jar.

See also

keys() and items().

class requests.cookies.CookieConflictError[source]

There are two cookies that meet the criteria specified in the cookie jar.
Use .get and .set and include domain and path args in order to be more specific.

with_traceback()

Exception.with_traceback(tb) –
set self.__traceback__ to tb and return self.

Status Code Lookup¶

requests.codes

The codes object defines a mapping from common names for HTTP statuses
to their numerical codes, accessible either as attributes or as dictionary
items.

>>> requests.codes['temporary_redirect']
307
>>> requests.codes.teapot
418
>>> requests.codes['\o/']
200

Some codes have multiple names, and both upper- and lower-case versions of
the names are allowed. For example, codes.ok, codes.OK, and
codes.okay all correspond to the HTTP status code 200.

  • 100: continue
  • 101: switching_protocols
  • 102: processing
  • 103: checkpoint
  • 122: uri_too_long, request_uri_too_long
  • 200: ok, okay, all_ok, all_okay, all_good, \o/
  • 201: created
  • 202: accepted
  • 203: non_authoritative_info, non_authoritative_information
  • 204: no_content
  • 205: reset_content, reset
  • 206: partial_content, partial
  • 207: multi_status, multiple_status, multi_stati, multiple_stati
  • 208: already_reported
  • 226: im_used
  • 300: multiple_choices
  • 301: moved_permanently, moved, \o-
  • 302: found
  • 303: see_other, other
  • 304: not_modified
  • 305: use_proxy
  • 306: switch_proxy
  • 307: temporary_redirect, temporary_moved, temporary
  • 308: permanent_redirect, resume_incomplete, resume
  • 400: bad_request, bad
  • 401: unauthorized
  • 402: payment_required, payment
  • 403: forbidden
  • 404: not_found, -o-
  • 405: method_not_allowed, not_allowed
  • 406: not_acceptable
  • 407: proxy_authentication_required, proxy_auth, proxy_authentication
  • 408: request_timeout, timeout
  • 409: conflict
  • 410: gone
  • 411: length_required
  • 412: precondition_failed, precondition
  • 413: request_entity_too_large
  • 414: request_uri_too_large
  • 415: unsupported_media_type, unsupported_media, media_type
  • 416: requested_range_not_satisfiable, requested_range, range_not_satisfiable
  • 417: expectation_failed
  • 418: im_a_teapot, teapot, i_am_a_teapot
  • 421: misdirected_request
  • 422: unprocessable_entity, unprocessable
  • 423: locked
  • 424: failed_dependency, dependency
  • 425: unordered_collection, unordered
  • 426: upgrade_required, upgrade
  • 428: precondition_required, precondition
  • 429: too_many_requests, too_many
  • 431: header_fields_too_large, fields_too_large
  • 444: no_response, none
  • 449: retry_with, retry
  • 450: blocked_by_windows_parental_controls, parental_controls
  • 451: unavailable_for_legal_reasons, legal_reasons
  • 499: client_closed_request
  • 500: internal_server_error, server_error, /o\
  • 501: not_implemented
  • 502: bad_gateway
  • 503: service_unavailable, unavailable
  • 504: gateway_timeout
  • 505: http_version_not_supported, http_version
  • 506: variant_also_negotiates
  • 507: insufficient_storage
  • 509: bandwidth_limit_exceeded, bandwidth
  • 510: not_extended
  • 511: network_authentication_required, network_auth, network_authentication

Migrating to 1.x¶

This section details the main differences between 0.x and 1.x and is meant
to ease the pain of upgrading.

API Changes¶

  • Response.json is now a callable and not a property of a response.

    import requests
    r = requests.get('https://github.com/timeline.json')
    r.json()   # This *call* raises an exception if JSON decoding fails
    
  • The Session API has changed. Sessions objects no longer take parameters.
    Session is also now capitalized, but it can still be
    instantiated with a lowercase session for backwards compatibility.

    s = requests.Session()    # formerly, session took parameters
    s.auth = auth
    s.headers.update(headers)
    r = s.get('https://httpbin.org/headers')
    
  • All request hooks have been removed except ‘response’.

  • Authentication helpers have been broken out into separate modules. See
    requests-oauthlib and requests-kerberos.

  • The parameter for streaming requests was changed from prefetch to
    stream and the logic was inverted. In addition, stream is now
    required for raw response reading.

    # in 0.x, passing prefetch=False would accomplish the same thing
    r = requests.get('https://github.com/timeline.json', stream=True)
    for chunk in r.iter_content(8192):
        ...
    
  • The config parameter to the requests method has been removed. Some of
    these options are now configured on a Session such as keep-alive and
    maximum number of redirects. The verbosity option should be handled by
    configuring logging.

    import requests
    import logging
    
    # Enabling debugging at http.client level (requests->urllib3->http.client)
    # you will see the REQUEST, including HEADERS and DATA, and RESPONSE with HEADERS but without DATA.
    # the only thing missing will be the response.body which is not logged.
    try: # for Python 3
        from http.client import HTTPConnection
    except ImportError:
        from httplib import HTTPConnection
    HTTPConnection.debuglevel = 1
    
    logging.basicConfig() # you need to initialize logging, otherwise you will not see anything from requests
    logging.getLogger().setLevel(logging.DEBUG)
    requests_log = logging.getLogger("urllib3")
    requests_log.setLevel(logging.DEBUG)
    requests_log.propagate = True
    
    requests.get('https://httpbin.org/headers')
    

Licensing¶

One key difference that has nothing to do with the API is a change in the
license from the ISC license to the Apache 2.0 license. The Apache 2.0
license ensures that contributions to Requests are also covered by the Apache
2.0 license.

Migrating to 2.x¶

Compared with the 1.0 release, there were relatively few backwards
incompatible changes, but there are still a few issues to be aware of with
this major release.

For more details on the changes in this release including new APIs, links
to the relevant GitHub issues and some of the bug fixes, read Cory’s blog
on the subject.

API Changes¶

  • There were a couple changes to how Requests handles exceptions.
    RequestException is now a subclass of IOError rather than
    RuntimeError as that more accurately categorizes the type of error.
    In addition, an invalid URL escape sequence now raises a subclass of
    RequestException rather than a ValueError.

    requests.get('http://%zz/')   # raises requests.exceptions.InvalidURL
    

    Lastly, httplib.IncompleteRead exceptions caused by incorrect chunked
    encoding will now raise a Requests ChunkedEncodingError instead.

  • The proxy API has changed slightly. The scheme for a proxy URL is now
    required.

    proxies = {
      "http": "10.10.1.10:3128",    # use http://10.10.1.10:3128 instead
    }
    
    # In requests 1.x, this was legal, in requests 2.x,
    #  this raises requests.exceptions.MissingScheme
    requests.get("http://example.org", proxies=proxies)
    

Behavioural Changes¶

  • Keys in the headers dictionary are now native strings on all Python
    versions, i.e. bytestrings on Python 2 and unicode on Python 3. If the
    keys are not native strings (unicode on Python 2 or bytestrings on Python 3)
    they will be converted to the native string type assuming UTF-8 encoding.
  • Values in the headers dictionary should always be strings. This has
    been the project’s position since before 1.0 but a recent change
    (since version 2.11.0) enforces this more strictly. It’s advised to avoid
    passing header values as unicode when possible.

I often have this error, but script works :

GET http://x.x.x.x:9200/_nodes/_all/http [status:N/A request:2.992s]
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 171, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/usr/lib/python3.6/site-packages/urllib3/util/connection.py", line 79, in create_connection
raise err
File "/usr/lib/python3.6/site-packages/urllib3/util/connection.py", line 69, in create_connection
sock.connect(sa)
OSError: [Errno 113] No route to host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/elasticsearch/connection/http_urllib3.py", line 172, in perform_request
response = self.pool.urlopen(method, url, body, retries=Retry(False), headers=request_headers, **kw)
File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3.6/site-packages/urllib3/util/retry.py", line 343, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/lib/python3.6/site-packages/urllib3/packages/six.py", line 686, in reraise
raise value
File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 354, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib64/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib64/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib64/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib64/python3.6/http/client.py", line 1026, in _send_output
self.send(msg)
File "/usr/lib64/python3.6/http/client.py", line 964, in send
self.connect()
File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 196, in connect
conn = self._new_conn()
File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 180, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f9e5db5d208>: Failed to establish a new connection: [Errno 113] No route to host
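
Here the elasticsearch client is probing cluster nodes over HTTP; one node is unreachable ([Errno 113] No route to host), so the client logs the failed attempt and, most likely, falls back to another node, which would explain why the script keeps working. A minimal sketch of making that fallback explicit (the host addresses and retry settings are illustrative, not taken from the log above):

from elasticsearch import Elasticsearch

# Give the client several nodes; a failed connection attempt is
# retried against the remaining ones.
es = Elasticsearch(
    ['http://10.0.0.1:9200', 'http://10.0.0.2:9200'],  # hypothetical hosts
    max_retries=3,
    retry_on_timeout=True,
)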

December 24, 2015, Python

The Python standard library ships with several ready-made modules for working with HTTP:

  • urllib
  • httplib

If you really crave hardcore, you can work with socket directly. But all of these modules share one big drawback: they are inconvenient to work with.

First, they offer a bewildering abundance of classes and functions. Second, the resulting code is not pythonic at all. Many programmers love Python for its elegance and simplicity, which is why a module was created to solve the problems of the existing ones: requests, also known as HTTP For Humans. At the time of writing, the latest version of the library is 2.9.1. Since the release of Python 3.5 I have quietly promised myself to write new code only for Py >= 3.5. It is high time to move fully to the 3.x branch of the snake, so in my examples print is from now on a function, not a statement :-)

So what can requests do?

To start, I want to show what HTTP code looks like when written with the standard library modules versus with requests. The very convenient httpbin.org service will serve as the target we fire our HTTP requests at.


>>> import urllib.request
>>> response = urllib.request.urlopen('https://httpbin.org/get')
>>> print(response.read())
b'{n  "args": {}, n  "headers": {n    "Accept-Encoding": "identity", n    "Host": "httpbin.org", n    "User-Agent": "Python-urllib/3.5"n  }, n  "origin": "95.56.82.136", n  "url": "https://httpbin.org/get"n}n'
>>> print(response.getheader('Server'))
nginx
>>> print(response.getcode())
200
>>> 

By the way, urllib.request is a wrapper over the "low-level" httplib library I mentioned above.

>>> import requests
>>> response = requests.get('https://httpbin.org/get')
>>> print(response.content)
b'{n  "args": {}, n  "headers": {n    "Accept": "*/*", n    "Accept-Encoding": "gzip, deflate", n    "Host": "httpbin.org", n    "User-Agent": "python-requests/2.9.1"n  }, n  "origin": "95.56.82.136", n  "url": "https://httpbin.org/get"n}n'
>>> response.json()
{'headers': {'Accept-Encoding': 'gzip, deflate', 'User-Agent': 'python-requests/2.9.1', 'Host': 'httpbin.org', 'Accept': '*/*'}, 'args': {}, 'origin': '95.56.82.136', 'url': 'https://httpbin.org/get'}
>>> response.headers
{'Connection': 'keep-alive', 'Content-Type': 'application/json', 'Server': 'nginx', 'Access-Control-Allow-Credentials': 'true', 'Access-Control-Allow-Origin': '*', 'Content-Length': '237', 'Date': 'Wed, 23 Dec 2015 17:56:46 GMT'}
>>> response.headers.get('Server')
'nginx'

For simple requests there is no significant difference between them. But let's take a look at working with Basic Auth:


>>> import urllib.request
>>> password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
>>> top_level_url = 'https://httpbin.org/basic-auth/user/passwd'
>>> password_mgr.add_password(None, top_level_url, 'user', 'passwd')
>>> handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
>>> opener = urllib.request.build_opener(handler)
>>> response = opener.open(top_level_url)
>>> response.getcode()
200
>>> response.read()
b'{n  "authenticated": true, n  "user": "user"n}n'

>>> import requests
>>> response = requests.get('https://httpbin.org/basic-auth/user/passwd', auth=('user', 'passwd'))
>>> print(response.content)
b'{n  "authenticated": true, n  "user": "user"n}n'
>>> print(response.json())
{'user': 'user', 'authenticated': True}

Now do you feel the difference between pythonic and non-pythonic? I think the difference is plain to see. And even though requests is nothing more than a wrapper around urllib3, which in turn builds on the standard Python facilities, convenience of writing code is in most cases priority number one.

requests offers:

  • Numerous HTTP authentication methods
  • Sessions with cookies (see the sketch after this list)
  • Full SSL support
  • Convenience helpers such as .json() that return data in the desired format
  • Proxy support
  • Sensible and logical exception handling
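
A minimal sketch of sessions, cookies and proxies working together (the proxy address is a placeholder, not a working server):

>>> import requests
>>> session = requests.Session()
>>> session.get('https://httpbin.org/cookies/set/token/abc')
<Response [200]>
>>> session.cookies.get('token')
'abc'
>>> # proxies are set once per session; 10.10.1.10:3128 is a placeholder
>>> session.proxies = {'http': 'http://10.10.1.10:3128'}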

I'd like to discuss the last point in a little more detail.

Exception handling in requests

When working with external services, never rely on their fault tolerance. Everything goes down sooner or later, so we programmers have to be prepared for it at all times, preferably in advance and in a calm setting.

So how does requests cope with the various failures that happen during network connections? First, let's define the range of problems that can arise:

  • Host unavailable. This kind of error usually stems from DNS configuration problems (DNS lookup failure)
  • The connection timing out
  • HTTP errors. More on HTTP status codes can be found here.
  • SSL connection errors (usually when something is wrong with the SSL certificate: expired, not trusted, and so on)

The base exception class in requests is RequestException. All the others inherit from it:

  • HTTPError
  • ConnectionError
  • Timeout
  • SSLError
  • ProxyError

And so on. The full list of exceptions can be found in requests.exceptions.

Timeout

requests has two kinds of timeout exceptions:

  • ConnectTimeout, raised when the connection cannot be established in time
  • ReadTimeout, raised when the server does not send data within the allotted time

>>> import requests
>>> try:
...     response = requests.get('https://httpbin.org/user-agent', timeout=(0.00001, 10))
... except requests.exceptions.ConnectTimeout:
...     print('Oops. Connection timeout occurred!')
...     
Oops. Connection timeout occurred!
>>> try:
...     response = requests.get('https://httpbin.org/user-agent', timeout=(10, 0.0001))
... except requests.exceptions.ReadTimeout:
...     print('Oops. Read timeout occurred')
... except requests.exceptions.ConnectTimeout:
...     print('Oops. Connection timeout occurred!')
...     
Oops. Read timeout occurred

ConnectionError


>>> import requests
>>> try:
...     response = requests.get('http://urldoesnotexistforsure.bom')
... except requests.exceptions.ConnectionError:
...     print('Seems like dns lookup failed..')
...     
Seems like dns lookup failed..
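
One subtlety worth knowing (true at least for the 2.x line of requests): ConnectTimeout inherits from both ConnectionError and Timeout, so an except ConnectionError clause listed first would also swallow connect timeouts. Catch the more specific class first when you need to tell them apart:

>>> import requests
>>> try:
...     response = requests.get('https://httpbin.org/get', timeout=(0.00001, 10))
... except requests.exceptions.ConnectTimeout:
...     print('Connect timeout')
... except requests.exceptions.ConnectionError:
...     print('Some other connection problem')
...     
Connect timeout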

HTTPError


>>> import requests
>>> try:
...     response = requests.get('https://httpbin.org/status/500')
...     response.raise_for_status()
... except requests.exceptions.HTTPError as err:
...     print('Oops. HTTP Error occurred')
...     print('Response is: {content}'.format(content=err.response.content))
...     
Oops. HTTP Error occurred
Response is: b''
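
SSLError

The examples above do not cover SSLError, so here is a minimal sketch. It assumes the public test host expired.badssl.com, which deliberately serves an expired certificate, is still reachable:

>>> import requests
>>> try:
...     response = requests.get('https://expired.badssl.com/')
... except requests.exceptions.SSLError:
...     print('Oops. SSL certificate verification failed')
...     
Oops. SSL certificate verification failed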

I have listed the main exception types, which cover perhaps 90% of all the problems that arise when working with HTTP. The main thing to remember: if we really intend to catch something and handle it, that has to be programmed explicitly; if the specific exception type does not matter, we can catch the base RequestException class and then act on the concrete case, for example log the exception and re-raise it up the stack. By the way, I will write a separate, detailed post about logging.
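
A minimal sketch of that catch-all pattern (the logger name and the decision to re-raise are illustrative choices, not requirements):

>>> import logging
>>> import requests
>>> logger = logging.getLogger('http')
>>> try:
...     response = requests.get('https://httpbin.org/get', timeout=5)
... except requests.exceptions.RequestException:
...     logger.exception('HTTP request failed')  # logs the full traceback
...     raise  # re-raise so the caller can decide what to do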


Useful goodies

  • httpbin.org, a very handy service for testing HTTP clients, particularly convenient for exercising non-standard server behaviour
  • httpie, a console HTTP client (a curl replacement) written in Python
  • responses, a mock library for working with requests
  • HTTPretty, a mock library for HTTP modules in general


Error Messages:

Traceback (most recent call last):
File "/usr/local/python3/lib/python3.7/site-packages/ddt.py", line 192, in wrapper
return func(self, *args, **kwargs)
File "/usr/hxy/auto-test/interface/test_start.py", line 49, in test
result = RequestsHandle().httpRequest(method, reparam.uri, data=reparam.data, headers=reparam.headers)
File "/usr/hxy/auto-test/common/request_handle.py", line 32, in httpRequest
headers=headers, verify=False, proxies=proxies)
File "/usr/local/python3/lib/python3.7/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/python3/lib/python3.7/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/local/python3/lib/python3.7/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='sync-test.helianhealth.com', port=443): Max retries exceeded with url: /sync-channel/channel/admin/hsp/template/isOnline (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fd369b643c8>: Failed to establish a new connection: [Errno -2] Name or service not known'))

I don't get the error locally, but when I deploy the project on a Linux server, the error appears.
This is because other technicians use the server besides me, and the installed version of requests is outdated.
Solution: update requests with the command: pip install -U requests

If the following error occurs:

ERROR: Cannot uninstall 'requests'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.

A package installed with distutils has to be uninstalled with distutils. Unfortunately, distutils does not include an uninstall command, so "uninstalling with distutils" means removing the package manually:

cd /usr/lib/python2.7/site-packages/
mkdir /opt/pylib_backup/
mv requests* /opt/pylib_backup/
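
Alternatively, pip can be told to skip the uninstall step and install a fresh copy over the old one; note that this leaves the stale distutils-installed files orphaned rather than cleaning them up:

pip install --ignore-installed requests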

pip list then confirms that the requests package has been removed:

[[email protected]_server site-packages]# pip list |grep request
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality.
[[email protected]_server site-packages]# 


The following are code examples of urllib3.exceptions.NewConnectionError(), drawn from open-source projects. You may also want to check out the other functions and classes of the urllib3.exceptions module.

Example #1

def post(self, url: str, data: Dict[str, str]) -> Response:
        """Perform HTTP POST request.

        :param url: the request url
        :param data: the data send to server
        :return: the response from server
        :raise: :exc:`ConnectionError <stellar_sdk.exceptions.ConnectionError>`
        """
        try:
            resp = requests.post(url=url, data=data, headers=HEADERS)
        except (RequestException, NewConnectionError) as err:
            raise ConnectionError(err)
        return Response(
            status_code=resp.status_code,
            text=resp.text,
            headers=dict(resp.headers),
            url=resp.url,
        ) 

Example #2

def get_info(self):
        if not self.is_checked:
            try:
                r = requests.get(str(self), timeout=(10, 3))
                if r.status_code == 200:
                    self.server = r.headers['Server']
                elif r.status_code >= 400:
                    raise TargetNotExistException(self.url)
            except requests.exceptions.ReadTimeout as rt:
                logger.exception(rt)

            try:
                url = re.compile(r"https?://(www.)?")
                self.url_c = url.sub('', self.url).strip().strip('/')
                self.ip = socket.gethostbyname(self.url_c)
            except socket.gaierror as err:
                logger.exception(err)
            except NewConnectionError:
                raise TargetNotExistException(self.url)

            self.is_checked = True 

Example #3

def connect(self):
        logger.debug('[MyHTTPConnection] connect %s:%i', self.host, self.port)
        try:
            self._tor_stream = self._circuit.create_stream((self.host, self.port))
            logger.debug('[MyHTTPConnection] tor_stream create_socket')
            self.sock = self._tor_stream.create_socket()
            if self._tunnel_host:
                self._tunnel()
        except TimeoutError:
            logger.error('TimeoutError')
            raise ConnectTimeoutError(
                self, 'Connection to %s timed out. (connect timeout=%s)' % (self.host, self.timeout)
            )
        except Exception as e:
            logger.error('NewConnectionError')
            raise NewConnectionError(self, 'Failed to establish a new connection: %s' % e) 

Example #4

def get_resource(self, resource, namespace="all"):
        ret, resources = None, list()
        try:
            ret, namespaced_resource = self._call_api_client(resource)
        except ApiException as ae:
            self.logger.warning("resource autocomplete disabled, encountered "
                                "ApiException", exc_info=1)
        except (NewConnectionError, MaxRetryError, ConnectTimeoutError):
            self.logger.warning("unable to connect to k8 cluster", exc_info=1)
        if ret:
            for i in ret.items:
                if namespace == "all" or not namespaced_resource:
                    resources.append((i.metadata.name, i.metadata.namespace))
                elif namespace == i.metadata.namespace:
                    resources.append((i.metadata.name, i.metadata.namespace))
        return resources 

Example #5

def get(self, url: str, params: Dict[str, str] = None) -> Response:
        """Perform HTTP GET request.

        :param url: the request url
        :param params: the request params
        :return: the response from server
        :raise: :exc:`ConnectionError <stellar_sdk.exceptions.ConnectionError>`
        """
        try:
            resp = requests.get(url=url, params=params, headers=HEADERS)
        except (RequestException, NewConnectionError) as err:
            raise ConnectionError(err)
        return Response(
            status_code=resp.status_code,
            text=resp.text,
            headers=dict(resp.headers),
            url=resp.url,
        ) 

Example #6

def get(self, url: str, params: Dict[str, str] = None) -> Response:
        """Perform HTTP GET request.

        :param url: the request url
        :param params: the request params
        :return: the response from server
        :raise: :exc:`ConnectionError <stellar_sdk.exceptions.ConnectionError>`
        """
        try:
            resp = self._session.get(url, params=params, timeout=self.request_timeout)
        except (RequestException, NewConnectionError) as err:
            raise ConnectionError(err)
        return Response(
            status_code=resp.status_code,
            text=resp.text,
            headers=dict(resp.headers),
            url=resp.url,
        ) 

Example #7

def post(self, url: str, data: Dict[str, str] = None) -> Response:
        """Perform HTTP POST request.

        :param url: the request url
        :param data: the data send to server
        :return: the response from server
        :raise: :exc:`ConnectionError <stellar_sdk.exceptions.ConnectionError>`
        """
        try:
            resp = self._session.post(url, data=data, timeout=self.post_timeout)
        except (RequestException, NewConnectionError) as err:
            raise ConnectionError(err)
        return Response(
            status_code=resp.status_code,
            text=resp.text,
            headers=dict(resp.headers),
            url=resp.url,
        ) 

Example #8

def test_retries_on_transport(execute_mock):
    """Testing retries on the transport level

    This forces us to override low-level APIs because the retry mechanism on the urllib3
    (which uses requests) is pretty low-level itself.
    """
    expected_retries = 3
    execute_mock.side_effect = NewConnectionError(
        "Should be HTTPConnection", "Fake connection error"
    )
    transport = RequestsHTTPTransport(
        url="http://127.0.0.1:8000/graphql", retries=expected_retries,
    )
    client = Client(transport=transport)

    query = gql(
        """
        {
          myFavoriteFilm: film(id:"RmlsbToz") {
            id
            title
            episodeId
          }
        }
        """
    )
    with client as session:  # We're using the client as context manager
        with pytest.raises(Exception):
            session.execute(query)

    # This might look strange compared to the previous test, but making 3 retries
    # means you're actually doing 4 calls.
    assert execute_mock.call_count == expected_retries + 1 

Example #9

def set_up_index(self):
        try:
            try:
                try:
                    index_exists = self.__es_conn__.indices.exists(index=__index_name__)
                    if not index_exists:
                        self.create_index()
                    else:
                        res = self.__es_conn__.indices.get_mapping(index=__index_name__)
                        try:
                            current_version = res[__index_name__]['mappings']['_meta']['version']
                            if current_version < __index_version__:
                                self.update_index(current_version)
                            elif current_version is None:
                                logger.error("Old Index Mapping. Manually reindex the index to persist your data.")
                                print("n -- Old Index Mapping. Manually reindex the index to persist your data.--n")
                                sys.exit(1)
                        except KeyError:
                            logger.error("Old Index Mapping. Manually reindex the index to persist your data.")
                            print("n -- Old Index Mapping. Manually reindex the index to persist your data.--n")
                            sys.exit(1)

                except ESConnectionError as e:
                    logger.error("Elasticsearch is not installed or its service is not running. {0}".format(e))
                    print("n -- Elasticsearch is not installed or its service is not running.--n", e)
                    sys.exit(1)
            except NewConnectionError:
                pass
        except ConnectionRefusedError:
            pass 

Example #10

def _new_conn(self):
        logger.debug('[MyHTTPSConnection] new conn %s:%i', self.host, self.port)
        try:
            self._tor_stream = self._circuit.create_stream((self.host, self.port))
            logger.debug('[MyHTTPSConnection] tor_stream create_socket')
            return self._tor_stream.create_socket()
        except TimeoutError:
            logger.error('TimeoutError')
            raise ConnectTimeoutError(
                self, 'Connection to %s timed out. (connect timeout=%s)' % (self.host, self.timeout)
            )
        except Exception as e:
            logger.error('NewConnectionError')
            raise NewConnectionError(self, 'Failed to establish a new connection: %s' % e) 

Example #11

def run(count=0):
    global bot
    try:
        bot = Bot(multi_logs=True, selenium_local_session=False,
                  proxy_address_port=get_proxy(os.environ.get('INSTA_USER')), disable_image_load=True)
        selenium_url = "http://%s:%d/wd/hub" % (os.environ.get('SELENIUM', 'selenium'), 4444)
        bot.set_selenium_remote_session(logger=logging.getLogger(), selenium_url=selenium_url, selenium_driver=selenium_driver(selenium_url))
        bot.login()
        bot.set_settings()
        bot.act()
    except (NewConnectionError, WebDriverException) as exc:
        bot.logger.warning("Exception in run: %s; try again: count=%s" % (exc, count))
        if count > 3:
            print("Exception in run(): %s n %s" % (exc, traceback.format_exc()))
            report_exception(exc)
        else:
            run(count=count + 1)

    except (ProtocolError, MaxRetryError) as exc:
        bot.logger.error("Abort because of %s; n%s" % (exc, traceback.format_exc()))
        return

    except Exception as exc:
        print("Exception in run(): %s n %s" % (exc, traceback.format_exc()))
        report_exception(exc)
    finally:
        print("END")
        bot.end() 

Example #12

def run(count=0):
    global bot
    try:
        bot = Bot(multi_logs=True, selenium_local_session=False,
                  proxy_address_port=get_proxy(os.environ.get('INSTA_USER')), disable_image_load=False)
        selenium_url = "http://%s:%d/wd/hub" % (os.environ.get('SELENIUM', 'selenium'), 4444)
        bot.set_selenium_remote_session(logger=logging.getLogger(), selenium_url=selenium_url, selenium_driver=selenium_driver(selenium_url))
        bot.try_first_login()
    except NewConnectionError as exc:
        bot.logger.warning("Exception in run: %s; try again: count=%s" % (exc, count))
        if count > 3:
            print("Exception in run(): %s n %s" % (exc, traceback.format_exc()))
            report_exception(exc)
        else:
            run(count=count + 1)

    except (ProtocolError, MaxRetryError) as exc:
        bot.logger.error("Abort because of %s; n%s" % (exc, traceback.format_exc()))
        return

    except Exception as exc:
        print("Exception in run(): %s n %s" % (exc, traceback.format_exc()))
        report_exception(exc)
    finally:
        print("END")
        bot.end() 

Example #13

def send(self, method="GET", *args, **kwargs):
        """
        Send a GET/POST/HEAD request using the object's proxies and headers
        :param method: Method to send request in. GET/POST/HEAD
        """
        proxies = self._get_request_proxies()

        try:
            if method.upper() in self.allowed_methods:
                kwargs['timeout'] = kwargs['timeout'] if 'timeout' in kwargs else 5
                return request(method, proxies=proxies, headers=self.headers, cookies=self.cookies, *args, **kwargs)
            else:
                raise RequestHandlerException("Unsupported method: {}".format(method))
        except ProxyError:
            # TODO: Apply fail over for bad proxies or drop them
            raise RequestHandlerException("Error connecting to proxy")
        except (ConnectTimeout, ReadTimeout):
            raise RequestHandlerException("Connection with server timed out")
        except NewConnectionError:
            raise RequestHandlerException("Address cannot be resolved")
            # New connection error == Can't resolve address
        except ConnectionError:
            # TODO: Increase delay
            raise RequestHandlerException("Error connecting to host")
        except TooManyRedirects:
            raise RequestHandlerException("Infinite redirects detected - too many redirects error")
        except UnicodeDecodeError:
            # Following issue #19, apparently some sites do not use utf-8 in their uris :<>
            pass 

Example #14

def send(self, request):
        try:
            proxy_url = self._proxy_config.proxy_url_for(request.url)
            manager = self._get_connection_manager(request.url, proxy_url)
            conn = manager.connection_from_url(request.url)
            self._setup_ssl_cert(conn, request.url, self._verify)

            request_target = self._get_request_target(request.url, proxy_url)
            urllib_response = conn.urlopen(
                method=request.method,
                url=request_target,
                body=request.body,
                headers=request.headers,
                retries=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
            )

            http_response = botocore.awsrequest.AWSResponse(
                request.url,
                urllib_response.status,
                urllib_response.headers,
                urllib_response,
            )

            if not request.stream_output:
                # Cause the raw stream to be exhausted immediately. We do it
                # this way instead of using preload_content because
                # preload_content will never buffer chunked responses
                http_response.content

            return http_response
        except URLLib3SSLError as e:
            raise SSLError(endpoint_url=request.url, error=e)
        except (NewConnectionError, socket.gaierror) as e:
            raise EndpointConnectionError(endpoint_url=request.url, error=e)
        except ProxyError as e:
            raise ProxyConnectionError(proxy_url=proxy_url, error=e)
        except URLLib3ConnectTimeoutError as e:
            raise ConnectTimeoutError(endpoint_url=request.url, error=e)
        except URLLib3ReadTimeoutError as e:
            raise ReadTimeoutError(endpoint_url=request.url, error=e)
        except ProtocolError as e:
            raise ConnectionClosedError(
                error=e,
                request=request,
                endpoint_url=request.url
            )
        except Exception as e:
            message = 'Exception received when sending urllib3 HTTP request'
            logger.debug(message, exc_info=True)
            raise HTTPClientError(error=e) 

Example #15

def send(self, request):
        try:
            proxy_url = self._proxy_config.proxy_url_for(request.url)
            manager = self._get_connection_manager(request.url, proxy_url)
            conn = manager.connection_from_url(request.url)
            self._setup_ssl_cert(conn, request.url, self._verify)

            request_target = self._get_request_target(request.url, proxy_url)
            urllib_response = conn.urlopen(
                method=request.method,
                url=request_target,
                body=request.body,
                headers=request.headers,
                retries=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                chunked=self._chunked(request.headers),
            )

            http_response = botocore.awsrequest.AWSResponse(
                request.url,
                urllib_response.status,
                urllib_response.headers,
                urllib_response,
            )

            if not request.stream_output:
                # Cause the raw stream to be exhausted immediately. We do it
                # this way instead of using preload_content because
                # preload_content will never buffer chunked responses
                http_response.content

            return http_response
        except URLLib3SSLError as e:
            raise SSLError(endpoint_url=request.url, error=e)
        except (NewConnectionError, socket.gaierror) as e:
            raise EndpointConnectionError(endpoint_url=request.url, error=e)
        except ProxyError as e:
            raise ProxyConnectionError(proxy_url=proxy_url, error=e)
        except URLLib3ConnectTimeoutError as e:
            raise ConnectTimeoutError(endpoint_url=request.url, error=e)
        except URLLib3ReadTimeoutError as e:
            raise ReadTimeoutError(endpoint_url=request.url, error=e)
        except ProtocolError as e:
            raise ConnectionClosedError(
                error=e,
                request=request,
                endpoint_url=request.url
            )
        except Exception as e:
            message = 'Exception received when sending urllib3 HTTP request'
            logger.debug(message, exc_info=True)
            raise HTTPClientError(error=e) 

