Requests proxy error

A simple, yet elegant, HTTP library. For reference, here is the exception hierarchy from requests/exceptions.py in the psf/requests repository:
"""
requests.exceptions
~~~~~~~~~~~~~~~~~~~

This module contains the set of Requests' exceptions.
"""
from urllib3.exceptions import HTTPError as BaseHTTPError

from .compat import JSONDecodeError as CompatJSONDecodeError


class RequestException(IOError):
    """There was an ambiguous exception that occurred while handling your
    request.
    """

    def __init__(self, *args, **kwargs):
        """Initialize RequestException with `request` and `response` objects."""
        response = kwargs.pop("response", None)
        self.response = response
        self.request = kwargs.pop("request", None)
        if response is not None and not self.request and hasattr(response, "request"):
            self.request = self.response.request
        super().__init__(*args, **kwargs)


class InvalidJSONError(RequestException):
    """A JSON error occurred."""


class JSONDecodeError(InvalidJSONError, CompatJSONDecodeError):
    """Couldn't decode the text into json"""

    def __init__(self, *args, **kwargs):
        """
        Construct the JSONDecodeError instance first with all
        args. Then use its args to construct the IOError so that
        the json specific args aren't used as IOError specific args
        and the error message from JSONDecodeError is preserved.
        """
        CompatJSONDecodeError.__init__(self, *args)
        InvalidJSONError.__init__(self, *self.args, **kwargs)


class HTTPError(RequestException):
    """An HTTP error occurred."""


class ConnectionError(RequestException):
    """A Connection error occurred."""


class ProxyError(ConnectionError):
    """A proxy error occurred."""


class SSLError(ConnectionError):
    """An SSL error occurred."""


class Timeout(RequestException):
    """The request timed out.

    Catching this error will catch both
    :exc:`~requests.exceptions.ConnectTimeout` and
    :exc:`~requests.exceptions.ReadTimeout` errors.
    """


class ConnectTimeout(ConnectionError, Timeout):
    """The request timed out while trying to connect to the remote server.

    Requests that produced this error are safe to retry.
    """


class ReadTimeout(Timeout):
    """The server did not send any data in the allotted amount of time."""


class URLRequired(RequestException):
    """A valid URL is required to make a request."""


class TooManyRedirects(RequestException):
    """Too many redirects."""


class MissingSchema(RequestException, ValueError):
    """The URL scheme (e.g. http or https) is missing."""


class InvalidSchema(RequestException, ValueError):
    """The URL scheme provided is either invalid or unsupported."""


class InvalidURL(RequestException, ValueError):
    """The URL provided was somehow invalid."""


class InvalidHeader(RequestException, ValueError):
    """The header value provided was somehow invalid."""


class InvalidProxyURL(InvalidURL):
    """The proxy URL provided is invalid."""


class ChunkedEncodingError(RequestException):
    """The server declared chunked encoding but sent an invalid chunk."""


class ContentDecodingError(RequestException, BaseHTTPError):
    """Failed to decode response content."""


class StreamConsumedError(RequestException, TypeError):
    """The content for this response was already consumed."""


class RetryError(RequestException):
    """Custom retries logic failed"""


class UnrewindableBodyError(RequestException):
    """Requests encountered an error when trying to rewind a body."""


# Warnings


class RequestsWarning(Warning):
    """Base warning for Requests."""


class FileModeWarning(RequestsWarning, DeprecationWarning):
    """A file was opened in text mode, but Requests determined its binary length."""


class RequestsDependencyWarning(RequestsWarning):
    """An imported dependency doesn't match the expected version range."""

Why doesn't a proxy work in python requests?


Because an HTTPS proxy is not what you think it is: it is a Secure Web Proxy (an HTTP proxy with TLS on top), which your proxy server most likely does not support.

Make sure the proxy protocol is specified as 'http' everywhere, not 'https'.

Loler100, here "https" denotes the ability to open https sites (CONNECT method support on the proxy side), not a Secure Web Proxy.

requests already uses CONNECT to reach HTTPS sites through an HTTP proxy; it needs no additional settings for this.
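In other words, a plain HTTP proxy URL is used for both keys. A minimal sketch, with a placeholder proxy address:

import requests

# Note the 'http://' scheme in both values, even under the 'https' key:
proxies = {
    "http": "http://10.10.1.10:3128",
    "https": "http://10.10.1.10:3128",
}
requests.get("https://example.com", proxies=proxies)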

You are probably running Windows with a proxy server enabled and configured in the connection settings. That is where the conflict in parsing the proxy type arises.
If so, the solution is below:

    In the interpreter
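Presumably this step means printing where the urllib package lives, e.g. (the exact path depends on your installation):

>>> import urllib.request
>>> urllib.request.__file__
'C:\\Program Files\\Python310\\Lib\\urllib\\request.py'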

Navigate along the returned path to the folder (no need to open __init__),
for example:
C:\Program Files\Python310\Lib\urllib

Open request.py in an editor with elevated privileges:
C:\Program Files\Python310\Lib\urllib\request.py

Find the line
proxies['https'] = 'https://%s' % proxyServer

and change it to
proxies['https'] = 'http://%s' % proxyServer

Although it is more sensible to disable the system proxy and configure the environment and its variables instead.
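For example, requests can be told to ignore the system proxy settings entirely through a Session; a small sketch:

import requests

session = requests.Session()
session.trust_env = False  # ignore Windows proxy settings and *_PROXY environment variables
response = session.get("https://example.com")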

Source

Python community


#1 March 20, 2020 11:23:34

Can't get the simplest requests call to work

Traceback (most recent call last):
File "C:\Python27\main1.py", line 28, in <module>
res = requests.get('https://yobit.net/api/3/ticker/eth_usd') # get the info data
File "C:\Python27\lib\site-packages\requests\api.py", line 72, in get
return request('get', url, params=params, **kwargs)
File "C:\Python27\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Python27\lib\site-packages\requests\sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python27\lib\site-packages\requests\sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "C:\Python27\lib\site-packages\requests\adapters.py", line 502, in send
raise ProxyError(e, request=request)
ProxyError: HTTPSConnectionPool(host='yobit.net', port=443): Max retries exceeded with url: /api/3/ticker/eth_usd (Caused by ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 407 Proxy Authentication Required',)))

I can't figure out what is needed: it works through the browser, but Python refuses?

#2 March 20, 2020 12:44:52

Can't get the simplest requests call to work

You are connecting through a proxy server. Your browser is configured to authenticate with the proxy, but your Python script is not.
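For the script to authenticate as well, the credentials can be embedded in the proxy URL; a sketch with placeholder credentials and proxy address:

import requests

proxies = {
    "http": "http://user:password@proxy.example.com:3128",
    "https": "http://user:password@proxy.example.com:3128",
}
res = requests.get("https://yobit.net/api/3/ticker/eth_usd", proxies=proxies)
print(res.json())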

#3 March 20, 2020 14:39:47

Can't get the simplest requests call to work

The installed version is Python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 22:45:29) on win32

The internet connection is through a MegaFon USB modem.
Could someone give a few-line example that performs the request https://yobit.net/api/3/ticker/eth_usd?
I understand this is elementary, but then again I am in the beginners' section ...
thanks in advance

#4 March 20, 2020 15:49:31

Can't get the simplest requests call to work

Here is the result without using a proxy:

Scorp1978
error('Tunnel connection failed: 407 Proxy Authentication Required ',)))

Edited by Rafik (March 20, 2020 15:51:16)

#5 March 20, 2020 17:46:45

Can't get the simplest requests call to work

I switched to another computer where the internet is direct, without a proxy,
and ran this program
import requests
res = requests.get('https://yobit.net/api/3/ticker/eth_usd')
print(res)

the errors
Traceback (most recent call last):
File "C:/Users/server-2/AppData/Local/Programs/Python/Python38-32/zap.py", line 1, in <module>
import requests
ModuleNotFoundError: No module named ‘requests’

I take it that it did not find the requests module

#6 March 20, 2020 18:34:34

Can't get the simplest requests call to work

Scorp1978
I take it that it did not find the requests module

#7 March 20, 2020 18:52:33

Can't get the simplest requests call to work

You are amazingly perceptive! ))))))) No offense. I tried to install it and it didn't work. Of course I could have started with that question, but I decided to go from the very beginning.
>>> pip install requests
SyntaxError: invalid syntax
>>> easy_install requests
SyntaxError: invalid syntax
>>>

#8 March 20, 2020 19:40:33

Can't get the simplest requests call to work

Scorp1978
you are trying to run an operating-system command from an interactive Python session. That will get you nowhere; run it from cmd.
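That is, at the Windows command prompt, not at the >>> prompt; for example:

pip install requests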

#9 March 21, 2020 02:56:43

Can't get the simplest requests call to work

from cmd it says that pip is not recognized as an internal or external command

#10 March 21, 2020 02:57:20

Can't get the simplest requests call to work

So pip needs to be downloaded separately, do I understand right? But they write that with Python above 3.4 pip is already bundled; maybe it's a PATH issue?

Edited by Scorp1978 (March 21, 2020 02:59:10)
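Most likely, yes: pip ships with Python, but its Scripts folder may not be on PATH. A common workaround is to invoke pip through the py launcher from cmd (assuming the launcher was installed, which is the Windows default):

py -m pip install requests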

Source

How to Use a Proxy with Python Requests?

Introduction

In this article, you will examine how to use the Python Requests library behind a proxy server. Developers use proxies for anonymity, security, and sometimes will even use more than one to prevent websites from banning their IP addresses. Proxies also carry several other benefits such as bypassing filters and censorship. Feel free to learn more about rotating proxies before continuing, but let’s get started!

Prerequisites & Installation

This article is intended for those who would like to scrape behind a proxy in Python. To get the most out of the material, it is beneficial to have:

✅ Experience with Python 3 🐍.

✅ Python 3 installed on your local machine.

Check whether the python-requests package is installed by opening the terminal and typing:

pip freeze will display all your current Python packages and their versions, so go ahead and check whether it is present. If not, install it by running:
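The two commands referred to above:

pip freeze
pip install requests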

How to use a Proxy with Python Requests

To use a proxy in Python, first import the requests package.

Next create a proxies dictionary that defines the HTTP and HTTPS connections. This variable should be a dictionary that maps a protocol to the proxy URL. Additionally, make a url variable set to the webpage you’re scraping from.

Notice in the example below, the dictionary defines the proxy URL for two separate protocols: HTTP and HTTPS. Each connection maps to an individual URL and port, but this does not mean that the two cannot be the same.

Lastly, create a response variable that uses any of the requests methods. The method takes two arguments: the URL variable you created and the proxies dictionary you defined.
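Putting those steps together (the proxy addresses below are placeholders):

import requests

proxies = {
    "http": "http://10.10.1.10:3128",
    "https": "http://10.10.1.10:1080",
}
url = "http://example.com"
response = requests.get(url, proxies=proxies)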

You may use the same syntax for different API calls, but regardless of the call you're making, you need to specify the protocol.

Requests Methods ✍️
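Each verb-specific helper accepts the same proxies argument; for example:

response = requests.get(url, proxies=proxies)
response = requests.post(url, proxies=proxies, data={"key": "value"})
response = requests.put(url, proxies=proxies, data={"key": "value"})
response = requests.delete(url, proxies=proxies)
response = requests.patch(url, proxies=proxies, data={"key": "value"})
response = requests.head(url, proxies=proxies)
response = requests.options(url, proxies=proxies)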

Proxy Authentication 👩‍💻

If you need to add authentication, you can rewrite your code using the following syntax:
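With placeholder credentials and addresses, that looks like:

proxies = {
    "http": "http://user:pass@10.10.1.10:3128/",
    "https": "http://user:pass@10.10.1.10:1080/",
}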

Proxy Sessions 🕒

You may also find yourself wanting to scrape from websites that utilize sessions; in this case, you would have to create a session object. You can do this by first creating a session variable and setting it to the requests Session() method. Then, similar to before, you would send your session proxies through the request method, but this time only passing in the url as the argument.
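A sketch of the session variant just described:

import requests

session = requests.Session()
session.proxies = {
    "http": "http://10.10.1.10:3128",
    "https": "http://10.10.1.10:1080",
}
url = "http://example.com"
response = session.get(url)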

Environmental Variables 🌱

You may find yourself reusing the same proxy for each request, so feel free to DRY up your code by setting some environmental variables:
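For instance, the proxy endpoints can be set once via os.environ (the addresses are placeholders; requests reads HTTP_PROXY/HTTPS_PROXY from the environment at request time):

import os

os.environ["HTTP_PROXY"] = "http://10.10.1.10:3128"
os.environ["HTTPS_PROXY"] = "http://10.10.1.10:1080"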

If you decide to set environment variables, there's no longer a need to set proxies in your code. As soon as you make a request, the proxy from the environment is applied automatically.

Reading Responses 📖

If you would like to read your data:

JSON: for JSON-formatted responses the requests package provides a built-in method.
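For example:

response = requests.get(url, proxies=proxies)

print(response.text)     # the body decoded as text
print(response.content)  # the raw bytes
print(response.json())   # the built-in JSON decoder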

Rotating Proxies with Requests

Remember how we said some developers use more than one proxy? Well, now you can too!

Anytime you find yourself scraping from a webpage repeatedly, it’s good practice to use more than one proxy, because there’s a good chance your scraper will get blocked, meaning your IP address will get banned. The scraping cancel culture is real! So, to avoid being canceled, it’s best to utilize rotating proxies. A rotating proxy is a proxy server that assigns a new IP address from the proxy pool for each connection.

To rotate IP addresses, you first need to have a pool of IPs available. You can use free proxies found on the internet or commercial solutions. In most cases, if your service relies on scraped data, a free proxy will most likely not be enough.

How to Rotate IPs with Requests

In order to start rotating your IP addresses, you need a list of free proxies. In case free proxies do fit your scraping needs, here you can find a list of free proxies. Today you'll be writing a script that chooses and rotates through proxies.

First import the requests, BeautifulSoup, and choice libraries.

Next define a method get_proxy() that will be responsible for retrieving IP addresses for you to use. In this method you will define your url as whatever proxy list resource you choose to use. After sending the request, convert the response into a Beautiful Soup object to make extraction easier. Use the html5lib parser library to parse the website's HTML, as you would for a browser. Create a proxy variable that uses choice to randomly choose an IP address from the list of proxies generated by soup. Within the map function, you can use a lambda function to convert the HTML elements into text for both the retrieved IP addresses and port numbers.
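A sketch of such a get_proxy(). The list URL and the page layout (IP in the first table column, port in the second) are assumptions; adapt them to whichever proxy list you use:

import requests
from bs4 import BeautifulSoup
from random import choice

def get_proxy():
    # Assumed free-proxy list; substitute your own source.
    url = "https://www.sslproxies.org/"
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html5lib")
    # Pair each IP cell with its neighbouring port cell as "ip:port".
    ips = soup.select("table tbody tr td:nth-of-type(1)")
    ports = soup.select("table tbody tr td:nth-of-type(2)")
    proxies = list(map(lambda ip, port: ip.text + ":" + port.text, ips, ports))
    return {"https": choice(proxies)}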

Create a proxy_request method that takes in 3 arguments: the request_type, the url, and **kwargs. Inside this method, define your proxy dictionary as the proxy returned from the get_proxy method. Similar to before, you'll use requests, passing in your arguments.
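A matching sketch that retries with a fresh proxy when one fails (the 5-second timeout is an arbitrary choice):

def proxy_request(request_type, url, **kwargs):
    while True:
        try:
            proxy = get_proxy()
            print(f"Using proxy: {proxy['https']}")
            response = requests.request(request_type, url, proxies=proxy, timeout=5, **kwargs)
            return response
        except requests.exceptions.RequestException:
            continue  # dead or blocked proxy; rotate to the next one

r = proxy_request("get", "http://example.com")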

You can now scrape and rotate all at once!🌀

Use ScrapingBee’s Proxy Mode

Believe it or not, there is another free* alternative that makes scraping behind a proxy even easier! That alternative is ScrapingBee’s Proxy Mode, a proxy front-end to the API. 🐝

Make a free account on ScrapingBee. Once logged on, you can see your account information, including your API Key. *And not to mention 1000 free API credits! 🍯😍

Run the following script, passing your api_key as the proxy username and the API parameters as the proxy password. You can skip the proxy password if the default API parameters suit your needs:

Remember that if you want to use proxy mode, your code must be configured not to verify SSL certificates. In this case, it would be verify=False since you are working with Python Requests.
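A sketch of such a script. The host, ports, and credential format follow ScrapingBee's proxy-mode documentation at the time of writing; verify them against the current docs before relying on this:

import requests

proxies = {
    "http": "http://YOUR_SCRAPINGBEE_API_KEY:render_js=False@proxy.scrapingbee.com:8886",
    "https": "https://YOUR_SCRAPINGBEE_API_KEY:render_js=False@proxy.scrapingbee.com:8887",
}
response = requests.get(
    "http://httpbin.org/ip",
    proxies=proxies,
    verify=False,  # proxy mode requires skipping certificate verification
)
print(response.text)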

That’s all there is to sending successful HTTP requests! When you use ScrapingBee’s Proxy Mode, you no longer need to deal with proxy rotation manually, we take care of everything for you. 😎

Conclusion

While it might be tempting to start scraping right away with your fancy new proxies, there are still a few key things you should know. For starters, not all proxies are the same. There are actually different types, with the three main being: transparent proxies, anonymous proxies, and elite proxies.

In most cases, you will use an elite proxy, whether paid or free, since they are the best solution to avoid being detected. If using a proxy for the sole purpose of privacy, anonymous proxies may be worth your while. It’s not advised to use a transparent proxy unless there is a particular reason for doing so, since transparent proxies reveal your real IP address and that you are using a proxy server.

Now that we have that all cleared up, it’s time to start scraping with a proxy in Python. So, get on out there and make all the requests you can dream up!💭

Resources

Maxine is a software engineer and passionate technical writer, who enjoys spending her free time incorporating her knowledge of environmental technologies into web development.

Source

Developer Interface¶

This part of the documentation covers all the interfaces of Requests. For parts where Requests depends on external libraries, we document the most important right here and provide links to the canonical documentation.

Main Interface¶

All of Requests’ functionality can be accessed by these 7 methods. They all return an instance of the Response object.

requests. request ( method, url, **kwargs ) [source] ¶

Constructs and sends a Request .

Parameters:
  • method – method for the new Request object.
  • url – URL for the new Request object.
  • params – (optional) Dictionary, list of tuples or bytes to send in the query string for the Request .
  • data – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the Request .
  • json – (optional) A JSON serializable Python object to send in the body of the Request .
  • headers – (optional) Dictionary of HTTP Headers to send with the Request .
  • cookies – (optional) Dict or CookieJar object to send with the Request .
  • files – (optional) Dictionary of 'name': file-like-objects (or {'name': file-tuple}) for multipart encoding upload. file-tuple can be a 2-tuple ('filename', fileobj) , 3-tuple ('filename', fileobj, 'content_type') or a 4-tuple ('filename', fileobj, 'content_type', custom_headers) , where 'content-type' is a string defining the content type of the given file and custom_headers a dict-like object containing additional headers to add for the file.
  • auth – (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
  • timeout (float or tuple) – (optional) How many seconds to wait for the server to send data before giving up, as a float, or a (connect timeout, read timeout) tuple.
  • allow_redirects (bool) – (optional) Boolean. Enable/disable GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to True .
  • proxies – (optional) Dictionary mapping protocol to the URL of the proxy.
  • verify – (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use. Defaults to True .
  • stream – (optional) if False , the response content will be immediately downloaded.
  • cert – (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
Returns: Response object
Return type: requests.Response

requests. head ( url, **kwargs ) [source] ¶

Sends a HEAD request.

Parameters:
  • url – URL for the new Request object.
  • **kwargs – Optional arguments that request takes.
Returns: Response object
Return type: requests.Response

requests. get ( url, params=None, **kwargs ) [source] ¶

Sends a GET request.

Parameters:
  • url – URL for the new Request object.
  • params – (optional) Dictionary, list of tuples or bytes to send in the body of the Request .
  • **kwargs – Optional arguments that request takes.
Returns:

requests. post ( url, data=None, json=None, **kwargs ) [source] ¶

Sends a POST request.

Parameters:
  • url – URL for the new Request object.
  • data – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the Request .
  • json – (optional) json data to send in the body of the Request .
  • **kwargs – Optional arguments that request takes.
Returns:

requests. put ( url, data=None, **kwargs ) [source] ¶

Sends a PUT request.

Parameters:
  • url – URL for the new Request object.
  • data – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the Request .
  • json – (optional) json data to send in the body of the Request .
  • **kwargs – Optional arguments that request takes.
Returns:

requests. patch ( url, data=None, **kwargs ) [source] ¶

Sends a PATCH request.

Parameters:
  • url – URL for the new Request object.
  • data – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the Request .
  • json – (optional) json data to send in the body of the Request .
  • **kwargs – Optional arguments that request takes.
Returns:

requests. delete ( url, **kwargs ) [source] ¶

Sends a DELETE request.

Exceptions¶

exception requests. RequestException ( *args, **kwargs ) [source] ¶

There was an ambiguous exception that occurred while handling your request.

exception requests. ConnectionError ( *args, **kwargs ) [source] ¶

A Connection error occurred.

exception requests. HTTPError ( *args, **kwargs ) [source] ¶

An HTTP error occurred.

exception requests. URLRequired ( *args, **kwargs ) [source] ¶

A valid URL is required to make a request.

exception requests. TooManyRedirects ( *args, **kwargs ) [source] ¶

Too many redirects.

exception requests. ConnectTimeout ( *args, **kwargs ) [source] ¶

The request timed out while trying to connect to the remote server.

Requests that produced this error are safe to retry.

exception requests. ReadTimeout ( *args, **kwargs ) [source] ¶

The server did not send any data in the allotted amount of time.

exception requests. Timeout ( *args, **kwargs ) [source] ¶

The request timed out.

Catching this error will catch both ConnectTimeout and ReadTimeout errors.

Request Sessions¶

class requests. Session [source] ¶

A Requests session.

Provides cookie persistence, connection-pooling, and configuration.
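For example (a sketch; httpbin.org is just a reachable test endpoint):

s = requests.Session()
s.get('https://httpbin.org/get')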

Or as a context manager:
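(Continuing the sketch:)

with requests.Session() as s:
    s.get('https://httpbin.org/get')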

Default Authentication tuple or object to attach to Request .

SSL client certificate default, if String, path to ssl client cert file (.pem). If Tuple, (‘cert’, ‘key’) pair.

Closes all adapters and as such the session

A CookieJar containing all currently outstanding cookies set on this session. By default it is a RequestsCookieJar , but may be any other cookielib.CookieJar compatible object.

Sends a DELETE request. Returns Response object.

Parameters:
  • url – URL for the new Request object.
  • **kwargs – Optional arguments that request takes.
Returns:
Parameters:
  • url – URL for the new Request object.
  • **kwargs – Optional arguments that request takes.
Return type:

get ( url, **kwargs ) [source] ¶

Sends a GET request. Returns Response object.

Parameters:
  • url – URL for the new Request object.
  • **kwargs – Optional arguments that request takes.
Return type:

get_adapter ( url ) [source] ¶

Returns the appropriate connection adapter for the given URL.

Return type: requests.adapters.BaseAdapter

get_redirect_target ( resp ) ¶

Receives a Response. Returns a redirect URI or None

Sends a HEAD request. Returns Response object.

Parameters:
  • url – URL for the new Request object.
  • **kwargs – Optional arguments that request takes.
Return type:

headers = None¶

A case-insensitive dictionary of headers to be sent on each Request sent from this Session .

Maximum number of redirects allowed. If the request exceeds this limit, a TooManyRedirects exception is raised. This defaults to requests.models.DEFAULT_REDIRECT_LIMIT, which is 30.

Check the environment and merge it with some settings.

Return type: dict

mount ( prefix, adapter ) [source] ¶

Registers a connection adapter to a prefix.

Adapters are sorted in descending order by prefix length.

Sends an OPTIONS request. Returns Response object.

Parameters:
  • url – URL for the new Request object.
  • **kwargs – Optional arguments that request takes.
Return type:

params = None¶

Dictionary of querystring data to attach to each Request . The dictionary values may be lists for representing multivalued query parameters.

Sends a PATCH request. Returns Response object.

Parameters:
  • url – URL for the new Request object.
  • data – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the Request .
  • **kwargs – Optional arguments that request takes.
Return type:

post ( url, data=None, json=None, **kwargs ) [source] ¶

Sends a POST request. Returns Response object.

Parameters:
  • url – URL for the new Request object.
  • data – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the Request .
  • json – (optional) json to send in the body of the Request .
  • **kwargs – Optional arguments that request takes.
Return type:

prepare_request ( request ) [source] ¶

Constructs a PreparedRequest for transmission and returns it. The PreparedRequest has settings merged from the Request instance and those of the Session .

Parameters: request – Request instance to prepare with this session’s settings.
Return type: requests.PreparedRequest

proxies = None¶

Dictionary mapping protocol or protocol and host to the URL of the proxy (e.g. {'http': 'foo.bar:3128', 'http://host.name': 'foo.bar:4012'}) to be used on each Request .

Sends a PUT request. Returns Response object.

Parameters:
  • url – URL for the new Request object.
  • data – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the Request .
  • **kwargs – Optional arguments that request takes.
Return type:

rebuild_auth ( prepared_request, response ) ¶

When being redirected we may want to strip authentication from the request to avoid leaking credentials. This method intelligently removes and reapplies authentication where possible to avoid credential loss.

rebuild_method ( prepared_request, response ) ¶

When being redirected we may want to change the method of the request based on certain specs or browser behavior.

rebuild_proxies ( prepared_request, proxies ) ¶

This method re-evaluates the proxy configuration by considering the environment variables. If we are redirected to a URL covered by NO_PROXY, we strip the proxy configuration. Otherwise, we set missing proxy keys for this URL (in case they were stripped by a previous redirect).

This method also replaces the Proxy-Authorization header where necessary.

Return type: dict

request ( method, url, params=None, data=None, headers=None, cookies=None, files=None, auth=None, timeout=None, allow_redirects=True, proxies=None, hooks=None, stream=None, verify=None, cert=None, json=None ) [source] ¶

Constructs a Request , prepares it and sends it. Returns Response object.

Parameters:
  • method – method for the new Request object.
  • url – URL for the new Request object.
  • params – (optional) Dictionary or bytes to be sent in the query string for the Request .
  • data – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the Request .
  • json – (optional) json to send in the body of the Request .
  • headers – (optional) Dictionary of HTTP Headers to send with the Request .
  • cookies – (optional) Dict or CookieJar object to send with the Request .
  • files – (optional) Dictionary of ‘filename’: file-like-objects for multipart encoding upload.
  • auth – (optional) Auth tuple or callable to enable Basic/Digest/Custom HTTP Auth.
  • timeout (float or tuple) – (optional) How long to wait for the server to send data before giving up, as a float, or a (connect timeout, read timeout) tuple.
  • allow_redirects (bool) – (optional) Set to True by default.
  • proxies – (optional) Dictionary mapping protocol or protocol and hostname to the URL of the proxy.
  • stream – (optional) whether to immediately download the response content. Defaults to False .
  • verify – (optional) Either a boolean, in which case it controls whether we verify the server’s TLS certificate, or a string, in which case it must be a path to a CA bundle to use. Defaults to True .
  • cert – (optional) if String, path to ssl client cert file (.pem). If Tuple, (‘cert’, ‘key’) pair.
Return type:

resolve_redirects ( resp, req, stream=False, timeout=None, verify=True, cert=None, proxies=None, yield_requests=False, **adapter_kwargs ) ¶

Receives a Response. Returns a generator of Responses or Requests.

send ( request, **kwargs ) [source] ¶

Send a given PreparedRequest.

Return type: requests.Response

should_strip_auth ( old_url, new_url ) ¶

Decide whether Authorization header should be removed when redirecting

Stream response content default.

Trust environment settings for proxy configuration, default authentication and similar.

SSL Verification default.

Lower-Level Classes¶

class requests. Request ( method=None, url=None, headers=None, files=None, data=None, params=None, auth=None, cookies=None, hooks=None, json=None ) [source] ¶

A user-created Request object.

Used to prepare a PreparedRequest , which is sent to the server.

Parameters:
  • method – HTTP method to use.
  • url – URL to send.
  • headers – dictionary of headers to send.
  • files – dictionary of files to multipart upload.
  • data – the body to attach to the request. If a dictionary or list of tuples [(key, value)] is provided, form-encoding will take place.
  • json – json for the body to attach to the request (if files or data is not specified).
  • params – URL parameters to append to the URL. If a dictionary or list of tuples [(key, value)] is provided, form-encoding will take place.
  • auth – Auth handler or (user, pass) tuple.
  • cookies – dictionary or CookieJar of cookies to attach to this request.
  • hooks – dictionary of callback hooks, for internal usage.

Deregister a previously registered hook. Returns True if the hook existed, False if not.

Constructs a PreparedRequest for transmission and returns it.

Properly register a hook.

class requests. Response [source] ¶

The Response object, which contains a server’s response to an HTTP request.

The apparent encoding, provided by the chardet library.

Releases the connection back to the pool. Once this method has been called the underlying raw object must not be accessed again.

Note: Should not normally need to be called explicitly.

Content of the response, in bytes.

A CookieJar of Cookies the server sent back.

The amount of time elapsed between sending the request and the arrival of the response (as a timedelta). This property specifically measures the time taken between sending the first byte of the request and finishing parsing the headers. It is therefore unaffected by consuming the response content or the value of the stream keyword argument.

Encoding to decode with when accessing r.text.

Case-insensitive Dictionary of Response Headers. For example, headers[‘content-encoding’] will return the value of a ‘Content-Encoding’ response header.

A list of Response objects from the history of the Request. Any redirect responses will end up here. The list is sorted from the oldest to the most recent request.

True if this Response one of the permanent versions of redirect.

True if this Response is a well-formed HTTP redirect that could have been processed automatically (by Session.resolve_redirects ).

iter_content ( chunk_size=1, decode_unicode=False ) [source] ¶

Iterates over the response data. When stream=True is set on the request, this avoids reading the content at once into memory for large responses. The chunk size is the number of bytes it should read into memory. This is not necessarily the length of each item returned as decoding can take place.

chunk_size must be of type int or None. A value of None will function differently depending on the value of stream . stream=True will read data as it arrives in whatever size the chunks are received. If stream=False, data is returned as a single chunk.

If decode_unicode is True, content will be decoded using the best available encoding based on the response.

iter_lines ( chunk_size=512, decode_unicode=False, delimiter=None ) [source] ¶

Iterates over the response data, one line at a time. When stream=True is set on the request, this avoids reading the content at once into memory for large responses.

This method is not reentrant safe.

Returns the json-encoded content of a response, if any.

Parameters: **kwargs – Optional arguments that json.loads takes.
Raises: ValueError – If the response body does not contain valid json.

links ¶

Returns the parsed header links of the response, if any.

Returns a PreparedRequest for the next request in a redirect chain, if there is one.

Returns True if status_code is less than 400, False if not.

This attribute checks if the status code of the response is between 400 and 600 to see if there was a client error or a server error. If the status code is between 200 and 400, this will return True. This is not a check to see if the response code is 200 OK .

Raises stored HTTPError , if one occurred.

Textual reason of responded HTTP Status, e.g. “Not Found” or “OK”.

The PreparedRequest object to which this is a response.

Integer Code of responded HTTP Status, e.g. 404 or 200.

Content of the response, in unicode.

If Response.encoding is None, encoding will be guessed using chardet .

The encoding of the response content is determined based solely on HTTP headers, following RFC 2616 to the letter. If you can take advantage of non-HTTP knowledge to make a better guess at the encoding, you should set r.encoding appropriately before accessing this property.

Final URL location of Response.

Lower-Lower-Level Classes¶

class requests. PreparedRequest [source] ¶

The fully mutable PreparedRequest object, containing the exact bytes that will be sent to the server.

Generated from either a Request object or manually.

request body to send to the server.

deregister_hook ( event, hook ) ¶

Deregister a previously registered hook. Returns True if the hook existed, False if not.

dictionary of HTTP headers.

dictionary of callback hooks, for internal usage.

HTTP verb to send to the server.

Build the path URL to use.

Prepares the entire request with the given parameters.

Prepares the given HTTP auth data.

Prepares the given HTTP body data.

Prepare Content-Length header based on request method and body

Prepares the given HTTP cookie data.

This function eventually generates a Cookie header from the given cookies using cookielib. Due to cookielib’s design, the header will not be regenerated if it already exists, meaning this function can only be called once for the life of the PreparedRequest object. Any subsequent calls to prepare_cookies will have no actual effect, unless the “Cookie” header is removed beforehand.

Prepares the given HTTP headers.

Prepares the given hooks.

Prepares the given HTTP method.

Prepares the given HTTP URL.

Properly register a hook.

HTTP URL to send the request to.

class requests.adapters. BaseAdapter [source] ¶

The Base Transport Adapter

Cleans up adapter specific items.

Sends PreparedRequest object. Returns Response object.

Parameters:
  • request – The PreparedRequest being sent.
  • stream – (optional) Whether to stream the request content.
  • timeout (float or tuple) – (optional) How long to wait for the server to send data before giving up, as a float, or a (connect timeout, read timeout) tuple.
  • verify – (optional) Either a boolean, in which case it controls whether we verify the server’s TLS certificate, or a string, in which case it must be a path to a CA bundle to use
  • cert – (optional) Any user-provided SSL certificate to be trusted.
  • proxies – (optional) The proxies dictionary to apply to the request.

class requests.adapters. HTTPAdapter ( pool_connections=10, pool_maxsize=10, max_retries=0, pool_block=False ) [source]¶

The built-in HTTP Adapter for urllib3.

Provides a general-case interface for Requests sessions to contact HTTP and HTTPS urls by implementing the Transport Adapter interface. This class will usually be created by the Session class under the covers.

Parameters:
  • pool_connections – The number of urllib3 connection pools to cache.
  • pool_maxsize – The maximum number of connections to save in the pool.
  • max_retries – The maximum number of retries each connection should attempt. Note, this applies only to failed DNS lookups, socket connections and connection timeouts, never to requests where data has made it to the server. By default, Requests does not retry failed connections. If you need granular control over the conditions under which we retry a request, import urllib3’s Retry class and pass that instead.
  • pool_block – Whether the connection pool should block for connections.
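A sketch of the granular-control option mentioned for max_retries, using urllib3's Retry class (the parameter values are illustrative):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util import Retry

session = requests.Session()
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))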

Add any headers needed by the connection. As of v2.0 this does nothing by default, but is left for overriding by users that subclass the HTTPAdapter .

This should not be called from user code, and is only exposed for use when subclassing the HTTPAdapter .

Parameters:
  • request – The PreparedRequest to add headers to.
  • kwargs – The keyword arguments from the call to send().

build_response ( req, resp ) [source]¶

Builds a Response object from a urllib3 response. This should not be called from user code, and is only exposed for use when subclassing the HTTPAdapter

Parameters:
  • req – The PreparedRequest used to generate the response.
  • resp – The urllib3 response object.
Return type:

cert_verify ( conn, url, verify, cert ) [source] ¶

Verify a SSL certificate. This method should not be called from user code, and is only exposed for use when subclassing the HTTPAdapter .

Parameters:
  • conn – The urllib3 connection object associated with the cert.
  • url – The requested URL.
  • verify – Either a boolean, in which case it controls whether we verify the server’s TLS certificate, or a string, in which case it must be a path to a CA bundle to use
  • cert – The SSL certificate to verify.

close ( ) [source]¶

Disposes of any internal state.

Currently, this closes the PoolManager and any active ProxyManager, which closes any pooled connections.

Returns a urllib3 connection for the given URL. This should not be called from user code, and is only exposed for use when subclassing the HTTPAdapter .

Parameters:
  • url – The URL to connect to.
  • proxies – (optional) A Requests-style dictionary of proxies used on this request.
Return type:

init_poolmanager ( connections, maxsize, block=False, **pool_kwargs ) [source] ¶

Initializes a urllib3 PoolManager.

This method should not be called from user code, and is only exposed for use when subclassing the HTTPAdapter .

Parameters:
  • connections – The number of urllib3 connection pools to cache.
  • maxsize – The maximum number of connections to save in the pool.
  • block – Block when no free connections are available.
  • pool_kwargs – Extra keyword arguments used to initialize the Pool Manager.

proxy_headers ( proxy ) [source]¶

Returns a dictionary of the headers to add to any request sent through a proxy. This works with urllib3 magic to ensure that they are correctly sent to the proxy, rather than in a tunnelled request if CONNECT is being used.

This should not be called from user code, and is only exposed for use when subclassing the HTTPAdapter .

Parameters: proxy – The url of the proxy being used for this request.
Return type: dict

proxy_manager_for ( proxy, **proxy_kwargs ) [source] ¶

Return urllib3 ProxyManager for the given proxy.

This method should not be called from user code, and is only exposed for use when subclassing the HTTPAdapter .

Parameters:
  • proxy – The proxy to return a urllib3 ProxyManager for.
  • proxy_kwargs – Extra keyword arguments used to configure the Proxy Manager.
Returns:

request_url ( request, proxies ) [source] ¶

Obtain the url to use when making the final request.

If the message is being sent through a HTTP proxy, the full URL has to be used. Otherwise, we should only use the path portion of the URL.

This should not be called from user code, and is only exposed for use when subclassing the HTTPAdapter .

Parameters:
  • request – The PreparedRequest being sent.
  • proxies – A dictionary of schemes or schemes and hosts to proxy URLs.
Return type:

send ( request, stream=False, timeout=None, verify=True, cert=None, proxies=None ) [source] ¶

Sends PreparedRequest object. Returns Response object.

Parameters:
  • request – The PreparedRequest being sent.
  • stream – (optional) Whether to stream the request content.
  • timeout (float or tuple or urllib3 Timeout object) – (optional) How long to wait for the server to send data before giving up, as a float, or a (connect timeout, read timeout) tuple.
  • verify – (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use
  • cert – (optional) Any user-provided SSL certificate to be trusted.
  • proxies – (optional) The proxies dictionary to apply to the request.
Return type: requests.Response

Authentication¶

class requests.auth. AuthBase [source] ¶

Base class that all auth implementations derive from

class requests.auth. HTTPBasicAuth ( username, password ) [source] ¶

Attaches HTTP Basic Authentication to the given Request object.

class requests.auth. HTTPDigestAuth ( username, password ) [source] ¶

Attaches HTTP Digest Authentication to the given Request object.

Encodings¶

requests.utils. get_encodings_from_content ( content ) [source] ¶

Returns encodings from given content string.

Parameters: content – bytestring to extract encodings from.

requests.utils. get_encoding_from_headers ( headers ) [source] ¶

Returns encodings from given HTTP Header Dict.

Parameters: headers – dictionary to extract encoding from.
Return type: str

requests.utils. get_unicode_from_response ( r ) [source] ¶

Returns the requested content back in unicode.

Parameters: r – Response object to get unicode content from.
  1. charset from content-type
  2. fall back and replace all unicode characters
Return type: str

Cookies¶

requests.utils. dict_from_cookiejar ( cj ) [source] ¶

Returns a key/value dictionary from a CookieJar.

Parameters: cj – CookieJar object to extract cookies from.
Return type: dict

requests.utils. add_dict_to_cookiejar ( cj, cookie_dict ) [source] ¶

Returns a CookieJar from a key/value dictionary.

Parameters:
  • cj – CookieJar to insert cookies into.
  • cookie_dict – Dict of key/values to insert into CookieJar.
Return type:

requests.cookies. cookiejar_from_dict ( cookie_dict, cookiejar=None, overwrite=True ) [source] ¶

Returns a CookieJar from a key/value dictionary.

Parameters:
  • cookie_dict – Dict of key/values to insert into CookieJar.
  • cookiejar – (optional) A cookiejar to add the cookies to.
  • overwrite – (optional) If False, will not replace cookies already in the jar with new ones.
Return type:

class requests.cookies. RequestsCookieJar ( policy=None ) [source] ¶

Compatibility class; is a cookielib.CookieJar, but exposes a dict interface.

This is the CookieJar we create by default for requests and sessions that don’t specify one, since some clients may expect response.cookies and session.cookies to support dict operations.

Requests does not use the dict interface internally; it’s just for compatibility with external client code. All requests code should work out of the box with externally provided instances of CookieJar , e.g. LWPCookieJar and FileCookieJar .

Unlike a regular CookieJar, this class is pickleable.

dictionary operations that are normally O(1) may be O(n).

Add correct Cookie: header to request (urllib.request.Request object).

The Cookie2 header is also added unless policy.hide_cookie2 is true.

Clear some cookies.

Invoking this method without arguments will clear all cookies. If given a single argument, only cookies belonging to that domain will be removed. If given two arguments, cookies belonging to the specified path within that domain are removed. If given three arguments, then the cookie with the specified name, path and domain is removed.

Raises KeyError if no matching cookie exists.

Discard all expired cookies.

You probably don’t need to call this method: expired cookies are never sent back to the server (provided you’re using DefaultCookiePolicy), this method is called by CookieJar itself every so often, and the .save() method won’t save expired cookies anyway (unless you ask otherwise by passing a true ignore_expires argument).

Discard all session cookies.

Note that the .save() method won’t save session cookies anyway, unless you ask otherwise by passing a true ignore_discard argument.

Return a copy of this RequestsCookieJar.

extract_cookies ( response, request ) ¶

Extract cookies from response, where allowable given the request.

Dict-like get() that also supports optional domain and path args in order to resolve naming collisions from using one cookie jar over multiple domains.

operation is O(n), not O(1).

Takes as an argument an optional domain and path and returns a plain old Python dict of name-value pairs of cookies that meet the requirements.

Return type: dict

get_policy ( ) [source] ¶

Return the CookiePolicy instance used.

Dict-like items() that returns a list of name-value tuples from the jar. Allows client-code to call dict(RequestsCookieJar) and get a vanilla python dict of key value pairs.

keys() and values().

Dict-like iteritems() that returns an iterator of name-value tuples from the jar.

iterkeys() and itervalues().

Dict-like iterkeys() that returns an iterator of names of cookies from the jar.

itervalues() and iteritems().

Dict-like itervalues() that returns an iterator of values of cookies from the jar.

iterkeys() and iteritems().

Dict-like keys() that returns a list of names of cookies from the jar.

values() and items().

Utility method to list all the domains in the jar.

Utility method to list all the paths in the jar.

make_cookies ( response, request ) ¶

Return sequence of Cookie objects extracted from response object.

Returns True if there are multiple domains in the jar. Returns False otherwise.

Return type: bool

pop ( k [ , d ] ) → v, remove specified key and return the corresponding value.¶

If key is not found, d is returned if given, otherwise KeyError is raised.

popitem ( ) → (k, v), remove and return some (key, value) pair¶

as a 2-tuple; but raise KeyError if D is empty.

Dict-like set() that also supports optional domain and path args in order to resolve naming collisions from using one cookie jar over multiple domains.

Set a cookie, without checking whether or not it should be set.

set_cookie_if_ok ( cookie, request ) ¶

Set a cookie if policy says it’s OK to do so.

setdefault ( k [ , d ] ) → D.get(k,d), also set D[k]=d if k not in D¶

update ( other ) [source] ¶

Updates this jar with cookies from another CookieJar or dict-like

Dict-like values() that returns a list of values of cookies from the jar.

exception requests.cookies. CookieConflictError [source] ¶

There are two cookies that meet the criteria specified in the cookie jar. Use .get and .set and include domain and path args in order to be more specific.

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.

Status Code Lookup¶

The codes object defines a mapping from common names for HTTP statuses to their numerical codes, accessible either as attributes or as dictionary items.

Some codes have multiple names, and both upper- and lower-case versions of the names are allowed. For example, codes.ok , codes.OK , and codes.okay all correspond to the HTTP status code 200.
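For example:

import requests

requests.codes.ok            # 200
requests.codes['not_found']  # 404
requests.codes.teapot        # 418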

  • 100: continue
  • 101: switching_protocols
  • 102: processing
  • 103: checkpoint
  • 122: uri_too_long , request_uri_too_long
  • 200: ok , okay , all_ok , all_okay , all_good , \o/ , ✓
  • 201: created
  • 202: accepted
  • 203: non_authoritative_info , non_authoritative_information
  • 204: no_content
  • 205: reset_content , reset
  • 206: partial_content , partial
  • 207: multi_status , multiple_status , multi_stati , multiple_stati
  • 208: already_reported
  • 226: im_used
  • 300: multiple_choices
  • 301: moved_permanently , moved , \o-
  • 302: found
  • 303: see_other , other
  • 304: not_modified
  • 305: use_proxy
  • 306: switch_proxy
  • 307: temporary_redirect , temporary_moved , temporary
  • 308: permanent_redirect , resume_incomplete , resume
  • 400: bad_request , bad
  • 401: unauthorized
  • 402: payment_required , payment
  • 403: forbidden
  • 404: not_found , -o-
  • 405: method_not_allowed , not_allowed
  • 406: not_acceptable
  • 407: proxy_authentication_required , proxy_auth , proxy_authentication
  • 408: request_timeout , timeout
  • 409: conflict
  • 410: gone
  • 411: length_required
  • 412: precondition_failed , precondition
  • 413: request_entity_too_large
  • 414: request_uri_too_large
  • 415: unsupported_media_type , unsupported_media , media_type
  • 416: requested_range_not_satisfiable , requested_range , range_not_satisfiable
  • 417: expectation_failed
  • 418: im_a_teapot , teapot , i_am_a_teapot
  • 421: misdirected_request
  • 422: unprocessable_entity , unprocessable
  • 423: locked
  • 424: failed_dependency , dependency
  • 425: unordered_collection , unordered
  • 426: upgrade_required , upgrade
  • 428: precondition_required , precondition
  • 429: too_many_requests , too_many
  • 431: header_fields_too_large , fields_too_large
  • 444: no_response , none
  • 449: retry_with , retry
  • 450: blocked_by_windows_parental_controls , parental_controls
  • 451: unavailable_for_legal_reasons , legal_reasons
  • 499: client_closed_request
  • 500: internal_server_error , server_error , /o\ , ✗
  • 501: not_implemented
  • 502: bad_gateway
  • 503: service_unavailable , unavailable
  • 504: gateway_timeout
  • 505: http_version_not_supported , http_version
  • 506: variant_also_negotiates
  • 507: insufficient_storage
  • 509: bandwidth_limit_exceeded , bandwidth
  • 510: not_extended
  • 511: network_authentication_required , network_auth , network_authentication

Migrating to 1.x¶

This section details the main differences between 0.x and 1.x and is meant to ease the pain of upgrading.

API Changes¶

Response.json is now a callable and not a property of a response.

The Session API has changed. Sessions objects no longer take parameters. Session is also now capitalized, but it can still be instantiated with a lowercase session for backwards compatibility.

All request hooks have been removed except ‘response’.

Authentication helpers have been broken out into separate modules. See requests-oauthlib and requests-kerberos.

The parameter for streaming requests was changed from prefetch to stream and the logic was inverted. In addition, stream is now required for raw response reading.

The config parameter to the requests method has been removed. Some of these options are now configured on a Session such as keep-alive and maximum number of redirects. The verbosity option should be handled by configuring logging.

Licensing¶

One key difference that has nothing to do with the API is a change in the license from the ISC license to the Apache 2.0 license. The Apache 2.0 license ensures that contributions to Requests are also covered by the Apache 2.0 license.

Migrating to 2.x¶

Compared with the 1.0 release, there were relatively few backwards incompatible changes, but there are still a few issues to be aware of with this major release.

For more details on the changes in this release including new APIs, links to the relevant GitHub issues and some of the bug fixes, read Cory’s blog on the subject.

API Changes¶

There were a couple changes to how Requests handles exceptions. RequestException is now a subclass of IOError rather than RuntimeError as that more accurately categorizes the type of error. In addition, an invalid URL escape sequence now raises a subclass of RequestException rather than a ValueError .

Lastly, httplib.IncompleteRead exceptions caused by incorrect chunked encoding will now raise a Requests ChunkedEncodingError instead.

The proxy API has changed slightly. The scheme for a proxy URL is now required.

Behavioural Changes¶

  • Keys in the headers dictionary are now native strings on all Python versions, i.e. bytestrings on Python 2 and unicode on Python 3. If the keys are not native strings (unicode on Python 2 or bytestrings on Python 3) they will be converted to the native string type assuming UTF-8 encoding.
  • Values in the headers dictionary should always be strings. This has been the project’s position since before 1.0 but a recent change (since version 2.11.0) enforces this more strictly. It’s advised to avoid passing header values as unicode when possible.

Requests is an elegant and simple HTTP library for Python, built for human beings. You are currently looking at the documentation of the development release.


Source

When sending a request with authentication, I get a requests.exceptions.SSLError error, which you can see below.

import requests

proxies = { 'https' : "http://user:pass@ip:port/" }

url = "https://httpbin.org/ip"

numberResponse = requests.get(url,proxies=proxies).text

print(numberResponse)

The requests.exceptions.SSLError

Traceback (most recent call last):
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3connectionpool.py", line 696, in urlopen       
    self._prepare_proxy(conn)
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3connectionpool.py", line 964, in _prepare_proxy
    conn.connect()
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3connection.py", line 359, in connect
    conn = self._connect_tls_proxy(hostname, conn)
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3connection.py", line 496, in _connect_tls_proxy
    return ssl_wrap_socket(
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3utilssl_.py", line 428, in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3utilssl_.py", line 472, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libssl.py", line 1040, in _create
    self.do_handshake()
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1125)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesrequestsadapters.py", line 439, in send
    resp = conn.urlopen(
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3connectionpool.py", line 755, in urlopen
    retries = retries.increment(
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3utilretry.py", line 573, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='httpbin.org', port=443): Max retries exceeded with url: /ip (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1125)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:/Users/K_Yuk/OneDrive/Desktop/Gmail generator/test.py", line 15, in <module>
    numberResponse = requests.get(url,proxies=proxies).text
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesrequestsapi.py", line 76, in get
    return request('get', url, params=params, **kwargs)
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesrequestsapi.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesrequestssessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesrequestssessions.py", line 655, in send
    r = adapter.send(request, **kwargs)
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesrequestsadapters.py", line 514, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='httpbin.org', port=443): Max retries exceeded with url: /ip (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1125)')))

So then I tried verify=False as one of the requests.get() parameters, but then I get a requests.exceptions.ProxyError error, which you can see below:

proxies = { 'https' : "http://user:pass@10.10.1.10:3128/"} 

url = "https://httpbin.org/ip"

numberResponse = requests.get(url,proxies=proxies,verify=False).text

print(numberResponse)

The requests.exceptions.ProxyError

Traceback (most recent call last):
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3connectionpool.py", line 696, in urlopen       
    self._prepare_proxy(conn)
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3connectionpool.py", line 964, in _prepare_proxy
    conn.connect()
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3connection.py", line 359, in connect
    conn = self._connect_tls_proxy(hostname, conn)
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3connection.py", line 496, in _connect_tls_proxy
    return ssl_wrap_socket(
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3utilssl_.py", line 428, in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3utilssl_.py", line 472, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libssl.py", line 1040, in _create
    self.do_handshake()
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
FileNotFoundError: [Errno 2] No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesrequestsadapters.py", line 439, in send
    resp = conn.urlopen(
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3connectionpool.py", line 755, in urlopen
    retries = retries.increment(
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesurllib3utilretry.py", line 573, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='httpbin.org', port=443): Max retries exceeded with url: /ip (Caused by ProxyError('Cannot connect to proxy.', FileNotFoundError(2, 'No such file or directory')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:/Users/K_Yuk/OneDrive/Desktop/Gmail generator/test.py", line 15, in <module>
    numberResponse = requests.get(url,proxies=proxies,verify=False).text
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesrequestsapi.py", line 76, in get
    return request('get', url, params=params, **kwargs)
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesrequestsapi.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesrequestssessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesrequestssessions.py", line 655, in send
    r = adapter.send(request, **kwargs)
  File "C:UsersK_YukAppDataLocalProgramsPythonPython38libsite-packagesrequestsadapters.py", line 510, in send
    raise ProxyError(e, request=request)
requests.exceptions.ProxyError: HTTPSConnectionPool(host='httpbin.org', port=443): Max retries exceeded with url: /ip (Caused by ProxyError('Cannot connect to proxy.', FileNotFoundError(2, 'No such file or directory')))

I tried looking everywhere for the answer but nothing seems to work. I can't send a request through a proxy with
authentication. Any ideas?


Answer

The problem is very likely not the authentication. Unfortunately, you don’t provide details of the proxy configuration and the URL you use for the proxy. The only thing you provide is:

proxies = { 'https' : eampleIpWithAuth } 

Based on the reference to _connect_tls_proxy in the stack trace, the eampleIpWithAuth is very likely something like https://..., i.e. you try to access the proxy itself over HTTPS. Note that accessing a proxy over HTTPS is different from using an HTTP proxy for HTTPS. When accessing an HTTPS URL over an HTTPS proxy, one essentially does double encryption to the proxy:

client --- [HTTPS wrapped inside HTTPS] --- proxy --- [HTTPS] --- server

Whereas with an HTTPS URL over a "normal" HTTP proxy there is only single encryption, i.e. it looks (simplified) like this:

client --- [HTTPS wrapped inside HTTP]  --- proxy --- [HTTPS] --- server

Very likely the proxy you want to use is a plain HTTP proxy, not an HTTPS proxy. This is actually the most common case.

The error happens because the proxy is not able to speak TLS but gets accessed over TLS. The fix is to use http://proxy and not https://proxy as the proxy address. Note that the latter worked in older versions of Python, since proxy over HTTPS was not supported and a value of https:// for the protocol was treated the same as http://.
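Applied to the snippet from the question (host and credentials are placeholders):

proxies = { 'https' : "http://user:pass@proxy.example.com:3128/" }  # scheme http://, not https://
numberResponse = requests.get("https://httpbin.org/ip", proxies=proxies).text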

