I had a similar problem and was hoping that aiohttp could have retry logic for connection errors and possibly for HTTP status codes. Because there isn’t anything like that available yet, I came up with the following, which works for REST APIs where JSON is returned:
import asyncio
import json
import logging

import aiohttp

log = logging.getLogger(__name__)

HTTP_STATUS_CODES_TO_RETRY = [500, 502, 503, 504]


class FailedRequest(Exception):
    """
    A wrapper for all possible exceptions raised during an HTTP request.
    """
    code = 0
    message = ''
    url = ''
    raised = ''

    def __init__(self, *, raised='', message='', code='', url=''):
        self.raised = raised
        self.message = message
        self.code = code
        self.url = url
        super().__init__("code:{c} url={u} message={m} raised={r}".format(
            c=self.code, u=self.url, m=self.message, r=self.raised))


async def send_http(session, method, url, *,
                    retries=1,
                    interval=0.9,
                    backoff=3,
                    read_timeout=15.9,
                    http_status_codes_to_retry=HTTP_STATUS_CODES_TO_RETRY,
                    **kwargs):
    """
    Sends an HTTP request and implements retry logic.

    Arguments:
        session (obj): An aiohttp client session object
        method (str): Method to use
        url (str): URL for the request
        retries (int): Number of times to retry in case of failure
        interval (float): Time to wait before retries
        backoff (int): Multiply interval by this factor after each failure
        read_timeout (float): Time to wait for a response
    """
    backoff_interval = interval
    raised_exc = None
    attempt = 0

    if method not in ['get', 'patch', 'post']:
        raise ValueError('invalid method: {}'.format(method))

    if retries == -1:  # -1 means retry indefinitely
        attempt = -1
    elif retries == 0:  # Zero means don't retry
        attempt = 1
    else:  # any other value means retry N times
        attempt = retries + 1

    while attempt != 0:
        if raised_exc:
            log.error('caught "%s" url:%s method:%s, remaining tries %s, '
                      'sleeping %.2fsecs', raised_exc, url, method.upper(),
                      attempt, backoff_interval)
            await asyncio.sleep(backoff_interval)
            # bump the interval for the next possible attempt
            backoff_interval = backoff_interval * backoff
        log.info('sending %s %s with %s', method.upper(), url, kwargs)
        try:
            # Note: aiohttp.Timeout and the aiohttp.errors module are from
            # older aiohttp (1.x); in aiohttp 3.x use ClientTimeout and the
            # exceptions in aiohttp.client_exceptions instead.
            with aiohttp.Timeout(timeout=read_timeout):
                async with getattr(session, method)(url, **kwargs) as response:
                    if response.status == 200:
                        try:
                            data = await response.json()
                        except json.decoder.JSONDecodeError as exc:
                            log.error(
                                'failed to decode response code:%s url:%s '
                                'method:%s error:%s response:%s',
                                response.status, url, method.upper(), exc,
                                response.reason
                            )
                            raise aiohttp.errors.HttpProcessingError(
                                code=response.status, message=exc.msg)
                        else:
                            log.info('code:%s url:%s method:%s response:%s',
                                     response.status, url, method.upper(),
                                     response.reason)
                            raised_exc = None
                            return data
                    elif response.status in http_status_codes_to_retry:
                        log.error(
                            'received invalid response code:%s url:%s error:%s'
                            ' response:%s', response.status, url, '',
                            response.reason
                        )
                        raise aiohttp.errors.HttpProcessingError(
                            code=response.status, message=response.reason)
                    else:
                        try:
                            data = await response.json()
                        except json.decoder.JSONDecodeError as exc:
                            log.error(
                                'failed to decode response code:%s url:%s '
                                'error:%s response:%s', response.status, url,
                                exc, response.reason
                            )
                            raise FailedRequest(
                                code=response.status, message=exc,
                                raised=exc.__class__.__name__, url=url)
                        else:
                            log.warning('received %s for %s', data, url)
                            print(data['errors'][0]['detail'])
                            raised_exc = None
        except (aiohttp.errors.ClientResponseError,
                aiohttp.errors.ClientRequestError,
                aiohttp.errors.ClientOSError,
                aiohttp.errors.ClientDisconnectedError,
                aiohttp.errors.ClientTimeoutError,
                asyncio.TimeoutError,
                aiohttp.errors.HttpProcessingError) as exc:
            try:
                code = exc.code
            except AttributeError:
                code = ''
            raised_exc = FailedRequest(code=code, message=exc, url=url,
                                       raised=exc.__class__.__name__)
        else:
            raised_exc = None
            break

        attempt -= 1

    if raised_exc:
        raise raised_exc
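The interaction of retries, interval, and backoff above can be sanity-checked with a small standalone sketch (a hypothetical helper, not part of the original answer): the sleep before the n-th retry is interval * backoff**n.

```python
def backoff_schedule(retries=3, interval=0.9, backoff=3):
    """Sleep durations before each retry attempt, growing geometrically."""
    return [interval * backoff ** n for n in range(retries)]

# With the defaults above and retries=3, the retries sleep roughly
# 0.9s, 2.7s and 8.1s before attempts 2, 3 and 4.
schedule = backoff_schedule(retries=3, interval=0.9, backoff=3)
```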
ServerDisconnectedError on subsequent requests only on Py 3.8 #4549
🐞 Describe the bug
Aiohttp throws aiohttp.ServerDisconnectedError on 3.8.1 on subsequent requests within the scope of a single ClientSession, but only on certain domains. There is no such issue on the older Python 3.7.6, so it shouldn’t be an issue on the remote server.
💡 To Reproduce
- Run -> Throws error on second request
- Uncomment the line await asyncio.sleep(0.0001) (this is just for experimentation, to demonstrate that somehow this issue doesn’t occur when there’s a sleep).
- Run -> Doesn’t throw error on second request
💡 Expected behavior
Both requests should complete successfully sequentially — just like on Python 3.7.
📋 Logs/tracebacks
📋 Your Python version
📋 Your aiohttp/yarl/multidict versions
📋 Additional context
aiohttp client. Python 3.8.1 (pyenv), venv installed via poetry.
Looks like this is a good starting point for debugging. Could you please specify a few URLs where this explodes and (separately) a few URLs where it works. Also, is your Python interpreter coming from some public OS distro or did you compile it?
I tried google.com for example, where it worked on 3.8.
As for the Python interpreter: pyenv, as far as I know, compiles from source. But I also tried it with Manjaro/Arch Linux’s Python 3.8.1, where it likewise failed.
Is this happening to both HTTP and HTTPS URLs?
Yes, it happens on both https and http.
I’ve found the same problem in python 3.8.0 from pyenv.
The problem happens when I try to connect to a server in localhost created with aiohttp.
Why this got tagged with reproducer:missing? I provided code snippet, which I believe highlights the issue clearly enough. Was it not possible to reproduce?
I just tried the same code snippet again, this time on WinPython 3.7.6 and then 3.8.1 and could reproduce the issue.
On Python 3.7.6 the code snippet provided in the first post will print the returned response twice. On Python 3.8.1 it will print the response only the first time, followed by aiohttp.client_exceptions.ServerDisconnectedError
I cannot reproduce this issue with the code snippet you provided running with python 3.8.1 on Archlinux, but I do have a similar issue as @gligneul with a test server on localhost.
@Gargauth I don’t remember why exactly, but I probably marked it as missing because it looked like it wasn’t enough. Now that I’ve confirmed that it’s reproducible under 3.8-dev from pyenv (on my Gentoo laptop), I can safely mark it as present.
@Gargauth please provide an HTTP URL with this issue happening. Changing your current reproducer URLs to plain HTTP does not explode. This is needed to confirm whether it’s related to TLS and whether that’s what’s unstable about your reproducer ATM.
By the way, the best way to share a reproducer is to contribute a reliably failing test and mark it as @pytest.mark.xfail per https://pganssle-talks.github.io/xfail-lightning. This would allow somebody to start fixing the behavior later and have some indicator if it’s actually fixed.
I got this problem on Python 3.8.4 (Windows) too.
And it’s interesting that I tried adding an await asyncio.sleep(0), which solved the problem.
I am seeing the same error, but on Python 3.6 (Ubuntu 20.04.1 LTS). The first request would succeed, the second would consistently and immediately fail.
Per @WH-2099, await asyncio.sleep(0) fixed the issue.
Using versions:
Python 3.6.13
aiohttp==3.7.4
Does anybody have any other URLs that this happens on? Trying a few URLs, only the one in the reproducer triggered the issue for me.
I see the error with the original code, but await asyncio.sleep(0) doesn’t fix it for me. 0.001 as per the original example works though. This suggests to me that it may be some kind of timing issue with the server. Maybe it disconnects because it’s too fast, or maybe some command is getting sent out of order and the server is aborting.
In fact, I spent a lot of time trying to solve the same problem before I intuitively found these three issues.
And I was able to find effective solutions that alleviate (though don’t completely solve) all of them.
In general, I think these issues are clearly related to the Python 3.8 change to the asyncio module’s default event loop: https://docs.python.org/3/library/asyncio-platforms.html
And the most prominent symptom is premature disconnection of the underlying connection to the server.
Personally, I would suggest that these three issues be investigated together, and I am available these days to assist if needed.
(Please excuse my rudimentary English)
In general, I think these issues are clearly related to the Python 3.8 change to the asyncio module’s default event loop: https://docs.python.org/3/library/asyncio-platforms.html
Not sure if it changes your theory, but I reproduced the example for this on Linux, which is not using the ProactorEventLoop. I did not try testing in Python 3.7 though.
I am experiencing this with Python 3.9.6 running inside Docker container (official Python image, tag 3.9.6-buster ) running on Ubuntu 20.04 LTS (launched in WSL2).
Requests are being sent to:
It is possible to work around it with asyncio.sleep(0.1), but it only works if I put it before the second request, not after the first one (they are made in a for loop and there is some user input between iterations, so they are not exactly sequential in time).
And it doesn’t happen with the requests lib.
I can’t reproduce with any of those URLs, still only the original URL.
Also, I’ve just realised I was reproducing it in an lxc container, but I can’t reproduce the issue on my host.
One additional datapoint is that the issue also seems to only appear if reading from the response (i.e. no error when removing resp.text() ).
@Dreamsorcerer Thanks for the reminder. I mainly use the Windows platform and am not very familiar with Linux, so I wasn’t thinking it through.
It looks like the event loop is not the culprit.
However, I still intuitively think that these issues may be caused by the same underlying problem.
@Dreamsorcerer here is the piece of code which reproduces the issue for me consistently both inside the docker container (Python 3.9.6), on the Ubuntu (WSL, Python 3.8.10) and on the Windows host (Python 3.9.6).
It uses pipenv to manage dependencies, so you want to:
- install pipenv
- launch the shell: pipenv shell
- install dependencies: pipenv install
- launch get_token.py
It will ask you for a username and password — you can enter any random data.
After several retries it will ask you to fill in a captcha — open the url provided, and then again you can enter random data instead of the captcha answer and the password.
After that it tries to log in again and gives me ServerDisconnectedError.
Uncommenting the sleep lines “fixes” the issue.
Strange that ServerDisconnectedError doesn’t appear for login attempts before the captcha request (but if I remember correctly it fails in case of the correct login without the captcha).
Also, it seems it doesn’t appear if you don’t open the captcha url (the server will reply with another error, that the captcha was not opened).
aiohttp client_exception ServerDisconnectedError — is this a problem with the API server, aiohttp, or my code?
I get aiohttp client_exception.ServerDisconnectedError whenever I make more than
200 requests to an API using asyncio and aiohttp. It doesn’t seem to be my code, because it works reliably with fewer requests but fails with any larger number. Trying to understand whether this error is related to aiohttp, my code, or the API endpoint itself? There doesn’t seem to be much information about this online.
Here is the part of the code that generates the asynchronous requests:
After the tasks complete, the self.load_results() method simply parses the json and updates the DB.
3 answers
I think it’s quite possible that the other answers are correct, but there is another possibility — aiohttp seems to have at least one currently [June 2021] unfixed race condition in its streams code:
I see the same problem in my project, and it is rare enough (and a server disconnect is not the only symptom; sometimes I get “incomplete payload”) that it looks more like a race condition.
I considered switching to https://www.python-httpx.org in the hope that it doesn’t have the same problem, but (since it is currently still in beta) I haven’t decided to do so yet.
Most likely this is caused by the HTTP server’s configuration. There are at least two possible causes of ServerDisconnectedError:
- The server may limit the number of parallel TCP connections that can be established from a single IP address. By default, aiohttp already limits the number of parallel connections to 100. You can try lowering the limit and see whether that solves the problem. To do this, create a custom TCPConnector with a different limit and pass it to the ClientSession:
- The server may limit the duration of a TCP connection. By default, aiohttp uses HTTP keep-alive so that the same TCP connection can be used for multiple requests. This improves performance, since a new TCP connection does not have to be established for every request. However, some servers limit the duration of a TCP connection, and if you use the same TCP connection for many requests, the server may close it before you are finished with it. You can disable HTTP keep-alive as a workaround. To do this, create a custom TCPConnector with the force_close parameter set to True and pass it to the ClientSession:
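Both workarounds boil down to constructing a custom TCPConnector; a minimal sketch (the limit of 10 is an arbitrary example value):

```python
import asyncio

import aiohttp

async def main():
    # Workaround 1: lower the parallel-connection limit (default is 100).
    # Workaround 2: force_close=True disables HTTP keep-alive entirely.
    connector = aiohttp.TCPConnector(limit=10, force_close=True)
    async with aiohttp.ClientSession(connector=connector) as session:
        ...  # make requests with `session` here
    return connector

connector = asyncio.run(main())
```

Since connector_owner is True by default, closing the session also closes the connector.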
I had the same problem, and disabling HTTP keep-alive was the solution for me. Hope this helps.
Most likely the server’s API is not happy with several requests being made asynchronously. You can limit the number of concurrent calls with asyncio semaphores.
In your case I would use it in a context manager, as:
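The elided snippet presumably looked something like this (a sketch with hypothetical names; the limit of 10 is arbitrary, and the network call is simulated with a sleep so the example stays self-contained):

```python
import asyncio

async def fetch_one(sem, i):
    # The semaphore lets at most `limit` coroutines run this body at once;
    # the rest wait in line.
    async with sem:
        await asyncio.sleep(0.01)  # stand-in for `await session.get(...)`
        return i

async def main(limit=10, n=50):
    sem = asyncio.Semaphore(limit)
    # gather preserves the order of the awaitables it was given
    return await asyncio.gather(*(fetch_one(sem, i) for i in range(n)))

results = asyncio.run(main())
```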
Connection Reset by peer error using aiohttp and asyncio #5969
Describe the bug
I am getting aiohttp.client_exceptions.ClientConnectorError when using aiohttp and asyncio together. I am gathering all tasks using one aiohttp session and getting the responses.
To Reproduce
Expected behavior
Tasks executed in queue with status code 200.
Logs/tracebacks
Python Version
aiohttp Version
multidict Version
yarl Version
macOS — Darwin Kernel Version 19.6.0
Additional context
Code of Conduct
- I agree to follow the aio-libs Code of Conduct
[Errno 54] Connection reset by peer
This means that the server drops connections on the TCP level. Maybe because you’re spamming them. Verify that with the server side. There’s nothing we can do.
You could wrap tasks with try/except catching ClientConnectorError in order to ignore or retry after some backoff but that’s it.
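A sketch of that suggestion (hypothetical names; the failing call is simulated here so the snippet stays self-contained — with a real session you would catch aiohttp.ClientConnectorError instead of ConnectionResetError):

```python
import asyncio

async def flaky_call(state):
    # Stand-in for an aiohttp request; fails the first two times.
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionResetError("simulated [Errno 54]")
    return "ok"

async def with_retries(coro_fn, state, attempts=5, base_delay=0.01):
    delay = base_delay
    for attempt in range(attempts):
        try:
            return await coro_fn(state)
        except ConnectionResetError:
            if attempt == attempts - 1:
                raise  # out of attempts; re-raise for the caller
            await asyncio.sleep(delay)
            delay *= 2  # exponential backoff between attempts

state = {"calls": 0}
result = asyncio.run(with_retries(flaky_call, state))
```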
Marked as answer
Thanks mate for prompt response. Let me check further on this
[Errno 54] Connection reset by peer
This means that the server drops connections on the TCP level. Maybe because you’re spamming them. Verify that with the server side. There’s nothing we can do.
You could wrap tasks with try/except catching ClientConnectorError in order to ignore or retry after some backoff but that’s it.
Is there a specific cooldown for this? I’m having this issue for the first time now. It’s a loop which does a specific task every 5 minutes.
Versions are similar to the ones posted by the thread creator.
Client Reference¶
Client Session¶
Client session is the recommended interface for making HTTP requests.
Session encapsulates a connection pool (connector instance) and supports keepalives by default. Unless you are connecting to a large, unknown number of different servers over the lifetime of your application, it is suggested you use a single session for the lifetime of your application to benefit from connection pooling.
The client session supports the context manager protocol for self closing.
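For example, the context-manager form (a minimal sketch; no request is actually made here):

```python
import asyncio

import aiohttp

async def main():
    async with aiohttp.ClientSession() as session:
        assert not session.closed
        # resp = await session.get("https://example.com")  # requests go here
    # On exiting the block the session (and its connector) is closed.
    return session.closed

closed = asyncio.run(main())
```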
class aiohttp.ClientSession(base_url=None, *, connector=None, cookies=None, headers=None, skip_auto_headers=None, auth=None, json_serialize=json.dumps, version=aiohttp.HttpVersion11, cookie_jar=None, read_timeout=None, conn_timeout=None, timeout=sentinel, raise_for_status=False, connector_owner=True, auto_decompress=True, read_bufsize=2**16, requote_redirect_url=False, trust_env=False, trace_configs=None) [source]¶
The class for creating client sessions and making requests.
base_url –
Base part of the URL (optional). If set, it allows skipping the base part in request calls.
New in version 3.8.
connector (aiohttp.BaseConnector) – BaseConnector sub-class instance to support connection pooling.
loop –
event loop used for processing HTTP requests.
If loop is None the constructor borrows it from connector if specified.
asyncio.get_event_loop() is used for getting default event loop otherwise.
Deprecated since version 2.0.
cookies (dict) – Cookies to send with the request (optional)
headers –
HTTP Headers to send with every request (optional).
May be either iterable of key-value pairs or Mapping (e.g. dict , CIMultiDict ).
skip_auto_headers –
set of headers for which autogeneration should be skipped.
aiohttp autogenerates headers like User-Agent or Content-Type if these headers are not explicitly passed. Using the skip_auto_headers parameter allows skipping that generation. Note that Content-Length autogeneration can’t be skipped.
Iterable of str or istr (optional)
auth (aiohttp.BasicAuth) – an object that represents HTTP Basic Authorization (optional)
version – supported HTTP version, HTTP 1.1 by default.
cookie_jar –
By default every session instance has its own private cookie jar for automatic cookie processing, but the user may redefine this behavior by providing their own jar implementation.
One example is not processing cookies at all when working in proxy mode.
If no cookie processing is needed, a aiohttp.DummyCookieJar instance can be provided.
Json serializer callable.
By default json.dumps() function.
raise_for_status (bool) –
Automatically call ClientResponse.raise_for_status() for each response, False by default.
This parameter can be overridden when making a request, e.g.:
Set the parameter to True if you need raise_for_status for most cases but override raise_for_status for those requests where you need to handle responses with status 400 or higher.
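The per-request override looks like this (a sketch; the URL is a placeholder and no request is made when the coroutine is merely defined):

```python
import asyncio

import aiohttp

async def fetch(url):
    # Session default: don't raise on error statuses...
    async with aiohttp.ClientSession(raise_for_status=False) as session:
        # ...but this particular request should raise on 400+.
        async with session.get(url, raise_for_status=True) as resp:
            return await resp.text()
```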
timeout – a ClientTimeout settings structure, 300 seconds (5min)
total timeout by default.
New in version 3.3.
Request operations timeout. read_timeout is cumulative for all request operations (request, redirects, responses, data consuming). By default, the read timeout is 5*60 seconds. Use None or 0 to disable timeout checks.
Deprecated since version 3.3: Use timeout parameter instead.
timeout for connection establishing (optional). Values 0 or None mean no timeout.
Deprecated since version 3.3: Use timeout parameter instead.
connector_owner (bool) –
Close connector instance on session closing.
Setting the parameter to False allows sharing the connection pool between sessions without sharing session state: cookies etc.
auto_decompress (bool) – Automatically decompress response body,
True by default
New in version 2.3.
read_bufsize (int) – Size of the read buffer ( ClientResponse.content ).
64 KiB by default.
New in version 3.7.
Get proxies information from HTTP_PROXY / HTTPS_PROXY environment variables if the parameter is True ( False by default).
Get proxy credentials from ~/.netrc file if present.
New in version 2.3.
Changed in version 3.0: Added support for
requote_redirect_url (bool) – Apply URL requoting for redirection URLs if
automatic redirection is enabled ( True by default).
New in version 3.5.
trace_configs – A list of TraceConfig instances used for client tracing. None (default) is used for request tracing disabling. See Tracing Reference for more information.
True if the session has been closed, False otherwise.
A read-only property.
aiohttp.BaseConnector derived instance used for the session.
A read-only property.
The session cookies, AbstractCookieJar instance.
Gives access to cookie jar’s content and modifiers.
A read-only property.
aiohttp requotes redirect URLs by default, but some servers require the exact URL from the Location header. To disable the requoting system set the requote_redirect_url attribute to False.
New in version 2.1.
This parameter affects all subsequent requests.
Deprecated since version 3.5: The attribute modification is deprecated.
A loop instance used for session creation.
A read-only property.
Deprecated since version 3.5.
Default client timeouts, ClientTimeout instance. The value can be tuned by passing timeout parameter to ClientSession constructor.
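For example, a minimal sketch of tuning the session timeout (the specific values here are arbitrary):

```python
import asyncio

import aiohttp

async def main():
    # Override the 5-minute default: 60s total per request,
    # with a separate 10s cap on connection establishment.
    timeout = aiohttp.ClientTimeout(total=60, connect=10)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        return session.timeout  # the ClientTimeout the session will use

t = asyncio.run(main())
```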
New in version 3.7.
HTTP Headers that are sent with every request
May be either iterable of key-value pairs or Mapping (e.g. dict , CIMultiDict ).
New in version 3.7.
Set of headers for which autogeneration skipped.
New in version 3.7.
An object that represents HTTP Basic Authorization.
New in version 3.7.
Json serializer callable.
By default json.dumps() function.
New in version 3.7.
Should connector be closed on session closing
New in version 3.7.
Should ClientResponse.raise_for_status() be called for each response
New in version 3.7.
Should the body response be automatically decompressed
bool default is True
New in version 3.7.
Should get proxies information from HTTP_PROXY / HTTPS_PROXY environment variables or ~/.netrc file if present
bool default is False
New in version 3.7.
A list of TraceConfig instances used for client tracing. None (default) is used for request tracing disabling. See Tracing Reference for more information.
New in version 3.7.
Performs an asynchronous HTTP request. Returns a response object.
method (str) – HTTP method
url – Request URL, str or URL .
params –
Mapping, iterable of tuple of key/value pairs or string to be sent as parameters in the query string of the new request. Ignored for subsequent redirected requests (optional)
Allowed values are:
str with preferably url-encoded content (Warning: content will not be encoded by aiohttp)
data – The data to send in the body of the request. This can be a FormData object or anything that can be passed into FormData , e.g. a dictionary, bytes, or file-like object. (optional)
json – Any json compatible python object (optional). json and data parameters could not be used at the same time.
cookies (dict) – HTTP Cookies to send with
the request (optional)
Global session cookies and the explicitly set cookies will be merged when sending the request.
New in version 3.5.
headers (dict) – HTTP Headers to send with the request (optional)
skip_auto_headers –
set of headers for which autogeneration should be skipped.
aiohttp autogenerates headers like User-Agent or Content-Type if these headers are not explicitly passed. Using the skip_auto_headers parameter allows skipping that generation.
Iterable of str or istr (optional)
auth (aiohttp.BasicAuth) – an object that represents HTTP Basic Authorization (optional)
allow_redirects (bool) – If set to False , do not follow redirects. True by default (optional).
max_redirects (int) – Maximum number of redirects to follow. 10 by default.
compress (bool) – Set to True if the request has to be compressed with deflate encoding. compress can not be combined with Content-Encoding and Content-Length headers. None by default (optional).
chunked (int) – Enable chunked transfer encoding. It is up to the developer to decide how to chunk data streams. If chunking is enabled, aiohttp encodes the provided chunks in the “Transfer-encoding: chunked” format. If chunked is set, then the Transfer-encoding and content-length headers are disallowed. None by default (optional).
expect100 (bool) – Expect 100-continue response from server. False by default (optional).
raise_for_status (bool) – Automatically call ClientResponse.raise_for_status() for
response if set to True . If set to None value from ClientSession will be used. None by default (optional).
New in version 3.4.
read_until_eof (bool) – Read response until EOF if response does not have Content-Length header. True by default (optional).
read_bufsize (int) – Size of the read buffer ( ClientResponse.content ).
None by default, it means that the session global value is used.
New in version 3.7.
proxy – Proxy URL, str or URL (optional)
proxy_auth (aiohttp.BasicAuth) – an object that represents proxy HTTP Basic Authorization (optional)
override the session’s timeout.
Changed in version 3.3: The parameter is ClientTimeout instance, float is still supported for sake of backward compatibility.
If float is passed it is a total timeout (in seconds).
ssl – SSL validation mode. None for default SSL check
( ssl.create_default_context() is used), False for skip SSL certificate validation, aiohttp.Fingerprint for fingerprint validation, ssl.SSLContext for custom SSL certificate validation.
Supersedes verify_ssl, ssl_context and fingerprint parameters.
New in version 3.0.
Perform SSL certificate validation for HTTPS requests (enabled by default). May be disabled to skip validation for sites with invalid certificates.
New in version 2.3.
Deprecated since version 3.0: Use ssl=False
Pass the SHA256 digest of the expected certificate in DER format to verify that the certificate the server presents matches. Useful for certificate pinning.
Warning: use of MD5 or SHA1 digests is insecure and removed.
New in version 2.3.
Deprecated since version 3.0: Use ssl=aiohttp.Fingerprint(digest)
ssl context used for processing HTTPS requests (optional).
ssl_context may be used for configuring certification authority channel, supported SSL options etc.
New in version 2.3.
Deprecated since version 3.0: Use ssl=ssl_context
HTTP headers to send to the proxy if the parameter proxy has been provided.
New in version 2.3.
trace_request_ctx –
Object used to give as a kw param for each new TraceConfig object instantiated, used to give information to the tracers that is only available at request time.
New in version 3.0.
coroutine async-with get(url, *, allow_redirects=True, **kwargs) [source]¶
Perform a GET request.
In order to modify inner request parameters, provide kwargs .
url – Request URL, str or URL
allow_redirects (bool) – If set to False , do not follow redirects. True by default (optional).
coroutine async-with post(url, *, data=None, **kwargs) [source]¶
Perform a POST request.
In order to modify inner request parameters, provide kwargs .
url – Request URL, str or URL
data – Data to send in the body of the request; see request for details (optional)
coroutine async-with put(url, *, data=None, **kwargs) [source]¶
Perform a PUT request.
In order to modify inner request parameters, provide kwargs .
url – Request URL, str or URL
data – Data to send in the body of the request; see request for details (optional)
coroutine async-with delete(url, **kwargs) [source]¶
Perform a DELETE request.
In order to modify inner request parameters, provide kwargs .
url – Request URL, str or URL
coroutine async-with head(url, *, allow_redirects=False, **kwargs) [source]¶
Perform a HEAD request.
In order to modify inner request parameters, provide kwargs .
url – Request URL, str or URL
allow_redirects (bool) – If set to False , do not follow redirects. False by default (optional).
coroutine async-with options(url, *, allow_redirects=True, **kwargs) [source]¶
Perform an OPTIONS request.
In order to modify inner request parameters, provide kwargs .
url – Request URL, str or URL
allow_redirects (bool) – If set to False , do not follow redirects. True by default (optional).
coroutine async-with patch(url, *, data=None, **kwargs) [source]¶
Perform a PATCH request.
In order to modify inner request parameters, provide kwargs .
url – Request URL, str or URL
data – Data to send in the body of the request; see request for details (optional)
coroutine async-with ws_connect(url, *, method='GET', protocols=(), timeout=10.0, receive_timeout=None, auth=None, autoclose=True, autoping=True, heartbeat=None, origin=None, params=None, headers=None, proxy=None, proxy_auth=None, ssl=None, verify_ssl=None, fingerprint=None, ssl_context=None, proxy_headers=None, compress=0, max_msg_size=4194304) [source]¶
Create a websocket connection. Returns a ClientWebSocketResponse object.
url – Websocket server url, str or URL
protocols (tuple) – Websocket protocols
timeout (float) – Timeout for websocket to close. 10 seconds by default
receive_timeout (float) – Timeout for websocket to receive complete message. None (unlimited) seconds by default
auth (aiohttp.BasicAuth) – an object that represents HTTP Basic Authorization (optional)
autoclose (bool) – Automatically close websocket connection on close message from server. If autoclose is False then close procedure has to be handled manually. True by default
autoping (bool) – automatically send pong on ping message from server. True by default
heartbeat (float) – Send a ping message every heartbeat seconds and wait for the pong response; if the pong response is not received then close the connection. The timer is reset on any data reception. (optional)
origin (str) – Origin header to send to server (optional)
params –
Mapping, iterable of tuple of key/value pairs or string to be sent as parameters in the query string of the new request. Ignored for subsequent redirected requests (optional)
Allowed values are:
str with preferably url-encoded content (Warning: content will not be encoded by aiohttp)
headers (dict) – HTTP Headers to send with the request (optional)
proxy (str) – Proxy URL, str or URL (optional)
proxy_auth (aiohttp.BasicAuth) – an object that represents proxy HTTP Basic Authorization (optional)
ssl – SSL validation mode. None for default SSL check
( ssl.create_default_context() is used), False for skip SSL certificate validation, aiohttp.Fingerprint for fingerprint validation, ssl.SSLContext for custom SSL certificate validation.
Supersedes verify_ssl, ssl_context and fingerprint parameters.
New in version 3.0.
Perform SSL certificate validation for HTTPS requests (enabled by default). May be disabled to skip validation for sites with invalid certificates.
New in version 2.3.
Deprecated since version 3.0: Use ssl=False
Pass the SHA256 digest of the expected certificate in DER format to verify that the certificate the server presents matches. Useful for certificate pinning.
Note: use of MD5 or SHA1 digests is insecure and deprecated.
New in version 2.3.
Deprecated since version 3.0: Use ssl=aiohttp.Fingerprint(digest)
ssl context used for processing HTTPS requests (optional).
ssl_context may be used for configuring certification authority channel, supported SSL options etc.
New in version 2.3.
Deprecated since version 3.0: Use ssl=ssl_context
proxy_headers (dict) –
HTTP headers to send to the proxy if the parameter proxy has been provided.
New in version 2.3.
compress (int) – Enable Per-Message Compress Extension support.
0 for disable, 9 to 15 for window bit support. Default value is 0.
New in version 2.3.
max_msg_size (int) – maximum size of a read websocket message, 4 MB by default. To disable the size limit use 0.
New in version 3.3.
method (str) – HTTP method used to establish the WebSocket connection.
New in version 3.5.
Close underlying connector.
Release all acquired resources.
Detach connector from session without closing the former.
Session is switched to closed state anyway.
Basic API¶
While we encourage ClientSession usage we also provide simple coroutines for making HTTP requests.
Basic API is good for performing simple HTTP requests without keepaliving, cookies and complex connection stuff like properly configured SSL certification chaining.
async-with aiohttp.request(method, url, *, params=None, data=None, json=None, headers=None, cookies=None, auth=None, allow_redirects=True, max_redirects=10, encoding='utf-8', version=HttpVersion(major=1, minor=1), compress=None, chunked=None, expect100=False, raise_for_status=False, read_bufsize=None, connector=None, loop=None, read_until_eof=True, timeout=sentinel) [source] ¶
Asynchronous context manager for performing an asynchronous HTTP request. Returns a ClientResponse response object.
method (str) – HTTP method
url – Requested URL, str or URL
params (dict) – Parameters to be sent in the query string of the new request (optional)
data – The data to send in the body of the request. This can be a FormData object or anything that can be passed into FormData , e.g. a dictionary, bytes, or file-like object. (optional)
json – Any json-compatible python object (optional). The json and data parameters cannot be used at the same time.
headers (dict) – HTTP Headers to send with the request (optional)
cookies (dict) – Cookies to send with the request (optional)
auth (aiohttp.BasicAuth) – an object that represents HTTP Basic Authorization (optional)
allow_redirects (bool) – If set to False , do not follow redirects. True by default (optional).
version (aiohttp.protocol.HttpVersion) – Request HTTP version (optional)
compress (bool) – Set to True if request has to be compressed with deflate encoding. False instructs aiohttp to not compress data. None by default (optional).
chunked (int) – Enables chunked transfer encoding. None by default (optional).
expect100 (bool) – Expect 100-continue response from server. False by default (optional).
raise_for_status (bool) – Automatically call ClientResponse.raise_for_status() for the response if set to True. If set to None, the value from ClientSession will be used. None by default (optional).
New in version 3.4.
connector (aiohttp.BaseConnector) – BaseConnector sub-class instance to support connection pooling.
read_until_eof (bool) – Read response until EOF if response does not have Content-Length header. True by default (optional).
read_bufsize (int) – Size of the read buffer ( ClientResponse.content ). None by default, which means the session's global value is used.
New in version 3.7.
timeout – a ClientTimeout settings structure, 300 seconds (5min) total timeout by default.
loop – event loop used for processing HTTP requests. If the param is None, asyncio.get_event_loop() is used for getting the default event loop.
Deprecated since version 2.0.
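A minimal sketch of the Basic API described above. The URL is a placeholder, not part of the original docs:

```python
import asyncio

import aiohttp


async def fetch_status(url: str) -> int:
    # aiohttp.request() creates a throwaway session under the hood,
    # so it suits one-off requests without keep-alive or cookies.
    async with aiohttp.request("GET", url) as resp:
        return resp.status

# To run: asyncio.run(fetch_status("https://example.com"))  # placeholder URL
```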
Connectors¶
Connectors are transports for aiohttp client API.
There are standard connectors:
TCPConnector for regular TCP sockets (both HTTP and HTTPS schemes supported).
UnixConnector for connecting via UNIX socket (it’s used mostly for testing purposes).
All connector classes should be derived from BaseConnector .
By default all connectors support keep-alive connections (behavior is controlled by the force_close constructor parameter).
BaseConnector¶
Base class for all connectors.
keepalive_timeout (float) – timeout for connection reuse after releasing (optional). For disabling the keep-alive feature use the force_close=True flag.
limit (int) – total number of simultaneous connections. If limit is None the connector has no limit (default: 100).
limit_per_host (int) – limit of simultaneous connections to the same endpoint. Endpoints are the same if they have an equal (host, port, is_ssl) triple. If limit is 0 the connector has no limit (default: 0).
force_close (bool) – close underlying sockets after connection releasing (optional).
enable_cleanup_closed (bool) – some SSL servers do not properly complete the SSL shutdown process, in which case asyncio leaks SSL connections. If this parameter is set to True, aiohttp additionally aborts the underlying transport after 2 seconds. It is off by default.
loop –
event loop used for handling connections. If param is None , asyncio.get_event_loop() is used for getting default event loop.
Deprecated since version 2.0.
Read-only property, True if connector is closed.
Read-only property, True if connector should ultimately close connections on releasing.
The total number of simultaneous connections. If limit is 0 the connector has no limit. The default limit size is 100.
The limit for simultaneous connections to the same endpoint.
Endpoints are the same if they have an equal (host, port, is_ssl) triple.
If limit_per_host is None the connector has no limit per host.
Close all opened connections.
coroutine connect ( request ) [source] ¶
Get a free connection from the pool or create a new one if none is available.
The call may be paused while the limit is exhausted, until a used connection returns to the pool.
request (aiohttp.ClientRequest) – the request object which initiates the connection.
coroutine _create_connection ( req ) [source] ¶
Abstract method for actual connection establishment; should be overridden in subclasses.
TCPConnector¶
Connector for working with HTTP and HTTPS via TCP sockets.
The most common transport. When you don’t know what connector type to use, use a TCPConnector instance.
Constructor accepts all parameters suitable for BaseConnector plus several TCP-specific ones:
( ssl.create_default_context() is used), False for skip SSL certificate validation, aiohttp.Fingerprint for fingerprint validation, ssl.SSLContext for custom SSL certificate validation.
Supersedes verify_ssl, ssl_context and fingerprint parameters.
New in version 3.0.
verify_ssl (bool) – perform SSL certificate validation for HTTPS requests (enabled by default). May be disabled to skip validation for sites with invalid certificates.
Deprecated since version 2.3: Pass verify_ssl to ClientSession.get() etc.
fingerprint (bytes) – pass the SHA256 digest of the expected certificate in DER format to verify that the certificate the server presents matches. Useful for certificate pinning.
Note: use of MD5 or SHA1 digests is insecure and deprecated.
Deprecated since version 2.3: Pass fingerprint to ClientSession.get() etc.
use_dns_cache (bool) –
use internal cache for DNS lookups, True by default.
Enabling this option may speed up connection establishment a bit, but may also introduce some side effects.
ttl_dns_cache (int) –
expire DNS entries after the given number of seconds; None means cached forever. By default 10 seconds (optional).
In some environments the IP addresses related to a specific host can change after a specific time. Use this option to keep the DNS cache updated, refreshing each entry after N seconds.
limit (int) – total number of simultaneous connections. If limit is None the connector has no limit (default: 100).
limit_per_host (int) – limit of simultaneous connections to the same endpoint. Endpoints are the same if they have an equal (host, port, is_ssl) triple. If limit is 0 the connector has no limit (default: 0).
resolver (aiohttp.abc.AbstractResolver) –
custom resolver instance to use. aiohttp.DefaultResolver by default (asynchronous if aiodns>=1.1 is installed).
Custom resolvers allow resolving hostnames differently than the way the host is configured.
The resolver is aiohttp.ThreadedResolver by default, asynchronous version is pretty robust but might fail in very rare cases.
family (int) – TCP socket family, both IPv4 and IPv6 by default. For IPv4 only use socket.AF_INET; for IPv6 only, socket.AF_INET6.
family is 0 by default, which means both IPv4 and IPv6 are accepted. To specify only a concrete version, pass socket.AF_INET or socket.AF_INET6 explicitly.
ssl_context (ssl.SSLContext) – SSL context used for processing HTTPS requests (optional).
ssl_context may be used for configuring certification authority channels, supported SSL options, etc.
local_addr (tuple) – tuple of (local_host, local_port) used to bind socket locally if specified.
force_close (bool) – close underlying sockets after connection releasing (optional).
enable_cleanup_closed (bool) – Some SSL servers do not properly complete the SSL shutdown process, in which case asyncio leaks SSL connections. If this parameter is set to True, aiohttp additionally aborts the underlying transport after 2 seconds. It is off by default.
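A sketch combining a few of the constructor parameters above. The values and URL are illustrative, not recommendations:

```python
import asyncio

import aiohttp


async def main() -> None:
    # Illustrative values: 50 total connections, 10 per host,
    # DNS cache entries refreshed every 300 seconds.
    connector = aiohttp.TCPConnector(limit=50, limit_per_host=10, ttl_dns_cache=300)
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get("https://example.com") as resp:  # placeholder URL
            print(resp.status)

# To run: asyncio.run(main())
```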
Use quick lookup in internal DNS cache for host names if True .
The cache of resolved hosts if dns_cache is enabled.
clear_dns_cache ( self , host = None , port = None ) [source] ¶
Clear internal DNS cache.
Remove specific entry if both host and port are specified, clear all cache otherwise.
UnixConnector¶
Unix socket connector.
Use UnixConnector for sending HTTP/HTTPS requests through UNIX Sockets as underlying transport.
UNIX sockets are handy for writing tests and making very fast connections between processes on the same host.
Constructor accepts all parameters suitable for BaseConnector plus UNIX-specific one:
path (str) – Unix socket path
Path to UNIX socket, read-only str property.
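A minimal sketch of a UnixConnector; the socket path and request URL are hypothetical:

```python
import asyncio

import aiohttp


async def main() -> None:
    # "/tmp/example.sock" is a hypothetical socket path.
    connector = aiohttp.UnixConnector(path="/tmp/example.sock")
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get("http://localhost/status") as resp:
            print(resp.status)

# To run: asyncio.run(main())
```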
Connection¶
Encapsulates a single connection in the connector object.
End users should never create Connection instances manually; they are obtained from the BaseConnector.connect() coroutine.
bool read-only property, True if connection was closed, released or detached.
Event loop used for connection
Deprecated since version 3.5.
Close connection with forcibly closing underlying socket.
Release connection back to connector.
The underlying socket is not closed; the connection may be reused later if the connection timeout (30 seconds by default) has not expired.
Response object¶
Client response returned by aiohttp.ClientSession.request() and family.
User never creates the instance of ClientResponse class but gets it from API calls.
ClientResponse supports async context manager protocol, e.g.:
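The example that followed here did not survive extraction; a minimal reconstruction (the URL is a placeholder) could look like:

```python
import asyncio

import aiohttp


async def main() -> None:
    async with aiohttp.ClientSession() as session:
        async with session.get("https://example.com") as resp:  # placeholder URL
            body = await resp.text()
            print(resp.status, len(body))
        # On exiting the inner block the response object is released.

# To run: asyncio.run(main())
```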
After exiting from async with block response object will be released (see release() coroutine).
Response’s version, HttpVersion instance.
HTTP status code of response ( int ), e.g. 200 .
HTTP status reason of response ( str ), e.g. 'OK' .
Boolean representation of HTTP status code ( bool ). True if status is less than 400 ; otherwise, False .
Unmodified URL of request with URL fragment unstripped ( URL ).
New in version 3.2.
Connection used for handling response.
Payload stream, which contains response’s BODY ( StreamReader ). It supports various reading methods depending on the expected format. When chunked transfer encoding is used by the server, allows retrieving the actual http chunks.
Reading from the stream may raise aiohttp.ClientPayloadError if the response object is closed before the response receives all data, or in case of transfer-encoding-related errors such as malformed chunked encoding or broken compression data.
HTTP cookies of response (Set-Cookie HTTP header, SimpleCookie ).
A case-insensitive multidict proxy with HTTP headers of response, CIMultiDictProxy .
Unmodified HTTP headers of response as unconverted bytes, a sequence of (key, value) pairs.
Link HTTP header parsed into a MultiDictProxy .
For each link, key is link param rel when it exists, or link url as str otherwise, and value is MultiDictProxy of link params and url at key url as URL instance.
New in version 3.2.
Read-only property with content part of Content-Type header.
The returned value is 'application/octet-stream' if no Content-Type header is present in the HTTP headers, according to RFC 2616. To make sure the Content-Type header is not present in the server reply, use headers or raw_headers , e.g. 'CONTENT-TYPE' not in resp.headers .
Read-only property that specifies the encoding for the request’s BODY.
The value is parsed from the Content-Type HTTP header.
Returns str like ‘utf-8’ or None if no Content-Type header present in HTTP headers or it has no charset information.
Read-only property that specifies the Content-Disposition HTTP header.
Instance of ContentDisposition or None if no Content-Disposition header present in HTTP headers.
A Sequence of ClientResponse objects of preceding requests (earliest request first) if there were redirects, an empty sequence otherwise.
Close response and underlying connection.
Read the whole response’s body as bytes .
Close underlying connection if data reading gets an error, release connection otherwise.
Raise an aiohttp.ClientResponseError if the data can’t be read.
It is not required to call release on the response object. When the client fully receives the payload, the underlying connection automatically returns back to the pool. If the payload is not fully read, the connection is closed.
Raise an aiohttp.ClientResponseError if the response status is 400 or higher.
Do nothing for success responses (less than 400).
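A hedged sketch of raise_for_status() in use; the URL is a placeholder:

```python
import asyncio

import aiohttp


async def fetch_or_fail(url: str) -> bytes:
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            # Raises aiohttp.ClientResponseError for any status >= 400;
            # does nothing for success responses.
            resp.raise_for_status()
            return await resp.read()

# To run: asyncio.run(fetch_or_fail("https://example.com"))  # placeholder URL
```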
coroutine text ( encoding = None ) [source] ¶
Read response’s body and return decoded str using specified encoding parameter.
If encoding is None, the content encoding is autodetected using the Content-Type HTTP header, falling back to charset detection if the header does not provide one.
cchardet is used, with a fallback to charset-normalizer if cchardet is not available.
Close underlying connection if data reading gets an error, release connection otherwise.
encoding (str) – text encoding used for BODY decoding, or None for encoding autodetection (default).
LookupError – if the encoding detected by cchardet is unknown by Python (e.g. VISCII).
If response has no charset info in Content-Type HTTP header cchardet / charset-normalizer is used for content encoding autodetection.
It may hurt performance. If page encoding is known passing explicit encoding parameter might help:
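The code example that originally followed did not survive extraction; a reconstruction with a placeholder URL and an assumed ISO-8859-1 page encoding:

```python
import asyncio

import aiohttp


async def fetch_text(url: str) -> str:
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            # If the page encoding is known up front, passing it explicitly
            # skips the potentially slow charset autodetection step.
            return await resp.text(encoding="ISO-8859-1")

# To run: asyncio.run(fetch_text("https://example.com"))  # placeholder URL
```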
Read response's body as JSON and return a dict using the specified encoding and loader. If the data is not available yet, a read call will be made.
If encoding is None, the content encoding is autodetected using cchardet, or charset-normalizer as a fallback if cchardet is not available.
If the response's content type does not match the content_type parameter, aiohttp.ContentTypeError is raised. To disable the content type check, pass None as the value.
text encoding used for BODY decoding, or None for encoding autodetection (default).
By the standard, JSON encoding should be UTF-8, but practice beats purity: some servers return non-UTF-8 responses. Autodetection works pretty well anyway.
content_type (str) – specify response’s content-type, if content type does not match raise aiohttp.ClientResponseError . To disable content-type check, pass None as value. (default: application/json ).
BODY as JSON data parsed by loads parameter or None if BODY is empty or contains white-spaces only.
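A hedged sketch of reading a JSON body; the URL and the content_type=None override are illustrative:

```python
import asyncio

import aiohttp


async def fetch_json(url: str):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            # content_type=None disables the Content-Type check, for servers
            # that return JSON under a non-standard content type.
            return await resp.json(content_type=None)

# To run: asyncio.run(fetch_json("https://example.com/api"))  # placeholder URL
```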
A namedtuple with request URL and headers from ClientRequest object, aiohttp.RequestInfo instance.
Automatically detect content encoding using charset info in the Content-Type HTTP header. If this info does not exist or there are no appropriate codecs for the encoding, then cchardet / charset-normalizer is used.
Beware that it is not always safe to use the result of this function to decode a response. Some encodings detected by cchardet are not known by Python (e.g. VISCII). charset-normalizer is not concerned by that issue.
RuntimeError – if called before the body has been read, for cchardet usage
New in version 3.0.
ClientWebSocketResponse¶
To connect to a websocket server aiohttp.ws_connect() or aiohttp.ClientSession.ws_connect() coroutines should be used, do not create an instance of class ClientWebSocketResponse manually.
class aiohttp. ClientWebSocketResponse [source] ¶
Class for handling client-side websockets.
Read-only property, True if close() has been called or CLOSE message has been received from peer.
Websocket subprotocol chosen after start() call.
May be None if server and client protocols are not overlapping.
get_extra_info ( name , default = None ) [source] ¶
Reads extra info from connection’s transport
Returns the exception if one occurred, otherwise returns None.
coroutine ping ( message = b'' ) [source] ¶
message – optional payload of ping message, str (converted to UTF-8 encoded bytes) or bytes .
Changed in version 3.0: The method is converted into coroutine
message – optional payload of pong message, str (converted to UTF-8 encoded bytes) or bytes .
Changed in version 3.0: The method is converted into coroutine
Send data to peer as TEXT message.
data (str) – data to send.
compress (int) – sets specific level of compression for single message, None for not overriding per-socket setting.
Changed in version 3.0: The method is converted into coroutine , compress parameter added.
Send data to peer as BINARY message.
data – data to send.
compress (int) – sets specific level of compression for single message, None for not overriding per-socket setting.
Changed in version 3.0: The method is converted into coroutine , compress parameter added.
Send data to peer as JSON string.
data – data to send.
compress (int) – sets specific level of compression for single message, None for not overriding per-socket setting.
dumps (collections.abc.Callable) – any callable that accepts an object and returns a JSON string ( json.dumps() by default).
RuntimeError – if connection is not started or closing
ValueError – if data is not serializable object
TypeError – if value returned by dumps(data) is not str
Changed in version 3.0: The method is converted into coroutine , compress parameter added.
A coroutine that initiates closing handshake by sending CLOSE message. It waits for close response from server. To add a timeout to close() call just wrap the call with asyncio.wait() or asyncio.wait_for() .
code (int) – closing code. See also WSCloseCode .
message – optional payload of close message, str (converted to UTF-8 encoded bytes) or bytes .
A coroutine that waits for an upcoming data message from the peer and returns it.
The coroutine implicitly handles PING , PONG and CLOSE without returning the message.
It processes the ping-pong game and performs the closing handshake internally.
A coroutine that calls receive() but also asserts the message type is TEXT .
peer’s message content.
A coroutine that calls receive() but also asserts the message type is BINARY .
peer’s message content.
coroutine receive_json ( * , loads = json.loads ) [source] ¶
A coroutine that calls receive_str() and loads the JSON string to a Python dict.
loads (collections.abc.Callable) – any callable that accepts str and returns dict with parsed JSON ( json.loads() by default).
loaded JSON content
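A sketch tying the websocket methods above together; the endpoint and the payload are hypothetical:

```python
import asyncio

import aiohttp


async def main() -> None:
    async with aiohttp.ClientSession() as session:
        # "wss://example.com/ws" is a placeholder endpoint.
        async with session.ws_connect("wss://example.com/ws") as ws:
            await ws.send_json({"action": "ping"})      # serialized with json.dumps
            reply = await ws.receive_json()             # receive_str() + json.loads()
            print(reply)

# To run: asyncio.run(main())
```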
Utilities¶
ClientTimeout¶
A data class for client timeout settings.
See Timeouts for usage examples.
Total number of seconds for the whole request.
float , None by default.
Maximal number of seconds for acquiring a connection from the pool. The time includes connection establishment for a new connection, or waiting for a free connection from the pool if pool connection limits are exceeded.
For pure socket connection establishment time use sock_connect .
float , None by default.
Maximal number of seconds for connecting to a peer for a new connection, not given from a pool. See also connect .
float , None by default.
Maximal number of seconds for reading a portion of data from a peer.
float , None by default.
New in version 3.3.
Timeouts of 5 seconds or more are rounded for scheduling on the next second boundary (an absolute time where microseconds part is zero) for the sake of performance.
E.g., assume a timeout is 10 and loop.time() is 12345.67; the absolute time when the timeout should expire is 12345.67 + 10, which is equal to 12355.67.
The absolute time for the timeout cancellation is rounded up to 12356.
This leads to grouping all closely scheduled timeout expirations at exactly the same time, reducing the number of loop wakeups.
Changed in version 3.7: Rounding to the next seconds boundary is disabled for timeouts smaller than 5 seconds for the sake of easy debugging.
In turn, tiny timeouts can lead to significant performance degradation in a production environment.
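A small sketch of the ClientTimeout data class described above; the values are illustrative:

```python
import aiohttp

# Illustrative budget: 60 s for the whole request, 10 s to establish a new
# connection, 5 s for each read of a body portion from the peer.
timeout = aiohttp.ClientTimeout(total=60, connect=10, sock_read=5)

# Unset fields stay None, which means "no limit" for that stage.
assert timeout.total == 60
assert timeout.sock_connect is None

# The structure is passed per-session or per-request, e.g.:
# aiohttp.ClientSession(timeout=timeout)
```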
ETag¶
Represents an ETag identifier.
Value of the corresponding etag without quotes.
Flag indicating that the etag is weak (has a W/ prefix).
New in version 3.8.
RequestInfo¶
A data class with request URL and headers from ClientRequest object, available as ClientResponse.request_info attribute.
Requested url, yarl.URL instance.
Request HTTP method like ‘GET’ or ‘POST’ , str .
HTTP headers for request, multidict.CIMultiDict instance.
Requested url with URL fragment unstripped, yarl.URL instance.
New in version 3.2.
BasicAuth¶
HTTP basic authentication helper.
login (str) – login
password (str) – password
encoding (str) – encoding ( ‘latin1’ by default)
Should be used for specifying authorization data in client API, e.g. auth parameter for ClientSession.request() .
classmethod decode ( auth_header , encoding = ‘latin1’ ) [source] ¶
Decode HTTP basic authentication credentials.
auth_header (str) – The Authorization header to decode.
encoding (str) – (optional) encoding (‘latin1’ by default)
decoded authentication data, BasicAuth .
classmethod from_url ( url ) [source] ¶
Construct credentials info from the URL's user and password parts.
Returns credentials data, BasicAuth , or None if credentials are not provided.
New in version 2.3.
Encode credentials into string suitable for Authorization header etc.
encoded authentication data, str .
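A sketch of the BasicAuth round trip; the credentials are hypothetical:

```python
import aiohttp

# Hypothetical credentials for illustration only.
auth = aiohttp.BasicAuth(login="user", password="secret")

header_value = auth.encode()  # "Basic " + base64 of "user:secret"
decoded = aiohttp.BasicAuth.decode(header_value)
assert decoded.login == "user" and decoded.password == "secret"

# Typically passed as the auth parameter, e.g.:
# session.get(url, auth=auth)
```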
CookieJar¶
The cookie jar instance is available as ClientSession.cookie_jar .
The jar contains Morsel items for storing internal cookie data.
API provides a count of saved cookies:
These cookies may be iterated over:
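The counting/iteration examples did not survive extraction; a minimal reconstruction:

```python
import asyncio

import aiohttp


async def main() -> None:
    async with aiohttp.ClientSession() as session:
        jar = session.cookie_jar
        print(len(jar))        # count of saved cookies
        for cookie in jar:     # iteration yields Morsel items
            print(cookie.key, cookie.value)

# To run: asyncio.run(main())
```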
Implements cookie storage adhering to RFC 6265.
unsafe (bool) – (optional) Whether to accept cookies from IPs.
quote_cookie (bool) – (optional) Whether to quote cookies according to RFC 2109. Some backend systems (not compatible with the RFC mentioned above) do not support quoted cookies.
New in version 3.7.
treat_as_secure_origin – (optional) Mark origins as secure for cookies marked as Secured.
Possible types are:
New in version 3.8.
update_cookies ( cookies , response_url = None ) [source]¶
Update cookies returned by server in Set-Cookie header.
cookies – a collections.abc.Mapping (e.g. dict , SimpleCookie ) or iterable of pairs with cookies returned by server’s response.
response_url (URL) – URL of response, None for shared cookies. Regular cookies are coupled with server’s URL and are sent only to this server, shared ones are sent in every client request.
Return jar’s cookies acceptable for URL and available in Cookie header for sending client requests for given URL.
response_url (URL) – request’s URL for which cookies are asked.
http.cookies.SimpleCookie with filtered cookies for given URL.
Write a pickled representation of cookies into the file at provided path.
file_path – Path to file where cookies will be serialized, str or pathlib.Path instance.
Load a pickled representation of cookies from the file at provided path.
file_path – Path to file from where cookies will be imported, str or pathlib.Path instance.
clear ( predicate = None ) [source] ¶
Removes all cookies from the jar if the predicate is None . Otherwise removes only those Morsel items for which predicate(morsel) returns True .
predicate –
callable that gets Morsel as a parameter and returns True if this Morsel must be deleted from the jar.
New in version 4.0.
Remove all cookies from the jar that belong to the specified domain or its subdomains.
domain (str) – domain for which cookies must be deleted from the jar.
New in version 4.0.
Dummy cookie jar which does not store cookies but ignores them.
Could be useful, e.g., for web crawlers iterating over the Internet without accumulating saved cookie information.
To install dummy cookie jar pass it into session instance:
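The installation example did not survive extraction; a reconstruction:

```python
import asyncio

import aiohttp


async def main() -> None:
    # With DummyCookieJar, Set-Cookie headers in responses are ignored
    # and the jar always stays empty.
    async with aiohttp.ClientSession(cookie_jar=aiohttp.DummyCookieJar()) as session:
        assert len(session.cookie_jar) == 0

# To run: asyncio.run(main())
```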
Fingerprint helper for checking SSL certificates by SHA256 digest.
digest (bytes) – SHA256 digest for certificate in DER-encoded binary form (see ssl.SSLSocket.getpeercert() ).
To check fingerprint pass the object into ClientSession.get() call, e.g.:
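The fingerprint example did not survive extraction; a sketch with a dummy digest (a real digest would be the SHA256 of the server certificate in DER form):

```python
import aiohttp

# A dummy 32-byte value standing in for a real SHA256 digest; in practice,
# hash the DER-encoded server certificate with hashlib.sha256().digest().
digest = bytes(32)
fingerprint = aiohttp.Fingerprint(digest)
assert fingerprint.fingerprint == digest

# Passed per-request, e.g.:
# session.get(url, ssl=fingerprint)
```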
New in version 3.0.
FormData¶
A FormData object contains the form data and also handles encoding it into a body that is either multipart/form-data or application/x-www-form-urlencoded . multipart/form-data is used if at least one field is an io.IOBase object or was added with at least one optional argument to add_field ( content_type , filename , or content_transfer_encoding ). Otherwise, application/x-www-form-urlencoded is used.
FormData instances are callable and return an aiohttp.payload.Payload when called.
class aiohttp. FormData ( fields , quote_fields = True , charset = None ) [source] ¶
Helper class for multipart/form-data and application/x-www-form-urlencoded body generation.
fields –
A container for the key/value pairs of this form.
Possible types are:
io.IOBase , e.g. a file-like object
If it is a tuple or list , it must be a valid argument for add_fields .
For dict , multidict.MultiDict , and multidict.MultiDictProxy , the keys and values must be valid name and value arguments to add_field , respectively.
add_field ( name , value , content_type = None , filename = None , content_transfer_encoding = None ) [source] ¶
Add a field to the form.
name (str) – Name of the field
value –
Value of the field
Possible types are:
io.IOBase , e.g. a file-like object
content_type (str) – The field’s content-type header (optional)
The field’s filename (optional)
If this is not set and value is a bytes , bytearray , or memoryview object, the name argument is used as the filename unless content_transfer_encoding is specified.
If filename is not set and value is an io.IOBase object, the filename is extracted from the object if possible.
content_transfer_encoding (str) – The field’s content-transfer-encoding header (optional)
Add one or more fields to the form.
fields –
An iterable containing:
io.IOBase , e.g. a file-like object
tuple or list of length two, containing a name-value pair
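A sketch of building a form as described above; field names and values are hypothetical:

```python
import aiohttp

form = aiohttp.FormData()
form.add_field("username", "alice")  # plain text field
form.add_field(
    "avatar",
    b"fake-image-bytes",             # bytes value; filename/content_type are optional args
    filename="avatar.png",
    content_type="image/png",
)

# Because optional arguments were used, the form encodes as
# multipart/form-data; calling the instance produces the payload.
payload = form()
print(type(payload).__name__)
```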
Client exceptions¶
The exception hierarchy was significantly modified in version 2.0. aiohttp defines only exceptions that cover connection handling and server response misbehaviors. For developer-specific mistakes, aiohttp uses standard Python exceptions like ValueError or TypeError .
Reading a response's content may raise a ClientPayloadError exception. This exception indicates errors specific to the payload encoding, such as invalid compressed data, malformed chunked-encoded chunks, or not enough data to satisfy the Content-Length header.
All exceptions are available as members of aiohttp module.
exception aiohttp. ClientError [source] ¶
Base class for all client specific exceptions.
class aiohttp. ClientPayloadError [source] ¶
This exception can only be raised while reading the response payload if one of these errors occurs:
malformed chunked encoding
not enough data to satisfy the Content-Length HTTP header.
exception aiohttp. InvalidURL [source] ¶
URL used for fetching is malformed, e.g. it does not contain host part.
Invalid URL, yarl.URL instance.
class aiohttp. ContentDisposition ¶
Represent Content-Disposition header
A str instance. Value of Content-Disposition header itself, e.g. attachment .
A str instance. Content filename extracted from parameters. May be None .
Read-only mapping contains all parameters.
Response errors¶
These exceptions can happen after we get a response from the server.
Instance of RequestInfo object, contains information about request.
HTTP status code of response ( int ), e.g. 400 .
Message of response ( str ), e.g. 'OK' .
Headers in response, a list of pairs.
History from failed response, if available, else empty tuple.
A tuple of ClientResponse objects used to handle redirection responses.
HTTP status code of response ( int ), e.g. 400 .
Deprecated since version 3.1.
Web socket server response error.
class aiohttp. ContentTypeError [source] ¶
Invalid content type.
New in version 2.3.
Client was redirected too many times.
Maximum number of redirects can be configured by using parameter max_redirects in request .
New in version 3.2.
Connection errors¶
These exceptions are related to low-level connection problems.
class aiohttp. ClientOSError [source] ¶
Subset of connection errors that are initiated by an OSError exception.
class aiohttp. ClientConnectorError [source] ¶
Connector related exceptions.
class aiohttp. ClientProxyConnectionError [source] ¶
class aiohttp. UnixClientConnectorError ¶
class aiohttp. ServerConnectionError [source] ¶
class aiohttp. ClientSSLError [source] ¶
class aiohttp. ClientConnectorSSLError [source] ¶
Response ssl error.
class aiohttp. ClientConnectorCertificateError [source] ¶
Response certificate error.
class aiohttp. ServerDisconnectedError [source] ¶
Partially parsed HTTP message (optional).
class aiohttp. ServerTimeoutError [source] ¶
Server operation timeout: read timeout, etc.
class aiohttp. ServerFingerprintMismatch [source] ¶
I’m getting an aiohttp client_exceptions.ServerDisconnectedError whenever I do more than ~200 requests to an API I’m hitting using asyncio & aiohttp. It doesn’t seem to be my code because it works consistently with a smaller number of requests but fails on any larger number. I'm trying to understand whether this error is related to aiohttp, my code, or the API endpoint itself. There doesn’t seem to be much info online about this.
Traceback (most recent call last):
  File "C:/usr/PycharmProjects/api_framework/api_framework.py", line 27, in <module>
    stuff = abc.do_stuff_2()
  File "C:\usr\PycharmProjects\api_framework\api\abc\abc.py", line 72, in do_stuff
    self.queue_manager(self.do_stuff(json_data))
  File "C:\usr\PycharmProjects\api_framework\api\abc\abc.py", line 115, in queue_manager
    loop.run_until_complete(future)
  File "C:\Python36x64\lib\asyncio\base_events.py", line 466, in run_until_complete
    return future.result()
  File "C:\usr\PycharmProjects\api_framework\api\abc\abc.py", line 96, in do_stuff
    result = await asyncio.gather(*tasks)
  File "C:\usr\PycharmProjects\api_framework\api\abc\abc.py", line 140, in async_post
    async with session.post(self.api_attr.api_endpoint + resource, headers=self.headers, data=data) as response:
  File "C:\Python36x64\lib\site-packages\aiohttp\client.py", line 843, in __aenter__
    self._resp = await self._coro
  File "C:\Python36x64\lib\site-packages\aiohttp\client.py", line 387, in _request
    await resp.start(conn)
  File "C:\Python36x64\lib\site-packages\aiohttp\client_reqrep.py", line 748, in start
    message, payload = await self._protocol.read()
  File "C:\Python36x64\lib\site-packages\aiohttp\streams.py", line 533, in read
    await self._waiter
aiohttp.client_exceptions.ServerDisconnectedError: None
here’s some of the code to generate the async requests:
def some_other_method(self):
    self.queue_manager(self.do_stuff(all_the_tasks))

def queue_manager(self, method):
    print('starting event queue')
    loop = asyncio.get_event_loop()
    future = asyncio.ensure_future(method)
    loop.run_until_complete(future)
    loop.close()

async def async_post(self, resource, session, data):
    async with session.post(self.api_attr.api_endpoint + resource, headers=self.headers, data=data) as response:
        resp = await response.read()
    return resp

async def do_stuff(self, data):
    print('queueing tasks')
    tasks = []
    async with aiohttp.ClientSession() as session:
        for row in data:
            task = asyncio.ensure_future(self.async_post('my_api_endpoint', session, row))
            tasks.append(task)
        result = await asyncio.gather(*tasks)
    self.load_results(result)
Once the tasks have completed, self.load_results() method just parses the json and updates the DB.
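Not part of the original post, but a common mitigation when ServerDisconnectedError appears only at high request counts is to bound concurrency so the server is not hit with hundreds of simultaneous connections. A sketch with hypothetical names:

```python
import asyncio

import aiohttp


async def post_one(semaphore: asyncio.Semaphore, session: aiohttp.ClientSession,
                   url: str, data: dict) -> bytes:
    # The semaphore caps the number of in-flight requests.
    async with semaphore:
        async with session.post(url, data=data) as resp:
            return await resp.read()


async def post_all(url: str, rows: list) -> list:
    semaphore = asyncio.Semaphore(20)  # illustrative concurrency cap
    async with aiohttp.ClientSession() as session:
        tasks = [post_one(semaphore, session, url, row) for row in rows]
        return await asyncio.gather(*tasks)

# To run: asyncio.run(post_all("https://example.com/api", rows))  # placeholder URL
```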
So I’ve got a small project which scrapes through roughly ~150k URLS on a website, but to do this requires credentials (Authenticating is a multi-step process and requires gathering SAML information from login forms, several POST requests, etc).
Anyhoo, I originally used the requests library, but its synchronous nature meant that going through all the URLs was way too slow, so I decided to rewrite it using asyncio and aiohttp. The actual request part for the URLs works, but authentication does not, despite being nearly a line-by-line reproduction of the code that used the requests library. As best I can tell all the same requests are made with all the same payloads, except all of a sudden at my second-to-last POST request aiohttp throws a "Server disconnected" error with no explanation.
This is my first time using aiohttp and asyncio, and I don’t have too much experience with python in general, so if anyone has any ideas on what could cause this it would be greatly appreciated.
Might this be a bug with AIOHTTP? Has anyone run into a situation where the requests library works but AIOHTTP fails, especially concerning post requests and payload data?
Here’s my error message (I’ll provide more information such as code, etc. if needed)
  File "c:/Users/user/scraper/sau_raw_scraper_async.py", line 108, in <module>
    asyncio.get_event_loop().run_until_complete(get_all_profiles(urls))
  File "C:\Python38\lib\asyncio\base_events.py", line 616, in run_until_complete
    return future.result()
  File "c:/Users/user/scraper/sau_raw_scraper_async.py", line 89, in get_all_profiles
    session = await login(session)
  File "c:/Users/user/scraper/sau_raw_scraper_async.py", line 58, in login
    async with session.post(saml_url, data = saml_postParams) as saml_r:
  File "C:\Python38\lib\site-packages\aiohttp\client.py", line 1117, in __aenter__
    self._resp = await self._coro
  File "C:\Python38\lib\site-packages\aiohttp\client.py", line 544, in _request
    await resp.start(conn)
  File "C:\Python38\lib\site-packages\aiohttp\client_reqrep.py", line 890, in start
    message, payload = await self._protocol.read()  # type: ignore
  File "C:\Python38\lib\site-packages\aiohttp\streams.py", line 604, in read
    await self._waiter
aiohttp.client_exceptions.ServerDisconnectedError: Server disconnected
I have a Telegram bot that runs on long polling. To catch errors and debug them later, I log every exception that occurs and go through the log periodically. On the whole the bot works as it should, but I've noticed that the errors below sometimes occur even with no user activity. They don't seem to affect the bot's operation, but I'd still like to understand and fix any underlying problems:
Error #1
aiogram.dispatcher.dispatcher | 2022-07-25 04:10:18,904 | Cause exception while getting updates.
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/aiogram/dispatcher/dispatcher.py", line 383, in start_polling
updates = await self.bot.get_updates(
File "/usr/local/lib/python3.8/dist-packages/aiogram/bot/bot.py", line 97, in get_updates
result = await self.request(api.Methods.GET_UPDATES, payload)
File "/usr/local/lib/python3.8/dist-packages/aiogram/bot/base.py", line 208, in request
return await api.make_request(self.session, self.server, self.__token, method, data, files,
File "/usr/local/lib/python3.8/dist-packages/aiogram/bot/api.py", line 140, in make_request
return check_result(method, response.content_type, response.status, await response.text())
File "/usr/local/lib/python3.8/dist-packages/aiogram/bot/api.py", line 128, in check_result
raise exceptions.TelegramAPIError(description)
aiogram.utils.exceptions.TelegramAPIError: Bad Gateway
Error #2
aiogram.dispatcher.dispatcher | 2022-07-28 23:38:51,181 | Cause exception while getting updates.
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/aiogram/bot/api.py", line 139, in make_request
async with session.post(url, data=req, **kwargs) as response:
File "/usr/local/lib/python3.8/dist-packages/aiohttp/client.py", line 1117, in __aenter__
self._resp = await self._coro
File "/usr/local/lib/python3.8/dist-packages/aiohttp/client.py", line 544, in _request
await resp.start(conn)
File "/usr/local/lib/python3.8/dist-packages/aiohttp/client_reqrep.py", line 890, in start
message, payload = await self._protocol.read() # type: ignore
File "/usr/local/lib/python3.8/dist-packages/aiohttp/streams.py", line 604, in read
await self._waiter
aiohttp.client_exceptions.ServerDisconnectedError: Server disconnected
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/aiogram/dispatcher/dispatcher.py", line 383, in start_polling
updates = await self.bot.get_updates(
File "/usr/local/lib/python3.8/dist-packages/aiogram/bot/bot.py", line 97, in get_updates
result = await self.request(api.Methods.GET_UPDATES, payload)
File "/usr/local/lib/python3.8/dist-packages/aiogram/bot/base.py", line 208, in request
return await api.make_request(self.session, self.server, self.__token, method, data, files,
File "/usr/local/lib/python3.8/dist-packages/aiogram/bot/api.py", line 142, in make_request
raise exceptions.NetworkError(f"aiohttp client throws an error: {e.__class__.__name__}: {e}")
aiogram.utils.exceptions.NetworkError: Aiohttp client throws an error: ServerDisconnectedError: Server disconnected
Error #3
aiogram.dispatcher.dispatcher | 2022-07-31 03:43:00,577 | Cause exception while getting updates.
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/aiogram/dispatcher/dispatcher.py", line 383, in start_polling
updates = await self.bot.get_updates(
File "/usr/local/lib/python3.8/dist-packages/aiogram/bot/bot.py", line 97, in get_updates
result = await self.request(api.Methods.GET_UPDATES, payload)
File "/usr/local/lib/python3.8/dist-packages/aiogram/bot/base.py", line 208, in request
return await api.make_request(self.session, self.server, self.__token, method, data, files,
File "/usr/local/lib/python3.8/dist-packages/aiogram/bot/api.py", line 140, in make_request
return check_result(method, response.content_type, response.status, await response.text())
File "/usr/local/lib/python3.8/dist-packages/aiogram/bot/api.py", line 111, in check_result
raise exceptions.RetryAfter(parameters.retry_after)
aiogram.utils.exceptions.RetryAfter: Flood control exceeded. Retry in 5 seconds.
To repeat: these errors occur even when the bot's users are not active. I tried googling and searching the documentation, but couldn't figure it out…
What causes them? Do they affect the bot's operation at all? Can they be avoided?
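All three errors are transient, server-side conditions: a Telegram-side Bad Gateway, a keep-alive connection dropped by the server, and flood control asking the client to slow down. As the log lines themselves show, aiogram's dispatcher catches them and keeps polling, so they normally cost nothing beyond one missed polling cycle. If you want an extra safety net around the whole polling task, one option is a restart wrapper; the sketch below uses my own hypothetical names (run_with_restarts, poll_factory), it is not aiogram API:

```python
import asyncio

async def run_with_restarts(poll_factory, restart_delay=5.0, max_restarts=3):
    """Run poll_factory() and restart it whenever it raises,
    sleeping `restart_delay` seconds between restarts; give up
    (re-raise) after `max_restarts` consecutive failures."""
    restarts = 0
    while True:
        try:
            return await poll_factory()
        except Exception as exc:
            restarts += 1
            if restarts > max_restarts:
                raise
            print(f"polling died with {exc!r}; restart {restarts}")
            await asyncio.sleep(restart_delay)
```

For the RetryAfter case specifically, sleeping at least the number of seconds the exception reports (5 in the log above) before retrying is what Telegram expects.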
Background:
I've only just started working with crawlers, learning bit by bit from online tutorials, so my knowledge is still fairly shallow. If you know better approaches, please share them in the comments.
The initial crawler was very simple: it scraped a data list from a web page, and the page returned it in a simple dictionary format, so the response could be saved directly as a dictionary with .json().
After getting acquainted with async coroutines and finishing the exercises, I tried to convert the original crawler and ran into errors.
Initial code:
import asyncio
import aiohttp

async def download_page(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            result = await resp.text()

async def main(urls):
    tasks = []
    for url in urls:
        tasks.append(asyncio.create_task(download_page(url)))  # my Python version is 3.9.6
    await asyncio.wait(tasks)

if __name__ == '__main__':
    urls = [ url1, url2, …… ]
    asyncio.run(main(urls))
This is the most basic async coroutine skeleton. With a small amount of data it basically meets the requirements, but once the data volume grows a little, it starts throwing errors. The error messages I collected are as follows:
aiohttp.client_exceptions.ClientOSError: [WinError 64] The specified network name is no longer available.
Task exception was never retrieved
aiohttp.client_exceptions.ClientOSError: [WinError 121] The semaphore timeout period has expired
Task exception was never retrieved
aiohttp.client_exceptions.ServerDisconnectedError: Server disconnected
Task exception was never retrieved
The general message is that something went wrong with the network request or connection. As a beginner, I had no deeper understanding of network connections and hadn't studied async programming in depth, so it was hard for me to work out the cause of these errors.
Solution:
The big problem with the code above is that every task creates its own session; when too many sessions are created, the requests start to fail.
The fix is to create only one session and share it:
import asyncio
import aiohttp

async def download_page(url, session):
    async with session.get(url) as resp:
        result = await resp.text()

async def main(urls):
    tasks = []
    # Create the session once in main() and pass it to download_page as an argument.
    async with aiohttp.ClientSession() as session:
        for url in urls:
            tasks.append(asyncio.create_task(download_page(url, session)))
        # On Python 3.8+ (mine is 3.9.6), coroutines should be wrapped with
        # asyncio.create_task(); passing bare coroutines still runs but emits a warning.
        await asyncio.wait(tasks)

if __name__ == '__main__':
    urls = [ url1, url2, …… ]
    asyncio.run(main(urls))
Sharing one session solves the connection errors to a large extent when crawling a large amount of data.
Takeaway: stay flexible while programming; a small change can improve efficiency a lot.
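Beyond sharing one session, capping how many requests are in flight at once also helps against these disconnects. The sketch below is my own helper (gather_limited is not an asyncio or aiohttp API); aiohttp can achieve a similar effect natively with aiohttp.TCPConnector(limit=...):

```python
import asyncio

async def gather_limited(coros, limit=50):
    """Await all coroutines, keeping at most `limit` running at a time.
    Results come back in the same order as the input, like asyncio.gather."""
    sem = asyncio.Semaphore(limit)

    async def bounded(coro):
        async with sem:
            return await coro

    return await asyncio.gather(*(bounded(c) for c in coros))
```

In the crawler above you would replace `await asyncio.wait(tasks)` with something like `await gather_limited([download_page(u, session) for u in urls], limit=50)`; 50 is an arbitrary starting point to tune against the target server.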
aiogram.utils.exceptions.NetworkError: Aiohttp client throws an error: ServerDisconnectedError: Server disconnected
aiogram.utils.exceptions.NetworkError: Aiohttp client throws an error: ClientOSError: [Errno 32] Broken pipe
because our lousy world is such that something is always broken somewhere

Aleksandr Danilov
because our lousy world is such that something is…
When I start it, the bot runs for a random amount of time and then abruptly stops liking polling

Try creating a new bot or changing the old one's keys
If that fixes it, it means you started the same bot twice
If not, it means you messed something up in your code

No Profile
Try creating a new bot or changing the old one's k…
Changing the key didn't help
It's started like this:
if __name__ == "__main__":
    executor.start_polling(misc.dp, on_startup = setup_bot, skip_updates = True)
DB: Postgres / ORM: gino

BigPost Support
Changing the key didn't help
It's started like t…
misc.dp?
I'd suggest trying everything (removing/changing the arguments passed to start_polling)
You'll find the fix by trial and error

misc.dp = Dispatcher(
    bot,
    storage = MemoryStorage()
)
Would a webhook help in this case?

So your dp is in misc.py?
Or what

Why
Don't touch the loops

Tishka17
Why
Don't touch the loops
I have:
loop = asyncio.get_event_loop()
misc.dp = Dispatcher(
    bot,
    storage = MemoryStorage(),
    loop = loop
)
Will a webhook help me, or is there no point in setting one up?

Aleksandr Danilov
because our lousy world is such that something is…
my polling dies about once a day. while True/except saves me

BigPost Support
Will a webhook help me, or is there no point in se…
it will help; the bot just won't receive messages while the API is down, that's all

TitsFoxy
it will help; the bot just won't receive messages …
it's almost certainly not because of the API, because if it were going down, all the other bots would be going down too
Where do you rent your server?

BigPost Support
it's almost certainly not because of the API, beca…
my logs say the Telegram API stopped responding

TitsFoxy
my polling dies about once a day. while Tr…
why reinvent supervision when systemd already exists?
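For reference, the systemd route mentioned here amounts to a small unit file; Restart=on-failure is what replaces the hand-rolled while True/except loop. This is only a sketch with hypothetical names and paths (mybot.service, /opt/bot/main.py, user bot):

```ini
# /etc/systemd/system/mybot.service  (hypothetical name and paths)
[Unit]
Description=Telegram bot (long polling)
After=network-online.target

[Service]
User=bot
ExecStart=/usr/bin/python3 /opt/bot/main.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now mybot.service`; crash logs then land in `journalctl -u mybot`.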