RuntimeError: anext(): asynchronous generator is already running


Description

I'm trying to use aioitertools.zip_longest in my code, but I get an error in Python 3.8:

RuntimeError: anext(): asynchronous generator is already running

The same code works fine in Python 3.6 and 3.7.

Details

A minimal code example to reproduce the problem:

import asyncio
import aioitertools

async def producer_gen(n: int = 10):
    async for data in aioitertools.zip_longest(range(n), range(n), fillvalue=None):
        yield data

async def worker(w, producer):
    async for n in producer:
        print(w, n)
        await asyncio.sleep(0.1)

async def main():
    producer = producer_gen()
    tasks = [asyncio.ensure_future(worker(w, producer)) for w in range(5)]
    await asyncio.gather(*tasks)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    loop.close()

Expected output (approximately; the exact worker interleaving may vary):

0 (0, 0)
1 (1, 1)
2 (2, 2)
3 (3, 3)
4 (4, 4)
0 (5, 5)
2 (6, 6)
4 (7, 7)
1 (8, 8)
3 (9, 9)

Actual output:

0 (0, 0)
Traceback (most recent call last):
  File "C:/Users/Eugene/AppData/Roaming/JetBrains/PyCharm2020.1/scratches/scratch_67.py", line 20, in <module>
    loop.run_until_complete(main())
  File "C:ProgramsPythonPython38_x64libasynciobase_events.py", line 612, in run_until_complete
    return future.result()
  File "C:/Users/Eugene/AppData/Roaming/JetBrains/PyCharm2020.1/scratches/scratch_67.py", line 16, in main
    await asyncio.gather(*tasks)
  File "C:/Users/Eugene/AppData/Roaming/JetBrains/PyCharm2020.1/scratches/scratch_67.py", line 9, in worker
    async for n in producer:
RuntimeError: anext(): asynchronous generator is already running

If we do not use aioitertools.zip_longest, the code works correctly:

async def producer_gen(n: int = 10):
    for i in range(n):
        yield i

...

async def main():
    producer = producer_gen()
    tasks = [asyncio.ensure_future(worker(w, producer)) for w in range(5)]
    await asyncio.gather(*tasks)
...

Output:

0 0
1 1
2 2
3 3
4 4
0 5
2 6
4 7
1 8
3 9
  • OS: Windows/Linux
  • Python version: 3.8.2
  • aioitertools version: 0.7.0
  • Can you repro on master? yes
  • Can you repro in a clean virtualenv? yes
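
One way to avoid the error (not from the original report, just a sketch of a common pattern) is to let a single task drive the async generator and hand the produced items to the workers through an asyncio.Queue, so the generator is never re-entered while it is suspended:

import asyncio
import aioitertools

async def main(n: int = 10, workers: int = 5):
    queue: asyncio.Queue = asyncio.Queue(maxsize=1)

    async def producer():
        # Only this task iterates the generator, so it is never re-entered.
        async for data in aioitertools.zip_longest(range(n), range(n), fillvalue=None):
            await queue.put(data)
        for _ in range(workers):
            await queue.put(None)  # one stop sentinel per worker

    async def worker(w: int):
        while (item := await queue.get()) is not None:
            print(w, item)
            await asyncio.sleep(0.1)

    await asyncio.gather(producer(), *(worker(w) for w in range(workers)))

if __name__ == '__main__':
    asyncio.run(main())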

generator is already running #703

Comments

kopax commented Dec 16, 2016 •

I have had an error thrown in my console for weeks, and I cannot tell whether it is a real problem or not.

The code does what I expect it to do; it's just generating an error.

I have asked on Stack Overflow and here, and created an issue on mxstbr/react-boilerplate, but I haven't gotten any answers.

According to a certain Yassine Elouafi (maybe you)

The problem comes from the cancel effect. If I cancel it, I get this error: pageLogin generator is already running.

  • Do you have any information regarding this issue ?
  • Is there a way to mask the error in the console ?
  • Is there a workaround ?


Andarist commented Dec 16, 2016

Could you reproduce it on something like webpackbin? It would be really helpful to play around with a full working example.

Or you could publish your repo and prepare it for testing.

Andarist commented Dec 16, 2016

It seems that it is caught here for you — https://github.com/yelouafi/redux-saga/blob/fb4c69e7a9fe7e588a878b898ce6f342a1e9428a/src/internal/proc.js#L343-L347 . That's probably why it has no impact on your code, besides the nasty warning.

It seems, though, that it might be related to some bug on our side; a reproducible example would help a lot.

yelouafi commented Dec 16, 2016 •

@kopax As a temporary workaround, you can filter out that error by passing a custom logger to the middleware.

I was able to get the same error executing the following code in Chrome’s console

You'll see that the status of the generator is "running": the code tries to call next while a previous next is already in progress.

It could be due to some subtle race condition (it could also be a V8 bug like this one).

kopax commented Dec 17, 2016

@Andarist if you still need a gist example after @yelouafi's example, I will create one.

@yelouafi thanks for the temporary workaround; so we just disable the error and keep this open until we have more information?

Andarist commented Dec 17, 2016

@kopax
Yeah, it would still be cool to see a gist of a saga example, as @yelouafi's code probably shows what I would expect to happen. Within the saga environment that shouldn't happen.

Andarist commented Oct 9, 2017

This thread got a little bit stale, so I’m closing it. If you need to discuss this further, please just reply back here.

martinkadlec0 commented Jul 14, 2019 •

@Andarist I am still running into this even with redux-saga v1.

Try clicking on "go to bar" and you should get the error in the console.

It is somehow the combination of navigation + delay + takeLatest (or cancel, actually, I assume).

Edit: Andarist helped me figure this one out over Twitter. The problem is that history.push causes a synchronous imperative dispatch which the saga scheduler can't properly handle. The solution is to use the call effect instead of calling history.push directly.

levenleven commented Nov 26, 2019

@martinkadlec0 @Andarist Thank you, I had a similar issue and it was fixed by using the call effect. Maybe we could put this into https://redux-saga.js.org/docs/Troubleshooting.html? Though I'm not sure how to word the issue correctly.

Andarist commented Nov 26, 2019

@martinkadlec0's issue was about a synchronous cancellation attempt (from what I remember) on an already executing generator.

It looked something like this:

When you enter that inner generator you can't imperatively call dispatch while you are still executing it, because that would lead to the generator being called from outside with next / return / throw (and that happens here because FOO leads to cancellation of the currently running generator passed to takeLatest).

I admit that this is hard to put into words; I'd appreciate it if anyone could explain it in simple terms, and I would gladly accept a documentation PR describing this.

Source

RuntimeError: anext(): asynchronous generator is already running (Python 3.8) #23

Comments


espdev commented Sep 12, 2020

@jreese Do you have any thoughts on this?

amyreese commented Sep 13, 2020

As best as I can tell, this is just an unfortunate byproduct of changes that were made in the standard library around dealing with async generators. In this case, the problem is that you create the producer_gen generator once but attempt to iterate over it from five separate coroutines. The first worker starts to iterate the generator, at which point it awaits on something else and yields the event loop; then the next worker comes along and tries to iterate the generator, but it's in a state where it doesn't allow new iteration: https://github.com/python/cpython/blob/c75330605d4795850ec74fdc4d69aa5d92f76c00/Objects/genobject.c#L1562

You can see this happen without using anything from aioitertools:
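
The snippet is not reproduced in this copy of the thread, but a minimal sketch of the same failure mode (an async generator that awaits internally, iterated concurrently by more than one task; illustrative code, not the commenter's exact example) looks roughly like this:

import asyncio

async def producer_gen(n: int = 10):
    for i in range(n):
        await asyncio.sleep(0.1)  # the generator suspends while __anext__ is still in progress
        yield i

async def worker(w, producer):
    async for item in producer:
        print(w, item)

async def main():
    producer = producer_gen()
    # Two workers iterate the same generator; on Python 3.8+ the second
    # __anext__() call arrives while the first is still suspended and raises
    # "RuntimeError: anext(): asynchronous generator is already running".
    await asyncio.gather(worker(0, producer), worker(1, producer))

if __name__ == '__main__':
    asyncio.run(main())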

Source

Crashes after Runtime error #457

Comments

ECon87 commented May 25, 2021

Great idea. I love the idea of history and vim bindings, but for the life of me I cannot keep it running for more than 10 minutes. It keeps crashing after a runtime error.

The exception is:


dstei commented Jun 1, 2021 •

I have a very similar problem with the same AttributeError: 'NoneType' object has no attribute 'type'. For the complete error message, see below. This started occurring after updating several packages following a switch to conda-forge, most notably python 3.9.0 -> 3.9.4. (You can find the other changed packages below.) At the moment I almost exclusively use pandas, where it often happens on join operations, maybe connected to NaN values.

dstei commented Jun 1, 2021

Ok, so I rolled back to 3.9.0, and the error still occurs (the first time on a join, the second time on a to_csv). Unfortunately, it is not reproducible. Both times I repeated the steps to get to the same join/export operation, and both times it worked on the second try. (While trying to reproduce, I left out steps not necessary to get to the result, so it was not an exact reproduction.)

ECon87 commented Jun 2, 2021

The problem is still there for me too. And yes, it occurs randomly (not reproducible). A true shame, because I really, really liked the concept. I guess I will be moving back to ipython.

dstei commented Sep 15, 2021

A few weeks later, after updating to the latest release, I gave it another try – and the situation has improved for me. The very same error as quoted above (June 1) still occurs somewhat randomly, but it no longer forces ptpython to quit. Instead, just retrying the command that led to the error works. So it is not unusable anymore 🙂

Source

RuntimeError: Event Loop is Closed asyncio Fix

There are so many solutions to this Runtime error on Stack Overflow that don’t work. If you write Python code using asyncio, you either have or most likely will run into this runtime error at some point. I personally came across this error while running asynchronous API requests. When you’re running an async / await function using the asyncio library, you may get a RuntimeError: Event Loop is closed error even after your loop is already done running. This problem may not always affect the functionality of a single script but will affect the functionality of multiple scripts.

In this post we’ll go over:

  • The asyncio run vs loop.run_until_complete commands
  • The Runtime Error: Event loop is closed Problem
    • Error logs from the asyncio library
  • The Runtime Error: Event loop is closed Solution
    • Editing the Source Code of the asyncio library locally.

Click here for the source code you can copy and paste directly into your program → asyncio Runtime Error: Event Loop is closed Solution

The asyncio Library and run vs loop.run_until_complete

The aforementioned issue can come up when using either the asyncio.run() or the loop.run_until_complete() function. The documentation suggests using run() over run_until_complete() because run() handles setting up and closing the event loop for you. The run() command is essentially a wrapper around run_until_complete(). For a more thorough explanation, I suggest reading this guide on run() vs run_until_complete().
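
For illustration, here is a small sketch of the two approaches side by side (not taken from the post's source code):

import asyncio

async def fetch():
    await asyncio.sleep(0.1)
    return "done"

# Preferred: asyncio.run() creates a fresh event loop, runs the coroutine,
# shuts down async generators, and closes the loop for us.
print(asyncio.run(fetch()))

# Roughly what it wraps: manual loop management with run_until_complete().
loop = asyncio.new_event_loop()
try:
    print(loop.run_until_complete(fetch()))
finally:
    loop.close()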

Runtime Error: Event Loop is closed Problem

Usually, this error can be fixed by replacing the asyncio.run() command with asyncio.new_event_loop().run_until_complete(). However, when used in conjunction with the aiohttp library to make multiple API requests, that alternative will not work. This is for several reasons: first, a TCP connector problem; second, an SSL protocol problem; and third, an issue with the Proactor transport object. Let's take a look at what the error logs for this problem may look like prior to the RuntimeError: Event Loop is closed line.

Error Logs from the asyncio Library

Here is what the logs look like for this error:

Runtime Error: Event Loop is closed Solution

The problem with this error being raised isn't so much that we can't run the function. It's more that we get an error exit code, can't run multiple functions in sequence, and get yelled at by the command line. The asyncio library is not the most stable, so this error is not too surprising, but you know what is surprising? This error has been around for a while. I found so many solutions on Stack Overflow that don't work. Theoretically, run() should close the event loop gracefully; gracefully means no errors. Let's look at how we can change the source code to force a graceful shutdown of the function.

Edit the Source Code of the asyncio Library run Command Locally

How are we going to fix the error? We're going to wrap a modification around the _ProactorBasePipeTransport class's __del__ method, the method that raises the error when the transport is cleaned up after the event loop has closed. To do this, we're going to import the wraps decorator from functools and _ProactorBasePipeTransport from asyncio.proactor_events. Technically we don't have to import the Proactor class directly, but we'll import it for ease.

Let's create a helper function to silence the interpreter after we've already finished our loop. Our function will take one parameter: the function we're wrapping. Inside it, we'll use the wraps decorator to define a wrapper around the passed-in function. The inner wrapper function will take self as an argument, plus any number of positional and keyword arguments.

All the wrapper function does is try to execute the wrapped function as normal, except that when a RuntimeError occurs, it is not re-raised. After defining the functions, we'll replace the __del__ method of the Proactor transport class with the silenced version. Now the closing of the loop will not raise errors in the console.
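
The post links to the full source; a sketch of the wrapper it describes might look like this (the helper name and the check of the error message are illustrative details, not quoted from the post):

from functools import wraps
from asyncio.proactor_events import _ProactorBasePipeTransport

def silence_event_loop_closed(func):
    # Wrap a method so that the "Event loop is closed" RuntimeError is swallowed.
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)
        except RuntimeError as e:
            if str(e) != 'Event loop is closed':
                raise  # only silence the shutdown error, re-raise anything else
    return wrapper

# Replace the transport's __del__ so garbage collection after the loop has
# closed no longer prints the error.
_ProactorBasePipeTransport.__del__ = silence_event_loop_closed(
    _ProactorBasePipeTransport.__del__
)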

Summary fixing the RuntimeError: Event Loop is closed asyncio Error

The problem we encountered was the program's shutdown process raising an error after running asyncio.run() on an async event loop, when it shouldn't have. The solution we implemented doesn't directly solve the issue of the program not closing all its processes gracefully, but it does protect us from the problem. We directly imported the responsible object and wrote a wrapper around it with the functools Python library. The wrapper function silences the RuntimeError.


Source

“RuntimeError: This event loop is already running”; debugging aiohttp, asyncio and IDE “spyder3” in python 3.6.5 #7096

Comments

baumga34 commented May 9, 2018 •

Problem Description

https://stackoverflow.com/questions/50243393/runtimeerror-this-event-loop-is-already-running-debugging-aiohttp-asyncio-a
From the above link
I'm struggling to understand why I am getting the "RuntimeError: This event loop is already running" runtime error. I have tried to run snippets of code from https://aiohttp.readthedocs.io/en/stable/; however, I keep getting the same issue.

What steps reproduce the problem?

1. Run the following code in Spyder IDE

2. Run the code from above with cmd.exe

3. The above results match what I expect.

What is the expected output? What do you see instead?

Paste Traceback/Error Below (if applicable)

Versions

  • Spyder version: 3.2.8
  • Python version: 3.6.5
  • Qt version: 5.9.3
  • PyQt version: 5.9.2
  • Operating System name/version: Windows


baumga34 commented May 9, 2018

I am not using conda or anaconda.

ccordoba12 commented May 10, 2018

I really don’t know what happens in this case, sorry. We’ll try to take a look at it in the future, but I can’t make promises.

In the meantime, you can update IPython, tornado, jupyter_client and pyzmq to see if that solves this problem.

pgeorgan commented Jun 14, 2018 •

I'm having this same issue. Simply opening Spyder (latest version) spawns multiple Python 3.6 processes for me. This was not an issue until today, when I updated conda and its packages (--all). The code itself works fine if executed from the Mac Terminal command line.

ccordoba12 commented Jun 14, 2018

I think this happens because IPython and/or ipykernel doesn’t have support for the asyncio event loop:

So I think it’s not something we can fix in Spyder.

pgeorgan commented Jun 14, 2018

Hmm. It worked with IPython 5.3.0 just fine.

Disregard the comment about spawning Python processes. It appears this is part of the code-completion features in the Editor (as per your own answer on Stack Overflow that I just came across).

Nevertheless, why would it work with a previous version?

pgeorgan commented Jun 14, 2018

lucasgriff88 commented Aug 28, 2018

I am trying to learn how to use asyncio for asynchronous data acquisition. I found some example code from a tutorial:

I found that loop.close() never ran. A print command after loop.run_until_complete did not print. Lastly, loop.stop() results in "Kernel died, restarting".

ccordoba12 commented Oct 11, 2018

This should be fixed now with the latest IPython and ipykernel versions.

RajshekarReddy commented Dec 23, 2018 •

I got the issue resolved by using nest_asyncio:

pip install nest_asyncio

and adding the lines below to my file.
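
The lines are not quoted in this copy of the thread; nest_asyncio's documented setup is just two lines:

import nest_asyncio

nest_asyncio.apply()  # patches the running loop so run_until_complete() can be nested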

EEflow commented Aug 1, 2019

Any updates? This simple script is still giving an error using Spyder:

ccordoba12 commented Aug 1, 2019

Use nest_asyncio, as mentioned above by @RajshekarReddy.

EEflow commented Aug 1, 2019

Is that considered a permanent solution or a quick fix?

ccordoba12 commented Aug 1, 2019

@EEflow, you're welcome to submit a PR to spyder-kernels to add support for nest_asyncio, which would be similar to what we did for wurlitzer in spyder-ide/spyder-kernels#54. For now, the workaround is to activate nest_asyncio manually.

@baumga34, your comments are simply not constructive (you already did it once, so I don't understand why you're repeating it again). If you don't have anything substantial to add about Spyder, please refrain from making the same kind of comments in the future.

rafwaf commented Aug 28, 2019 •

***Edit: I'm sorry, I got here from a Google search and assumed it was for the asyncio package. I now realize it is for Spyder, so I'm definitely in the wrong place. Please ignore the rest of my message. I'll leave it below the line for completeness of records.

The above code seems to solve most problems but it still crashes the python kernel when I call
loop.stop() from within Jupyter Notebooks. Presumably because of the same underlying reason as it does for Spyder.

Does anyone have any thoughts, solutions or workarounds?

Source

Created on 2019-12-21 16:04 by twisteroid ambassador, last changed 2022-04-11 14:59 by admin.

Files

  • prettysocks.py (uploaded by twisteroid ambassador, 2020-10-20 17:41)
  • error_log_on_linux_python38.txt (uploaded by twisteroid ambassador, 2020-10-21 11:03)
Messages (5)
msg358774 — (view) Author: twisteroid ambassador (twisteroid ambassador) * Date: 2019-12-21 16:04
I have been getting these strange exceptions since Python 3.8 on my Windows 10 machine. The external symptoms are many errors like "RuntimeError: aclose(): asynchronous generator is already running" and "Task was destroyed but it is pending!".

By adding try..except..logging around my code, I found that my StreamReaders would raise GeneratorExit on readexactly(). Digging deeper, it seems like the following line in StreamReader._wait_for_data():

await self._waiter

would raise a GeneratorExit.

There are only two other methods on StreamReader that actually do anything to _waiter, set_exception() and _wakeup_waiter(), but neither of these methods was called before GeneratorExit is raised. In fact, both of these methods set self._waiter to None, so normally after _wait_for_data() does "await self._waiter", self._waiter is None. However, after GeneratorExit is raised, I can see that self._waiter is not None. So it seems the GeneratorExit came from nowhere.

I have not been able to reproduce this behavior in other code. This is with Python 3.8.1 on latest Windows 10 1909, using ProactorEventLoop. I don't remember seeing this ever on Python 3.7.
msg378931 — (view) Author: twisteroid ambassador (twisteroid ambassador) * Date: 2020-10-19 08:07
This problem still exists on Python 3.9 and latest Windows 10.

I tried to catch the GeneratorExit and turn it into a normal Exception, and things only got weirder from there. Often, several lines later, another await statement would raise another GeneratorExit, such as writer.write() or even asyncio.sleep(). It doesn't matter whether I catch the additional GeneratorExit or not; once code exits this coroutine, a RuntimeError('coroutine ignored GeneratorExit') is raised. And it doesn't matter what I do with this RuntimeError, the outermost coroutine's Task always generates an 'asyncio Task was destroyed but it is pending!' error message.

Taking a step back from this specific problem. Does a "casual" user of asyncio need to worry about handling GeneratorExits? Can I assume that I should not see GeneratorExits in user code?
msg379149 — (view) Author: twisteroid ambassador (twisteroid ambassador) * Date: 2020-10-20 17:41
I have attached a script that should be able to reproduce this problem. It's not a minimal reproduction, but hopefully it is easy enough to trigger.

The script is a SOCKS5 proxy server listening on localhost:1080. In its current form it does not need any external dependencies. Run it on Windows 10 + Python 3.9, set a browser to use the proxy server, and browse a little bit; it should soon start printing mysterious errors involving GeneratorExit.
msg379205 — (view) Author: twisteroid ambassador (twisteroid ambassador) * Date: 2020-10-21 11:03
Well, this is unexpected: the same code running on Linux is throwing mysterious GeneratorExit-related exceptions as well. I'm not sure whether this is the same problem, but this one has a clearer traceback. I will attach the full error log, but the most pertinent part seems to be this:


During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.8/contextlib.py", line 662, in __aexit__
    cb_suppress = await cb(*exc_details)
  File "/usr/lib/python3.8/contextlib.py", line 189, in __aexit__
    await self.gen.athrow(typ, value, traceback)
  File "/opt/prettysocks/prettysocks.py", line 332, in closing_writer
    await writer.wait_closed()
  File "/usr/lib/python3.8/asyncio/streams.py", line 376, in wait_closed
    await self._protocol._get_close_waiter(self)
RuntimeError: cannot reuse already awaited coroutine


closing_writer() is an async context manager that calls close() and await wait_closed() on the given StreamWriter. So it looks like wait_closed() can occasionally reuse a coroutine?
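
For reference, a helper matching that description might look roughly like this (a reconstruction from the message, not the actual prettysocks.py code):

import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def closing_writer(writer: asyncio.StreamWriter):
    # Yield the writer, then close it and wait for the close to complete,
    # even if the body raised.
    try:
        yield writer
    finally:
        writer.close()
        await writer.wait_closed()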
msg382027 — (view) Author: Matthew (matthew) Date: 2020-11-29 00:12
Let me preface this by declaring that I am very new to Python async so it is very possible that I am missing something seemingly obvious. That being said, I've been looking at various resources to try to understand the internals of asyncio and it hasn't led to any insights on this problem thus far.
-----------------

This all sounds quite similar to an experience I am dealing with. I'm working with pub/sub within aioredis, which internally uses a StreamReader with a function equivalent to readexactly. This all started from debugging "Task was destroyed but it is pending!", attempted fixes for which led to multiple "RuntimeError: aclose(): asynchronous generator is already running" errors.

I did the same thing, adding try/except blocks everywhere in my code to understand what was happening, and this led me to identifying that a regular async function would raise GeneratorExit during an await. However, even if I suppress this, the caller awaiting on this function would also raise a GeneratorExit. Suppressing this exception at the top level leads to an (to me) unexpected error: "coroutine ignored GeneratorExit".

I understand that GeneratorExit is raised in unfinished generators when they are garbage collected, to handle cleanup. And I understand that async functions are essentially generators in the sense that they yield when they await. So, if the entire coroutine were garbage collected, this might trigger GeneratorExit in each nested coroutine. However, from all of my logging I am sure that prior to the GeneratorExit, nothing returns upwards, so there should still be valid references to every object.

I'll include some errors below, in case they may be of relevance:

=== Exception in await of inner async function ===
Traceback (most recent call last):
  File ".../site-packages/uvicorn/protocols/http/httptools_impl.py", line 165, in data_received
    self.parser.feed_data(data)
  File "httptools/parser/parser.pyx", line 196, in httptools.parser.parser.HttpParser.feed_data
httptools.parser.errors.HttpParserUpgrade: 858

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ".../my_code.py", line 199, in wait_for_update
    return await self.waiter.wait_for_value()
GeneratorExit

=== Exception when suppressing GeneratorExit on the top level ===
Exception ignored in: <coroutine object parent_async_function at 0x0b...>
Traceback (most recent call last):
  File ".../site-packages/websockets/protocol.py", line 229, in __init__
    self.reader = asyncio.StreamReader(limit=read_limit // 2, loop=loop)
RuntimeError: coroutine ignored GeneratorExit
