Fixtures are mostly encapsulated each in their own task, so the order in which they run doesn't matter much as long as we respect the dependency graph. But there's an important exception to this "mostly": fixtures and the test all share a contextvars context, so if two different fixtures each modify a contextvar in a way that expects stack discipline to be respected, surprising things can happen.
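To make "stack discipline" concrete, here is a minimal stdlib-only sketch (not from the original report): ContextVar.reset(token) restores whatever value the variable had before that particular set() call, so resetting tokens out of LIFO order silently jumps the variable back past values that other code is still relying on.

import contextvars

var = contextvars.ContextVar("var", default=None)

token_outer = var.set("outer")   # e.g. set by fixture A's setup
token_inner = var.set("inner")   # e.g. set by fixture B's setup

# Tearing A down before B restores the value from *before* A's set() -- None --
# even though B's teardown hasn't run and B still expects to see "inner".
var.reset(token_outer)
print(var.get())  # None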
I ran into this with a test that did:
import pytest
import trio_asyncio


@pytest.fixture
async def trio_asyncio_loop():
    async with trio_asyncio.open_loop() as loop:
        yield loop


@pytest.fixture
async def some_service():
    # Opens a second trio-asyncio loop instead of requesting trio_asyncio_loop.
    async with trio_asyncio.open_loop() as loop:
        task = loop.create_task(<something>)  # <something>: asyncio coroutine, elided
        yield
        task.cancel()
        await trio_asyncio.run_aio_future(task)
        await trio_asyncio.aio_as_trio(some_aio_cleanup)()  # some_aio_cleanup: elided


async def test_something(trio_asyncio_loop, some_service):
    ...
(A more correct solution would be to make some_service request the trio_asyncio_loop fixture instead of opening another loop of its own, but imagine that the fixtures came from different places and the author of test_something wasn't thinking about the fact that some_service used a trio-asyncio loop.)
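For concreteness, a sketch of that "more correct" shape (using a hypothetical stand-in coroutine, since the real service isn't shown above):

import asyncio

import pytest


async def some_asyncio_service():
    # Hypothetical stand-in for whatever asyncio coroutine the real fixture runs.
    await asyncio.Event().wait()


@pytest.fixture
async def some_service(trio_asyncio_loop):
    # Depend on trio_asyncio_loop and reuse its loop instead of calling
    # trio_asyncio.open_loop() a second time, so only one loop contextvar
    # set/reset pair exists per test.
    task = trio_asyncio_loop.create_task(some_asyncio_service())
    yield
    task.cancel()
    # ...remaining cancellation/cleanup as in the original fixture...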
If we ran fixture setup and teardown in the same order that a synchronous pytest would, there's no problem here; the some_service trio-asyncio loop is properly nested within the trio_asyncio_loop one, and teardown works fine. But since setups and teardowns can execute in parallel, and Trio task scheduling order is nondeterministic, it's possible to get the order:
setup trio_asyncio_loop
setup some_service
run test
teardown trio_asyncio_loop
teardown some_service
This causes the trio-asyncio loop contextvar to be set to None (the value that it had upon entry to trio_asyncio_loop) while some_service is still relying on having a loop available. Various exceptions result as asyncio.get_event_loop() starts returning None.
I think the right fix to this is to stop running fixtures in parallel; we can run them in the same order that pytest would if they were synchronous instead. It doesn't seem like the parallelism is buying us much, and it can create confusing outcomes in situations like this.
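Roughly, the sequential behavior would look something like this hand-wavy sketch (not pytest-trio's actual code), where each fixture is modeled as an async generator that yields once between setup and teardown: setups are driven one at a time in pytest's resolution order, and teardowns are unwound in reverse, so contextvar set/reset pairs stay properly nested.

async def run_with_fixtures(fixture_gens, test):
    started = []
    try:
        for gen in fixture_gens:          # setups run one at a time, in order
            await gen.asend(None)         # run up to the fixture's `yield`
            started.append(gen)
        await test()
    finally:
        for gen in reversed(started):     # teardowns run in reverse (LIFO) order
            try:
                await gen.asend(None)     # resume past the `yield` to finalize
            except StopAsyncIteration:
                pass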