
thread_safe_bus.state (getter) should not use locks #1891

Open
IngmarPaetzold opened this issue Nov 7, 2024 · 0 comments
Problem description

I have a small application that listens on one thread and may send on another (using asyncio). Before sending, I used to check the hardware state by evaluating the .state property. I use the thread-safe bus.
However, this leads to long wait phases, depending on the incoming message traffic.

It turns out that reading the .state property acquires both the send and the receive lock, and since the listener holds _lock_recv most of the time, this causes the delays.
In thread_safe_bus.py:

    @property
    def state(self):
        with self._lock_send, self._lock_recv:
            return self.__wrapped__.state

Proposed change

I am not very familiar with thread-safe communication in Python, but based on my C++ experience, the value of the .state property is only an enum value and reading it should be atomic anyway.
So from my point of view, I would either just return the value without any locks or, if locking is needed for some reason, use a separate state-access lock that is independent of _lock_send and _lock_recv (see the sketch below).
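A minimal sketch of the second option, written as a plain wrapper class rather than python-can's actual ObjectProxy-based ThreadSafeBus; the class and its attribute names (in particular _lock_state) are hypothetical and only illustrate the idea of a dedicated lock for state access:

    import threading


    class StateLockSketch:
        """Sketch: guard .state with its own lock instead of reusing the
        send/receive locks (names here are illustrative, not python-can's)."""

        def __init__(self, bus):
            self._bus = bus
            self._lock_send = threading.RLock()
            self._lock_recv = threading.RLock()
            self._lock_state = threading.RLock()  # independent of send/recv locks

        def send(self, msg, timeout=None):
            with self._lock_send:
                return self._bus.send(msg, timeout=timeout)

        def recv(self, timeout=None):
            with self._lock_recv:
                return self._bus.recv(timeout=timeout)

        @property
        def state(self):
            # A recv() blocked in the listener thread no longer delays this
            # query, because only the dedicated state lock is taken.
            with self._lock_state:
                return self._bus.state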

Workaround

I guess checking the bus state before each send() call is not the correct approach anyway. I switched to a plain send() and catch the CanError exception instead, which works fine without any delay (see the sketch below).
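A minimal sketch of that workaround, assuming a python-can bus; the interface and channel values are placeholders for whatever hardware is in use:

    import can

    # Placeholder configuration; adjust interface/channel to your setup.
    bus = can.ThreadSafeBus(interface="socketcan", channel="can0")

    msg = can.Message(arbitration_id=0x123, data=[0x01, 0x02], is_extended_id=False)

    try:
        # Send without querying .state first; a failed transmission raises instead.
        bus.send(msg, timeout=0.1)
    except can.CanError as error:
        print(f"Transmission failed: {error}")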
