
Thread safety issues #44

Open
k-dominik opened this issue Jun 25, 2019 · 1 comment

k-dominik commented Jun 25, 2019

Heya, I've been looking into using libdvid-cpp in ilastik again and ran into some threading issues.

Right away, the question: am I doing something wrong?

I have drafted the following minimal example (which expects the docker flyem example volume to be available at localhost:8000):

from libdvid.voxels import VoxelsAccessor
import concurrent.futures


server = 'localhost:8000'
uuid = '5cc94d532799484cb01788fcdb7cd9f0'
dname = "grayscale"
va = VoxelsAccessor(server, uuid, dname)  # single accessor, shared by all threads below


def get_slice(va, slice_index):
    print(slice_index)
    va[slice_index, :]
    return True


def doit():
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        futs = {executor.submit(get_slice, va, x): x for x in range(va.shape[0])}
        print([f.result() for f in futs])


doit()

This resulted in various failures on different runs of the above example:

Segmentation fault (core dumped)
double free or corruption (!prev)
Aborted (core dumped)
munmap_chunk(): invalid pointer
Aborted (core dumped)
corrupted size vs. prev_size
Aborted (core dumped)

I haven't had the time yet to look into it, but I wanted to make you aware that this issue exists.


stuarteberg commented Jun 26, 2019

Internally, the VoxelsAccessor class contains a DVIDNodeService. Unfortunately, that class is not intended to be thread-safe. (It contains structures from libcurl which are not thread-safe.)

One option is to create a new VoxelsAccessor for each thread, and be careful to use each accessor only within the thread it belongs to. There is no code in libdvid-cpp to help you with that, but default_dvid_session() (mentioned below) shows how I do it with requests.Session objects; a rough sketch of the same idea follows.
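
I haven't tested this variant, but the basic pattern would look something like the following (just a sketch using threading.local, reusing the server/uuid/dname names from your example):

import threading
from libdvid.voxels import VoxelsAccessor

_tls = threading.local()

def thread_local_accessor(server, uuid, dname):
    # Lazily create one VoxelsAccessor per thread and cache it,
    # so each accessor is only ever used by the thread that created it.
    if not hasattr(_tls, 'va'):
        _tls.va = VoxelsAccessor(server, uuid, dname)
    return _tls.va

def get_slice(slice_index):
    va = thread_local_accessor(server, uuid, dname)
    print(slice_index)
    va[slice_index, :]
    return True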

BTW, I don't use libdvid-cpp much these days. I've started using mostly pure-python wrappers for my dvid calls. They're currently part of a larger library called neuclease, but I plan to put them in a stand-alone python library when I get the chance.

So far, neuclease does not contain a VoxelsAccessor class, but using the low-level calls is just as easy.

In case it's helpful, here's your example, adapted to use neuclease:

import concurrent.futures
from neuclease.dvid import fetch_volume_box, fetch_raw

server = 'localhost:8000'
uuid = '5cc94d532799484cb01788fcdb7cd9f0'
dname = "grayscale"

full_box = fetch_volume_box(server, uuid, dname)

def get_slice(z):
    print(z)
    slice_box = full_box.copy()
    slice_box[:,0] = (z, z+1)
    fetch_raw(server, uuid, dname, slice_box)
    return True

def doit():
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        futs = {executor.submit(get_slice, z): z for z in range(*full_box[:, 0])}
        print([f.result() for f in futs])

doit()

OK, fetch_raw() doesn't use DVIDNodeService. Instead, it uses a requests.Session, but that's not thread-safe either. So you still have to manage a pool of them somehow if you want to use multiple threads. In neuclease, you have the option of passing your own session (if you're managing the session pool yourself), or you can leave the session argument unspecified, in which case a default session is chosen from a global pool in a thread-safe manner. (The default is provided via the dvid_api_wrapper decorator, which calls default_dvid_session().)

There may be a more elegant way of keeping a pool of Session objects, but this has worked for me so far. I think the new contextvars feature in Python 3.7 is supposed to be useful in cases like this, but I've never tried it.
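
For illustration, here's roughly what such a thread-local session pool looks like. This is just a sketch: the helper name is made up, and it assumes the session= keyword that dvid_api_wrapper adds, as described above. With neuclease's built-in default you don't need to do this yourself.

import threading
import requests
from neuclease.dvid import fetch_raw

_tls = threading.local()

def get_thread_session():
    # One requests.Session per thread, created lazily and reused,
    # so no Session object is ever shared between threads.
    if not hasattr(_tls, 'session'):
        _tls.session = requests.Session()
    return _tls.session

def get_slice(z):
    slice_box = full_box.copy()
    slice_box[:, 0] = (z, z+1)
    # Pass this thread's own session explicitly (assumes the session= keyword
    # provided by the dvid_api_wrapper decorator, mentioned above).
    fetch_raw(server, uuid, dname, slice_box, session=get_thread_session())
    return True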
