Client needs to be able to send data for annotation to remote-server (MONAILabel server) #154
It would be great if this interface were exactly the same as the AIAA server interface. That would allow the same clients (Slicer, OHIF, MITK, ...) to use either a MONAILabel or an AIAA server.
There is little difference between AIAA and MONAILabel, but MONAILabel also provides APIs to upload your own image. In your client, you can follow this to create similar behavior.
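As a rough sketch of what a client-side upload could look like: the helper below builds an HTTP request that ships a local image to the server's datastore. The `/datastore/` route and `image` query parameter are assumptions here; check your server's interactive API docs (the `/docs` page served by MONAILabel) for the actual upload route.

```python
import urllib.request

def build_upload_request(server: str, image_id: str, payload: bytes):
    """Build (but do not send) a request that uploads raw image bytes.

    NOTE: the route and query parameter are hypothetical; consult the
    server's OpenAPI docs for the real datastore endpoint.
    """
    url = f"{server}/datastore/?image={image_id}"
    req = urllib.request.Request(url, data=payload, method="PUT")
    req.add_header("Content-Type", "application/octet-stream")
    return req

# Actually sending it would then be:
#   with open("case01.nii.gz", "rb") as f:
#       urllib.request.urlopen(build_upload_request("http://127.0.0.1:8000", "case01", f.read()))
```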
The current MONAILabel plugin in Slicer fetches images based on active learning strategies; that is one reason it is tagged as an active learning solution, and one important difference from AIAA, which is an infer-only solution.
Let me add a quick option to push an image to the server from the client box. This should unblock remote users and let them run infer-type tasks on their own samples.
Ideal for our use case would be:
How is additional information from the user transferred to the MONAILabel server? Are just the coordinates of background/foreground landmarks sent via the REST API?
You can pass anything in params. Deepgrow sends some foreground/background points and uses them in the pre-transforms to prepare the input for the model/network.
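A minimal sketch of what such a params payload might look like on the client side, assuming the `foreground`/`background` key names described above (coordinates are voxel indices in the image; the surrounding request body shape is an assumption):

```python
import json

def make_deepgrow_params(foreground, background):
    """Assemble the clicks a client would send alongside an infer request.

    Key names follow the foreground/background pattern mentioned above;
    verify them against your server's deepgrow infer task.
    """
    return {
        "foreground": [list(p) for p in foreground],
        "background": [list(p) for p in background],
    }

params = make_deepgrow_params(foreground=[(64, 80, 32)], background=[(10, 12, 5)])
body = json.dumps({"params": params})  # serialized into the infer request
```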
@SachidanandAlle #156 looks awesome, thank you for implementing so quickly! @che85 could you test it? For the leaflet segmentation the inputs are: an image, a closed curve, and a fiducial list with a few named points. @SachidanandAlle how do you think these could be added to the MONAILabel module GUI? A few options:
Slicer supports generating a user interface for command-line executables from an XML file that lists all input and output parameters of an algorithm. The same mechanism is used in MITK, MeVisLab, MedInria, Gimias, and a number of other applications (see more information here; SEM interface XML files are in each subfolder here, for example this description file). Slicer also writes all the selected data objects to files and simple parameters (Booleans, floats, etc.) into a command-line argument list. This could be adapted to send parameters over the network.
The MONAILabel app could send a Qt designer .ui file that Slicer can display in a GUI section. We could implement automatic sending of GUI content to the server (e.g., if the GUI element is a slider then we would send a number, if the GUI element is a node selector then we would write that node to file and send that file).
The client could send a Python class to the MONAILabel Slicer module, which would contain a few methods, such as one for setting up GUI widgets and one for sending the data the user specified in the widgets to the server. Running Python code received over the network carries a somewhat higher security risk than just interpreting XML descriptor files, so this option may not be desirable in the long term.
@che85 I believe you guys are experts in Slicer, so you should be able to quickly add the additional interactions you are looking for to the MONAILabel plugin, which are specific to the interaction model you are building (using an annulus contour curve and landmarks). For a quick implementation (for project week), I suggest you clone the main branch and add your interaction logic there. Once you have the required contour curve and landmarks, you can send them in params (JSON/dict) and access them in the pre-transforms. If you are looking to send binary data, you can make use of the label field and access it in your pre-transforms. For example, Scribbles is a similar case, where another researcher from KCL added extra interactions as part of their use case.

@lassoan we can discuss how to make it flexible for others to add/inject new interactions into the MONAILabel plugin. This is where I need your help. Maybe we can discuss the ideas you suggested, their priority, and the importance of adding Segment Editor effect-style extensions for the MONAILabel plugin, or how to import that set of standard interactions into the plugin.
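To illustrate the "send it in params and access it in pre-transforms" idea, here is a sketch of a MONAI-style dictionary transform that normalizes client-sent interactions. The key names (`annulus_curve`, `landmarks`) and the assumption that params are merged into the data dict by the infer task are hypothetical:

```python
class ReadInteractionsd:
    """Hypothetical dictionary pre-transform that picks up a contour curve
    and named landmark points sent by the client in the request params."""

    def __init__(self, curve_key="annulus_curve", landmarks_key="landmarks"):
        self.curve_key = curve_key
        self.landmarks_key = landmarks_key

    def __call__(self, data):
        d = dict(data)
        # Normalize raw JSON lists into tuples for downstream transforms.
        d["curve_points"] = [tuple(p) for p in d.get(self.curve_key, [])]
        d["landmark_points"] = {
            name: tuple(p) for name, p in d.get(self.landmarks_key, {}).items()
        }
        return d
```

In a real app this would sit at the front of the infer task's pre-transform chain, before the image-loading and intensity transforms.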
In the base version we support only auto-segmentation and deepgrow models. The first one doesn't need any input, and the second one needs foreground/background clicks from the user.
@SachidanandAlle these are very good suggestions. I would be very happy to discuss them. Would you be available to talk today or Thursday afternoon (Eastern time) on the Slicer project week Discord?
I would be interested in joining that call as well. |
Same here. |
Same here. Is it too late to set a meeting for today? |
Can we set up a meeting then, @diazandr3s?
What about today at 19:00 BST? If that works for everyone, I'll send the Zoom link for the meeting.
19:00 BST is OK for both Sachi and me. |
Yes, happy to chat now as well |
OK, let's meet at the ai-assisted-annotation voice channel on discord. Hopefully @che85 can make it, too. |
I made some good progress, but I am getting an error on the server side:
@SachidanandAlle @diazandr3s @che85 this is really awesome. Thank you! |
- currently no training is supported ref: Project-MONAI/MONAILabel#154
For now, I believe we can close this issue. Feel free to reopen if needed. |
Is your feature request related to a problem? Please describe.
MONAILabel V1 only supports server-side hosting of data.
For clinical use, we want our research assistants to be able to use MONAILabel from individual computers. A workstation with a dedicated GPU was assigned as the remote server, receiving client requests to annotate data and returning the annotations to the client.
Describe the solution you'd like
Clients should be able to send images for automatic annotation to the MONAILabel server.
Additional context
JolleyLab/DeepHeart#1
JolleyLab/DeepHeart#2