hesperos.json
{
"name": "hesperos",
"display_name": "Hesperos application",
"visibility": "public",
"icon": "",
"categories": [],
"schema_version": "0.2.1",
"on_activate": null,
"on_deactivate": null,
"contributions": {
"commands": [
{
"id": "hesperos.make_manual_segmentation_widget",
"title": "Make Manual Segmentation Widget",
"python_name": "hesperos._manual_widget:ManualSegmentationWidget",
"short_title": null,
"category": null,
"icon": null,
"enablement": null
},
{
"id": "hesperos.make_oneshot_segmentation_widget",
"title": "Make OneShot Segmentation Widget",
"python_name": "hesperos._oneshot_widget:OneShotWidget",
"short_title": null,
"category": null,
"icon": null,
"enablement": null
}
],
"readers": null,
"writers": null,
"widgets": [
{
"command": "hesperos.make_manual_segmentation_widget",
"display_name": "Manual Segmentation or Correction",
"autogenerate": false
},
{
"command": "hesperos.make_oneshot_segmentation_widget",
"display_name": "OneShot Segmentation",
"autogenerate": false
}
],
"sample_data": null,
"themes": null,
"menus": {},
"submenus": null,
"keybindings": null,
"configuration": []
},
"package_metadata": {
"metadata_version": "2.1",
"name": "hesperos",
"version": "0.2.1",
"dynamic": null,
"platform": null,
"supported_platform": null,
"summary": "A plugin to manually or semi-automatically segment medical data and correct previous segmentation data.",
"description": "<div align=\"justify\">\n \n# HESPEROS PLUGIN FOR NAPARI\n\n[![License](https://img.shields.io/pypi/l/hesperos.svg?color=green)](https://github.com/DBC/hesperos/raw/main/LICENSE)\n[![PyPI](https://img.shields.io/pypi/v/hesperos.svg?color=green)](https://pypi.org/project/hesperos)\n[![Python Version](https://img.shields.io/pypi/pyversions/hesperos.svg?color=green)](https://python.org)\n[![tests](https://github.com/DBC/hesperos/workflows/tests/badge.svg)](https://github.com/DBC/hesperos/actions)\n[![codecov](https://codecov.io/gh/DBC/hesperos/branch/main/graph/badge.svg)](https://codecov.io/gh/DBC/hesperos)\n[![napari hub](https://img.shields.io/endpoint?url=https://api.napari-hub.org/shields/hesperos)](https://napari-hub.org/plugins/hesperos)\n\nA Napari plugin for pre-defined manual segmentation or semi-automatic segmentation with a one-shot learning procedure. The objective was to simplify the interface as much as possible so that the user can concentrate on annotation tasks using a pen on a tablet, or a mouse on a computer. \n \nThis [napari] plugin was generated with [Cookiecutter] using [@napari]'s [cookiecutter-napari-plugin] template.\n\n \n# Table of Contents\n- [Installation and Usage](#installation-and-usage)\n * [Automatic installation](#automatic-installation)\n * [Manual installation](#manual-installation)\n * [Upgrade Hesperos version](#upgrade-hesperos-version)\n- [Hesperos: *Manual Segmentation and Correction* mode](#hesperos-manual-segmentation-and-correction-mode)\n * [Import and adjust your image](#import-and-adjust-your-image-use-panel-1)\n * [Layer controls](#layer-controls)\n * [Annotate your image](#annotate-your-image-use-panel-2)\n * [Select slices of interest](#select-slices-of-interest-use-panel-3----only-displayed-for-the-shoulder-bones-category)\n * [Export annotations](#export-annotations-use-panel-3----or-4-if-the-shoulder-bones-category-is-selected)\n- [Hesperos: *OneShot Segmentation* mode](#hesperos-oneshot-segmentation-mode)\n * [Import and adjust your image](#import-and-adjust-your-image-use-panel-1)\n * [Annotate your image](#annotate-your-image-use-panel-2)\n * [Run automatic segmentation](#run-automatic-segmentation-use-panel-3)\n * [Export annotations](#export-annotations-use-panel-4)\n\n \n# Installation and Usage\nThe Hesperos plugin is designed to run on Windows (11 or less) and MacOS with Python 3.8 / 3.9 / 3.10.\n \n \n## Automatic installation\n1. Install [Anaconda] and unselect *Add to PATH*. Keep in mind the path where you choose to install anaconda.\n2. Only download the *script_files* folder for [Windows](/script_files/for_Windows/) or [Macos](/script_files/for_Windows/). \n3. Add your Anaconda path in these script files:\n 1. <ins>For Windows</ins>: \n Right click on the .bat files (for [installation](/script_files/for_Windows/install_hesperos_env.bat) and [running](/script_files/for_Windows/run_hesperos.bat)) and select *Modify*. Change *PATH_TO_ADD* with your Anaconda path. Then save the changes.\n > for exemple:\n ```\n anaconda_dir=C:\\Users\\chgodard\\anaconda3\n ```\n 2. <ins>For Macos</ins>:\n 1. Right click on the .command files (for [installation](/script_files/for_Macos/install_hesperos_env.command) and [running](/script_files/for_Macos/run_hesperos.command)) and select *Open with TextEdit*. Change *PATH_TO_ADD* with your Anaconda path. Then save the changes.\n > for exemple:\n ```\n source ~/opt/anaconda3/etc/profile.d/conda.sh\n ```\n 2. 
In your terminal, change the permissions so that the .command files can be run (replace *PATH* with the path of your .command files): \n ``` \n chmod u+x PATH/install_hesperos_env.command \n chmod u+x PATH/run_hesperos.command \n ```\n4. Double-click on the **install_hesperos_env file** to create a virtual environment in Anaconda with Python 3.9 and Napari 0.4.14. \n > /!\\ The Hesperos plugin is not yet compatible with Napari versions newer than 0.4.14.\n5. Double-click on the **run_hesperos file** to run Napari from your virtual environment.\n6. In Napari: \n 1. Go to *Plugins/Install Plugins...*\n 2. Search for \"hesperos\" (it can take a while to load).\n 3. Install the **hesperos** plugin.\n 4. When the installation is done, close Napari. A restart of Napari is required to finish the plugin installation.\n7. Double-click on the **run_hesperos file** to run Napari.\n8. In Napari, use the Hesperos plugin with *Plugins/hesperos*.\n\n \n## Manual installation\n1. Install [Anaconda] and unselect *Add to PATH*.\n2. Open an Anaconda command prompt.\n3. Create a virtual environment with Python 3.8 / 3.9 / 3.10:\n ```\n conda create -n hesperos_env python=3.9\n ```\n4. Install the required Python packages in your virtual environment:\n ```\n conda activate hesperos_env\n conda install -c conda-forge napari=0.4.14 \n conda install -c anaconda pyqt\n pip install hesperos\n ```\n > /!\\ The Hesperos plugin is not yet compatible with napari versions newer than 0.4.14.\n5. Launch Napari:\n ```\n napari\n ```\n \n## Upgrade Hesperos version\n1. Double-click on the **run_hesperos file** to run Napari. \n2. In Napari: \n 1. Go to *Plugins/Install Plugins...*\n 2. Search for \"hesperos\" (it can take a while to load).\n 3. Click on *Update* if a new version of Hesperos has been found. You can check the latest version of Hesperos on the [Napari Hub](https://www.napari-hub.org/plugins/hesperos).\n 4. When the installation is done, close Napari. A restart of Napari is required to finish the plugin installation.
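\n> *Note*: the upgrade can also be done from the command line (a minimal equivalent of the GUI steps above, assuming the *hesperos_env* environment created during the manual installation):\n ```\n conda activate hesperos_env\n pip install --upgrade hesperos\n ```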
\n \n \n# Hesperos: *Manual Segmentation and Correction* mode\n \n The ***Manual Segmentation and Correction*** mode of the Hesperos plugin is a simplified and optimized interface for basic 2D manual segmentation of several structures in a 3D image, using a mouse or a stylus with a tablet.\n\n \n <img src=\"https://user-images.githubusercontent.com/49953723/193262711-710673f2-5b53-4eb6-a7c7-6dada9d28d92.PNG\" width=\"1000px\"/>\n \n## Import and adjust your image *(use Panel 1)*\nThe Hesperos plugin can be used with Digital Imaging and Communications in Medicine (DICOM), Neuroimaging Informatics Technology Initiative (NIfTI) or Tagged Image File Format (TIFF) images. To improve performance, use images stored on your own disk.\n\n1. To import data *(see the loading sketch after this list)*:\n - use the <img src=\"https://user-images.githubusercontent.com/49953723/193262334-3c28e733-36ab-4504-9a6d-acd298c15994.PNG\" width=\"100px\"/> button for *(.tiff, .tif, .nii or .nii.gz)* image files.\n - use the <img src=\"https://user-images.githubusercontent.com/49953723/193262624-149a4461-fbac-4498-a2b8-33bdd88e3a9f.PNG\" width=\"100px\"/> button for a DICOM series. /!\\ Folders containing multiple DICOM series are not supported. \n2. After the image has loaded, a slider appears that allows you to zoom in/out: <img src=\"https://user-images.githubusercontent.com/49953723/193262738-7e6e68a9-0890-4e18-92a9-dbf2168a6bb5.PNG\" width=\"100px\"/>. Zooming is also possible with the <img src=\"https://user-images.githubusercontent.com/49953723/193262725-7d4f7b09-d119-45cf-a9d4-c42c5f848c1a.PNG\" width=\"25px\"/> button in the layer controls panel. \n3. If your data is a DICOM series, you can directly change the contrast of the image (according to Hounsfield units):\n - by choosing one of the two predefined contrasts, *CT bone* or *CT Soft*, in <img src=\"https://user-images.githubusercontent.com/49953723/193262708-17e1d301-0a9a-497f-9feb-613e69893c06.PNG\" width=\"150px\"/>.\n - by creating a custom default contrast with the <img src=\"https://user-images.githubusercontent.com/49953723/193262707-466917b4-b885-429b-9924-6481fa6410bb.PNG\" width=\"30px\"/> button and selecting *Custom Contrast*. Settings can be exported as a .json file with the <img src=\"https://user-images.githubusercontent.com/49953723/193262709-e1ad5321-1f60-4b60-a715-7c494670e1cd.PNG\" width=\"30px\"/> button.\n - by loading a saved default contrast with the <img src=\"https://user-images.githubusercontent.com/49953723/193262710-c9f66354-f896-4e59-8718-70e5509875af.PNG\" width=\"30px\"/> button and selecting *Custom Contrast*.\n4. In the bottom left corner of the application you can also: \n - <img src=\"https://user-images.githubusercontent.com/49953723/193262716-d9947eb9-d87f-4251-af76-2d906cd36018.PNG\" width=\"25px\"/>: change the order of the visible axes (for example, switch to sagittal, axial or coronal planes).\n - <img src=\"https://user-images.githubusercontent.com/49953723/193262717-12afbfb1-49ae-4a77-a83e-5bc99850734a.PNG\" width=\"25px\"/>: transpose the 3D image along the currently displayed axis.
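\n\n> As an illustration of how such volumes can be loaded (a minimal sketch using SimpleITK, one of the plugin's declared dependencies, not necessarily the plugin's own loading code; paths are hypothetical):\n ```\n import SimpleITK as sitk\n \n # Read a whole DICOM series from a folder (one series per folder).\n reader = sitk.ImageSeriesReader()\n reader.SetFileNames(reader.GetGDCMSeriesFileNames(\"data/my_dicom_series\"))\n image = reader.Execute()\n \n # NIfTI or TIFF files can be read directly instead:\n # image = sitk.ReadImage(\"data/volume.nii.gz\")\n \n # Convert to a numpy array ordered (z, y, x), as napari displays it.\n volume = sitk.GetArrayFromImage(image)\n print(volume.shape)\n ```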
\n\n\n## Layer controls\n\nWhen data is loaded, two layers are created: the *`image`* layer and the *`annotations`* layer. The order in the layer list corresponds to the overlay order. By clicking on these layers you will have access to different layer controls (at the top left corner of the application). All actions can be undone/redone with the Ctrl-Z/Shift-Ctrl-Z keyboard shortcuts. You can also hide a layer by clicking on its eye icon in the layer list.\n \n \n<ins>For the *image* layer:</ins>\n- *`opacity`*: a slider to control the global opacity of the layer.\n- *`contrast limits`*: a double slider to manually control the contrast of the image (same as the <img src=\"https://user-images.githubusercontent.com/49953723/193262708-17e1d301-0a9a-497f-9feb-613e69893c06.PNG\" width=\"150px\"/> option for DICOM data).\n \n\n<ins>For the *annotations* layer:</ins>\n- <img src=\"https://user-images.githubusercontent.com/49953723/193262718-30882770-59eb-4d2b-9cfe-8b88537560c4.PNG\" width=\"25px\"/>: erase brush to erase all labels at once (if *`preserve labels`* is not selected) or only the selected label (if *`preserve labels`* is selected).\n- <img src=\"https://user-images.githubusercontent.com/49953723/193262722-6bb6e6a4-ae7a-4ad1-b7f8-898e54ad62c3.PNG\" width=\"25px\"/>: paint brush with the same color as the *`label`* rectangle.\n- <img src=\"https://user-images.githubusercontent.com/49953723/193262719-f816b21e-78fd-4ba7-b415-30a461cbd652.PNG\" width=\"25px\"/>: fill bucket with the same color as the *`label`* rectangle.\n- <img src=\"https://user-images.githubusercontent.com/49953723/193262725-7d4f7b09-d119-45cf-a9d4-c42c5f848c1a.PNG\" width=\"25px\"/>: select to zoom in and out with the mouse wheel (same as the zoom slider at the top right corner in Panel 1).\n- *`label`*: a colored rectangle representing the selected label. \n- *`opacity`*: a slider to control the global opacity of the layer. \n- *`brush size limits`*: a slider to control the size of the paint/erase brush. \n- *`preserve labels`*: if selected, all actions are applied only to the selected label (see the *`label`* rectangle); if not selected, actions are applied to all labels.\n- *`show selected`*: if selected, only the selected label is displayed on the layer; if not selected, all labels are displayed.\n \n \n>*Remark*: a second option for filling has been added:\n>1. Draw the edge of a closed shape with the paint brush mode. \n>2. Double-click to activate the fill bucket. \n>3. Click inside the closed area to fill it. \n>4. Double-click on the filled area to deactivate the fill bucket and reactivate the paint brush mode.\n \n\n## Annotate your image *(use Panel 2)*\n \nManual annotation and correction of the segmented file are done using the layer controls of the *`annotations`* layer. Click on the layer to display them. /!\\ You have to choose a structure to start annotating *(see 2.)*.\n1. To modify an existing segmentation, you can directly open the segmented file with the <img src=\"https://user-images.githubusercontent.com/49953723/193262702-df3b4fb8-63d0-4a1b-b1c9-8391cf8c3f22.PNG\" width=\"130px\"/> button. The file needs to have the same dimensions as the original image. \n > /!\\ Only .tiff, .tif, .nii and .nii.gz files are supported as segmented files. \n \n2. Choose a structure to annotate in the drop-down menu:\n - *`Fetus`*: to annotate pregnancy images.\n - *`Shoulder`*: to annotate bones and muscles for shoulder surgery.\n - *`Shoulder Bones`*: to annotate only a few bones for shoulder surgery.\n - *`Feta Challenge`*: to annotate fetal brain MRI with the same labels as the FeTA Challenge (see ADD WEB LINK).\n \n> When selecting a structure, a new panel appears with a list of elements to annotate. Each element has its own label and color. 
Select one element in the list to automatically activate the paint brush mode with the corresponding color (the color is updated in the *`label`* rectangle in the layer controls panel).\n \n3. All actions can be undone with the <img src=\"https://user-images.githubusercontent.com/49953723/193265848-8c458035-609a-433e-aa82-5d9588971425.PNG\" width=\"30px\"/> button or Ctrl-Z.\n \n4. If you need to work on a specific slice of your 3D image, but also have to explore the volume to understand some complex structures, you can use the locking option to facilitate the annotation task.\n - <ins>To activate the functionality</ins>: \n 1. Go to the slice of interest.\n 2. Click on the <img src=\"https://user-images.githubusercontent.com/49953723/193262706-40f3dbca-5589-406d-81e8-e150ae8bfab6.PNG\" width=\"30px\"/> button => this changes the button to <img src=\"https://user-images.githubusercontent.com/49953723/193262703-2b2ea2dc-24fa-438b-a75c-3aa42b210f53.PNG\" width=\"30px\"/> and saves the slice index.\n 3. Scroll along the z-axis to explore the data (with the mouse wheel or the slider under the image).\n 4. To go back to your slice of interest, click on the <img src=\"https://user-images.githubusercontent.com/49953723/193262703-2b2ea2dc-24fa-438b-a75c-3aa42b210f53.PNG\" width=\"30px\"/> button.\n - <ins>To deactivate the functionality</ins> (or change the locked slice index): \n 1. Go to the locked slice.\n 2. Click on the <img src=\"https://user-images.githubusercontent.com/49953723/193262703-2b2ea2dc-24fa-438b-a75c-3aa42b210f53.PNG\" width=\"30px\"/> button => this changes the button back to <img src=\"https://user-images.githubusercontent.com/49953723/193262706-40f3dbca-5589-406d-81e8-e150ae8bfab6.PNG\" width=\"30px\"/> and \"unlocks\" the slice.\n\n\n## Select slices of interest *(use Panel 3 -- only displayed for the Shoulder Bones category)*\n\nThis panel is only displayed if the *`Shoulder Bones`* category is selected. A maximum of 10 slices can be selected in a 3D image; the corresponding z-indexes will be embedded in the metadata when the segmentation file is exported.\n \n > /!\\ Metadata integration is available only for exported .tiff and .tif files and with the *`Unique`* save option. \n\n- <img src=\"https://user-images.githubusercontent.com/49953723/201736039-4ed10553-4a4b-4d5e-9d61-826dc139e437.png\" width=\"25px\"/>: to add the currently displayed z-index to the drop-down menu.\n- <img src=\"https://user-images.githubusercontent.com/49953723/201736105-a9c45264-412a-453b-8475-5a9ab856b07d.png\" width=\"25px\"/>: to remove the currently displayed z-index from the drop-down menu.\n- <img src=\"https://user-images.githubusercontent.com/49953723/201736152-319d8559-dbfc-4e52-aeb3-e8e34445f67a.png\" width=\"25px\"/>: to go to the z-index selected in the drop-down menu. The icon is checked when the currently displayed z-index matches the z-index selected in the drop-down menu.\n- <img src=\"https://user-images.githubusercontent.com/49953723/201733835-7bee453a-bc07-416f-8b95-aaf803683cac.png\" width=\"100px\"/>: a drop-down menu containing the list of selected z-indexes. Select a z-index from the list to work with it more easily.\n\n\n## Export annotations *(use Panel 3 -- or 4 if the Shoulder Bones category is selected)*\n \n1. Annotations can be exported as a .tif, .tiff, .nii or .nii.gz file with the <img src=\"https://user-images.githubusercontent.com/49953723/201735102-113f64b7-4da4-40ee-b058-9900268d270d.png\" width=\"95px\"/> button in one of the two following saving modes *(see the sketch after this list)*:\n - *`Unique`*: segmented data is exported as a single 3D image with the corresponding label ids (1-2-3-...). This file can be re-opened in the application.\n - *`Several`*: segmented data is exported as several binary 3D images (0 or 255), one for each label id.\n2. <img src=\"https://user-images.githubusercontent.com/49953723/193262699-95758bdb-ac40-439b-8959-d924781a2368.PNG\" width=\"100px\"/>: delete annotation data.\n3. *`Automatic segmentation backup`*: if selected, the segmentation data is automatically exported as a single 3D image whenever the displayed slice changes.\n > /!\\ This process can slow down the display if the image is large.
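\n\n> To make the two saving modes concrete, a minimal sketch (illustrative only, using numpy and tifffile, two of the plugin's declared dependencies; file names are hypothetical):\n ```\n import numpy as np\n import tifffile\n \n # Toy label volume with label ids 0 (background), 1 and 2.\n labels = np.zeros((50, 256, 256), dtype=np.uint8)\n labels[10:20, 50:100, 50:100] = 1\n labels[30:40, 120:180, 120:180] = 2\n \n # *Unique* mode: one 3D image keeping the label ids (1-2-3-...).\n tifffile.imwrite(\"segmentation_unique.tif\", labels)\n \n # *Several* mode: one binary (0 or 255) 3D image per label id.\n for label_id in (1, 2):\n     binary = np.where(labels == label_id, 255, 0).astype(np.uint8)\n     tifffile.imwrite(\"segmentation_label_%d.tif\" % label_id, binary)\n ```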
\n\n# Hesperos: *OneShot Segmentation* mode\n \n The ***OneShot Segmentation*** mode of the Hesperos plugin is a 2D version of the VoxelLearning method implemented in DIVA (see [our GitHub](https://github.com/DecBayComp/VoxelLearning) and the latest article [Gu\u00e9rinot, C., Marcon, V., Godard, C., et al. (2022). New Approach to Accelerated Image Annotation by Leveraging Virtual Reality and Cloud Computing. _Frontiers in Bioinformatics_. doi:10.3389/fbinf.2021.777101](https://www.frontiersin.org/articles/10.3389/fbinf.2021.777101/full)).\n \n\nThe principle is to accelerate segmentation without prior information. The procedure, sketched in code below the figure, consists of:\n1. A **rapid tagging** of a few pixels in the image with two labels: one for the structure of interest (positive tags) and one for the other structures (negative tags).\n2. A **training** of a simple random forest classifier with these tagged pixels and their features (mean, gaussian, ...).\n3. An **inference** on all the pixels of the image to automatically segment the structure of interest. The output is a probability image (0-255) of belonging to a specific class.\n4. Iterative corrections if needed.\n \n<img src=\"https://user-images.githubusercontent.com/49953723/193262714-8699cd59-3825-4d71-b27a-bbcad1e36d55.PNG\" width=\"1000px\"/>
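\n\n> A minimal sketch of this procedure on a single 2D slice (illustrative only, using scikit-image and scikit-learn, two of the plugin's declared dependencies; the exact features used by Hesperos may differ):\n ```\n import numpy as np\n from skimage import filters\n from sklearn.ensemble import RandomForestClassifier\n \n image = np.random.rand(128, 128)             # one 2D slice\n tags = np.zeros(image.shape, dtype=np.uint8)\n tags[60:64, 60:64] = 1                       # a few positive tags (structure of interest)\n tags[5:9, 5:9] = 2                           # a few negative tags (other structures)\n \n # Per-pixel features: raw intensity plus gaussian blurs at several scales.\n features = np.stack([image] + [filters.gaussian(image, sigma=s) for s in (1, 2, 4)], axis=-1)\n \n # Train a simple random forest on the tagged pixels only.\n clf = RandomForestClassifier(n_estimators=50)\n clf.fit(features[tags > 0], tags[tags > 0])\n \n # Inference on every pixel: probability of belonging to the structure of interest.\n flat = features.reshape(-1, features.shape[-1])\n proba = clf.predict_proba(flat)[:, 0].reshape(image.shape)  # column 0 = label 1\n ```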
\n\n \n## Import and adjust your image *(use Panel 1)*\n \nSame panel as in the *Manual Segmentation and Correction* mode *(see [panel 1 description](#import-and-adjust-your-image-use-panel-1))*.\n \n \n## Annotate your image *(use Panel 2)*\n \nAnnotations and corrections on the segmented file are done using the layer controls of the *`annotations`* layer. Click on the layer to display them. Only two labels are available: *`Structure of interest`* and *`Other`*. \n\nThe rapid manual tagging step of the one-shot learning method aims to associate different features with each label.\n<img align=\"right\" src=\"https://user-images.githubusercontent.com/49953723/193262735-5dce56fb-8a2c-4aeb-9ee7-9727122d8089.PNG\" width=\"220px\"/> \nTo achieve that, the user has to:\n- with the label *`Structure of interest`*, tag a few pixels of the structure of interest.\n- with the label *`Other`*, tag the greatest diversity of uninteresting structures in the 3D image (avoid tagging too many pixels).\n\n> See the example image with the *`Structure of interest`* label in red and the *`Other`* label in cyan.\n \n1. To modify an existing segmentation, you can directly open the segmented file with the <img src=\"https://user-images.githubusercontent.com/49953723/193266118-dfd241f6-8f0b-4cb9-94e7-5e74a3ce8b6e.PNG\" width=\"130px\"/> button. The file needs to have the same dimensions as the original image. \n > /!\\ Only .tiff, .tif, .nii and .nii.gz files are supported as segmented files. \n2. All actions can be undone with the <img src=\"https://user-images.githubusercontent.com/49953723/193265848-8c458035-609a-433e-aa82-5d9588971425.PNG\" width=\"30px\"/> button or Ctrl-Z.\n\n \n## Run automatic segmentation *(use Panel 3)*\n\nFrom the previously tagged pixels, features are extracted and used to train a basic classifier: a Random Forest Classifier (RFC). When the training of the pixel classifier is done, it is applied to each pixel of the complete volume and outputs the probability that the pixel belongs to the structure of interest.\n\nTo run training and inference, click on the <img src=\"https://user-images.githubusercontent.com/49953723/193262731-719c226a-f7c5-4252-b2bb-fade4ab7f5b3.PNG\" width=\"115px\"/> button:\n1. You will be asked to save a .pckl file, which corresponds to the model.\n2. A new status will appear under *Panel 4*: *`Computing...`*. You must wait for the message to change to *`Ready`* before doing anything in the application (otherwise the application may freeze or crash).\n3. When the processing is done, two new layers will appear:\n - the *`probabilities`* layer, which corresponds to the direct probability (between 0 and 1) that a pixel belongs to the structure of interest. This layer is disabled by default; to enable it, click on its eye icon in the layer list.\n - the *`segmented probabilities`* layer, which corresponds to a binary image obtained from the probability image, normalized and thresholded with a value set manually with the *`Probability threshold`* slider: <img src=\"https://user-images.githubusercontent.com/49953723/193262730-6998c8a5-92f1-4ff1-bbf5-6972a373afd2.PNG\" width=\"80px\"/> *(see the sketch below)*.\n\n>Remark: If the output is not perfect, you have two ways to improve the result:\n>1. Add some tags with the paint brush to take uninteresting structures into consideration, or add information in critical areas of your structure of interest (such as thin sections). Then run the training and inference process again. /!\\ This will overwrite all previous segmentation data.\n>2. Export your segmentation data and re-open it with the *Manual Segmentation and Correction* mode of Hesperos to manually erase or add annotations.
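\n\n> The normalization and thresholding behind the *`segmented probabilities`* layer amount to the following (a sketch of the idea, reusing the probability image from the training sketch above):\n ```\n import numpy as np\n \n proba = np.random.rand(128, 128)  # per-pixel probability (0-1)\n \n # Probability image normalized to 0-255, as exported by the plugin.\n proba_255 = (proba * 255).astype(np.uint8)\n \n # Binary image (0 or 255) for a given *Probability threshold* slider value.\n threshold = 0.5\n segmented = np.where(proba >= threshold, 255, 0).astype(np.uint8)\n ```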
<img src=\"https://user-images.githubusercontent.com/49953723/193266056-9514b648-b3e0-43f5-901a-a45fa1390f00.PNG\" width=\"100px\"/>: delete annotation data.\n\n\n# License\n\nDistributed under the terms of the [BSD-3] license, **Hesperos** is a free and open source software.\n\n \n[napari]: https://github.com/napari/napari\n[Cookiecutter]: https://github.com/audreyr/cookiecutter\n[@napari]: https://github.com/napari\n[BSD-3]: http://opensource.org/licenses/BSD-3-Clause\n[cookiecutter-napari-plugin]: https://github.com/napari/cookiecutter-napari-plugin\n\n[tox]: https://tox.readthedocs.io/en/latest/\n[pip]: https://pypi.org/project/pip/\n[PyPI]: https://pypi.org/\n[Anaconda]: https://www.anaconda.com/products/distribution#Downloads\n[VoxelLearning]: https://github.com/DecBayComp/VoxelLearning\n",
"description_content_type": "text/markdown",
"keywords": null,
"home_page": "https://github.com/chgodard/hesperos",
"download_url": null,
"author": "Charlotte Godard",
"author_email": "[email protected]",
"maintainer": null,
"maintainer_email": null,
"license": "BSD-3-Clause",
"classifier": [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"Framework :: napari",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"License :: OSI Approved :: BSD License"
],
"requires_dist": [
"numpy",
"qtpy",
"tifffile",
"scikit-image",
"scikit-learn",
"SimpleITK",
"pandas",
"napari (<0.4.15)",
"napari-plugin-engine",
"imageio-ffmpeg",
"tox ; extra == 'testing'",
"pytest ; extra == 'testing'",
"pytest-cov ; extra == 'testing'",
"pytest-qt ; extra == 'testing'",
"napari ; extra == 'testing'",
"pyqt5 ; extra == 'testing'"
],
"requires_python": ">=3.8",
"requires_external": null,
"project_url": [
"Documentation, https://github.com/chgodard/hesperos/blob/main/README.md",
"Source Code, https://github.com/chgodard/hesperos"
],
"provides_extra": [
"testing"
],
"provides_dist": null,
"obsoletes_dist": null
},
"npe1_shim": false
}