Commit 795b199

Improvements based on use case partner feedback (opendr-eu#293)
* removed wrongly placed file
* Update README.md
* restructured projects
* Update README.md
* Create README.md
* removed __init__.py
* Create issues.md
* Update issues.md
* Update customize.md
* Update README.md
* Apply suggestions from code review
  Co-authored-by: Stefania Pedrazzi <[email protected]>
* Link updated
* Update index.md
* Updated released index
* Update index.md
* Fix old link
* Update test_license.py
* Updated projects path
* Updated links to projects
* Updated links to projects
* Fixed projects path
* pep8 fixes
* Categorized nodes according to their input
* Custom model loading example
* Issues for colab
* Update issues.md
* Update projects/opendr_ws/README.md
  Co-authored-by: Stefania Pedrazzi <[email protected]>

Co-authored-by: Stefania Pedrazzi <[email protected]>
1 parent 021b948 commit 795b199

735 files changed (+256, -130 lines)

.gitignore (+1 -1)

```diff
@@ -70,4 +70,4 @@ temp/
 # ROS interface
 projects/opendr_ws/.catkin_workspace
 projects/opendr_ws/devel/
-projects/control/eagerx/eagerx_ws/
+projects/python/control/eagerx/eagerx_ws/
```

README.md (+31 -8)

```diff
@@ -7,11 +7,13 @@ ______________________________________________________________________

 <p align="center">
 <a href="https://www.opendr.eu/">Website</a> •
-<a href="#about">About</a> •
 <a href="docs/reference/installation.md">Installation</a> •
-<a href="#using-opendr-toolkit">Using OpenDR toolkit</a> •
-<a href="projects">Examples</a> •
+<a href="projects/python">Python Examples</a> •
+<a href="projects/opendr_ws">ROS1</a> •
+<a href="projects/opendr_ws_2">ROS2</a> •
+<a href="projects/c_api">C API</a> •
 <a href="docs/reference/customize.md">Customization</a> •
+<a href="docs/reference/issues.md">Known Issues</a> •
 <a href="#roadmap">Roadmap</a> •
 <a href="CHANGELOG.md">Changelog</a> •
 <a href="LICENSE">License</a>
@@ -34,19 +36,40 @@ OpenDR focuses on the **AI and Cognition core technology** in order to provide t
 As a result, the developed OpenDR toolkit will also enable cooperative human-robot interaction as well as the development of cognitive mechatronics where sensing and actuation are closely coupled with cognitive systems thus contributing to another two core technologies beyond AI and Cognition.
 OpenDR aims to develop, train, deploy and evaluate deep learning models that improve the technical capabilities of the core technologies beyond the current state of the art.

-## Installing OpenDR Toolkit

+## Where to start?
+
+You can start by [installing](docs/reference/installation.md) the OpenDR toolkit.
 OpenDR can be installed in the following ways:
 1. By *cloning* this repository (CPU/GPU support)
 2. Using *pip* (CPU/GPU support only)
 3. Using *docker* (CPU/GPU support)

-You can find detailed installation instruction in the [documentation](docs/reference/installation.md).

-## Using OpenDR toolkit
+## What does OpenDR provide?
+
 OpenDR provides an intuitive and easy to use **[Python interface](src/opendr)**, a **[C API](src/c_api) for performance critical application**, a wealth of **[usage examples and supporting tools](projects)**, as well as **ready-to-use [ROS nodes](projects/opendr_ws)**.
 OpenDR is built to support [Webots Open Source Robot Simulator](https://cyberbotics.com/), while it also extensively follows industry standards, such as [ONNX model format](https://onnx.ai/) and [OpenAI Gym Interface](https://gym.openai.com/).
-You can find detailed documentation in OpenDR [wiki](https://github.com/tasostefas/opendr_internal/wiki), as well as in the [tools index](docs/reference/index.md).
+
+## How can I start using OpenDR?
+
+You can find detailed documentation in the OpenDR [wiki](https://github.com/opendr-eu/opendr/wiki).
+The main point of reference after installing the toolkit is the [tools index](docs/reference/index.md).
+Starting from there, you can find detailed documentation for all the tools included in OpenDR.
+
+- If you are interested in ready-to-use ROS nodes, then you can directly jump to our [ROS1](projects/opendr_ws) and [ROS2](projects/opendr_ws_2) workspaces.
+- If you are interested in ready-to-use examples, then you can check out the [projects](projects/python) folder, which contains examples and tutorials for [perception](projects/python/perception), [control](projects/python/control), [simulation](projects/python/simulation) and [hyperparameter tuning](projects/python/utils) tools.
+- If you want to explore our C API, then you can explore the provided [C demos](projects/c_api).
+
+## How can I interface with OpenDR?
+
+OpenDR is built upon Python.
+Therefore, the main OpenDR interface is written in Python and it is available through the [opendr](src/opendr) package.
+Furthermore, OpenDR provides [ROS1](projects/opendr_ws) and [ROS2](projects/opendr_ws_2) interfaces, as well as a [C interface](projects/c_api).
+Note that you can use as many tools as you wish at the same time, since there is no software limitation on the number of tools that can run concurrently.
+However, hardware limitations (e.g., GPU memory) might restrict the number of tools that can run at any given moment.
+
+

 ## Roadmap
 OpenDR has the following roadmap:
@@ -55,7 +78,7 @@ OpenDR has the following roadmap:
 - **v3.0 (2023)**: Active perception-enabled deep learning tools for improved robotic perception

 ## How to contribute
-Please follow the instructions provided in the [wiki](https://github.com/tasostefas/opendr_internal/wiki).
+Please follow the instructions provided in the [wiki](https://github.com/opendr-eu/opendr/wiki).

 ## How to cite us
 If you use OpenDR for your research, please cite the following paper that introduces OpenDR architecture and design:
```
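
The "How can I interface with OpenDR?" section added above points to the [opendr](src/opendr) Python package as the main entry point. As a rough illustration (not part of this commit), a typical learner workflow looks like the sketch below; the module path, `LightweightOpenPoseLearner` class and `"openpose_default"` model name follow the pose estimation tool's documentation, so verify the exact names in the [tools index](docs/reference/index.md).

```python
# Minimal sketch of the OpenDR Python interface (illustration only, not part of this commit).
# Names follow the pose estimation tool's documented API; verify them in docs/reference/index.md.
from opendr.engine.data import Image
from opendr.perception.pose_estimation import LightweightOpenPoseLearner

pose_estimator = LightweightOpenPoseLearner(device="cuda")  # use device="cpu" if no GPU is available
pose_estimator.download(path=".", verbose=True)             # fetch the pretrained weights
pose_estimator.load("openpose_default")                     # load them into the learner

img = Image.open("input.jpg")                               # any RGB image on disk
poses = pose_estimator.infer(img)                           # returns a list of detected poses
for pose in poses:
    print(pose)
```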

dependencies/parse_dependencies.py (+1 -1)

```diff
@@ -65,7 +65,7 @@ def read_ini_key(key, summary_file):
 # Loop through tools and extract dependencies
 if not global_dependencies:
     opendr_home = os.environ.get('OPENDR_HOME')
-    for dir_to_walk in ['src', 'projects/control/eagerx']:
+    for dir_to_walk in ['src', 'projects/python/control/eagerx']:
         for subdir, dirs, files in os.walk(os.path.join(opendr_home, dir_to_walk)):
             for filename in files:
                 if filename == 'dependencies.ini':
```
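
The hunk above only swaps one entry in the directory list that `parse_dependencies.py` walks. If you want to sanity-check which `dependencies.ini` files are picked up after the `projects/` to `projects/python/` move, a small stand-alone sketch of the same walk (a hypothetical helper, not part of the toolkit) is:

```python
import os

# Hypothetical check script (not part of the toolkit): print every dependencies.ini
# that the updated directory list would visit.
opendr_home = os.environ.get('OPENDR_HOME', '.')
for dir_to_walk in ['src', 'projects/python/control/eagerx']:
    for subdir, dirs, files in os.walk(os.path.join(opendr_home, dir_to_walk)):
        for filename in files:
            if filename == 'dependencies.ini':
                print(os.path.join(subdir, filename))
```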

docs/reference/customize.md (+44 -1)

````diff
@@ -6,7 +6,10 @@ For example, users can readily use the existing [ROS nodes](projects/opendr_ws),
 Furthermore, note that several tools can be combined within a ROS node, as showcased in [face recognition ROS node](projects/opendr_ws/src/perception/scripts/face_recognition.py).
 You can use these nodes as a template for customizing the toolkit to your own needs.
 The rest of this document includes instructions for:
-1. Building docker images using the provided docker files.
+1. [Building docker images using the provided docker files](#building-custom-docker-images)
+2. [Customizing existing docker images](#customizing-existing-docker-images)
+3. [Changing the behavior of ROS nodes](#changing-the-behavior-of-ros-nodes)
+4. [Building docker images that do not contain the whole toolkit](#building-docker-images-that-do-not-contain-the-whole-toolkit)


 ## Building custom docker images
@@ -56,3 +59,43 @@ and
 ```
 sudo docker run --gpus all -p 8888:8888 opendr/opendr-toolkit:cuda
 ```
+
+## Customizing existing docker images
+Building docker images from scratch can take a lot of time, especially for embedded systems without cross-compilation support.
+If you need to modify a docker image without rebuilding it (e.g., for changing some source files inside it or adding support for custom pipelines), then you can simply start from the image that you are interested in, make the changes, and use the [docker commit](https://docs.docker.com/engine/reference/commandline/commit/) command. In this way, the changes that have been made will be saved in a new image.
+
+
+## Changing the behavior of ROS nodes
+ROS nodes are provided as examples that demonstrate how various tools can be used.
+As a result, customization might be needed in order to make them appropriate for your specific needs.
+Currently, all nodes support changing the input/output topics.
+However, if you need to change anything else (e.g., load a custom model), then you should appropriately modify the source code of the nodes.
+This is very easy, since the OpenDR Python API is used in all of the provided nodes.
+You can refer to the [Python API documentation](https://github.com/opendr-eu/opendr/blob/master/docs/reference/index.md) for more details on the tool that you are interested in.
+
+### Loading a custom model
+Loading a custom model in a ROS node is very easy.
+First, locate the node that you want to modify (e.g., [pose estimation](../../projects/opendr_ws/src/perception/scripts/pose_estimation.py)).
+Then, search for the line where the learner loads the model (i.e., calls the `load()` function).
+For the aforementioned node, this happens at [line 63](../../projects/opendr_ws/src/perception/scripts/pose_estimation.py#L63).
+Then, replace the path passed to the `load()` function with the path to your custom model.
+You can also optionally remove the call to the `download()` function (e.g., [line 62](../../projects/opendr_ws/src/perception/scripts/pose_estimation.py#L62)) to make the node start up faster.
+
+
+## Building docker images that do not contain the whole toolkit
+To build custom docker images that do not contain the whole toolkit you should follow these steps:
+1. Identify the tools that you are using and note them.
+2. Start from a clean clone of the repository and remove all modules under [src/opendr] that you are not using.
+To this end, use the `rm` command from the root folder of the toolkit and write down the commands that you are issuing.
+Please note that you should NOT remove the `engine` package.
+3. Add the `rm` commands that you have issued to the dockerfile (e.g., the main [dockerfile](https://github.com/opendr-eu/opendr/blob/master/Dockerfile)) after the `WORKDIR` command and before the `RUN ./bin/install.sh` command.
+4. Build the dockerfile as usual.
+
+By removing the tools that you are not using, you are also removing the corresponding `requirements.txt` file.
+In this way, the `install.sh` script will not pull and install the corresponding dependencies, resulting in smaller and more lightweight docker images.
+
+Things to keep in mind:
+1. ROS Noetic is manually installed by the installation script.
+If you want to install another version, you should modify both `install.sh` and `Makefile`.
+2. `mxnet`, `torch` and `detectron` are manually installed by the `install.sh` script if you have set `OPENDR_DEVICE=gpu`.
+If you do not need these dependencies, then you should manually remove them.
````
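
The "Loading a custom model" subsection added above boils down to editing two calls in the node. A hedged sketch of those calls, using the pose estimation learner referenced there (the exact lines differ per node, and `/path/to/my_custom_model` is a placeholder for your own trained model):

```python
# Sketch of the customization described above (placeholder paths, not the node's exact code).
from opendr.perception.pose_estimation import LightweightOpenPoseLearner

pose_estimator = LightweightOpenPoseLearner(device="cuda")

# Default behaviour (roughly): download the pretrained model, then load it.
# pose_estimator.download(path=".", verbose=True)
# pose_estimator.load("openpose_default")

# Customized behaviour: skip download() and point load() at your own model directory.
pose_estimator.load("/path/to/my_custom_model")  # placeholder path to your trained model
```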

docs/reference/detr.md (+4 -4)

```diff
@@ -230,10 +230,10 @@ Documentation on how to use this node can be found [here](../../projects/opendr_
 #### Tutorials and Demos

 A tutorial on performing inference is available
-[here](../../projects/perception/object_detection_2d/detr/inference_tutorial.ipynb).
-Furthermore, demos on performing [training](../../projects/perception/object_detection_2d/detr/train_demo.py),
-[evaluation](../../projects/perception/object_detection_2d/detr/eval_demo.py) and
-[inference](../../projects/perception/object_detection_2d/detr/inference_demo.py) are also available.
+[here](../../projects/python/perception/object_detection_2d/detr/inference_tutorial.ipynb).
+Furthermore, demos on performing [training](../../projects/python/perception/object_detection_2d/detr/train_demo.py),
+[evaluation](../../projects/python/perception/object_detection_2d/detr/eval_demo.py) and
+[inference](../../projects/python/perception/object_detection_2d/detr/inference_demo.py) are also available.


```

docs/reference/eagerx.md (+4 -4)

````diff
@@ -24,21 +24,21 @@ Documentation is available online: [https://eagerx.readthedocs.io](https://eager

 **Prerequisites**: EAGERx requires ROS Noetic and Python 3.8 to be installed.

-1. **[demo_full_state](../../projects/control/eagerx/demos/demo_full_state.py)**:
+1. **[demo_full_state](../../projects/python/control/eagerx/demos/demo_full_state.py)**:
    Here, we wrap the OpenAI gym within EAGERx.
    The agent learns to map low-dimensional angular observations to torques.
-2. **[demo_pid](../../projects/control/eagerx/demos/demo_pid.py)**:
+2. **[demo_pid](../../projects/python/control/eagerx/demos/demo_pid.py)**:
    Here, we add a PID controller, tuned to stabilize the pendulum in the upright position, as a pre-processing node.
    The agent now maps low-dimensional angular observations to reference torques.
    In turn, the reference torques are converted to torques by the PID controller, and applied to the system.
-3. **[demo_classifier](../../projects/control/eagerx/demos/demo_classifier.py)**:
+3. **[demo_classifier](../../projects/python/control/eagerx/demos/demo_classifier.py)**:
    Instead of using low-dimensional angular observations, the environment now produces pixel images of the pendulum.
    In order to speed-up learning, we use a pre-trained classifier to convert these pixel images to estimated angular observations.
    Then, the agent uses these estimated angular observations similarly as in 'demo_2_pid' to successfully swing-up the pendulum.

 Example usage:
 ```bash
-cd $OPENDR_HOME/projects/control/eagerx/demos
+cd $OPENDR_HOME/projects/python/control/eagerx/demos
 python3 [demo_name]
 ```
````

docs/reference/fmp_gmapping.md (+3 -3)

```diff
@@ -3,9 +3,9 @@
 Traditional *SLAM* algorithm for estimating a robot's position and a 2D, grid-based map of the environment from planar LiDAR scans.
 Based on OpenSLAM GMapping, with additional functionality for computing the closed-form Full Map Posterior Distribution.

-For more details on the launchers and tools, see the [FMP_Eval Readme](../../projects/perception/slam/full_map_posterior_gmapping/src/fmp_slam_eval/README.md).
+For more details on the launchers and tools, see the [FMP_Eval Readme](../../projects/python/perception/slam/full_map_posterior_gmapping/src/fmp_slam_eval/README.md).

-For more details on the actual SLAM algorithm and its ROS node wrapper, see the [SLAM_GMapping Readme](../../projects/perception/slam/full_map_posterior_gmapping/src/slam_gmapping/README.md).
+For more details on the actual SLAM algorithm and its ROS node wrapper, see the [SLAM_GMapping Readme](../../projects/python/perception/slam/full_map_posterior_gmapping/src/slam_gmapping/README.md).

 ## Demo Usage
 A demo ROSBag for a square corridor can be found in the Map Simulator submodule in `src/map_simulator/rosbags/`, as well as preconfigured ***roslaunch***
@@ -25,4 +25,4 @@ This will start the following processes and nodes:

 Other ROSBags can be easily generated with the map simulator script from either new custom scenarios, or from the test configuration files in `src/map_simulator/scenarios/robots/` directory.

-For more information on how to define custom test scenarios and converting them to ROSBags, see the [Map_Simulator Readme](../../projects/perception/slam/full_map_posterior_gmapping/src/map_simulator/README.md).
+For more information on how to define custom test scenarios and converting them to ROSBags, see the [Map_Simulator Readme](../../projects/python/perception/slam/full_map_posterior_gmapping/src/map_simulator/README.md).
```

docs/reference/gem.md (+2 -2)

```diff
@@ -216,8 +216,8 @@ Parameters:

 #### Demo and Tutorial

-An inference [demo](../../projects/perception/object_detection_2d/gem/inference_demo.py) and
-[tutorial](../../projects/perception/object_detection_2d/gem/inference_tutorial.ipynb) are available.
+An inference [demo](../../projects/python/perception/object_detection_2d/gem/inference_demo.py) and
+[tutorial](../../projects/python/perception/object_detection_2d/gem/inference_tutorial.ipynb) are available.

 #### Examples
```

docs/reference/human-model-generation.md (+3 -3)

```diff
@@ -77,7 +77,7 @@ Documentation on how to use this node can be found [here](../../projects/opendr_
 #### Tutorials and Demos

 A demo in the form of a Jupyter Notebook is available
-[here](../../projects/simulation/human_model_generation/demos/model_generation.ipynb).
+[here](../../projects/python/simulation/human_model_generation/demos/model_generation.ipynb).

 #### Example

@@ -95,8 +95,8 @@ A demo in the form of a Jupyter Notebook is available
 OPENDR_HOME = os.environ["OPENDR_HOME"]

 # We load a full-body image of a human as well as an image depicting its corresponding silhouette.
-rgb_img = Image.open(os.path.join(OPENDR_HOME, 'projects/simulation/human_model_generation/demos', 'imgs_input/rgb/result_0004.jpg'))
-msk_img = Image.open(os.path.join(OPENDR_HOME, 'projects/simulation/human_model_generation/demos', 'imgs_input/msk/result_0004.jpg'))
+rgb_img = Image.open(os.path.join(OPENDR_HOME, 'projects/python/simulation/human_model_generation/demos', 'imgs_input/rgb/result_0004.jpg'))
+msk_img = Image.open(os.path.join(OPENDR_HOME, 'projects/python/simulation/human_model_generation/demos', 'imgs_input/msk/result_0004.jpg'))

 # We initialize learner. Using the infer method, we generate human 3D model.
 model_generator = PIFuGeneratorLearner(device='cuda', checkpoint_dir='./temp')
```
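
The snippet above stops right after the learner is constructed. Based on the comment in the diff ("Using the infer method, we generate human 3D model"), a continuation could look roughly like the line below; the `imgs_rgb`/`imgs_msk` keyword names are assumptions, so check the human-model-generation reference page for the exact `infer()` signature.

```python
# Hedged continuation of the example above; keyword names are assumptions,
# see docs/reference/human-model-generation.md for the exact infer() signature.
model_3D = model_generator.infer(imgs_rgb=[rgb_img], imgs_msk=[msk_img])
```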
