As a result, the developed OpenDR toolkit will also enable cooperative human-robot interaction as well as the development of cognitive mechatronics, where sensing and actuation are closely coupled with cognitive systems, thus contributing to two more core technologies beyond AI and Cognition.
OpenDR aims to develop, train, deploy and evaluate deep learning models that improve the technical capabilities of the core technologies beyond the current state of the art.
## Where to start?
You can start by [installing](docs/reference/installation.md) the OpenDR toolkit.
OpenDR can be installed in the following ways:
1. By *cloning* this repository (CPU/GPU support)
2. Using *pip* (CPU/GPU support only)
3. Using *docker* (CPU/GPU support)
You can find detailed installation instructions in the [documentation](docs/reference/installation.md).
## What does OpenDR provide?
OpenDR provides an intuitive and easy-to-use **[Python interface](src/opendr)**, a **[C API](src/c_api) for performance-critical applications**, a wealth of **[usage examples and supporting tools](projects)**, as well as **ready-to-use [ROS nodes](projects/opendr_ws)**.
OpenDR is built to support the [Webots Open Source Robot Simulator](https://cyberbotics.com/), and it also closely follows industry standards such as the [ONNX model format](https://onnx.ai/) and the [OpenAI Gym interface](https://gym.openai.com/).
## How can I start using OpenDR?
You can find detailed documentation in the OpenDR [wiki](https://github.com/opendr-eu/opendr/wiki).
The main point of reference after installing the toolkit is the [tools index](docs/reference/index.md).
Starting from there, you can find detailed documentation for all the tools included in OpenDR.
- If you are interested in ready-to-use ROS nodes, then you can directly jump to our [ROS1](projects/opendr_ws) and [ROS2](projects/opendr_ws_2) workspaces.
- If you are interested in ready-to-use examples, then you can check out the [projects](projects/python) folder, which contains examples and tutorials for the [perception](projects/python/perception), [control](projects/python/control), [simulation](projects/python/simulation) and [hyperparameter tuning](projects/python/utils) tools.
- If you want to explore our C API, then you can have a look at the provided [C demos](projects/c_api).
## How can I interface with OpenDR?
OpenDR is built upon Python.
Therefore, the main OpenDR interface is written in Python and is available through the [opendr](src/opendr) package.
Furthermore, OpenDR provides [ROS1](projects/opendr_ws) and [ROS2](projects/opendr_ws_2) interfaces, as well as a [C interface](projects/c_api).
Note that you can use as many tools as you wish at the same time, since there is no software limitation on the number of tools that can run concurrently.
However, hardware limitations (e.g., GPU memory) might restrict the number of tools that can run at any given moment.
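As a quick illustration, the sketch below shows how a single tool is typically used through the Python interface. It assumes the pose estimation learner and the `openpose_default` pretrained model; treat the exact class, method and model names as assumptions and check the [tools index](docs/reference/index.md) for the tools available in your installation.

```python
# Minimal sketch of the OpenDR Python interface, assuming the pose estimation tool.
from opendr.engine.data import Image
from opendr.perception.pose_estimation import LightweightOpenPoseLearner

# Instantiate the learner; use device="cuda" if a GPU is available.
pose_estimator = LightweightOpenPoseLearner(device="cpu")

# Download pretrained weights into the current directory and load them.
pose_estimator.download(path=".")
pose_estimator.load("openpose_default")

# Run inference on a single image (replace the path with your own image).
img = Image.open("input_image.jpg")
poses = pose_estimator.infer(img)
print(poses)
```

Other tools largely follow the same learner pattern (`fit()`, `eval()`, `infer()`, `save()`, `load()`), so switching tools is mostly a matter of changing the import and the model you load.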
## Roadmap
OpenDR has the following roadmap:
- **v3.0 (2023)**: Active perception-enabled deep learning tools for improved robotic perception
## How to contribute
Please follow the instructions provided in the [wiki](https://github.com/opendr-eu/opendr/wiki).
## How to cite us
If you use OpenDR for your research, please cite the following paper, which introduces the OpenDR architecture and design:
**File: `docs/reference/customize.md`**
For example, users can readily use the existing [ROS nodes](projects/opendr_ws).
Furthermore, note that several tools can be combined within a single ROS node, as showcased in the [face recognition ROS node](projects/opendr_ws/src/perception/scripts/face_recognition.py).
You can use these nodes as a template for customizing the toolkit to your own needs.
The rest of this document includes instructions for:
1. [Building docker images using the provided docker files](#building-custom-docker-images)
2. [Customizing existing docker images](#customizing-existing-docker-images)
3. [Changing the behavior of ROS nodes](#changing-the-behavior-of-ros-nodes)
4. [Building docker images that do not contain the whole toolkit](#building-docker-images-that-do-not-contain-the-whole-toolkit)
## Building custom docker images
```
sudo docker run --gpus all -p 8888:8888 opendr/opendr-toolkit:cuda
```
## Customizing existing docker images
Building docker images from scratch can take a lot of time, especially for embedded systems without cross-compilation support.
If you need to modify a docker image without rebuilding it (e.g., to change some source files inside it or to add support for custom pipelines), you can simply start from the image that you are interested in, make the changes, and use the [docker commit](https://docs.docker.com/engine/reference/commandline/commit/) command. In this way, the changes that you have made will be saved in a new image.
## Changing the behavior of ROS nodes
ROS nodes are provided as examples that demonstrate how various tools can be used.
As a result, customization might be needed in order to make them appropriate for your specific needs.
Currently, all nodes support changing the input/output topics.
However, if you need to change anything else (e.g., load a custom model), then you should appropriately modify the source code of the nodes.
This is very easy, since the OpenDR Python API is used in all of the provided nodes.
You can refer to the [Python API documentation](https://github.com/opendr-eu/opendr/blob/master/docs/reference/index.md) for more details on the tool that you are interested in.
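For example, in a typical ROS1 node the input/output topics are wired in the constructor, so changing them (or exposing them as arguments) only touches a few lines. The snippet below is a hypothetical, simplified sketch rather than the exact code of the shipped nodes; the class, attribute and topic names are placeholders.

```python
# Hypothetical, simplified node skeleton showing where input/output topics are wired.
import rospy
from sensor_msgs.msg import Image as ROSImage


class PoseEstimationNodeSketch:
    """Illustrative skeleton only; the shipped OpenDR nodes differ in detail."""

    def __init__(self,
                 input_rgb_image_topic="/usb_cam/image_raw",              # change to your camera topic
                 output_rgb_image_topic="/opendr/image_pose_annotated"):  # change to your output topic
        self.image_subscriber = rospy.Subscriber(input_rgb_image_topic, ROSImage, self.callback)
        self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROSImage, queue_size=1)

    def callback(self, msg):
        # Convert the ROS image, run the OpenDR learner, publish the annotated result.
        pass
```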
### Loading a custom model
Loading a custom model in a ROS node is very easy.
First, locate the node that you want to modify (e.g., [pose estimation](../../projects/opendr_ws/src/perception/scripts/pose_estimation.py)).
Then, search for the line where the learner loads the model (i.e., calls the `load()` function).
For the aforementioned node, this happens at [line 63](../../projects/opendr_ws/src/perception/scripts/pose_estimation.py#L63).
Then, replace the path passed to the `load()` function with the path to your custom model.
You can also optionally remove the call to the `download()` function (e.g., [line 62](../../projects/opendr_ws/src/perception/scripts/pose_estimation.py#L62)) to make the node start up faster.
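Putting the two steps together, the change might look as follows. This is a paraphrased sketch rather than the node's exact lines, and `/path/to/your/custom_model` is a placeholder for the path to your own trained model.

```python
# Paraphrased sketch of the model-loading step in a pose estimation node.
from opendr.perception.pose_estimation import LightweightOpenPoseLearner

pose_estimator = LightweightOpenPoseLearner(device="cpu")

# Before (paraphrased): download the pretrained weights and load them.
# pose_estimator.download(path=".", verbose=True)
# pose_estimator.load("openpose_default")

# After: skip the download and point load() at your own model instead.
# "/path/to/your/custom_model" is a placeholder for your trained model's path.
pose_estimator.load("/path/to/your/custom_model")
```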
## Building docker images that do not contain the whole toolkit
To build custom docker images that do not contain the whole toolkit, you should follow these steps:
1. Identify the tools that you are using and note them down.
2. Start from a clean clone of the repository and remove all modules under `src/opendr` that you are not using.
To this end, use the `rm` command from the root folder of the toolkit and write down the commands that you are issuing.
Please note that you should NOT remove the `engine` package.
3. Add the `rm` commands that you have issued to the dockerfile (e.g., to the main [dockerfile](https://github.com/opendr-eu/opendr/blob/master/Dockerfile)), after the `WORKDIR` command and before the `RUN ./bin/install.sh` command.
4. Build the docker image as usual.
By removing the tools that you are not using, you are also removing the corresponding `requirements.txt` file.
In this way, the `install.sh` script will not pull and install the corresponding dependencies, allowing for smaller and more lightweight docker images.
Things to keep in mind:
1. ROS Noetic is manually installed by the installation script.
If you want to install another version, you should modify both `install.sh` and `Makefile`.
2. `mxnet`, `torch` and `detectron` are manually installed by the `install.sh` script if you have set `OPENDR_DEVICE=gpu`.
If you do not need these dependencies, then you should manually remove them.
**File: `docs/reference/fmp_gmapping.md`**
Traditional *SLAM* algorithm for estimating a robot's position and a 2D, grid-based map of the environment from planar LiDAR scans.
Based on OpenSLAM GMapping, with additional functionality for computing the closed-form Full Map Posterior Distribution.
For more details on the launchers and tools, see the [FMP_Eval Readme](../../projects/python/perception/slam/full_map_posterior_gmapping/src/fmp_slam_eval/README.md).
For more details on the actual SLAM algorithm and its ROS node wrapper, see the [SLAM_GMapping Readme](../../projects/python/perception/slam/full_map_posterior_gmapping/src/slam_gmapping/README.md).
## Demo Usage
A demo ROSBag for a square corridor can be found in the Map Simulator submodule in `src/map_simulator/rosbags/`, as well as preconfigured ***roslaunch*** files.
Other ROSBags can be easily generated with the map simulator script, either from new custom scenarios or from the test configuration files in the `src/map_simulator/scenarios/robots/` directory.
For more information on how to define custom test scenarios and convert them to ROSBags, see the [Map_Simulator Readme](../../projects/python/perception/slam/full_map_posterior_gmapping/src/map_simulator/README.md).