Commit

Merge branch 'wallfollowing-improvements' of github.com:arpg-sophisticated/ar-tu-do into wallfollowing-improvements
Marcel Ebbrecht committed Aug 19, 2020
2 parents 038451d + b83e388 commit ae0357e
Showing 31 changed files with 24,088 additions and 285 deletions.
6 changes: 4 additions & 2 deletions Doxyfile
@@ -51,7 +51,8 @@ PROJECT_BRIEF = "f1tenth Project Group of Technical University Dortmund
# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
# the logo to the output directory.

PROJECT_LOGO = $(TRAVIS_BUILD_DIR)/doc/logo.png
PROJECT_LOGO = doc/logo.png
#PROJECT_LOGO = $(TRAVIS_BUILD_DIR)/doc/logo.png

# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
# into which the generated documentation will be written. If a relative path is
@@ -790,7 +791,8 @@ WARN_LOGFILE =
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.

INPUT = $(TRAVIS_BUILD_DIR)/docs/master
#INPUT = $(TRAVIS_BUILD_DIR)/docs/master
INPUT = ros_ws

# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
275 changes: 22 additions & 253 deletions README.md
@@ -4,35 +4,37 @@

This repository contains software for 1/10th scale autonomous race cars to compete in the [F1/10 competition](http://f1tenth.org/). It is developed by the Autonomous Racing Project Group of [TU Dortmund](https://ls12-www.cs.tu-dortmund.de/daes/).

![](doc/racing_example.gif "Racing with a wallfollowing algorithm")
![](doc/racing_example_real.gif "Racing with the real car in our lab")

## Newsflash

### Connector Monitor/Supply/Board
## Documentation

These pictures show how to connect these three parts correctly.
* For general information and documentation check out our [wiki page](https://github.com/arpg-sophisticated/ar-tu-do/wiki).
* For source code documentation check out the auto-generated [Doxygen documentation](https://arpg-sophisticated.github.io/doc/index.html).

![](doc/connector1.jpg "Connector Monitor")
![](doc/connector2.jpg "Connector Supply")
![](doc/connector3.jpg "Connector Board")
## Video of final results and testing

### USB2Serial
<img src="doc/ARPG.gif" alt="Final presentation video" width="850"/>

Important: Since we included the IMU, there are now two USB2Serial devices in the system. Please ensure that
- the VESC is always detected as ttyACM0
- the IMU as ttyACM1
## Simulation

On Ubuntu 16.04 we used a rules file in /etc/udev/rules.d to ensure this. After the update to 18.04 this was no longer needed, as the system assigns the devices according to the order of the USB plugs on the hub.
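On 16.04, such a rules file can pin stable device names by matching the adapters' USB vendor/product IDs instead of relying on enumeration order. This is only a sketch; the IDs below are placeholders, not the actual values of the VESC or IMU (query yours with `udevadm info -a /dev/ttyACM0`):

```
# /etc/udev/rules.d/99-racecar.rules (sketch; vendor/product IDs are placeholders)
SUBSYSTEM=="tty", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="yyyy", SYMLINK+="vesc"
SUBSYSTEM=="tty", ATTRS{idVendor}=="aaaa", ATTRS{idProduct}=="bbbb", SYMLINK+="imu"
```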
<img src="doc/racing_example.gif" alt="Racing with a wallfollowing algorithm" width="850"/>

## Features

We provide several LIDAR based driving algorithms:

- Fast and efficient wallfollowing based on fitting circles into the LIDAR scan
- Sensor fusion of ZED camera and LIDAR data
- Boxing of sensor data
- Voxel based obstacle detection (experimental)
- Computationally heavy code is written in C++
- Full telemetry logging and HUD display
- Report generation from telemetry data
- [ROS navigation stack](http://wiki.ros.org/navigation) based implementation that uses SLAM, a precalculated map and path planning
- Deep Reinforcement Learning ([Q-Learning](https://en.wikipedia.org/wiki/Q-learning) and [Policy Gradient](https://en.wikipedia.org/wiki/Reinforcement_learning#Direct_policy_search))
- Neural Networks with evolutionary training
- Depth camera support
- Video recording
- Huge set of display options in RViz
- Management script

Our software works on physical hardware and in a simulated environment using [Gazebo](http://gazebosim.org/).
Further features are:
@@ -42,235 +44,7 @@ Further features are:
- Teleoperation via keyboard, Xbox and Playstation controller
- Speedometer and Lap Timer
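The wallfollowing feature above rests on fitting circles to the LIDAR points along a wall, which yields the wall's curvature and distance. As an illustration only, a minimal algebraic (Kasa) least-squares circle fit could look like the following; this is a generic sketch, not necessarily the exact method or the `circle-fit` package the project uses:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit.

    points: (N, 2) array of x/y coordinates, e.g. LIDAR hits on a wall.
    Returns (center_x, center_y, radius).
    """
    pts = np.asarray(points, dtype=float)
    # Each point contributes one row of the linear system
    # 2*cx*x + 2*cy*y + c = x^2 + y^2, with c = r^2 - cx^2 - cy^2.
    A = np.column_stack([2.0 * pts[:, 0], 2.0 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, radius

# Noise-free points on a circle of radius 2 around (1, -1):
angles = np.linspace(0.2, 1.4, 30)
pts = np.column_stack([1 + 2 * np.cos(angles), -1 + 2 * np.sin(angles)])
cx, cy, r = fit_circle(pts)
print(round(cx, 3), round(cy, 3), round(r, 3))  # → 1.0 -1.0 2.0
```

For noise-free points the fit is exact; with real scan noise the Kasa fit is biased toward smaller radii, which is one reason dedicated fitting libraries exist.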


## Installation

You need to install the [Robot Operating System (ROS)](https://www.ros.org/) to use our software. We target [ROS Kinetic](http://wiki.ros.org/kinetic/Installation) on Ubuntu 16.04, but [ROS Melodic](http://wiki.ros.org/melodic/Installation) seems to work as well.

Note for VM users: Gazebo 7.0.0, which is installed with ROS Kinetic by default, [does not work](https://bitbucket.org/osrf/gazebo/issues/1837/vmware-rendering-z-ordering-appears-random) on a virtual machine. To solve this, Gazebo has to be updated to at least 7.4.0 [as explained here](http://gazebosim.org/tutorials?cat=install&tut=install_ubuntu&ver=7.0#Alternativeinstallation:step-by-step).

Install dependencies:

```bash
sudo pip uninstall pip && sudo apt install python-pip
sudo apt install libsdl2-dev clang-format python-pyqtgraph
pip install torch autopep8 cython circle-fit

# RangeLibc
git clone http://github.com/kctess5/range_libc
cd range_libc/pywrapper
# Either:
./compile.sh # on VM
# Or:
./compile_with_cuda.sh # on car - compiles GPU ray casting methods
```

Clone the repository:

```bash
git clone --recurse-submodules https://github.com/arpg-sophisticated/ar-tu-do
```

Install missing ROS dependencies:

```bash
cd ar-tu-do/ros_ws
rosdep install -y --from-paths src --ignore-src --rosdistro ${ROS_DISTRO}
```


## Usage

Compile while inside the `ros_ws` directory:

```bash
catkin_make
```

Set up environment variables for your shell:

```bash
source devel/setup.bash # (or setup.zsh, depending on your shell)
```

Use a launch file to start ROS and Gazebo:

```bash
roslaunch launch/car.launch # (Physical car, Wallfollowing)
roslaunch launch/car_navigation_stack.launch # (Physical car, SLAM & ROS navigation)
roslaunch launch/gazebo.launch # (Simulation, Wallfollowing)
roslaunch launch/navigation_stack.launch # (Simulation, SLAM & ROS navigation)
roslaunch launch/q_learning.launch # (Simulation, Train the Q-Learning model)
roslaunch launch/policy_gradient.launch # (Simulation, Train the Policy Gradient model)
roslaunch launch/evolutionary.launch # (Simulation, Train the evolutionary neural network)
```

### Launch file arguments

These arguments can be passed to the launch files above. For example, to use the `gazebo.launch` file without emergency stop and with car highlighting, run:
```bash
roslaunch launch/gazebo.launch emergency_stop:=false car_highlighting:=true
```
The arguments can be changed permanently by editing the launch files.

<table>
<tr>
<th rowspan="2">Argument</th>
<th rowspan="2">Description</th>
<th colspan="5">Supported by <code>launch/&lt;file&gt;.launch</code></th>
</tr>
<tr>
<td>car</td>
<td>car_navstack</td>
<td>gazebo</td>
<td>navigation_stack</td>
<td>q_learning, policy_gradient, evolutionary</td>
</tr>
<tr>
<td><code>debug</code></td>
<td>Boolean value whether Gazebo should run in debug mode. Defaults to false.</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td><code>emergency_stop</code></td>
<td>Boolean value whether the emergency stop should be active. Defaults to true.</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
<td>✗</td>
<td>✗</td>
</tr>
<tr>
<td><code>gui</code></td>
<td>Boolean value whether Gazebo should show a user interface. Defaults to true.</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td><code>joystick_type</code></td>
<td>The type of joystick controller. Possible values: <code>ps3</code>, <code>xbox360</code> and <code>xboxone</code></td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td><code>map</code></td>
<td>Name of the map to be used by the particle filter. Defaults to a prerecorded map of <code>racetrack_decorated_2</code>.</td>
<td>✗</td>
<td>✓<br>(no default)</td>
<td>✗</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td><code>paused</code></td>
<td>Boolean value whether Gazebo should start paused. Defaults to false.</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td><code>pf_angle_step</code></td>
<td>Angle step of the particle filter. Defaults to 18.</td>
<td>✗</td>
<td>✓</td>
<td>✗</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td><code>pf_max_particles</code></td>
<td>Maximum amount of particles to be used by the particle filter. Defaults to 500.</td>
<td>✗</td>
<td>✓</td>
<td>✗</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td><code>pf_squash_factor</code></td>
<td>Squash factor of the particle filter. Defaults to 2.2.</td>
<td>✗</td>
<td>✓</td>
<td>✗</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td><code>plot_window</code></td>
<td>Integer value indicating the amount of episodes that should be plotted. Defaults to 200.</td>
<td>✗</td>
<td>✗</td>
<td>✗</td>
<td>✗</td>
<td>✓ / ✓ / ✗</td>
</tr>
<tr>
<td><code>realtime_simulation</code></td>
<td>Boolean value whether Gazebo should try to simulate with a real time factor of 1. If false, Gazebo tries to simulate as fast as possible. Defaults to true.</td>
<td>✗</td>
<td>✗</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
</tr>
<tr>
<td><code>use_gpu</code></td>
<td>Boolean value whether Gazebo should use the GPU when simulating the lidar. Defaults to true.</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td><code>use_sim_time</code></td>
<td>Boolean value whether all ROS nodes should use simulated Gazebo time instead of wall clock time. Defaults to true.</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td><code>verbose</code></td>
<td>Boolean value whether Gazebo should give verbose standard output. Defaults to true.</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td><code>visualize_lidar</code></td>
<td>Boolean value whether Gazebo should show the simulated lidar rays. Defaults to false.</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td><code>world</code></td>
<td>The name of the racetrack. Possible values: <code>racetrack_decorated</code>, <code>racetrack_decorated_2</code> (default) and <code>racetrack_decorated_2_big</code></td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
</table>
We also added some material not directly related to the software; please check the wiki for more information.

## Hardware

@@ -280,16 +54,11 @@ Our car is based on a 1/10th scale RC car ([Traxxas Ford Fiesta](https://traxxas
- motor controller ([FOCBOX](https://www.enertionboards.com/FOCBOX-foc-motor-speed-controller.html))
- LIDAR scanner ([Hokuyo UST-10LX](https://www.hokuyo-usa.com/products/scanning-laser-rangefinders/ust-10lx))
- an inertial measurement unit ([Invensense MPU-9250](https://www.invensense.com/products/motion-tracking/9-axis/mpu-9250/))
- optional: brushless DC motor (replaces the standard brushed motor)
- optional: stereo camera ([ZED](https://www.stereolabs.com/zed/))



## Documentation

* For general information and documentation check out our [wiki page](https://github.com/Autonomous-Racing-PG/ar-tu-do/wiki).
* For source code documentation check out the auto-generated [Doxygen documentation](https://autonomous-racing-pg.github.io/ar-tu-do/html/index.html).
- Brushless DC motor (replaces the standard brushed motor)
- Stereo camera ([ZED](https://www.stereolabs.com/zed/))

## License

This project (excluding git submodules) is under MIT and GPLv3 dual licensed - see the [MIT.LICENSE](MIT.LICENSE) and [GPLv3.LICENSE](GPLv3.LICENSE) file for details.

<img src="doc/banner.png" alt="Racing with a wallfollowing algorithm" width="850"/>
Binary file added doc/ARPG.gif
Binary file added doc/banner.png
