The `vmodel` package implements an agent-based model to simulate visual-detection-based swarms in Python.
It is designed to generate statistical data of swarms with large group sizes (~1000s of agents) and random initial conditions.
It features several flocking algorithms, neighbor selection methods, and sensor noise parameters.
The simulation can be run as a graphical session with visualizations for rapid prototyping or in headless mode to quickly generate statistical results from repeated runs.
This repository contains the source code accompanying the article:
F. Schilling, E. Soria, and D. Floreano, "On the Scalability of Vision-based Drone Swarms in the Presence of Occlusions," IEEE Access, vol. 10, pp. 1-14, 2022. [IEEE Xplore] [Citation]
The following video gives a high-level explanation of the findings of the article:
Clone this repository and move into its root folder:

```sh
git clone git@github.com:lis-epfl/vmodel.git
cd vmodel
```
Use `pip` to install the `vmodel` package (preferably in a newly created virtual environment):

```sh
python3 -m pip install --editable .
```
This command will install all necessary requirements and add the `vmodel` simulator script (among others) to your `$PATH`.
The `--editable` flag ensures that any changes you make to the code in this repository will be reflected when you run the simulation.
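For example, a fresh virtual environment can be created and activated as follows (a minimal sketch for a Unix-like shell; the environment name `.venv` is an arbitrary choice):

```sh
python3 -m venv .venv        # create the virtual environment
source .venv/bin/activate    # activate it (bash/zsh)
```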
To run the swarm simulation with live visualizations, issue the following example command:
```sh
vmodel --plot --plot-metrics --migration-dir 1 0
```
The `--plot` flag shows a live visualization of the swarm state (agent positions and velocities), and the `--plot-metrics` flag shows an additional time series of useful swarm metrics.
The `--migration-dir x y` argument gives the swarm an optional migration direction (a vector with `x` and `y` components).
You should see two new windows on your screen, one showing a 2D visualization of the agents (left), and the other showing the time series plots (right):
The agents will migrate indefinitely until the `q` key is pressed to quit the simulation.
If the windows are unresponsive, only one window is showing, or the plots are not updating, try using a different Matplotlib backend. The plotting code has only been tested on Ubuntu and macOS (the latter only with XQuartz).
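For example, Matplotlib honors the `MPLBACKEND` environment variable, so you can try a different backend without touching any code (assuming the chosen backend, here `TkAgg`, is available on your system):

```sh
MPLBACKEND=TkAgg vmodel --plot --plot-metrics --migration-dir 1 0
```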
The graphical mode is useful for quick prototyping and for visualizing the effects of different simulation configurations. To easily produce statistical experiments, the simulation can also be run headless, as described below.
To run the swarm simulation headless and write the results to disk, issue the following example command:
```sh
vmodel --num-runs 3 --num-timesteps 1000 --verbose --progress --migration-dir 1 0
```
The `--num-runs N` argument runs the simulation `N` times with the same (random) initial conditions.
The `--num-timesteps N` argument means that the simulation will run for `N` isochronous timesteps (of 0.1 second duration by default; configurable with `--delta-time`) before starting a new run.
The `--verbose` flag prints additional information to the console, and the `--progress` flag shows the simulation progress.
By default, all runs are processed in parallel using the built-in `multiprocessing` package (use the `--no-parallel-runs` flag to simulate runs sequentially instead).
To run multiple simulations with different arguments (and possibly with multiple runs each), we recommend the excellent GNU Parallel command-line utility.
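As an illustrative sketch (the group sizes below are arbitrary), the following invocation launches one headless simulation per group size, with GNU Parallel distributing the jobs across CPU cores:

```sh
parallel vmodel --num-runs 3 --num-timesteps 1000 --num-agents {} ::: 10 50 100
```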
When the simulation is done, two files will have been added to the current working directory:
- a `.nc` file containing the states (positions, velocities, etc.) of the agents over time, and
- a `.yaml` file with the configuration arguments of the simulation.
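For instance, both files can be inspected from Python using xarray and PyYAML (a minimal sketch; the file names below are placeholders for whatever the simulation wrote to your working directory):

```python
import xarray as xr
import yaml

# Load the agent states (positions, velocities, etc.) over time
states = xr.open_dataset('states.nc')  # placeholder file name
print(states)  # prints dimensions, coordinates, and variables

# Load the configuration arguments of the simulation
with open('config.yaml') as f:  # placeholder file name
    config = yaml.safe_load(f)
print(config)
```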
The swarm behavior can be configured in a variety of ways. Issue the following command to get an overview of all the possible command line arguments:
```sh
vmodel --help
```
The most important command-line arguments are:
- Number of agents (`--num-agents N`): the group size (default: `10`)
- Reference distance (`--ref-distance F`): the desired equilibrium distance between agents (default: `1`, in meters!)
- Random seed (`--seed SEED`): random seed for repeatable experiments (default: `None` for a random seed, otherwise an integer!)
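For example, to simulate a larger swarm with a repeatable seed and a wider equilibrium distance (the values are chosen purely for illustration):

```sh
vmodel --num-agents 100 --ref-distance 2 --seed 42 --migration-dir 1 0
```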
The `vmodel` package comes with several flocking algorithms, e.g.:
- Reynolds (`--algorithm reynolds`): Craig Reynolds' boids algorithm [ACM Digital Library]
- Olfati (`--algorithm olfati`): Reza Olfati-Saber's flocking algorithm [IEEE Xplore]
- Others: see the flocking folder (not as extensively tested as the two above)
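For example, to watch Olfati-Saber's algorithm in the graphical mode:

```sh
vmodel --plot --algorithm olfati --migration-dir 1 0
```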
The `vmodel` package supports several different neighbor selection methods, e.g., metric, visual, topological, and Voronoi:
- Metric (`--perception-radius F`): select only agents within a given metric radius `F`
- Visual (`--filter-occluded`): select only agents within the perception radius that are not visually occluded by closer ones
- Topological (`--max-agents N`): select only the `N` closest agents, irrespective of their metric distance
- Voronoi (`--filter-voronoi`): select only the agents that share a Voronoi border with the focal agent
The neighbor selection methods can be freely combined with each other.
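For example, the following (illustrative) command considers only agents within a 5-meter radius, removes visually occluded ones, and keeps at most the 7 closest remaining neighbors:

```sh
vmodel --perception-radius 5 --filter-occluded --max-agents 7 --migration-dir 1 0
```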
The `vmodel` package models noisy visual detections with two components:
- Range uncertainty (`--range-std STD`): models the standard deviation `STD` of the distance to other agents (in meters!)
- Bearing uncertainty (`--bearing-std STD`): models the standard deviation `STD` of the bearing towards other agents (in degrees!)
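For example, to add moderate sensor noise to the simulation (the standard deviations are chosen purely for illustration):

```sh
vmodel --range-std 0.1 --bearing-std 5 --migration-dir 1 0
```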
The `vmodel` dataset (ca. 700 MB zipped, ca. 6 GB when extracted) can be downloaded here: [Google Drive] [SWITCHdrive].
The dataset is composed of several multi-dimensional arrays (including metadata) in the netCDF4 format (with `.nc` file extension).
The files can be opened, e.g., using the excellent xarray library, and converted to NumPy arrays.
In the dataset, the different dimensions correspond, e.g., to the number of agents, the reference distance, the neighbor selection method, etc.
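As a minimal sketch, a dataset file can be explored with xarray as follows (the file name is a placeholder; print the dataset to see the actual dimension and variable names):

```python
import xarray as xr

ds = xr.open_dataset('dataset.nc')  # placeholder file name
print(ds.dims)       # dimensions, e.g. number of agents, reference distance, ...
print(ds.data_vars)  # available variables

# Convert one of the variables to a plain NumPy array
name = list(ds.data_vars)[0]
array = ds[name].values
print(name, array.shape)
```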
To generate the figures contained in the article, download the dataset and run the Jupyter notebooks in the figures folder (adjusting the `data_dir` path to wherever you saved the dataset locally).
To run the unit tests, move to the test folder and run the following command:
```sh
python3 -m unittest -v test
```
Note: the tests only cover the most important functions related to geometry and agent-to-agent visibility. We do not aim for full test coverage.
If you use this work in an academic context, please cite the following article:
```bibtex
@article{schilling_vmodel_2022,
  title   = {On the Scalability of Vision-based Drone Swarms in the Presence of Occlusions},
  author  = {Schilling, Fabian and Soria, Enrica and Floreano, Dario},
  journal = {IEEE Access},
  year    = {2022},
  volume  = {10},
  pages   = {1--14},
  doi     = {10.1109/ACCESS.2022.3158758},
  issn    = {2169-3536}
}
```
Special thanks to:
- SwarmLab for inspiration regarding flocking algorithms and visualizations,
- xarray for making the analysis of multi-dimensional data fun again, and
- Numba for speeding up numerical code and letting me be lazy with code vectorization.
This project is released under the MIT License.
Please refer to the `LICENSE` for more details.