diff --git a/.dockerignore b/.dockerignore new file mode 100644 index 0000000..e39e673 --- /dev/null +++ b/.dockerignore @@ -0,0 +1,2 @@ +.git +addons diff --git a/.gitignore b/.gitignore index b581832..e7a8481 100644 --- a/.gitignore +++ b/.gitignore @@ -1,9 +1,7 @@ *.sw* -id_rsa +__pycache__/ # We never want to track BenchBot components +addons/ api/ -batches/ eval/ -examples/ -ground_truth/ diff --git a/README.md b/README.md index dfe89fa..9cb614a 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,5 @@

 ~ Our Semantic Scene Understanding Challenge is live on EvalAI ~
 (prizes include $2,500USD provided by ACRV & GPUs provided by sponsors Nvidia)

-~ Our BenchBot tutorial is the best place to get started developing with BenchBot ~
+~ Our BenchBot tutorial is the best place to get started developing with BenchBot ~

 # BenchBot Software Stack
@@ -7,27 +7,29 @@
 The BenchBot software stack is a collection of software packages that allow end users to control robots in real or simulated environments with a simple python API. It leverages the simple "observe, act, repeat" approach to robot problems prevalent in reinforcement learning communities ([OpenAI Gym](https://gym.openai.com/) users will find the BenchBot API interface very similar).
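+
+As a rough illustration of that "observe, act, repeat" loop, a solution is typically a short Python script like the sketch below (action names and exact call signatures are illustrative only; see the `benchbot_api` documentation for the real interface):
+
+```python
+from benchbot_api import ActionResult, BenchBot
+
+benchbot = BenchBot()
+observations, action_result = benchbot.reset()
+while action_result == ActionResult.SUCCESS:
+    # A real agent would choose from benchbot.actions using the observations;
+    # here we simply step through the environment
+    observations, action_result = benchbot.step('move_next')
+```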

-BenchBot has been created primarily as a tool to assist in the research challenges faced by the Semantic Scene Understanding community; challenges including understanding a scene in simulation, transferring algorithms to real world systems, & meaningfully evaluating algorithm performance. The "bench" in "BenchBot" refers to benchmarking, with our goal to provide a system that greatly simplifies the benchmarking of novel algorithms in both realistic 3D simulation & on real robot platforms.
+BenchBot was created as a tool to assist in the research challenges faced by the semantic scene understanding community; challenges including understanding a scene in simulation, transferring algorithms to real world systems, and meaningfully evaluating algorithm performance. We've since realised that these challenges don't just exist for semantic scene understanding; they're prevalent in a wide range of robotic problems.

-Users performing tasks other than Semantic Scene Understanding (like object detection, 3D mapping, RGB to depth reconstruction, active vision, etc.) will also find elements of the BenchBot software stack valuable.
+This led us to create version 2 of BenchBot with a focus on allowing users to define their own functionality for BenchBot through [add-ons](https://github.com/qcr/benchbot_addons). Want to integrate your own environments? Plug in new robot platforms? Define new tasks? Share examples with others? Add evaluation measures? This is all now possible with add-ons, and you don't have to do anything more than add some YAML and Python files defining your new content!

-This repository contains the software stack needed to develop solutions for BenchBot tasks on your local machine. It installs & configures a significant amount of software for you, wraps software in stable Docker images (~120GB), and provides simple interaction with the stack through 4 basic scripts: `benchbot_install`, `benchbot_run`, `benchbot_submit`, & `benchbot_eval`.
+The "bench" in "BenchBot" refers to benchmarking, with our goal to provide a system that greatly simplifies the benchmarking of novel algorithms in both realistic 3D simulation and on real robot platforms. If there is something else you would like to use BenchBot for (like integrating different simulators), please let us know. We're very interested in BenchBot being the glue between your novel robotics research and whatever your robot platform may be.

-## System recommendations & requirements
+This repository contains the software stack needed to develop solutions for BenchBot tasks on your local machine. It installs and configures a significant amount of software for you, wraps software in stable Docker images (~50GB), and provides simple interaction with the stack through 4 basic scripts: `benchbot_install`, `benchbot_run`, `benchbot_submit`, and `benchbot_eval`.

-The BenchBot software stack is designed to run seamlessly on a wide number of system configurations (currently limited to Ubuntu 18.04+). System hardware requirements are relatively high due to the nature of the software run for 3D simulation (Unreal Engine, Nvidia Isaac, Vulkan, etc.):
+## System recommendations and requirements
+
+The BenchBot software stack is designed to run seamlessly on a wide number of system configurations (currently limited to Ubuntu 18.04+). System hardware requirements are relatively high due to the software run for 3D simulation (Unreal Engine, Nvidia Isaac, Vulkan, etc.):

 - Nvidia Graphics card (GeForce GTX 1080 minimum, Titan XP+ / GeForce RTX 2070+ recommended)
 - CPU with multiple cores (Intel i7-6800K minimum)
 - 32GB+ RAM
-- 128GB+ spare storage (an SSD storage device is **strongly** recommended)
+- 64GB+ spare storage (an SSD storage device is **strongly** recommended)

-Having a system that meets the above hardware requirements is all that is required to begin installing the BenchBot software stack. The install script analyses your system configuration & offers to install any missing software components interactively. The list of 3rd party software components involved includes:
+Having a system that meets the above hardware requirements is all that is required to begin installing the BenchBot software stack. The install script analyses your system configuration and offers to install any missing software components interactively. The list of 3rd party software components involved includes:

-- Nvidia Driver (4.18+ required, 4.30+ recommended)
+- Nvidia Driver (418+ required, 450+ recommended)
 - CUDA with GPU support (10.0+ required, 10.1+ recommended)
 - Docker Engine - Community Edition (19.03+ required, 19.03.2+ recommended)
-- Nvidia Container Toolkit (1.0+ required, 1.0.5 recommended)
+- Nvidia Container Toolkit (1.0+ required, 1.0.5+ recommended)
 - ISAAC 2019.2 SDK (requires an Nvidia developer login)

 ## Managing your installation

@@ -35,13 +37,21 @@ Having a system that meets the above hardware requirements is all that is requir
 Installation is simple:

 ```
-u@pc:~$ git clone https://github.com/roboticvisionorg/benchbot && cd benchbot
+u@pc:~$ git clone https://github.com/qcr/benchbot && cd benchbot
 u@pc:~$ ./install
 ```

-Any missing software components, or configuration issues with your system, should be detected by the install script & resolved interactively. The final step of installation asks if you want to add BenchBot helper scripts to your `PATH`. Choosing yes will make the following commands available from any directory: `benchbot_install` (same as `./install` above), `benchbot_run`, `benchbot_submit`, `benchbot_eval`, and `benchbot_batch`.
+Any missing software components, or configuration issues with your system, should be detected by the install script and resolved interactively. The installation asks if you want to add BenchBot helper scripts to your `PATH`. Choosing yes will make the following commands available from any directory: `benchbot_install` (same as `./install` above), `benchbot_run`, `benchbot_submit`, `benchbot_eval`, and `benchbot_batch`.

-The BenchBot software stack will frequently check for updates & can update itself automatically. To update simply run the install script again (add the `--force-clean` flag if you would like to install from scratch):
+BenchBot installs a default set of add-ons (currently `'benchbot-addons/ssu'`), but this can be changed based on how you want to use BenchBot. For example, the following will also install the `'benchbot-addons/sqa'` add-ons:
+
+```
+u@pc:~$ benchbot_install --addons benchbot-addons/ssu,benchbot-addons/sqa
+```
+
+See the [BenchBot Add-ons Manager's documentation](https://github.com/qcr/benchbot_addons) for more information on using add-ons.
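+
+Under the hood, the BenchBot scripts drive the add-ons manager from Python. A minimal sketch of doing the same yourself (the function names below are taken from this repository's helper scripts, but treat exact signatures as indicative):
+
+```python
+from benchbot_addons import manager
+
+# Install add-ons matching a request string (dependencies included)
+manager.install_addons('benchbot-addons/ssu')
+
+# Query installed content, e.g. the names of all available tasks
+print(sorted(manager.get_field('tasks', 'name')))
+```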
+
+The BenchBot software stack will frequently check for updates and can update itself automatically. To update simply run the install script again (add the `--force-clean` flag if you would like to install from scratch):

 ```
 u@pc:~$ benchbot_install
 ```

@@ -53,68 +63,93 @@
 If you decide to uninstall the BenchBot software stack, run:

 ```
 u@pc:~$ benchbot_install --uninstall
 ```

+There are a number of other options to customise your BenchBot installation, which are all described by running:
+
+```
+u@pc:~$ benchbot_install --help
+```
+
 ## Getting started

-Getting a solution up & running with BenchBot is as simple as 1,2,3:
+Getting a solution up and running with BenchBot is as simple as 1, 2, 3. Here's how to use BenchBot with content from the [semantic scene understanding add-on](https://github.com/benchbot-addons/ssu):
+
+1. Run a simulator with the BenchBot software stack by selecting an available robot, environment, and task definition:
+
+   ```
+   u@pc:~$ benchbot_run --robot carter --env miniroom:1 --task semantic_slam:active:ground_truth
+   ```
+
+   A number of useful flags exist to help you explore what content is available in your installation (see `--help` for full details). For example, you can list what tasks are available via `--list-tasks` and view the task specification via `--show-task TASK_NAME`.

-1. Run a simulator with the BenchBot software stack by selecting a valid environment & task definition. See `--help`, `--list-tasks`, & `--list-envs` for details on valid options. As an example:
+2. Create a solution to a BenchBot task, and run it against the software stack (a minimal sketch of such a solution is shown after this list). To run a solution you must select a mode. For example, if you've created a solution in `my_solution.py` that you would like to run natively:

-   ```
-   u@pc:~$ benchbot_run --env miniroom:1 --task semantic_slam:active:ground_truth
-   ```
+   ```
+   u@pc:~$ benchbot_submit --native python my_solution.py
+   ```

-2. Create a solution to a BenchBot task, & run it against the software stack. The `/examples` directory contains some basic "hello_world" style solutions. For example, the following commands run the `hello_active` example in either a container or natively respectively (see `--help` for more details of options):
+   See `--help` for other options. You also have access to all of the examples available in your installation. For instance, you can run the `hello_active` example in containerised mode via:

-   ```
-   u@pc:~$ benchbot_submit --containerised /examples/hello_active/
-   ```
-   ```
-   u@pc:~$ benchbot_submit --native python /examples/hello_active/hello_active
-   ```
+   ```
+   u@pc:~$ benchbot_submit --containerised --example hello_active
+   ```

-3. Evaluate the performance of your system either directly, or automatically after your submission completes respectively:
+   See `--list-examples` and `--show-example EXAMPLE_NAME` for full details on what's available out of the box.

-   ```
-   u@pc:~$ benchbot_eval
-   ```
-   ```
-   u@pc:~$ benchbot_submit --evaluate-results --native python
-   ```
+3. Evaluate the performance of your system using a supported evaluation method (see `--list-methods`). To use the `omq` evaluation method on `my_results.json`:

-The [BenchBot Tutorial](https://github.com/roboticvisionorg/benchbot/wiki/Tutorial:-Performing-Semantic-SLAM-with-Votenet) is a great place to start working with BenchBot; the tutorial takes the user all the way from a blank system to a working Semantic SLAM solution, with many steps educational steps along the way. Also, see [benchbot_examples](https://github.com/roboticvisionorg/benchbot_examples) for more examples of how to get up and running with the BenchBot software stack.
+   ```
+   u@pc:~$ benchbot_eval --method omq my_results.json
+   ```
+
+   You can also simply run evaluation automatically after your submission completes:
+
+   ```
+   u@pc:~$ benchbot_submit --evaluate-results omq --native --example hello_eval_semantic_slam
+   ```
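+
+As a rough guide, a minimal `my_solution.py` could look like the sketch below (hypothetical; `empty_results()` is one of the API's results helpers mentioned in `benchbot_eval`'s documentation, but treat the exact calls as illustrative):
+
+```python
+# my_solution.py: drive the robot, then save results for benchbot_eval
+import json
+
+from benchbot_api import ActionResult, BenchBot
+
+benchbot = BenchBot()
+observations, action_result = benchbot.reset()
+while action_result == ActionResult.SUCCESS:
+    observations, action_result = benchbot.step('move_next')
+
+# Populate a results dict in the expected format, then save it to disk
+results = benchbot.empty_results()
+with open('my_results.json', 'w') as f:
+    json.dump(results, f)
+```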
+
+The [BenchBot Tutorial](https://github.com/qcr/benchbot/wiki/Tutorial:-Performing-Semantic-SLAM-with-Votenet) is a great place to start working with BenchBot; the tutorial takes you from a blank system to a working Semantic SLAM solution, with many educational steps along the way. Also remember the examples in your installation ([`benchbot-addons/examples_base`](https://github.com/benchbot-addons/examples_base) is a good starting point) which show how to get up and running with the BenchBot software stack.

 ## Power tools for autonomous algorithm evaluation

-Once you are confident your algorithm is a solution to the chosen task, the BenchBot software stack's power tools allow you to comprehensively explore your algorithm's performance. You can autonomously run your algorithm over multiple environments, & evaluate it holistically to produce a single summary statistic of your algorithm's performance:
-
-1. Use the `benchbot_batch` script to autonomously run your algorithm over a number of environments & produce a set of results. The script has a number of toggles available to customise the process. See `benchbot_batch --help` for full details. Here's a basic example for autonomously running your `semantic_slam:active:ground_truth` algorithm over 3 environments:
-   ```
-   u@pc:~$ benchbot_batch --task semantic_slam:active:ground_truth --envs miniroom:1,miniroom:3,house:5 --native python
-   ```
-   Alternatively, you can use one of the pre-defined environment batches included through [benchbot_batches](https://github.com/roboticvisionorg/benchbot_batches):
-   ```
-   u@pc:~$ benchbot_batch --task semantic_slam:active:ground_truth --envs-file /batches/develop/sslam_active_gt --native python
-   ```
-   Additionally, you can request a results ZIP to be created & even create an overall evaluation score at the end of the batch:
-   ```
-   u@pc:~$ benchbot_batch --task semantic_slam:active:ground_truth --envs miniroom:1,miniroom:3,house:5 --zip --score-results --native python
-   ```
-   Lastly, both native & containerised submissions are supported exactly as in `benchbot_submit`:
-   ```
-   u@pc:~$ benchbot_batch --task semantic_slam:active:ground_truth --envs miniroom:1,miniroom:3,house:5 --containerised
-   ```
-2. The holistic evaluation performed internally by `benchbot_batch` above, can also be directly called through the `benchbot_eval` script. The script supports single result files, multiple results files, or a ZIP of multiple results files. See `benchbot_eval --help` for full details. Below are examples calling `benchbot_eval` with a series of results & a ZIP of results respectively:
-   ```
-   u@pc:~$ benchbot_eval -o my_jsons_scores result_1.json result_2.json result_3.json
-   ```
-   ```
-   u@pc:~$ benchbot_eval -o my_zip_scores results.zip
-   ```
+Once you are confident your algorithm is a solution to the chosen task, the BenchBot software stack's power tools allow you to comprehensively explore your algorithm's performance. You can autonomously run your algorithm over multiple environments, and evaluate it holistically to produce a single summary statistic of your algorithm's performance. Here are some examples again with content from the [semantic scene understanding add-on](https://github.com/benchbot-addons/ssu):
+
+- Use `benchbot_batch` to run your algorithm in a number of environments and produce a set of results. The script has a number of toggles available to customise the process (see `--help` for full details). To autonomously run your `semantic_slam:active:ground_truth` algorithm over 3 environments:
+
+  ```
+  u@pc:~$ benchbot_batch --robot carter --task semantic_slam:active:ground_truth --envs miniroom:1,miniroom:3,house:5 --native python my_solution.py
+  ```
+
+  Or you can use one of the pre-defined environment batches installed via add-ons (e.g. [`benchbot-addons/batches_isaac`](https://github.com/benchbot-addons/batches_isaac)):
+
+  ```
+  u@pc:~$ benchbot_batch --robot carter --task semantic_slam:active:ground_truth --envs-batch develop_1 --native python my_solution.py
+  ```
+
+  Additionally, you can create a results ZIP and request an overall evaluation score at the end of the batch:
+
+  ```
+  u@pc:~$ benchbot_batch --robot carter --task semantic_slam:active:ground_truth --envs miniroom:1,miniroom:3,house:5 --zip --score-results omq --native python my_solution.py
+  ```
+
+  Lastly, both native and containerised submissions are supported exactly as in `benchbot_submit`:
+
+  ```
+  u@pc:~$ benchbot_batch --robot carter --task semantic_slam:active:ground_truth --envs miniroom:1,miniroom:3,house:5 --containerised my_solution_folder/
+  ```
+
+- You can also directly call the holistic evaluation performed above by `benchbot_batch` through the `benchbot_eval` script. The script supports single result files, multiple results files, or a ZIP of multiple results files. See `benchbot_eval --help` for full details. Below are examples calling `benchbot_eval` with a series of results and a ZIP of results respectively:
+  ```
+  u@pc:~$ benchbot_eval --method omq -o my_jsons_scores result_1.json result_2.json result_3.json
+  ```
+  ```
+  u@pc:~$ benchbot_eval --method omq -o my_zip_scores results.zip
+  ```
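+
+For reference, the holistic score for a batch is (per `benchbot_eval --help`) simply the average of the scores from the individual results files. Conceptually the combination is nothing more than the sketch below (file names and the 'score' field here are hypothetical; the real logic lives in `benchbot_eval`):
+
+```python
+import json
+
+# Hypothetical per-environment score files produced during a batch
+score_files = ['mix_0_scores.json', 'mix_1_scores.json', 'mix_2_scores.json']
+scores = [json.load(open(f))['score'] for f in score_files]
+
+final_score = sum(scores) / len(scores)  # single summary statistic
+```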
+
 ## Using BenchBot in your research

-BenchBot was made to enable & assist the development of high quality, repeatable research results. We welcome any & all use of the BenchBot software stack in your research.
+BenchBot was made to enable and assist the development of high quality, repeatable research results. We welcome any and all use of the BenchBot software stack in your research.

-To use our system, we just ask that you cite our paper on the BenchBot system. This will help us follow uses of BenchBot in the research community, & understand how we can improve the system to help support future research results. Citation details are as follows:
+To use our system, we just ask that you cite our paper on the BenchBot system. This will help us follow uses of BenchBot in the research community, and understand how we can improve the system to help support future research results. Citation details are as follows:

 ```
 @misc{talbot2020benchbot,
@@ -129,20 +164,19 @@ To use our system, we just ask that you cite our paper on the BenchBot system. T

 ## Components of the BenchBot software stack

-The BenchBot software stack is split into a number of standalone components, each with their own GitHub repository & documentation. This repository glues them all together for you into a working system. The components of the stack are:
+The BenchBot software stack is split into a number of standalone components, each with their own GitHub repository and documentation. This repository glues them all together for you into a working system. The components of the stack are:

-- **[benchbot_api](https://github.com/roboticvisionorg/benchbot_api):** user-facing Python interface to the BenchBot system, allowing the user to control simulated or real robots in simulated or real world environments through simple commands
-- **[benchbot_examples](https://github.com/roboticvisionorg/benchbot_examples):** a series of example submissions that use the API to drive a robot interactively, autonomously step through environments, evaluate dummy results, attempt semantic slam, & more
-- **[benchbot_supervisor](https://github.com/roboticvisionorg/benchbot_supervisor):** a HTTP server facilitating communication between user-facing interfaces & the underlying robot controller
-- **[benchbot_robot_controller](https://github.com/roboticvisionorg/benchbot_robot_controller):** a wrapping script which controls the low-level ROS functionality of a simulator or real robot, handles automated subprocess management, & exposes interaction via a HTTP server
-- **[benchbot_simulator](https://github.com/roboticvisionorg/benchbot_simulator):** a realistic 3D simulator employing Nvidia's Isaac framework, in combination with Unreal Engine environments
-- **[benchbot_eval](https://github.com/roboticvisionorg/benchbot_eval):** Python library for evaluating the performance in a task, based on the results produced by a submission
-- **[benchbot_batches](https://github.com/roboticvisionorg/benchbot_batches):** Collection of static environment lists for each of the tasks, used to produce repeatable result sets & consistent evaluation requirements
+- **[benchbot_api](https://github.com/qcr/benchbot_api):** user-facing Python interface to the BenchBot system, allowing the user to control simulated or real robots in simulated or real world environments through simple commands
+- **[benchbot_addons](https://github.com/qcr/benchbot_addons):** a Python manager for add-ons to a BenchBot system, with full documentation on how to create and add your own add-ons
+- **[benchbot_supervisor](https://github.com/qcr/benchbot_supervisor):** an HTTP server facilitating communication between user-facing interfaces and the underlying robot controller
+- **[benchbot_robot_controller](https://github.com/qcr/benchbot_robot_controller):** a wrapping script which controls the low-level ROS functionality of a simulator or real robot, handles automated subprocess management, and exposes interaction via an HTTP server
+- **[benchbot_simulator](https://github.com/qcr/benchbot_simulator):** a realistic 3D simulator employing Nvidia's Isaac framework, in combination with Unreal Engine environments
+- **[benchbot_eval](https://github.com/qcr/benchbot_eval):** Python library for evaluating the performance in a
task, based on the results produced by a submission ## Further information -- **[FAQs](https://github.com/roboticvisionorg/benchbot/wiki/FAQs):** Wiki page where answers to frequently asked questions & resolutions to common issues will be provided -- **[Semantic SLAM Tutorial](https://github.com/roboticvisionorg/benchbot/wiki/Tutorial:-Performing-Semantic-SLAM-with-Votenet):** a tutorial stepping through creating a semantic SLAM system in BenchBot that utilises the 3D object detector [VoteNet](https://github.com/facebookresearch/votenet) +- **[FAQs](https://github.com/qcr/benchbot/wiki/FAQs):** Wiki page where answers to frequently asked questions and resolutions to common issues will be provided +- **[Semantic SLAM Tutorial](https://github.com/qcr/benchbot/wiki/Tutorial:-Performing-Semantic-SLAM-with-Votenet):** a tutorial stepping through creating a semantic SLAM system in BenchBot that utilises the 3D object detector [VoteNet](https://github.com/facebookresearch/votenet) ## Supporters diff --git a/bin/.helpers b/bin/.helpers index 94114bf..a5bda82 100755 --- a/bin/.helpers +++ b/bin/.helpers @@ -1,5 +1,8 @@ #!/usr/bin/env bash +set -euo pipefail +IFS=$'\n\t' + ################################################################################ ########################### Global BenchBot Settings ########################### ################################################################################ @@ -14,14 +17,14 @@ DOCKER_NETWORK="benchbot_network" FILENAME_ENV_GROUND_TRUTH=".benchbot_object_maps" FILENAME_ENV_METADATA=".benchbot_data_files" -GIT_API="https://github.com/roboticvisionorg/benchbot_api" -GIT_BATCHES="https://github.com/roboticvisionorg/benchbot_batches" -GIT_BENCHBOT="https://github.com/roboticvisionorg/benchbot" -GIT_CONTROLLER="https://github.com/roboticvisionorg/benchbot_robot_controller" -GIT_EVAL="https://github.com/roboticvisionorg/benchbot_eval" -GIT_EXAMPLES="https://github.com/roboticvisionorg/benchbot_examples" -GIT_SIMULATOR="https://github.com/roboticvisionorg/benchbot_simulator" -GIT_SUPERVISOR="https://github.com/roboticvisionorg/benchbot_supervisor" +GIT_ADDONS="https://github.com/qcr/benchbot_addons" +GIT_API="https://github.com/qcr/benchbot_api" +GIT_BENCHBOT="https://github.com/qcr/benchbot" +GIT_CONTROLLER="https://github.com/qcr/benchbot_robot_controller" +GIT_EVAL="https://github.com/qcr/benchbot_eval" +GIT_MSGS="https://github.com/qcr/benchbot_msgs" +GIT_SIMULATOR="https://github.com/qcr/benchbot_simulator" +GIT_SUPERVISOR="https://github.com/qcr/benchbot_supervisor" HOSTNAME_DEBUG="benchbot_debug" HOSTNAME_ROS="benchbot_ros" @@ -33,22 +36,19 @@ MD5_ISAAC_SDK="06387f9c7a02afa0de835ef07927aadf" PATH_ROOT="$(realpath ..)" PATH_API="$PATH_ROOT/api" -PATH_BATCHES="$PATH_ROOT/batches" +PATH_ADDONS="$PATH_ROOT/addons" +PATH_ADDONS_INTERNAL="/benchbot/addons" PATH_DOCKERFILE_CORE="$PATH_ROOT/docker/core.Dockerfile" PATH_DOCKERFILE_BACKEND="$PATH_ROOT/docker/backend.Dockerfile" -PATH_DOCKERFILE_BACKEND_LITE="$PATH_ROOT/docker/backend_lite.Dockerfile" PATH_DOCKERFILE_SUBMISSION="$PATH_ROOT/docker/submission.Dockerfile" PATH_EVAL="$PATH_ROOT/eval" -PATH_EXAMPLES="$PATH_ROOT/examples" -PATH_GROUND_TRUTH="$PATH_ROOT/ground_truth" PATH_ISAAC_SRCS="$PATH_ROOT/isaac" PATH_SYMLINKS="/usr/local/bin" PORT_ROBOT=10000 PORT_SUPERVISOR=10000 -SIZE_GB_FULL=128 -SIZE_GB_LITE=15 +SIZE_GB_FULL=64 URL_DEBUG="172.20.0.200" URL_DOCKER_SUBNET="172.20.0.0/24" @@ -57,22 +57,6 @@ URL_ROS="172.20.0.100" URL_ROBOT="172.20.0.101" URL_SUPERVISOR="172.20.0.102" -# NOTE: link should 
end with */download (default CloudStor link returns a web -# page...) -# NOTE: this URL should point to a single-line text file describing the latest -# version of the environments. Description is 3 whitespace separated fields: -# - md5sum of latest env_*.zip -# - URL for latest env_*.zip - -# - timestamp of latest (field is optional / not used) -URLS_ENVS_INFO_FULL_DEFAULT=( - "https://cloudstor.aarnet.edu.au/plus/s/egb4u65MVZEVkPB/download" - "https://cloudstor.aarnet.edu.au/plus/s/tqz36vODDnOsJpQ/download" - "https://cloudstor.aarnet.edu.au/plus/s/b7JI8bHajapkcOe/download" -) -URLS_ENVS_INFO_LITE_DEFAULT=( - "https://cloudstor.aarnet.edu.au/plus/s/b7JI8bHajapkcOe/download" -) - ################################################################################ ################## Coloured terminal output & heading blocks ################### ################################################################################ @@ -103,15 +87,8 @@ function header_block() { ######################## Helpers for managing BenchBot ######################### ################################################################################ -function backend_type() { - echo "$(docker inspect "$DOCKER_TAG_BACKEND" > /dev/null 2>&1 && \ - docker run --rm -t "$DOCKER_TAG_BACKEND" /bin/bash -c \ - 'echo "$BENCHBOT_BACKEND_TYPE"' | tr -d '[:space:]')" - return -} - function clear_stdin() { - read -t 0.1 -d '' -n 10000 discard + read -t 0.1 -d '' -n 10000 discard || true } function close_network() { @@ -119,11 +96,6 @@ function close_network() { sudo iptables --policy FORWARD ${2:-DROP} } -function has_simulator() { - [ "$(backend_type)" == "full" ] - return -} - function eval_version() { # TODO this refers to evaluating whether an arbitrary version number meets # some arbitrary version requirement... it does not have anything to do with @@ -141,18 +113,18 @@ function eval_version() { function kill_benchbot() { # TODO make this quieter when I am confident it works as expected... - if [ -z "$1" ]; then + if [ -z "${1-}" ]; then header_block "CLEANING UP ALL BENCHBOT REMNANTS" ${colour_blue} fi - targets=$(pgrep -f "docker attach benchbot") + targets=$(pgrep -f "docker attach benchbot" || true) if [ $(echo -n "$targets" | wc -l) -gt 0 ]; then echo -e "${colour_blue}Detached from the following containers:${colour_nc}" echo "$targets" for pid in "$targets"; do kill -9 $pid; done fi - targets=$(docker ps -q -f name='benchbot*') + targets=$(docker ps -q -f name='benchbot*' || true) if [ $(echo -n "$targets" | wc -l) -gt 0 ]; then echo -e "\n${colour_blue}Stopped the following containers:${colour_nc}" docker stop $targets @@ -180,6 +152,12 @@ function print_version_info() { printf "(%s)\n" "$hash" } +function simulator_type() { + echo "$(docker inspect "$DOCKER_TAG_BACKEND" > /dev/null 2>&1 && \ + docker run --rm -t "$DOCKER_TAG_BACKEND" /bin/bash -c \ + 'echo "$BENCHBOT_SIMULATORS"' | tr -d '[:space:]')" + return +} ################################################################################ ############### Checking if updates are available for components ############### @@ -226,24 +204,14 @@ function is_latest_benchbot_eval() { return } -function is_latest_benchbot_examples() { - _is_latest_local_git "$PATH_EXAMPLES" "$GIT_EXAMPLES" "$1" \ - "BenchBot Examples" - return -} - -function is_latest_benchbot_envs() { - # Requires a URL as $1 & index number as $2 - # TODO this should loudly error if there are any URL failures! 
- current=$(docker inspect "$DOCKER_TAG_BACKEND" > /dev/null 2>&1 && \ +function is_latest_benchbot_msgs() { + current_hash=$(docker inspect "$DOCKER_TAG_BACKEND" > /dev/null 2>&1 && \ docker run --rm -t "$DOCKER_TAG_BACKEND" /bin/bash -c \ - '_md5s=($BENCHBOT_ENVS_MD5SUMS) && _urls=($BENCHBOT_ENVS_URLS) && \ - echo -n "${_md5s['$2']} ${_urls['$2']}"') - latest=$(wget -qO- "$1" | cut -d ' ' -f 1,2) - echo "$2" - echo "Current BenchBot Environments: $current" - echo "Latest BenchBot Environments: $latest" - [ "$current" == "$latest" ] + 'cd $BENCHBOT_MSGS_PATH && git rev-parse HEAD' | tr -d '[:space:]') + latest_hash=$(git ls-remote "$GIT_MSGS" "$1" | awk '{print $1}') + echo "Current BenchBot ROS Messages: $current_hash" + echo "Latest BenchBot ROS Messages: $latest_hash" + [ "$current_hash" == "$latest_hash" ] return } @@ -285,51 +253,35 @@ function update_check() { \"benchbot_install\" command, or run this with [-f|--force-updateless] to skip updates." - echo -ne "Checking BenchBot version ...\t\t\t" + echo -ne "Checking BenchBot version ...\t\t\t\t" is_latest_benchbot "$1" > /dev/null benchbot_valid=$? [ $benchbot_valid -eq 0 ] && echo "$_valid_str." || echo "$_invalid_str." - echo -ne "Checking BenchBot API version ...\t\t" + echo -ne "Checking BenchBot API version ...\t\t\t" is_latest_benchbot_api "$1" > /dev/null api_valid=$? [ $api_valid -eq 0 ] && echo "$_valid_str." || echo "$_invalid_str." - echo -ne "Checking BenchBot Eval version ...\t\t" + echo -ne "Checking BenchBot Eval version ...\t\t\t" is_latest_benchbot_eval "$1" > /dev/null eval_valid=$? [ $eval_valid -eq 0 ] && echo "$_valid_str." || echo "$_invalid_str." - echo -ne "Checking BenchBot Examples version ...\t\t" - is_latest_benchbot_examples "$1" > /dev/null - examples_valid=$? - [ $examples_valid -eq 0 ] && echo "$_valid_str." || echo "$_invalid_str." - echo -ne "Checking BenchBot Simulator version ...\t\t" + echo -ne "Checking BenchBot Simulator version ...\t\t\t" is_latest_benchbot_simulator "$1" > /dev/null simulator_valid=$? [ $simulator_valid -eq 0 ] && echo "$_valid_str." || echo "$_invalid_str." - echo -ne "Checking BenchBot Supervisor version ...\t" + echo -ne "Checking BenchBot Supervisor version ...\t\t" is_latest_benchbot_supervisor "$1" > /dev/null supervisor_valid=$? [ $supervisor_valid -eq 0 ] && echo "$_valid_str." || echo "$_invalid_str." - echo -ne "Checking BenchBot Environments version ...\t" - _urls=($(docker inspect "$DOCKER_TAG_BACKEND" > /dev/null 2>&1 && \ - docker run --rm -t "$DOCKER_TAG_BACKEND" /bin/bash -c \ - 'echo -n "$BENCHBOT_ENVS_SRCS"')) - if [ -z "$_urls" ]; then - environments_valid=1 - else - environments_valid=0 - for i in "${!_urls[@]}"; do - if [ $environments_valid -eq 0 ]; then - is_latest_benchbot_envs "${_urls[$i]}" "$i" > /dev/null - environments_valid=$? - fi - done - fi - [ $environments_valid -eq 0 ] && echo "$_valid_str." || echo "$_invalid_str." + + echo -ne "Checking installed BenchBot add-ons are up-to-date ...\t" + addons_up_to_date > /dev/null + addons_valid=$? + [ $addons_valid -eq 0 ] && echo "$_valid_str." || echo "$_invalid_str." [ $benchbot_valid -eq 0 ] && [ $api_valid -eq 0 ] && \ - [ $eval_valid -eq 0 ] && [ $examples_valid -eq 0 ] && \ - [ $simulator_valid -eq 0 ] && [ $supervisor_valid -eq 0 ] && \ - [ $environments_valid -eq 0 ] + [ $eval_valid -eq 0 ] && [ $simulator_valid -eq 0 ] && \ + [ $supervisor_valid -eq 0 ] && [ $addons_valid -eq 0 ] valid=$? if [ $valid -eq 0 ]; then echo -e "\n$colour_green$_valid_text$colour_nc" @@ -338,3 +290,155 @@ updates." 
fi return $valid } + +################################################################################ +######################### BenchBot Add-ons Management ########################## +################################################################################ + +function addons_up_to_date() { + outdated="$(run_manager_cmd 'print("\n".join(outdated_addons()))')" + echo -e "Outdated add-ons:\n${outdated}" + [ $(echo "$outdated" | sed '/^\s*$/d' | wc -l) -eq 0 ] + return +} + +function env_name() { + # $1 env string + echo $1 | sed 's/:[^:]*$//' +} + +function env_variant() { + # $1 env string + echo $1 | sed 's/^.*://' +} + +function install_addons() { + printf "\n${colour_blue}%s${colour_nc}\n" \ + "Installing add-ons based on the request string '${1}':" + run_manager_cmd 'install_addons("'$1'")' '\n' '\n' + + printf "\n${colour_blue}%s${colour_nc}\n" \ + "Installing external add-on dependencies:" + run_manager_cmd 'install_external_deps()' '\n' '\n' + + printf "\n${colour_blue}%s${colour_nc}\n\n" \ + "Baking external add-on dependencies into the Docker backend:" + docker run --name tmp --detach -it "$DOCKER_TAG_BACKEND" + docker exec -it tmp /bin/bash -c \ + "$(run_manager_cmd 'print(install_external_deps(True))')" + docker commit tmp "$DOCKER_TAG_BACKEND" + docker rm -f tmp + printf "\n" +} + +function list_addons() { + run_manager_cmd 'print_state()' '\n' '\n\n' +} + +function list_content() { + # $1 content type, $2 list prefix text, $3 optional "an" instead of "a", $4 + # optional remove n characters to get singular version + singular=${1::-${4:-1}} + l="$(run_manager_cmd '[print("\t%s" % r) for r in \ + sorted(get_field("'$1'", "name"))]')" + echo "$2" + if [ -z "$l" ]; then echo -e "\tNONE!"; else echo "$l"; fi + echo " +See the '--show-"$singular" "${singular^^}"_NAME' command for specific "\ +"details about +each "$singular", or check you have the appropriate add-on installed if you are +missing "${3:-a}" "${singular}". +" +} + +function list_environments() { + # $1 list prefix text, $2 optional "an" instead of "a" + text="environments" + singular=${text::-1} + l="$(run_manager_cmd '[print("\t%s" % r) for r in sorted([\ + ":".join(str(f) for f in e) \ + for e in get_fields("'$text'", ["name", "variant"])])]')" + echo "$1" + if [ -z "$l" ]; then echo -e "\tNONE!"; else echo "$l"; fi + echo " +See the '--show-"$singular" "${singular^^}"_NAME' command for specific "\ +"details about +each "$singular", or check you have the appropriate add-on installed if you are +missing "${2:-a}" "${singular}". +" +} + +function remove_addons() { + run_manager_cmd 'remove_addons("'$1'")' '\n' '\n\n' +} + +function run_manager_cmd() { + pushd "$PATH_ROOT/bin" &> /dev/null + bash addons "${1}" "${2-}" "${3-}" + popd &> /dev/null +} + +function show_content() { + # $1 content type, $2 name of selected content, $3 optional remove n + # characters to get singular version + singular=${1::-${3:-1}} + if [ "$(run_manager_cmd 'print(exists("'$1'", [("name", "'$2'")]))')" \ + != "True" ]; then + printf "%s %s\n" "${singular^} '$2' is not a supported ${singular}." \ + "Please check '--list-$1'." 
+ exit 1 + fi + location=$(run_manager_cmd 'print(get_match("'$1'", [("name", "'$2'")]))') + printf "${singular^} '$2' was found at the following location:\n\n\t%s\n\n" \ + "$location" + printf "Printed below are the first 30 lines of the definition file:\n\n" + head -n 30 "$location" + printf "\n" +} + +function show_environment() { + # $1 name of selected environment + text="environments" + singular=${text::-1} + name="$(env_name $1)" + variant="$(env_variant $1)" + if [ "$(run_manager_cmd 'print(exists("'$text'", \ + [("name", "'$name'"), ("variant", "'$variant'")]))')" != "True" ]; then + printf "%s %s\n" "${singular^} '$1' is not a supported ${singular}." \ + "Please check '--list-$text'." + exit 1 + fi + location=$(run_manager_cmd 'print(get_match("'$text'", \ + [("name", "'$name'"), ("variant", "'$variant'")]))') + printf "${singular^} '$1' was found at the following location:\n\n\t%s\n\n" \ + "$location" + printf "Printed below are the first 30 lines of the definition file:\n\n" + head -n 30 "$location" + printf "\n" + : +} + +function validate_content() { + # $1 = content type; $2 = name; $3 = full name (optional); $4 override check + # with this value (optional); $5 optional remove n characters to get singular + # version + singular=${1::-${5:-1}} + full=$([ -z "${3-}" ] && echo "$2" || echo "$3") + check="$([ -z "${4-}" ] && \ + echo "$(run_manager_cmd 'print(exists("'$1'", [("name", "'$2'")]))')" || \ + echo "$4")" + if [ "$check" != "True" ]; then + printf "%s %s\n" "${singular^} '$2' is not a supported ${singular}." \ + "Please check '--list-$1'." + printf "\n${colour_red}%s${colour_nc}\n" \ + "ERROR: Invalid ${singular} selected (${singular} = '$full')" + fi +} + +function validate_environment() { + # $1 = name; $2 = full name + validate_content "environments" "$1" "${2-}" \ + "$(run_manager_cmd 'print(exists("environments", \ + [("name", "'$(env_name $1)'"), ("variant", "'$(env_variant $1)'")]))')" +} + diff --git a/bin/addons b/bin/addons new file mode 100755 index 0000000..fcc6ca5 --- /dev/null +++ b/bin/addons @@ -0,0 +1,14 @@ +#!/usr/bin/env bash +# +# Bash script for simplifying calls to the add-on manager +# +# Usage: +# $1 = command to run (required) +# $2 = string to print before command output (optional) +# $3 = string to print after command output (optional) +set -euo pipefail +IFS=$'\n\t' + +if [ ! -z "${2-}" ]; then printf "$2"; fi +python3 -c 'from benchbot_addons.manager import *; '"$1" +if [ ! -z "${3-}" ]; then printf "$3"; fi diff --git a/bin/benchbot_batch b/bin/benchbot_batch index 38aabe9..60c9e19 100755 --- a/bin/benchbot_batch +++ b/bin/benchbot_batch @@ -4,6 +4,8 @@ ################### Load Helpers & Global BenchBot Settings #################### ################################################################################ +set -euo pipefail +IFS=$'\n\t' abs_path=$(readlink -f $0) pushd $(dirname $abs_path) > /dev/null source .helpers @@ -22,8 +24,8 @@ DEFAULT_PREFIX="batch" usage_text="$(basename "$0") -- Helper script for running a solution against multiple environments in a single command. Use this script when you have developed a task solution that you would like to extensively test, or when you would like -create a submission to a challenge. The './batches/' directory contains -official environment lists, like those used for evaluating tasks in challenges. +create a submission to a challenge. Addons can include 'batches' which are +environment lists, like those used for evaluating tasks in official challenges. 
 The $(basename "$0") script is roughly equivalent to the following:

@@ -53,31 +55,33 @@ USAGE:
            [-n|--native] COMMAND_TO_RUN

     Run a submission for the scd:active:dead_reckoning task in a containerised
-    environment, for a list of scenes specified in a file called 'my/env_mix',
-    saving the results with the prefix 'mix'. Then evaluate the results to
-    produce a final score:
+    environment, for a list of scenes specified in the environment batch called
+    'challenge_1', saving the results with the prefix 'mix'. Then evaluate the
+    results to produce a final score:

         $(basename "$0") [-t|--task] scd:active:dead_reckoning \\
-            [-E|--envs-file] my/env_mix [-p|--prefix] mix [-s|--score-results] \\
+            [-E|--envs-batch] challenge_1 [-p|--prefix] mix [-s|--score-results] \\
            [-c|--containerised] DIRECTORY_FOR_SUBMISSION

-        ... (contents of 'my/env_mix') ...
-        miniroom:1:2
-        house:2:3
-        apartment:3:4
-        office:4:5
-        company:5:1
+        ... (contents of 'challenge_1' batch) ...
+        name: challenge:1
+        environments:
+        - miniroom:1:2
+        - house:2:3
+        - apartment:3:4
+        - office:4:5
+        - company:5:1

 OPTION DETAILS:

-    -h,--help
+    -h, --help
        Show this help menu.

     -c, --containerised
-        Uses the Dockerfile in the specified directory to start a Docker
-        container running your solution for each environment. This requires
-        an extra parameter specifying the dierctory of the Dockerfile for
-        your solution. See '-c, --containerised' in 'benchbot_submit
-        --help' for further details on containerised BenchBot submissions.
+        Runs the submission in containerised mode. The directory containing
+        the Dockerfile describing your solution must be specified as a
+        trailing argument to this command. See '-c, --containerised' in
+        'benchbot_submit --help' for further details on containerised
+        BenchBot submissions.

     -e, --envs
        A comma-separated list of environments for $(basename "$0") to
@@ -86,16 +90,28 @@ OPTION DETAILS:
        specifying valid environments, & 'benchbot_run --list-envs' for a
        complete list of supported environments.

-    -E, --envs-file
-        A file specifying a single valid environment name on each line. The
-        $(basename "$0") script will iterate over each specified
+    -E, --envs-batch
+        The name of an environment batch specifying a list of environments.
+        The $(basename "$0") script will iterate over each specified
        environment. See '-e, --envs' above for further details on valid
        environment specifications.

+    --example
+        The name of an installed example to run instead of providing
+        explicit execution commands / Dockerfiles to run at the end of this
+        command. Note that you still have to specify whether you would like
+        to run the example in native or containerised mode.
+
+    --list-batches
+        Lists all supported environment batches. These can be used with the
+        '-E, --envs-batch' option. Use '--show-batch' to see more
+        details about a batch.
+
     -n, --native
-        Runs everything after this flag as a command directly on your
-        system for each environment. See '-n, --native' in 'benchbot_submit
-        --help' for further details on native BenchBot submissions.
+        Runs the submission in native mode. The command to execute natively
+        will be taken from the trailing arguments provided to your command.
+        See '-n, --native' in 'benchbot_submit --help' for further details
+        on native BenchBot submissions.

     -p, --prefix
        Prefix to use in naming of files produced by $(basename "$0"). If
@@ -110,12 +126,27 @@ OPTION DETAILS:
            semslam.zip
            semslam_scores.json

+    -r, --robot
+        Configure BenchBot to use a specific robot.
Every environment in the + requested batch will be run with this robot (so make sure they + support the robot). See '-r, --robot' in 'benchbot_run --help' for + further details on specifying valid robots, & 'benchbot_run + --list-robots' for a complete list of supported robots. + -s, --score-results - Perform evaluation on the batch of results produced by $(basename "$0"). - The scores from each results file in the batch are then combined - into a final set of scores for your algorithm, on the tested task. - Scores are combined using the approach described in 'benchbot_eval - --help'. + The name of the evaluation method to use to perform evaluation on + the batch of results produced by $(basename "$0"). The scores from + each results file in the batch are then combined into a final set + of scores for your algorithm, on the tested task. See + '-m, --method' in 'benchbot_eval' for supported methods and + 'benchbot_eval --help' for details of how scores are combined. If + this isn't provided, results will be saved to disk with no + evaluation performed. + + --show-batch + Prints information about the provided batch name if installed. The + corresponding file's location will be displayed, with a snippet of + its contents. -t, --task Configure BenchBot for a specific task style. Every environment in @@ -124,10 +155,10 @@ OPTION DETAILS: & 'benchbot_run --list-tasks' for a complete list of supported tasks. - -v,--version + -v, --version Print version info for current installation. - -z,--zip + -z, --zip Produce a ZIP file of the results once all environments in the batch have been run. The ZIP file will be named using the value provided by the '-p, --prefix' argument (i.e. 'PREFIX.zip'), with @@ -144,7 +175,7 @@ FURTHER DETAILS: " collision_warn="WARNING: Running of environment '%s' in passive mode resulted in a -collision. This should not happen, so this environment will be rerun!" +collision. This shouldn't happen, so this environment will be rerun!" run_err="ERROR: Running of environment '%s' failed with the error printed above. Quitting batch execution." @@ -152,11 +183,15 @@ Quitting batch execution." submission_err="ERROR: Submission for environment '%s' failed with the error printed above. Quitting batch execution." +_list_batches_pre=\ +"The following environment batches are available in your installation: + " + run_pid= function kill_run() { if [ ! -z $run_pid ]; then kill -TERM $run_pid &> /dev/null - wait $run_pid + wait $run_pid || true run_pid= fi } @@ -165,7 +200,7 @@ submit_pid= function kill_submit() { if [ ! -z $submit_pid ]; then kill -TERM $submit_pid &> /dev/null - wait $submit_pid + wait $submit_pid || true submit_pid= fi } @@ -177,37 +212,66 @@ function exit_gracefully() { exit ${1:-0} } +function validate_envs() { + # $1 requested envs, $2 requested envs batch + if [ ! -z "$1" ] && [ ! -z "$2" ]; then + printf "${colour_red}%s${colour_nc}\n" \ + "ERROR: Only '--envs' or '--envs-batch' is valid, not both." 
+  elif [ -z "$1" ] && [ -z "$2" ]; then
+    printf "${colour_red}%s %s${colour_nc}\n" \
+      "ERROR: No environments were provided via either" \
+      "'--envs' or '--envs-batch'"
+  fi
+}
+
 ################################################################################
 #################### Parse & handle command line arguments #####################
 ################################################################################

 # Safely parse options input
-parse_out=$(getopt -o e:E:c:hn:p:st:vz --long \
-  envs:,envs-file:,containerised:,help,native:,prefix:,score-results,task:,version,zip \
-  -n "$(basename "$0")" -- "$@")
+_args='envs:,envs-batch:,example:,containerised,help,list-batches,native,\
+prefix:,robot:,score-results:,show-batch:,task:,version,zip'
+parse_out=$(getopt -o e:E:chnp:r:s:t:vz --long "$_args" -n "$(basename "$0")" \
+  -- "$@")
 if [ $? != 0 ]; then exit 1; fi

+containerised=
 eval set -- "$parse_out"
 evaluate=
+example=
+example_prefix=
 envs_str=
-envs_list=
-submit_args=
+envs_batch=
+evaluate_method=
+native=
 prefix="$DEFAULT_PREFIX"
+robot=
+submit_args=
 task=
 zip=
 while true; do
   case "$1" in
+    -c|--containerised)
+      containerised='--containerised' ; shift ;;
     -e|--envs)
      envs_str="$2" ; shift 2 ;;
-    -E|--envs-file)
-      envs_file="$2" ; shift 2 ;;
+    -E|--envs-batch)
+      envs_batch="$2" ; shift 2 ;;
+    --example)
+      example="$2" ; example_prefix="--example" ; shift 2 ;;
     -h|--help)
      echo "$usage_text" ; shift ; exit 0 ;;
-    -n|--native|-c|--containerised)
-      submit_args="$@"; break ;;
+    --list-batches)
+      list_content "batches" "$_list_batches_pre" "a" 2; exit $? ;;
+    -n|--native)
+      native='--native' ; shift ;;
     -p|--prefix)
      prefix="$2"; shift 2 ;;
+    -r|--robot)
+      robot="$2"; shift 2 ;;
     -s|--score-results)
-      evaluate=true; shift ;;
+      evaluate_method="$2"; shift 2 ;;
+    --show-batch)
+      show_content "batches" "$2" 2; exit $? ;;
     -t|--task)
      task="$2"; shift 2 ;;
     -v|--version)
@@ -215,7 +279,7 @@ while true; do
     -z|--zip)
      zip=true; shift ;;
     --)
-      shift ; break ;;
+      shift ; submit_args="$@"; break ;;
     *)
      echo "$(basename "$0"): option '$1' is unknown"; shift ; exit 1 ;;
   esac
 done

 # Process envs & envs-file here (defer all other argument evaluation to the
 # appropriate scripts which use the values)
-if [ ! -z "$envs_str" ] && [ ! -z "$envs_file" ]; then
-  printf "${colour_red}%s${colour_nc}\n" \
-    "ERROR: Both '--envs' && '--envs-file' provided; please only provide 1!"
-  exit 1
-elif [ ! -z "$envs_str" ]; then
-  envs_list=(${envs_str//,/ })
-elif [ ! -z "$envs_file" ]; then
-  envs_list=($(cat "$envs_file" | tr '\n' ' ' | sed 's/[[:space:]]*$//'))
+err="$(validate_envs "$envs_str" "$envs_batch")"
+if [ ! -z "$err" ]; then echo "$err"; exit 1; fi
+if [ ! -z "$envs_str" ]; then
+  envs_list=($(echo "$envs_str" | tr ',' '\n'))
+elif [ !
-z "$envs_batch" ]; then + envs_list=($(run_manager_cmd 'print("\n".join(get_value_by_name(\ + "batches", "'$envs_batch'", "environments")))')) fi ################################################################################ ####################### Print settings prior to running ######################## ################################################################################ -trap exit_gracefully SIGINT SIGQUIT SIGKILL SIGTERM - header_block "Dumping settings before running batch" $colour_magenta _ind="$(printf "%0.s " {1..8})" printf "\nUsing the following static settings for each environment:\n" -printf "$_ind%-25s%s\n$_ind" "Selected task:" "${task:-None}" -if [[ $submit_args == "-c"* ]]; then +printf "$_ind%-25s%s\n" "Selected task:" "${task:-None}" +printf "$_ind%-25s%s\n$_ind" "Selected robot:" "${robot:-None}" +if [ ! -z "$example" ]; then + printf "%-25s%s\n" "Example to run:" \ + "$example \ + ($([ ! -z "$containerised" ] && echo "containerised" || echo "native"))" +elif [ ! -z "$containerised" ]; then printf "%-25s%s\n" "Dockerfile to build:" \ "$(echo "$submit_args" | awk '{print $2}')/Dockerfile" -elif [ -z "$submit_args" ]; then - printf "%-25s%s\n" "Command to execute:" "None" +elif [ ! -z "$native" ]; then + printf "%-25s%s\n" "Command to execute:" "$(echo "${submit_args[@]}")" else - printf "%-25s%s\n" "Command to execute:" \ - "$(echo "$submit_args" | awk '{for (i=2; i<=NF; i++) printf $i " "}')" + printf "%-25s%s\n" "Command to execute:" "None" fi printf "\nIterating through the following environment list:\n$_ind" @@ -265,13 +330,14 @@ printf "\nPerforming the following after all environments have been run:\n" printf "$_ind%-25s%s\n" "Create results *.zip:" \ "$([ -z "$zip" ] && echo "No" || echo "Yes")" printf "$_ind%-25s%s\n\n" "Evalute results batch:" \ - "$([ -z "$evaluate" ] && echo "No" || echo "Yes")" - + "$([ -z "$evaluate_method" ] && echo "No" || echo "Yes ($evaluate_method)")" ################################################################################ ############### Iterate over each of the requested environments ################ ################################################################################ +trap exit_gracefully SIGINT SIGQUIT SIGKILL SIGTERM + if [ -z "$envs_list" ]; then echo "No environments provided; exiting." exit 0 @@ -283,14 +349,16 @@ while [ $i -lt ${#envs_list[@]} ]; do # Run the submission in the environment, waiting until something finishes header_block "Gathering results for environment: ${envs_list[$i]}" \ $colour_magenta - benchbot_run -t "${task:-None}" -e "${envs_list[$i]}" -f &> /tmp/benchbot_run_out & + benchbot_run -t "${task:-None}" -e "${envs_list[$i]}" -r "${robot:-None}" -f \ + &> /tmp/benchbot_run_out & run_pid=$! - benchbot_submit -r "${prefix}_$i.json" $submit_args & + benchbot_submit -r "${prefix}_$i.json" $example_prefix $example \ + $containerised $native $submit_args & submit_pid=$! while ps -p $run_pid &>/dev/null && ps -p $submit_pid &>/dev/null; do sleep 1 done - sleep 5 + sleep 3 # Run should never die normally, so treat this as an error if ! $(ps -p $run_pid &>/dev/null); then @@ -306,8 +374,7 @@ while [ $i -lt ${#envs_list[@]} ]; do fi # Handle the result of failed submissions (looking for an error code) - wait $submit_pid - submit_result=$? 
+  wait $submit_pid && submit_result=0 || submit_result=1
   if [ $submit_result -ne 0 ]; then
    echo ""
    kill_run
@@ -315,19 +382,22 @@
    exit 1
   fi

-  # Move to next environment (excluding case of collisions whilst in passive
-  # mode)
-  if [ ! -z "$(echo "$task" | grep -v "passive")" ] || \
-    [ -z "$(docker run --rm --network $DOCKER_NETWORK -it \
+  # Skip moving on if we collided using 'move_next' actuation, otherwise move
+  # to the next environment
+  if [ ! -z "$(run_manager_cmd 'print(get_value_by_name("tasks", "'$task'", \
+    "actions"))' | grep "'move_next'")" ] && \
+    [ ! -z "$(docker run --rm --network $DOCKER_NETWORK -it \
    "$DOCKER_TAG_BACKEND" /bin/bash -c \
-    'curl '$HOSTNAME_SUPERVISOR':'$PORT_SUPERVISOR'/simulator/is_collided' \
-    | grep "true")" ]; then
-    results_list+=("${prefix}_$i.json")
-    ((i++))
-  else
+    'curl '$HOSTNAME_SUPERVISOR:$PORT_SUPERVISOR'/robot/is_collided' | \
+    grep "true")" ]; then
    printf "\n${colour_yellow}$collision_warn${colour_nc}\n\n" \
      "${envs_list[$i]}"
+  else
+    results_list+=("${prefix}_$i.json")
+    i=$((i+1))
   fi
+
+  # Start the next run
   kill_run
 done

@@ -343,9 +413,10 @@
 if [ ! -z "$zip" ]; then
   echo ""
 fi

-if [ ! -z "$evaluate" ]; then
+if [ ! -z "$evaluate_method" ]; then
   echo -e "${colour_magenta}Evaluating results... ${colour_nc}"
-  benchbot_eval -o "${prefix}_scores.json" --required-task "$task" \
+  benchbot_eval -m "$evaluate_method" -o "${prefix}_scores.json" \
+    --required-task "$task" \
    --required-envs $(echo "${envs_list[@]}" | tr ' ' ',') \
    $([ -z "$zip" ] && echo "${results_list[@]}" || echo "${prefix}.zip")
 else
diff --git a/bin/benchbot_eval b/bin/benchbot_eval
index b65b2df..7bf25bc 100755
--- a/bin/benchbot_eval
+++ b/bin/benchbot_eval
@@ -4,6 +4,8 @@
 ################################################################################
 ################### Load Helpers & Global BenchBot Settings ####################
 ################################################################################

+set -euo pipefail
+IFS=$'\n\t'
 abs_path=$(readlink -f $0)
 pushd $(dirname $abs_path) > /dev/null
 source .helpers
@@ -15,17 +17,29 @@ popd > /dev/null
 ################################################################################

 usage_text="$(basename "$0") -- Script for evaluating the performance of your solution
-to a Scene Understanding Challenge against a running simulator. The script
-simply calls the 'benchbot_eval' python module with your provided results
-file/s.
+to a task in a running environment. This script simply calls the installed
+'benchbot_eval' python module with your provided results file/s.
+
+Results files are validated before evaluation. A results file must specify:
+
+  - details of the task in which the results were gathered
+  - details for each of the environments they were gathered in (i.e. if a task
+    requires multiple scenes; this is NOT for denoting multiple different
+    results, which should each be in their own file)
+  - the set of results, in the format described by the format type in the task
+    details
+
+Errors will be presented if validation fails, and evaluation will not proceed.
+There are helper functions available in the BenchBot API for creating results
+('BenchBot.empty_results()' & 'BenchBot.results_functions()').
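+
+As a purely illustrative sketch (field names are indicative only; see the
+'benchbot_eval' module documentation for the exact format), a results file
+contains a JSON object along the lines of:
+
+    {'task_details': {'name': 'semantic_slam:active:ground_truth'},
+     'environment_details': [{'name': 'miniroom', 'variant': 1}],
+     'results': {}}
+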
 Evaluation is performed on a set of results which are gathered from a set of
-environments. For example, you can evaluate your algorithm just in house:1, or
-evaluate the performance holistically in all 5 of the house scenes. As such,
-the following modes are supported by benchbot_eval:
+runs. For example, you can evaluate your algorithm just in house:1, or evaluate
+the performance holistically in all 5 of the house scenes. As such, the
+following modes are supported by benchbot_eval:

-  - Providing a single JSON results file (the score in this environment will
-    simply be returned as your final score)
+  - Providing a single JSON results file (the score in this run will simply
+    be returned as your final score)

   - Providing a list of JSON results files (the final score returned will
    be the average of the scores for each individual results file)
@@ -34,11 +48,6 @@ the following modes are supported by benchbot_eval:
    will be the same as above, across all JSON files found in the *.zip
    archive)

-Note: results must be of the format specified in the README here:
-  https://github.com/roboticvisionorg/benchbot_eval#the-results-format
-The evaluation will error if missing any required fields (in particular the
-fields describing task type).
-
 USAGE:

     See this information about evaluation options:
@@ -58,10 +67,27 @@ USAGE:

 OPTION DETAILS:

-    -h,--help
+    -h, --help
        Show this help menu.

-    -o,--output-location
+    --list-batches
+        Lists all supported environment batches. These can be used with the
+        '--required-envs-batch' option. Use '--show-batch' to see more
+        details about a batch.
+
+    --list-methods
+        List all supported evaluation methods. The listed methods are
+        printed in the format needed for the '--method' option. Use
+        '--show-method' to see more details about a method.
+
+    -m, --method
+        Name of the method to be used for evaluation of results. All ground
+        truths in the method's 'ground_truth_format' will be passed to the
+        evaluation script.
+
+        (use '--list-methods' to see a list of supported evaluation methods)
+
+    -o, --output-location
        Change the location where the evaluation scores json is saved. If
        not provided, results are saved as 'scores.json' in the current
        directory.

@@ -74,20 +100,34 @@ OPTION DETAILS:
        specifying valid environments, & 'benchbot_run --list-envs' for a
        complete list of supported environments.

-    --required-envs-file
-        A file specifying a single required environment name on each line.
-        Evaluation will not run unless a result is supplied for each of
-        these environments. See '--required-envs' above for further
-        details on valid environment specifications.
+    --required-envs-batch
+        The name of an environment batch specifying the required
+        environments. Evaluation will not run unless a result is supplied
+        for each of these environments. See '--required-envs' above for
+        further details on valid environment specifications.

     --required-task
        Forces the script to only accept results for the supplied task
        name. A list of supported task names can be found by running
        'benchbot_run --list-tasks'.

-    -v,--version
+    --show-batch
+        Prints information about the provided batch name if installed. The
+        corresponding file's location will be displayed, with a snippet of
+        its contents.
+
+    --show-method
+        Prints information about the provided method name if installed. The
+        corresponding YAML's location will be displayed, with a snippet of
+        its contents.
+
+    -v, --version
        Print version info for current installation.
+ -V, --validate-only + Only perform validation of each provided results file, then exit + without performing evaluation + FURTHER DETAILS: See the 'benchbot_examples' repository for example results (& solutions @@ -97,48 +137,115 @@ FURTHER DETAILS: b.talbot@qut.edu.au " -_invalid_location_err=\ -"ERROR: The provided results file '%s' either does not exist, or is not a file. -Please ensure all provided results files are valid." - -_missing_results_err=\ -"ERROR: No results file was provided. Please run again with a valid results -file as input." - _ground_truth_err=\ "ERROR: The script was unable to find ground truth files in the expected location ('%s'). This should be created as part of the 'benchbot_install' process. Please re-run the installer." +_list_batches_pre=\ +"The following environment batches are available in your installation: + " +_list_methods_pre=\ +"The following evaluation methods are available in your installation: + " + +function validate_method() { + # $1 evaluation method, $2 validate only + err= + if [ -z "$1" ] && [ -z "$2" ]; then + err="$(printf "%s %s\n" "Evaluation was requested but no evaluation"\ + "method was selected. A selection is required.")" + elif [ -z "$2" ] && \ + [ "$(run_manager_cmd 'print(exists("evaluation_methods", \ + [("name", "'$1'")]))')" != "True" ]; then + err="$(printf "%s %s\n" "Evaluation method '$1' is not supported." \ + "Please check '--list-methods'.")" + fi + + if [ ! -z "$err" ]; then + printf "$err\n" + printf "\n${colour_red}%s${colour_nc}" \ + "ERROR: Invalid evaluation mode selected (evaluation method ='$1')" + fi +} + +function validate_required_envs() { + # $1 required envs, $2 required envs batch + if [ ! -z "$1" ] && [ ! -z "$2" ]; then + printf "${colour_red}%s %s${colour_nc}\n" \ + "ERROR: Only '--required-envs' or '--required-envs-batch' is valid,"\ + "not both." + elif [ ! -z "$2" ]; then + validate_content "batches" "$2" "" "" 2 + fi +} + +function validate_results_files() { + # $@ results files list + err= + if [ $# -eq 0 ]; then + err="$(printf "%s %s\n" "No results file/s were provided. Please run" \ + "again with a results file.")" + else + for r in "$@"; do + if [ ! -f "$r" ]; then + err="$(printf "%s %s\n" "Results file '$r' either doesn't exist," \ + "or isn't a file.")" + fi + done + fi + + if [ ! -z "$err" ]; then + printf "$err\n" + printf "\n${colour_red}%s${colour_nc}" \ + "ERROR: Results file/s provided were invalid. See errors above." + fi +} + ################################################################################ #################### Parse & handle command line arguments ##################### ################################################################################ # Safely parse options input -long_args='help,output-location:,required-envs:,required-envs-file:,\ -required-task:,version' -parse_out=$(getopt -o ho:v --long "$long_args" -n "$(basename "$0")" -- "$@") +_args='help,list-batches,list-methods,method:,output-location:,\ +required-envs:,required-envs-batch:,required-task:,show-batch:,\ +show-method:,validate-only,version' +parse_out=$(getopt -o ho:m:vV --long "$_args" -n "$(basename "$0")" -- "$@") if [ $? != 0 ]; then exit 1; fi eval set -- "$parse_out" +method= required_envs= -required_envs_file= +required_envs_batch= required_task= results_files= scores_location='scores.json' +validate_only= while true; do case "$1" in -h|--help) echo "$usage_text" ; shift ; exit 0 ;; + --list-batches) + list_content "batches" "$_list_batches_pre" "a" 2; exit $? 
;; + --list-methods) + list_content "evaluation_methods" "$_list_methods_pre"; exit $? ;; + -m|--method) + method="$2"; shift 2 ;; -o|--output-location) scores_location="$2"; shift 2 ;; --required-envs) - required_envs=(${2//,/ }); shift 2 ;; - --required-envs-file) - required_envs_file="$2"; shift 2 ;; + required_envs=($(echo "$2" | tr ',' '\n')); shift 2 ;; + --required-envs-batch) + required_envs_batch="$2"; shift 2 ;; --required-task) required_task="$2"; shift 2 ;; + --show-batch) + show_content "batches" "$2" 2; exit $? ;; + --show-method) + show_content "evaluation_methods" "$2"; exit $? ;; -v|--version) print_version_info; exit ;; + -V|--validate-only) + validate_only=1; shift ;; --) shift ; results_files=("$@"); break;; *) @@ -146,47 +253,37 @@ while true; do esac done -# Accept only --required-envs or --required-envs-file -if [ ! -z "$required_envs" ] && [ ! -z "$required_envs_file" ]; then - printf "${colour_red}%s\n%s${colour_nc}\n" \ - "ERROR: Both '--required-envs' && '--required-envs-file' provided; please " \ - "only provide 1!" - exit 1 -elif [ ! -z "$required_envs_file" ]; then - required_envs=($(cat "$required_envs_file" | tr '\n' ' ' | \ - sed 's/[[:space:]]*$//')) +# Bail if any of the requested configurations are invalid +err="$(validate_required_envs "$required_envs" "$required_envs_batch")" +if [ ! -z "$err" ]; then echo "$err"; exit 1; fi +if [ ! -z "$required_envs_batch" ]; then + required_envs=($(run_manager_cmd 'print("\n".join(get_value_by_name(\ + "batches", "'$required_envs_batch'", "environments")))')) fi - -# Bail if any of the results files don't exist, or we got no results files -if [ -z "$results_files" ]; then - printf "${colour_red}${_missing_results_err}${colour_nc}\n" - exit 1 -else - for r in "${results_files[@]}"; do - if [ ! -f "$r" ]; then - printf "${colour_red}${_invalid_location_err}${colour_nc}\n" "$r" - exit 1 - fi +if [ ! -z "$required_envs" ]; then + for e in "${required_envs[@]}"; do + err="$(validate_environment "$e")" + if [ ! -z "$err" ]; then echo "$err"; exit 1; fi done fi - -# Get an absolute path for ground truth (as everything else is relative to where -# the script was run...) -gt_abs=$(realpath "$PATH_GROUND_TRUTH") -if [ ! -d "$gt_abs" ]; then - printf "${colour_red}${_ground_truth_err}${colour_nc}\n" "$gt_abs" - exit 1 +if [ ! -z "$required_task" ]; then + err="$(validate_content "tasks" "$required_task")" + if [ ! -z "$err" ]; then echo "$err"; exit 1; fi fi +err="$(validate_method "$method" "$validate_only")" +if [ ! -z "$err" ]; then echo "$err"; exit 1; fi +err="$(validate_results_files "${results_files[@]}")" +if [ ! -z "$err" ]; then echo "$err"; exit 1; fi ################################################################################ -################# Run evaluation on the provided results file ################## +##################### Validate the provided results files ###################### ################################################################################ -header_block "Running evaluation over ${#results_files[@]} input files" \ +header_block "Running validation over ${#results_files[@]} input files" \ $colour_green -# Form some strings of python code from our input arguments -python_results='["'"$(echo "${results_files[@]}" | sed 's/ /","/g')"'"]' +# Build up some strings for Python +python_results_files='["'"$(echo "${results_files[@]}" | sed 's/ /","/g')"'"]' python_req_task= if [ ! -z "$required_task" ]; then python_req_task=', required_task="'"$required_task"'"' @@ -197,13 +294,25 @@ if [ ! 
-z "$required_envs" ]; then sed 's/ /","/g')'"]' fi -# Run the python command & exit +# Validate provided results using the Validator class from 'benchbot_eval' +# Python module +python3 -c 'from benchbot_eval import Validator; \ + Validator('"$python_results_files$python_req_task$python_req_envs"')' + +if [ ! -z "$validate_only" ]; then exit 0; fi + +################################################################################ +##################### Evaluate the provided results files ###################### +################################################################################ + +header_block "Running evaluation over ${#results_files[@]} input files" \ + $colour_green + +# Evaluate results using the pickled Validator state from the step above python3 -c 'from benchbot_eval import Evaluator; \ - Evaluator('"$python_results"', "'"$gt_abs"'", "'$scores_location'"\ - '"$python_req_task$python_req_envs"').evaluate()' -result=$? -if [ $result -ne 0 ]; then + Evaluator("'$method'", "'$scores_location'").evaluate()' && ret=0 || ret=1 +if [ $ret -ne 0 ]; then printf "${colour_red}\n%s: %d${colour_nc}\n" \ - "Evaluation failed with result error code" "$result" + "Evaluation failed with result error code" "$ret" fi -exit $result +exit $ret diff --git a/bin/benchbot_install b/bin/benchbot_install index 495c739..f442df0 100755 --- a/bin/benchbot_install +++ b/bin/benchbot_install @@ -4,6 +4,8 @@ ################### Load Helpers & Global BenchBot Settings #################### ################################################################################ +set -euo pipefail +IFS=$'\n\t' abs_path=$(readlink -f $0) pushd $(dirname $abs_path) > /dev/null source .helpers @@ -20,9 +22,11 @@ $URL_ROBOT $HOSTNAME_ROBOT $URL_SUPERVISOR $HOSTNAME_SUPERVISOR $URL_DEBUG $HOSTNAME_DEBUG" -# Default program versions to install -NVIDIA_DEFAULT='nvidia-driver-455' +# Defaults for the installation process +ADDONS_DEFAULT='benchbot-addons/ssu' CUDA_DEFAULT='cuda' +NVIDIA_DEFAULT='nvidia-driver-455' +SIMULATOR_DEFAULT='sim_unreal' ################################################################################ ########################### Definitions for messages ########################### @@ -45,46 +49,75 @@ USAGE: OPTION DETAILS: - -h,--help + -h, --help Show this help menu. - -b,--branch + -a, --addons + Comma separated list of add-ons to install. Add-ons exist in GitHub + repositories, and are specified by their identifier: + 'username/repo_name'. Add-on installation also installs all + required add-on dependencies. See add-on manager documentation for + details: + https://github.com/qcr/benchbot-addons + + Also see '--list-addons' for a details on installed add-ons, and a + list of all known BenchBot add-ons. + + -A, --addons-only + Only perform the installation of the specified add-ons. Skip all + other steps in the install process, including Docker images and + local software installation. This flag is useful if you just want + to change add-ons inside your working installation. + + -b, --branch Specify a branch other than master to install. The only use for this flag is active development. The general user will never need to use this flag. - -e,--envs-url - Specify a custom URL to look for an \"environment information file\". - An \"environment information file\" is a single line file with space- - separated fields: MD5 checksum of environments *.zip, URL pointing - to the environments *.zip, & plaintext version details. A general - user should never need this flag. 
- - Multiple sets of environments can be installed by providing this - flag multiple times: - benchbot_install -e https://envs1.com -e https://envs2.com - - -f,--force-clean + -f, --force-clean Forces an install of the BenchBot software stack from scratch. It will run uninstall, then the full install process. + --list-addons + List all currently installed add-ons with their dependency + structure noted. Also list all official add-ons available in the + 'benchbot-addons' GitHub organisation. If you would like to add a + community contribution to the community list, please follow the + instructions here: + https://github.com/qcr/benchbot-addons + --no-simulator - Runs installation without installing the Nvidia Isaac simulator. - This option is useful for avoiding the excessive space requirements - (>100GB) when using the BenchBot software stack on a machine that - will only be used with a real robot. + Runs installation without installing the Nvidia Isaac Unreal + simulator. This option is useful for avoiding the excessive space + requirements (>100GB) when using the BenchBot software stack on a + machine that will only be used with a real robot. --no-update Skip checking for updates to the BenchBot software stack, & instead jump straight into the installation process. - -u,--uninstall + --remove-addons + Comma separated list of add-ons to remove (uses the same syntax as + the '--addons' flag). This command will also remove any dependent + add-ons as they will no longer work (e.g. if A depends on B, and + B is uninstalled, then there is no reason for A to remain). You + will be presented with a list of add-ons that will be removed, and + prompted before removal commences. + + -s, --simulators + Specify simulator/s to install (only 'sim_unreal' is currently + supported). Comma-separated lists are accepted to specify multiple + simulators. If this option isn't included, the default 'sim_unreal' + is installed. This flag will have more practical use in the future + when we add Omniverse support. + + -u, --uninstall Uninstall the BenchBot software stack from the machine. All BenchBot related Docker images will be removed from the system, the API removed from pip, & downloaded files removed from the BenchBot install. This flag is incompatible with all other flags. - -v,--version + -v, --version Print version info for current installation. FURTHER DETAILS: @@ -102,14 +135,6 @@ fix this. If the error is more generic, please contact us so that we can update our pre-install host system checks. " -envs_err="\ -Ensure that the URL: - "'$envs_url'" -points to a single-line text file with whitespace-separated fields. The first -field contains the md5sum of the latest envs_*.zip, second field is URL of -envs_*.zip, final field (optional) is YYYYMMDD timestamp of zip. A working -internet connection also helps! -" ################################################################################ ################### All checks for the host system (ouch...) ################### @@ -559,16 +584,15 @@ checks_list_post=( "Validating the build against the host system:" "cudadriverdep" # Does Nvidia driver satisfy cuda dep? 
"Validating BenchBot libraries on the host system:" + "addonscloned" + "addonsuptodate" + "addonsinstalled" "apicloned" "apiuptodate" "apiinstalled" - "examplescloned" - "examplesuptodate" "evalcloned" "evaluptodate" "evalinstalled" - "batchcloned" - "batchuptodate" "Integrating BenchBot with the host system:" "hostsavail" "symlinks" @@ -609,6 +633,66 @@ if [ -z "$v" ]; then v="cuda-10-1"; fi && sudo apt remove -y "$v" && sudo apt -y autoremove && sudo apt install -y "$v"' chk_cudadriverdep_reboot=1 +chk_addonscloned_name='BenchBot Add-ons Manager cloned' +chk_addonscloned_pass='Yes' +chk_addonscloned_fail='No' +chk_addonscloned_check=\ +'git -C '"$PATH_ADDONS"' rev-parse --show-toplevel 2>/dev/null' +chk_addonscloned_eval='[ "$check_result" == "$(realpath '"$PATH_ADDONS"')" ]' +chk_addonscloned_issue="\ + The BenchBot Add-ons Manager Python library is not cloned on the host system. + Having it installed is required for using add-ons, which contain all of the + pre-made content for BenchBot (robots, environments, task definitions, + evaluation methods, etc.)." +chk_addonscloned_fix=\ +'rm -rf '"$PATH_ADDONS"' && +git clone '"$GIT_ADDONS $PATH_ADDONS"' && +pushd '"$PATH_ADDONS"' && +git fetch --all && git checkout -t origin/$BRANCH_DEFAULT && popd' +chk_addonscloned_reboot=1 + +chk_addonsuptodate_name='BenchBot Add-ons Manager up-to-date' +chk_addonsuptodate_pass='Up-to-date' +chk_addonsuptodate_fail='Outdated' +chk_addonsuptodate_check=\ +'[ -d '"$PATH_ADDONS"' ] && git -C '"$PATH_ADDONS"' rev-parse HEAD && +git ls-remote '"$GIT_ADDONS"' $BRANCH_DEFAULT | awk '"'"'{print $1}'"'" +chk_addonsuptodate_eval='[ ! -z "$check_result" ] && + [ $(echo "$check_result" | uniq | wc -l) -eq 1 ]' +chk_addonsuptodate_issue="\ + The version of the BenchBot Add-ons Manager Python library on the host system + is out of date. The current version hash & latest version hash respectively + are: + +"'$check_result'" + + Please move to the latest version." +chk_addonsuptodate_fix=\ +'pushd '"$PATH_ADDONS"' && +git fetch --all && git checkout -- . && +(git checkout -t origin/$BRANCH_DEFAULT || git checkout $BRANCH_DEFAULT) && +git pull && popd' +chk_addonsuptodate_reboot=1 + +chk_addonsinstalled_name='BenchBot Add-ons Manager installed' +chk_addonsinstalled_pass='Available' +chk_addonsinstalled_fail='Not found' +chk_addonsinstalled_check=\ +'python3 -c '"'"'import benchbot_addons; print(benchbot_addons.__file__);'"'"' \ + 2>/dev/null' +chk_addonsinstalled_eval='[ ! -z "$check_result" ]' +chk_addonsinstalled_issue="\ + BenchBot Add-ons Manager was not found in Python. It is either not installed, + or the current terminal is not correctly sourcing your installed Python + packages (could be a virtual environment, conda, ROS, etc). + + Please do not run the automatic fix if you intend to source a different Python + environment before running BenchBot." +chk_addonsinstalled_fix=\ +'pushd '"$PATH_ADDONS"' && +python3 -m pip install -e . && popd' +chk_addonsinstalled_reboot=1 + chk_apicloned_name='BenchBot API cloned' chk_apicloned_pass='Yes' chk_apicloned_fail='No' @@ -616,7 +700,7 @@ chk_apicloned_check=\ 'git -C '"$PATH_API"' rev-parse --show-toplevel 2>/dev/null' chk_apicloned_eval='[ "$check_result" == "$(realpath '"$PATH_API"')" ]' chk_apicloned_issue="\ - The BenchBot API python library is not cloned on the host system. Having it + The BenchBot API Python library is not cloned on the host system. 
Having it installed significantly improves the development experience, & allows you to run your submissions natively without containerisation." chk_apicloned_fix=\ @@ -635,7 +719,7 @@ git ls-remote '"$GIT_API"' $BRANCH_DEFAULT | awk '"'"'{print $1}'"'" chk_apiuptodate_eval='[ ! -z "$check_result" ] && [ $(echo "$check_result" | uniq | wc -l) -eq 1 ]' chk_apiuptodate_issue="\ - The version of the BenchBot API python library on the host system is out of + The version of the BenchBot API Python library on the host system is out of date. The current version hash & latest version hash respectively are: "'$check_result'" @@ -656,56 +740,17 @@ chk_apiinstalled_check=\ 2>/dev/null' chk_apiinstalled_eval='[ ! -z "$check_result" ]' chk_apiinstalled_issue="\ - BenchBot API was not found in python. It is either not installed, or the - current terminal is not correctly sourcing your installed python packages + BenchBot API was not found in Python. It is either not installed, or the + current terminal is not correctly sourcing your installed Python packages (could be a virtual environment, conda, ROS, etc). - Please do not run the automatic fix if you intend to source a different python + Please do not run the automatic fix if you intend to source a different Python environment before running BenchBot." chk_apiinstalled_fix=\ 'pushd '"$PATH_API"' && python3 -m pip install -e . && popd' chk_apiinstalled_reboot=1 -chk_examplescloned_name='BenchBot examples cloned' -chk_examplescloned_pass='Yes' -chk_examplescloned_fail='No' -chk_examplescloned_check=\ -'git -C '"$PATH_EXAMPLES"' rev-parse --show-toplevel 2>/dev/null' -chk_examplescloned_eval='[ "$check_result" == "$(realpath '"$PATH_EXAMPLES"')" ]' -chk_examplescloned_issue="\ - The BenchBot examples python library is not cloned on the host system. Having - it installed provides hands on examples to get up & running with the BenchBot - system, including introductions to all of the different challenge modes." -chk_examplescloned_fix=\ -'rm -rf '"$PATH_EXAMPLES"' && -git clone '"$GIT_EXAMPLES $PATH_EXAMPLES"' && -pushd '"$PATH_EXAMPLES"' && -git fetch --all && git checkout -t origin/$BRANCH_DEFAULT && popd' -chk_examplescloned_reboot=1 - -chk_examplesuptodate_name='BenchBot examples up-to-date' -chk_examplesuptodate_pass='Up-to-date' -chk_examplesuptodate_fail='Outdated' -chk_examplesuptodate_check=\ -'[ -d '"$PATH_EXAMPLES"' ] && git -C '"$PATH_EXAMPLES"' rev-parse HEAD && -git ls-remote '"$GIT_EXAMPLES"' $BRANCH_DEFAULT | awk '"'"'{print $1}'"'" -chk_examplesuptodate_eval='[ ! -z "$check_result" ] && - [ $(echo "$check_result" | uniq | wc -l) -eq 1 ]' -chk_examplesuptodate_issue="\ - The version of the BenchBot examples python library on the host system is out - of date. The current version hash & latest version hash respectively are: - -"'$check_result'" - - Please move to the latest version." -chk_examplesuptodate_fix=\ -'pushd '"$PATH_EXAMPLES"' && -git fetch --all && git checkout -- . && -(git checkout -t origin/$BRANCH_DEFAULT || git checkout $BRANCH_DEFAULT) && -git pull && popd' -chk_examplesuptodate_reboot=1 - chk_evalcloned_name='BenchBot evaluation cloned' chk_evalcloned_pass='Yes' chk_evalcloned_fail='No' @@ -713,9 +758,9 @@ chk_evalcloned_check=\ 'git -C '"$PATH_EVAL"' rev-parse --show-toplevel 2>/dev/null' chk_evalcloned_eval='[ "$check_result" == "$(realpath '"$PATH_EVAL"')" ]' chk_evalcloned_issue="\ - The BenchBot evaluation python library is not cloned on the host system. 
Having it - installed allows you to evaluate the performance of your semantic scene - understanding algorithms directly from your machine." + The BenchBot evaluation Python library is not cloned on the host system. + Having it installed allows you to evaluate the performance of your semantic + scene understanding algorithms directly from your machine." chk_evalcloned_fix=\ 'rm -rf '"$PATH_EVAL"' && git clone '"$GIT_EVAL $PATH_EVAL"' && @@ -732,7 +777,7 @@ git ls-remote '"$GIT_EVAL"' $BRANCH_DEFAULT | awk '"'"'{print $1}'"'" chk_evaluptodate_eval='[ ! -z "$check_result" ] && [ $(echo "$check_result" | uniq | wc -l) -eq 1 ]' chk_evaluptodate_issue="\ - The version of the BenchBot evaluation python library on the host system is + The version of the BenchBot evaluation Python library on the host system is out of date. The current version hash & latest version hash respectively are: "'$check_result'" @@ -753,56 +798,17 @@ chk_evalinstalled_check=\ 2>/dev/null' chk_evalinstalled_eval='[ ! -z "$check_result" ]' chk_evalinstalled_issue="\ - BenchBot evaluation was not found in python. It is either not installed, or - the current terminal is not correctly sourcing your installed python packages + BenchBot evaluation was not found in Python. It is either not installed, or + the current terminal is not correctly sourcing your installed Python packages (could be a virtual environment, conda, ROS, etc). - Please do not run the automatic fix if you intend to source a different python + Please do not run the automatic fix if you intend to source a different Python environment before running BenchBot." chk_evalinstalled_fix=\ 'pushd '"$PATH_EVAL"' && python3 -m pip install -e . && popd' chk_evalinstalled_reboot=1 -chk_batchcloned_name='BenchBot batches cloned' -chk_batchcloned_pass='Yes' -chk_batchcloned_fail='No' -chk_batchcloned_check=\ -'git -C '"$PATH_BATCHES"' rev-parse --show-toplevel 2>/dev/null' -chk_batchcloned_eval='[ "$check_result" == "$(realpath '"$PATH_BATCHES"')" ]' -chk_batchcloned_issue="\ - The BenchBot batches library contains definitions for environment batches. - Environment batches robustly select a set of environments that may be - required, like in challenges using BenchBot." -chk_batchcloned_fix=\ -'rm -rf '"$PATH_BATCHES"' && -git clone '"$GIT_BATCHES $PATH_BATCHES"' && -pushd '"$PATH_BATCHES"' && -git fetch --all && git checkout -t origin/$BRANCH_DEFAULT && popd' -chk_batchcloned_reboot=1 - -chk_batchuptodate_name='BenchBot batches up-to-date' -chk_batchuptodate_pass='Up-to-date' -chk_batchuptodate_fail='Outdated' -chk_batchuptodate_check=\ -'[ -d '"$PATH_BATCHES"' ] && git -C '"$PATH_BATCHES"' rev-parse HEAD && -git ls-remote '"$GIT_BATCHES"' $BRANCH_DEFAULT | awk '"'"'{print $1}'"'" -chk_batchuptodate_eval='[ ! -z "$check_result" ] && - [ $(echo "$check_result" | uniq | wc -l) -eq 1 ]' -chk_batchuptodate_issue="\ - The version of BenchBot batches on the host system is out of date. The - current version hash & latest version hash respectively are: - -"'$check_result'" - - Please move to the latest version." -chk_batchuptodate_fix=\ -'pushd '"$PATH_BATCHES"' && -git fetch --all && git checkout -- . 
&& -(git checkout -t origin/$BRANCH_DEFAULT || git checkout $BRANCH_DEFAULT) && -git pull && popd' -chk_batchuptodate_reboot=1 - chk_hostsavail_name='BenchBot hosts available' chk_hostsavail_pass='Found' chk_hostsavail_fail='Not found' @@ -865,7 +871,7 @@ function handle_requirement() { # Perform the check, printing the result & resolving issues if possible retval=0 printf "\t$name: " - check_result=$(eval "$check") + check_result=$(eval "$check") && true printf "\033[46G" if $(eval $evaluate); then printf "${colour_green}%35s" "${pass//'$check_result'/$check_result}" @@ -943,7 +949,7 @@ function uninstall_benchbot() { docker rmi $targets; fi - rm -rfv "$PATH_API" "$PATH_EXAMPLES" "$PATH_EVAL" 2>/dev/null + rm -rfv "$PATH_ADDONS" "$PATH_API" "$PATH_EXAMPLES" "$PATH_EVAL" 2>/dev/null sudo rm -v "$PATH_SYMLINKS"/benchbot* 2>/dev/null echo -e "\nFinished uninstalling!" @@ -957,35 +963,46 @@ function uninstall_benchbot() { trap cleanup_terminal EXIT # Safely parse options input -input=$@ -parse_out=$(getopt -o hb:e:fuv \ - --long help,branch:,envs-url:,force-clean,uninstall,no-simulator,no-update,version \ - -n "$(basename "$abs_path")" -- "$@") +input=("$@") +_args="help,addons:,addons-only:,branch:,force-clean,uninstall,list-addons,\ +no-simulator,no-update,remove-addons:,simulators:,version" +parse_out=$(getopt -o ha:A:b:fs:uv --long $_args -n "$(basename "$abs_path")" \ + -- "$@") if [ $? != 0 ]; then exit 1; fi eval set -- "$parse_out" -updates_skip= +addons= no_simulator= -envs_urls=() +simulators="$SIMULATOR_DEFAULT" +updates_skip= while true; do case "$1" in -h|--help) echo "$usage_text" ; exit 0 ;; + -a|--addons) + addons="$2"; shift 2 ;; + -A|--addons-only) + install_addons "$2"; exit 0 ;; -b|--branch) # This is a real dirty way to do this... sorry BRANCH_DEFAULT="$2"; shift 2 - echo "Using branch '$BRANCH_DEFAULT' instead of the default!" ;; - -e|--envs-url) - envs_urls+=("$2"); shift 2 ;; + printf "\n${colour_yellow}%s${colour_nc}\n" \ + "Using branch '$BRANCH_DEFAULT' instead of the default!" ;; -f|--force-clean) uninstall_benchbot; shift ;; + -s|--simulators) + simulators="$2"; shift 2 ;; -u|--uninstall) uninstall_benchbot; exit ;; -v|--version) print_version_info; exit ;; + --list-addons) + list_addons; exit ;; --no-simulator) no_simulator=1; shift ;; --no-update) updates_skip=1; shift ;; + --remove-addons) + remove_addons "$2"; exit ;; --) shift ; break ;; *) @@ -993,22 +1010,17 @@ while true; do esac done -# Use the default environments URL if none was provided, & sort them so some of -# our Docker caching & env checking later on has an easier time -if [ -z "$envs_urls" ]; then - envs_urls=("$([ -z "$no_simulator" ] && \ - echo "${URLS_ENVS_INFO_FULL_DEFAULT[@]}" || \ - echo "${URLS_ENVS_INFO_LITE_DEFAULT[@]}")") -fi -envs_urls=($(echo "${envs_urls[@]}" | tr ' ' '\n' | sort | tr '\n' ' ')) +# Sanitise argument values +if [ -z "$addons" ]; then addons="$ADDONS_DEFAULT"; fi +if [ ! -z "$no_simulator" ]; then simulators=""; fi # Pre-install if [ -z "$updates_skip" ]; then header_block "CHECKING BENCHBOT SCRIPTS VERSION" ${colour_blue} echo -e "\nFetching latest hash for Benchbot scripts ... " - _benchbot_info=$(is_latest_benchbot $BRANCH_DEFAULT) - is_latest=$? + _benchbot_info=$(is_latest_benchbot $BRANCH_DEFAULT) && is_latest=0 || \ + is_latest=1 benchbot_latest_hash=$(echo "$_benchbot_info" | latest_version_info | \ cut -d ' ' -f 1) echo -e "\t\t$benchbot_latest_hash." 
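The '&& is_latest=0 || is_latest=1' pattern here, like the 'submit_result' &
'build_ret' assignments elsewhere in this patch, exists because the scripts
now run under 'set -euo pipefail': a bare failing command would abort the
whole script before its exit status could be inspected. A minimal standalone
sketch of the idiom ('probe' is a stand-in function, not part of BenchBot):

    #!/usr/bin/env bash
    set -euo pipefail
    probe() { return 1; }            # pretend this is a version check
    probe && status=0 || status=1    # failure is captured, not fatal
    echo "probe exited with status $status"

The '&& true' appended to 'check_result=$(eval "$check")' in
'handle_requirement' above serves the same purpose, letting a failed check
fall through to its evaluation step instead of killing the installer.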
@@ -1020,7 +1032,7 @@ if [ -z "$updates_skip" ]; then "restarting install script ...\n${colour_nc}" git fetch --all && git checkout -- . && git checkout "$benchbot_latest_hash" echo -e "\n${colour_yellow}Done.${colour_nc}" - popd > /dev/null && exec $0 $input --no-update + popd > /dev/null && exec $0 ${input[@]} --no-update else echo -e "$_benchbot_info" exit 1 @@ -1031,10 +1043,11 @@ fi header_block "PART 1: EXAMINING SYSTEM STATE" $colour_blue # Patch in any values that can only be determined at runtime -if [ ! -z "$no_simulator" ]; then +# TODO clean this all up for use with multiple simulators... +if [ -z "$simulators" ]; then checks_list_pre=( "${checks_list_pre[@]/isaac}" ) # Remove the Isaac check fi -_sz=$([ -z "$no_simulator" ] && echo "$SIZE_GB_FULL" || echo "$SIZE_GB_LITE") +_sz=$([ ! -z "$simulators" ] && echo "$SIZE_GB_FULL" || echo "$SIZE_GB_LITE") chk_fsspace_eval=${chk_fsspace_eval/SIZE/$_sz} chk_fsspace_issue=${chk_fsspace_issue/SIZE/$_sz} chk_fsspace_fix=${chk_fsspace_fix/SIZE/$_sz} @@ -1045,10 +1058,12 @@ for c in "${checks_list_pre[@]}"; do if [[ "$c" =~ :$ ]]; then printf "\n${colour_blue}$c${colour_nc}\n" elif [ ! -z "$c" ]; then + set +e handle_requirement "$c" res=$? + set -e if [ $res -eq 2 ]; then - popd > /dev/null && exec $0 $input --no-update + popd > /dev/null && exec $0 ${input[@]} elif [ $res -eq 1 ]; then exit 1 fi @@ -1073,73 +1088,51 @@ fi header_block "PART 2: FETCHING LATEST BENCHBOT VERSION INFO" $colour_blue # Get the latest commit hashes for each of our git repos -if [ -z "$no_simulator" ]; then +# TODO adapt this to handle multiple simulators... +hash_fail_msg="Failed (check Internet connection & valid branch name!). Exiting." +if [ ! -z "$simulators" ]; then echo -e "\nFetching latest hash for BenchBot Simulator ... " - benchbot_simulator_hash=$(is_latest_benchbot_simulator $BRANCH_DEFAULT | \ - latest_version_info) -if [ -z "$benchbot_simulator_hash" ]; then - printf "\n\n${colour_red}%s${colour_nc}\n\n" \ - "Failed (check Internet connection!). Exiting." - exit 1 -fi + benchbot_simulator_hash=$( (is_latest_benchbot_simulator $BRANCH_DEFAULT || \ + true) | latest_version_info) + if [ -z "$benchbot_simulator_hash" ]; then + printf "\n${colour_red}$hash_fail_msg${colour_nc}\n\n" + exit 1 + fi echo -e "\t\t$benchbot_simulator_hash." fi echo "Fetching latest hash for BenchBot Robot Controller ... " -benchbot_controller_hash=$(is_latest_benchbot_controller $BRANCH_DEFAULT | \ - latest_version_info) +benchbot_controller_hash=$( (is_latest_benchbot_controller $BRANCH_DEFAULT || \ + true) | latest_version_info) if [ -z "$benchbot_controller_hash" ]; then - printf "\n\n${colour_red}%s${colour_nc}\n\n" \ - "Failed (check Internet connection!). Exiting." + printf "\n${colour_red}$hash_fail_msg${colour_nc}\n\n" exit 1 fi echo -e "\t\t$benchbot_controller_hash." echo "Fetching latest hash for BenchBot Supervisor ... " -benchbot_supervisor_hash=$(is_latest_benchbot_supervisor $BRANCH_DEFAULT | \ - latest_version_info) +benchbot_supervisor_hash=$( (is_latest_benchbot_supervisor $BRANCH_DEFAULT || \ + true) | latest_version_info) if [ -z "$benchbot_supervisor_hash" ]; then - printf "\n\n${colour_red}%s${colour_nc}\n\n" \ - "Failed (check Internet connection!). Exiting." + printf "\n${colour_red}$hash_fail_msg${colour_nc}\n\n" exit 1 fi echo -e "\t\t$benchbot_supervisor_hash." echo "Fetching latest hash for BenchBot API ... 
" -benchbot_api_hash=$(is_latest_benchbot_api $BRANCH_DEFAULT | \ +benchbot_api_hash=$( (is_latest_benchbot_api $BRANCH_DEFAULT || true) | \ latest_version_info) if [ -z "$benchbot_api_hash" ]; then - printf "\n\n${colour_red}%s${colour_nc}\n\n" \ - "Failed (check Internet connection!). Exiting." + printf "\n${colour_red}$hash_fail_msg${colour_nc}\n\n" exit 1 fi echo -e "\t\t$benchbot_api_hash." -# Get md5sum of the latest from the provided envs_url -echo -e "\nFetching md5sum & URL for latest version of environments ... " -benchbot_envs_md5sums=() -benchbot_envs_urls=() -for i in "${!envs_urls[@]}"; do - u="${envs_urls[$i]}" - echo -e "\t$u ... " - - _info=$(is_latest_benchbot_envs "$u" "$i" | latest_version_info) - _md5sum=$(echo "$_info" | cut -d ' ' -f1) - _url=$(echo "$_info" | cut -d ' ' -f2) - if [ -z "$_md5sum" ] || [[ ! "$_md5sum" =~ ^[a-f0-9]{32}$ ]]; then - printf "\n\n${colour_red}%s\n\n%s${colour_nc}\n" \ - "ERROR: Failed to fetch valid MD5SUM for environments" \ - "${envs_err/'$envs_url'/$u}" - exit 1 - elif [ -z "$_url" ] || \ - [[ ! "$_url" =~ ^https://[-A-Za-z0-9\+@#/%?=~_|:,.";&!"]*$ ]]; then - printf "\n\n${colour_red}%s\n\n%s${colour_nc}\n" \ - "ERROR: Failed to fetch valid URL for environments" \ - "${envs_err/'$envs_url'/$u}" - exit 1 - fi - - benchbot_envs_md5sums+=("$_md5sum") - benchbot_envs_urls+=("$_url") - echo -e "\t\tDone." -done +echo "Fetching latest hash for BenchBot ROS Messages ... " +benchbot_msgs_hash=$( (is_latest_benchbot_msgs $BRANCH_DEFAULT || true) | \ + latest_version_info) +if [ -z "$benchbot_msgs_hash" ]; then + printf "\n${colour_red}$hash_fail_msg${colour_nc}\n\n" + exit 1 +fi +echo -e "\t\t$benchbot_msgs_hash." # PART 3: Build docker images (both simulator & submission base image) header_block "PART 3: BUILDING DOCKER IMAGES" $colour_blue @@ -1161,8 +1154,8 @@ docker build -t "$DOCKER_TAG_CORE" -f "$PATH_DOCKERFILE_CORE" \ --build-arg TZ=$(cat /etc/timezone) \ --build-arg NVIDIA_DRIVER_VERSION="${nvidia_driver_version}" \ --build-arg CUDA_DRIVERS_VERSION="${cuda_drivers_version}" \ - --build-arg CUDA_VERSION="${cuda_version}" $PATH_ROOT -build_ret=$? 
+ --build-arg CUDA_VERSION="${cuda_version}" $PATH_ROOT && \ + build_ret=0 || build_ret=1 if [ $build_ret -ne 0 ]; then printf "\n${colour_red}%s: %d\n\n${build_err}${colour_nc}\n" \ "ERROR: Building BenchBot \"core\" returned a non-zero error code" \ @@ -1171,49 +1164,37 @@ if [ $build_ret -ne 0 ]; then fi # Build the BenchBot Backend Docker image -backend_dockerfile=$([ -z "$no_simulator" ] && \ - echo "$PATH_DOCKERFILE_BACKEND" || echo "$PATH_DOCKERFILE_BACKEND_LITE") -backend_name="$(basename "$backend_dockerfile")" +# TODO adapt this to handle multiple simulators printf "\n${colour_blue}%s${colour_nc}\n" \ - "BUILDING BENCHBOT BACKEND DOCKER IMAGE ($backend_name):" -docker build -t "$DOCKER_TAG_BACKEND" -f "$backend_dockerfile" \ - --build-arg ISAAC_SDK_TGZ="${sdk_file}" \ - --build-arg BENCHBOT_ENVS_MD5SUMS="${benchbot_envs_md5sums[*]}" \ - --build-arg BENCHBOT_ENVS_URLS="${benchbot_envs_urls[*]}" \ - --build-arg BENCHBOT_ENVS_SRCS="${envs_urls[*]}" \ - --build-arg BENCHBOT_CONTROLLER_GIT="${GIT_CONTROLLER}"\ - --build-arg BENCHBOT_CONTROLLER_HASH="${benchbot_controller_hash}"\ + "BUILDING BENCHBOT BACKEND DOCKER IMAGE:" +docker build -t "$DOCKER_TAG_BACKEND" -f "$PATH_DOCKERFILE_BACKEND" \ + --build-arg SIMULATORS="$simulators" \ + --build-arg ISAAC_SDK_DIR="$(dirname "$sdk_file")" \ + --build-arg ISAAC_SDK_TGZ="$(basename "$sdk_file")" \ + --build-arg BENCHBOT_CONTROLLER_GIT="${GIT_CONTROLLER}" \ + --build-arg BENCHBOT_CONTROLLER_HASH="${benchbot_controller_hash}" \ + --build-arg BENCHBOT_MSGS_GIT="${GIT_MSGS}" \ + --build-arg BENCHBOT_MSGS_HASH="${benchbot_msgs_hash}" \ --build-arg BENCHBOT_SIMULATOR_GIT="${GIT_SIMULATOR}" \ --build-arg BENCHBOT_SIMULATOR_HASH="${benchbot_simulator_hash}" \ --build-arg BENCHBOT_SUPERVISOR_GIT="${GIT_SUPERVISOR}" \ - --build-arg BENCHBOT_SUPERVISOR_HASH="${benchbot_supervisor_hash}" $PATH_ROOT -build_ret=$? + --build-arg BENCHBOT_SUPERVISOR_HASH="${benchbot_supervisor_hash}" \ + --build-arg ADDONS_PATH="${PATH_ADDONS_INTERNAL}" $PATH_ROOT && \ + build_ret=0 || build_ret=1 if [ $build_ret -ne 0 ]; then printf "\n${colour_red}%s: %d\n\n${build_err}${colour_nc}\n" \ - "ERROR: Building BenchBot \"${backend_name%.*}\" returned a non-zero error code" \ + "ERROR: Building BenchBot backend returned a non-zero error code" \ "$build_ret" exit 1 fi -# Pull out ground truth files from the simulator -rm -rf "$PATH_GROUND_TRUTH" -for i in "${!envs_urls[@]}"; do - location=$(docker run --name ground_truth "$DOCKER_TAG_BACKEND" \ - /bin/bash -c 'p="$BENCHBOT_ENVS_PATH/'$i'/'$FILENAME_ENV_GROUND_TRUTH'"; \ - [ -d "$p" ] && echo "$p" || echo ""') - if [ ! -z "$location" ]; then - docker cp ground_truth:"$location" "$PATH_GROUND_TRUTH" - fi - docker rm -f ground_truth -done - # Build the BenchBot Submission Docker image printf "\n${colour_blue}%s${colour_nc}\n" \ "BUILDING BENCHBOT SUBMISSION DOCKER IMAGE:" docker build -t "$DOCKER_TAG_SUBMISSION" -f "$PATH_DOCKERFILE_SUBMISSION" \ --build-arg BENCHBOT_API_GIT="${GIT_API}" \ - --build-arg BENCHBOT_API_HASH="${benchbot_api_hash}" $PATH_ROOT -build_ret=$? 
+ --build-arg BENCHBOT_API_HASH="${benchbot_api_hash}" $PATH_ROOT && \ + build_ret=0 || build_ret=1 if [ $build_ret -ne 0 ]; then printf "\n${colour_red}%s: %d\n\n${build_err}${colour_nc}\n" \ "ERROR: Building BenchBot \"submission\" returned a non-zero error code" \ @@ -1240,5 +1221,11 @@ for c in "${checks_list_post[@]}"; do fi done +# PART 5: Installing requested BenchBot Add-ons +printf "\n" +header_block "PART 5: INSTALLING BENCHBOT ADD-ONS" $colour_blue + +install_addons "$addons" + # We are finally done... -echo -e "\nFinished!" +echo -e "Finished!" diff --git a/bin/benchbot_run b/bin/benchbot_run index 56d80c3..497913b 100755 --- a/bin/benchbot_run +++ b/bin/benchbot_run @@ -4,6 +4,8 @@ ################### Load Helpers & Global BenchBot Settings #################### ################################################################################ +set -euo pipefail +IFS=$'\n\t' abs_path=$(readlink -f $0) pushd $(dirname $abs_path) > /dev/null source .helpers @@ -34,47 +36,50 @@ USAGE: OPTION DETAILS: - -h,--help + -h, --help Show this help menu. - -e, --env + -e, --env, --environment Select an environment to launch in the simulator (this must be called with the --task option). Environments are identified via - \"ENVIRONMENT_NAME:VARIATION_NUMBER\" where ENVIRONMENT_NAME is the - name of simulated environment & VARIATION_NUMBER environment - variation to use. For example, the third variation of the office - environment would be: + \"ENVIRONMENT_NAME:VARIANT\" where ENVIRONMENT_NAME is the name of + environment & VARIANT is the environment variation to use. For + example, the variant 3 of the office environment would be: office:3 - Two variation numbers must be specified for scene change detection - using the format \"ENVIRONMENT_NAME:VARIATION_ONE:VARIATION_TWO\". - For example detecting the changes in the third variation of the - office with respect to the first variation would be: + Some tasks may require more than one environment variation (e.g. + scene change detection). Multiple variations are specified using + the format \"ENVIRONMENT_NAME:VARIANT_ONE:VARIANT_TWO\". For + example using the first, and then third variant of the office + environment would be specified via: office:1:3 - (Use --list-envs to see a list of available environments) + (use '--list-envs' to see a list of available environments) -f, --force-updateless BenchBot will exit if it detects updates to the software stack. Set this flag to continue using outdated software temporarily. Note that - limited support is available for outdated software stacks, and the - challenge will run on the latest software stack. You should only use - this flag when it is inconvenient to update immediately. + limited support is available for outdated software stacks, and all + novel work will focus on the latest software stack. You should only + use this flag when it is inconvenient to update immediately. - --list-envs + --list-envs, --list-environments Search for & list all installed environments. The listed - environment names are in the format needed for the --env option. + environment names are in the format needed for the '--env' option. + Use '--show-environment' to see more details about an environment. --list-robots List all supported robot targets. This list will adjust to include what is available in your current installation (i.e. there will be - no simulated robots listed if you installed with '--no-simulator') + no simulated robots listed if you installed with '--no-simulator'). 
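          For instance, a typical first run might chain these options
          together as below (the 'carter:sim' robot & 'office' environment
          come from the default 'benchbot-addons/ssu' add-ons, so treat the
          names as illustrative rather than guaranteed):

              benchbot_run --list-robots
              benchbot_run --robot carter:sim --env office:1 \
                  --task semantic_slam:passive:ground_truth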
+ Use '--show-robot' to see more details about a robot. --list-tasks Lists all supported task combinations. The listed tasks are printed - in the format needed for the --task option. + in the format needed for the '--task' option. Use '--show-task' to + see more details about a task. -r, --robot Configure the BenchBot supervisor for a specific robot. This @@ -86,26 +91,42 @@ OPTION DETAILS: target robot will be used by default, otherwise the 'real' target robot will be the default. - (Use --list-robots to see a list of available robots) + (use '--list-robots' to see a list of available robots) + + --show-env, --show-environment + Prints information about the provided environment name if + installed. The corresponding YAML's location will be displayed, + with a snippet of its contents. + + --show-robot + Prints information about the provided robot name if installed. The + corresponding YAML's location will be displayed, with a snippet + of its contents. + + --show-task + Prints information about the provided task name if installed. The + corresponding YAML's location will be displayed, with a snippet + of its contents. + -t, --task Configure BenchBot for a specific task style (this must be called - with the --env option). A task is specified through the format - \"TYPE:CONTROL_MODE:LOCALISATION_MODE\" where TYPE is the type of - task, CONTROL_MODE is the control options available on the robot, & - LOCALISATION_MODE is the accuracy of localisation feedback - received. For example, a robot with passive control & ground truth - localisation completing semantic SLAM would be: + with the '--env' option). Tasks are specified based on their name in + the YAML file. The naming convention generally follows the format + \"TYPE:OPTION_1:OPTION_2:...\". For example: semantic_slam:passive:ground_truth - (Use --list-tasks to see a list of supported task options) + is a semantic SLAM task with passive robot control and observations + using a ground truth robot pose. + + (use '--list-tasks' to see a list of supported task options) - -u,--update-check + -u, --update-check Check for available updates to the BenchBot software stack and exit immediately. - -v,--version + -v, --version Print version info for current installation. FURTHER DETAILS: @@ -114,114 +135,33 @@ FURTHER DETAILS: b.talbot@qut.edu.au " -_map_details_err=\ -"ERROR: Somehow there is a difference between the number of map paths (%d), map -poses (%d), & environment files (%d) found. This should never happen; please -contact the developers." - -_robot_err=\ -"ERROR: The BenchBot Robot Controller container has exited unexpectedly. This -should not happen unless something is installed incorrectly. Please see the -complete log below for a dump of the crash output:" - -SELECTED_ENV= -SELECTED_TASK= - -_task_list=("semantic_slam:passive:ground_truth" - "semantic_slam:active:ground_truth" - "semantic_slam:active:dead_reckoning" - "scd:passive:ground_truth" - "scd:active:ground_truth" - "scd:active:dead_reckoning" -) - -_env_data_full_cached= - -function _env_files() { - envs=($(_expand_envs $1)) - fs= - for e in "${envs[@]}"; do - fs+="$(_env_data $e | head -1 | tr -d '[:space:]') " - done - echo "$fs" | sed -e 's/[[:space:]]*$//' -} - -function _env_data() { - found= - while read -r line; do - if [ -z "$found" ] && [[ "$line" =~ ^/.*"${1/:/_}"\.yaml ]]; then - found=true - echo "$line" - elif [ ! -z "$found" ] && [[ "$line" =~ ^/.*\.yaml ]]; then - return 0 - elif [ ! 
-z "$found" ]; then - echo "$line" - fi - done <<< "$_env_data_full_cached" - return 1 -} - -function _env_data_get() { - if [ -z "$_env_data_full_cached" ]; then - # A little hacky, but docker runs are costly so this gets everything in 1 go - _env_data_full_cached=$(docker run --rm -t $DOCKER_TAG_BACKEND /bin/bash \ - -c 'find $BENCHBOT_ENVS_PATH/*/'$FILENAME_ENV_METADATA' -name "*.yaml"\ - | while read env; do echo "$(realpath $env)"; cat $env; echo ""; done') - fi -} - -function _env_list() { - echo "$_env_data_full_cached" | grep "environment_name:" | \ - sed 's/.*: "\([^_]*\)_\([^"]*\).*/\1:\2/' | sort -u -} +_list_environments_pre=\ +"Either simulated or real world environments can be selected. Please see the +'--list-robots' command for the available robot platforms. Only simulated robots +can be run in simulated environments, and only real robots in real environments +(as you would expect). -function _env_map_paths() { - envs=($(_expand_envs $1)) - ps= - for e in "${envs[@]}"; do - # Nested while... not quite ideal - ps+="$(_env_data $e | sed -n 's/.*map_path: \(.*\)/\1/p' | \ - tr -d '[:space:]') " - done - echo "$ps" | sed -e 's/[[:space:]]*$//' -} +The following environments are supported in your BenchBot installation: + " -function _env_path() { - envs=($(_expand_envs $1)) - echo "$_env_data_full_cached" | \ - grep -m 1 "^\/.*$(echo "${envs[${2:-0}]}" | tr ':' '_')" -} +_list_formats_pre=\ +"Formats are used by a task to declare the formats of results in a re-usable +manner. You should ensure that tasks you use point to installed results +formats. The following formats are supported in your BenchBot installation: + " -function _env_poses() { - envs=($(_expand_envs $1)) - ps= - for e in "${envs[@]}"; do - # Nested while... not quite ideal - ps+="$(_env_data $e | sed -n 's/.*start_pose_local: \(.*\)/\1/p' | \ - tr -d '[:space:]') " - done - echo "$ps" | sed -e 's/[[:space:]]*$//' -} +_list_robots_pre=\ +"The following robot targets are supported in your BenchBot installation: + " -function _env_type() { - envs=($(_expand_envs $1)) - ts= - for e in "${envs[@]}"; do - ts+="$(_env_data $e | sed -n 's/.*type: \(.*\)/\1/p' | \ - tr -d '[:space:]' | tr -d '"') " - done; - echo "$ts" | sed -e 's/[[:space:]]*$//' -} +_list_tasks_pre=\ +"The following tasks are supported in your BenchBot installation: + " -function _expand_envs() { - env_name=$(echo "$1" | sed 's/:.*//') - env_nums=($(echo "$1" | sed 's/[^:]*://; s/:/ /')) - es= - for e in "${env_nums[@]}"; do - es+="$env_name:$e " - done - echo "$es" | sed -e 's/[[:space:]]*$//' -} +_robot_err=\ +"ERROR: The BenchBot Robot Controller container has exited unexpectedly. This +should not happen under normal operating conditions. Please see the complete +log below for a dump of the crash output:" function exit_gracefully() { if [ -z "$simulator_required" ]; then @@ -233,232 +173,116 @@ function exit_gracefully() { exit ${1:-0} } -function opt_list_envs() { - # TODO check if installed (docker image exists); provide a message if not - # telling them how to fix the issue - # Print the list with details - echo "A specific environment can be selected by entering a string in the format -ENV_NAME:VARIATION_NUMBER. Scene change detection tasks require an environment -string of the form ENV_NAME:VARIATION_NUMBER_1:VARIATION_NUMBER_2, as a second -variation is required in completing the task. - -Either simulated or real world environments can be selected. Please see the -'--list-robots' command for the available simulated and real robot platforms. 
-Only simulated robots can be run in simulated environments, and only real -robots in real environments (as you would expect). - -Support for the following environments is provided in your installed BenchBot -Docker image: - - Simulated: - " - if has_simulator; then - for e in $(_env_list); do - if [ -z "$(_env_type $e | grep "real")" ]; then - echo " $e" - fi - done - else - echo " NONE (simulator not installed)" +function validate_environment_count() { + # $1 = number of selected environments, $2 task + scene_count="$(run_manager_cmd 'print(\ + get_value_by_name("tasks", "'$2'", "scene_count"))')" + if [[ "$scene_count" == *"None"* ]]; then scene_count=1; fi + if [ $scene_count -ne $1 ]; then + printf "${colour_red}%s\n %s${colour_nc}\n" \ + "ERROR: Selected $1 environment/s for a task which requires $scene_count" \ + "environment/s ('$task')" fi - echo " - - Real word: - " - for e in $(_env_list); do - if [ ! -z "$(_env_type $e | grep "real")" ]; then - echo " $e" - fi - done - echo "" -} - -function opt_list_robots() { - # TODO this will need to be redone in a more flexible way to support a more - # dynamic style of "robot" target creation - echo "The following robot targets are supported in your BenchBot installation: - " - for r in $(_robot_list); do - echo " $r" - done; - echo -e "\nPlease ensure you run a robot in a valid environment (i.e. running a real -robot platform in a simulated environment is an invalid configuration)." } -function opt_list_tasks() { - # TODO this is hard coded for now... maybe should be done differently... - echo "The following tasks are supported by BenchBot: - " - for t in ${_task_list[@]}; do - echo " $t" - done - echo " -The string is of the format TYPE:CONTROL_MODE:LOCALISATION_MODE. - -TYPE DETAILS: - semantic_slam: - Use a semantic SLAM algorithm to build an object-based semantic - map. An object-based semantic map specifies a set of objects, where - each object is described by: - - a suggested label (or distribution over possible labels) - - a global axis-aligned 3D bounding box (cuboid) describing the - location of the object in 3D space. - - scd: - Apply a semantic SLAM algorithm to perform scene change detection - (SCD) between two scenes of the same environment. The two scenes - will have a number of objects either added or removed with respect - to each other. - - Successful SCD produces an object-based semantic map as described - above, with one key addition to each object description: - - a suggested state change (either added, removed, or unchanged) or - probability distribution over possible state changes. - -CONTROL_MODE DETAILS: - passive: - Control of the robot is passive; 'move_next' is the only actuation - modality provided to the user. The 'move_next' action automatically - moves the robot from one target pose to the next. - - All further action of the robot will be disabled once the robot has - traversed all possiible poses. - - active: - Control of the robot is active'; the user can command the robot to - either rotate on the spot with 'move_angle', or move forwards / - backwards with 'move_distance'. Positive angles are anti-clockwise, - & positive distances denote move forwards. Negative values perform - the inverse for both actuation modalities. - - All further actuation of the robot will be disabled if a collision - is detected with the environment. The task must be started again if - this occurs. 
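(The hard-coded task documentation above is removed by this patch; the same
information now lives in the task YAML provided by add-ons. A rough
equivalent query, reusing the 'run_manager_cmd' & 'get_value_by_name' helpers
that appear elsewhere in this diff, where 'actions' is the same field the
submission script greps for 'move_next':)

    run_manager_cmd 'print(get_value_by_name("tasks", \
        "semantic_slam:active:ground_truth", "actions"))'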
- -LOCALISATION_MODE DETAILS: - ground_truth: - All localisation data is perfect, representing the exact position & - orientation of the robot in the environment. Poses in robot - observations can be considered correct without a need for - localisation. - - dead_reckoning: - Error in the robot's odometry measurements are not corrected. Poses - in robot observations will accumulate this error over time, & must - be corrected with a localisation process to enable the construction - of accurate maps. - " -} +function validate_type() { + # $1 = type + if [ "$1" == "real" ]; then + printf "\n${colour_yellow}%s\n%s${colour_nc}\n" \ + "WARNING: Requested running with '$1'. Assuming you have an available" \ + "real robot & environment." + return + fi -function opt_select_env() { - # Only update the SELECTED_ENV variable if it is in the list of envs - for e in $(_env_list); do - e=$(echo "$e" | tr -d '[:space:]') - # TODO the number range for 2nd variation should probably not be hardcoded... - if [[ "$1" =~ "$e"(:[1-5])?$ ]]; then - SELECTED_ENV="$1" - return 0 - fi - done - echo "Environment '$1' is not a supported environment. Please check --list-envs." - return 1 + simulators="$(simulator_type)" + if [ "$simulators" != "$1" ] && [ "$simulators" != *",$1" ] && \ + [ "$simulators" != *"$1,"* ]; then + printf "\n${colour_red}%s\n %s${colour_nc}\n" \ + "ERROR: Requested running with '$1', but that simulator isn't installed." \ + "Installed simulator/s are: '$simulators'" + fi } -function opt_select_robot() { - # Only update the SELECTED_ROBOT variable if it is in the list of robots - selection="${1:-carter:sim}" - for r in $(_robot_list); do - r=$(echo "$r" | tr -d '[:space:]') - if [ "$r" == "$selection" ]; then - SELECTED_ROBOT="$r" - return 0 - fi +function validate_types() { + # $1 = robot name; $2 = environment string, $3... environments + robot="$1" + env="$2" + shift 2 + envs=($@) + types=() + types+=("$(run_manager_cmd 'print(\ + get_value_by_name("robots", "'$robot'", "type"))')") + for e in "${envs[@]}"; do + types+=($(run_manager_cmd 'print(\ + get_value_by_name("environments", "'$e'", "type"))')) done - echo "Robot '$1' is not a supported robot platform. Please check --list-robots." - return 1 -} -function opt_select_task() { - # Only update the SELECTED_TASK variable if it is in the list of tasks - for t in ${_task_list[@]}; do - if [ "$t" == "$1" ]; then - SELECTED_TASK="$t" - return 0 - fi + err= + for i in "${!types[@]}"; do + if [ "${types[$i]}" != "${types[0]}" ]; then err=1; fi done - echo "Task '$1' is not a supported task specification. Please check --list-tasks." - return 1 -} -_robot_data_full_cached= - -function _robot_data() { - found= - while read -r line; do - if [ -z "$found" ] && [[ "$line" =~ ^/.*"$1"\.yaml ]]; then - found=true - echo "$line" - elif [ ! -z "$found" ] && [[ "$line" =~ ^/.*\.yaml ]]; then - return 0 - elif [ ! -z "$found" ]; then - echo "$line" - fi - done <<< "$_robot_data_full_cached" - return 1 -} - -function _robot_data_get() { - if [ -z "$_robot_data_full_cached" ]; then - # A little hacky, but docker runs are costly so this gets everything in 1 go - _robot_data_full_cached=$(docker run --rm -t $DOCKER_TAG_BACKEND /bin/bash \ - -c 'find $BENCHBOT_SUPERVISOR_PATH/benchbot_supervisor/robots -name "*.yaml" |\ - while read robot; do echo "$(realpath $robot)"; cat $robot; echo ""; \ - done') + if [ ! -z "$err" ]; then + printf "%s %s\n%s\n\n" "Robot & environment types aren't consistent." 
\ + "Please ensure each of the following" "have the same type:" + for i in "${!types[@]}"; do + if [ $i -eq 0 ]; then + printf "\tRobot '$robot' has type '${types[$i]}'\n" + else + printf "\tEnvironment '${envs[$((i-1))]}' has type '${types[$i]}'\n" + fi + done + printf "\n${colour_red}%s${colour_nc}\n" \ + "ERROR: Inconsistent types selected (robot = '$1', environment = '$3')" fi } -function _robot_list() { - echo "$_robot_data_full_cached" | sed -n 's/\(.*\)\.yaml/\1/p' | \ - awk -F'/' '{print $NF}' | sed 's/_/:/p'| sort -u -} - -function _robot_type() { - # TODO in the future this needs to be a lot less dumb - echo "$1" | sed 's/.*://' -} - ################################################################################ #################### Parse & handle command line arguments ##################### ################################################################################ # Safely parse options input -parse_out=$(getopt -o he:t:r:fuv \ - --long help,env:,force-updateless,list-envs,list-robots,list-tasks,robot:,task:,updates-check,version \ - -n "$(basename "$abs_path")" -- "$@") +_args="help,env:,environment:,force-updateless,list-envs,list-environments,\ +list-formats,list-robots,list-tasks,robot:,show-env:,show-environment:,\ +show-format:,show-robot:,show-task:,task:,updates-check,version" +parse_out=$(getopt -o he:t:r:fuv --long $_args -n "$(basename "$abs_path")" \ + -- "$@") if [ $? != 0 ]; then exit 1; fi eval set -- "$parse_out" updates_exit= updates_skip= -robot_selection= +environment= +robot= +task= while true; do case "$1" in -h|--help) echo "$usage_text" ; exit 0 ;; - -e|--env) - _env_data_get ; opt_select_env "$2"; shift 2 ;; + -e|--env|--environment) + environment="$2"; shift 2 ;; -f|--force-updateless) updates_skip=1 ; shift ;; - --list-envs) - _env_data_get ; opt_list_envs ; exit 0 ;; + --list-envs|--list-environments) + list_environments "$_list_environments_pre" "an"; exit $? ;; + --list-formats) + list_content "formats" "$_list_formats_pre"; exit $? ;; --list-robots) - _robot_data_get; opt_list_robots ; exit 0 ;; + list_content "robots" "$_list_robots_pre"; exit $? ;; --list-tasks) - opt_list_tasks ; exit 0 ;; + list_content "tasks" "$_list_tasks_pre"; exit $? ;; -r|--robot) - robot_selection="$2"; shift 2 ;; + robot="$2"; shift 2 ;; + --show-env|--show-environment) + show_environment "$2"; exit $? ;; + --show-format) + show_content "formats" "$2"; exit $? ;; + --show-robot) + show_content "robots" "$2"; exit $? ;; + --show-task) + show_content "tasks" "$2"; exit $? ;; -t|--task) - opt_select_task "$2"; shift 2 ;; + task="$2"; shift 2 ;; -u|--updates-check) updates_exit=1 ; shift ;; -v|--version) @@ -469,52 +293,38 @@ while true; do echo "$(basename "$abs_path"): option '$1' is unknown"; shift ; exit 1 ;; esac done -_robot_data_get; opt_select_robot "$robot_selection" -# Bail if we are running & we didn't get a valid robot, env, & task -if [ -z "$updates_exit" ]; then - if [ -z "$SELECTED_ROBOT" ]; then - printf "${colour_red}%s${colour_nc}\n" \ - "ERROR: No valid robot selected (selected_robot = "$SELECTED_ROBOT")" - exit 1 - fi - if [ -z "$SELECTED_ENV" ]; then - printf "${colour_red}%s${colour_nc}\n" \ - "ERROR: No valid environment selected (selected_env = "$SELECTED_ENV")" - exit 1 - fi - if [ -z "$SELECTED_TASK" ]; then - printf "${colour_red}%s${colour_nc}\n" \ - "ERROR: No valid task selected (selected_task = "$SELECTED_TASK")" - exit 1 - fi - if ! 
has_simulator && [[ "$(_robot_type "$SELECTED_ROBOT")" == *"sim"* ]]; then - printf "${colour_red}%s\n%s${colour_nc}\n" \ - "ERROR: Can't run a simulated robot (selected_robot = "$SELECTED_ROBOT") in an" \ - "installation that doesn't have a simulator installed." - exit 1 - fi - if [[ "$(_robot_type "$SELECTED_ROBOT")" \ - != *"$(_env_type "$SELECTED_ENV")"* ]]; then - printf "${colour_red}%s\n%s${colour_nc}\n" \ - "ERROR: Type mismatch between selected robot & environment" \ - "(selected_robot = "$SELECTED_ROBOT", selected_env= "$SELECTED_ENV")" - exit 1 - fi -fi +# Extract a list of environments from the provided environment string +_pre="$(echo "$environment" | sed 's/:.*//')" +environments=($(echo "$environment" | sed 's/[^:]*\(:.*\)/\1/; s/:/\n'$_pre':/g; s/^ *//')) +if [ ${#environments[@]} -eq 0 ]; then environments+=(""); fi +environments_string="$(printf '%s,' "${environments[@]}")" +environments_string="${environments_string::-1}" -# Ensure task & environment combo is valid -if [[ "$SELECTED_TASK" =~ .*"semantic_slam".* ]] && \ - [[ "$SELECTED_ENV" =~ ^[^:]+:[1-5]:[1-5]$ ]]; then - printf "${colour_red}%s${colour_nc}\n" \ - "ERROR: Can't run Semantic SLAM with two environment variations!" - exit 1 -elif [[ "$SELECTED_TASK" =~ .*"scd".* ]] && \ - [[ "$SELECTED_ENV" =~ ^[^:]+:[1-5]$ ]]; then - printf "${colour_red}%s%s${colour_nc}\n" \ - "ERROR: Can't run Scene Change Detection without "\ - "two environment variations!" - exit 1 +# Bail if any of the requested configurations are invalid +if [ -z "$updates_exit" ]; then + err="$(validate_content "robots" "$robot")" + if [ ! -z "$err" ]; then echo "$err"; exit 1; fi + type="$(run_manager_cmd 'print(\ + get_value_by_name("robots", "'$robot'", "type"))')" + simulator_required=1 + err="$(validate_content "tasks" "$task")" + if [ ! -z "$err" ]; then echo "$err"; exit 1; fi + results_format="$(run_manager_cmd 'print(\ + get_value_by_name("tasks", "'$task'", "results_format"))')" + err="$(validate_content "formats" "$results_format")" + if [ ! -z "$err" ]; then echo "$err"; exit 1; fi + if [[ "$type" != "sim_"* ]]; then simulator_required=0; fi + for e in "${environments[@]}"; do + err="$(validate_environment "$e" "$environment")" + if [ ! -z "$err" ]; then echo "$err"; exit 1; fi + done + err="$(validate_types "$robot" "$environment" "${environments[@]}")" + if [ ! -z "$err" ]; then echo "$err"; exit 1; fi + err="$(validate_type "$type")"; echo "$err" + if [[ "$err" == *"ERROR:"* ]]; then exit 1; fi + err="$(validate_environment_count ${#environments[@]} "$task")" + if [ ! -z "$err" ]; then echo "$err"; exit 1; fi fi ################################################################################ @@ -526,57 +336,41 @@ header_block "CHECKING FOR BENCHBOT SOFTWARE STACK UPDATES" ${colour_blue} if [ ! -z "$updates_skip" ]; then echo -e "${colour_yellow}Skipping ...${colour_nc}" -elif ! update_check "$(git branch -a --contains HEAD | grep -v HEAD | grep '.*remotes/.*' | head -n 1 | sed 's/.*\/\(.*\)/\1/')"; then +elif ! update_check "$(git branch -a --contains HEAD | grep -v HEAD | \ + grep '.*remotes/.*' | head -n 1 | sed 's/.*\/\(.*\)/\1/')"; then exit 1; fi if [ ! -z "$updates_exit" ]; then exit 0; fi # Run the BenchBot software stack (kill whenever they exit) +kill_benchbot trap exit_gracefully SIGINT SIGQUIT SIGKILL SIGTERM header_block "STARTING THE BENCHBOT SOFTWARE STACK" ${colour_blue} -# Cache environment data (digging into a docker to view files is expensive... 
-# lets only do that to our users once) -_env_data_get - -# Pull out all useful data from the selected settings -selected_poses=($(_env_poses "$SELECTED_ENV")) -selected_map_paths=($(_env_map_paths "$SELECTED_ENV")) -selected_env_files=($(_env_files "$SELECTED_ENV")) -if [ ${#selected_poses[@]} -ne ${#selected_map_paths[@]} ] || \ - [ ${#selected_map_paths[@]} -ne ${#selected_env_files[@]} ]; then - printf "${colour_red}$_map_details_err${colour_nc}\n" \ - "${#selected_map_paths[@]}" "${#selected_poses[@]}" \ - "${#selected_env_files[@]}" - exit 1 -fi - -selected_actions=$(echo "$SELECTED_TASK" | sed 's/.*:\(.*\):.*/\1/') -selected_observations=$(echo "$SELECTED_TASK" | sed 's/.*:.*:\(.*\)/\1/') - -if has_simulator && [[ "$(_robot_type "$SELECTED_ROBOT")" == *"sim"* ]]; then - simulator_required=1 -fi - # Print some configuration information -echo -n -e "${colour_blue}Running the BenchBot system with the following settings:${colour_nc} - - Selected environment: $SELECTED_ENV - Selected task: $SELECTED_TASK - Selected robot: $SELECTED_ROBOT - Actions set: $selected_actions - Observations set: $selected_observations - Maps: " -for i in "${!selected_map_paths[@]}"; do +printf "${colour_blue}%s${colour_nc} + + Selected task: $task + Task results format: $results_format + Selected robot: $robot + Selected environment: $environment + Scene/s: " \ + "Running the BenchBot system with the following settings:" +for i in "${!environments[@]}"; do if [ $i -ne 0 ]; then printf "%*s" 26 fi - printf "%s\n" "${selected_map_paths[$i]}" + printf "%s, starting @ pose %s\n" "${environments[$i]}" \ + "$(run_manager_cmd 'print(get_value_by_name("environments", \ + "'${environments[$i]}'", "start_pose"))')" printf "%*s" 26 - printf "(starting @ %s)\n" "${selected_poses[$i]}" + printf "(map_path = '%s')\n" \ + "$(run_manager_cmd 'print(get_value_by_name("environments", \ + "'${environments[$i]}'", "map_path"))')" done printf " %-22s" "Simulator required:" -printf "%s\n" $([ -z "$simulator_required" ] && echo "No" || echo "Yes") +printf "%s (%s)\n" $([ -z "$simulator_required" ] && echo "No" || echo "Yes") \ + "$type" echo "" # Create the network for BenchBot software stack @@ -595,45 +389,47 @@ fi # Declare reusable parts to ensure our containers run with consistent settings xhost +local:root > /dev/null ros_master_host="benchbot_roscore" -docker_run="docker run -t --gpus all -v /tmp/.X11-unix:/tmp/.X11-unix \ - -e DISPLAY \ - -e ROS_MASTER_URI=http://$ros_master_host:11311 -e ROS_HOSTNAME=\$name \ - --network $DOCKER_NETWORK --name=\$name --hostname=\$name" +docker_run="docker run -t --gpus all \ + --env DISPLAY \ + --env ROS_MASTER_URI=http://$ros_master_host:11311 \ + --env ROS_HOSTNAME=\$name \ + --network $DOCKER_NETWORK \ + --name=\$name \ + --hostname=\$name \ + --volume /tmp/.X11-unix:/tmp/.X11-unix \ + --volume $PATH_ADDONS:$PATH_ADDONS_INTERNAL:ro" cmd_prefix='source $ROS_WS_PATH/devel/setup.bash && ' # Start containers for ROS, isaac_simulator, benchbot_simulator, & benchbot_supervisor echo -e "\n${colour_blue}Starting container for BenchBot ROS:${colour_nc}" -name="$ros_master_host" -${docker_run//'$name'/$name} --ip "$URL_ROS" -d $DOCKER_TAG_BACKEND /bin/bash -c \ +cmd="${docker_run//'$name'/$ros_master_host}" +${cmd// /$'\t'} --ip "$URL_ROS" -d $DOCKER_TAG_BACKEND /bin/bash -c \ "$cmd_prefix"'roscore' if [ ! 
-z "$simulator_required" ]; then printf "\n${colour_blue}%s${colour_nc}\n" \ "Starting container for BenchBot Robot Controller:" - name="benchbot_robot" - ${docker_run//'$name'/$name} --ip "$URL_ROBOT" -d $DOCKER_TAG_BACKEND /bin/bash -c \ + cmd="${docker_run//'$name'/benchbot_robot}" + ${cmd// /$'\t'} --ip "$URL_ROBOT" -d $DOCKER_TAG_BACKEND /bin/bash -c \ "$cmd_prefix"'rosrun benchbot_robot_controller benchbot_robot_controller' fi echo -e "\n${colour_blue}Starting container for BenchBot Supervisor:${colour_nc}" -name="benchbot_supervisor" -${docker_run//'$name'/$name} --ip "$URL_SUPERVISOR" -d $DOCKER_TAG_BACKEND /bin/bash -c \ - "$cmd_prefix"'python -m benchbot_supervisor \ - --task-name "'"$SELECTED_TASK"'" --robot-file "'"${SELECTED_ROBOT/:/_}"'.yaml" \ - --observations-file "'"$selected_observations"'.yaml" \ - --actions-file "'"$selected_actions"'_control.yaml"\ - --environment-files '"$( IFS=$':'; echo "${selected_env_files[*]}")" +cmd="${docker_run//'$name'/benchbot_supervisor}" +${cmd// /$'\t'} --ip "$URL_SUPERVISOR" -d $DOCKER_TAG_BACKEND /bin/bash -c \ + "$cmd_prefix"'python3 -m benchbot_supervisor --task-name "'$task'" \ + --robot-name "'$robot'" --environment-names "'$environments_string'" \ + --addons-path "'$PATH_ADDONS_INTERNAL'"' echo -e "\n${colour_blue}Starting container for BenchBot Debugging:${colour_nc}" -name="benchbot_debug" -${docker_run//'$name'/$name} --ip "$URL_DEBUG" -it -d $DOCKER_TAG_BACKEND /bin/bash +cmd="${docker_run//'$name'/benchbot_debug}" +${cmd// /$'\t'} --ip "$URL_DEBUG" -it -d $DOCKER_TAG_BACKEND /bin/bash xhost -local:root > /dev/null # Print the output of the Supervisor, watching for failures header_block "BENCHBOT IS RUNNING (Ctrl^C to exit) ..." ${colour_green} -docker logs benchbot_supervisor -docker attach benchbot_supervisor --no-stdin & +docker logs --follow benchbot_supervisor & while [ ! -z $(docker ps -q -f 'name=benchbot_supervisor') ] && \ ([ -z "$simulator_required" ] || \ diff --git a/bin/benchbot_submit b/bin/benchbot_submit index b622194..c4ac875 100755 --- a/bin/benchbot_submit +++ b/bin/benchbot_submit @@ -4,6 +4,8 @@ ################### Load Helpers & Global BenchBot Settings #################### ################################################################################ +set -euo pipefail +IFS=$'\n\t' abs_path=$(readlink -f $0) pushd $(dirname $abs_path) > /dev/null source .helpers @@ -13,7 +15,7 @@ popd > /dev/null ########################### Script Specific Settings ########################### ################################################################################ -SUBMISSION_CONTAINER_NAME="submission" +SUBMISSION_CONTAINER_NAME="benchbot_submission" ################################################################################ ######################## Helper functions for commands ######################### @@ -76,13 +78,25 @@ OPTION DETAILS: $(basename "$0") -c . + -E, --example + Name of an installed example to run. All examples support both + containerised and native operation. + + (use '--list-examples' to see a list of installed example) + -e, --evaluate-results - Evaluate the results produced by the provided submission after it - has finished running. This will assume that your submission saves - results to the location referenced by 'benchbot_api.RESULT_LOCATION' - (currently '/tmp/benchbot_result'). Evaluation will not work as + Evaluation method to use for evaluation on the provided submission + after it has finished running. No evaluation will be run if this + flag isn't provided. 
This assumes your submission saves results to + the location referenced by 'benchbot_api.RESULT_LOCATION' + (currently '/tmp/benchbot_result'). Evaluation will not work as expected if the submission saves results in any other location. + --list-examples + List all available examples. The listed examples are printed in the + format needed for the '--example' option. Use '--show-example' to + see more details about an example. + -n, --native Runs your solution directly on your system without applying any containerisation (useful when you are developing & testing your @@ -109,49 +123,147 @@ OPTION DETAILS: $(basename "$0") -s . \$HOME/Desktop + --show-example + Prints information about the provided example if installed. The + corresponding YAML's location will be displayed, with a snippet of + its contents. + -v,--version Print version info for current installation. FURTHER DETAILS: - See the 'benchbot_examples' repository for example solutions to test with - the submission system & simulator to get started. - Please contact the authors of BenchBot for support or to report bugs: b.talbot@qut.edu.au " +mode_duplicate_err="ERROR: Multiple submission modes were selected. Please ensure only +one of -n|-c|-s is provided." + mode_selection_err="ERROR: No valid submission mode was selected (-n|-c|-s). Please see 'benchbot_submit --help' for further details." -SELECTED_MODE= -SELECTED_OPTIONS= +_list_examples_pre=\ +"The following BenchBot examples are available in your installation: + " -function opt_select_mode() { - if [ ! -z $SELECTED_MODE ]; then - return 0 +function expand_mode() { + mode="${1//-}" + if [[ "$mode" == n* ]]; then + echo "native"; + elif [[ "$mode" == c* ]]; then + echo "containerised"; + elif [[ "$mode" == s* ]]; then + echo "submission"; fi - case "$1" in - -n|--native) - SELECTED_MODE="native" ;; - -c|--containerised) - SELECTED_MODE="containerised" ;; - -s|--submission) - SELECTED_MODE="submission" ;; - *) - echo "$(basename "$0"): '$1' mode is unsupported" ; exit 1 ;; - esac - shift - SELECTED_OPTIONS=( $(echo "$@" | sed 's/ --//') ) } -function remove_submission_container() { +active_pid= +function exit_gracefully() { + # $1 mode, $2 exit code + echo "" + + # Pass the signal to the currently running process + if [ ! -z "$active_pid" ]; then + kill -TERM $active_pid &> /dev/null + wait $active_pid || true + active_pid= + fi + # Cleanup containers if we ran in containerised mode - if [ "$SELECTED_MODE" == "containerised" ]; then + if [[ "$1" == c* ]]; then printf "\n" header_block "Cleaning up user containers" ${colour_blue} docker system prune -f # TODO this is probably too brutal fi + + exit ${2:-0} +} + +function submission_command() { + # $1 example name, $2 mode_args + if [[ -z "$2" && ! -z "$1" ]]; then + echo "pushd $(dirname $(run_manager_cmd 'print(get_match(\ + "examples", [("name", "'$1'")]))')); $(run_manager_cmd 'print(\ + get_value_by_name("examples", "'$1'", "native_command"))'); popd" + else + echo "$2" + fi +} + +function submission_directory() { + # $1 example name, $2 mode_args + if [[ -z "$2" && ! -z "$1" ]]; then + pushd "$(dirname $(run_manager_cmd 'print(get_match(\ + "examples", [("name", "'$1'")]))'))" > /dev/null + echo "$(realpath $(run_manager_cmd 'print(get_value_by_name(\ + "examples", "'$1'", "container_directory"))'))" + popd > /dev/null + else + echo "$2" + fi +} + +function validate_example() { + # $1 example name + text="examples" + singular=${text::-1} + if [ ! 
-z "$1" ] && \ + [ "$(run_manager_cmd 'print(exists("'$text'", [("name", "'$1'")]))')" \ + != "True" ]; then + printf "%s %s\n" "${singular^} '$1' is not a supported ${singular}." \ + "Please check '--list-$text'." + printf "\n${colour_red}%s${colour_nc}" \ + "ERROR: Invalid ${singular} selected (${singular} = '$1')" + fi +} + +function validate_mode() { + # $1 example name, $2 duplicate mode flag, $3 mode (expanded), $4 mode_args + err= + cmd="$(submission_command "$1" "$4")" + dir="$(submission_directory "$1" "$4")" + if [ -z "$3" ]; then + err="$(printf "%s %s\n" "No mode was selected. Please select a submission" \ + "mode from 'native', 'containerised', or 'submission'")" + elif [ ! -z "$2"]; then + err="$(printf "%s %s\n" "Selected more than 1 mode, please only select" \ + "one of 'native', 'containerised', or 'submission'.")" + elif [ -z "$1" ] && [ -z "$4" ]; then + err="$(printf "%s %s\n" "Mode '$3' requires arguments, but none were" \ + "provided. Please see '--help' for details.")" + elif [[ ("$3" == c* || "$3" == s*) && ! -d "$dir" ]]; then + err="$(printf "%s %s\n\t%s\n" "Mode '$3' requires a directory as an" \ + "argument. The provided directory does not exist:" "$dir")" + elif [[ "$3" == c* && ! -f "$dir/Dockerfile" ]]; then + err="$(printf "%s %s\n\t%s\n" "Mode '$3' requires a Dockerfile to run." \ + "The provided Dockerfile does not exist:" "$dir/Dockerfile")" + fi + + if [ ! -z "$err" ]; then + printf "$err\n" + printf "\n${colour_red}%s${colour_nc}" \ + "ERROR: Mode selection was invalid. See errors above." + fi +} + +function validate_results() { + # $1 mode (expanded), $2 evaluate method, $3 results_location + if [[ "$1" == s* && ( ! -z "$2" || ! -z "$3" ) ]]; then + printf "%s %s\n" "Cannot create results or perform evaluation in '$1'" \ + "mode. Please run again in a different mode." + printf "\n${colour_red}%s${colour_nc}" \ + "ERROR: Requested results evaluation from 'submission' mode." + fi +} + +function warning_mode() { + # $1 example name, $2 mode_args + if [[ ! -z "$1" && ! -z "$2" ]]; then + printf "${colour_yellow}%s\n%s\n${colour_nc}" \ + "WARNING: You selected an example & provided arguments for the mode" \ + "(usually you only want one or the other)" + fi } ################################################################################ @@ -159,120 +271,105 @@ function remove_submission_container() { ################################################################################ # Safely parse options input -parse_out=$(getopt -o ehc:n:r:s:v --long \ - evaluate-results,help,containerised:,native:,results-location:,submission:,version \ - -n "$(basename "$0")" -- "$@") +_args="evaluate-results:,example:,help,containerised,list-examples,native,\ +results-location:,show-example:,submission,version" +parse_out=$(getopt -o e:E:hcnr:sv --long $_args -n "$(basename "$0")" -- "$@") if [ $? != 0 ]; then exit 1; fi eval set -- "$parse_out" +evaluate_method= +example= +mode= +mode_args= +mode_dup= results_location= -evaluate= while true; do case "$1" in -e|--evaluate-results) - evaluate=true ; shift ;; + evaluate_method="$2" ; shift 2 ;; + -E|--example) + example="$2"; shift 2 ;; -h|--help) echo "$usage_text" ; shift ; exit 0 ;; + --list-examples) + list_content "examples" "$_list_examples_pre" "an"; exit $? ;; -n|--native|-c|--containerised|-s|--submission) - opt_select_mode "$@"; break ;; + if [ ! 
-z "$mode" ]; then mode_dup=1; fi + mode="$1"; shift ;; -r|--results-location) results_location="$2"; shift 2 ;; + --show-example) + show_content "examples" "$2"; exit $? ;; -v|--version) print_version_info; exit ;; --) - shift ; break ;; + mode_args=$(echo "$@" | sed 's/-- *//'); break ;; *) echo "$(basename "$0"): option '$1' is unknown"; shift ; exit 1 ;; esac done -if [ -z $SELECTED_MODE ]; then - echo -e "${colour_nc}$mode_selection_err${colour_nc}" - exit 1 -fi +mode="$(expand_mode "$mode")" -# Bail if we have received mode options we can't do anything with -# TODO handle passing evaluate and / or results when running in submission mode -case "$SELECTED_MODE" in - containerised|submission) - if [ ! -d "${SELECTED_OPTIONS[0]}" ]; then - printf "${colour_red}%s%s${colour_nc}\n" \ - "$(basename "$0"): directory '${SELECTED_OPTIONS[0]}' provided with "\ - "mode\n'$SELECTED_MODE' does not exist. Exiting..." - exit 1 - fi ;;& - submission) - if [ -z "$evaluate" ] || [ -z "$results_location" ]; then - printf "${colour_red}%s%s%s${colour_nc}\n" \ - "$(basename "$0"): cannot create results or perform evaluation from " \ - "\nsubmission mode as no code is run. Please run again in a different " \ - "\nmode." - exit 1 - fi - ;; -esac +# Bail if any of the requested configurations are invalid +err="$(validate_example "$example")" +if [ ! -z "$err" ]; then echo "$err"; exit 1; fi +err="$(validate_mode "$example" "$mode_dup" "$mode" "$mode_args")" +if [ ! -z "$err" ]; then echo "$err"; exit 1; fi +err="$(validate_results "$mode" "$evaluate_method" "$results_location")" +if [ ! -z "$err" ]; then echo "$err"; exit 1; fi +warning_mode "$example" "$mode_args" -# We are going to submit; pull all useful data from selected settings out -# before beginning -selected_command= -selected_code_dir= -selected_out_location= -case "$SELECTED_MODE" in +################################################################################ +################## Submit your BenchBot solution as requested ################## +################################################################################ + +# Before we start a submission, figure out all of our derived configuration +config_cmd= +config_dir= +config_out= +case "$mode" in native) - selected_command="${SELECTED_OPTIONS[@]}" ;; + config_cmd="$(submission_command "$example" "$mode_args")" ;; containerised|submission) - selected_code_dir="${SELECTED_OPTIONS[0]}" ;;& + config_dir="$(submission_directory "$example" "$mode_args")" ;;& submission) - if [ ${#SELECTED_OPTIONS[@]} -gt 1 ]; then - selected_out_location="${SELECTED_OPTIONS[1]}" - fi - abs_path=$(realpath "$selected_code_dir") - if [ -z "$selected_out_location" ]; then - selected_out_location="$abs_path" - fi - if [ -d "$selected_out_location" ]; then - selected_out_location="$selected_out_location/$(basename "$abs_path")" - fi - if [[ ! "$selected_out_location" =~ "." 
]]; then - selected_out_location+=".tgz" - fi ;; + config_out="submission.tgz" ;; esac -################################################################################ -################## Submit your BenchBot solution as requested ################## -################################################################################ - -trap remove_submission_container EXIT - -# Print some configuration information +# Now print relevant configuration information echo "Submitting to the BenchBot system with the following settings: - Submission mode: $SELECTED_MODE" -if [ -n "$selected_command" ]; then + Submission mode: $mode" +echo \ +" Perform evaluation: "\ +"$([ -z "$evaluate_method" ] && echo "No" || echo "Yes ($evaluate_method)")" +if [ -n "$results_location" ]; then echo \ -" Command to execute: $selected_command" +" Results save location: $results_location" fi -if [ -n "$selected_code_dir" ]; then +echo "" +if [ -n "$config_cmd" ]; then echo \ -" Dockerfile to build: $selected_code_dir/Dockerfile" +" Command to execute: $config_cmd" fi -if [ -n "$selected_out_location" ]; then +if [ -n "$config_dir" ]; then echo \ -" Bundling output filename: $selected_out_location" +" Dockerfile to build: $config_dir/Dockerfile" fi -echo \ -" Perform evaluation: "\ -"$([ -z "$evaluate" ] && echo "No" || echo "Yes")" -if [ -n "$results_location" ]; then +if [ -n "$config_out" ]; then echo \ -" Results save location: $results_location" +" Bundling output filename: $config_out" fi echo "" # Actually perform the submission -header_block "Running submission in '$SELECTED_MODE' mode" ${colour_green} +header_block "Running submission in '$mode' mode" ${colour_green} + +trap "exit_gracefully $mode" SIGINT SIGQUIT SIGKILL SIGTERM # Clear out any previous results in default location -if [ ! -z "$results_location" ] || [ ! -z "$evaluate" ]; then +results_src= +if [ ! -z "$results_location" ] || [ ! -z "$evaluate_method" ]; then results_src=$(python3 -c \ 'from benchbot_api.benchbot import RESULT_LOCATION; print(RESULT_LOCATION)') rm -rf "$results_src" @@ -280,68 +377,73 @@ if [ ! -z "$results_location" ] || [ ! -z "$evaluate" ]; then fi # Handle the submission -if [ "$SELECTED_MODE" == "native" ]; then +if [ "$mode" == "native" ]; then # This is native submission mode echo -e \ - "Running submission natively via command:\n\t'$selected_command' ...\n" - eval "$selected_command" - result=$? -elif [ "$SELECTED_MODE" == "submission" ]; then + "Running submission natively via command:\n\t'$config_cmd' ...\n" + eval "$config_cmd" & + active_pid=$! + wait $active_pid && run_ret=0 || run_ret=1 +elif [ "$mode" == "submission" ]; then # This is bundling up submission mode - echo -e "Bundling up submission from '$selected_code_dir' ...\n" - pushd "$selected_code_dir" >/dev/null - tar -czvf "$selected_out_location" * - result=$? + echo -e "Bundling up submission from '$config_dir' ...\n" + pushd "$config_dir" >/dev/null + tar -czvf "$config_out" * && run_ret=0 || run_ret=1 popd >/dev/null - echo -e "\nSaved to: $selected_out_location" + echo -e "\nSaved to: $config_out" else # This is a containerised submission - echo "Running submission from '$selected_code_dir' with containerisation ..." - pushd "$selected_code_dir" >/dev/null + echo "Running submission from '$config_dir' with containerisation ..." + pushd "$config_dir" >/dev/null submission_tag="benchbot/submission:"$(echo "$(pwd)" | sha256sum | cut -c1-10) - docker build -t "$submission_tag" . - result=$? 
- if [ $result -ne 0 ]; then - echo "Docker build returned a non-zero error code: $build_ret" + docker build -t "$submission_tag" . & + active_pid=$! + wait $active_pid && run_ret=0 || run_ret=1 + if [ $run_ret -ne 0 ]; then + echo "Docker build returned a non-zero error code: $run_ret" else xhost +local:root + echo "Waiting for Docker network ('$DOCKER_NETWORK') to become available..." + while [ -z "$(docker network ls -q -f 'name='$DOCKER_NETWORK)" ]; do + sleep 1; + done docker run --gpus all -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY \ - --network "$DOCKER_NETWORK" --name="$SUBMISSION_CONTAINER_NAME" \ - --hostname="$name" -i -t "$submission_tag" - result=$? + --network "$DOCKER_NETWORK" --name="$SUBMISSION_CONTAINER_NAME" \ + --hostname="$SUBMISSION_CONTAINER_NAME" -t "$submission_tag" && \ + run_ret=0 || run_ret=1 xhost -local:root fi popd >/dev/null fi # Exit here if the submission failed -if [ $result -ne 0 ]; then +if [ $run_ret -ne 0 ]; then printf "${colour_red}\n%s: %d${colour_nc}\n" \ - "Submission failed with result error code" "$result" - exit $result + "Submission failed with result error code" "$run_ret" + exit $run_ret fi # Perform any evaluation that may have been requested by the caller -if [ ! -z "$results_location" ] || [ ! -z "$evaluate" ]; then +if [ ! -z "$results_location" ] || [ ! -z "$evaluate_method" ]; then header_block "Processing results" ${colour_blue} # Pull the results out of the container if appropriate - if [ "$SELECTED_MODE" == "containerised" ]; then + if [ "$mode" == "containerised" ]; then if ! docker cp "${SUBMISSION_CONTAINER_NAME}:${results_src}"\ "${results_src}" 2>/dev/null; then - printf "${colour_red}\n%s%s${colour_nc}\n" \ + printf "${colour_yellow}\n%s${colour_nc}\n" \ "Failed to extract results from submission container; were there any?" - exit 1 + echo "{}" > "${results_src}" fi printf "\nExtracted results from container '%s', to '%s'.\n" \ "$SUBMISSION_CONTAINER_NAME" "$results_src" fi - # Bail if there are no results available + # Warn & write empty results if there are none available if [ ! -f "$results_src" ]; then - printf "\n${colour_red}%s\n ${results_src}${colour_nc}\n" \ + printf "\n${colour_yellow}%s\n ${results_src}${colour_nc}\n" \ "Requested use of results, but the submission saved no results to: " - exit 1 + echo "{}" > "${results_src}" fi # Copy results to a new location if requested @@ -352,11 +454,11 @@ if [ ! -z "$results_location" ] || [ ! -z "$evaluate" ]; then fi # Run evaluation on the results if requested - if [ ! -z "$evaluate" ]; then + if [ ! -z "$evaluate_method" ]; then if [ -z "$results_location" ]; then results_location="$results_src"; fi printf "\nRunning evaluation on results from '%s' ... 
\n" \ "$results_location" - benchbot_eval "$results_location" + benchbot_eval --method "$evaluate_method" "$results_location" fi fi diff --git a/docker/backend.Dockerfile b/docker/backend.Dockerfile index 01cfe31..c8c2d81 100644 --- a/docker/backend.Dockerfile +++ b/docker/backend.Dockerfile @@ -2,7 +2,7 @@ FROM benchbot/core:base # Install ROS Melodic -ENV ROS_WS_PATH /benchbot/ros_ws +ENV ROS_WS_PATH="/benchbot/ros_ws" RUN echo "deb http://packages.ros.org/ros/ubuntu bionic main" > \ /etc/apt/sources.list.d/ros-latest.list && \ apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key \ @@ -32,53 +32,54 @@ RUN sudo rosdep init && rosdep update && \ pushd ros_ws && catkin_make && source devel/setup.bash && popd # Install & build Isaac (using local copies of licensed libraries) +# TODO adapt this to handle multiple simulators +ARG SIMULATORS +ARG ISAAC_SDK_DIR ARG ISAAC_SDK_TGZ -ENV ISAAC_SDK_PATH /benchbot/isaac_sdk -ADD --chown=benchbot:benchbot ${ISAAC_SDK_TGZ} ${ISAAC_SDK_PATH} -RUN pushd "$ISAAC_SDK_PATH" && engine/build/scripts/install_dependencies.sh && \ - bazel build ... && bazel build ... - -# Install environments from a *.zip containing pre-compiled binaries -ARG BENCHBOT_ENVS_MD5SUMS -ENV BENCHBOT_ENVS_MD5SUMS=${BENCHBOT_ENVS_MD5SUMS} -ARG BENCHBOT_ENVS_URLS -ENV BENCHBOT_ENVS_URLS=${BENCHBOT_ENVS_URLS} -ARG BENCHBOT_ENVS_SRCS -ENV BENCHBOT_ENVS_SRCS=${BENCHBOT_ENVS_SRCS} -ENV BENCHBOT_ENVS_PATH /benchbot/benchbot_envs -RUN _urls=($BENCHBOT_ENVS_URLS) && _md5s=($BENCHBOT_ENVS_MD5SUMS) && \ - _srcs=($BENCHBOT_ENVS_SRCS) && mkdir benchbot_envs && pushd benchbot_envs && \ - for i in "${!_urls[@]}"; do \ - echo "Installing environments from '${_srcs[$i]}':" && \ - echo "Downloading ... " && wget -q "${_urls[$i]}" -O "$i".zip && \ - test "${_md5s[$i]}" = $(md5sum "$i".zip | cut -d ' ' -f1) && \ - echo "Extracting ... " && unzip -q "$i".zip && rm -v "$i".zip && \ - mv -v "$(find . 
-mindepth 1 -maxdepth 1 -type d -not -regex ".*/[0-9]*"| \ - head -n 1)" "$i" || exit 1; \ - done +ENV ISAAC_SDK_SRCS="/isaac_srcs" +COPY --chown=benchbot:benchbot ${ISAAC_SDK_DIR} ${ISAAC_SDK_SRCS} +ENV ISAAC_SDK_PATH="/benchbot/isaac_sdk" +RUN [ -z "$SIMULATORS" ] && exit 0 || mkdir "$ISAAC_SDK_PATH" && \ + tar -xf "$ISAAC_SDK_SRCS/$ISAAC_SDK_TGZ" -C "$ISAAC_SDK_PATH" && \ + pushd "$ISAAC_SDK_PATH" && engine/build/scripts/install_dependencies.sh # Install benchbot components, ordered by how expensive installation is +ARG BENCHBOT_MSGS_GIT +ARG BENCHBOT_MSGS_HASH +ENV BENCHBOT_MSGS_HASH="$BENCHBOT_MSGS_HASH" +ENV BENCHBOT_MSGS_PATH="/benchbot/benchbot_msgs" +RUN git clone $BENCHBOT_MSGS_GIT $BENCHBOT_MSGS_PATH && \ + pushd $BENCHBOT_MSGS_PATH && git checkout $BENCHBOT_MSGS_HASH && \ + pip install -r requirements.txt && pushd $ROS_WS_PATH && \ + ln -sv $BENCHBOT_MSGS_PATH src/ && source devel/setup.bash && catkin_make ARG BENCHBOT_SIMULATOR_GIT ARG BENCHBOT_SIMULATOR_HASH -ENV BENCHBOT_SIMULATOR_PATH /benchbot/benchbot_simulator -RUN git clone $BENCHBOT_SIMULATOR_GIT $BENCHBOT_SIMULATOR_PATH && \ +ENV BENCHBOT_SIMULATOR_PATH="/benchbot/benchbot_simulator" +RUN [ -z "$SIMULATORS" ] && exit 0 || \ + git clone $BENCHBOT_SIMULATOR_GIT $BENCHBOT_SIMULATOR_PATH && \ pushd $BENCHBOT_SIMULATOR_PATH && git checkout $BENCHBOT_SIMULATOR_HASH && \ - source $ROS_WS_PATH/devel/setup.bash && .isaac_patches/apply_patches && \ - ./bazelros build //apps/benchbot_simulator && pip install -r requirements.txt + .isaac_patches/apply_patches && source $ROS_WS_PATH/devel/setup.bash && \ + ./bazelros build //apps/benchbot_simulator && \ + pip install -r requirements.txt ARG BENCHBOT_SUPERVISOR_GIT ARG BENCHBOT_SUPERVISOR_HASH -ENV BENCHBOT_SUPERVISOR_PATH /benchbot/benchbot_supervisor +ENV BENCHBOT_SUPERVISOR_PATH="/benchbot/benchbot_supervisor" RUN git clone $BENCHBOT_SUPERVISOR_GIT $BENCHBOT_SUPERVISOR_PATH && \ pushd $BENCHBOT_SUPERVISOR_PATH && git checkout $BENCHBOT_SUPERVISOR_HASH && \ - pip install . + pip3 install . 
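# Each component above follows the same pinning pattern: clone the repository
# named by a *_GIT build argument, check out the exact commit named by the
# matching *_HASH build argument, then install. A rough sketch of supplying
# one such pin at image build time (the URL and SHA are placeholders, not the
# stack's real defaults):
#
#   docker build -f docker/backend.Dockerfile \
#     --build-arg BENCHBOT_SUPERVISOR_GIT=<supervisor-repo-url> \
#     --build-arg BENCHBOT_SUPERVISOR_HASH=<pinned-commit-sha> .
#
# Pinning to commit hashes rather than branch heads keeps backend builds
# reproducible even as the upstream component repositories move.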
ARG BENCHBOT_CONTROLLER_GIT ARG BENCHBOT_CONTROLLER_HASH -ENV BENCHBOT_CONTROLLER_PATH /benchbot/benchbot_robot_controller +ENV BENCHBOT_CONTROLLER_PATH="/benchbot/benchbot_robot_controller" RUN git clone $BENCHBOT_CONTROLLER_GIT $BENCHBOT_CONTROLLER_PATH && \ pushd $BENCHBOT_CONTROLLER_PATH && git checkout $BENCHBOT_CONTROLLER_HASH && \ - pip install -r $BENCHBOT_CONTROLLER_PATH/requirements.txt && pushd $ROS_WS_PATH && \ + pip install -r requirements.txt && pushd $ROS_WS_PATH && \ pushd src && git clone https://github.com/eric-wieser/ros_numpy.git && popd && \ ln -sv $BENCHBOT_CONTROLLER_PATH src/ && source devel/setup.bash && catkin_make +# Create a place to mount our add-ons, & install manager dependencies +ARG ADDONS_PATH +ENV BENCHBOT_ADDONS_PATH=$ADDONS_PATH +RUN mkdir -p $BENCHBOT_ADDONS_PATH && pip3 install pyyaml + # Record the type of backend built -ENV BENCHBOT_BACKEND_TYPE full +ENV BENCHBOT_SIMULATORS="${SIMULATORS}" diff --git a/docker/backend_lite.Dockerfile b/docker/backend_lite.Dockerfile deleted file mode 100644 index 6df22db..0000000 --- a/docker/backend_lite.Dockerfile +++ /dev/null @@ -1,70 +0,0 @@ -# Extend the BenchBot Core image -FROM benchbot/core:base - -# Install ROS Melodic -ENV ROS_WS_PATH /benchbot/ros_ws -RUN echo "deb http://packages.ros.org/ros/ubuntu bionic main" > \ - /etc/apt/sources.list.d/ros-latest.list && \ - apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key \ - C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654 && \ - apt update && apt install -y ros-melodic-desktop-full python-rosdep \ - python-rosinstall python-rosinstall-generator python-wstool \ - python-catkin-tools python-pip build-essential - -# Install Vulkan -RUN wget -qO - http://packages.lunarg.com/lunarg-signing-key-pub.asc | \ - apt-key add - && wget -qO /etc/apt/sources.list.d/lunarg-vulkan-bionic.list \ - http://packages.lunarg.com/vulkan/lunarg-vulkan-bionic.list && \ - apt update && DEBIAN_FRONTEND=noninteractive apt install -yq vulkan-sdk - -# Create a benchbot user with ownership of the benchbot software stack (Unreal -# for some irritating reason will not accept being run by root...) -RUN useradd --create-home --password "" benchbot && passwd -d benchbot && \ - apt update && apt install -yq sudo && usermod -aG sudo benchbot && \ - usermod -aG root benchbot && mkdir /benchbot && \ - chown benchbot:benchbot /benchbot -USER benchbot -WORKDIR /benchbot - -# Build ROS -RUN sudo rosdep init && rosdep update && \ - mkdir -p ros_ws/src && source /opt/ros/melodic/setup.bash && \ - pushd ros_ws && catkin_make && source devel/setup.bash && popd - -# Install environments from a *.zip containing pre-compiled binaries -ARG BENCHBOT_ENVS_MD5SUMS -ENV BENCHBOT_ENVS_MD5SUMS=${BENCHBOT_ENVS_MD5SUMS} -ARG BENCHBOT_ENVS_URLS -ENV BENCHBOT_ENVS_URLS=${BENCHBOT_ENVS_URLS} -ARG BENCHBOT_ENVS_SRCS -ENV BENCHBOT_ENVS_SRCS=${BENCHBOT_ENVS_SRCS} -ENV BENCHBOT_ENVS_PATH /benchbot/benchbot_envs -RUN _urls=($BENCHBOT_ENVS_URLS) && _md5s=($BENCHBOT_ENVS_MD5SUMS) && \ - _srcs=($BENCHBOT_ENVS_SRCS) && mkdir benchbot_envs && pushd benchbot_envs && \ - for i in "${!_urls[@]}"; do \ - echo "Installing environments from '${_srcs[$i]}':" && \ - echo "Downloading ... " && wget -q "${_urls[$i]}" -O "$i".zip && \ - test "${_md5s[$i]}" = $(md5sum "$i".zip | cut -d ' ' -f1) && \ - echo "Extracting ... " && unzip -q "$i".zip && rm -v "$i".zip && \ - mv "$(find . 
-mindepth 1 -maxdepth 1 -type d | head -n 1)" "$i" || \ - exit 1; \ - done - -# Install benchbot components, ordered by how expensive installation is -ARG BENCHBOT_SUPERVISOR_GIT -ARG BENCHBOT_SUPERVISOR_HASH -ENV BENCHBOT_SUPERVISOR_PATH /benchbot/benchbot_supervisor -RUN git clone $BENCHBOT_SUPERVISOR_GIT $BENCHBOT_SUPERVISOR_PATH && \ - pushd $BENCHBOT_SUPERVISOR_PATH && git checkout $BENCHBOT_SUPERVISOR_HASH && \ - pip install . -ARG BENCHBOT_CONTROLLER_GIT -ARG BENCHBOT_CONTROLLER_HASH -ENV BENCHBOT_CONTROLLER_PATH /benchbot/benchbot_robot_controller -RUN git clone $BENCHBOT_CONTROLLER_GIT $BENCHBOT_CONTROLLER_PATH && \ - pushd $BENCHBOT_CONTROLLER_PATH && git checkout $BENCHBOT_CONTROLLER_HASH && \ - pip install -r $BENCHBOT_CONTROLLER_PATH/requirements.txt && pushd $ROS_WS_PATH && \ - pushd src && git clone https://github.com/eric-wieser/ros_numpy.git && popd && \ - ln -sv $BENCHBOT_CONTROLLER_PATH src/ && source devel/setup.bash && catkin_make - -# Record the type of backend built -ENV BENCHBOT_BACKEND_TYPE lite diff --git a/docker/core.Dockerfile b/docker/core.Dockerfile index 6657f15..d80e1b7 100644 --- a/docker/core.Dockerfile +++ b/docker/core.Dockerfile @@ -16,8 +16,8 @@ RUN apt update && apt install -yq wget gnupg2 software-properties-common git \ ARG NVIDIA_DRIVER_VERSION ARG CUDA_DRIVERS_VERSION ARG CUDA_VERSION -ENV NVIDIA_VISIBLE_DEVICES all -ENV NVIDIA_DRIVER_CAPABILITIES compute,display,graphics,utility +ENV NVIDIA_VISIBLE_DEVICES="all" +ENV NVIDIA_DRIVER_CAPABILITIES="compute,display,graphics,utility" RUN add-apt-repository ppa:graphics-drivers && \ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin && \ mv -v cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600 && \