
Merge branch 'develop'
btalb committed Apr 15, 2021
2 parents 783e9ca + b3dd2d9 commit 3d41cab
Showing 3 changed files with 30 additions and 20 deletions.
6 changes: 6 additions & 0 deletions README.md
@@ -1,8 +1,14 @@
<p align=center><strong>~ Our <a href="https://evalai.cloudcv.org/web/challenges/challenge-page/807/overview">Robotic Vision Scene Understanding (RVSU) Challenge is live on EvalAI</a> ~<br>(prizes include $2,500 USD provided by <a href="https://www.roboticvision.org/">ACRV</a> & GPUs provided by sponsors <a href="https://www.nvidia.com/en-us/research/robotics/">NVIDIA</a>)</strong></p>
<p align=center><strong>~ Access our BenchBot Environments for Active Robotics (BEAR) through our <a href="https://github.com/benchbot-addons/envs_isaac_challenge">challenge</a> and <a href="https://github.com/benchbot-addons/envs_isaac_develop">development</a> environment add-ons (installed by default) ~</strong></p>
<p align=center><strong>~ Our <a href="https://github.com/qcr/benchbot/wiki/Tutorial:-Performing-Semantic-SLAM-with-Votenet">BenchBot tutorial</a> is the best place to get started developing with BenchBot ~</strong></p>

# BenchBot Software Stack

[![BenchBot project](https://img.shields.io/badge/collection-BenchBot-%231a2857)](http://benchbot.org)
[![QUT Centre for Robotics Open Source](https://github.com/qcr/qcr.github.io/raw/master/misc/badge.svg)](https://qcr.github.io)
![Primary language](https://img.shields.io/github/languages/top/qcr/benchbot)
[![License](https://img.shields.io/github/license/qcr/benchbot)](./LICENSE.txt)

![benchbot_web](./docs/benchbot_web.gif)

The BenchBot software stack is a collection of software packages that allow end users to control robots in real or simulated environments with a simple Python API. It leverages the simple "observe, act, repeat" approach to robot problems that is prevalent in reinforcement learning communities ([OpenAI Gym](https://gym.openai.com/) users will find the BenchBot API interface very similar).
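The "observe, act, repeat" loop is easiest to see in code. Below is a minimal, self-contained sketch in the Gym style; `DemoEnv`, `observe`, and `act` are illustrative placeholders only, not the actual `benchbot_api` interface:

```python
# Minimal "observe, act, repeat" loop, Gym-style.
# DemoEnv is a stand-in environment, NOT the real BenchBot API.

class DemoEnv:
    """Toy environment: the agent must count up to a goal value."""

    def __init__(self, goal=3):
        self.goal = goal
        self.state = 0

    def observe(self):
        # Return the current observation and whether the episode is done
        return self.state, self.state >= self.goal

    def act(self, action):
        # Apply an action ('increment' is the only one this toy supports)
        if action == "increment":
            self.state += 1


def run_agent(env):
    """Observe, act, repeat until the environment reports done."""
    steps = 0
    observation, done = env.observe()
    while not done:
        env.act("increment")               # act on the latest observation
        observation, done = env.observe()  # observe the result
        steps += 1
    return observation, steps


result, steps = run_agent(DemoEnv(goal=3))
print(result, steps)  # → 3 3
```

The real API swaps `DemoEnv` for a robot or simulator connection, but the control flow an agent writes against it is the same loop.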
32 changes: 18 additions & 14 deletions bin/benchbot_batch
@@ -207,7 +207,7 @@ _list_batches_pre=\
run_pid=
function kill_run() {
if [ -n $run_pid ]; then
kill -TERM $run_pid &> /dev/null
kill -TERM $run_pid &> /dev/null || true
wait $run_pid || true
run_pid=
fi
@@ -217,12 +217,12 @@ submit_pid=
submit_cat_pid=
function kill_submit() {
if [ -n $submit_pid ]; then
kill -TERM $submit_pid &> /dev/null
kill -TERM $submit_pid &> /dev/null || true
wait $submit_pid || true
submit_pid=
fi
if [ -n $submit_cat_pid ]; then
kill -TERM $submit_cat_pid &> /dev/null
kill -TERM $submit_cat_pid &> /dev/null || true
wait $submit_cat_pid || true
submit_cat_pid=
fi
@@ -396,7 +396,7 @@ while [ $i -lt ${#envs_list[@]} ]; do
sleep 3

# Run should never die normally, so treat this as an error
if ! $(ps -p $run_pid &>/dev/null); then
if [ 0 -ne $(ps -p $run_pid &>/dev/null; echo $?) ]; then
echo ""
kill_submit
header_block "Crash detected for environment: ${envs_list[$i]}" \
@@ -408,17 +408,9 @@ while [ $i -lt ${#envs_list[@]} ]; do
exit 1
fi

# Handle the result of failed submissions (looking for an error code)
wait $submit_pid && submit_result=0 || submit_result=1
if [ $submit_result -ne 0 ]; then
echo ""
kill_run
printf "\n${colour_red}$submission_err${colour_nc}\n" "${envs_list[$i]}"
exit 1
fi

# Skip moving on if we collided using 'move_next' actuation, otherwise move
# to the next environment
retry=
if [ -n "$(run_manager_cmd 'print(get_value_by_name("tasks", "'$task'", \
"actions"))' | grep "'move_next'")" ] && \
[ -n "$(docker run --rm --network $DOCKER_NETWORK -it \
@@ -427,7 +419,19 @@ while [ $i -lt ${#envs_list[@]} ]; do
grep "true")" ]; then
printf "\n${colour_yellow}$collision_warn${colour_nc}\n\n" \
"${envs_list[$i]}"
else
retry=1
fi

# Handle the result of failed submissions (looking for an error code)
wait $submit_pid && submit_result=0 || submit_result=1
if [ $submit_result -ne 0 ] && [ -z $retry ]; then
echo ""
kill_run
printf "\n${colour_red}$submission_err${colour_nc}\n" "${envs_list[$i]}"
exit 1
fi

if [ -z $retry ]; then
results_list+=("${prefix}_$i.json")
i=$((i+1))
fi
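The reordered batch loop above tracks a `retry` flag and advances the environment index only when that flag is unset. The index-driven retry pattern can be sketched on its own, with placeholder names (`run_env`, `envs`) rather than `benchbot_batch` internals:

```shell
#!/usr/bin/env bash
# Illustrative retry loop: advance the index only when a run succeeds.

envs=("env_a" "env_b")
attempts=0

run_env() {
  attempts=$((attempts + 1))
  # Fail the first attempt at env_a to force one retry
  if [ "$1" = "env_a" ] && [ "$attempts" -eq 1 ]; then
    return 1
  fi
  return 0
}

i=0
while [ $i -lt ${#envs[@]} ]; do
  retry=
  run_env "${envs[$i]}" || retry=1

  if [ -z "$retry" ]; then
    i=$((i+1))   # success: move to the next environment
  fi             # failure: loop again with the same index
done

echo "$attempts"  # → 3 (env_a twice, env_b once)
```

Because the index is untouched on a retry, the same environment is simply run again on the next loop iteration.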
Expand Down
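The rewritten crash check works because `ps -p` exits 0 only while the PID refers to a live process; capturing that status with `echo $?` inside a command substitution yields a value that `[` can compare. A standalone sketch of the check:

```shell
#!/usr/bin/env bash
# 'ps -p' exits 0 only while the PID exists, so capturing its status with
# 'echo $?' gives a value that '[' can test numerically.

sleep 60 &      # stand-in for the long-running run process
run_pid=$!

if [ 0 -ne $(ps -p $run_pid &> /dev/null; echo $?) ]; then
  echo "crash detected"
else
  echo "process alive"
fi

kill -TERM "$run_pid" &> /dev/null || true
wait "$run_pid" || true

# After the process exits, the same test now reports the crash
if [ 0 -ne $(ps -p $run_pid &> /dev/null; echo $?) ]; then
  echo "crash detected"
fi
```

The earlier form, `if ! $(ps -p $run_pid &>/dev/null)`, silently never fired: the command substitution expands to an empty string, so the condition tested nothing.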
12 changes: 6 additions & 6 deletions bin/benchbot_install
@@ -671,7 +671,7 @@ chk_addonsuptodate_fix=\
'pushd '"$PATH_ADDONS"' &&
git fetch --all && git checkout -- . &&
(git checkout -t origin/$BRANCH_DEFAULT || git checkout $BRANCH_DEFAULT) &&
git pull && popd'
git pull && popd && pip3 uninstall -y benchbot_addons'
chk_addonsuptodate_reboot=1

chk_addonsinstalled_name='BenchBot Add-ons Manager installed'
@@ -690,7 +690,7 @@ chk_addonsinstalled_issue="\
environment before running BenchBot."
chk_addonsinstalled_fix=\
'pushd '"$PATH_ADDONS"' &&
python3 -m pip install -e . && popd'
python3 -m pip install --upgrade -e . && popd'
chk_addonsinstalled_reboot=1

chk_apicloned_name='BenchBot API cloned'
@@ -729,7 +729,7 @@ chk_apiuptodate_fix=\
'pushd '"$PATH_API"' &&
git fetch --all && git checkout -- . &&
(git checkout -t origin/$BRANCH_DEFAULT || git checkout $BRANCH_DEFAULT) &&
git pull && popd'
git pull && popd && pip3 uninstall -y benchbot_api'
chk_apiuptodate_reboot=1

chk_apiinstalled_name='BenchBot API installed'
@@ -748,7 +748,7 @@ chk_apiinstalled_issue="\
environment before running BenchBot."
chk_apiinstalled_fix=\
'pushd '"$PATH_API"' &&
python3 -m pip install -e . && popd'
python3 -m pip install --upgrade -e . && popd'
chk_apiinstalled_reboot=1

chk_evalcloned_name='BenchBot evaluation cloned'
@@ -787,7 +787,7 @@ chk_evaluptodate_fix=\
'pushd '"$PATH_EVAL"' &&
git fetch --all && git checkout -- . &&
(git checkout -t origin/$BRANCH_DEFAULT || git checkout $BRANCH_DEFAULT) &&
git pull && popd'
git pull && popd && python3 -m pip uninstall -y benchbot_eval'
chk_evaluptodate_reboot=1

chk_evalinstalled_name='BenchBot evaluation installed'
@@ -806,7 +806,7 @@ chk_evalinstalled_issue="\
environment before running BenchBot."
chk_evalinstalled_fix=\
'pushd '"$PATH_EVAL"' &&
python3 -m pip install -e . && popd'
python3 -m pip install --upgrade -e . && popd'
chk_evalinstalled_reboot=1

chk_hostsavail_name='BenchBot hosts available'
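Each `benchbot_install` fix above now pairs the `git pull` with an explicit `pip uninstall` and a `pip install --upgrade -e .`, so a stale install can't mask the freshly pulled code. The shared pattern, sketched with placeholder paths and package names (the real script substitutes e.g. `$PATH_API` and `benchbot_api`), echoed rather than executed so the sequence is easy to inspect:

```shell
#!/usr/bin/env bash
# Update-then-reinstall pattern used by the fix commands above.
# Arguments are placeholders; the commands are printed, not run.

fix_commands() {
  local repo="$1" package="$2" branch="$3"
  cat <<EOF
pushd $repo
git fetch --all
git checkout -- .
git checkout -t origin/$branch || git checkout $branch
git pull
popd
python3 -m pip uninstall -y $package
python3 -m pip install --upgrade -e $repo
EOF
}

fix_commands "/path/to/benchbot_api" "benchbot_api" "master"
```

Uninstalling first guarantees the subsequent editable install re-registers the package metadata, even when the version number has not changed.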
