From my reading of the code, the language we currently have for defining an agent's behaviour in this library is:
initial position
terminal position
obstacles with trajectories
a set of options to tune obstacle avoidance
It's not clear how I would implement a goal like: if there is a red circle in front of me, I want to move around it on the left; otherwise, move around it on the right.
PDDL is a neat way to write high-level goals and constraints on the behaviour of some agent. With a PDDL-like language, you'd write something like this (pseudocode):
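To make the idea concrete, a PDDL-style goal for the red-circle example might look like the sketch below. All predicates here (`red`, `passed-on-left`, `passed-on-right`) are illustrative inventions, not part of any existing domain definition:

```
; hypothetical PDDL-style pseudocode; predicates are illustrative only
(:goal (and (at vehicle terminal-position)
            (forall (?o - obstacle)
              (and (implies (red ?o)       (passed-on-left vehicle ?o))
                   (implies (not (red ?o)) (passed-on-right vehicle ?o))))))
```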
An alternative to using a formal language like PDDL would be to expose a cost-function interface, i.e. users pass in a Python function that the planner can query to guide its search.
def left_of_red_obs_cost(vehicle, world):
    for obs in world.obstacles:
        vehicle_is_on_left = is_vehicle_on_left(vehicle, obs)
        if ((is_red(obs) and vehicle_is_on_left)
                or (not is_red(obs) and not vehicle_is_on_left)):
            return 0   # this is what we want
        else:
            return 10  # vehicle is in a no-good situation, return high cost

vehicle.attach_cost_function(left_of_red_obs_cost)
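To show how a planner could actually consume such a function, here is a minimal, self-contained sketch of a sampling-based search that evaluates candidate states against a user-supplied cost. Every name in it (`World`, `Obstacle`, `Vehicle`, `is_red`, `is_vehicle_on_left`) is a hypothetical stand-in, not the omg-tools API:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    x: float       # lateral position of the obstacle centerline
    color: str

@dataclass
class Vehicle:
    x: float       # lateral position of the vehicle

@dataclass
class World:
    obstacles: list

def is_red(obs):
    return obs.color == "red"

def is_vehicle_on_left(vehicle, obs):
    return vehicle.x < obs.x

def left_of_red_obs_cost(vehicle, world):
    # penalize passing a red obstacle on the right, or a non-red one on the left
    for obs in world.obstacles:
        if is_red(obs) != is_vehicle_on_left(vehicle, obs):
            return 10  # no-good situation
    return 0

# a sampling-based planner would score candidate states and keep the cheapest
world = World(obstacles=[Obstacle(x=0.0, color="red")])
candidates = [Vehicle(x=-1.0), Vehicle(x=+1.0)]  # pass on the left vs. right
best = min(candidates, key=lambda v: left_of_red_obs_cost(v, world))
print(best.x)  # -1.0: the left-hand candidate wins for a red obstacle
```

The key design point is that the planner only needs a black-box query interface: it never inspects the logic inside the user's function.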
Thoughts?
OMGtools solves a succession of motion plans with a fixed time-horizon.
For those motion plans you have a prediction of how you yourself and your environment will change, and we use gradient-based generic nonlinear optimization to find a (locally) optimal plan of action.
Introducing logical decision variables into this optimization, or allowing logic in the objective, is non-trivial.
For the case you propose, the higher level 'cascader' will have to assume the color for the planning horizon.
It could then, according to this knowledge, activate one or the other force field in the objective; mandate that the trajectory intersect a line segment on either the left or the right side; or leave the optimization problem alone and simply pick a different initial guess for the motion plan, produced by some sampling-based planner running on top of omg-tools.
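One way to picture that last option, where the high-level layer fixes the discrete left/right choice before the continuous solver runs, is the sketch below. The seed-waypoint logic and the commented-out `plan_with_initial_guess` call are hypothetical, not part of omg-tools:

```python
def pick_initial_guess(start, goal, obs, obs_is_red, clearance=1.0):
    # The discrete decision (left vs. right) is made here, outside the
    # gradient-based optimizer, by choosing which side the seed
    # trajectory detours around the obstacle.
    side = -clearance if obs_is_red else +clearance  # left for red, else right
    midpoint = ((start[0] + goal[0]) / 2.0, obs[1] + side)
    return [start, midpoint, goal]

guess = pick_initial_guess(start=(0.0, 0.0), goal=(10.0, 0.0),
                           obs=(5.0, 0.0), obs_is_red=True)
# the continuous optimizer would then refine this seed, e.g.:
# trajectory = plan_with_initial_guess(guess)  # hypothetical solver call
print(guess)  # [(0.0, 0.0), (5.0, -1.0), (10.0, 0.0)]
```

Because the nonlinear solver only finds local optima, which basin it lands in is largely determined by this seed, so the left/right logic never has to enter the optimization itself.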