
[Bug Report] Ik Absolute seems to not work properly with rotated base frame #911

Closed · 1 of 3 tasks
zoctipus opened this issue Aug 30, 2024 · 14 comments · Fixed by #967
Labels: bug (Something isn't working)

Comments

@zoctipus (Contributor) commented Aug 30, 2024

Describe the bug

Isaac-Lift-Cube-Franka-IK-Abs-v0 with the robot base rotated 90 degrees, stepping a constant action expressed in the robot base frame (the frame marker visualizes the target pose):

[gif: frankar1]

Isaac-Lift-Cube-Franka-IK-Abs-v0 with no robot base rotation, stepping the same constant action in the robot base frame (the frame marker visualizes the target pose):

[gif: frankar2]

As the gifs show, rotating the base makes the robot unable to track the target pose stably.

Steps to reproduce

The code below reproduces the bug. Uncomment the marked line to rotate the robot base, then run:

python source/standalone/environments/position_follow_agent.py --task Isaac-Lift-Cube-Franka-IK-Abs-v0 --num_envs 1

import argparse

from omni.isaac.lab.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Agent that tracks a constant end-effector pose in Isaac Lab environments.")
parser.add_argument(
    "--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()

# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app

"""Rest everything follows."""

import gymnasium as gym
import torch

import omni.isaac.lab_tasks  # noqa: F401
from omni.isaac.lab_tasks.utils import parse_env_cfg
from omni.isaac.lab.utils.math import combine_frame_transforms
from omni.isaac.lab.markers import VisualizationMarkers
from omni.isaac.lab.markers.config import FRAME_MARKER_CFG


def main():
    """Random actions agent with Isaac Lab environment."""
    # create environment configuration
    env_cfg = parse_env_cfg(
        args_cli.task, device=args_cli.device, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
    )
    # env_cfg.scene.robot.init_state.rot = (0.70711, 0, 0, -0.70711)  # Uncomment to rotate the base by 90 degrees
    # create environment
    env = gym.make(args_cli.task, cfg=env_cfg)
    # print info (this is vectorized environment)
    print(f"[INFO]: Gym observation space: {env.observation_space}")
    print(f"[INFO]: Gym action space: {env.action_space}")
    # reset environment
    env.reset()
    env.unwrapped.sim.step(render=True)
    env.unwrapped.scene.update(dt=env.unwrapped.physics_dt)
    cur_ee_pose = env.unwrapped.action_manager.get_term("arm_action")._compute_frame_pose()
    frame_marker_cfg = FRAME_MARKER_CFG.copy()  # type: ignore
    frame_marker_cfg.markers["frame"].scale = (0.05, 0.05, 0.05)
    goal_marker = VisualizationMarkers(frame_marker_cfg.replace(prim_path="/Visuals/ee_goal"))
    # simulate environment
    actions = torch.zeros(env.unwrapped.action_space.shape, device=env.unwrapped.device)
    while simulation_app.is_running():
        # run everything in inference mode
        with torch.inference_mode():
            # command the initial end-effector pose (in the robot base frame) as a constant target
            actions[:, :7] = torch.cat(cur_ee_pose, dim=1)

            world_pos, world_quat = combine_frame_transforms(
                env.unwrapped.scene['robot'].data.root_pos_w, env.unwrapped.scene['robot'].data.root_quat_w, actions[:, :3], actions[:, 3:7]
            )
            goal_marker.visualize(world_pos + env.unwrapped.scene.env_origins, world_quat)
            # apply actions
            env.step(actions)

    # close the simulator
    env.close()


if __name__ == "__main__":
    # run the main function
    main()
    # close sim app
    simulation_app.close()

System Info

Describe the characteristics of your environment:

  • Commit: 7452386
  • Isaac Sim Version: 4.1
  • OS: 20.04
  • GPU: 4090
  • CUDA: 12.5
  • GPU Driver: 555.42.02

Additional context

I traced the code down to compute_delta_jointpos in ik_differential.py: when the environment first starts, pos_error and quat_error are close to [0, 0, 0] and [1, 0, 0, 0] respectively, indicating the target pose is set correctly. Nevertheless, the rotated Franka simply drifts away.

Checklist

  • I have checked that there is no similar issue in the repo (required)
  • I have checked that the issue is not in running Isaac Sim itself and is related to the repo

Acceptance Criteria


  • IK Absolute can track any reachable pose whether or not the base is rotated
@Mayankm96 added the bug label on Sep 5, 2024
@Mayankm96 (Contributor) commented Sep 5, 2024

We don't read the robot's base pose for adjusting the geometric Jacobian and end-effector targets. This is definitely a bug that should be addressed. Would you be able to help with this, @zoctipus?

Otherwise, @jtigue-bdai, maybe this goes along with your current efforts on the controllers as well?

@zoctipus (Contributor, Author) commented Sep 5, 2024

Let me give it a try!

@jtigue-bdai (Collaborator) commented:

Sounds good. Happy to discuss or assist as needed.

@zoctipus (Contributor, Author) commented Sep 6, 2024

@Mayankm96 @jtigue-bdai
Gif showing the Franka, with its base rotated 90 degrees, being controlled stably with teleoperation:

[gif: franka5]

Screenshot of the edit that fixes the bug:

[screenshot: 2024-09-05 21-18-42]

I got it working; it is actually an easy fix, needing only the four lines 180-183 shown in the screenshot.
Before I open a PR, I want to discuss whether there is a more efficient way to implement this. I was initially thinking of just using quat_apply directly, without converting to a matrix, like below:

inverse_base_quat = math_utils.quat_inv(base_rot)
jacobian[:, :3, :] = math_utils.quat_apply(inverse_base_quat, jacobian[:, :3, :])
jacobian[:, 3:, :] = math_utils.quat_apply(inverse_base_quat, jacobian[:, 3:, :])

but it does not work as I expected, and I needed to multiply by the rotation matrix, as shown in the screenshot, to get it to work correctly.
Let me know your thoughts!

@jtigue-bdai (Collaborator) commented:

So you are saying that quat_apply doesn't work? If so, you are correct: quat_apply assumes the second argument is a 2D tensor with shape N x 3, where N is usually the number of clones/envs/samples/etc. In this case you are passing in an [N x 3 x n_dof] tensor.

Using matrix_from_quat combined with torch.bmm is an effective route, so your lines 180-183 make sense.

Now, one thing to think about is the implementation. The current functionality isn't necessarily a bug but more an unstated feature: the Jacobians being returned are in the "world" frame, and depending on the use case you may want to describe end-effector velocities in the world frame. When you implement this, you may want to create alternative properties on the asset, i.e. world-frame and root-frame Jacobians.
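
For reference, below is a minimal sketch of the Jacobian rotation being discussed here, using matrix_from_quat and torch.bmm. The function name and tensor shapes are illustrative assumptions; this is not the code from the screenshot or the eventual PR.

import torch

import omni.isaac.lab.utils.math as math_utils


def jacobian_world_to_base(jacobian_w: torch.Tensor, base_quat_w: torch.Tensor) -> torch.Tensor:
    """Rotate a geometric Jacobian from the world frame into the robot base frame.

    Args:
        jacobian_w: Jacobian in the world frame, shape (N, 6, n_dof).
        base_quat_w: Base orientation in the world frame as (w, x, y, z), shape (N, 4).
    """
    # rotation matrix that maps world-frame vectors into the base frame
    base_rot_matrix = math_utils.matrix_from_quat(math_utils.quat_inv(base_quat_w))  # (N, 3, 3)
    jacobian_b = jacobian_w.clone()
    # rotate the linear (top three) and angular (bottom three) rows column-wise
    jacobian_b[:, :3, :] = torch.bmm(base_rot_matrix, jacobian_w[:, :3, :])
    jacobian_b[:, 3:, :] = torch.bmm(base_rot_matrix, jacobian_w[:, 3:, :])
    return jacobian_b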

@zoctipus (Contributor, Author) commented Sep 6, 2024

Thanks for such a comprehensive response!
I love the philosophy behind the implementation; I learned a lot!
Below are my new implementations:

[screenshot: 2024-09-06 14-45-15]
[screenshot: 2024-09-06 14-45-40]

Let me know what I can do better!

@pushkalkatara commented Sep 20, 2024

The end-effector targets also need to be updated for absolute poses to work:

    def process_actions(self, actions: torch.Tensor):
        # store the raw actions
        self._raw_actions[:] = actions
        self._processed_actions[:] = self.raw_actions * self._scale

        # interpret the action as a goal pose given in the world frame
        goal_pos_w, goal_rot_w = self._processed_actions[:, :3], self._processed_actions[:, 3:]
        root_pose_w = self._asset.data.root_state_w[:, :7]

        # rotation matrix that maps world-frame quantities into the base frame
        base_rot_matrix = math_utils.matrix_from_quat(math_utils.quat_inv(root_pose_w[:, 3:]))

        # express the goal orientation in the base frame
        goal_rot_matrix_b = torch.bmm(base_rot_matrix, math_utils.matrix_from_quat(goal_rot_w))
        goal_rot_b = math_utils.quat_from_matrix(goal_rot_matrix_b)

        # express the goal position in the base frame
        base_pos = root_pose_w[:, :3]
        base_pos_inv = -torch.einsum("nij,nj->ni", base_rot_matrix, base_pos)
        goal_pos_b = torch.einsum("nij,nj->ni", base_rot_matrix, goal_pos_w) + base_pos_inv

        self._processed_actions[:, :3] = goal_pos_b
        self._processed_actions[:, 3:] = goal_rot_b

        # obtain quantities from simulation
        ee_pos_curr, ee_quat_curr = self._compute_frame_pose()
        # set command into controller
        self._ik_controller.set_command(self._processed_actions, ee_pos_curr, ee_quat_curr)

Also, note that the scale parameter would work differently for quaternions than for positions.
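
For reference, the same world-to-base conversion of the goal pose could likely be written more compactly with the frame-transform helper in omni.isaac.lab.utils.math. A minimal sketch, assuming subtract_frame_transforms returns the pose of its second argument expressed in the frame of its first:

import torch

import omni.isaac.lab.utils.math as math_utils


def goal_world_to_base(goal_pose_w: torch.Tensor, root_pose_w: torch.Tensor) -> torch.Tensor:
    """Express a world-frame goal pose (position + wxyz quaternion, shape (N, 7)) in the robot base frame."""
    goal_pos_b, goal_quat_b = math_utils.subtract_frame_transforms(
        root_pose_w[:, :3], root_pose_w[:, 3:7], goal_pose_w[:, :3], goal_pose_w[:, 3:7]
    )
    return torch.cat([goal_pos_b, goal_quat_b], dim=-1)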

@zoctipus (Contributor, Author) commented:

Thanks for the discussion.

This is a good point, though I suppose this change of frame could be done outside the ActionManager, so that input actions are always in the robot's base frame?

@pushkalkatara commented:

It's a task-space action, so by design it should perhaps receive targets in the global frame; but it's a design choice, and it could be kept in the robot frame as well.

@pushkalkatara commented:

I added debug visualization functions as well, in case they're helpful:

    def _set_debug_vis_impl(self, debug_vis: bool):
        # create markers if necessary for the first time
        if debug_vis:
            if not hasattr(self, "goal_pose_visualizer"):
                marker_cfg = FRAME_MARKER_CFG.copy()
                marker_cfg.markers["frame"].scale = (0.1, 0.1, 0.1)
                # -- goal pose
                marker_cfg.prim_path = "/Visuals/Action/goal_pose"
                self.goal_pose_visualizer = VisualizationMarkers(marker_cfg)
                # -- current body pose
                marker_cfg.prim_path = "/Visuals/Action/body_pose"
                self.body_pose_visualizer = VisualizationMarkers(marker_cfg)
            # set their visibility to true
            self.goal_pose_visualizer.set_visibility(True)
            self.body_pose_visualizer.set_visibility(True)
        else:
            if hasattr(self, "goal_pose_visualizer"):
                self.goal_pose_visualizer.set_visibility(False)
                self.body_pose_visualizer.set_visibility(False)

    def _debug_vis_callback(self, event):
        # check if robot is initialized
        # note: this is needed in-case the robot is de-initialized. we can't access the data
        if not self._asset.is_initialized:
            return
        # update the markers
        # -- goal pose
        self.goal_pose_visualizer.visualize(self.raw_actions[:, :3], self.raw_actions[:, 3:])
        
        # -- current body pose
        # visualize the end-effector pose
        ee_pose_w = self._asset.data.body_state_w[:, self._body_idx, :7]
        self.body_pose_visualizer.visualize(ee_pose_w[:, :3], ee_pose_w[:, 3:])

@StrainFlow (Collaborator) commented:

Is this issue resolved? If so I'll close it.

@zoctipus (Contributor, Author) commented Dec 4, 2024

A pull request has been submitted, which I believe resolves the issue and adds a unit test against it, but it hasn't been accepted yet.

@StrainFlow (Collaborator) commented:

Can you share which pull request? I'll link it.

@jtigue-bdai (Collaborator) commented:

#967
