spin_thread is not working in rclpy. #743

Open
Tacha-S opened this issue Dec 17, 2024 · 3 comments
Labels: help wanted (Extra attention is needed)

Comments

Tacha-S commented Dec 17, 2024

Bug report

Required Info:

  • Operating System:
    • Ubuntu 24.04
  • Installation type:
    • binaries
  • Version or commit hash:
    • 7.1.2
  • DDS implementation:
    • CycloneDDS
  • Client library (if applicable):
    • rclpy

The spin thread is not working as intended.
As mentioned in #438, when a transform listener runs on the same node as the main process and that node is spun with a single-threaded executor, TF updates can be delayed (see the example code below).

To avoid this, you can either create a separate node for the transform listener or switch the main executor to a multi-threaded one (a sketch of the first workaround follows after the reproduction script below). While these workarounds exist, it concerns me that the spin_thread argument on its own is effectively meaningless.

I understand this may not be the best solution, but for example, in rclcpp, a separate node is internally created for the transform listener. Wouldn't it have been possible for rclpy to do the same?

Steps to reproduce the issue

import time

import rclpy
from rclpy.node import Node
from rclpy.time import Time
import tf2_ros


class TFListener(Node):

    def __init__(self, node_name):
        super().__init__(node_name)
        self.tf_buffer = tf2_ros.Buffer()
        self.tf_listener = tf2_ros.TransformListener(self.tf_buffer, self, spin_thread=True)

        self.timer = self.create_timer(0.01, self.timer_callback)
        self.timer2 = self.create_timer(0.1, self.timer2_callback)

    def timer_callback(self):
        # Simulate a long-running callback that blocks the single-threaded executor.
        time.sleep(1)

    def timer2_callback(self):
        try:
            transform = self.tf_buffer.lookup_transform('base_link', 'map', Time())
            self.get_logger().info(
                f'Latest transform stamp: {transform.header.stamp.sec + transform.header.stamp.nanosec * 1e-9}')
        except Exception as e:
            self.get_logger().error(str(e))


if __name__ == '__main__':
    rclpy.init()
    node = TFListener('test_tf_listener')
    rclpy.spin(node)
    rclpy.shutdown()
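
For reference, here is a minimal sketch of the first workaround (the helper node name is just illustrative): give the TransformListener its own internal node, so that the spin_thread executor spins that node instead of the main one. The timers from the example above would stay on the main node unchanged.

import rclpy
from rclpy.node import Node
import tf2_ros


class TFListener(Node):

    def __init__(self, node_name):
        super().__init__(node_name)
        self.tf_buffer = tf2_ros.Buffer()
        # Dedicated helper node (illustrative name); the spin_thread executor
        # spins this node, so slow callbacks on the main node no longer delay
        # the /tf and /tf_static subscriptions.
        self._tf_listener_node = rclpy.create_node(node_name + '_tf_listener')
        self.tf_listener = tf2_ros.TransformListener(
            self.tf_buffer, self._tf_listener_node, spin_thread=True)


if __name__ == '__main__':
    rclpy.init()
    node = TFListener('test_tf_listener')
    rclpy.spin(node)
    rclpy.shutdown()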


clalancette (Contributor) commented

I understand this may not be the best solution, but for example, in rclcpp, a separate node is internally created for the transform listener.

That's actually not true; the C++ code uses the same node. However, when spin_thread is true it does create a separate callback group for the /tf and /tf_static subscriptions and then puts them in a dedicated thread with its own executor:

if (spin_thread_) {
  // Create new callback group for message_subscription of tf and tf_static
  callback_group_ = node_base_interface_->create_callback_group(
    rclcpp::CallbackGroupType::MutuallyExclusive, false);
  if (!static_only) {
    // Duplicate to modify subscription options
    rclcpp::SubscriptionOptionsWithAllocator<AllocatorT> tf_options = options;
    tf_options.callback_group = callback_group_;
    message_subscription_tf_ = rclcpp::create_subscription<tf2_msgs::msg::TFMessage>(
      node_parameters, node_topics, "/tf", qos, std::move(cb), tf_options);
  }
  rclcpp::SubscriptionOptionsWithAllocator<AllocatorT> tf_static_options = static_options;
  tf_static_options.callback_group = callback_group_;
  message_subscription_tf_static_ = rclcpp::create_subscription<tf2_msgs::msg::TFMessage>(
    node_parameters,
    node_topics,
    "/tf_static",
    static_qos,
    std::move(static_cb),
    tf_static_options);
  // Create executor with dedicated thread to spin.
  executor_ = std::make_shared<rclcpp::executors::SingleThreadedExecutor>();
  executor_->add_callback_group(callback_group_, node_base_interface_);
  dedicated_listener_thread_ = std::make_unique<std::thread>([&]() {executor_->spin();});
  // Tell the buffer we have a dedicated thread to enable timeouts
  buffer_.setUsingDedicatedThread(true);
}

The tf2_ros_py code isn't as sophisticated as this; it just creates a thread with an executor and adds the node to it:

if spin_thread:
    self.executor = SingleThreadedExecutor()

    def run_func():
        self.executor.add_node(self.node)
        self.executor.spin()
        self.executor.remove_node(self.node)

    self.dedicated_listener_thread = Thread(target=run_func)
    self.dedicated_listener_thread.start()

That means that if there are other things in the node, they can compete with the /tf and /tf_static topics, since they are all in the same callback group.

Probably the correct solution here is to make the tf2_ros_py code more like the tf2_ros C++ code, and add a new callback group to the executor. However, I'll note that the rclpy executor code isn't nearly as sophisticated as the rclcpp executor code, so there is some possibility this will also require changes to rclpy.
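
To illustrate the direction at the user level (not a tf2_ros_py patch), here is a rough sketch using existing rclpy APIs; the node name, QoS depth, and authority string are illustrative. The /tf subscription gets its own MutuallyExclusiveCallbackGroup, and the node is spun by a MultiThreadedExecutor, since a SingleThreadedExecutor would still serialize all callbacks regardless of the group:

import rclpy
from rclpy.callback_groups import MutuallyExclusiveCallbackGroup
from rclpy.executors import MultiThreadedExecutor
from rclpy.node import Node
from tf2_msgs.msg import TFMessage
import tf2_ros


class TFListenerWithGroup(Node):

    def __init__(self):
        super().__init__('tf_listener_with_group')
        self.tf_buffer = tf2_ros.Buffer()
        # Separate callback group so /tf messages are not serialized behind
        # other (possibly slow) callbacks of this node.
        self.tf_group = MutuallyExclusiveCallbackGroup()
        self.tf_sub = self.create_subscription(
            TFMessage, '/tf', self.tf_callback, 100,
            callback_group=self.tf_group)

    def tf_callback(self, msg):
        for transform in msg.transforms:
            self.tf_buffer.set_transform(transform, 'default_authority')


if __name__ == '__main__':
    rclpy.init()
    node = TFListenerWithGroup()
    executor = MultiThreadedExecutor()
    executor.add_node(node)
    executor.spin()
    rclpy.shutdown()

A proper fix inside tf2_ros_py would additionally need a way to hand just that callback group to a dedicated executor/thread, which is where the rclpy executor limitations mentioned above come in.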

clalancette added the help wanted label Dec 17, 2024

Tacha-S commented Dec 18, 2024

I thought a new node was being created here, but is that not the case?

rclcpp::NodeOptions options;
// create a unique name for the node
// but specify its name in .arguments to override any __node passed on the command line.
// avoiding sstream because it's behavior can be overridden by external libraries.
// See this issue: https://github.com/ros2/geometry2/issues/540
char node_name[42];
snprintf(
  node_name, sizeof(node_name), "transform_listener_impl_%zx",
  reinterpret_cast<size_t>(this)
);
options.arguments({"--ros-args", "-r", "__node:=" + std::string(node_name)});
options.start_parameter_event_publisher(false);
options.start_parameter_services(false);
optional_default_node_ = rclcpp::Node::make_shared("_", options);
init(
  optional_default_node_->get_node_base_interface(),
  optional_default_node_->get_node_logging_interface(),
  optional_default_node_->get_node_parameters_interface(),
  optional_default_node_->get_node_topics_interface(),
  spin_thread, DynamicListenerQoS(), StaticListenerQoS(),
  detail::get_default_transform_listener_sub_options(),
  detail::get_default_transform_listener_static_sub_options(),
  static_only);


Tacha-S commented Dec 18, 2024

The tf2_ros_py code simply creates a thread with an executor and adds the node to it.

This is correct, but it seems that if you spin the same node with another executor, the executor that was supposed to run in a separate thread stops functioning.

In the following script, the rclpy.spin_once(node) line effectively overwrites the executor. Just having this line causes the TF thread to stop working, while removing it allows the thread to function correctly.

import rclpy
from rclpy.duration import Duration
from rclpy.node import Node
from rclpy.time import Time
import tf2_ros


class TFListener(Node):

    def __init__(self, node_name):
        super().__init__(node_name)
        self.tf_buffer = tf2_ros.Buffer()
        self.tf_listener = tf2_ros.TransformListener(self.tf_buffer, self, spin_thread=True)

    def run(self):
        while rclpy.ok():
            try:
                transform = self.tf_buffer.lookup_transform('base_link', 'map', Time())
                self.get_logger().info(
                    f'Latest transform stamp: {transform.header.stamp.sec + transform.header.stamp.nanosec * 1e-9}')
            except Exception as e:
                self.get_logger().error(str(e))
            self.get_clock().sleep_for(Duration(seconds=0.1))


if __name__ == '__main__':
    rclpy.init()
    node = TFListener('test_tf_listener')
    rclpy.spin_once(node)  # overwrite executor?
    node.run()
    rclpy.shutdown()

This might be because the rclpy executor works with whole nodes instead of callback groups, so the node ends up being taken over by the last executor it was added to.
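
A quick way to observe the takeover (assuming Node.executor reports the current owner, and that rclpy only allows a node to be added to one executor at a time):

import time

import rclpy
from rclpy.node import Node
import tf2_ros


if __name__ == '__main__':
    rclpy.init()
    node = Node('executor_ownership_check')
    tf_buffer = tf2_ros.Buffer()
    tf_listener = tf2_ros.TransformListener(tf_buffer, node, spin_thread=True)
    time.sleep(0.5)  # give the dedicated thread time to add the node

    # Expected: the listener's dedicated SingleThreadedExecutor.
    print('owner before spin_once:', node.executor)

    # spin_once() adds the node to the global default executor; since a node
    # can only belong to one executor, it is pulled out of the dedicated one.
    rclpy.spin_once(node, timeout_sec=0.1)
    print('owner after spin_once: ', node.executor)

    rclpy.shutdown()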
