Week 6 Line Follower
Copy all of the code below into your line_follower.py file. Then, review the explainer (below) to understand how this all works.
```python
#!/usr/bin/env python3

import rospy
import cv2
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image

from tb3 import Tb3Move

class LineFollower(object):
    def __init__(self):
        node_name = "line_follower"
        rospy.init_node(node_name, anonymous=True)
        self.rate = rospy.Rate(5)

        self.cvbridge_interface = CvBridge()
        self.img_sub = rospy.Subscriber("/camera/rgb/image_raw", Image, self.camera_cb)
        self.robot_controller = Tb3Move()

        self.ctrl_c = False
        rospy.on_shutdown(self.shutdown_ops)

    def shutdown_ops(self):
        self.robot_controller.stop()
        cv2.destroyAllWindows()
        self.ctrl_c = True

    def camera_cb(self, img_data):
        try:
            cv_img = self.cvbridge_interface.imgmsg_to_cv2(img_data, desired_encoding="bgr8")
        except CvBridgeError as e:
            print(e)
            return  # skip this frame if the conversion fails

        _, width, _ = cv_img.shape

        # crop to a thin, horizontally centred band of the image
        crop_width = 500
        crop_height = 40
        crop_y = 10
        crop_x = int((width / 2) - (crop_width / 2))
        cropped_img = cv_img[crop_y:crop_y + crop_height, crop_x:crop_x + crop_width]

        # isolate the colours associated with the line
        hsv_img = cv2.cvtColor(cropped_img, cv2.COLOR_BGR2HSV)
        lower = (100, 100, 100)
        upper = (255, 255, 255)
        mask = cv2.inRange(hsv_img, lower, upper)
        res = cv2.bitwise_and(cropped_img, cropped_img, mask=mask)

        # locate the centroid of the line within the cropped image
        m = cv2.moments(mask)
        cy = m['m10'] / (m['m00'] + 1e-5)
        cz = m['m01'] / (m['m00'] + 1e-5)

        cv2.circle(res, (int(cy), int(cz)), 10, (255, 0, 0), 2)
        cv2.imshow("filtered image", res)
        cv2.waitKey(1)

        # proportional control: positive error = line is to the left of centre
        y_error = (crop_width / 2) - cy

        kp = 1.0 / 50.0

        fwd_vel = 0.1
        ang_vel = kp * y_error

        print(f"Y-error = {y_error:.3f} pixels, ang_vel = {ang_vel:.3f} rad/s")

        self.robot_controller.set_move_cmd(fwd_vel, ang_vel)
        self.robot_controller.{BLANK} # publish the velocity command

    def main(self):
        while not self.ctrl_c:
            self.rate.sleep()

if __name__ == '__main__':
    lf_instance = LineFollower()
    try:
        lf_instance.main()
    except rospy.ROSInterruptException:
        pass
```
FILL IN THE {BLANK}! There is a method within the Tb3Move() class which allows us to publish a velocity command to the /cmd_vel topic. What is it? (Have a look at the tb3.py source code if you need a reminder.)
A lot of what is in here will already be familiar to you from the previous exercises, so we won't go into too much detail on all of it again. The main thing to notice is that we have once again built a class structure around the node, as we have done in previous weeks of this course.
In this case we want our robot to follow the line that is printed on the floor. We do this by applying the same image processing steps as in the previous exercises to isolate the colours associated with the line and calculate its location in the robot's viewpoint.
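If you want to see what those processing steps look like on their own, outside of ROS, here is a minimal sketch that applies the same crop, HSV threshold and moments calculation to a single image. The file name and crop indices here are placeholders for illustration only:

```python
import cv2

# load a test image (placeholder file name) and crop out a region of interest
img = cv2.imread("camera_snapshot.jpg")
cropped = img[10:50, 710:1210]

# convert to HSV and keep only the pixels inside the colour range
hsv = cv2.cvtColor(cropped, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (100, 100, 100), (255, 255, 255))

# use image moments to find the centroid of the masked (line) pixels
m = cv2.moments(mask)
cy = m['m10'] / (m['m00'] + 1e-5)  # horizontal centroid (pixels, crop frame)
cz = m['m01'] / (m['m00'] + 1e-5)  # vertical centroid (pixels, crop frame)
print(f"line centroid: ({cy:.1f}, {cz:.1f})")
```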
The next step is to use the centroid component cy to determine how far the robot needs to turn in order to keep the line in the centre of its vision:
```python
y_error = (crop_width / 2) - cy

kp = 1.0 / 50.0

fwd_vel = 0.1
ang_vel = kp * y_error

print(f"Y-error = {y_error:.3f} pixels, ang_vel = {ang_vel:.3f} rad/s")

self.robot_controller.set_move_cmd(fwd_vel, ang_vel)
self.robot_controller.publish()
```
We are implementing proportional control here, which (if you are a COM2009/COM3009 student) you should recognise from Prof Moore's lecture on PID Control (Lecture 6). Ideally, we want the centre of the line on the floor to sit in the centre of the robot's viewpoint at all times: this is our target position, which corresponds to the centre of the cropped image (crop_width / 2), since the crop is centred horizontally. The actual position is where the line on the floor currently is, i.e. the cy centroid component. The position error is then the difference between the target and the actual position (positive when the line is to the left of centre, negative when it is to the right):

```python
y_error = (crop_width / 2) - cy
```
The only way we can reduce this error is by changing the robot's angular velocity. The robot also needs to keep moving forwards, so we set a fixed forward velocity of 0.1 m/s:

```python
fwd_vel = 0.1
```
In order to correct for the position error, we multiply it by a proportional gain (kp) to obtain an angular velocity that acts to reduce the error: a positive error (line to the left of centre) produces a positive, anticlockwise turn, and a negative error produces a clockwise turn, so the robot always steers back towards the line:

```python
ang_vel = kp * y_error
```
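For example, plugging in the crop width and gain used above (a quick sanity check, not part of the node itself): if the line's centroid sits 25 pixels to the left of the crop centre, the robot gets a modest anticlockwise correction:

```python
crop_width = 500
kp = 1.0 / 50.0

cy = 225.0                       # centroid 25 px to the left of the crop centre (250)
y_error = (crop_width / 2) - cy  # = 25.0 pixels
ang_vel = kp * y_error           # = 0.5 rad/s (anticlockwise, back towards the line)
print(y_error, ang_vel)          # 25.0 0.5
```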
We then simply set these two velocities in our robot_controller object and publish them to the /cmd_vel topic using methods from the Tb3Move() class:

```python
self.robot_controller.set_move_cmd(fwd_vel, ang_vel)
self.robot_controller.{BLANK} # publish the velocity command
```
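To picture what a helper class like this is doing under the hood, here is a minimal sketch of one possible implementation. This is not the actual tb3.py source, and the publishing method is deliberately given a made-up name (you still need to look up the real one); it simply wraps a rospy publisher on the /cmd_vel topic:

```python
import rospy
from geometry_msgs.msg import Twist

class VelocityCommander(object):
    """Illustrative stand-in for a Tb3Move-style helper (not the real tb3.py)."""

    def __init__(self):
        self.publisher = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        self.vel_cmd = Twist()

    def set_move_cmd(self, linear=0.0, angular=0.0):
        # store the requested forward (m/s) and angular (rad/s) velocities
        self.vel_cmd.linear.x = linear
        self.vel_cmd.angular.z = angular

    def publish_cmd(self):
        # made-up method name: send the stored velocity command to /cmd_vel
        self.publisher.publish(self.vel_cmd)

    def stop(self):
        # zero both velocities and publish immediately
        self.set_move_cmd(0.0, 0.0)
        self.publish_cmd()
```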
We're doing all this inside the camera callback function, so that this process will repeat every time a new image is obtained. If the proportional gain is set appropriately, this should ensure that our position error (y_error) is always kept to a minimum, so that the robot follows the line!
COM2009/3009 Robotics Lab Course
Updated for the 2021-22 Academic Year
Dr Tom Howard | Multidisciplinary Engineering Education (MEE) | The University of Sheffield
The documentation within this Wiki is licensed under Creative Commons License CC BY-NC:
You are free to distribute, remix, adapt, and build upon this work (for non-commercial purposes only) as long as credit is given to the original author.