Self-Driving Car with Pixy Camera, ROS2, Computer Vision, and Machine Learning

This project implements an autonomous self-driving car system utilizing a Pixy Camera for vision-based navigation and control. The car autonomously traverses a track containing various obstacles such as under- and over-bridges, zig-zag paths, and speed bumps. The project integrates ROS2 for control and Gazebo for simulation in an Ubuntu 20.04 environment. Additionally, the system uses Machine Learning (ML) and Computer Vision (CV) techniques for dynamic environment perception and decision-making.


Project Overview

The primary objective of this project is to autonomously navigate a car along a track, using the Pixy Camera for real-time vision processing. The track has several challenges, including:

  • Over-bridge and under-bridge navigation
  • Zig-zag turns
  • Speed bumps

The car relies on a combination of computer vision to identify the track and machine learning models to handle various complex scenarios like traffic signs and dynamic road conditions.


Final Track Path

[Images: final track layout, Track Image 1, Track Image 2]


System Architecture Overview

This system is composed of several key layers (a minimal wiring sketch follows the list):

  1. Computer Vision (CV) Layer:
    This layer is responsible for processing images captured by the Pixy Camera. It detects the track's boundaries, traffic signs, and other critical features.

  2. Machine Learning (ML) Layer:
    The ML layer is used to process traffic sign data and adjust the car’s behavior based on real-time road signs and environmental changes.

  3. Line Following and Control Layer:
    This layer interprets the data from the vision system and controls the car's steering, speed, and movement decisions.
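
As a rough illustration of how these layers could be wired together in ROS2, the sketch below shows a single node that consumes camera frames and publishes drive commands. This is a minimal sketch, not the repository's actual interface; the node and topic names (/pixy/image_raw, /cmd_vel) are assumptions.

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image
    from geometry_msgs.msg import Twist

    class ControlLayer(Node):
        """Consumes camera frames, runs the CV/ML layers, publishes drive commands."""

        def __init__(self):
            super().__init__('control_layer')
            # Vision input in, drive commands out (topic names are assumptions).
            self.create_subscription(Image, '/pixy/image_raw', self.on_image, 10)
            self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)

        def on_image(self, msg):
            cmd = Twist()
            # ... CV layer: extract track vectors; ML layer: classify signs ...
            # cmd.linear.x carries speed, cmd.angular.z carries steering.
            self.cmd_pub.publish(cmd)

    def main():
        rclpy.init()
        rclpy.spin(ControlLayer())
        rclpy.shutdown()

    if __name__ == '__main__':
        main()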


Installation Instructions

To run this project on your system, follow the steps below:

  1. Install ROS2 and Required Packages
    Follow the ROS2 installation guide for Ubuntu 20.04 and install the other required dependencies.

  2. Clone the Repository
    Clone this GitHub repository into your workspace:

    git clone https://github.com/Pranay-Pandey/self-driving-car.git
  3. Install Dependencies
    Install all the required packages as outlined in the NXP Gitbook.

  4. Replace the Script Files
    Replace the following files in your ROS2 workspace with the ones from this repository:

    • aim_line_follow.py → ros2ws/src/aim_line_follow/aim_line_follow.py
    • nxp_track_vision.py → ros2ws/src/nxp_cup_vision/nxp_cup_vision.py
  5. Launch the Simulation
    To start the simulation, use the following command:

    ros2 launch sim_gazebo_bringup sim_gazebo.launch.py

Technical Details and Algorithm Breakdown

Computer Vision (CV) Layer

The CV layer processes the input from the Pixy Camera to detect road boundaries and track features. It extracts crucial data points from images and sends these to the steering and control system.

Track Boundary Detection (nxp_track_vision.py)

The main script for track boundary detection is nxp_track_vision.py. This script processes the image captured by the Pixy Camera and extracts two vectors representing the left and right edges of the track. The algorithm works as follows:

  1. Image Preprocessing:
    The image captured by the Pixy Camera is preprocessed to enhance features relevant to road detection, such as color segmentation and edge detection.

  2. Vector Extraction:
    Two vectors are calculated to represent the track’s left and right boundaries. This is done using geometric methods that identify the positions of white pixels (representing the road) in the image.

  3. Track Boundary Adjustment:

    • If both vectors are on the same side of the track center, the car adjusts its steering to move away from that side.
    • The system also checks for the car’s position relative to over-bridges, adjusting the steering mechanism when crossing these areas using the calculated slope of the detected vectors.
  4. Vector Steering Adjustment:
    The steering angle is derived from the average slope of the two vectors, ensuring the car follows the track accurately. The system also adjusts for changes in the Y-values of the vectors, which is particularly useful for steering over bridges (see the sketch after this list).
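
The sketch below illustrates one way the vector extraction and slope averaging could be implemented with OpenCV and NumPy. It is a minimal sketch of the technique described above, not the code from nxp_track_vision.py; the binary threshold, the per-half line fit, and the minimum pixel count are assumptions.

    import cv2
    import numpy as np

    def extract_track_vectors(frame):
        """Fit one line to the white (road) pixels in each half of the frame.

        Returns a (left, right) pair, where each entry is (slope, intercept)
        for the fit x = slope * y + intercept, or None if that side has too
        few road pixels.
        """
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

        h, w = mask.shape
        vectors = []
        for x0, x1 in ((0, w // 2), (w // 2, w)):    # left half, right half
            ys, xs = np.nonzero(mask[:, x0:x1])
            if len(xs) < 50:                          # too little evidence
                vectors.append(None)
                continue
            # Fit x = m*y + c so near-vertical boundaries stay well conditioned.
            m, c = np.polyfit(ys, xs + x0, 1)
            vectors.append((m, c))
        return tuple(vectors)

    def steering_from_vectors(left, right):
        """Average the two boundary slopes into a single steering value."""
        slopes = [v[0] for v in (left, right) if v is not None]
        return float(np.mean(slopes)) if slopes else 0.0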

Traffic Sign Detection and Processing

Traffic sign recognition is an essential part of this project. Using machine learning (ML) techniques, the system can identify and interpret road signs, adjusting the car’s behavior accordingly.

  • Traffic Sign Dataset:
    A custom-trained model is used to detect traffic signs. The model is trained on a set of predefined signs (e.g., stop signs, speed limits).

  • Sign Processing:
    The car identifies the presence of traffic signs by analyzing the pixels in the captured image (see the sketch below). If a sign is detected, the relevant action (e.g., stop or slow down) is triggered.
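
As a rough illustration of this pixel-analysis step, the sketch below isolates a candidate sign region by color masking with OpenCV, producing a crop that could be handed to the classifier described in the next section. The red-hue thresholds and minimum area are illustrative assumptions, not values from the project.

    import cv2

    def find_sign_candidate(frame):
        """Return the bounding box (x, y, w, h) of the largest red blob, or None."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Red wraps around the hue axis, so combine two hue ranges.
        mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
               cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        biggest = max(contours, key=cv2.contourArea)
        if cv2.contourArea(biggest) < 200:   # too small to be a nearby sign
            return None
        return cv2.boundingRect(biggest)     # crop this region for the classifier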


Machine Learning (ML) Layer

The ML layer is responsible for enhancing the car’s decision-making capabilities, specifically related to traffic sign recognition and adaptive behavior.

Traffic Sign Recognition

  1. Model Overview:
    The system uses a Convolutional Neural Network (CNN) trained to recognize various traffic signs. The training dataset includes images of common road signs, such as Stop, Speed Limit, Pedestrian Crossing, and more.

  2. Sign Detection Process:
    Once the Pixy Camera captures an image, the ML model detects and classifies any road signs present, and the car adjusts its behavior based on the identified sign (a minimal model sketch follows this list). For instance:

    • Stop Sign: The car will stop if it detects a stop sign.
    • Speed Limit Sign: The car will adjust its speed according to the sign's speed limit.
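
The sketch below shows a small CNN classifier of the kind described above, written in PyTorch. The layer sizes, 64x64 input, and four-class output are assumptions for illustration; the project's actual trained model is not reproduced here.

    import torch
    import torch.nn as nn

    class SignClassifier(nn.Module):
        """A small CNN for sign classification (illustrative architecture)."""

        def __init__(self, num_classes=4):   # e.g. stop, speed limit, crossing, none
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # 64x64 input

        def forward(self, x):                # x: (N, 3, 64, 64)
            return self.classifier(self.features(x).flatten(1))

    # Inference: pick the most likely sign class for a cropped camera patch.
    model = SignClassifier()
    patch = torch.rand(1, 3, 64, 64)          # stand-in for a Pixy image crop
    sign_id = model(patch).argmax(dim=1).item()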

Adaptive Decision-Making with ML

In addition to traffic sign recognition, the ML layer adjusts the car’s behavior in response to complex road conditions and obstacles, making the system more adaptive and responsive than a purely rule-based line follower.


Line Following and Control Layer

The control layer processes the output from the computer vision and machine learning layers to control the car’s steering and speed.

  1. Steering Control (aim_line_follow.py):
    The aim_line_follow.py script receives steering commands from the vision system, specifically from the left and right pixel density calculations. The system works by:

    • Calculating the left and right pixel densities within a specific image window.
    • Determining the car’s steering angle based on these densities to ensure smooth, accurate turns.
  2. PID Control:
    A PID controller is applied to the left and right pixel density values to prevent the car from making rapid or erratic movements (a minimal controller sketch follows this list). The steering adjustment is computed as:

    self.steer_vector.z = self.steer_vector.z + (pid(L) - pid(R))/2.0
  3. Speed Adjustment:
    To ensure the car moves at an optimal speed, the system also adjusts the velocity in response to varying terrain conditions (e.g., uphill or downhill):

    self.speed_vector.x = self.speed_vector.x + kp * error

    Here, error is the difference between the desired and actual car velocities, and kp is a proportional gain that scales the speed correction.

  4. Over-Bridge Handling:
    The system uses a state variable (state == 1) to detect when the car is on the bridge. During bridge crossings, the steering and speed are adjusted to maintain stability.
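
The sketch below puts the two formulas above into runnable form: a textbook PID controller applied per side for steering, plus the proportional speed update. The gains and time step are illustrative assumptions, not the project's tuned values.

    class PID:
        """Textbook PID controller."""

        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def __call__(self, error, dt=0.05):
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # One controller per side, mirroring pid(L) and pid(R) in the formula above.
    pid_left = PID(kp=0.8, ki=0.0, kd=0.1)
    pid_right = PID(kp=0.8, ki=0.0, kd=0.1)

    def update_steering(steer_z, density_left, density_right):
        """steer_z += (pid(L) - pid(R)) / 2, as in the steering formula above."""
        return steer_z + (pid_left(density_left) - pid_right(density_right)) / 2.0

    def update_speed(speed_x, desired_velocity, actual_velocity, kp=0.5):
        """speed_x += kp * error, where error = desired - actual velocity."""
        return speed_x + kp * (desired_velocity - actual_velocity)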


Video Demonstrations

  • Final Track Demo:
    Watch the car navigating the final track:

    Final Track Demo

  • Car Navigation Video:
    Watch a full demonstration of the car navigating the track:

    Car Navigation Demo


Conclusion

This project integrates Computer Vision, Machine Learning, and ROS2 to create an autonomous self-driving car that follows a track with bridges, zig-zag turns, and speed bumps while detecting and responding to traffic signs.
