
VADv2

End-to-End Vectorized Autonomous Driving via Probabilistic Planning

Demo

Framework

[Figure: Model Framework]

Open-loop Eval

nuPlan provides rule-based tags for each driving scenario. We test across various aspects and store the results for each with multiple metrics. For details, see test/scenario_test.

We include comparison results between VADv2 and VAD in this repository.

Visualization

Perception

  • False Positive Cars
  • False Positive Pedestrian
  • ADE Car
  • ADE Pedestrian
  • FDE Car
  • FDE Pedestrian

Planning

  • Planner 1S L2 displacement
  • Planner 2S L2 displacement
  • Planner 3S L2 displacement
  • Planner 1S Object Box Collision
  • Planner 2S Object Box Collision
  • Planner 3S Object Box Collision

Tools

To correctly obtain map evaluation results, you first need to extract the ground truth map information from the test set. You can use the following command:

python tools/convert_gt_map_json.py \
    --data_root Path/to/nuplan \
    --pkl_path Path/to/nuplan/test/pkl \
    --save_path eval_map.json

Where:

  • --data_root: Path to your nuplan dataset root directory
  • --pkl_path: Path to the test set PKL files
  • --save_path: Output path for the converted JSON file containing ground truth map information

Sample data

The test sample data can be found at data/sample_data/sample_ann.pkl. A very small subset of the nuPlan dataset, including camera and map data, is provided at data/sample_data/nuplan.
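
The annotation file is a plain pickle, so it can be inspected directly. Below is a minimal sketch; the internal field layout is not documented here, so print the top-level structure before relying on any field names.

import pickle

# Inspect the sample annotation file shipped with the repo.
# The internal structure is not documented here, so print the top-level
# layout before relying on any specific field names.
with open("data/sample_data/sample_ann.pkl", "rb") as f:
    ann = pickle.load(f)

print(type(ann))
if isinstance(ann, dict):
    print("Keys:", list(ann.keys()))
elif isinstance(ann, (list, tuple)) and ann:
    print("Entries:", len(ann), "| first entry type:", type(ann[0]))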

Run scripts

python tools/convert_gt_map_json.py \
    --data_root data/sample_data/nuplan/dataset \
    --pkl_path data/sample_data/sample_ann.pkl \
    --save_path data/sample_data/eval_map.json

The resulting sample JSON file can be found at data/sample_data/eval_map.json.
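
To sanity-check the converted file, you can load it with the standard json module. This is a minimal sketch that only assumes the output is valid JSON; its exact schema is not documented here.

import json

# Quick sanity check of the converted ground-truth map file.
# The exact schema of eval_map.json is not documented here, so inspect the
# top-level layout before using it in downstream evaluation.
with open("data/sample_data/eval_map.json") as f:
    gt_map = json.load(f)

print(type(gt_map))
if isinstance(gt_map, dict):
    print("Top-level keys:", list(gt_map.keys())[:10])
elif isinstance(gt_map, list):
    print("Number of entries:", len(gt_map))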

The tools/scenarios_compare_vis.py script compares evaluation metrics between two experiments across different scenarios and generates intuitive comparison charts; a simplified plotting sketch follows the list below. The charts include:

  • Raw metric values for both experiments (bar charts)
  • Performance improvement percentages (line graphs)
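
As a rough illustration only (not the tool's actual plotting code), a chart of this kind can be produced with matplotlib; the scenario names and metric values below are made-up placeholders.

import matplotlib.pyplot as plt
import numpy as np

# Made-up placeholder data for a lower-is-better metric (e.g. L2 distance).
scenarios = ["scenario_a", "scenario_b", "scenario_c"]
baseline = np.array([0.82, 0.95, 0.70])
experiment = np.array([0.75, 0.90, 0.72])
improvement = (baseline - experiment) / baseline * 100.0

x = np.arange(len(scenarios))
fig, ax1 = plt.subplots()

# Raw metric values for both experiments as grouped bars.
ax1.bar(x - 0.2, baseline, width=0.4, label="baseline")
ax1.bar(x + 0.2, experiment, width=0.4, label="experiment")
ax1.set_xticks(x)
ax1.set_xticklabels(scenarios)
ax1.set_ylabel("metric value")
ax1.legend(loc="upper left")

# Improvement percentages as a line on a secondary axis.
ax2 = ax1.twinx()
ax2.plot(x, improvement, color="black", marker="o")
ax2.set_ylabel("improvement (%)")

plt.tight_layout()
plt.savefig("example_comparison.png")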

Usage

Execute the following command in the project root directory:

python tools/scenarios_compare_vis.py \
    --dir_baseline='test/scenario_test/VAD_baseline' \
    --dir_exp='new_experiment_directory' \
    --save_image_dir='output_directory' \

Parameter Description

  • --dir_baseline: Directory containing the baseline experiment results (an evaluation_results.json per scenario; a loading sketch follows this list). Defaults to test/scenario_test/VAD_baseline_1013, which is tracked in git and contains the scenario evaluation results from the first model delivery.
  • --dir_exp: Directory containing the experiment results to compare (an evaluation_results.json per scenario)
  • --save_image_dir: Path where the charts are saved; defaults to test/scenario_compare
  • --eval_metrics: Names of the metrics to compare; if not specified, all supported metrics are generated
  • --keyword: Keyword used to extract experiment names from paths for convenient legend labeling; defaults to "VAD"
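
For reference, here is a minimal sketch of how such per-scenario results can be gathered programmatically, assuming each scenario subdirectory holds an evaluation_results.json (the actual schema used by the tool may differ):

import json
from pathlib import Path

def collect_scenario_results(result_dir: str) -> dict:
    """Gather per-scenario metrics from a results directory.

    Assumes each scenario subdirectory holds an evaluation_results.json;
    the real schema may be nested differently, so adapt as needed.
    """
    results = {}
    for json_path in Path(result_dir).glob("*/evaluation_results.json"):
        scenario = json_path.parent.name
        with open(json_path) as f:
            results[scenario] = json.load(f)
    return results

baseline = collect_scenario_results("test/scenario_test/VAD_baseline")
print(f"Loaded {len(baseline)} scenarios")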

Supported Evaluation Metrics

The tool supports comparison of the following metrics:

  • Trajectory prediction: ADE/FDE (vehicles/pedestrians)
  • Detection: Hit rate/False alarm rate (vehicles/pedestrians)
  • Planning: L2 distance (1s/2s/3s)
  • Collision: Object collision/Bounding box collision (1s/2s/3s)

Note: For certain metrics (such as L2 distance, collision metrics), lower values indicate better performance, and the improvement rate calculation is automatically inverted.
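
As a rough illustration of that sign convention (not necessarily the tool's exact formula), the improvement rate for a lower-is-better metric can be computed like this:

def improvement_rate(baseline: float, experiment: float, lower_is_better: bool) -> float:
    """Percentage improvement of `experiment` over `baseline`.

    For lower-is-better metrics (e.g. L2 distance, collision rate) a drop in
    value counts as a positive improvement, mirroring the inversion described
    above; the tool's exact formula may differ.
    """
    if baseline == 0:
        return 0.0
    change = (experiment - baseline) / abs(baseline) * 100.0
    return -change if lower_is_better else change

# Example: 1s L2 distance drops from 0.80 m to 0.60 m -> +25% improvement.
print(improvement_rate(0.80, 0.60, lower_is_better=True))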

Output images are saved to the specified save_image_dir with the filename format {metric_name}_comparison.png.

Coming Soon

  • Add a traffic light detector, along with comparison results.
  • Add a data converter and visualizer for the nuPlan dataset.

Reference