Demo
Model Framework
Nuplan offers rule-based tags for each driving scenario. We test across various aspects and store the results for each scenario with multiple metrics; for details, see `test/scenario_test`. We include comparison results between VADv2 and VAD in this repo.
To correctly obtain map evaluation results, you first need to extract the ground truth map information from the test set. You can use the following command:
```bash
python tools/convert_gt_map_json.py \
    --data_root Path/to/nuplan \
    --pkl_path Path/to/nuplan/test/pkl \
    --save_path eval_map.json
```
Where:
- `--data_root`: Path to your nuplan dataset root directory
- `--pkl_path`: Path to the test set PKL files
- `--save_path`: Output path for the converted JSON file containing the ground truth map information
Sample test data can be found at `data/sample_data/sample_ann.pkl`. A minimal subset of the Nuplan dataset, which includes camera data and map data, is provided at `data/sample_data/nuplan`.
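For a quick sanity check of the sample annotation file, a minimal sketch along these lines can be used (it only assumes the file is a standard Python pickle; the exact annotation schema is not described here):

```python
import pickle

# Load the sample annotation file shipped with the repo.
with open("data/sample_data/sample_ann.pkl", "rb") as f:
    ann = pickle.load(f)

# The annotation schema is repo-specific; just report the top-level structure.
print(type(ann))
if isinstance(ann, dict):
    print("keys:", list(ann.keys()))
elif isinstance(ann, (list, tuple)):
    print("entries:", len(ann))
```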
Run the script:
```bash
python lwad/convert_gt_map_json.py \
    --data_root data/sample_data/nuplan/dataset \
    --pkl_path data/sample_data/sample_ann.pkl \
    --save_path data/sample_data/eval_map.json
```
A sample output JSON file can be found at `data/sample_data/eval_map.json`.
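To verify the conversion, the generated file can be inspected with a short, schema-agnostic sketch like the following (it assumes nothing beyond the output being valid JSON):

```python
import json

# Load the converted ground-truth map file and report its top-level contents.
with open("data/sample_data/eval_map.json", "r") as f:
    gt_map = json.load(f)

print(type(gt_map))
if isinstance(gt_map, dict):
    print("top-level keys:", list(gt_map.keys())[:10])
elif isinstance(gt_map, list):
    print("number of entries:", len(gt_map))
```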
The `tools/scenarios_compare_vis.py` tool compares evaluation metrics between two experiments across different scenarios and generates intuitive comparison charts. The charts include:
- Raw metric values for both experiments (bar charts)
- Performance improvement percentages (line graphs)
Execute the following command in the project root directory:
```bash
python tools/scenarios_compare_vis.py \
    --dir_baseline='test/scenario_test/VAD_baseline' \
    --dir_exp='new_experiment_directory' \
    --save_image_dir='output_directory'
```
- `--dir_baseline`: Directory path for the baseline experiment results (containing an evaluation_results.json for each scenario). Defaults to `test/scenario_test/VAD_baseline_1013`, which is stored in git and contains the scenario evaluation results from the first model delivery.
- `--dir_exp`: Directory path for the experiment results to be compared (containing an evaluation_results.json for each scenario)
- `--save_image_dir`: Path to save the charts; defaults to `test/scenario_compare`
- `--eval_metrics`: Names of the metrics to compare; all supported metrics are generated if not specified
- `--keyword`: Keyword used to extract experiment names from the paths for convenient legend labeling; defaults to "VAD"
The tool supports comparison of the following metrics:
- Trajectory prediction: ADE/FDE (vehicles/pedestrians)
- Detection: Hit rate/False alarm rate (vehicles/pedestrians)
- Planning: L2 distance (1s/2s/3s)
- Collision: Object collision/Bounding box collision (1s/2s/3s)
Note: For certain metrics (such as L2 distance, collision metrics), lower values indicate better performance, and the improvement rate calculation is automatically inverted.
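As an illustration only (not the tool's actual implementation), the inverted improvement rate can be computed along these lines, where `lower_is_better` would hold for the L2 distance and collision metrics:

```python
def improvement_rate(baseline: float, experiment: float, lower_is_better: bool) -> float:
    """Relative improvement of `experiment` over `baseline`, in percent.

    For lower-is-better metrics (e.g. L2 distance, collision rate) a decrease
    counts as an improvement, so the sign of the change is flipped.
    """
    if baseline == 0:
        raise ValueError("baseline metric is zero; improvement rate is undefined")
    change = (experiment - baseline) / abs(baseline)
    if lower_is_better:
        change = -change
    return change * 100.0

# Example: planning L2@2s drops from 0.80 m to 0.72 m -> +10% improvement.
print(improvement_rate(0.80, 0.72, lower_is_better=True))
```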
Output images will be saved in the specified `save_image_dir` directory with the filename format `{metric_name}_comparison.png`.
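For reference, a rough, hypothetical matplotlib sketch of such a comparison figure (grouped bars for the two experiments plus a line for the per-scenario improvement) might look like this; the scenario names and values below are made up, and the real script may differ:

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical per-scenario values for one lower-is-better metric (planning L2@2s, meters).
scenarios = ["following_lane", "near_pedestrian", "turning"]
baseline = np.array([0.80, 1.10, 0.95])
experiment = np.array([0.72, 1.05, 0.90])
improvement = (baseline - experiment) / baseline * 100.0  # inverted: lower is better

x = np.arange(len(scenarios))
fig, ax = plt.subplots()
ax.bar(x - 0.2, baseline, width=0.4, label="VAD_baseline")
ax.bar(x + 0.2, experiment, width=0.4, label="new_experiment")
ax.set_xticks(x)
ax.set_xticklabels(scenarios, rotation=30, ha="right")
ax.set_ylabel("planning L2@2s (m)")

# Improvement percentage drawn as a line on a secondary axis.
ax2 = ax.twinx()
ax2.plot(x, improvement, color="tab:red", marker="o")
ax2.set_ylabel("improvement (%)")

ax.legend(loc="upper left")
fig.tight_layout()
fig.savefig("planning_l2_2s_comparison.png")
```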
- Add Traffic Light Detector, as well as comparison results.
- Add Data converter and visualizer for Nuplan Dataset