This documentation was translated by ChatGPT, a language model developed by OpenAI.
Version preview-1.0
This project is based on the open-source simulator Carla and aims to help users collect heterogeneous multi-viewpoint depth datasets for autonomous driving. It supports several camera models, such as panoramic, fisheye, and pinhole cameras, and captures the corresponding depth images for each. The code was written against the API documentation and examples provided by Carla.
This project includes the following features:
- Collects RGB images and corresponding depth images from multiple camera models.
- Captures continuous frames from synchronized cameras and records their poses, timestamps, and other metadata.
- Provides tools to convert cubemap images into panoramic and fisheye images.
- Provides a simple script for recording Carla scenes.
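The cubemap-to-panorama conversion mentioned above comes down to mapping each panorama (equirectangular) pixel to a 3D view ray and picking the cube face that ray passes through. The sketch below shows that core mapping in pure Python; the function names and face labels are illustrative, not the project's actual API:

```python
import math

def equirect_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit 3D view ray.

    Convention (an assumption for this sketch): u in [0, width) spans
    longitude [-pi, pi); v in [0, height) spans latitude from +pi/2
    (top row, looking up) down to -pi/2. Axes: x right, y up, z forward.
    """
    lon = (u + 0.5) / width * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v + 0.5) / height * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return x, y, z

def ray_to_face(x, y, z):
    """Pick the cubemap face a ray exits through: the face whose axis
    has the largest absolute component of the ray."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= ax and ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"
```

A full converter would then project the ray onto the chosen face to get a sampling coordinate in that face's image and interpolate; the same ray lookup works for both the RGB and the depth cubemaps.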
Before you start using this project, it is strongly recommended that you read Carla's official documentation and familiarize yourself with its API; this will make it much easier to get started.
Additionally, this project uses PyTorch for its image post-processing steps, so please install PyTorch if you have not already.
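Generating fisheye images likewise requires a projection model that maps each fisheye pixel to a view ray before sampling from the cubemap. The project's exact camera model is not specified here; the sketch below uses the common equidistant model (r = f·θ) in pure Python as an illustration, and all names and conventions in it are assumptions:

```python
import math

def fisheye_pixel_to_ray(u, v, width, height, fov_deg=180.0):
    """Map a fisheye pixel to a unit view ray under the equidistant
    model r = f * theta (an assumption; the project's actual model
    may differ). Camera looks down +z; x right, y down.

    Returns None for pixels outside the image circle.
    """
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    r_max = min(cx, cy)              # image-circle radius at fov/2
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)
    if r > r_max:
        return None                  # outside the fisheye circle
    theta = (r / r_max) * math.radians(fov_deg) / 2.0  # angle off axis
    phi = math.atan2(dy, dx)                           # image-plane angle
    sin_t = math.sin(theta)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), math.cos(theta))
```

The resulting ray can be fed into the same cubemap face lookup used for the panorama, so one set of rendered cube faces serves both output formats.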
Planned for future versions:
- Add depth images for the fisheye camera models.
- Add detailed usage documentation for this project.
Initial preview-1.0 release. This version does not yet have a detailed user manual and has not been tested by other users.