[doc] deadlinks fix (PaddlePaddle#6434)
pkhk-1 authored Jul 14, 2022
1 parent 4a8fe37 commit 99f891b
Showing 13 changed files with 19 additions and 21 deletions.
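Most hunks in this commit repoint relative Markdown links at files that moved (e.g. `docs/tutorials/PrepareDetDataSet.md` → `docs/tutorials/data/PrepareDetDataSet.md`) or add a missing `#` or `configs/` component. As an illustration only — this script is hypothetical, not part of the commit or of PaddleDetection's tooling — dead relative links of that kind can be surfaced with a scan like:

```python
import re
from pathlib import Path

# Matches [label](target) where target is not a bare in-page anchor;
# an optional #fragment after the path is ignored.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#\s]+)(#[^)]*)?\)")

def find_dead_relative_links(md_path):
    """Return relative link targets in md_path that do not exist on disk."""
    md_path = Path(md_path)
    dead = []
    for match in LINK_RE.finditer(md_path.read_text(encoding="utf-8")):
        target = match.group(1)
        if target.startswith(("http://", "https://")):
            continue  # external URLs are out of scope for this sketch
        if not (md_path.parent / target).exists():
            dead.append(target)
    return dead
```

Run over the files touched below, a check of this shape would have flagged targets such as `../tutorials/PrepareDetDataSet.md` before the fix.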
2 changes: 1 addition & 1 deletion configs/dota/README.md
@@ -53,7 +53,7 @@ DOTA数据集中总共有2806张图像,其中1411张图像作为训练集,45
- PaddlePaddle >= 2.1.1
- GCC == 8.2

-推荐使用docker镜像[paddle:2.1.1-gpu-cuda10.1-cudnn7](registry.baidubce.com/paddlepaddle/paddle:2.1.1-gpu-cuda10.1-cudnn7)
+推荐使用docker镜像 paddle:2.1.1-gpu-cuda10.1-cudnn7。

执行如下命令下载镜像并启动容器:
```
2 changes: 1 addition & 1 deletion configs/dota/README_en.md
@@ -64,7 +64,7 @@ To use the rotating frame IOU to calculate the OP, the following conditions must
- PaddlePaddle >= 2.1.1
- GCC == 8.2

-Docker images are recommended[paddle:2.1.1-gpu-cuda10.1-cudnn7](registry.baidubce.com/paddlepaddle/paddle:2.1.1-gpu-cuda10.1-cudnn7)
+Docker images are recommended paddle:2.1.1-gpu-cuda10.1-cudnn7。

Run the following command to download the image and start the container:
```
2 changes: 1 addition & 1 deletion configs/mot/README_en.md
@@ -79,7 +79,7 @@ PaddleDetection implement [JDE](https://github.com/Zhongdao/Towards-Realtime-MOT

**Notes:**
- Multi-Object Tracking(MOT) datasets are always used for single category tracking. DeepSORT, JDE and FairMOT are single category MOT models. 'MIX' dataset and it's sub datasets are also single category pedestrian tracking datasets. It can be considered that there are additional IDs ground truth for detection datasets.
-- In order to train the feature models of more scenes, more datasets are also processed into the same format as the MIX dataset. PaddleDetection Team also provides feature datasets and models of [vehicle tracking](vehicle/readme.md), [head tracking](headtracking21/readme.md) and more general [pedestrian tracking](pedestrian/readme.md). User defined datasets can also be prepared by referring to data preparation [doc](../../docs/tutorials/PrepareMOTDataSet.md).
+- In order to train the feature models of more scenes, more datasets are also processed into the same format as the MIX dataset. PaddleDetection Team also provides feature datasets and models of [vehicle tracking](vehicle/README.md), [head tracking](headtracking21/README.md) and more general [pedestrian tracking](pedestrian/README.md). User defined datasets can also be prepared by referring to data preparation [doc](../../docs/tutorials/data/PrepareMOTDataSet.md).
- The multipe category MOT model is [MCFairMOT] (mcfairmot/readme_cn.md), and the multi category dataset is the integrated version of VisDrone dataset. Please refer to the doc of [MCFairMOT](mcfairmot/README.md).
- The Multi-Target Multi-Camera Tracking (MTMCT) model is [AIC21 MTMCT](https://www.aicitychallenge.org)(CityFlow) Multi-Camera Vehicle Tracking dataset. The dataset and model can refer to the doc of [MTMCT](mtmct/README.md)

2 changes: 1 addition & 1 deletion configs/mot/deepsort/README_cn.md
@@ -6,7 +6,7 @@
- [简介](#简介)
- [模型库](#模型库)
- [快速开始](#快速开始)
-- [适配其他检测器](适配其他检测器)
+- [适配其他检测器](#适配其他检测器)
- [引用](#引用)

## 简介
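The `configs/mot/deepsort/README_cn.md` hunk above restores the `#` that makes a table-of-contents entry point at the in-page heading rather than at a nonexistent file. GitHub derives a heading's anchor from its text; roughly as below — an approximation for illustration, not GitHub's actual algorithm:

```python
import re

def github_anchor(heading: str) -> str:
    """Approximate GitHub's auto-generated anchor for a Markdown heading.

    Lowercase, punctuation dropped, spaces turned into hyphens; word
    characters (including CJK) are kept, which is why a Chinese heading
    such as 适配其他检测器 yields the anchor #适配其他检测器.
    """
    slug = heading.strip().lower()
    slug = re.sub(r"[^\w\- ]", "", slug)  # drop punctuation
    return "#" + slug.replace(" ", "-")   # spaces become hyphens
```

Under this approximation, `(#适配其他检测器)` resolves to the `## 适配其他检测器` heading, while the pre-fix `(适配其他检测器)` is interpreted as a relative file path.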
6 changes: 3 additions & 3 deletions configs/mot/mcfairmot/README.md
@@ -48,7 +48,7 @@ PP-tracking provides an AI studio public project tutorial. Please refer to this
| Model | Compression Strategy | Prediction Delay(T4) |Prediction Delay(V100)| Model Configuration File |Compression Algorithm Configuration File |
| :--------------| :------- | :------: | :----: | :----: | :----: |
| DLA-34 | baseline | 41.3 | 21.9 |[Configuration File](./mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml)| - |
-| DLA-34 | off-line quantization | 37.8 | 21.2 |[Configuration File](./mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml)|[Configuration File](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.3/configs/slim/post_quant/mcfairmot_ptq.yml)|
+| DLA-34 | off-line quantization | 37.8 | 21.2 |[Configuration File](./mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml)|[Configuration File](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/slim/post_quant/mcfairmot_ptq.yml)|


## Getting Start
@@ -122,8 +122,8 @@ CUDA_VISIBLE_DEVICES=0 python3.7 tools/post_quant.py -c configs/mot/mcfairmot/mc
@ARTICLE{9573394,
author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin},
-journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
-title={Detection and Tracking Meet Drones Challenge},
+journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
+title={Detection and Tracking Meet Drones Challenge},
year={2021},
volume={},
number={},
6 changes: 3 additions & 3 deletions configs/mot/mcfairmot/README_cn.md
@@ -47,7 +47,7 @@ PP-Tracking 提供了AI Studio公开项目案例,教程请参考[PP-Tracking
| 骨干网络 | 压缩策略 | 预测时延(T4) |预测时延(V100)| 配置文件 |压缩算法配置文件 |
| :--------------| :------- | :------: | :----: | :----: | :----: |
| DLA-34 | baseline | 41.3 | 21.9 |[配置文件](./mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml)| - |
-| DLA-34 | 离线量化 | 37.8 | 21.2 |[配置文件](./mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml)|[配置文件](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.3/configs/slim/post_quant/mcfairmot_ptq.yml)|
+| DLA-34 | 离线量化 | 37.8 | 21.2 |[配置文件](./mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml)|[配置文件](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/slim/post_quant/mcfairmot_ptq.yml)|

## 快速开始

@@ -119,8 +119,8 @@ CUDA_VISIBLE_DEVICES=0 python3.7 tools/post_quant.py -c configs/mot/mcfairmot/mc
@ARTICLE{9573394,
author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin},
-journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
-title={Detection and Tracking Meet Drones Challenge},
+journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
+title={Detection and Tracking Meet Drones Challenge},
year={2021},
volume={},
number={},
@@ -20,7 +20,7 @@ PP-ShiTu图像识别任务中,训练主体检测模型时主要用到了以下
| LogoDet-3k | 155k | 155k | Logo检测 | [地址](https://github.com/Wangjing1551/LogoDet-3K-Dataset) |
| RPC | 54k | 54k | 商品检测 | [地址](https://rpc-dataset.github.io/) |

-在实际训练的过程中,将所有数据集混合在一起。由于是主体检测,这里将所有标注出的检测框对应的类别都修改为 `前景` 的类别,最终融合的数据集中只包含 1 个类别,即前景,数据集定义配置可以参考[mainbody_detection.yml](./mainbody_detection.yml)
+在实际训练的过程中,将所有数据集混合在一起。由于是主体检测,这里将所有标注出的检测框对应的类别都修改为 `前景` 的类别,最终融合的数据集中只包含 1 个类别,即前景,数据集定义配置可以参考[picodet_lcnet_x2_5_640_mainbody.yml](./picodet_lcnet_x2_5_640_mainbody.yml)


### 1.2 模型库
2 changes: 1 addition & 1 deletion deploy/pptracking/README_cn.md
@@ -36,7 +36,7 @@ PP-Tracking 提供了简洁的GUI可视化界面,教程请参考[PP-Tracking
PP-Tracking 支持单镜头跟踪(MOT)和跨镜头跟踪(MTMCT)两种模式。
- 单镜头跟踪同时支持**FairMOT**和**DeepSORT**两种多目标跟踪算法,跨镜头跟踪只支持**DeepSORT**算法。
- 单镜头跟踪的功能包括行人跟踪、车辆跟踪、多类别跟踪、小目标跟踪以及流量统计,模型主要是基于FairMOT进行优化,实现了实时跟踪的效果,同时基于不同应用场景提供了针对性的预训练模型。
-- DeepSORT算法方案(包括跨镜头跟踪用到的DeepSORT),选用的检测器是PaddleDetection自研的高性能检测模型[PP-YOLOv2](../../ppyolo/)和轻量级特色检测模型[PP-PicoDet](../../picodet/),选用的ReID模型是PaddleClas自研的超轻量骨干网络模型[PP-LCNet](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/docs/zh_CN/models/PP-LCNet.md)
+- DeepSORT算法方案(包括跨镜头跟踪用到的DeepSORT),选用的检测器是PaddleDetection自研的高性能检测模型[PP-YOLOv2](../../configs/ppyolo/)和轻量级特色检测模型[PP-PicoDet](../../configs/picodet/),选用的ReID模型是PaddleClas自研的超轻量骨干网络模型[PP-LCNet](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/docs/zh_CN/models/PP-LCNet.md)

PP-Tracking中提供的多场景预训练模型以及导出后的预测部署模型如下:

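The `deploy/pptracking/README_cn.md` hunk above inserts the missing `configs/` path component: a relative link resolves against the directory of the file that contains it, so `../../ppyolo/` escaped to a nonexistent top-level `ppyolo/`. A quick sketch of that resolution (the helper is an illustrative assumption, not repo code):

```python
import posixpath
from pathlib import PurePosixPath

def resolve_link(md_file: str, target: str) -> str:
    """Resolve a relative Markdown link against the file containing it."""
    base = PurePosixPath(md_file).parent
    return posixpath.normpath(str(base / target))

# The old link left the configs tree entirely:
broken = resolve_link("deploy/pptracking/README_cn.md", "../../ppyolo/")        # -> "ppyolo"
# The fixed link lands on the real config directory:
fixed = resolve_link("deploy/pptracking/README_cn.md", "../../configs/ppyolo/")  # -> "configs/ppyolo"
```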
4 changes: 2 additions & 2 deletions docs/advanced_tutorials/READER.md
@@ -90,7 +90,7 @@ COCO数据集目前分为COCO2014和COCO2017,主要由json文件和image文件
│ │ ...
```

-`source/coco.py`中定义并注册了`COCODataSet`数据集类,其继承自`DetDataSet`,并实现了parse_dataset方法,调用[COCO API](https://github.com/cocodataset/cocoapi)加载并解析COCO格式数据源`roidbs`和`cname2cid`,具体可参见`source/coco.py`源码。将其他数据集转换成COCO格式可以参考[用户数据转成COCO数据](../tutorials/PrepareDetDataSet.md#用户数据转成COCO数据)
+`source/coco.py`中定义并注册了`COCODataSet`数据集类,其继承自`DetDataSet`,并实现了parse_dataset方法,调用[COCO API](https://github.com/cocodataset/cocoapi)加载并解析COCO格式数据源`roidbs`和`cname2cid`,具体可参见`source/coco.py`源码。将其他数据集转换成COCO格式可以参考[用户数据转成COCO数据](../tutorials/data/PrepareDetDataSet.md#用户数据转成COCO数据)

#### 2.2Pascal VOC数据集
该数据集目前分为VOC2007和VOC2012,主要由xml文件和image文件组成,其组织结构如下所示:
@@ -118,7 +118,7 @@ COCO数据集目前分为COCO2014和COCO2017,主要由json文件和image文件
│ ├── ImageSets
│ │ ...
```
-`source/voc.py`中定义并注册了`VOCDataSet`数据集,它继承自`DetDataSet`基类,并重写了`parse_dataset`方法,解析VOC数据集中xml格式标注文件,更新`roidbs`和`cname2cid`。将其他数据集转换成VOC格式可以参考[用户数据转成VOC数据](../tutorials/PrepareDetDataSet.md#用户数据转成VOC数据)
+`source/voc.py`中定义并注册了`VOCDataSet`数据集,它继承自`DetDataSet`基类,并重写了`parse_dataset`方法,解析VOC数据集中xml格式标注文件,更新`roidbs`和`cname2cid`。将其他数据集转换成VOC格式可以参考[用户数据转成VOC数据](../tutorials/data/PrepareDetDataSet.md#用户数据转成VOC数据)

#### 2.3自定义数据集
如果COCODataSet和VOCDataSet不能满足你的需求,可以通过自定义数据集的方式来加载你的数据集。只需要以下两步即可实现自定义数据集
4 changes: 2 additions & 2 deletions docs/advanced_tutorials/READER_en.md
@@ -91,7 +91,7 @@ COCO datasets are currently divided into COCO2014 and COCO2017, which are mainly
│ │ ...
```
-class `COCODataSet` is defined and registered on `source/coco.py`. And implements the parse the dataset method, called [COCO API](https://github.com/cocodataset/cocoapi) to load and parse COCO format data source `roidbs` and `cname2cid`, See `source/coco.py` source code for details. Converting other datasets to COCO format can be done by referring to [converting User Data to COCO Data](../tutorials/PrepareDataSet_en.md#convert-user-data-to-coco-data)
+class `COCODataSet` is defined and registered on `source/coco.py`. And implements the parse the dataset method, called [COCO API](https://github.com/cocodataset/cocoapi) to load and parse COCO format data source `roidbs` and `cname2cid`, See `source/coco.py` source code for details. Converting other datasets to COCO format can be done by referring to [converting User Data to COCO Data](../tutorials/data/PrepareDetDataSet_en.md#convert-user-data-to-coco-data)


#### 2.2Pascal VOC dataset
@@ -120,7 +120,7 @@ The dataset is currently divided into VOC2007 and VOC2012, mainly composed of XM
│ ├── ImageSets
│ │ ...
```
-The `VOCDataSet` dataset is defined and registered in `source/voc.py` . It inherits the `DetDataSet` base class and rewrites the `parse_dataset` method to parse XML annotations in the VOC dataset. Update `roidbs` and `cname2cid`. To convert other datasets to VOC format, refer to [User Data to VOC Data](../tutorials/PrepareDataSet_en.md#convert-user-data-to-voc-data)
+The `VOCDataSet` dataset is defined and registered in `source/voc.py` . It inherits the `DetDataSet` base class and rewrites the `parse_dataset` method to parse XML annotations in the VOC dataset. Update `roidbs` and `cname2cid`. To convert other datasets to VOC format, refer to [User Data to VOC Data](../tutorials/data/PrepareDetDataSet_en.md#convert-user-data-to-voc-data)


#### 2.3Customize Dataset
2 changes: 1 addition & 1 deletion docs/tutorials/GETTING_STARTED.md
@@ -11,7 +11,7 @@ instructions](INSTALL_cn.md).

## Data preparation

-- Please refer to [PrepareDetDataSet](PrepareDetDataSet_en.md) for data preparation
+- Please refer to [PrepareDetDataSet](./data/PrepareDetDataSet_en.md) for data preparation
- Please set the data path for data configuration file in ```configs/datasets```

## Training & Evaluation & Inference
4 changes: 2 additions & 2 deletions docs/tutorials/GETTING_STARTED_cn.md
@@ -12,7 +12,7 @@ PaddleDetection作为成熟的目标检测开发套件,提供了从数据准

## 2 准备数据
目前PaddleDetection支持:COCO VOC WiderFace, MOT四种数据格式。
-- 首先按照[准备数据文档](PrepareDetDataSet.md) 准备数据。
+- 首先按照[准备数据文档](./data/PrepareDetDataSet.md) 准备数据。
- 然后设置`configs/datasets`中相应的coco或voc等数据配置文件中的数据路径。
- 在本项目中,我们使用路标识别数据集
```bash
@@ -83,7 +83,7 @@ ppyolov2_reader.yml 主要说明数据读取器配置,如batch size,并发
* 关于数据的路径修改说明
在修改配置文件中,用户如何实现自定义数据集是非常关键的一步,如何定义数据集请参考[如何自定义数据集](https://aistudio.baidu.com/aistudio/projectdetail/1917140)
* 默认学习率是适配多GPU训练(8x GPU),若使用单GPU训练,须对应调整学习率(例如,除以8)
-* 更多使用问题,请参考[FAQ](FAQ.md)
+* 更多使用问题,请参考[FAQ](FAQ)

## 4 训练

2 changes: 0 additions & 2 deletions industrial_tutorial/README_cn.md
@@ -1,5 +1,3 @@
-简体中文 | [English](README_en.md)
-
# 产业实践范例

为了缩小基础理论教学与产业落地间的差距,PaddleDetection联合产业头部企业,结合实际经验,选取经典场景,提供了从**数据准备、模型训练优化,到模型部署的全流程可复用方案**,降低产业落地门槛,让大家在真实数据环境下深入地了解这些案例,获取产业实现方案。
