From 37bde1c41ca8fd99c153f35b42acb1c99df7261a Mon Sep 17 00:00:00 2001
From: Fengting Yang
Date: Wed, 4 May 2022 14:25:27 -0400
Subject: [PATCH] update README

---
 README.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 4b4343a..061401d 100644
--- a/README.md
+++ b/README.md
@@ -16,7 +16,7 @@ The code is developed and tested with
 - More details are available in ```requirements.txt```
 
 ## Data Preparation
-### Data Download
+### Download
 The data used in our experiment are from [FoD500](https://github.com/dvl-tum/defocus-net),
 [DDFF-12](https://hazirbas.com/datasets/ddff12scene/),
 and [Mobile Depth](https://www.supasorn.com/dffdownload.html).
@@ -26,10 +26,10 @@ and [Mobile Depth](https://www.supasorn.com/dffdownload.html).
 follow the instruction in the next section to prepare the train and validation set.
 The DDFF-12 test set is only needed if you wish to submit your test result to the [leaderboard](https://competitions.codalab.org/competitions/17807#learn_the_details).
 You can directly use the pre-processed test set at the [ddff-pytorch](https://github.com/soyers/ddff-pytorch) repository.
-* For Mobile Depth, we need to reorganize the files. Please follow the steps shown in the next section. Note, no ground truth is provided in this dataset, and we only
+* For Mobile Depth, we need to reorganize the files. Please follow the steps shown in the next section. Note that no ground truth is provided in this dataset, and we only
 use it for qualitative evaluation.
 
-### Data Pre-processing
+### Pre-processing
 For FoD500 dataset, no data pre-processing is needed.
 
 For DDFF-12 dataset, please first modify the ```data_pth``` and ```out_pth``` in
@@ -50,7 +50,7 @@ For Mobile depth dataset, please modify the path variables in ```data_preproces
 python data_preprocess/reorganize_mobileDFF.py
 ```
 
-## Train
+## Training
 Given the DDFF-12 h5.file in ```<DDFF12_pth>```, and FoD data folder in ```<FoD_pth>```, please run
 ```
 CUDA_VISIBLE_DEVICES=0 python train.py --stack_num 5 --batchsize 20 --DDFF12_pth <DDFF12_pth> --FoD_pth <FoD_pth> --savemodel <save_path> --use_diff 0/1
@@ -58,9 +58,9 @@ CUDA_VISIBLE_DEVICES=0 python train.py --stack_num 5 --batchsize 20 --DDFF12_pth
 to train the model. ```--use_diff 0``` refers to the simple focus volume model (Ours-FV), and ```--use_diff 1``` corresponds to
 the differential focus volume model (Ours-DFV). We have shared [Our-FV](https://drive.google.com/file/d/1oF0MZC3zBY-HRlXOYDlHqiTJ_KgPfEQP/view?usp=sharing)
 and [Our-DFV](https://drive.google.com/file/d/1kKJlZybv4Kbpn7Xa2f2K25VErOQyind8/view?usp=sharing) checkpoint pre-trained on the FoD500 and DDFF-12 training set.
-Please note this is not the final model for our DDFF-12 submission, which we also include the DDFF-12 validation set in the training.
+Please note this is not the final model for our DDFF-12 submission, for which we also include the DDFF-12 validation set in training.
 
-## Evaluate
+## Evaluation
 ### DDFF-12
 To evaluate on the DDFF-12 validation set, run
 ```
@@ -80,7 +80,7 @@ To generate test results, run
 ```
 python FoD_test.py --data_path <data_path> --loadmodel <model_path> --use_diff 0/1 --outdir <outdir>
 ```
-The code will also provide the ```avgUnc.``` result on FoD500. Next, the evaluation results can be generated by
+The code will also provide the ```avgUnc.``` result on FoD500. Next, the evaluation results can be generated by running
 ```
 python eval_FoD500.py --res_path <res_path>
 ```