update README.md
abikaki committed Mar 19, 2024
1 parent e51abf6 commit f9e3876
Showing 2 changed files with 12 additions and 12 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -290,8 +290,8 @@ CVPR 2023 Papers: Explore a comprehensive collection of cutting-edge research pa
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-Papers/blob/main/sections/2023/main/recognition-categorization-detection-retrieval.md">Recognition: Categorization, Detection, Retrieval</a>
</td>
<!--60/139-->
<td colspan="4" align="center"><img src="https://geps.dev/progress/43?successColor=006600" alt="" /></td>
<!--70/139-->
<td colspan="4" align="center"><img src="https://geps.dev/progress/50?successColor=006600" alt="" /></td>
</tr>
<tr>
<td>
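The progress badges changed in the hunk above encode a rounded percentage directly in the URL path (60/139 annotated papers renders as 43, 70/139 as 50). A minimal sketch of that mapping, assuming only what the diff shows about the geps.dev URL shape (percentage in the path, `successColor` as a query parameter):

```python
def progress_badge_url(done: int, total: int, success_color: str = "006600") -> str:
    """Build a geps.dev progress-badge URL from a done/total paper count.

    The badge value is the completion ratio rounded to the nearest
    integer percent, matching the 60/139 -> 43 and 70/139 -> 50
    values seen in the diff.
    """
    percent = round(done / total * 100)
    return f"https://geps.dev/progress/{percent}?successColor={success_color}"

# The commit bumps the count from 60 to 70 annotated papers out of 139:
print(progress_badge_url(60, 139))  # .../progress/43?successColor=006600
print(progress_badge_url(70, 139))  # .../progress/50?successColor=006600
```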
@@ -89,16 +89,16 @@
| Out-of-Distributed Semantic Pruning for Robust Semi-Supervised Learning | [![GitHub](https://img.shields.io/github/stars/RabbitBoss/Awesome-Realistic-Semi-Supervised-Learning?style=flat)](https://github.com/RabbitBoss/Awesome-Realistic-Semi-Supervised-Learning) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com//content/CVPR2023/papers/Wang_Out-of-Distributed_Semantic_Pruning_for_Robust_Semi-Supervised_Learning_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2305.18158-b31b1b.svg)](https://arxiv.org/abs/2305.18158) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=uGV24BwIRqU) |
| Glocal Energy-Based Learning for Few-Shot Open-Set Recognition | [![GitHub](https://img.shields.io/github/stars/00why00/Glocal?style=flat)](https://github.com/00why00/Glocal) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com//content/CVPR2023/papers/Wang_Glocal_Energy-Based_Learning_for_Few-Shot_Open-Set_Recognition_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2304.11855-b31b1b.svg)](http://arxiv.org/abs/2304.11855) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=UD19I7zdKKs) |
| Improving Image Recognition by Retrieving From Web-Scale Image-Text Data | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com//content/CVPR2023/papers/Iscen_Improving_Image_Recognition_by_Retrieving_From_Web-Scale_Image-Text_Data_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2304.05173-b31b1b.svg)](http://arxiv.org/abs/2304.05173) | :heavy_minus_sign: |
| Deep Factorized Metric Learning | | | |
| Learning to Detect and Segment for Open Vocabulary Object Detection | | | |
| ConQueR: Query Contrast Voxel-DETR for 3D Object Detection | | | |
| Photo Pre-Training, But for Sketch | | | |
| InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions | | | |
| Detecting Everything in the Open World: Towards Universal Object Detection | | | |
| Twin Contrastive Learning with Noisy Labels | | | |
| Feature Aggregated Queries for Transformer-based Video Object Detectors | | | |
| Learning on Gradients: Generalized Artifacts Representation for GAN-Generated Images Detection | | | |
| Deep Hashing with Minimal-Distance-Separated Hash Centers | | | |
| Deep Factorized Metric Learning | [![GitHub](https://img.shields.io/github/stars/wangck20/DFML?style=flat)](https://github.com/wangck20/DFML) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com//content/CVPR2023/papers/Wang_Deep_Factorized_Metric_Learning_CVPR_2023_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=u1V_92eiyK0) |
| Learning To Detect and Segment for Open Vocabulary Object Detection | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com//content/CVPR2023/papers/Wang_Learning_To_Detect_and_Segment_for_Open_Vocabulary_Object_Detection_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2212.12130-b31b1b.svg)](http://arxiv.org/abs/2212.12130) | :heavy_minus_sign: |
| ConQueR: Query Contrast Voxel-DETR for 3D Object Detection <br/> [![CVPR - Highlight](https://img.shields.io/badge/CVPR-Highlight-FFFF00)]() | [![WEB Page](https://img.shields.io/badge/WEB-Page-159957.svg)](https://benjin.me/publication/cvpr2023_conquer/) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com//content/CVPR2023/papers/Zhu_ConQueR_Query_Contrast_Voxel-DETR_for_3D_Object_Detection_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2212.07289-b31b1b.svg)](http://arxiv.org/abs/2212.07289) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=kwHVjA4gIOA) |
| Photo Pre-Training, but for Sketch | [![GitHub](https://img.shields.io/github/stars/KeLi-SketchX/Photo-Pre-Training-But-for-Sketch?style=flat)](https://github.com/KeLi-SketchX/Photo-Pre-Training-But-for-Sketch) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com//content/CVPR2023/papers/Li_Photo_Pre-Training_but_for_Sketch_CVPR_2023_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=tmCmPZC756E) |
| InternImage: Exploring Large-Scale Vision Foundation Models With Deformable Convolutions | [![GitHub](https://img.shields.io/github/stars/OpenGVLab/InternImage?style=flat)](https://github.com/OpenGVLab/InternImage) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com//content/CVPR2023/papers/Wang_InternImage_Exploring_Large-Scale_Vision_Foundation_Models_With_Deformable_Convolutions_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2211.05778-b31b1b.svg)](http://arxiv.org/abs/2211.05778) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=_LEitBd5Tfs) |
| Detecting Everything in the Open World: Towards Universal Object Detection | [![GitHub](https://img.shields.io/github/stars/zhenyuw16/UniDetector?style=flat)](https://github.com/zhenyuw16/UniDetector) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com//content/CVPR2023/papers/Wang_Detecting_Everything_in_the_Open_World_Towards_Universal_Object_Detection_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2303.11749-b31b1b.svg)](http://arxiv.org/abs/2303.11749) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=Wz8p4Edcj6U) |
| Twin Contrastive Learning With Noisy Labels | [![GitHub](https://img.shields.io/github/stars/Hzzone/TCL?style=flat)](https://github.com/Hzzone/TCL) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com//content/CVPR2023/papers/Huang_Twin_Contrastive_Learning_With_Noisy_Labels_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2303.06930-b31b1b.svg)](http://arxiv.org/abs/2303.06930) | :heavy_minus_sign: |
| Feature Aggregated Queries for Transformer-Based Video Object Detectors | [![GitHub](https://img.shields.io/github/stars/YimingCuiCuiCui/FAQ?style=flat)](https://github.com/YimingCuiCuiCui/FAQ) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com//content/CVPR2023/papers/Cui_Feature_Aggregated_Queries_for_Transformer-Based_Video_Object_Detectors_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2303.08319-b31b1b.svg)](http://arxiv.org/abs/2303.08319) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=e1iTV5riSdo) |
| Learning on Gradients: Generalized Artifacts Representation for GAN-Generated Images Detection | [![GitHub](https://img.shields.io/github/stars/chuangchuangtan/LGrad?style=flat)](https://github.com/chuangchuangtan/LGrad) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com//content/CVPR2023/papers/Tan_Learning_on_Gradients_Generalized_Artifacts_Representation_for_GAN-Generated_Images_Detection_CVPR_2023_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=-Wa-Si9LZyk) |
| Deep Hashing With Minimal-Distance-Separated Hash Centers | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com//content/CVPR2023/papers/Wang_Deep_Hashing_With_Minimal-Distance-Separated_Hash_Centers_CVPR_2023_paper.pdf) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=hy5LkF3yJpI) |
| Knowledge Combination to Learn Rotated Detection without Rotated Annotation | | | |
| Good is Bad: Causality Inspired Cloth-Debiasing for Cloth-Changing Person Re-Identification | | | |
| Discriminating Known from Unknown Objects via Structure-Enhanced Recurrent Variational AutoEncoder | | | |
