Repositories list
71 repositories
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching the performance of GPT-4o.
- [NeurIPS 2024 Spotlight ⭐️] Parameter-Inverted Image Pyramid Networks (PIIP)
- [ECCV2024] Video Foundation Models & Data for Multimodal Understanding
- [CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding, with support for many more language models such as miniGPT4, StableLM, and MOSS.
- MM-NIAH (Public): [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): a comprehensive benchmark designed to systematically evaluate the capability of existing MLLMs to comprehend long multimodal documents.
- OmniCorpus (Public)
- GUI-Odyssey (Public)
- Vision-RWKV (Public)
- .github (Public)
- OV-OAD (Public)
- InternVL-MMDetSeg (Public): Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed.
- PhyGenBench (Public)
- VisionLLM (Public): VisionLLM Series.
- VideoMAEv2 (Public): [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking.
- EfficientQAT (Public)
- OmniQuant (Public): [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.
- MMIU (Public)
- ChartAst (Public)
- EgoExoLearn (Public)
- InternGPT (Public): InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It currently supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, and more. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM).
- all-seeing (Public): [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of the Open World.
- Diffree (Public)
- InternImage (Public): [CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions.
- MMT-Bench (Public)
- ControlLLM (Public)