ML@Rutgers
- 43 followers
- United States of America
- http://wanghao.in/
- hoguewang@gmail.com
Popular repositories
- llm-continual-learning-survey (Public): Continual Learning of Large Language Models: A Comprehensive Survey
- unified-continual-learning (Public): [NeurIPS 2023] A Unified Approach to Domain Incremental Learning with Memory: Theory and Algorithm
- multimodal-needle-in-a-haystack (Public): Code and data for the benchmark "Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models"
- bayesian-peft (Public): [NeurIPS 2024] BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models
Repositories
- bayesian-peft (Public): [NeurIPS 2024] BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models
- llm-continual-learning-survey (Public): Continual Learning of Large Language Models: A Comprehensive Survey
- interpretable-foundation-models (Public): [ICML 2024] Probabilistic Conceptual Explainers (PACE): Trustworthy Conceptual Explanations for Vision Foundation Models
- multimodal-needle-in-a-haystack (Public): Code and data for the benchmark "Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models"
- multi-domain-active-learning (Public): [AAAI 2024] Composite Active Learning: Towards Multi-Domain Active Learning with Theoretical Guarantees
- variational-imbalanced-regression (Public): [NeurIPS 2023] Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing
- ECBM (Public, forked from xmed-lab/ECBM): [ICLR 2024] Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations
- Formal-LLM (Public, forked from agiresearch/Formal-LLM): Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents