
This repository collects the papers discussed in our survey, A Survey on LLM-as-a-Judge.

Reference

Feel free to cite our survey if you find it useful for your research:

@article{gu2024surveyllmasajudge,
	title   = {A Survey on LLM-as-a-Judge},
	author  = {Jiawei Gu and Xuhui Jiang and Zhichao Shi and Hexiang Tan and Xuehao Zhai and Chengjin Xu and Wei Li and Yinghan Shen and Shengjie Ma and Honghao Liu and Yuanzhuo Wang and Jian Guo},
	year    = {2024},
	journal = {arXiv preprint arXiv: 2411.15594}
}

🔔 News

🔥 [2025-01-28] We added analysis on LLM-as-a-Judge and o1-like Reasoning Enhancement, as well as meta-evaluation results on o1-mini, Gemini-2.0-Flash-Thinking-1219, and DeepSeek-R1!

🌟 [2025-01-16] We shared and discussed the methodologies, applications (Finance, RAG, and Synthetic Data), and future research directions of LLM-as-a-Judge at BAAI Talk! 🤗 [Replay] [Methodology] [RAG & Synthetic Data]

🚀 [2024-11-23] We released A Survey on LLM-as-a-Judge, exploring LLMs as reliable, scalable evaluators and outlining key challenges and future directions!

Overview of LLM-as-a-Judge

(Figure: overview of LLM-as-a-Judge)

Evaluation Pipelines

(Figure: evaluation pipelines)

Improvement Strategies for LLM-as-a-Judge

(Figure: improvement strategies)

Table of Contents

A Survey on LLM-as-a-Judge

Paper List

1 What is LLM-as-a-Judge?

2 How to use LLM-as-a-Judge?

2.1 In-Context Learning

Generating scores
  • A Multi-Aspect Framework for Counter Narrative Evaluation using Large Language Models NAACL 2024

    Jaylen Jones, Lingbo Mo, Eric Fosler-Lussier, and Huan Sun. [Paper]

  • Generative judge for evaluating alignment. ArXiv preprint 2023

    Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. [Paper]

  • Judgelm: Fine-tuned large language models are scalable judges. ArXiv preprint 2023

    Lianghui Zhu, Xinggang Wang, and Xinlong Wang. [Paper]

  • Large Language Models are Better Reasoners with Self-Verification. EMNLP findings 2023

    Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Shengping Liu, Bin Sun, Kang Liu, and Jun Zhao. [Paper]

  • Benchmarking Foundation Models with Language-Model-as-an-Examiner. NeurIPS 2023

    Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, Jiayin Zhang, Juanzi Li, and Lei Hou. [Paper]

  • Human-like summarization evaluation with chatgpt. ArXiv preprint 2023

    Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, and Xiaojun Wan. [Paper]

Solving Yes/No questions
  • Reflexion: language agents with verbal reinforcement learning. NeurIPS 2023

    Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. [Paper]

  • MacGyver: Are Large Language Models Creative Problem Solvers? NAACL 2024

    Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Ronan Le Bras, Raja Marjieh, Nanyun Peng, Yejin Choi, Thomas Griffiths, and Faeze Brahman. [Paper]

  • Think-on-graph: Deep and responsible reasoning of large language model with knowledge graph. ArXiv preprint 2023

    Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo Wang, Chen Lin, Yeyun Gong, Heung-Yeung Shum, and Jian Guo. [Paper]

Conducting pairwise comparisons
  • Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting. NAACL findings 2024

    Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Le Yan, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky. [Paper]

  • Aligning with human judgement: The role of pairwise preference in large language model evaluators. COLM 2024

    Yinhong Liu, Han Zhou, Zhijiang Guo, Ehsan Shareghi, Ivan Vulic, Anna Korhonen, and Nigel Collier. [Paper]

  • LLM Comparative Assessment: Zero-shot NLG Evaluation through Pairwise Comparisons using Large Language Models. EACL 2024

    Adian Liusie, Potsawee Manakul, and Mark Gales. [Paper]

  • Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. NeurIPS 2023

    Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. [Paper]

  • Rrhf: Rank responses to align language models with human feedback without tears. ArXiv preprint 2023

    Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. [Paper]

  • PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. ArXiv preprint 2023

    Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. 2023. [Paper]

  • Human-like summarization evaluation with chatgpt. ArXiv preprint 2023

    Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, and Xiaojun Wan. [Paper]
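
The pairwise-comparison entries above share a common recipe: show the judge the question and two candidate answers, then ask for a verdict. Below is a minimal, illustrative sketch of that recipe; `call_llm` is a hypothetical wrapper around whatever judge model is used, not an API from any of the papers.

```python
# Minimal sketch of pairwise-comparison judging (MT-Bench-style prompt).
# `call_llm` is a placeholder: plug in your own judge-model client.

PAIRWISE_PROMPT = """You are an impartial judge. Compare the two answers to the question.
Question: {question}
Answer A: {answer_a}
Answer B: {answer_b}
Reply with exactly one token: "A", "B", or "Tie"."""


def judge_pair(question: str, answer_a: str, answer_b: str, call_llm) -> str:
    """Return "A", "B", or "Tie" according to the judge model."""
    prompt = PAIRWISE_PROMPT.format(
        question=question, answer_a=answer_a, answer_b=answer_b
    )
    verdict = call_llm(prompt).strip()
    return verdict if verdict in {"A", "B", "Tie"} else "Tie"  # fall back on unparsable output
```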

Making multiple-choice selections

2.2 Model Selection

General LLM
  • Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. NeurIPS 2023

    Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. [Paper]

  • AlpacaEval: An Automatic Evaluator of Instruction-following Models. 2023

    Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. [Code]

Fine-tuned LLM
  • PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. ArXiv preprint 2023

    Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. 2023. [Paper]

  • Judgelm: Fine-tuned large language models are scalable judges. ArXiv preprint 2023

    Lianghui Zhu, Xinggang Wang, and Xinlong Wang. [Paper]

  • Generative judge for evaluating alignment. ArXiv preprint 2023

    Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. [Paper]

  • Prometheus: Inducing Fine-grained Evaluation Capability in Language Models. ArXiv preprint 2023

    Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, et al. [Paper]

2.3 Post-processing Method

Extracting specific tokens
  • xFinder: Robust and Pinpoint Answer Extraction for Large Language Models. ArXiv preprint 2024

    Qingchen Yu, Zifan Zheng, Shichao Song, Zhiyu Li, Feiyu Xiong, Bo Tang, and Ding Chen. [Paper]

  • MacGyver: Are Large Language Models Creative Problem Solvers? NAACL 2024

    Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Ronan Le Bras, Raja Marjieh, Nanyun Peng, Yejin Choi, Thomas Griffiths, and Faeze Brahman. [Paper]
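
Token-extraction post-processing usually reduces to pulling a numeric score or label out of free-form judge text. A rough sketch under simple assumptions (the 1-10 range and the regex are illustrative, not taken from any specific paper above):

```python
import re
from typing import Optional


def extract_score(judge_output: str, lo: int = 1, hi: int = 10) -> Optional[int]:
    """Return the first integer in [lo, hi] found in a free-form judgment, e.g. "Rating: [[7]]"."""
    for match in re.findall(r"\d+", judge_output):
        value = int(match)
        if lo <= value <= hi:
            return value
    return None  # caller decides how to handle unparsable judgments
```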

Constrained decoding
  • Guiding LLMs the right way: fast, non-invasive constrained generation. ICML 2024

    Luca Beurer-Kellner, Marc Fischer, and Martin Vechev. [Paper]

  • XGrammar: Flexible and Efficient Structured Generation Engine for Large Language Models. ArXiv preprint 2024

    Yixin Dong, Charlie F. Ruan, Yaxing Cai, Ruihang Lai, Ziyi Xu, Yilong Zhao, and Tianqi Chen. [Paper]

  • SGLang: Efficient Execution of Structured Language Model Programs. NeurIPS 2024

    Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Chuyue Sun, Jeff Huang, Cody Hao Yu, Shiyi Cao, Christos Kozyrakis, Ion Stoica, Joseph E. Gonzalez, Clark Barrett, and Ying Sheng. [Paper]

Normalizing the output logits
  • Reasoning with Language Model is Planning with World Model. EMNLP 2023

    Shibo Hao, Yi Gu, Haodi Ma, Joshua Hong, Zhen Wang, Daisy Wang, and Zhiting Hu. [Paper]

  • Speculative rag: Enhancing retrieval augmented generation through drafting. ArXiv preprint 2024

    Zilong Wang, Zifeng Wang, Long Le, Huaixiu Steven Zheng, Swaroop Mishra, Vincent Perot, Yuwei Zhang, Anush Mattapalli, Ankur Taly, Jingbo Shang, et al. [Paper]

  • Agent-as-a-Judge: Evaluate Agents with Agents. ArXiv preprint 2024

    Mingchen Zhuge, Changsheng Zhao, Dylan Ashley, Wenyi Wang, Dmitrii Khizbullin, Yunyang Xiong, Zechun Liu, Ernie Chang, Raghuraman Krishnamoorthi, Yuandong Tian, et al. [Paper]

Selecting sentences
  • Reasoning with Language Model is Planning with World Model. EMNLP 2023

    Shibo Hao, Yi Gu, Haodi Ma, Joshua Hong, Zhen Wang, Daisy Wang, and Zhiting Hu. [Paper]

2.4 Evaluation Pipeline

LLM-as-a-Judge for Models
  • AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback. NeurIPS 2023

    Yann Dubois, Chen Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. [Paper]

  • Large language models are not fair evaluators. ACL 2024

    Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. [Paper]

  • Wider and deeper llm networks are fairer llm evaluators. ArXiv preprint 2023

    Xinghua Zhang, Bowen Yu, Haiyang Yu, Yangyu Lv, Tingwen Liu, Fei Huang, Hongbo Xu, and Yongbin Li. [Paper]

  • Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. NeurIPS 2023

    Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. [Paper]

  • SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation. Blog 2023

    Seonghyeon Ye, Yongrae Jo, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, and Minjoon Seo. [Blog]

  • Shepherd: A Critic for Language Model Generation. ArXiv preprint 2023

    Tianlu Wang, Ping Yu, Xiaoqing Ellen Tan, Sean O’Brien, Ramakanth Pasunuru, Jane Dwivedi-Yu, Olga Golovneva, Luke Zettlemoyer, Maryam Fazel-Zarandi, and Asli Celikyilmaz. [Paper]

  • PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. ArXiv preprint 2023

    Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. 2023. [Paper]

LLM-as-a-Judge for Data
  • RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment. ArXiv preprint 2023

    Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. [Paper]

  • Rrhf: Rank responses to align language models with human feedback without tears. ArXiv preprint 2023

    Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. [Paper]

  • Stanford Alpaca: An Instruction-following LLaMA model. 2023

    Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. [Code]

  • Languages are rewards: Hindsight finetuning using human feedback. ArXiv preprint 2023

    Hao Liu, Carmelo Sferrazza, and Pieter Abbeel. [Paper]

  • The Wisdom of Hindsight Makes Language Models Better Instruction Followers. PMLR 2023

    Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, and Joseph E. Gonzalez. [Paper]

  • Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision. NeurIPS 2023

    Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David D. Cox, Yiming Yang, and Chuang Gan. [Paper]

  • WizardMath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. ArXiv preprint 2023

    Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. [Paper]

  • Self-taught evaluators. ArXiv preprint 2024

    Tianlu Wang, Ilia Kulikov, Olga Golovneva, Ping Yu, Weizhe Yuan, Jane Dwivedi-Yu, Richard Yuanzhe Pang, Maryam Fazel-Zarandi, Jason Weston, and Xian Li. [Paper]

  • Holistic analysis of hallucination in gpt-4v (ision): Bias and interference challenges. ArXiv preprint 2023

    Chenhang Cui, Yiyang Zhou, Xinyu Yang, Shirley Wu, Linjun Zhang, James Zou, and Huaxiu Yao. [Paper]

  • Evaluating Object Hallucination in Large Vision-Language Models. EMNLP 2023

    Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Xin Zhao, and Ji-Rong Wen. [Paper]

  • Evaluation and analysis of hallucination in large vision-language models. ArXiv preprint 2023

    Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, et al. [Paper]

  • Aligning large multimodal models with factually augmented rlhf. ArXiv preprint 2023

    Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, et al. [Paper]

  • MLLM-as-a-Judge: Assessing Multimodal LLM-as-a-Judge with Vision-Language Benchmark. ICML 2024

    Dongping Chen, Ruoxi Chen, Shilin Zhang, Yaochen Wang, Yinuo Liu, Huichi Zhou, Qihui Zhang, Yao Wan, Pan Zhou, and Lichao Sun. [Paper]

LLM-as-a-Judge for Agents
  • Agent-as-a-Judge: Evaluate Agents with Agents. ArXiv preprint 2024

    Mingchen Zhuge, Changsheng Zhao, Dylan Ashley, Wenyi Wang, Dmitrii Khizbullin, Yunyang Xiong, Zechun Liu, Ernie Chang, Raghuraman Krishnamoorthi, Yuandong Tian, et al. [Paper]

  • Reasoning with Language Model is Planning with World Model. EMNLP 2023

    Shibo Hao, Yi Gu, Haodi Ma, Joshua Hong, Zhen Wang, Daisy Wang, and Zhiting Hu. [Paper]

  • Reflexion: language agents with verbal reinforcement learning. NeurIPS 2023

    Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. [Paper]

LLM-as-a-Judge for Reasoning/Thinking
  • Towards Reasoning in Large Language Models: A Survey. ACL findings 2023

    Jie Huang and Kevin Chen-Chuan Chang. [Paper]

  • Let’s verify step by step. ICLR 2023

    Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. [Paper]

3 How to improve LLM-as-a-Judge?

3.1 Design Strategy of Evaluation Prompts

Few-shot prompting
  • FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation. EMNLP 2023

    Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. [Paper]

  • SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models. ACL findings 2024

    Lijun Li, Bowen Dong, Ruohui Wang, Xuhao Hu, Wangmeng Zuo, Dahua Lin, Yu Qiao, and Jing Shao. [Paper]

  • GPTScore: Evaluate as You Desire. NAACL 2024

    Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. [Paper]
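
Few-shot judge prompting, as used in the entries above, amounts to prepending a handful of worked scoring examples before the item to be judged. A toy sketch for summary faithfulness; the exemplars and the 1-5 scale are made up for illustration:

```python
# Hypothetical few-shot exemplars; in practice these come from a curated set.
FEW_SHOT_EXEMPLARS = [
    {"source": "The cat sat on the mat.", "summary": "A cat sat on a mat.", "score": 5},
    {"source": "The cat sat on the mat.", "summary": "A dog chased a ball.", "score": 1},
]


def build_few_shot_prompt(source: str, summary: str) -> str:
    """Prepend scored exemplars, then ask the judge to score the new (source, summary) pair."""
    parts = ["Score each summary for faithfulness to its source on a 1-5 scale.\n"]
    for ex in FEW_SHOT_EXEMPLARS:
        parts.append(
            f"Source: {ex['source']}\nSummary: {ex['summary']}\nScore: {ex['score']}\n"
        )
    parts.append(f"Source: {source}\nSummary: {summary}\nScore:")
    return "\n".join(parts)
```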

Evaluation steps decomposition
  • G-Eval: NLG Evaluation using Gpt-4 with Better Human Alignment. EMNLP 2023

    Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. [Paper]

  • DHP Benchmark: Are LLMs Good NLG Evaluators? ArXiv preprint 2024

    Yicheng Wang, Jiayi Yuan, Yu-Neng Chuang, Zhuoer Wang, Yingchi Liu, Mark Cusick, Param Kulkarni, Zhengping Ji, Yasser Ibrahim, and Xia Hu. [Paper]

  • SocREval: Large Language Models with the Socratic Method for Reference-free Reasoning Evaluation. NAACL findings 2024

    Hangfeng He, Hongming Zhang, and Dan Roth. [Paper]

  • Branch-Solve-Merge Improves Large Language Model Evaluation and Generation. NAACL 2024

    Swarnadeep Saha, Omer Levy, Asli Celikyilmaz, Mohit Bansal, Jason Weston, and Xian Li. [Paper]

Evaluation criteria decomposition
  • HD-Eval: Aligning Large Language Model Evaluators Through Hierarchical Criteria Decomposition. ACL 2024

    Yuxuan Liu, Tianchi Yang, Shaohan Huang, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, and Qi Zhang. [Paper]

  • Are LLM-based Evaluators Confusing NLG Quality Criteria? ACL 2024

    Xinyu Hu, Mingqi Gao, Sen Hu, Yang Zhang, Yicheng Chen, Teng Xu, and Xiaojun Wan. [Paper]

Shuffling contents
  • Large language models are not fair evaluators. ACL 2024

    Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. [Paper]

  • Generative judge for evaluating alignment. ArXiv preprint 2023

    Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. [Paper]

  • Judgelm: Fine-tuned large language models are scalable judges. ArXiv preprint 2023

    Lianghui Zhu, Xinggang Wang, and Xinlong Wang. [Paper]

  • PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. ArXiv preprint 2023

    Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. 2023. [Paper]
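
Content shuffling, as studied in the entries above, typically means querying the judge with the candidate order swapped and accepting a verdict only when it is position-consistent. A minimal sketch, where `judge_pair_fn` is any hypothetical callable mapping (question, answer shown as A, answer shown as B) to "A", "B", or "Tie":

```python
def judge_both_orders(question: str, answer_1: str, answer_2: str, judge_pair_fn) -> str:
    """Query the judge in both presentation orders; keep only position-consistent verdicts."""
    v1 = judge_pair_fn(question, answer_1, answer_2)        # answer_1 shown as "A"
    v2 = judge_pair_fn(question, answer_2, answer_1)        # same pair, orders swapped
    v2_mapped = {"A": "B", "B": "A", "Tie": "Tie"}[v2]      # map back to the original order
    return v1 if v1 == v2_mapped else "Tie"                 # disagreement is treated as a tie
```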

Conversion of evaluation tasks
  • Aligning with human judgement: The role of pairwise preference in large language model evaluators. COLM 2024

    Yinhong Liu, Han Zhou, Zhijiang Guo, Ehsan Shareghi, Ivan Vulic, Anna Korhonen, and Nigel Collier. [Paper]

Constraining outputs in structured formats
  • G-Eval: NLG Evaluation using Gpt-4 with Better Human Alignment. EMNLP 2023

    Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. [Paper]

  • DHP Benchmark: Are LLMs Good NLG Evaluators? ArXiv preprint 2024

    Yicheng Wang, Jiayi Yuan, Yu-Neng Chuang, Zhuoer Wang, Yingchi Liu, Mark Cusick, Param Kulkarni, Zhengping Ji, Yasser Ibrahim, and Xia Hu. [Paper]

  • LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models. NLP4ConvAI 2023

    Yen-Ting Lin and Yun-Nung Chen. [Paper]
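
Constraining the judge to a structured output, as in the entries above, often just means requesting JSON with fixed keys and rejecting anything that does not parse. A minimal sketch; the field names and 1-5 scale are illustrative assumptions:

```python
import json
from typing import Optional, Tuple

STRUCTURED_SUFFIX = 'Respond only with JSON: {"score": <integer 1-5>, "reason": "<one sentence>"}'


def parse_structured_judgment(judge_output: str) -> Optional[Tuple[int, str]]:
    """Parse a JSON judgment; return (score, reason) or None if the format is violated."""
    try:
        obj = json.loads(judge_output)
        score = int(obj["score"])
        if 1 <= score <= 5:
            return score, str(obj.get("reason", ""))
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        pass
    return None  # invalid structure: re-prompt or discard
```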

Providing evaluations with explanations
  • CLAIR: Evaluating Image Captions with Large Language Models. EMNLP 2023

    David Chan, Suzanne Petryk, Joseph Gonzalez, Trevor Darrell, and John Canny. [Paper]

  • FLEUR: An Explainable Reference-Free Evaluation Metric for Image Captioning Using a Large Multimodal Model. ACL 2024

    Yebin Lee, Imseong Park, and Myungjoo Kang. [Paper]

3.2 Improvement Strategy of LLMs' Abilities

Fine-tuning via Meta Evaluation Dataset
  • PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. ArXiv preprint 2023

    Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. 2023. [Paper]

  • SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models. ACL findings 2024

    Lijun Li, Bowen Dong, Ruohui Wang, Xuhao Hu, Wangmeng Zuo, Dahua Lin, Yu Qiao, and Jing Shao. [Paper]

  • Offsetbias: Leveraging debiased data for tuning evaluators. ArXiv preprint 2024

    Junsoo Park, Seungyeon Jwa, Meiying Ren, Daeyoung Kim, and Sanghyuk Choi. [Paper]

  • Judgelm: Fine-tuned large language models are scalable judges. ArXiv preprint 2023

    Lianghui Zhu, Xinggang Wang, and Xinlong Wang. [Paper]

  • CritiqueLLM: Towards an Informative Critique Generation Model for Evaluation of Large Language Model Generation. ACL 2024

    Pei Ke, Bosi Wen, Andrew Feng, Xiao Liu, Xuanyu Lei, Jiale Cheng, Shengyuan Wang, Aohan Zeng, Yuxiao Dong, Hongning Wang, et al. [Paper]

Iterative Optimization Based on Feedback
  • INSTRUCTSCORE: Towards Explainable Text Generation Evaluation with Automatic Feedback. EMNLP 2023

    Wenda Xu, Danqing Wang, Liangming Pan, Zhenqiao Song, Markus Freitag, William Wang, and Lei Li. [Paper]

  • Jade: A linguistics-based safety evaluation platform for llm. ArXiv preprint 2023

    Mi Zhang, Xudong Pan, and Min Yang. [Paper]

3.3 Optimization Strategy of Final Results

Summarize by multiple rounds
  • Evaluation Metrics in the Era of GPT-4: Reliably Evaluating Large Language Models on Sequence to Sequence Tasks. EMNLP 2023

    Andrea Sottana, Bin Liang, Kai Zou, and Zheng Yuan. [Paper]

  • On the humanity of conversational ai: Evaluating the psychological portrayal of llms. ICLR 2023

    Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, and Michael Lyu. [Paper]

  • Generative judge for evaluating alignment. ArXiv preprint 2023

    Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. [Paper]

Vote by multiple LLMs
  • Goal-Oriented Prompt Attack and Safety Evaluation for LLMs. ArXiv preprint 2023

    Chengyuan Liu, Fubang Zhao, Lizhi Qing, Yangyang Kang, Changlong Sun, Kun Kuang, and Fei Wu. [Paper]

  • Benchmarking Foundation Models with Language-Model-as-an-Examiner. NeurIPS 2023

    Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, Jiayin Zhang, Juanzi Li, and Lei Hou. [Paper]
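
Voting across multiple LLM judges, as in the entries above, is usually a plain majority vote over independent verdicts. A small sketch, assuming each element of `judges` is a callable that maps the evaluation prompt to a verdict string:

```python
from collections import Counter


def majority_vote(prompt: str, judges) -> str:
    """Ask several judge models the same question and return the majority verdict."""
    verdicts = [judge(prompt) for judge in judges]
    winner, count = Counter(verdicts).most_common(1)[0]
    return winner if count > len(verdicts) / 2 else "Tie"  # no strict majority -> tie
```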

Score smoothing
  • FLEUR: An Explainable Reference-Free Evaluation Metric for Image Captioning Using a Large Multimodal Model. ACL 2024

    Yebin Lee, Imseong Park, and Myungjoo Kang. [Paper]

  • G-Eval: NLG Evaluation using Gpt-4 with Better Human Alignment. EMNLP 2023

    Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. [Paper]

  • DHP Benchmark: Are LLMs Good NLG Evaluators? ArXiv preprint 2024

    Yicheng Wang, Jiayi Yuan, Yu-Neng Chuang, Zhuoer Wang, Yingchi Liu, Mark Cusick, Param Kulkarni, Zhengping Ji, Yasser Ibrahim, and Xia Hu. [Paper]
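
Score smoothing in the G-Eval style replaces a single sampled score with the probability-weighted average over the allowed score tokens. A sketch of the arithmetic, assuming the judge model exposes (log-)probabilities for each candidate score token:

```python
import math


def smoothed_score(score_logprobs: dict) -> float:
    """Expected score: sum over s of s * p(s), with p(s) renormalized over the score tokens.

    `score_logprobs` maps each allowed score (e.g. 1..5) to the log-probability the judge
    assigned to that score token.
    """
    weights = {s: math.exp(lp) for s, lp in score_logprobs.items()}
    total = sum(weights.values())
    return sum(s * w for s, w in weights.items()) / total


# Example: probability mass split between 4 and 5 yields a smooth score around 4.3.
# smoothed_score({3: -3.2, 4: -0.7, 5: -1.1})
```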

Self validation
  • TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models. EMNLP 2023

    Zorik Gekhman, Jonathan Herzig, Roee Aharoni, Chen Elkind, and Idan Szpektor. [Paper]

4 How to evaluate LLM-as-a-Judge?

4.1 Basic Metric

  • Judging the Judges: Evaluating Alignment and Vulnerabilities in LLMs-as-Judges. ArXiv preprint 2024

    Aman Singh Thakur, Kartik Choudhary, Venkat Srinik Ramayapally, Sankaran Vaidyanathan, and Dieuwke Hupkes. [Paper]

  • Benchmarking Foundation Models with Language-Model-as-an-Examiner. NeurIPS 2023

    Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, Jiayin Zhang, Juanzi Li, and Lei Hou. [Paper]

  • Aligning with human judgement: The role of pairwise preference in large language model evaluators. COLM 2024

    Yinhong Liu, Han Zhou, Zhijiang Guo, Ehsan Shareghi, Ivan Vulic, Anna Korhonen, and Nigel Collier. [Paper]

  • MT-Bench & Chatbot Arena Conversations: Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. NeurIPS 2023

    Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. [Paper]

  • FairEval: Large language models are not fair evaluators. ACL 2024

    Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. [Paper]

  • LLMBar: Evaluating Large Language Models at Evaluating Instruction Following. ArXiv preprint 2023

    Zhiyuan Zeng, Jiatong Yu, Tianyu Gao, Yu Meng, Tanya Goyal, and Danqi Chen. [Paper]

  • MLLM-as-a-Judge: Assessing Multimodal LLM-as-a-Judge with Vision-Language Benchmark. ICML 2024

    Dongping Chen, Ruoxi Chen, Shilin Zhang, Yaochen Wang, Yinuo Liu, Huichi Zhou, Qihui Zhang, Yao Wan, Pan Zhou, and Lichao Sun. [Paper]

  • CodeJudge-Eval: Can Large Language Models be Good Judges in Code Understanding? COLING 2025

    Yuwei Zhao, Ziyang Luo, Yuchen Tian, Hongzhan Lin, Weixiang Yan, Annan Li, and Jing Ma. [Paper]

  • KUDGE: LLM-as-a-Judge & Reward Model: What They Can and Cannot Do. ArXiv preprint 2024

    Guijin Son, Hyunwoo Ko, Hoyoung Lee, Yewon Kim, and Seunghyeok Hong. [Paper]

  • CALM: Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge. ArXiv preprint 2024

    Jiayi Ye, Yanbo Wang, Yue Huang, Dongping Chen, Qihui Zhang, Nuno Moniz, Tian Gao, Werner Geyer, Chao Huang, Pin-Yu Chen, et al. [Paper]

  • LLMEval$^2$: Wider and deeper llm networks are fairer llm evaluators. ArXiv preprint 2023

    Xinghua Zhang, Bowen Yu, Haiyang Yu, Yangyu Lv, Tingwen Liu, Fei Huang, Hongbo Xu, and Yongbin Li. [Paper]
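
Most of the meta-evaluation benchmarks above boil down to comparing judge verdicts against human labels with simple agreement statistics. A self-contained sketch of two common ones, raw agreement and Cohen's kappa:

```python
from collections import Counter


def agreement_and_kappa(judge_labels, human_labels):
    """Return (raw agreement, Cohen's kappa) for two equal-length label sequences."""
    assert len(judge_labels) == len(human_labels) and judge_labels
    n = len(judge_labels)
    p_o = sum(j == h for j, h in zip(judge_labels, human_labels)) / n       # observed agreement
    judge_freq, human_freq = Counter(judge_labels), Counter(human_labels)
    labels = set(judge_freq) | set(human_freq)
    p_e = sum((judge_freq[l] / n) * (human_freq[l] / n) for l in labels)    # chance agreement
    kappa = (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0
    return p_o, kappa
```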

4.2 Bias

Position Bias
  • Judging the Judges: A Systematic Investigation of Position Bias in Pairwise Comparative Assessments by LLMs. ArXiv preprint 2024

    Lin Shi, Weicheng Ma, and Soroush Vosoughi. [Paper]

  • Large language models are not fair evaluators. ACL 2024

    Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. [Paper]

  • Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge. ArXiv preprint 2024

    Jiayi Ye, Yanbo Wang, Yue Huang, Dongping Chen, Qihui Zhang, Nuno Moniz, Tian Gao, Werner Geyer, Chao Huang, Pin-Yu Chen, et al. [Paper]

Length Bias
  • An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Model is not a General Substitute for GPT-4. ArXiv preprint 2024

    Hui Huang, Yingqi Qu, Xingyuan Bu, Hongli Zhou, Jing Liu, Muyun Yang, Bing Xu, Tiejun Zhao. [Paper]

  • Offsetbias: Leveraging debiased data for tuning evaluators. ArXiv preprint 2024

    Junsoo Park, Seungyeon Jwa, Meiying Ren, Daeyoung Kim, and Sanghyuk Choi. [Paper]

  • Verbosity Bias in Preference Labeling by Large Language Models. ArXiv preprint 2023

    Keita Saito, Akifumi Wachi, Koki Wataoka, and Youhei Akimoto. [Paper]

Self-Enhancement Bias
  • Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge. ArXiv preprint 2024

    Jiayi Ye, Yanbo Wang, Yue Huang, Dongping Chen, Qihui Zhang, Nuno Moniz, Tian Gao, Werner Geyer, Chao Huang, Pin-Yu Chen, et al. [Paper]

  • Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. NeurIPS 2023

    Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. [Paper]

Other Bias
  • Humans or LLMs as the Judge? A Study on Judgement Bias. EMNLP 2024

    Guiming Hardy Chen, Shunian Chen, Ziche Liu, Feng Jiang, Benyou Wang. [Paper]

  • Subtle Biases Need Subtler Measures: Dual Metrics for Evaluating Representative and Affinity Bias in Large Language Models. ACL 2024

    Abhishek Kumar, Sarfaroz Yunusov, Ali Emami. [Paper]

  • Examining Query Sentiment Bias Effects on Search Results in Large Language Models. ESSIR 2023

    Alice Li, and Luanne Sinnamon. [Paper]

4.3 Adversarial Robustness

  • Is LLM-as-a-Judge Robust? Investigating Universal Adversarial Attacks on Zero-shot LLM Assessment. EMNLP 2024

    Vyas Raina, Adian Liusie, Mark Gales. [Paper]

  • Are LLM-Judges Robust to Expressions of Uncertainty? Investigating the effect of Epistemic Markers on LLM-based Evaluation. ArXiv preprint 2024

    Dongryeol Lee, Yerin Hwang, Yongil Kim, Joonsuk Park, and Kyomin Jung. [Paper]

  • Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates. ICLR 2025

    Xiaosen Zheng, Tianyu Pang, Chao Du, Qian Liu, Jing Jiang, and Min Lin. [Paper]

  • Benchmarking Cognitive Biases in Large Language Models as Evaluators. ACL Findings 2024

    Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, and Dongyeop Kang. [Paper]

  • Baseline Defenses for Adversarial Attacks Against Aligned Language Models. ArXiv preprint 2023

    Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, and Tom Goldstein. [Paper]

5 Application

5.1 Machine Learning

Text Generation
  • Reference-Guided Verdict: LLMs-as-Judges in Automatic Evaluation of Free-Form Text. ArXiv preprint 2024

    Sher Badshah, and Hassan Sajjad. [Paper]

  • Enhancing Annotated Bibliography Generation with LLM Ensembles. ArXiv preprint 2024

    Sergio Bermejo. [Paper]

  • Human-like summarization evaluation with chatgpt. ArXiv preprint 2023

    Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, and Xiaojun Wan. [Paper]

  • Large Language Models are Diverse Role-Players for Summarization Evaluation. NLPCC 2023

    Ning Wu, Ming Gong, Linjun Shou, Shining Liang, and Daxin Jiang. [Paper]

  • Evaluating Hallucinations in Chinese Large Language Models. ArXiv preprint 2023

    Qinyuan Cheng, Tianxiang Sun, Wenwei Zhang, Siyin Wang, Xiangyang Liu, Mozhi Zhang, Junliang He, Mianqiu Huang, Zhangyue Yin, Kai Chen, et al. [Paper]

  • Balancing Speciality and Versatility: a Coarse to Fine Framework for Supervised Fine-tuning Large Language Model. ACL findings 2024

    Hengyuan Zhang, Yanru Wu, Dawei Li, Sak Yang, Rui Zhao, Yong Jiang, and Fei Tan. [Paper]

  • Halu-J: Critique-Based Hallucination Judge. ArXiv preprint 2024

    Binjie Wang, Steffi Chern, Ethan Chern, and Pengfei Liu. [Paper]

  • MD-Judge & MCQ-Judge: SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models. ACL findings 2024

    Lijun Li, Bowen Dong, Ruohui Wang, Xuhao Hu, Wangmeng Zuo, Dahua Lin, Yu Qiao, and Jing Shao. [Paper]

  • SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal. ArXiv preprint 2024

    Tinghao Xie, Xiangyu Qi, Yi Zeng, Yangsibo Huang, Udari Madhushani Sehwag, Kaixuan Huang, Luxi He, Boyi Wei, Dacheng Li, Ying Sheng, et al. [Paper]

  • L-eval: Instituting standardized evaluation for long context language models. ACL 2024

    Chenxin An, Shansan Gong, Ming Zhong, Xingjian Zhao, Mukai Li, Jun Zhang, Lingpeng Kong, and Xipeng Qiu. [Paper]

  • LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks. ArXiv preprint 2024

    Yushi Bai, Shangqing Tu, Jiajie Zhang, Hao Peng, Xiaozhi Wang, Xin Lv, Shulin Cao, Jiazheng Xu, Lei Hou, Yuxiao Dong, et al. 2024. [Paper]

  • ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate. ICLR 2024

    Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. [Paper]

Reasoning
  • StrategyLLM: Large Language Models as Strategy Generators, Executors, Optimizers, and Evaluators for Problem Solving. NeurIPS 2024

    Chang Gao, Haiyun Jiang, Deng Cai, Shuming Shi, and Wai Lam. [Paper]

  • Rationale-Aware Answer Verification by Pairwise Self-Evaluation. EMNLP 2024

    Akira Kawabata and Saku Sugawara. [Paper]

  • Improving Diversity of Demographic Representation in Large Language Models via Collective-Critiques and Self-Voting. EMNLP 2023

    Preethi Lahoti, Nicholas Blumm, Xiao Ma, Raghavendra Kotikalapudi, Sahitya Potluri, Qijun Tan, Hansa Srinivasan, Ben Packer, Ahmad Beirami, Alex Beutel, and Jilin Chen. [Paper]

  • Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate. EMNLP 2024

    Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. [Paper]

  • SMoA: Improving Multi-agent Large Language Models with Sparse Mixture-of-Agents. ArXiv preprint 2024

    Dawei Li, Zhen Tan, Peijia Qian, Yifan Li, Kumar Satvik Chaudhary, Lijie Hu, and Jiayi Shen. [Paper]

  • Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning. ICLR 2023

    Antonia Creswell, Murray Shanahan, and Irina Higgins. [Paper]

  • Improving Model Factuality with Fine-grained Critique-based Evaluator. ArXiv preprint 2024

    Yiqing Xie, Wenxuan Zhou, Pradyot Prakash, Di Jin, Yuning Mao, Quintin Fettes, Arya Talebzadeh, Sinong Wang, Han Fang, Carolyn Rose, et al. [Paper]

  • Let’s verify step by step. ICLR 2023

    Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. [Paper]

  • Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning. ArXiv preprint 2024

    Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, Rishabh Agarwal, Alekh Agarwal, Jonathan Berant, and Aviral Kumar. [Paper]

  • Reasoning with Language Model is Planning with World Model. EMNLP 2023

    Shibo Hao, Yi Gu, Haodi Ma, Joshua Hong, Zhen Wang, Daisy Wang, and Zhiting Hu. [Paper]

  • Graph of Thoughts: Solving Elaborate Problems with Large Language Models. AAAI 2024

    Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, and Torsten Hoefler. [Paper]

  • Critique-out-loud reward models. ArXiv preprint 2024

    Zachary Ankner, Mansheej Paul, Brandon Cui, Jonathan D Chang, and Prithviraj Ammanabrolu. [Paper]

  • CriticEval: Evaluating Large-scale Language Model as Critic. NeurIPS 2024

    Tian Lan, Wenwei Zhang, Chen Xu, Heyan Huang, Dahua Lin, Kai Chen, and Xian-Ling Mao. [Paper]

  • MCQG-SRefine: Multiple Choice Question Generation and Evaluation with Iterative Self-Critique, Correction, and Comparison Feedback. ArXiv preprint 2024

    Zonghai Yao, Aditya Parashar, Huixue Zhou, Won Seok Jang, Feiyun Ouyang, Zhichao Yang, and Hong Yu. [Paper]

  • A Multi-AI Agent System for Autonomous Optimization of Agentic AI Solutions via Iterative Refinement and LLM-Driven Feedback Loops. ArXiv preprint 2024

    Kamer Ali Yuksel, and Hassan Sawaf. [Paper]

  • ReAct: Synergizing Reasoning and Acting in Language Models. ICLR 2023

    Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. [Paper]

  • Auto-GPT for Online Decision Making: Benchmarks and Additional Opinions. ArXiv preprint 2023

    Hui Yang, Sifu Yue, and Yunzhong He. [Paper]

  • LanguageMPC: Large Language Models as Decision Makers for Autonomous Driving. ArXiv preprint 2023

    Hao Sha, Yao Mu, Yuxuan Jiang, Li Chen, Chenfeng Xu, Ping Luo, Shengbo Eben Li, Masayoshi Tomizuka, Wei Zhan, and Mingyu Ding. [Paper]

  • SELF-DISCOVER: Large Language Models Self-Compose Reasoning Structures. NeurIPS 2024

    Pei Zhou, Jay Pujara, Xiang Ren, Xinyun Chen, Heng-Tze Cheng, Quoc V Le, Ed H Chi, Denny Zhou, Swaroop Mishra, and Huaixiu Steven Zheng. [Paper]

Retrieval
  • Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels. NAACL 2024

    Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, and Michael Bendersky. [Paper]

  • Zero-Shot Listwise Document Reranking with a Large Language Model. ArXiv preprint 2023

    Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. [Paper]

  • A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models. SIGIR 2024

    Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, and Guido Zuccon. [Paper]

  • Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models. NAACL 2024

    Raphael Tang, Crystina Zhang, Xueguang Ma, Jimmy Lin, and Ferhan Ture. [Paper]

  • Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting. NAACL findings 2024

    Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Le Yan, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky. [Paper]

  • Self-Retrieval: Building an Information Retrieval System with One Large Language Model. ArXiv preprint 2024

    Qiaoyu Tang, Jiawei Chen, Bowen Yu, Yaojie Lu, Cheng Fu, Haiyang Yu, Hongyu Lin, Fei Huang, Ben He, Xianpei Han, et al. [Paper]

  • Evaluating RAG-Fusion with RAGElo: an Automated Elo-based Framework. LLM4Eval @ SIGIR 2024

    Zackary Rackauckas, Arthur Câmara, and Jakub Zavrel. [Paper]

  • Are Large Language Models Good at Utility Judgments? SIGIR 2024

    Hengran Zhang, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, and Xueqi Cheng. [Paper]

  • BioRAG: A RAG-LLM Framework for Biological Question Reasoning. ArXiv preprint 2024

    Chengrui Wang, Qingqing Long, Xiao Meng, Xunxin Cai, Chengjun Wu, Zhen Meng, Xuezhi Wang, and Yuanchun Zhou. [Paper]

  • DALK: Dynamic Co-Augmentation of LLMs and KG to answer Alzheimer’s Disease Questions with Scientific Literature. EMNLP findings 2024

    Dawei Li, Shu Yang, Zhen Tan, Jae Young Baik, Sunkwon Yun, Joseph Lee, Aaron Chacko, Bojian Hou, Duy Duong-Tran, Ying Ding, et al. [Paper]

  • Improving medical reasoning through retrieval and self-reflection with retrieval-augmented large language models. Bioinformatics 2024

    Minbyul Jeong, Jiwoong Sohn, Mujeen Sung, and Jaewoo Kang. [Paper]

5.2 Social Intelligence

  • Academically intelligent LLMs are not necessarily socially intelligent. ArXiv preprint 2024

    Ruoxi Xu, Hongyu Lin, Xianpei Han, Le Sun, and Yingfei Sun. [Paper]

  • SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents. ICLR 2024

    Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, and Maarten Sap. [Paper]

5.3 Multi-Modal

  • MLLM-as-a-Judge: Assessing Multimodal LLM-as-a-Judge with Vision-Language Benchmark. ICML 2024

    Dongping Chen, Ruoxi Chen, Shilin Zhang, Yaochen Wang, Yinuo Liu, Huichi Zhou, Qihui Zhang, Yao Wan, Pan Zhou, and Lichao Sun. [Paper]

  • AlignMMBench: Evaluating Chinese Multimodal Alignment in Large Vision-Language Models. ArXiv preprint 2024

    Yuhang Wu, Wenmeng Yu, Yean Cheng, Yan Wang, Xiaohan Zhang, Jiazheng Xu, Ming Ding, and Yuxiao Dong. [Paper]

  • Multi-Modal and Multi-Agent Systems Meet Rationality: A Survey. ICML Workshop on LLMs and Cognition 2024

    Bowen Jiang, Yangxinyu Xie, Xiaomeng Wang, Weijie J Su, Camillo Jose Taylor, and Tanwi Mallick. [Paper]

  • LLaVA-Critic: Learning to Evaluate Multimodal Models. ArXiv preprint 2024

    Tianyi Xiong, Xiyao Wang, Dong Guo, Qinghao Ye, Haoqi Fan, Quanquan Gu, Heng Huang, and Chunyuan Li. [Paper]

  • Automated Evaluation of Large Vision-Language Models on Self-driving Corner Cases. ArXiv preprint 2024

    Kai Chen, Yanze Li, Wenhua Zhang, Yanxin Liu, Pengxiang Li, Ruiyuan Gao, Lanqing Hong, Meng Tian, Xinhai Zhao, Zhenguo Li, et al. [Paper]

5.4 Other Specific Domains

Finance
  • Revolutionizing Finance with LLMs: An Overview of Applications and Insights. ArXiv preprint 2024

    Huaqin Zhao, Zhengliang Liu, Zihao Wu, Yiwei Li, Tianze Yang, Peng Shu, Shaochen Xu, Haixing Dai, Lin Zhao, Gengchen Mai, et al. [Paper]

  • Mixing It Up: The Cocktail Effect of Multi-Task Fine-Tuning on LLM Performance -- A Case Study in Finance. ArXiv preprint 2024

    Meni Brief, Oded Ovadia, Gil Shenderovitz, Noga Ben Yoash, Rachel Lemberg, and Eitam Sheetrit. [Paper]

  • FinCon: A Synthesized LLM Multi-Agent System with Conceptual Verbal Reinforcement for Enhanced Financial Decision Making. NeurIPS 2024

    Yangyang Yu, Zhiyuan Yao, Haohang Li, Zhiyang Deng, Yupeng Cao, Zhi Chen, Jordan W Suchow, Rong Liu, Zhenyu Cui, Denghui Zhang, et al. [Paper]

  • UCFE: A User-Centric Financial Expertise Benchmark for Large Language Models. ArXiv preprint 2024

    Yuzhe Yang, Yifei Zhang, Yan Hu, Yilin Guo, Ruoli Gan, Yueru He, Mingcong Lei, Xiao Zhang, Haining Wang, Qianqian Xie, et al. [Paper]

  • Cracking the Code: Multi-domain LLM Evaluation on Real-World Professional Exams in Indonesia. ArXiv preprint 2024

    Fajri Koto. [Paper]

  • Constructing Domain-Specific Evaluation Sets for LLM-as-a-judge. Workshop CustomNLP4U 2024

    Ravi Raju, Swayambhoo Jain, Bo Li, Jonathan Li, and Urmish Thakkar. [Paper]

  • QuantAgent: Seeking Holy Grail in Trading by Self-Improving Large Language Model. ArXiv preprint 2024

    Saizhuo Wang, Hang Yuan, Lionel M. Ni, and Jian Guo. [Paper]

  • GPT classifications, with application to credit lending. Machine Learning with Applications 2024

    Golnoosh Babaei and Paolo Giudici. [Paper]

  • Design and Implementation of an LLM system to Improve Response Time for SMEs Technology Credit Evaluation. IJASC 2023

    Sungwook Yoon. [Paper]

Law
  • Leveraging Large Language Models for Relevance Judgments in Legal Case Retrieval. ArXiv preprint 2024

    Shengjie Ma, Chong Chen, Qi Chu, and Jiaxin Mao. [Paper]

  • (A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice. FACCT 2024

    Inyoung Cheong, King Xia, KJ Kevin Feng, Quan Ze Chen, and Amy X Zhang. [Paper]

  • Retrieval-based Evaluation for LLMs: A Case Study in Korean Legal QA. Workshop NLLP 2023

    Cheol Ryu, Seolhwa Lee, Subeen Pang, Chanyeol Choi, Hojun Choi, Myeonggee Min, and Jy-Yong Sohn. [Paper]

  • LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models. NeurIPS 2023

    Neel Guha, Julian Nyarko, Daniel E. Ho, Christopher Ré, Adam Chilton, Aditya K, Alex Chohlas-Wood, Austin Peters, Brandon Waldon, Daniel N. Rockmore, Diego Zambrano, Dmitry Talisman, Enam Hoque, Faiz Surani, Frank Fagan, Galit Sarfaty, Gregory M. Dickinson, Haggai Porat, Jason Hegland, Jessica Wu, Joe Nudell, Joel Niklaus, John J. Nay, Jonathan H. Choi, Kevin Tobia, Margaret Hagan, Megan Ma, Michael A. Livermore, Nikon Rasumov-Rahe, Nils Holzenberger, Noam Kolt, Peter Henderson, Sean Rehaag, Sharad Goel, Shang Gao, Spencer Williams, Sunny Gandhi, Tom Zur, Varun Iyer, and Zehua Li. [Paper]

  • LexEval: A Comprehensive Chinese Legal Benchmark for Evaluating Large Language Models. NeurIPS 2024

    Haitao Li, You Chen, Qingyao Ai, Yueyue Wu, Ruizhe Zhang, and Yiqun Liu. [Paper]

  • Evaluation Ethics of LLMs in Legal Domain. ArXiv preprint 2024

    Ruizhe Zhang, Haitao Li, Yueyue Wu, Qingyao Ai, Yiqun Liu, Min Zhang, and Shaoping Ma. [Paper]

AI for Science
  • LLMs in medicine: The need for advanced evaluation systems for disruptive technologies. The Innovation 2024

    Yi-Da Tang, Er-Dan Dong, and Wen Gao. [Paper]

  • Artificial intelligence for geoscience: Progress, challenges, and perspectives. The Innovation 2024

    Tianjie Zhao, Sheng Wang, Chaojun Ouyang, Min Chen, Chenying Liu, Jin Zhang, Long Yu, Fei Wang, Yong Xie, Jun Li, et al. [Paper]

  • Harnessing the power of artificial intelligence to combat infectious diseases: Progress, challenges, and future outlook. The Innovation Medicine 2024

    Hang-Yu Zhou, Yaling Li, Jia-Ying Li, Jing Meng, and Aiping Wu. [Paper]

  • Comparing Two Model Designs for Clinical Note Generation; Is an LLM a Useful Evaluator of Consistency? NAACL findings 2024

    Nathan Brake and Thomas Schaaf. [Paper]

  • Towards Leveraging Large Language Models for Automated Medical Q&A Evaluation. ArXiv preprint 2024

    Jack Krolik, Herprit Mahal, Feroz Ahmad, Gaurav Trivedi, and Bahador Saket. [Paper]

  • MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts. ICLR 2024

    Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. [Paper]

  • WizardMath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. ArXiv preprint 2023

    Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. [Paper]

  • Solving Math Word Problems via Cooperative Reasoning induced Language Models. ACL 2023

    Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Yongfeng Huang, Ruyi Gan, Jiaxing Zhang, and Yujiu Yang. [Paper]

Others
  • LLMs as Evaluators: A Novel Approach to Evaluate Bug Report Summarization. ArXiv preprint 2024

    Abhishek Kumar, Sonia Haiduc, Partha Pratim Das, and Partha Pratim Chakrabarti. [Paper]

  • Automated Essay Scoring and Revising Based on Open-Source Large Language Models. IEEE Transactions on Learning Technologies 2024

    Yishen Song, Qianta Zhu, Huaibo Wang, and Qinhua Zheng. [Paper]

  • LLM-Mod: Can Large Language Models Assist Content Moderation? CHI EA 2024

    Mahi Kolla, Siddharth Salunkhe, Eshwar Chandrasekharan, and Koustuv Saha. [Paper]

  • Can LLM be a Personalized Judge? EMNLP findings 2024

    Yijiang River Dong, Tiancheng Hu, and Nigel Collier. [Paper]

6 Challenges

6.1 Reliability

Overconfidence
  • Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback. EMNLP 2023

    Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher Manning. [Paper]

Fairness and Generalization

6.2 Robustness

  • Prompt Packer: Deceiving LLMs through Compositional Instruction with Hidden Attacks. ArXiv preprint 2023

    Shuyu Jiang, Xingshu Chen, and Rui Tang. [Paper]

  • "Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models. CCS 2024

    Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, and Yang Zhang. [Paper]

  • Universal and Transferable Adversarial Attacks on Aligned Language Models. ArXiv preprint 2023

    Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, and Matt Fredrikson. [Paper]
