diff --git a/blog/2024-05-02-kaggle-competition.md b/blog/2024-05-02-kaggle-competition.md
index b02ff67f..b0205893 100644
--- a/blog/2024-05-02-kaggle-competition.md
+++ b/blog/2024-05-02-kaggle-competition.md
@@ -5,13 +5,13 @@
 date: "May 2, 2024"
 previewImg: /images/blog/kaggle_competition/thumb_4x.png
 ---
-
-
 ### Overview
 LMSYS and Kaggle are launching a human preference prediction competition! You are challenged to predict which responses users will prefer in head-to-head battles between Large Language Models (LLMs). You'll work with a dataset from the [Chatbot Arena](https://chat.lmsys.org), containing conversations and user preference votes across various LLMs. By developing a model that accurately predicts human preferences, you'll contribute to improving chatbot performance and alignment with user expectations. The training dataset includes over 55,000 real-world conversations between users and LLMs, together with the users' preference votes; personally identifiable information has been removed. Your submission will be evaluated on a hidden test set of 25,000 samples. The dataset covers over 70 state-of-the-art LLMs, including GPT-4, Claude 2, Llama 2, Gemini, and Mistral models.
 [Click here to join the competition](https://www.kaggle.com/competitions/lmsys-chatbot-arena/overview) and download the dataset!
+
+
 ### Background
 Current LLM benchmarks often fail to capture real-world LLM usage, resulting in a discrepancy between measured model performance and user satisfaction. Platforms like Chatbot Arena allow users to submit questions and vote on their preferred responses; however, the potential of this data has been largely untapped for developing models that predict and optimize for user preferences at scale.
 Predicting user preferences is essential for creating human-aligned conversational AI that delivers a satisfying user experience. Successful models could enable language models to dynamically adapt their output based on individual preferences across different contexts and use cases.
 Moreover, this competition aims to uncover the factors that drive user preferences beyond objective correctness. Many user questions are open-ended, and we have already found a correlation between user preference and subjective qualities such as conversationality. This could also be one of the best testbeds for reward modeling in your RLHF algorithms.
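
As a rough illustration of the prediction task the post describes, the sketch below trains a simple TF-IDF plus logistic-regression baseline over the three possible outcomes (model A preferred, model B preferred, tie). The file name `train.csv` and the column names `prompt`, `response_a`, `response_b`, `winner_model_a`, `winner_model_b`, and `winner_tie` are assumptions about the released data layout, not details confirmed in the post; check the competition page for the actual schema.

```python
# Minimal baseline sketch for the preference-prediction task (assumed schema, not the
# organizers' reference solution). Column and file names below are assumptions.
import numpy as np
import pandas as pd
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

train = pd.read_csv("train.csv")  # assumed training file from the competition data

# Collapse the assumed one-hot outcome columns into a single class label:
# 0 = model A preferred, 1 = model B preferred, 2 = tie.
y = np.argmax(train[["winner_model_a", "winner_model_b", "winner_tie"]].values, axis=1)

# Represent each battle by TF-IDF features of the prompt and the two responses.
vec = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
vec.fit(pd.concat([train["prompt"], train["response_a"], train["response_b"]]))
X = hstack([
    vec.transform(train["prompt"]),
    vec.transform(train["response_a"]),
    vec.transform(train["response_b"]),
])

# Hold out a validation split and report log loss on predicted class probabilities.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.1, random_state=0)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_tr, y_tr)
print("validation log loss:", log_loss(y_va, clf.predict_proba(X_va)))
```

A bag-of-words baseline like this mainly serves to sanity-check the data pipeline; stronger approaches would fine-tune a pretrained language model as a pairwise reward model, in the spirit of the RLHF reward modeling the post mentions.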