Commit 91c7740: adjust image positions to fix overview
merrymercy committed May 4, 2024 (1 parent ce59383)
Showing 1 changed file with 2 additions and 2 deletions: blog/2024-05-02-kaggle-competition.md
date: "May 2, 2024"
previewImg: /images/blog/kaggle_competition/thumb_4x.png
---

<img src="/images/blog/kaggle_competition/header_4x.png" style="width: 60%; max-width: 60%; margin-left: auto; margin-right: auto; margin-top: 0px; margin-bottom: 0px" />

### Overview

LMSYS and Kaggle are launching a human preference prediction competition! You are challenged to predict which responses users will prefer in head-to-head battles between Large Language Models (LLMs). You'll work with a dataset from the [Chatbot Arena](https://chat.lmsys.org) containing real-world conversations and user preferences across more than 70 state-of-the-art LLMs, such as GPT-4, Claude 2, Llama 2, Gemini, and Mistral models. By developing a model that accurately predicts human preferences, you'll help improve chatbot performance and alignment with user expectations. The training set includes over 55,000 real-world conversations between users and LLMs, along with the users' preferences, with personally identifiable information removed. Your submission will be evaluated on a hidden test set of 25,000 samples. [Click here to join the competition](https://www.kaggle.com/competitions/lmsys-chatbot-arena/overview) and download the dataset!
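To make the prediction task concrete, here is a minimal, self-contained sketch of the three-way setup (model_a wins, model_b wins, or tie) scored with multi-class log loss, the metric commonly used for this kind of competition. The label layout and the uniform baseline below are illustrative assumptions, not the official starter code:

```python
import math

# Hypothetical mini-dataset: one-hot true outcomes per battle over
# (model_a wins, model_b wins, tie). The column order is an assumption
# for illustration; check the competition page for the exact format.
labels = [
    [1, 0, 0],  # model_a won
    [0, 1, 0],  # model_b won
    [0, 0, 1],  # tie
]

def log_loss(y_true, y_pred, eps=1e-15):
    """Multi-class log loss averaged over rows; eps guards against log(0)."""
    total = 0.0
    for truth, pred in zip(y_true, y_pred):
        total -= sum(t * math.log(max(p, eps)) for t, p in zip(truth, pred))
    return total / len(y_true)

# A no-information baseline predicts 1/3 for every outcome; its loss is
# ln(3) ≈ 1.0986, the score any real preference model needs to beat.
uniform = [[1 / 3, 1 / 3, 1 / 3]] * len(labels)
print(round(log_loss(labels, uniform), 4))  # → 1.0986
```

Any useful model must push the loss below this uniform-guess floor by learning which response qualities actually drive user votes.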

### Background

Current LLM benchmarks often fail to capture real-world usage, resulting in a discrepancy between measured model performance and user satisfaction. Platforms like Chatbot Arena let users submit questions and vote on preferred responses; however, this data has been largely untapped for developing models that predict and optimize for user preferences at scale. Predicting user preferences is essential for creating human-aligned conversational AI that delivers a satisfying user experience. Successful models could enable language models to dynamically adapt their output to individual preferences across different contexts and use cases.

Moreover, this competition aims to uncover the factors that drive user preferences beyond objective correctness. Many user questions are open-ended, and we have already found a correlation between user preference and subjective qualities such as conversationality. The dataset could also serve as one of the best testbeds for reward modeling in RLHF algorithms.
