diff --git a/bootcamp/tutorials/quickstart/use_ColPali_with_milvus.ipynb b/bootcamp/tutorials/quickstart/use_ColPali_with_milvus.ipynb
index 95aa089b3..1253cf5c4 100644
--- a/bootcamp/tutorials/quickstart/use_ColPali_with_milvus.ipynb
+++ b/bootcamp/tutorials/quickstart/use_ColPali_with_milvus.ipynb
@@ -16,9 +16,8 @@
 "\n",
 "Modern retrieval models typically use a single embedding to represent text or images. ColBERT, however, is a neural model that utilizes a list of embeddings for each data instance and employs a \"MaxSim\" operation to calculate the similarity between two texts. Beyond textual data, figures, tables, and diagrams also contain rich information, which is often disregarded in text-based information retrieval.\n",
 "\n",
-"$$\n",
-"S_{q,d} := \\sum_{i \\in |E_q|} \\max_{j \\in |E_d|} E_{q_i} \\cdot E_{d_j}^T\n",
-"$$\n",
+"![](../../../images/colpali_formula.png)\n",
+"\n",
 "MaxSim function compares a query with a document (what you're searching in) by looking at their token embeddings. For each word in the query, it picks the most similar word from the document (using cosine similarity or squared L2 distance) and sums these maximum similarities across all words in the query\n",
 "\n",
 "ColPali is a method that combines ColBERT's multi-vector representation with PaliGemma (a multimodal large language model) to leverage its strong understanding capabilities. This approach enables a page with both text and images to be represented using a unified multi-vector embedding. The embeddings within this multi-vector representation can capture detailed information, improving the performance of retrieval-augmented generation (RAG) for multimodal data.\n",
@@ -473,4 +472,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 2
-}
+}
\ No newline at end of file
diff --git a/images/colpali_formula.png b/images/colpali_formula.png
new file mode 100644
index 000000000..ba6385922
Binary files /dev/null and b/images/colpali_formula.png differ
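The MaxSim scoring described in the patched text (the formula $S_{q,d} := \sum_{i \in |E_q|} \max_{j \in |E_d|} E_{q_i} \cdot E_{d_j}^T$, now rendered as an image) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the tutorial's actual code; the function name `maxsim` and the assumption that embeddings are L2-normalized (so the dot product is cosine similarity) are mine:

```python
import numpy as np

def maxsim(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """Late-interaction MaxSim score between a query and a document.

    query_emb: (num_query_tokens, dim) token embeddings, assumed L2-normalized
    doc_emb:   (num_doc_tokens, dim) token embeddings, assumed L2-normalized
    """
    # Pairwise similarity of every query token against every document token.
    sim = query_emb @ doc_emb.T  # shape: (num_query_tokens, num_doc_tokens)
    # For each query token, keep its best-matching document token, then sum.
    return float(sim.max(axis=1).sum())
```

Scoring a query against every page embedding and taking the top-k of these sums is what turns the multi-vector representation into a ranking.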