# Precise Zero-Shot Dense Retrieval without Relevance Labels (HyDE)
###### tags: `notes`, `NLP`
> https://arxiv.org/abs/2212.10496
> ACL 2023

## Abstract
- Dense retrieval has demonstrated its effectiveness and efficiency across a wide range of tasks and languages. However, building an effective, fully zero-shot dense retrieval system without relevance labels remains a major challenge. This paper proposes Hypothetical Document Embeddings (HyDE): an instruction-following language model (e.g. InstructGPT) is first zero-shot prompted to generate a hypothetical document, and an unsupervised contrastively learned encoder (e.g. Contriever) then encodes that document into an embedding vector. This vector identifies a neighborhood in the corpus embedding space, from which relevant real documents are retrieved by vector similarity.

## Introduction
- Background: Dense retrieval has shown success in tasks like web search and question answering. However, zero-shot dense retrieval remains challenging without relevance labels. The paper introduces HyDE to overcome these limitations by leveraging instruction-following language models and unsupervised contrastive encoders.
- Problem Statement: Developing a fully zero-shot dense retrieval system that requires no relevance supervision and generalizes across tasks and languages.

## Methodology
- Hypothetical Document Embeddings (HyDE)
    - ![image](https://hackmd.io/_uploads/r15DhCWA6.png)
    - HyDE decomposes dense retrieval into two tasks: a generation task handled by an instruction-following language model, and a document-document similarity task handled by a contrastive encoder. The generation step aims to capture "relevance": the generated hypothetical document may contain factual errors, yet it resembles a relevant document. The dense bottleneck of the unsupervised contrastive encoder used in the second step filters out the extra (hallucinated) details.
- HyDE Model: The paper proposes the HyDE model, which consists of two main components: an instruction-following language model (e.g., InstructGPT) to generate a hypothetical document based on the query, and an unsupervised contrastively learned encoder (e.g., Contriever) to encode the generated document for retrieval.
    - HyDE retrievers share the exact same embedding spaces with Contriever and mContriever.
- Implementation Details: HyDE operates by first instructing the language model to generate a document that answers the query. This hypothetical document, despite potentially containing factual inaccuracies, captures the essence of relevance. The document is then encoded into an embedding vector, which is used to retrieve similar documents from the corpus.

## Experiments
- Datasets: HyDE is evaluated on multiple datasets, including web search query sets (TREC DL19 and DL20), low-resource datasets from the BEIR benchmark, and non-English datasets from Mr.TyDi covering languages such as Swahili, Korean, and Japanese.
- Metrics: Performance is reported with standard retrieval metrics such as mean average precision (MAP), normalized discounted cumulative gain (nDCG), and recall. The experiments demonstrate that HyDE significantly improves upon baseline models, including unsupervised dense retrievers and lexical retrievers like BM25.
- ![image](https://hackmd.io/_uploads/rJdI6zDxkl.png)
- ![image](https://hackmd.io/_uploads/HyIQHQDx1x.png)
- ![image](https://hackmd.io/_uploads/ryjDaMwl1g.png)
- The experiments show that HyDE significantly outperforms the state-of-the-art unsupervised dense retriever Contriever and delivers strong performance comparable to fine-tuned retrievers.

## Conclusion
- The paper introduces a new interaction paradigm between large language models and dense encoders/retrievers, showing that relevance modeling and instruction understanding can be (partially) delegated to the more powerful and flexible large language model, thereby removing the need for relevance labels. The work is also valuable in practice, especially at the very beginning of a search system's life, where HyDE can offer performance comparable to fine-tuned models.

## Takeaways
- HyDE first zero-shot instructs an instruction-following language model (e.g. InstructGPT) to generate a **hypothetical document**.
- An unsupervised **contrastively** learned encoder (e.g. **Contriever**) encodes the document into an embedding vector.
    - Contrastive learning is what closes the distance between the generated document and real documents in the embedding space (see the sketch below).
- Strong performance comparable to fine-tuned retrievers across various tasks (e.g. web search, QA, fact verification) and languages.
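- Below is a minimal sketch of how the HyDE pipeline could be wired together. It is my own illustration, not code from the paper: the names `embed` and `hyde_search` and the example prompt wording are assumptions, while the `facebook/contriever` checkpoint and its mean-pooling usage follow the public Hugging Face model card. The hypothetical documents are assumed to come from whichever instruction-following LLM you have access to.

    ```python
    # Rough HyDE pipeline sketch (illustrative, not the authors' code).
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("facebook/contriever")
    encoder = AutoModel.from_pretrained("facebook/contriever")

    def mean_pooling(token_embeddings, mask):
        # Contriever pools by averaging token embeddings over non-padding positions.
        token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.0)
        return token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None]

    def embed(texts):
        # Encode a list of strings into dense vectors with the unsupervised encoder.
        inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            outputs = encoder(**inputs)
        return mean_pooling(outputs.last_hidden_state, inputs["attention_mask"])

    def hyde_search(query, hypothetical_docs, corpus_embeddings, k=10):
        # `hypothetical_docs` are passages sampled from an instruction-following LLM,
        # e.g. prompted with "Please write a passage to answer the question: {query}".
        # The paper averages the embeddings of the sampled documents and the query
        # into a single HyDE vector before searching the corpus.
        vectors = embed([query] + hypothetical_docs)
        hyde_vector = vectors.mean(dim=0, keepdim=True)
        scores = hyde_vector @ corpus_embeddings.T  # inner-product similarity
        return scores.topk(min(k, corpus_embeddings.shape[0])).indices.squeeze(0)
    ```

    Here `corpus_embeddings` would be a tensor precomputed once by running `embed` over the document collection; at search time only the query-side HyDE vector changes, so retrieval stays in the exact same embedding space as plain Contriever.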
- The authors see practical value: HyDE can be used at the very start of a search system's life, when no relevance data is available, offering performance that no other relevance-free model can offer; once search logs accumulate, a supervised dense retriever can be trained and gradually rolled out to replace it.

> STATEMENT: The contents shared herein are quoted verbatim from the original author and are intended solely for personal note-taking and reference purposes following a thorough reading. Any interpretation or annotation provided is strictly personal and does not claim to reflect the author's intended meaning or context.