
Entry page for AI / ML study notes

Deeplearning.ai GenAI/LLM course series notes


2022. AACL-IJCNLP. Recent Advances in Pre-trained Language Models: Why Do They Work and How to Use Them

  • Authors: Cheng-Han Chiang, Yung-Sung Chuang, Hung-yi Lee

Produced by Prof. Hung-yi Lee's lab, this is an excellent overview of recent progress in language models (as of 2022), from the generation just before ChatGPT arrived

Slide outline:

  • Part 1 Introduction
    • Framework of Pre-training
  • Part 2 Why do PLMs work
    • Contextualized Word Representations
      • Context is considered!
      • "lie" 在不同語境(Context)下的分群
  • Part 3 How to use PLMs: Contrastive Learning for PLMs
  • Part 4 How to use PLMs: Parameter-efficient fine-tuning
  • Part 5 How to use PLMs: Using PLMs with different amounts of data

Personal supplement - understanding Prompt Engineering and Parameter-Efficient Fine-Tuning (PEFT) from the perspective of the underlying numerical space

  • Prompt Engineering

    • Mathematically, prompt engineering can be viewed as searching for an optimal solution in a discrete search space made up of all possible natural-language prompts. Each candidate prompt is a candidate solution, and the goal is to find the prompt that best steers the language model (LM) toward the desired output
    • Discreteness: prompts are written in natural language and cannot be adjusted by simple mathematical operations (such as addition or multiplication). Each prompt is an independent, discontinuous point; there is no smooth mathematical path between two prompts.
    • Search method: the best prompt is found by trial and error, or by more advanced search algorithms, within this discrete space. This usually means creating multiple prompt variants, scoring them, and keeping the best one
    • Prompting strategies or frameworks: these are not just about finding a single effective prompt, but about designing prompts that guide the model to reason in a more structured and consistent way, e.g. Chain-of-Thought (CoT), Tree of Thoughts (ToT), and Self-Consistency
  • PEFT (Parameter-Efficient Fine-Tuning)

    • PEFT, by contrast, operates in a continuous mathematical space formed by the model's parameters. Here, fine-tuning applies gradient descent (or another optimization algorithm) to search the parameter space for a set of parameters that optimizes the model's performance

Comparing the two: prompt engineering is intuitive and closer to human reasoning, while PEFT is more mathematical and requires compute resources for numerical optimization. Prompt engineering is conceptually simpler but may demand more creativity and manual effort; PEFT is computationally heavier but offers finer-grained control over the model and can be automated
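A minimal sketch of this contrast. The `score_prompt` scorer, the candidate prompts, and the toy `nn.Linear` module are illustrative assumptions standing in for a real LM evaluation and a real PEFT module; the point is only that prompt search iterates over discrete candidates while PEFT takes gradient steps in a continuous parameter space.

```python
import torch
import torch.nn as nn

# --- Prompt engineering: discrete search over candidate prompts ---------------
# `score_prompt` is a hypothetical black-box metric (e.g. accuracy of the LM's
# answers on a small validation set); no gradients flow through it.
def search_prompts(candidates, score_prompt):
    scored = [(score_prompt(p), p) for p in candidates]  # try each discrete variant
    return max(scored)[1]                                # keep the best-scoring prompt

best = search_prompts(
    ["Summarize the text:", "TL;DR:", "In one sentence, the text says:"],
    score_prompt=lambda p: len(p),   # stand-in scorer, for illustration only
)

# --- PEFT: continuous optimization over a small set of parameters -------------
# A toy trainable module stands in for adapters / LoRA / soft prompts; gradient
# descent moves its parameters smoothly, which has no analogue in prompt space.
small_module = nn.Linear(16, 16)
optimizer = torch.optim.AdamW(small_module.parameters(), lr=1e-3)

x, target = torch.randn(4, 16), torch.randn(4, 16)
loss = nn.functional.mse_loss(small_module(x), target)
loss.backward()      # gradients exist because the parameter space is continuous
optimizer.step()     # one small step in that space
```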

Part 4 How to use PLMs: Parameter-efficient fine-tuning

Below are excerpts covering four important PEFT methods (as of 2022)

  • Part 4 How to use PLMs: Parameter-efficient fine-tuning
    • intro
      • PLMs are gigantic

        • Need a copy for each downstream task
        • Problem: PLMs are gigantic (in terms of numbers of parameters, model size, and the storage needed to store the model)
        • Solution: Reduce the number of parameters by parameter-efficient fine-tuning
        • Use a small amount of parameters for each downstream task


        A solution proposed in the BERT era to address the parameter cost of adapting pre-trained models to downstream tasks

      • What is standard fine-tuning really doing?

        • Modify the hidden representations (h) of the PLM such that it can perform well on the downstream task
      • Fine-tuning = modifying the hidden representation based on a PLM


4-1 Adapter

  • 4-1 Adapter
    • pseudo code

        def transformer_block_with_adapter(x):
            residual = x
            x = SelfAttention(x)
            x = FFN(x)  # adapter
            x = LN(x + residual)
            residual = x
            x = FFN(x)  # transformer FFN
            x = FFN(x)  # adapter
            x = LN(x + residual)
            return x
    Arguably the pioneering work on parameter-efficient tuning
    • Use special submodules to modify hidden representations!


    • small trainable submodules inserted in transformers
      Small trainable compute modules are inserted into the Transformer blocks


      • Internal structure of adapters
        • Uses a ResNet-like skip connection, i.e. a module-level residual design with a bottleneck down- and up-projection, somewhat like a tiny U-Net (a PyTorch sketch appears at the end of this section)
    • During fine-tuning, only update the adapters and the classifier head
      During gradient updates the pre-trained model is frozen; only the adapters and the classifier head that bridges to the downstream task are updated


    • All downstream tasks share the PLM; the adapters in each layer and the classifier heads are the task-specific modules
      All downstream tasks share the pre-trained language model (PLM, i.e. what we would now call an LLM); each downstream task gets its own task-specific adapters and classifier head
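A minimal PyTorch sketch of the bottleneck adapter idea sketched above (down-projection, nonlinearity, up-projection, wrapped in a module-level residual); the dimensions and module names are illustrative assumptions, not the exact implementation from the adapter paper.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, plus a skip connection."""
    def __init__(self, d_model: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)  # down-projection
        self.up = nn.Linear(bottleneck, d_model)    # up-projection
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # module-level residual

# During fine-tuning the PLM itself is frozen; only adapters and the classifier
# head are trainable, e.g.:
#   for p in plm.parameters():
#       p.requires_grad = False
h = torch.randn(2, 10, 768)   # (batch, seq, hidden) from a frozen PLM sublayer
print(Adapter()(h).shape)     # torch.Size([2, 10, 768])
```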

4-2 LoRA: Low-Rank Adaptation of Large Language Models

  • 4-2 LoRA: Low-Rank Adaptation of Large Language Models
    • pseudo code

        def lora_linear(x):
            h = x @ W            # regular linear
            h += x @ W_A @ W_B   # low-rank update
            return scale * h
    • The up-projection and down-projection modules in each Transformer layer are each paired with a set of trainable low-rank parameters

      • This adopts the parallel, widening branch design common in recent years; besides enriching representation learning and expressiveness, it presumably also makes it easier to parallelize the computation for distributed execution
    • LoRA design concept


      1. Motivation: Downstream fine-tuning has a low intrinsic dimension

        • Even though the original large language model has an enormous number of parameters, adjusting only a small fraction of them is enough to handle a specific downstream task
      2. Weight update: weight after fine-tuning = $W_0$ (pre-trained weight) + $\Delta W$ (update to the weight)

        The fine-tuned weight can be written as $W_0$ (the original pre-trained weight) plus $\Delta W$ (the weight update)

      3. Assumption: the weight update $\Delta W$ also has a low intrinsic rank

        Although a huge number of weights could in principle be updated, the update effectively lives in a low-rank subspace: only a small part of it actually needs to change

      4. Fine-tuned weight: $W = W_0 + \Delta W = W_0 + BA$, with rank $r \ll \min(d_{FFW}, d_{model})$

        The fine-tuned weight $W$ is the original weight $W_0$ plus the product of two low-rank matrices $B$ and $A$, where the rank $r$ of this factorization is much smaller than both $d_{FFW}$ and $d_{model}$ (a PyTorch sketch appears at the end of this section)

    • All downstream tasks share the PLM; the LoRA in each layer and the classifier heads are the task-specific modules

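A minimal PyTorch sketch of the low-rank update $W_0 x + BAx$ described above. The rank, scaling factor, and initialization (A small random, B zero so that $\Delta W = 0$ at the start) follow common practice but are assumptions here, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer W0 plus a trainable low-rank update BA."""
    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.W0 = nn.Linear(d_in, d_out, bias=False)
        self.W0.weight.requires_grad = False                 # pre-trained weight stays frozen
        self.A = nn.Parameter(torch.randn(d_in, r) * 0.01)   # low-rank factor A
        self.B = nn.Parameter(torch.zeros(r, d_out))         # B starts at zero, so ΔW = 0
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.W0(x) + self.scale * (x @ self.A @ self.B)  # W0·x + (BA)·x

layer = LoRALinear(768, 768, r=8)
x = torch.randn(2, 10, 768)
print(layer(x).shape)   # torch.Size([2, 10, 768])
# Only A and B (plus the classifier head) receive gradients during fine-tuning.
```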

4-3 Prefix tuning

  • 4-3 Prefix tuning

    • pseudo code

        def transformer_block_for_prefix_tuning(x):
            soft_prompt = FFN(soft_prompt)
            x = concat([soft_prompt, x], dim=seq)
            return transformer_block(x)

    Prefix tuning is a fine-tuning technique in which only a subset of the parameters (the prefix) is trainable, while the rest of the model stays frozen

    • Insert trainable prefix in each layer
      A trainable prefix is inserted in every layer of the model (prompt tuning does this only at the input layer). The prefix parameters are updated during fine-tuning while the body of the model is not


    • Standard Self-Attention


    • Insert trainable prefix


    • Only the prefix keys and values are updated during fine-tuning (a minimal sketch of this follows at the end of this section)


    • Personal thoughts

      • Although only the prefix parameters are trainable, doesn't prepending the prefix still increase the cost of the attention matrix computations?
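A minimal single-head sketch of the mechanism above: trainable prefix key/value vectors are concatenated in front of the keys and values computed from the input, and only the prefix tensors would receive gradients. The shapes and the single-head simplification are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

d_model, prefix_len, seq_len = 64, 5, 10

# Frozen projections from the pre-trained model (single head, for simplicity).
Wq = torch.randn(d_model, d_model)
Wk = torch.randn(d_model, d_model)
Wv = torch.randn(d_model, d_model)

# Trainable prefix: extra key/value vectors prepended in every layer.
prefix_k = torch.randn(prefix_len, d_model, requires_grad=True)
prefix_v = torch.randn(prefix_len, d_model, requires_grad=True)

x = torch.randn(seq_len, d_model)               # hidden states entering the layer
q, k, v = x @ Wq, x @ Wk, x @ Wv
k = torch.cat([prefix_k, k], dim=0)             # keys:   (prefix_len + seq_len, d)
v = torch.cat([prefix_v, v], dim=0)             # values: (prefix_len + seq_len, d)

attn = F.softmax(q @ k.T / d_model ** 0.5, dim=-1)  # queries also attend to the prefix
out = attn @ v                                      # (seq_len, d_model)
print(out.shape)  # torch.Size([10, 64])
```

Note that the attention matrix grows from seq_len x seq_len to seq_len x (prefix_len + seq_len), which is consistent with the observation above: the prefix adds some compute even though it adds few trainable parameters.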

4-4 (Soft) Prompt tuning / Soft Prompting

  • 4-4 (Soft) Prompt tuning / Soft Prompting

    • Prompt tuning: a trainable prefix embedding is prepended at the input layer, rather than adding fixed vocabulary words directly to the input sentence (i.e. hard prompting). Prefix tuning, by contrast, inserts parameterized prefixes in every layer (a sketch appears at the end of this section)
      • In plain terms, the prompt is encoded as numerical embedding parameters, and these parameters can be optimized via gradient descent and backpropagation
    • pseudo code

        def soft_prompted_model(input_ids):
            x = Embed(input_ids)
            x = concat([soft_prompt, x], dim=seq)
            return model(x)
    • Prepend the prefix embedding at the input layer
    • Soft prompting can be considered a softened version of prompting
      • (Hard) prompting: add words in the input sentence

      • Hard Prompts: words (that are originally in the vocabulary)
        Fixed words drawn from the original vocabulary are added directly to the input sentence

      • Soft Prompts: vectors (can be initialized from some word embeddings)

        • Vectors prepended at the input layer (they can be initialized from existing word embeddings)
        • These vectors need not correspond to actual vocabulary items; they can be viewed as a soft version of a prompt
        • They are learned via backpropagation (i.e. optimized with gradient methods in a continuous space) and can be tuned to incorporate signal from any number of labeled examples (tokens)
    • Length of the soft prompt:
      • The soft prompt must be long enough for the model to capture sufficient information from it.
      • Once the soft prompt is long enough, further increasing its length yields diminishing returns; beyond some point, more prompt tokens no longer bring a noticeable performance gain.
    • Limitations of soft prompts:
      • Word-vector constraint: soft prompt vectors can be initialized from word embeddings but need not map to actual vocabulary items.
      • Length constraint: the soft prompt should be long enough, yet overly long prompts may not yield further gains and can increase compute cost
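A minimal PyTorch sketch of prompt tuning as described above: trainable soft prompt embeddings are prepended to the frozen token embeddings at the input layer only. The vocabulary size, hidden size, and number of virtual tokens are illustrative assumptions.

```python
import torch
import torch.nn as nn

vocab_size, d_model, n_virtual_tokens = 32000, 768, 20

embed = nn.Embedding(vocab_size, d_model)   # frozen PLM token embeddings
embed.weight.requires_grad = False

# Soft prompt: the only trainable parameters; may be initialized from word embeddings.
soft_prompt = nn.Parameter(embed.weight[:n_virtual_tokens].clone())

input_ids = torch.randint(0, vocab_size, (2, 12))           # (batch, seq)
x = embed(input_ids)                                        # (2, 12, 768)
prompt = soft_prompt.unsqueeze(0).expand(x.size(0), -1, -1)
x = torch.cat([prompt, x], dim=1)                           # (2, 20 + 12, 768)
print(x.shape)   # torch.Size([2, 32, 768]) -- fed into the frozen PLM as usual
```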

Conclusion

  • Benefit 1: Drastically decreases the task-specific parameters
    PEFT reduces the task-specific parameters by focusing on a small subset of the model, shrinking per-task model size and compute requirements
  • Benefit 2: Less prone to overfitting the training data; better out-of-domain performance
    Because PEFT trains fewer parameters, the model is less likely to overfit the training data, which translates into better performance on unseen data, especially out-of-domain
  • Benefit 3: Fewer parameters to fine-tune, making them good candidates when training with a small dataset
    With fewer parameters to fine-tune, PEFT methods are well suited to small datasets, where fully fine-tuning a large model with too many free parameters tends to overfit
  • Which parameter-efficient fine-tuning method should one select?
    • There is no one-size-fits-all answer

Supplementary material

Model Customization

2023.08. Nvidia. Selecting Large Language Model Customization Techniques


Parameter-efficient fine-tuning

Low-rank Matrix Decomposition and LoRA

Low-rank Matrix Decomposition

Prompt Learning

2023. Nvidia. Prompt Learning - p-tuning and prompt tuning

Provides both theoretical explanations and implementation code

Because the original model parameters are frozen and never altered by either method, p-tuning and prompt tuning also avoid the catastrophic forgetting that often occurs when fine-tuning a model
Rather than choosing discrete text prompts manually or automatically, p-tuning and prompt tuning use virtual prompt embeddings that can be optimized with gradient descent. In NeMo-Megatron, the only difference between prompt tuning and p-tuning is the architecture used to tune the soft-prompt tokens during training

The terms continuous, soft, and virtual token are used interchangeably to refer to embeddings that are inserted into the model's prompt but have no concrete mapping to any string or characters in the model's vocabulary. These virtual tokens contrast with the discrete, hard, or real tokens that make up the vocabulary. A virtual token is simply a 1D vector whose dimension equals that of each real token embedding, matching the hidden_size hyperparameter. During training and inference, the continuous token embeddings are inserted among the discrete token embeddings according to the template provided in the model's config.

  • Prompt Tuning

    • Left: model tuning. A separate copy of the model is created for each task and fine-tuned independently
    • Right: prompt tuning. Only a small number of task-specific prompt parameters are prepended to the original pre-trained model
      • These prompts become part of the input and steer the model toward task-specific outputs, so the core model parameters never change, drastically reducing the per-task parameter count and storage (T5 "XXL" model: from 11B parameters down to roughly 20K, with 5 prompt tokens)

    When prompt tuning a pre-trained GPT model, the soft prompt embeddings are initialized as a 2D matrix of size total_virtual_tokens x hidden_size. Each task has its own 2D embedding matrix associated with it, and tasks share no parameters during training or inference. All GPT model parameters are frozen; only the embedding parameters for each task are updated during training (parameters are independent across tasks).

    In prompt tuning you can specify how each task's embeddings are initialized. You can:

    • Initialize the embedding parameters from some random distribution
    • Initialize the embedding parameters from existing vocabulary embeddings (recommended)

    If you choose to initialize the virtual token embeddings from existing embedding weights, you can provide in the model's config the string of words you want to use for initialization. This string is tokenized and then tiled or truncated to match the total number of virtual tokens you want to use (total_virtual_tokens). The vocabulary embeddings are copied and used to initialize the soft prompt embedding matrix for each task; the vocabulary embeddings themselves are not updated or changed during prompt tuning.
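A small sketch of that initialization scheme under stated assumptions: a frozen embedding table and token ids that would come from tokenizing the initialization string; the tiling/truncation mirrors the description above.

```python
import torch
import torch.nn as nn

def init_soft_prompt(init_token_ids, embed: nn.Embedding, total_virtual_tokens: int):
    """Tile or truncate the init tokens to total_virtual_tokens, then copy their embeddings."""
    ids = init_token_ids
    while len(ids) < total_virtual_tokens:           # tile if the init string is too short
        ids = ids + init_token_ids
    ids = torch.tensor(ids[:total_virtual_tokens])   # truncate if too long
    # Copy (not share) the vocabulary embeddings; the embedding table itself stays frozen.
    return nn.Parameter(embed(ids).detach().clone())

embed = nn.Embedding(32000, 768)
embed.weight.requires_grad = False
# `init_token_ids` would come from tokenizing the initialization string in the config.
soft_prompt = init_soft_prompt(init_token_ids=[17, 42, 7], embed=embed, total_virtual_tokens=8)
print(soft_prompt.shape)   # torch.Size([8, 768])
```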

  • P-Tuning

    • Figure (a), Discrete Prompt Search: prompting with natural language
      • A prompt generator is used to create one or more discrete prompts for a given input and target.
        • For example, "The capital of Britain is [MASK]", where "[MASK]" is the part the model must predict.
          The prompt (orange region) is generated from the input (blue region), and the model produces its output conditioned on these prompts. The prompts are non-differentiable and are typically optimized by trial and error.
    • Figure (b), P-Tuning
      • A structure called the prompt encoder (usually an LSTM) is used to generate the prompt embeddings. Unlike discrete prompt search, the prompts in p-tuning are continuous (differentiable), which means the prompt embeddings and the prompt encoder can be optimized with gradient descent (backpropagation).
      • In this example the prompt encoder produces a sequence of prompt embeddings (P0 through Pj), which are processed together with the input to help the model predict the hidden word ("[MASK]")
      • The benefit is that the prompt itself can be fine-tuned to maximize performance, rather than relying on fixed, discrete prompts

    In p-tuning, an LSTM is used to predict the virtual token embeddings; this LSTM is called the prompt_encoder. The LSTM parameters are randomly initialized at the start of p-tuning. All GPT model parameters are frozen, and only the LSTM weights are updated at each training step. The LSTM parameters are shared across all tasks, but the LSTM outputs unique virtual token embeddings for each task. The virtual tokens predicted by the LSTM are inserted among the discrete token inputs in exactly the same way as in prompt tuning. You still specify the number of virtual tokens via total_virtual_tokens, and each virtual token embedding is still a 1D vector of size hidden_size. (A minimal sketch of such a prompt encoder follows below.)
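A minimal sketch of an LSTM-based prompt encoder in the spirit of the description above; the learnable seed inputs, the bidirectional LSTM plus linear head, and the sizes are illustrative assumptions rather than the NeMo-Megatron implementation.

```python
import torch
import torch.nn as nn

class PromptEncoder(nn.Module):
    """LSTM (+ linear head) that maps learnable inputs to virtual token embeddings."""
    def __init__(self, total_virtual_tokens: int, hidden_size: int):
        super().__init__()
        # Learnable "seed" inputs, one row per virtual token position.
        self.seed = nn.Parameter(torch.randn(total_virtual_tokens, hidden_size))
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self) -> torch.Tensor:
        out, _ = self.lstm(self.seed.unsqueeze(0))  # (1, n_virtual, 2 * hidden_size)
        return self.head(out).squeeze(0)            # (n_virtual, hidden_size)

encoder = PromptEncoder(total_virtual_tokens=10, hidden_size=768)
virtual_tokens = encoder()        # inserted among the frozen token embeddings
print(virtual_tokens.shape)       # torch.Size([10, 768])
```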

  • Using Both Prompt and P-Tuning

    • The two can be used at the same time
    • p-tuning usually needs fewer virtual tokens to reach good results, but it uses more parameters than prompt tuning

    Because p-tuning shares parameters across tasks during training, p-tuning a model on several similar tasks may let it share insights across those tasks. Conversely, p-tuning on many very different tasks at once may perform worse than prompt tuning, which tunes an independent set of parameters for each task. In general, p-tuning is recommended over prompt tuning.

    Soft prompts can be trained for the input layer only (Liu et al., 2021; Lester et al., 2021) or for all layers (Li and Liang, 2021).