### [Entry page: AI / ML study notes](https://hackmd.io/@YungHuiHsu/BySsb5dfp)

### [Deeplearning.ai GenAI/LLM course series notes](https://learn.deeplearning.ai/)

#### [Large Language Models with Semantic Search](https://hackmd.io/@YungHuiHsu/rku-vjhZT)

#### [Finetuning Large Language Models](https://hackmd.io/@YungHuiHsu/HJ6AT8XG6)

#### [LangChain for LLM Application Development](https://hackmd.io/1r4pzdfFRwOIRrhtF9iFKQ)

---

Notes for the [LangChain for LLM Application Development](https://www.youtube.com/watch?v=jFo_gDOOusk) course series:

- [Models, Prompts and Output Parsers](https://hackmd.io/1r4pzdfFRwOIRrhtF9iFKQ)
- [Memory](https://hackmd.io/@YungHuiHsu/Hy120mR23)
- [Chains](https://hackmd.io/@YungHuiHsu/SJJvZ-ya2)
- [Question-and-Answer](https://hackmd.io/@YungHuiHsu/BJ10qunzp)
- [Evaluation](https://hackmd.io/@YungHuiHsu/Hkg0SgazT)
- [Agents](https://hackmd.io/@YungHuiHsu/rkBMDgRM6)

![](https://hackmd.io/_uploads/r1WTGXRhn.png =400x)
source: [LangChain.dart](https://pub.dev/packages/langchain)

---

# [LangChain - Chains](https://learn.deeplearning.ai/langchain/lesson/4/chains)

## Outline

* LLMChain
* Sequential Chains
  * SimpleSequentialChain
  * SequentialChain
* Router Chain

![](https://hackmd.io/_uploads/SyxLmhlKf6.png =600x)

## **The building block of LangChain: the Chain**

- **Core elements**: an LLM (large language model) plus a prompt.
- **Purpose**: composing several such chains lets you run a series of operations over text or other data.
- The lesson loads its data, such as products and reviews, into a pandas DataFrame.

**LLM Chain**

- Consists of an OpenAI model, a chat prompt, and the `LLMChain` itself.
- Combining a product name with the prompt generates the best name for a company that makes that product.

**Sequential Chains**

- **Variants**:
  - SimpleSequentialChain: a single input and a single output.
  - SequentialChain: multiple inputs or outputs.
- **Purpose**: run several chains one after another.

**Router Chain**

- Decides which subchain to use based on the type of input.
- **Example**: separate prompts for physics, math, history, and computer science questions.
- When the router cannot determine which subchain to use, it falls back to a default chain.

### LLMChain

A minimal example of the basic `LLMChain` unit:
```python!
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain

llm = ChatOpenAI(temperature=0.9)

prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe \
    a company that makes {product}?"
)

chain = LLMChain(llm=llm, prompt=prompt)

product = "Queen Size Sheet Set"
chain.run(product)
# 'Royal Comfort Linens'
```

### SimpleSequentialChain

![](https://hackmd.io/_uploads/r10S41tM6.png =600x)

:::info
In the simple case of one input and one output, `SimpleSequentialChain` links two `LLMChain`s in series.
:::

`SimpleSequentialChain` is a structure for composing several `LLMChain`s so that different language tasks run one after another. It is simpler than `SequentialChain` because it feeds the output of one `LLMChain` directly into the next, with no extra intermediate variables and no complex data-flow management. That design makes it a good fit for simple sequential tasks, such as this example's "generate a company name, then write a description for that name". It **simplifies the implementation of sequential processing steps and suits scenarios that need ordered processing but not complex data management**.

#### Demo of `SimpleSequentialChain`

```mermaid
flowchart LR
    1["chain1
    'What is the best name to describe a company that makes {product}?'"] -- "company_name" --> 2["chain2
    'Write a 20 words description for the following company:{company_name}'"]
```

In this example the result of chain1 (company_name) is passed on to chain2:

* Name the company (Chain 1): generate a suitable company name from the given product name ({product}).
* Describe the company (Chain 2): write a short 20-word description for the generated company name.

#### Full code

- code
:::spoiler
```python!
from langchain.chains import SimpleSequentialChain

llm = ChatOpenAI(temperature=0.9)

# prompt template 1
first_prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe \
    a company that makes {product}?"
)
# Chain 1
chain_one = LLMChain(llm=llm, prompt=first_prompt)

# prompt template 2
second_prompt = ChatPromptTemplate.from_template(
    "Write a 20 words description for the following \
    company:{company_name}"
)
# chain 2
chain_two = LLMChain(llm=llm, prompt=second_prompt)
```

Chaining the two LLMChains produces:

```python!
overall_simple_chain = SimpleSequentialChain(chains=[chain_one, chain_two],
                                             verbose=True)
overall_simple_chain.run(product)

> Entering new SimpleSequentialChain chain...
RegalRest Linens
RegalRest Linens offers luxurious and high-quality linens for hotels, resorts, and high-end establishments. Experience ultimate comfort and elegance.
```
:::

### SequentialChain

:::info
The case of multiple inputs or outputs.
:::

![](https://hackmd.io/_uploads/HJGjV1FM6.png =600x)

`SequentialChain` is a structure that connects several LLMChains in order. In this example, each `LLMChain` implements one specific processing step (translation, summarization, and so on), while the `SequentialChain` makes sure the steps execute in sequence. This structure **lets a complex data-processing flow be broken into smaller, more manageable units that run in order, handling complex language tasks effectively**.

#### Demo of `SequentialChain`

- The gray labels are the data passed between chains (set via `output_key`).

```mermaid
flowchart TD
    userinput["User Input\n{Review}"] --> 1
    1["Chain 1
    Translate the following review to english: {Review}"]
    1 --"English_Review"--> 2
    2["Chain 2
    Can you summarize the following review in 1 sentence: {English_Review}"]
    userinput --> 3
    3["Chain 3
    What language is the following review:\n{Review}"]
    2 --"summary"--> 4
    4["Chain 4
    Write a follow up response to the following summary in the specified language:
    Summary: {summary}\nLanguage: {language}"]
    3 --"language"--> 4
    4 --"followup_message"--> output
    output["Output of Chain 4: {English_Review}, {summary}, {followup_message}"]
```

* Translate the review into English (Chain 1): an LLMChain with the defined prompt template (first_prompt) translates the original review into English.
* Summarize in one sentence (Chain 2): summarize the translated review.
* Identify the language (Chain 3): detect the language of the original review.
* Write a follow-up response (Chain 4): generate a response based on the summary, in the detected language.
* Combine the steps (overall_chain): join the four steps above into one SequentialChain that processes a given review and finally outputs the translated review, the summary, and the follow-up response.

```PYTHON=
overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["Review"],
    output_variables=["English_Review", "summary", "followup_message"])
```

#### Full code

:::spoiler code
```python=
from langchain.chains import SequentialChain

llm = ChatOpenAI(temperature=0.9, model=llm_model)

# prompt template 1: translate to english
first_prompt = ChatPromptTemplate.from_template(
    "Translate the following review to english:"
    "\n\n{Review}"
)
# chain 1: input= Review and output= English_Review
chain_one = LLMChain(llm=llm, prompt=first_prompt,
                     output_key="English_Review")

second_prompt = ChatPromptTemplate.from_template(
    "Can you summarize the following review in 1 sentence:"
    "\n\n{English_Review}")
# chain 2: input= English_Review and output= summary
chain_two = LLMChain(llm=llm, prompt=second_prompt,
                     output_key="summary")

# prompt template 3: identify the review's language
third_prompt = ChatPromptTemplate.from_template(
    "What language is the following review:\n\n{Review}"
)
# chain 3: input= Review and output= language
chain_three = LLMChain(llm=llm, prompt=third_prompt,
                       output_key="language")

# prompt template 4: follow up message
fourth_prompt = ChatPromptTemplate.from_template(
    "Write a follow up response to the following "
    "summary in the specified language:"
    "\n\nSummary: {summary}\n\nLanguage: {language}"
)
# chain 4: input= summary, language and output= followup_message
chain_four = LLMChain(llm=llm, prompt=fourth_prompt,
                      output_key="followup_message")

# overall_chain: input= Review
# and output= English_Review, summary, followup_message
overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["Review"],
    output_variables=["English_Review", "summary", "followup_message"],
    verbose=True
)

review = df.Review[5]
results = overall_chain(review)

pd.set_option('display.max_colwidth', None)
pd.DataFrame.from_dict(results, orient='index').T
```
:::

- results
![](https://hackmd.io/_uploads/B1lGPCAufT.png)

### Router Chain

![](https://hackmd.io/_uploads/rJZP51tzp.png =600x)

The purpose of a Router Chain is to automatically decide, when a user question arrives, which domain template is best suited to answer it. For example, a physics question is routed to the physics template. This greatly improves the quality and relevance of the answers.

#### Demo of `Router Chain`

* Four domain templates are defined (physics, math, history, and computer science); each describes the traits of an expert in that domain and the format of a question.
* The metadata for these templates is stored in the `prompt_infos` list.
* For each domain, an `LLMChain` is built whose job is to answer questions using that domain's template.
* A `MULTI_PROMPT_ROUTER_TEMPLATE` is defined; its job is to select the most suitable domain template based on the user's input.
* From this routing template an `LLMRouterChain` named `router_chain` is built.
* Combining `router_chain`, all the per-domain `destination_chains`, and a `default_chain` (used when no suitable domain can be found) yields a `MultiPromptChain` named `chain`.

```mermaid
graph TD
    A["MULTI_PROMPT_ROUTER_TEMPLATE"] --> B["Router Template"]
    B --> C["Router Prompt"]
    C --> D["Router Chain"]
    E["Destination Chains"] --> F["MultiPrompt Chain"]
    G["Default Chain"] --> F
    D --> F
    H["Input"] --> D
    F --> I["Output (Response)"]

    subgraph "Router Setup"
        B["Router Template"]
        C["Router Prompt"]
        D["Router Chain"]
    end

    subgraph "Expert Templates"
        E["Destination Chains\n(physics, math, \nhistory,...)"]
    end

    subgraph "Default Response"
        G["Default Chain"]
    end

    subgraph "Final Chain (MultiPromptChain)"
        F["MultiPrompt Chain"]
    end

    style H fill:#f9d7e5
```

#### Full code

##### template

- These templates steer the language model to answer as an expert in a specific domain, so that it focuses on that domain's knowledge and responds in that expert persona.

:::spoiler
```python!
physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise\
and easy to understand manner. \
When you don't know the answer to a question you admit\
that you don't know.

Here is a question:
{input}"""

math_template = """You are a very good mathematician. \
You are great at answering math questions. \
You are so good because you are able to break down \
hard problems into their component parts,
answer the component parts, and then put them together\
to answer the broader question.

Here is a question:
{input}"""

history_template = """You are a very good historian. \
You have an excellent knowledge of and understanding of people,\
events and contexts from a range of historical periods. \
You have the ability to think, reflect, debate, discuss and \
evaluate the past. You have a respect for historical evidence\
and the ability to make use of it to support your explanations \
and judgements.

Here is a question:
{input}"""

computerscience_template = """ You are a successful computer scientist.\
You have a passion for creativity, collaboration,\
forward-thinking, confidence, strong problem-solving capabilities,\
understanding of theories and algorithms, and excellent communication \
skills. You are great at answering coding questions.
You are so good because you know how to solve a problem by \
describing the solution in imperative steps \
that a machine can easily interpret and you know how to \
choose a solution that has a good balance between \
time complexity and space complexity.

Here is a question:
{input}"""
```
:::

##### prompt_infos

- prompt_infos is a list of four dicts. Each dict describes one specialized Q&A template that lets the language model answer as an expert in a given domain. Taking physics as the example:
- physics:
  * `name`: the template's name, here "physics"
  * `description`: what the template is for, here answering physics questions
  * `prompt_template`: the physics_template defined earlier, used as the template body

:::spoiler
```PYTHON=
prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": physics_template
    },
    {
        "name": "math",
        "description": "Good for answering math questions",
        "prompt_template": math_template
    },
    {
        "name": "History",
        "description": "Good for answering history questions",
        "prompt_template": history_template
    },
    {
        "name": "computer science",
        "description": "Good for answering computer science questions",
        "prompt_template": computerscience_template
    }]
```
:::

##### call llm model

```PYTHON=
from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.prompts import PromptTemplate

llm = ChatOpenAI(temperature=0, model=llm_model)
```

##### `destination_chains` and `default_prompt`

Build a dedicated LLMChain for each Q&A domain, while keeping one default LLMChain for general questions.

- `destination_chains`
  - An LLMChain object is built for each template. These objects are stored in the destination_chains dict, keyed by template name, with the corresponding LLMChain as the value.

```python=
destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = ChatPromptTemplate.from_template(template=prompt_template)
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[name] = chain

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)

# destinations
# ['physics: Good for answering questions about physics',
#  'math: Good for answering math questions',
#  'History: Good for answering history questions',
#  'computer science: Good for answering computer science questions']
```

- `default_prompt`
  - A default LLMChain named default_chain is built from a trivial template that simply passes the input through.
  - This default chain is used whenever none of the specialized LLMChains applies.

```python=
default_prompt = ChatPromptTemplate.from_template("{input}")
default_chain = LLMChain(llm=llm, prompt=default_prompt)
```

##### **The multi-prompt router template `MULTI_PROMPT_ROUTER_TEMPLATE`**

The core logic of the Router Chain.

:::info
Given a set of candidate prompts and a raw input, it guides the language model to select, and possibly revise, the most suitable prompt. This tailors the model's response to the specific question or input.
:::

* Raw text input to the language model:
  * Instructs the model that it will receive a raw text input and should select the best-suited prompt for it.
* << FORMATTING >>:
  * Describes the expected output format: the model should return a JSON object inside a markdown code snippet.
  * The JSON object has two fields, destination and next_inputs:
    * `destination`: the name of the prompt to use, or "DEFAULT"
    * `next_inputs`: a potentially modified version of the original input
* REMEMBER section:
  * Provides two key rules:
    * `destination` must be one of the candidate prompt names specified below, or "DEFAULT" if the input suits none of the candidates
    * `next_inputs` can simply be the original input if no modification seems necessary
* << CANDIDATE PROMPTS >>:
  * Lists all available prompt names and descriptions, to help the model choose the best fit.
* << INPUT >>:
  * Holds the actual user input on which the model bases its decision.

- code
:::spoiler
```PYTHON=
MULTI_PROMPT_ROUTER_TEMPLATE = """Given a raw text input to a \
language model select the model prompt best suited for the input. \
You will be given the names of the available prompts and a \
description of what the prompt is best suited for. \
You may also revise the original input if you think that revising\
it will ultimately lead to a better response from the language model.

<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
\```json
{{{{
    "destination": string \ name of the prompt to use or "DEFAULT"
    "next_inputs": string \ a potentially modified version of the original input
}}}}
\```

REMEMBER: "destination" MUST be one of the candidate prompt \
names specified below OR it can be "DEFAULT" if the input is not\
well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input \
if you don't think any modifications are needed.
<< CANDIDATE PROMPTS >>
{destinations}

<< INPUT >>
{{input}}

<< OUTPUT (remember to include the ```json)>>"""
```
:::

##### Building the multi-prompt router chain: `LLMRouterChain` and `MultiPromptChain`

This combines all the previously created expert templates, the default chain, and the router chain into one complete system that intelligently picks the most suitable way to answer each input.

- Steps

1. Set up the router template (router_template):
```python=
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
    destinations=destinations_str
)
```
This formats the MULTI_PROMPT_ROUTER_TEMPLATE string, replacing its internal {destinations} placeholder with the destinations_str computed earlier. The result is a customized template containing the names and descriptions of all available templates.

2. Create the router prompt (router_prompt):
```python=
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
```
The router_template just generated is used to create a PromptTemplate object named router_prompt. This prompt guides the language model in deciding which expert template should answer a given input.

3. Create the router chain (router_chain):
```python=
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
```
Using router_prompt and the llm language model, an LLMRouterChain object named router_chain is created. When it receives an input question, it follows router_prompt to decide which expert template should answer.

4. Create the multi-prompt chain (chain):
```python=
chain = MultiPromptChain(router_chain=router_chain,
                         destination_chains=destination_chains,
                         default_chain=default_chain,
                         verbose=True
                        )
```
Finally, a MultiPromptChain object named chain is created from the pieces above. It coordinates all the chains: it first uses router_chain to decide which expert template in destination_chains to use; if no expert template fits, it answers with default_chain.
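The routing decision that the steps above delegate to LangChain can be illustrated without LangChain at all. The sketch below is a minimal, self-contained approximation: `parse_router_output` and `route` are hypothetical helpers standing in for what `RouterOutputParser` plus `MultiPromptChain`'s dispatch step do, and plain functions stand in for the destination chains. It parses a router reply in the `MULTI_PROMPT_ROUTER_TEMPLATE` format and forwards `next_inputs` to the matching chain, or to the default.

```python
import json
import re

def parse_router_output(text: str) -> dict:
    """Extract the JSON object from the router's markdown ```json snippet."""
    match = re.search(r"```json\s*(\{.*?\})\s*```", text, re.DOTALL)
    if match is None:
        raise ValueError("router reply contains no JSON code block")
    return json.loads(match.group(1))

def route(router_reply, destination_chains, default_chain):
    """Dispatch next_inputs to the named destination, else the default."""
    parsed = parse_router_output(router_reply)
    chain = destination_chains.get(parsed["destination"], default_chain)
    return chain(parsed["next_inputs"])

# Stand-in "chains": each just tags the question with its domain.
destination_chains = {
    "physics": lambda q: f"[physics] {q}",
    "math": lambda q: f"[math] {q}",
}
default_chain = lambda q: f"[default] {q}"

# A reply shaped like what MULTI_PROMPT_ROUTER_TEMPLATE asks the LLM for.
reply = '```json\n{"destination": "physics", "next_inputs": "What is black body radiation?"}\n```'
print(route(reply, destination_chains, default_chain))
# -> [physics] What is black body radiation?
```

A "DEFAULT" (or unknown) destination falls through to `default_chain`, mirroring how `MultiPromptChain` answers questions that fit none of the expert templates.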