--- title: RAI for Wistron tags: [RAI, wistron] --- # Azure AI Foundry Responsible AI ![image](https://hackmd.io/_uploads/S1K2y-sdee.png) --- ## 一、前置需求 + [Azure AI Foundry環境建立](https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/create-projects?tabs=ai-foundry&pivots=fdp-project) - 目前可用 Region: East US2, Sweden Central, Switzerland West - 請使用Foundry Project (非Hub base Project) + [Private Link for AI Foudndry](https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/configure-private-link?tabs=azure-portal&pivots=fdp-project) - PoC階段不會使用到Agent Services + Python Virtual Environemnt - 請安裝相關packages ```shell pip install -r requirements.txt ``` --- ## 測試1 - Prompt Injection (REST API) + 參考代碼: 1 PromptShiled.ipynb + 在AI Foundry Project 取得 API Enpoint 及 AI Key ![image](https://hackmd.io/_uploads/ByofBZoOel.png) + 準備惡意Prompt進行測試 ![image](https://hackmd.io/_uploads/BkjPS-oOll.png) + Output ![image](https://hackmd.io/_uploads/ryi0rZidex.png) --- ## 測試2. Improper Output & RAG Reference + 參考代碼: 2 ImproperOutput & RAG Reference .ipynb + 在AI Foundry Project 取得 Project Enpoint + Azure OpenAI Endpoint 及 Azure OpenAI Key ![image](https://hackmd.io/_uploads/ByofBZoOel.png) + Evaluator測項請依需求自行調整 **General purpose**:`CoherenceEvaluator`, `FluencyEvaluator` **Textual similarity**: `SimilarityEvaluator` **RAG**: `RetrievalEvaluator`, `GroundednessEvaluator`, `RelevanceEvaluator` **Risk and safety**: `IndirectAttackEvaluator`, `UngroundedAttributesEvaluator`, `ContentSafetyEvaluator` ![image](https://hackmd.io/_uploads/HybufXn_lg.png) + 測試資料集(csv或JSONL): 請置換為您的資料集名稱 ![image](https://hackmd.io/_uploads/By3udbidgl.png) + 測試資料集需包含下以下四個欄位 + query: 使用者的問題 + response: LLM的回覆 + ground_truth: 標準答案 + context: 對應的文本上下文 + Azure Foundry Portal Dashboard + 點擊 Evaluation ![image](https://hackmd.io/_uploads/BkoZq-iOlg.png) + 選取 Automated evaluations並選取最近一筆 ![image](https://hackmd.io/_uploads/HyQPq-oOgx.png) + 相關指標: 可切換 `AI quality (AI Assisted)` 及 `Risk and safety (preview)` ![image](https://hackmd.io/_uploads/SkctJzjdgg.png) --- ## 測試3. Red Teaming + 參考代碼: 3 AI Red Teaming locally.ipynb + 在AI Foundry Project 取得 Project Enpoint + 功擊目標 LLM: 請將 ask_LLM 函式替換為符合您需求的目標模型。本範例程式碼,展示如何攻擊 Azure OpenAI 模型。請確保您已提供 Azure OpenAI 的 Endpoint、API Key 以及 Model Name ![image](https://hackmd.io/_uploads/ByuQR-oOex.png) + [Risk Categories](https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/develop/run-scans-ai-red-teaming-agent#supported-risk-categories): 依測試需求選擇對應的風險類 + num_objectives: 依測試需求調整各別對應的`功擊次數` ![image](https://hackmd.io/_uploads/ry6onQ2Oex.png) + [attack_straregies](https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/develop/run-scans-ai-red-teaming-agent#specific-attack-strategies): 依測試需求調整各別對應的`功擊策略` ![image](https://hackmd.io/_uploads/B1PPTX2_ex.png) + Azure Foundry Portal Dashboard + 點擊 Evaluation ![image](https://hackmd.io/_uploads/BkoZq-iOlg.png) + 選取 AI red teaming並選取最近一筆 ![image](https://hackmd.io/_uploads/r14-yGouxe.png) + 相關指標 ![image](https://hackmd.io/_uploads/S1OrJGo_xg.png)