# 架構:使用 AWS 服務進行 AIML 商業開發

## Amazon Q Business

AWS blog: [Find answers accurately and quickly using Amazon Q Business with the SharePoint Online connector](https://aws.amazon.com/blogs/machine-learning/find-answers-accurately-and-quickly-using-amazon-q-business-with-the-sharepoint-online-connector/)

---

## 設計企業級 AI 平台:AI Hub/Platform

### Qualcomm

---

### Amazon

:::success
AWS blog: [Scaling Rufus, the Amazon generative AI-powered conversational shopping assistant with over 80,000 AWS Inferentia and AWS Trainium chips, for Prime Day](https://aws.amazon.com/blogs/machine-learning/scaling-rufus-the-amazon-generative-ai-powered-conversational-shopping-assistant-with-over-80000-aws-inferentia-and-aws-trainium-chips-for-prime-day/)
更新時間:2024-10-10
:::

### Salesforce 企業級 AI 平台

:::success
[AWS Blog: Salesforce Einstein AI Platform](https://aws.amazon.com/blogs/machine-learning/boosting-salesforce-einsteins-code-generating-model-performance-with-amazon-sagemaker/)
更新時間:2024-07-24

[AWS Blog: Build generative AI–powered Salesforce applications with Amazon Bedrock](https://aws.amazon.com/blogs/machine-learning/build-generative-ai-powered-salesforce-applications-with-amazon-bedrock/)
更新時間:2024-07-29

[Salesforce CodeGen2.5 Eval](https://blog.salesforceairesearch.com/codegen25/)
:::

![CodeGenLatencyGraph-ML17038](https://hackmd.io/_uploads/HkivxV6tR.png)

---

## 建構 AI 應用程式和工作流程:AI Applications

### Amazon Rekognition People Pathing 替代方案

AWS blog: [Transitioning from Amazon Rekognition people pathing: Exploring other alternatives](https://aws.amazon.com/blogs/machine-learning/transitioning-from-amazon-rekognition-people-pathing-exploring-other-alternatives/)
更新時間:2024-10-24

[People pathing video](https://d2908q01vomqb2.cloudfront.net/artifacts/DBSBlogs/ml-17744/processed_video.mp4?_=2)

### LLM Evaluation Pipeline

AWS samples: [Build an automated large language model evaluation pipeline on AWS](https://github.com/aws-samples/build-an-automated-large-langurage-model-evaluation-pipeline-on-aws)

![auto-llm-eval-pipeline-v6-ragas-br.drawio](https://hackmd.io/_uploads/ByfcGP7Z1l.png)

---

### Identity Security

AWS blog: [Entitlement Enlightenment: SailPoint and AWS Enhance Identity Security](https://aws.amazon.com/blogs/apn/entitlement-enlightenment-sailpoint-and-aws-enhance-identity-security/)
更新時間:2024-10-18

---

### Invoice 解析

AWS blog: [Automate parsing of PDFs, xlsx files, images, and other document formats into structured JSON, leveraging LLMs on Amazon Bedrock](https://community.aws/content/2j3fWuVgl7aexosRx0asAt71bpU/automate-parsing-pdfs-xlsx-files-images-and-other-document-formats-into-structured-json-leveraging-llms-on-amazon-bedrock?lang=en)
更新時間:2024-07-29

---

### SDLC

AWS blog: [Use Amazon Bedrock to generate, evaluate, and understand code in your software development pipeline](https://aws.amazon.com/blogs/machine-learning/use-amazon-bedrock-to-generate-evaluate-and-understand-code-in-your-software-development-pipeline/)
更新時間:2024-08-06

![Screenshot 2024-09-04 at 10.40.20 PM](https://hackmd.io/_uploads/ryBJMeL3A.png)
![Screenshot 2024-09-04 at 10.41.04 PM](https://hackmd.io/_uploads/H1TJfxU3C.png)

---

### PGA TOUR 使用 Bedrock 自動生成逐球評論

:::success
成功客戶案例 [PGA TOUR](https://aws.amazon.com/blogs/media/pga-tour-automates-play-by-play-commentary-with-amazon-bedrock/):革新高爾夫賽事報導,運用 Bedrock 實現 AI 驅動的自動評論
更新時間:2024-08-30
:::

:::spoiler PGA TOUR 成功故事
一場職業高爾夫錦標賽橫跨多天,多達 156 名運動員每天在 200 英畝的球道、沙坑、果嶺、水域等場地上打 18 洞。每項賽事的超大規模和分散性質,使 PGA TOUR 在讓球迷了解廣闊賽場上發生的動態時面臨許多既有挑戰。
:::

![Screenshot 2024-09-04 at 2.17.51 PM](https://hackmd.io/_uploads/ryFgnur3C.png)

這個技術架構展示了 AWS 服務如何接收擊球資料和 TOUR 資料,最終生成評論。PGA TOUR 資料通過分發系統分發到「Shot Feed」Amazon SNS 主題,「Shot Queue」Amazon SQS 隊列訂閱了「Shot Feed」主題。Amazon SQS 隊列中的擊球事件觸發一個 AWS Lambda 函數,該函數通過 Amazon API Gateway 向運行在 Amazon ECS 容器中的 GenAI 邏輯發起 API 調用,並根據接收到的 TOUR 資料創建提示。Amazon Bedrock LLM 使用創建的提示作為輸入,生成基於上下文的敘事評論。
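下面是一個極簡的示意程式(並非 PGA TOUR 的實際程式碼),用來說明這條事件流中 Lambda 端的處理方式:從 SQS 事件取出擊球資料、組成提示詞,再呼叫 Amazon Bedrock 生成評論。其中事件欄位(`player`、`hole`、`stroke`、`shot_result`)與模型 ID 皆為假設值;實際架構中 GenAI 邏輯是透過 API Gateway 呼叫 ECS 上的服務,這裡為簡化而直接呼叫 Bedrock。

```python
import json
import boto3

# 示意用:實際架構是 Lambda 經由 API Gateway 呼叫 ECS 上的 GenAI 邏輯,
# 這裡為了簡化,直接在 Lambda 內呼叫 Amazon Bedrock。
bedrock = boto3.client("bedrock-runtime")

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # 假設的模型 ID,可自行替換

def build_prompt(shot: dict) -> str:
    """將擊球事件組成提示詞(欄位名稱為假設)。"""
    return (
        "你是高爾夫賽事播報員,請用一兩句話生成逐球評論。\n"
        f"球員:{shot.get('player')},洞號:{shot.get('hole')},"
        f"桿數:{shot.get('stroke')},結果:{shot.get('shot_result')}"
    )

def lambda_handler(event, context):
    commentaries = []
    # SQS 觸發的 Lambda,event["Records"] 內是「Shot Queue」中的擊球事件
    for record in event.get("Records", []):
        shot = json.loads(record["body"])
        response = bedrock.converse(
            modelId=MODEL_ID,
            messages=[{"role": "user", "content": [{"text": build_prompt(shot)}]}],
            inferenceConfig={"maxTokens": 200, "temperature": 0.7},
        )
        commentaries.append(response["output"]["message"]["content"][0]["text"])
    return {"statusCode": 200, "body": json.dumps(commentaries, ensure_ascii=False)}
```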
---

### 打造生成式 AI 教育平台

:::success
成功客戶案例 [Médica Panamericana](https://aws.amazon.com/blogs/publicsector/medica-panamericana-revolutionizes-medical-exam-prep-with-aws-generative-ai/):開發人工智能驅動的考試平台,幫助醫學生準備他們的住院醫師入學考試(MIR)
更新時間:2024-08-30
:::

:::spoiler Médica Panamericana 成功故事
Médica Panamericana 的生成式 AI 驅動平台還解決了無障礙性的問題。通過自動生成無障礙內容,該解決方案確保視障或其他殘障學生可以充分參與考試準備過程。

生成式 AI 正在改善用戶的學習體驗,其成功有三個關鍵要素:
* 第一是可靠的醫學內容。這些內容涵蓋了主要的醫學專業,加上一個超過 36,000 個問題的高級知識庫,這些問題由每個領域的著名專科醫生團隊創建,他們都有教學經驗。
* 第二是基於問題學習、主動回顧和間隔重複(spaced repetition)的創新方法。
* 第三是最先進的技術工具,由 AI 領導,能夠為每個學生量身定制教材。
:::

---

### Media & Entertainment 解決方案

:::success
[AWS blog: Unlock the full potential of your media supply chain at IBC 2024](https://aws.amazon.com/blogs/media/unlock-the-full-potential-of-your-media-supply-chain-at-ibc-2024/)
更新時間:2024-09-03
:::

展示六個真實世界的案例:
* 檔案遷移
    * Archive migration as a service (AMaaS)
    * [Media2Cloud on AWS](https://aws.amazon.com/solutions/guidance/media2cloud-on-aws/)
* 低程式碼工作流自動化
* 基於 AWS 的 Newsroom 和 Daytime Cloud Production
* 影視資料湖和 metadata 資料湖
* 智能品質管理及永續性
* 內容在地化和無障礙性

---

### Amazon Bedrock 和 AWS CDK 構建 AI 圖像描述

:::success
[AWS blog: Build a generative AI image description application with Anthropic’s Claude 3.5 Sonnet on Amazon Bedrock and AWS CDK](https://aws.amazon.com/blogs/machine-learning/build-a-generative-ai-image-description-application-with-anthropics-claude-3-5-sonnet-on-amazon-bedrock-and-aws-cdk/)
更新時間:2024-09-03
[GitHub](https://github.com/aws-samples/generative-ai-cdk-constructs-samples/blob/main/samples/image-description/README.md)
:::

創建 AI 驅動的圖像描述應用程式,革新內容管理與提升無障礙功能(Accessibility)。

解決方案概述:
* Anthropic 的 Claude 3.5 Sonnet 模型:AI 模型能夠「看懂」圖片並描述內容
* AWS CDK 和 Generative AI CDK Constructs 簡化了複雜的雲端基礎設施部署
* 無伺服器架構利用 AWS Lambda、AppSync 和 Step Functions,確保應用的高效能和可擴展性

![Screenshot 2024-09-04 at 11.41.58 AM](https://hackmd.io/_uploads/BkKVwIr30.png)

---

### 使用生成式 AI 來獲取 CloudWatch 日誌的洞察

:::success
[AWS blog: Using Generative AI to Gain Insights into CloudWatch Logs](https://aws.amazon.com/blogs/mt/using-generative-ai-to-gain-insights-into-cloudwatch-logs/)
更新時間:2024-09-03
:::

Amazon Bedrock 模型用於總結 CloudWatch 日誌,並以自定義小部件的形式顯示在 CloudWatch 儀表板上。該小部件可讓您串接其他日誌功能(如 Logs Insights),因此您可以根據生成的洞察進一步分析日誌。

CloudWatch 自定義小部件允許您創建由 AWS Lambda 函數支援的小部件。當儀表板(重新)載入時,CloudWatch 調用 Lambda 函數,並顯示它返回的資訊。Lambda 函數是無伺服器運算,可以用多種語言編寫。在 Lambda 函數中,您可以通過 API 調用(包括 Bedrock)從多個來源收集資料,並可以使用程式碼邏輯來處理資料。對於自定義小部件,返回的內容為 HTML 或 JSON,其內容將顯示在 CloudWatch 儀表板的小部件上。

![Screenshot 2024-09-04 at 11.59.43 AM](https://hackmd.io/_uploads/S1yI38B2R.png =60%x)
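以下是一個最小示意(並非上面部落格的原始程式碼),展示這類自定義小部件背後的 Lambda 大致長相:先用 CloudWatch Logs Insights 撈出最近的日誌,再請 Bedrock 總結,最後回傳 HTML 給儀表板顯示。日誌群組名稱與模型 ID 為假設值。

```python
import time
import boto3

logs = boto3.client("logs")
bedrock = boto3.client("bedrock-runtime")

LOG_GROUP = "/aws/lambda/my-app"                      # 假設的日誌群組
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"   # 假設的模型 ID

def lambda_handler(event, context):
    # 1. 用 Logs Insights 查詢最近 1 小時的日誌(最多 50 筆)
    end = int(time.time())
    query = logs.start_query(
        logGroupName=LOG_GROUP,
        startTime=end - 3600,
        endTime=end,
        queryString="fields @timestamp, @message | sort @timestamp desc | limit 50",
    )
    results = {"status": "Running"}
    while results["status"] in ("Running", "Scheduled"):
        time.sleep(1)
        results = logs.get_query_results(queryId=query["queryId"])
    lines = "\n".join(
        field["value"]
        for row in results["results"]
        for field in row
        if field["field"] == "@message"
    )

    # 2. 請 Bedrock 總結日誌內容
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": f"請用三點摘要以下日誌:\n{lines}"}]}],
        inferenceConfig={"maxTokens": 300},
    )
    summary = response["output"]["message"]["content"][0]["text"]

    # 3. 自定義小部件直接回傳 HTML,CloudWatch 會把它渲染在儀表板上
    return f"<h3>日誌摘要</h3><pre>{summary}</pre>"
```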
---

### 智能 Support Ticket 工單助手開發

[Email EAR (extract, act and respond) Solution](https://www.hcltech.com/blogs/generative-ai-powered-email-ear-on-aws)

[Support Ticket Solution on DataRobot](https://www.datarobot.com/partner-solutions/support-ticket-response-automation-solution-using-both-generative-and-predictive-ai/)

---

### Agent 平台

:::success
AWS blog: [Unlocking generative AI for enterprises: How SnapLogic powers their low-code Agent Creator using Amazon Bedrock](https://aws.amazon.com/blogs/machine-learning/unlocking-generative-ai-for-enterprises-how-snaplogic-powers-their-low-code-agent-creator-using-amazon-bedrock/)
更新時間:2024-10-23
:::

### 智能文件處理:金融服務產業

智能文件處理 (Intelligent Document Processing),簡稱 IDP。

#### AWS

GitHub 專案 Demo:請先在 SageMaker JupyterLab 中使用 **Terminal** 下載[專案程式碼](https://github.com/aws-samples/generative-ai-financial-services-samples/tree/main)。

![Screenshot 2024-04-17 at 5.23.38 PM](https://hackmd.io/_uploads/BJe3otTlA.png)

在命令列中,複製貼上並執行以下指令。

```{shell}
cd /home/ec2-user/SageMaker
git clone https://github.com/aws-samples/generative-ai-financial-services-samples.git
```

![Screenshot 2024-04-17 at 5.28.09 PM](https://hackmd.io/_uploads/r1JnjY6lR.png)

切換至專案路徑底下,安裝指定的套件。

```{shell}
cd /home/ec2-user/SageMaker/generative-ai-financial-services-samples/intelligent-mutual-fund-prospectus-document-processing
pip install --upgrade -r requirements.txt
```

下載執行程式碼所需的靜態檔案:新增一個 notebook,在 notebook 中複製貼上並執行下方兩行程式碼。

```{python}
import nltk
nltk.download("stopwords")
```

![Screenshot 2024-04-17 at 5.49.42 PM](https://hackmd.io/_uploads/H1lhiK6lA.png)

回到 terminal,執行下方指令。

```{shell}
chmod +x ./run.sh
./run.sh
```

開啟一個新的瀏覽器分頁,將下方 URL 中標示為 {xxxx} 的網域,替換成圖片中紅框內的網域後,再開啟該網址。

```
https://{xxxx}.notebook.us-west-2.sagemaker.aws/proxy/8501/
```

![Screenshot 2024-04-17 at 5.58.24 PM](https://hackmd.io/_uploads/BkT1e9TxA.png)

#### 使用 pdfplumber 第三方函式庫讀取 PDF 檔案內容

開啟新的 terminal,執行以下指令。

```{shell}
cd /home/ec2-user/SageMaker
git clone https://github.com/jsvine/pdfplumber.git
cd /home/ec2-user/SageMaker/pdfplumber
cp /home/ec2-user/SageMaker/generative-ai-financial-services-samples/intelligent-mutual-fund-prospectus-document-processing/docs/ab.pdf .
```

下載 notebook,將檔案拖曳上傳至 Amazon SageMaker JupyterLab(下圖中紅色框內)。

![Screenshot 2024-04-18 at 2.04.47 AM](https://hackmd.io/_uploads/HJMpGcTg0.png)

請逐步執行 notebook 上的每個 cell。
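若想先快速驗證,也可以用下面這段最小示意程式確認 pdfplumber 讀得到剛才複製過來的 `ab.pdf`(檔名沿用上面的步驟;實際輸出視 PDF 內容而定,與官方 notebook 的流程無關)。

```python
import pdfplumber

# 開啟上一步複製到目前目錄的 ab.pdf(檔名沿用前面步驟)
with pdfplumber.open("ab.pdf") as pdf:
    first_page = pdf.pages[0]

    # 擷取第一頁的純文字,先印出前 500 個字元檢查內容
    text = first_page.extract_text() or ""
    print(text[:500])

    # 擷取第一頁偵測到的表格(每個表格是 list of rows)
    for table in first_page.extract_tables():
        for row in table:
            print(row)
```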
---

### 文字機器人

:::success
AWS Solution: [AI 聊天機器人解決方案](https://aws-samples.github.io/aws-genai-llm-chatbot/)
:::

:::success
AWS blog: [Driving Innovation in Social Services with Thoughtworks’ GenAI Digital Product Accelerator powered by Amazon Bedrock](https://aws.amazon.com/blogs/apn/driving-innovation-in-social-services-with-thoughtworks-genai-digital-product-accelerator-powered-by-amazon-bedrock/)
更新時間:2024-10-24
:::

![TW-GenAI-ProductAccelerator-3](https://hackmd.io/_uploads/HybrEpOeyx.png)

:::success
⭐ AWS blog: [Create a next generation chat assistant with Amazon Bedrock, Amazon Connect, Amazon Lex, LangChain, and WhatsApp](https://aws.amazon.com/blogs/machine-learning/create-a-next-generation-chat-assistant-with-amazon-bedrock-amazon-connect-amazon-lex-langchain-and-whatsapp/)
更新時間:2024-10-23
:::

![architecture](https://hackmd.io/_uploads/SJPqMDie1x.png)

---

### 語音機器人

:::success
AWS blog: [How to digitize and automate vehicle assembly inspection process with voice-enabled AWS services](https://aws.amazon.com/blogs/iot/how-to-digitize-and-automate-vehicle-assembly-inspection-process-with-voice-enabled-aws-services/)
更新時間:2024-10-23
:::

![IOTB-681-Solution_Architecture-scaled](https://hackmd.io/_uploads/r1Z6DwoeJl.jpg)

---

### 會議助手

AWS Blog: [Live Meeting Assistant with Amazon Transcribe, Amazon Bedrock, and Knowledge Bases for Amazon Bedrock](https://aws.amazon.com/blogs/machine-learning/live-meeting-assistant-with-amazon-transcribe-amazon-bedrock-and-knowledge-bases-for-amazon-bedrock/)

---

### IDP

:::success
AWS Blog: [Streamline insurance underwriting with generative AI using Amazon Bedrock – Part 1](https://aws.amazon.com/blogs/machine-learning/streamline-insurance-underwriting-with-generative-ai-using-amazon-bedrock-part-1/)
更新時間:2024-08-01
:::

![Screenshot 2024-08-17 at 4.39.22 PM](https://hackmd.io/_uploads/Skw8BJC9A.png)

生成式 AI 和 Amazon Bedrock 在保險核保中的應用:
* 規則驗證 – 生成式 AI 模型可以自動驗證申請文件中提供的資訊是否符合保險公司的核保準則。通過使用檢索增強生成(RAG)或上下文提示等技術,這些模型可以從文件中提取相關資訊,並與預定義的規則進行比較,標記任何差異或不合規之處。這降低了錯誤風險,並提升了核保過程的一致性。
* 核保準則遵守 – 生成式 AI 使保險公司能夠將其核保準則直接嵌入到提供給模型的提示詞中。通過設計這些提示,保險公司可以將其 AI 驅動的決策過程與公司的風險管理策略保持一致。這種方法最大限度地減少了核保決策中的不一致性和潛在偏見。
* 決策理由說明 – 生成式 AI 模型可以為核保決策生成清晰簡潔的解釋,提供過程的透明度和客觀性。這些模型可以根據從文件中提取的資訊、保險公司的準則,以及在其決策中使用的源文件,闡述每個決策背後的理由。這使得核保人員能夠直接審查預測結果,並改善與申請人、審計員和監管機構的溝通。

:::success
AWS Blog: [Intelligent document processing using Amazon Bedrock and Anthropic Claude](https://aws.amazon.com/blogs/machine-learning/intelligent-document-processing-using-amazon-bedrock-and-anthropic-claude/)
更新時間:2024-07-18
:::

![Screenshot 2024-08-17 at 4.54.23 PM](https://hackmd.io/_uploads/HJowHJ05C.png)
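下面是一個簡化的示意程式(並非上述部落格的原始程式碼),說明這類 IDP 流程中「從文件擷取欄位並對照核保規則」的提示方式:把文件文字與規則一併放進提示詞,請 Claude 以 JSON 回傳擷取結果與是否合規。模型 ID、欄位名稱與規則內容皆為假設。

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # 假設的模型 ID

# 假設的核保規則,以及(例如由 Textract 或 pdfplumber 取得的)申請文件文字
RULES = "1. 申請人年齡需介於 18 至 65 歲。2. 投保金額不得超過年收入的 20 倍。"
document_text = "申請人:王小明,年齡 42 歲,年收入 1,200,000 元,申請投保金額 10,000,000 元。"

prompt = f"""你是核保助理。請依下列核保規則檢查申請文件,
並以 JSON 回傳:{{"fields": {{...擷取到的欄位...}}, "compliant": true/false, "reason": "..."}}。

核保規則:
{RULES}

申請文件內容:
{document_text}

只輸出 JSON,不要其他文字。"""

response = bedrock.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 500, "temperature": 0},
)
result = json.loads(response["output"]["message"]["content"][0]["text"])
print(result)
```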
---

### Text-to-SQL 應用

:::success
[Paper](https://arxiv.org/pdf/2305.12586)

AWS Blog: [Build a robust text-to-SQL solution generating complex queries, self-correcting, and querying diverse data sources](https://aws.amazon.com/blogs/machine-learning/build-a-robust-text-to-sql-solution-generating-complex-queries-self-correcting-and-querying-diverse-data-sources/)
AWS Samples: [Link](https://github.com/aws-samples/text-to-sql-for-athena/blob/main/BedrockTextToSql_for_Athena.ipynb)

AWS Blog: [Best practices for prompt engineering with Meta Llama 3 for Text-to-SQL use cases](https://aws.amazon.com/blogs/machine-learning/best-practices-for-prompt-engineering-with-meta-llama-3-for-text-to-sql-use-cases/)
更新時間:2024-08-30
[GitHub](https://github.com/aws-samples/Meta-Llama-on-AWS/blob/main/text2sql-recipes/llama3-chromadb-text2sql.ipynb)
:::

:::success
AWS Blog: [How Twilio generated SQL using Looker Modeling Language data with Amazon Bedrock](https://aws.amazon.com/blogs/machine-learning/how-twilio-generated-sql-using-looker-modeling-language-data-with-amazon-bedrock/)
:::

:::spoiler Twilio 利用 Amazon Bedrock 和 RAG 技術開發 AskData 虛擬助手工具,將自然語言問題轉換為 SQL 查詢,使員工能更高效地探索和檢索資料湖中的資訊
**主要難點及解決方法**
* 複雜資料湖中檢索相關資訊:使用 RAG 方法和向量相似度搜索
* 將自然語言轉換為 SQL:利用 Anthropic Claude 3 模型的大上下文窗口和增強推理能力
* 減少 LLM 產生的幻覺:通過調整 temperature 參數、優化提示工程和改進資料質量
* 處理大量 LookML 元數據:將 views 和 models 分開存儲,並使用整個文件而非分塊

**使用的 AWS 服務架構**
該解決方案利用了多個 AWS 服務。Amazon Bedrock 提供了 LLM 能力用於生成 SQL 查詢。Amazon RDS 搭配 pgvector 擴展用作向量資料庫,儲存 LookML metadata 和 embedding。DynamoDB 用於儲存對話歷史。Amazon S3 用於存儲原始的 LookML 文件。這些服務共同構建了一個可擴展、高效的架構,支持從自然語言到 SQL 的轉換過程。
:::

:::success
AWS Blog: [Generating value from enterprise data: Best practices for Text2SQL and generative AI](https://aws.amazon.com/blogs/machine-learning/generating-value-from-enterprise-data-best-practices-for-text2sql-and-generative-ai/)
:::

**Text-to-SQL 最佳實踐**
* 快取(Caching)- 為了改善延遲、控制成本和標準化,你可以快取從文字轉 SQL 的大型語言模型(LLM)中解析出的 SQL 和識別的查詢提示,避免重複處理相同的查詢。
* 監控(Monitoring)- 應收集關於查詢解析、提示識別、SQL 生成和 SQL 結果的日誌和指標,以監控文字轉 SQL 的 LLM 系統。這為優化提供了可見性,例如更新提示,或使用更新的資料集重新進行微調。
* 物化視圖 vs. 表格(Materialized views vs. tables)- 物化視圖可以簡化 SQL 生成,並提高常見文字轉 SQL 查詢的性能。直接查詢表格可能會產生複雜的 SQL,也可能帶來性能問題,需要額外的性能調校(例如不斷建立索引);此外,還可以避免同一個表格同時被應用程式其他部分使用時可能出現的性能問題。
* 資料刷新(Refreshing data)- 物化視圖需要按計劃刷新,以保持文字轉 SQL 查詢的資料時效性。你可以使用批量或增量刷新方法來平衡開銷。
* 中央資料目錄(Central data catalog)- 創建一個中央化的資料目錄,為組織的資料源提供單一視圖,這將幫助 LLM 選擇適當的表格和架構,從而提供更準確的回應。從中央資料目錄創建的向量嵌入可以與所請求的資訊一起提供給 LLM,以生成相關且精確的 SQL 回應。

![ML-15590-arch-diagram-prompt-image003](https://hackmd.io/_uploads/rkjXoknt0.png)

重要元件:
* Retrieval Augmented Generation (RAG) 以及資料集的 metadata
* 多步驟自我修正 (multi-step self-correction)
* 採用的 SQL 引擎

以下節錄的 `generate_sql` 方法展示了多步驟自我修正的流程:生成 SQL 後先做語法檢查,未通過時把錯誤訊息附回提示詞再重試。

```python
# 語法檢查由 AthenaQueryExecute.syntax_checker(sql_query) 提供,rqstath 為其實例
def generate_sql(self, prompt, max_attempt=4) -> str:
    """
    Generate and validate a SQL query.

    Args:
    - prompt (str): User input plus metadata from RAG, used to generate the SQL.
    - max_attempt (int): Maximum number of attempts to correct the SQL syntax.

    Returns:
    - str: The generated SQL query.
    """
    attempt = 0
    error_messages = []
    prompts = [prompt]
    sql_query = ""

    while attempt < max_attempt:
        logger.info(f'Sql Generation attempt Count: {attempt+1}')
        try:
            logger.info(f'we are in Try block to generate the sql and count is :{attempt+1}')
            # 以 LLM 生成 SQL,並取出 ``` 程式碼區塊中的查詢內容
            generated_sql = self.llm.predict(prompt)
            query_str = generated_sql.split("```")[1]
            query_str = " ".join(query_str.split("\n")).strip()
            sql_query = query_str[3:] if query_str.startswith("sql") else query_str

            # 以 Athena 做語法檢查;通過則直接回傳
            syntaxcheckmsg = rqstath.syntax_checker(sql_query)
            if syntaxcheckmsg == 'Passed':
                logger.info(f'syntax checked for query passed in attempt number :{attempt+1}')
                return sql_query
            else:
                # 未通過:把錯誤訊息與原 SQL 附回提示詞,要求模型修正後重試
                prompt = f"""{prompt}
This is syntax error: {syntaxcheckmsg}.
To correct this, please generate an alternative SQL query which will correct the syntax error.
The updated query should take care of all the syntax issues encountered.
Follow the instructions mentioned above to remediate the error.
Update the below SQL query to resolve the issue:
{sql_query}
Make sure the updated SQL query aligns with the requirements provided in the initial question."""
                prompts.append(prompt)
                attempt += 1
        except Exception as e:
            logger.error('FAILED')
            error_messages.append(str(e))
            attempt += 1
    return sql_query
```

執行時的語法檢查與自我修正紀錄(節錄):

```
Status : {'State': 'FAILED', 'StateChangeReason': "COLUMN_NOT_FOUND: line 1:50: Column 'table_description'
cannot be resolved or requester is not authorized to access requested resources",
'SubmissionDateTime': datetime.datetime(2024, 1, 14, 14, 38, 57, 501000, tzinfo=tzlocal()),
'CompletionDateTime': datetime.datetime(2024, 1, 14, 14, 38, 57, 778000, tzinfo=tzlocal()),
'AthenaError': {'ErrorCategory': 2, 'ErrorType': 1006, 'Retryable': False,
'ErrorMessage': "COLUMN_NOT_FOUND: line 1:50: Column 'table_description' cannot be resolved or
requester is not authorized to access requested resources"}}
COLUMN_NOT_FOUND: line 1:50: Column 'table_description' cannot be resolved or requester is not authorized to access requested resources
Try Count: 2
2024-01-14 14:39:02,521,llm_execute,MainProcess,INFO,Try Count: 2
we are in Try block to generate the sql and count is :2
2024-01-14 14:39:02,521,llm_execute,MainProcess,INFO,we are in Try block to generate the sql and count is :2
Executing: Explain WITH tables AS ( SELECT table_name FROM information_schema.tables WHERE table_schema = 'imdb_stg' ),
columns AS ( SELECT c.table_name, c.column_name, c.data_type, c.is_nullable, c.column_default, c.ordinal_position
FROM information_schema.columns c WHERE c.table_schema = 'imdb_stg' )
SELECT t.table_name, c.column_name, c.data_type, c.is_nullable, c.column_default, c.ordinal_position
FROM tables t INNER JOIN columns c ON t.table_name = c.table_name
ORDER BY t.table_name, c.ordinal_position LIMIT 10;
I am checking the syntax here
execution_id: 904857c3-b7ac-47d0-8e7e-6b9d0456099b
Status : {'State': 'SUCCEEDED', 'SubmissionDateTime': datetime.datetime(2024, 1, 14, 14, 39, 29, 537000, tzinfo=tzlocal()),
'CompletionDateTime': datetime.datetime(2024, 1, 14, 14, 39, 30, 183000, tzinfo=tzlocal())}
syntax checked for query passed in tries number :2
```
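上面的迴圈依賴 `syntax_checker` 判斷 SQL 是否可執行;原始範例中它封裝在 `AthenaQueryExecute` 類別裡。以下是一個假設性的簡化寫法(並非原始實作),用 Athena 的 `EXPLAIN` 敘述做語法檢查,回傳 `'Passed'` 或錯誤訊息,與上方程式的介面一致;資料庫、工作群組與輸出位置皆為假設值。

```python
import time
import boto3

class AthenaQueryExecute:
    """以 Athena EXPLAIN 驗證 SQL 語法的簡化示意(非原始實作)。"""

    def __init__(self, database="imdb_stg", workgroup="primary",
                 output_location="s3://my-athena-results/"):  # 皆為假設值
        self.athena = boto3.client("athena")
        self.database = database
        self.workgroup = workgroup
        self.output_location = output_location

    def syntax_checker(self, sql_query: str) -> str:
        # 以 EXPLAIN 取得查詢計畫而不實際掃描資料,即可驗證語法與欄位是否存在
        execution = self.athena.start_query_execution(
            QueryString=f"EXPLAIN {sql_query}",
            QueryExecutionContext={"Database": self.database},
            WorkGroup=self.workgroup,
            ResultConfiguration={"OutputLocation": self.output_location},
        )
        execution_id = execution["QueryExecutionId"]

        # 輪詢查詢狀態直到結束
        while True:
            status = self.athena.get_query_execution(QueryExecutionId=execution_id)
            state = status["QueryExecution"]["Status"]["State"]
            if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
                break
            time.sleep(1)

        if state == "SUCCEEDED":
            return "Passed"
        # 失敗時回傳錯誤訊息,供下一輪提示詞修正 SQL
        return status["QueryExecution"]["Status"].get("StateChangeReason", state)
```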
---

### Geolocation 機器學習應用

:::success
AWS blog: [Using ArcGIS GeoAnalytics Engine on Amazon EMR to predict rideshare demand](https://aws.amazon.com/blogs/publicsector/using-arcgis-geoanalytics-engine-on-amazon-emr-to-predict-rideshare-demand/)
更新時間:2024-10-23
:::

使用 PySpark 和 GeoAnalytics Engine 函式庫。

---

## 生成式 AI 資安

:::success
AWS blog: [Hardening the RAG chatbot architecture powered by Amazon Bedrock: Blueprint for secure design and anti-pattern mitigation](https://aws.amazon.com/blogs/security/hardening-the-rag-chatbot-architecture-powered-by-amazon-bedrock-blueprint-for-secure-design-and-anti-pattern-migration/)
更新時間:2024-08-26
:::

:::success
AWS blog: [Building automations to accelerate remediation of AWS Security Hub control findings using Amazon Bedrock and AWS Systems Manager](https://aws.amazon.com/blogs/machine-learning/building-automations-to-accelerate-remediation-of-aws-security-hub-control-findings-using-amazon-bedrock-and-aws-systems-manager/)
更新時間:2024-08-27
:::

![Screenshot 2024-09-04 at 3.06.15 PM](https://hackmd.io/_uploads/HyEFatS30.png)

---

## 合作夥伴

[DataRobot](https://www.datarobot.com/partners/cloud-partners/aws/)