
Entry page for learning notes in the AI / ML domain


Machine Learning

Basic

[ML] Fundamental machine learning quantities: Maximum Likelihood Estimation, Softmax, Entropy, KL Divergence
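
For quick orientation, the quantities the note above covers can be stated in their standard textbook form (these definitions are general background, not excerpts from the note):

```latex
% Softmax over logits z_1, ..., z_K
\mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_{k=1}^{K} e^{z_k}}

% Entropy of a discrete distribution p
H(p) = -\sum_i p_i \log p_i

% KL divergence between distributions p and q
D_{\mathrm{KL}}(p \,\|\, q) = \sum_i p_i \log \frac{p_i}{q_i}

% Maximum likelihood estimate over i.i.d. samples x_1, ..., x_N
\hat{\theta}_{\mathrm{MLE}} = \arg\max_{\theta} \sum_{n=1}^{N} \log p_{\theta}(x_n)
```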

Dimensionality Reduction

[ML] Dimensionality reduction algorithms - Deep Targeted Discriminant Analysis (DeepTDA), a newer alternative to t-SNE and UMAP

Explainable Machine Learning/AI

[Explainable AI] Interpretable machine learning and causal inference
[Explainable AI] Transformer Interpretability Beyond Attention Visualization: Transformer interpretability and visualization

Deep Learning

Getting Started

Deep Learning: Fundamental theory and implementation of deep learning with Python - reading notes

LLM(NLP) / Generative AI

Useful Resources

awesome-generative-ai-guide

  • Maintained by a Gen AI Tech Lead at AWS; contains a wealth of learning resources, including:
    • Monthly Best GenAI Papers List
    • GenAI Interview Resources
    • Applied LLMs Mastery 2024 (created by Aishwarya Naresh Reganti) course material
    • List of all GenAI-related free courses (over 85 listed)
    • List of code repositories/notebooks for developing generative AI applications
OpenAI Cookbook
  • OpenAI's official examples, with many recipes for making effective use of the API
  • Whether you go with LangChain or LlamaIndex, the default backend still calls the OpenAI API; the most lightweight, effective, and best-documented reference remains OpenAI's own examples (a minimal call is sketched below)
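
For orientation, a minimal Chat Completions call with the official openai Python SDK looks roughly like the sketch below; the model name and prompts are placeholders, and the Cookbook itself is the authoritative reference:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Single-turn chat completion; "gpt-4o-mini" is only an example model name
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain retrieval augmented generation in one sentence."},
    ],
)

print(response.choices[0].message.content)
```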

Prompt Engineering

Notes, Lectures, and Article Summaries

Notes on the State of AI Report 2023

A quick summary for keeping up with recent AI trends, pitched at a moderate depth

2022, AACL-IJCNLP: Recent Advances in Pre-trained Language Models: Why Do They Work and How to Use Them

Produced by Prof. Hung-yi Lee's lab; an excellent introduction to recent (2022) progress in language models
These notes mainly cover the PEFT (Parameter-Efficient Fine-tuning) part (a minimal LoRA sketch follows after the outline below)

  • Slide outline:
    • Part 4 How to use PLMs: Parameter-efficient fine-tuning
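
As a rough illustration of what parameter-efficient fine-tuning looks like in practice, here is a minimal LoRA setup using Hugging Face's peft library; the base model and hyperparameters are placeholders, not values taken from the slides:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

# Placeholder base model; any causal LM checkpoint works the same way
base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA freezes the pretrained weights and trains small low-rank adapter matrices instead
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,              # rank of the adapter matrices
    lora_alpha=32,    # scaling factor applied to the adapter output
    lora_dropout=0.05,
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full parameter count
```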
How Agents for LLM Perform Task Planning
  • This article gives a good review of the core ideas behind LLM agents (task agents) as of June 2023
2023.09, Dr. Ed H. Chi: The Large Language Model Revolution
  • Reasoning abilities of the model
    • Chain-of-thought prompting: <question => explanation => answer>
    • Self-consistency: generate multiple answers to the same question, then pick the most common one (a minimal sketch follows after this list)
    • Least-to-most prompting: decompose the problem and solve the sub-problems
    • Instruction finetuning: teach large language models (LLMs) to follow instructions
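
A minimal sketch of how chain-of-thought prompting and self-consistency combine in practice; the generate() callable is a hypothetical stand-in for any sampling-based LLM API and is not part of the talk:

```python
from collections import Counter

def self_consistent_answer(question, generate, n_samples=5):
    """Sample several chain-of-thought completions and majority-vote the final answer."""
    # Chain-of-thought prompt: ask for an explanation before the answer
    prompt = (
        f"{question}\n"
        "Let's think step by step, then give the final answer after 'Answer:'."
    )
    answers = []
    for _ in range(n_samples):
        completion = generate(prompt, temperature=0.8)  # hypothetical LLM call
        # Take the text after the last 'Answer:' marker as this sample's answer
        answers.append(completion.rsplit("Answer:", 1)[-1].strip())
    # Self-consistency: the most frequent final answer across samples wins
    return Counter(answers).most_common(1)[0][0]
```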

LLM Optimization and Customization Series

[LLM] Customization approaches for large language models (work in progress)
  • Prompt engineering
  • Prompt learning
  • Parameter-efficient fine-tuning (PEFT)
  • Fine-tuning
[LLM] Retrieval Augmented Generation (RAG): giving the model an external knowledge plug-in (work in progress)
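
A minimal sketch of the RAG idea, assuming the document collection has already been embedded; embed() is a hypothetical embedding function and the prompt template is illustrative only:

```python
import numpy as np

def retrieve(query, doc_texts, doc_vectors, embed, top_k=3):
    """Return the top_k documents most similar to the query by cosine similarity."""
    q = embed(query)  # hypothetical embedding call returning a 1-D numpy vector
    sims = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-8
    )
    best = np.argsort(-sims)[:top_k]
    return [doc_texts[i] for i in best]

def build_rag_prompt(query, retrieved_docs):
    """Augment the question with retrieved context before calling the LLM."""
    context = "\n\n".join(retrieved_docs)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
```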

Notes on the Deeplearning.ai GenAI/LLM course series

  • Generative AI with Large Language Models

Short Courses

GenAI
RAG
AI Agents

Multimodal

[Multimodal] CVPR 2023: Multimodal Foundation Models: From Specialists to General-Purpose Assistants
A review of research on multimodal foundation models
(work in progress) [Multimodal] Multimodal Learning and CLIP
[Multimodal] The making of LLaVA - Visual Instruction Tuning
[Multimodal] UniIR: Training and Benchmarking Universal Multimodal Information Retrievers
[Multimodal] Notes on Multimodal Large Language Models for Autonomous Driving

Computer Vision

Object Detection

[Object Detection_YOLO] YOLOv7 paper notes

ViT and Transformers

[Transformer_CV] Vision Transformer (ViT) key notes
[Transformer] Self-Attention and the Transformer
[Transformer_CV] Masked Autoencoders (MAE) paper notes
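
For reference, the scaled dot-product attention these notes revolve around, in its standard form (d_k is the key dimension):

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```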

Deep Learning Notes

Self-supervised Learning

[Self-supervised] Self-supervised Learning and Vision Transformers: key notes and recent (2021) developments

Autoencoders

[Autoencoder] Variational Sparse Coding (VSC) paper notes
[Transformer_CV] Masked Autoencoders (MAE) paper notes

Time Series

Deep Learning with Time Series data

A compilation (as of late 2022) of resources, trends, and SOTA models for deep learning on time series data

TS2Vec (Towards Universal Representation of Time Series) paper notes

Reinforcement Learning

[RL] Fine-Tuning Language Models from Human Preferences (RLHF) paper notes - how ChatGPT is forged


[RL] Proximal Policy Optimization (PPO)
[RL] Q-learning and Deep Q Network (DQN)
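
For reference, the tabular Q-learning update that the DQN note builds on, in its standard form (α is the learning rate, γ the discount factor):

```latex
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]
```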


Edge AI / Deployment

Model Deployment and Acceleration

[Deployment] Introductory notes on AI model deployment
[Object Detection_YOLO] YOLOv7 paper notes
Deploy YOLOv7 on Nvidia Jetson
Convert PyTorch model to TensorRT for 3-8x speedup
Accelerate multi-streaming cameras with DeepStream and deploy custom (YOLO) models
Use the Deepstream Python API to extract the model output tensor and customize model post-processing (e.g., YOLO-Pose)
Model Quantization Notes
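
As a rough illustration of the usual first step in the PyTorch-to-TensorRT path listed above, here is a minimal ONNX export; the model and input shape are placeholders, and the actual TensorRT engine build happens afterwards (e.g. with trtexec):

```python
import torch
import torchvision

# Placeholder model and input shape; substitute the trained detector to be deployed
model = torchvision.models.resnet18(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX; the resulting file can then be built into a TensorRT engine,
# for example with: trtexec --onnx=model.onnx --fp16
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["images"],
    output_names=["outputs"],
    opset_version=13,
)
```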

NVIDIA Jetson Platform Deployment Notes

Basic environment setup
Jetson AGX Xavier system setup 1: connecting and installing from a Windows 10 environment
Jetson AGX Xavier system setup 2: installing Docker or building from source
NVIDIA Container Toolkit installation notes
jtop: checking system performance on Jetson edge devices
Jetson Network Setup
Enabling CUDA acceleration for OpenCV on the Nvidia Jetson platform