# 【CUDA】CUDA Programming Note

## A Scalable Programming Model

CUDA is a parallel programming model for tasks that require heavy computation, designed so that programmers familiar with standard languages such as C face a low learning curve. "Scalable" here means that the code we write can run on any NVIDIA GPU with little effort, because CUDA automatically adapts how it schedules GPU resources, as shown in the figure below.

![image](https://hackmd.io/_uploads/S12VM76Mgx.png)

CUDA is built on three key abstractions:

1. **A hierarchy of thread groups**
   The thread hierarchy does not exist in the actual hardware; it is an abstraction that lets the programmer conveniently work with data of different dimensionalities.
2. **Shared memories**
   A block of fast memory used jointly by the threads within a thread block; it resides in each SM.
3. **Barrier synchronization**
   All threads in a thread block must wait until every one of them has finished a given stage before execution continues. CUDA provides a set of APIs for handling asynchrony and synchronization.

In short, CUDA lets the programmer focus on decomposing a task into subtasks, each of which can be executed by a CUDA thread, so the whole task is completed by processing those subtasks in parallel.

## Streaming Multiprocessor

The Streaming Multiprocessor, or SM, is the core compute unit of the GPU and an actual hardware unit; it is where CUDA programs execute. Its architecture is shown below.

![image](https://hackmd.io/_uploads/SywsMrTfex.png)

* SP stands for Streaming Processor.
* An SM is composed of multiple SPs.
* The SFU (Special Function Unit) executes special math functions such as `__cosf(x)`.
* The warp is the smallest execution unit on an SM; it always executes 32 threads at a time.

## Kernel

CUDA C++ extends C++ so that the programmer can define a C++ kernel function. A kernel is executed N times, by N different CUDA threads. A kernel is declared with the `__global__` keyword, and the number of threads that execute it is specified with the `<<<...>>>` syntax. Each thread that executes the kernel is assigned a unique thread ID, which can be computed and accessed inside the kernel through **built-in variables**.

```cpp
// Kernel definition
__global__ void VecAdd(float* A, float* B, float* C)
{
    int i = threadIdx.x;
    C[i] = A[i] + B[i];
}

int main()
{
    ...
    // Kernel invocation with N threads
    VecAdd<<<1, N>>>(A, B, C);
    ...
}
```

Kernel function parameter types are limited to:

* Built-in types such as `int`, `float`, `char`, and so on

## Thread Hierarchy

**In CUDA, threads are a hierarchical abstraction and do not map directly onto hardware components.** Multiple threads form a thread block, and multiple thread blocks form a grid. Every kernel launch starts one grid for parallel computation. The CUDA runtime automatically distributes the thread blocks across the Streaming Multiprocessors (SMs). The threads of each thread block are partitioned into warps (32 threads each), and the GPU executes them warp by warp in SIMT (Single Instruction, Multiple Threads) fashion.

### CUDA Built-In Variables

`threadIdx` is a 3-component vector that can represent a thread index in up to three dimensions, so thread blocks can be 1-D, 2-D, or 3-D. This makes it natural to operate on the elements of vectors, matrices, and $n$-dimensional arrays.

The relationship between a thread's index and its thread ID can be computed directly:

* For a 2-D block of size $(D_x, \ D_y)$, the thread with index $(x, \ y)$ has ID $x + y \times D_x$.
* For a 3-D block of size $(D_x, \ D_y, \ D_z)$, the thread with index $(x, \ y, \ z)$ has ID $x + y \times D_x + z \times D_x \times D_y$.

```cpp
// Kernel definition
__global__ void MatAdd(float A[N][N], float B[N][N], float C[N][N])
{
    int i = threadIdx.x;
    int j = threadIdx.y;
    C[i][j] = A[i][j] + B[i][j];
}

int main()
{
    ...
    // Kernel invocation with one block of N * N * 1 threads
    int numBlocks = 1;
    dim3 threadsPerBlock(N, N);
    MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
    ...
}
```

Because all threads of a block are expected to reside on the same SM (Streaming Multiprocessor) and to share that core's limited memory resources, the number of threads per block is limited; a block typically holds at most $1024$ threads.

A kernel can, however, be executed by multiple equally-shaped thread blocks. Blocks can be organized into a one-, two-, or three-dimensional grid, and the threads within a block can likewise be arranged in one, two, or three dimensions, so that

$$
\text{total threads} = \text{threads per block} \times \text{number of blocks}
$$

```cpp
// Kernel definition
__global__ void MatAdd(float A[N][N], float B[N][N], float C[N][N])
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i < N && j < N)
        C[i][j] = A[i][j] + B[i][j];
}

int main()
{
    ...
    // Kernel invocation
    dim3 threadsPerBlock(16, 16);
    dim3 numBlocks(N / threadsPerBlock.x, N / threadsPerBlock.y);
    MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
    ...
}
```

> Whether a kernel is launched with a 1-D or a 2-D configuration, the physical execution is the same; the extra dimensions simply let the user pick the corresponding thread index and address the data more intuitively.

## CUDA Kernel Launch Syntax

The execution configuration of a CUDA kernel is specified with the following call syntax:

```cpp
<<< Dg, Db, Ns, S >>>
```

The parameters are:

- `Dg` has type `dim3` and specifies the **dimensions of the grid** (how many blocks to launch); the total number of blocks is `Dg.x * Dg.y * Dg.z`.
- `Db` has type `dim3` and specifies the **dimensions of each block** (how many threads per block); the total number of threads per block is `Db.x * Db.y * Db.z`.
- `Ns` has type `size_t` and specifies how many bytes of **shared memory** are dynamically allocated per block, on top of the statically declared `__shared__` memory. This parameter is **optional** and defaults to 0.
- `S` has type `cudaStream_t` and specifies the CUDA stream the kernel is submitted to. This is also **optional** and defaults to 0 (a stream-launch sketch appears at the end of this note).

So if a kernel is defined as

```cpp
__global__ void Func(float* parameter);
```

it is invoked as

```cpp
Func<<< Dg, Db, Ns >>>(parameter);
```
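To make the `Ns` parameter and the shared-memory and barrier-synchronization abstractions from the first section concrete, here is a minimal sketch, assuming a hypothetical kernel `ReverseBlock` and arbitrary sizes: each block stages its chunk of an array in dynamically allocated shared memory, waits at a `__syncthreads()` barrier, then writes the chunk back reversed.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Each block copies its chunk into dynamically allocated shared memory,
// waits at a barrier, then writes the chunk back in reverse order.
__global__ void ReverseBlock(float* data)
{
    // The size of this array comes from the third launch parameter (Ns),
    // not from a static __shared__ declaration.
    extern __shared__ float tile[];

    int t    = threadIdx.x;
    int base = blockIdx.x * blockDim.x;

    tile[t] = data[base + t];                    // stage the element in shared memory
    __syncthreads();                             // barrier: the whole tile must be loaded first
    data[base + t] = tile[blockDim.x - 1 - t];   // write back reversed
}

int main()
{
    const int threadsPerBlock = 256;
    const int blocks = 4;
    const int n = threadsPerBlock * blocks;

    float h_data[n];
    for (int i = 0; i < n; ++i) h_data[i] = (float)i;

    float* d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemcpy(d_data, h_data, n * sizeof(float), cudaMemcpyHostToDevice);

    // Ns = threadsPerBlock * sizeof(float) bytes of shared memory per block.
    ReverseBlock<<<blocks, threadsPerBlock, threadsPerBlock * sizeof(float)>>>(d_data);

    cudaMemcpy(h_data, d_data, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first element after per-block reversal: %f\n", h_data[0]);  // expect 255

    cudaFree(d_data);
    return 0;
}
```

The third launch argument has to match the number of bytes the kernel actually indexes through `tile`, and `__syncthreads()` is the barrier that guarantees the whole tile is populated before any thread reads it back.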
## Macro

When a CUDA API call fails, it returns a `cudaError_t` value. We have to handle this `cudaError_t` to find out where the code went wrong, which makes debugging much easier.

```cpp
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <cuda_runtime.h>

static inline void _safe_cuda_call(
    cudaError err,
    const char* msg,
    const char* file_name,
    const int line_number
) {
    if (err != cudaSuccess)
    {
        fprintf(stderr, "%s\n\nFile: %s\n\nLine Number: %d\n\nReason: %s\n",
                msg, file_name, line_number, cudaGetErrorString(err));
        std::cin.get();
        exit(EXIT_FAILURE);
    }
}

// Safe call macro.
#define SAFE_CALL(call, msg) _safe_cuda_call((call), (msg), __FILE__, __LINE__)
```

## Caveats

* Kernel input data should be laid out in contiguous memory whenever possible, to avoid faults when manipulating pointers.
* The total number of threads should be a multiple of 32, because every warp issues 32 threads at once (see the sketch below).
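The `MatAdd` examples above elide the host-side setup with `...`, and the caveat above calls for full warps of 32 threads. A minimal end-to-end sketch, assuming the `SAFE_CALL` macro above is in scope and an arbitrary element count of 1000 (deliberately not a multiple of the block size), rounds the grid up with a ceiling division and guards the surplus threads inside the kernel:

```cpp
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>
// Assumes the SAFE_CALL macro from the "Macro" section above is in scope.

__global__ void VecAdd(const float* A, const float* B, float* C, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                       // guard: the grid is rounded up, so some threads are extra
        C[i] = A[i] + B[i];
}

int main()
{
    const int n     = 1000;          // deliberately not a multiple of 32
    const size_t sz = n * sizeof(float);

    float *h_A = (float*)malloc(sz), *h_B = (float*)malloc(sz), *h_C = (float*)malloc(sz);
    for (int i = 0; i < n; ++i) { h_A[i] = (float)i; h_B[i] = 2.0f * i; }

    float *d_A, *d_B, *d_C;
    SAFE_CALL(cudaMalloc(&d_A, sz), "cudaMalloc d_A failed");
    SAFE_CALL(cudaMalloc(&d_B, sz), "cudaMalloc d_B failed");
    SAFE_CALL(cudaMalloc(&d_C, sz), "cudaMalloc d_C failed");

    SAFE_CALL(cudaMemcpy(d_A, h_A, sz, cudaMemcpyHostToDevice), "copy A failed");
    SAFE_CALL(cudaMemcpy(d_B, h_B, sz, cudaMemcpyHostToDevice), "copy B failed");

    // Block size is a multiple of 32 (full warps); the grid is the ceiling of n / blockSize,
    // so every element is covered and only the last block has idle threads.
    const int threadsPerBlock = 256;
    const int numBlocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    VecAdd<<<numBlocks, threadsPerBlock>>>(d_A, d_B, d_C, n);
    SAFE_CALL(cudaDeviceSynchronize(), "kernel execution failed");

    SAFE_CALL(cudaMemcpy(h_C, d_C, sz, cudaMemcpyDeviceToHost), "copy C failed");
    printf("C[999] = %f (expected %f)\n", h_C[n - 1], h_A[n - 1] + h_B[n - 1]);

    cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
    free(h_A); free(h_B); free(h_C);
    return 0;
}
```

Rounding the grid up keeps every block at a full 256 threads (eight warps), while the `if (i < n)` guard keeps the surplus threads of the last block from touching out-of-range memory.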

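Finally, the `S` parameter of the execution configuration is described above but not demonstrated. A minimal sketch, assuming a hypothetical `Scale` kernel and pinned host memory, of queueing copies and a launch on a non-default stream:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel used only to illustrate the stream parameter.
__global__ void Scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n     = 1 << 20;
    const size_t sz = n * sizeof(float);

    float* h_data;
    cudaMallocHost(&h_data, sz);              // pinned memory so the async copies can overlap
    for (int i = 0; i < n; ++i) h_data[i] = 1.0f;

    float* d_data;
    cudaMalloc(&d_data, sz);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Everything below is queued on `stream`; the host thread does not block.
    cudaMemcpyAsync(d_data, h_data, sz, cudaMemcpyHostToDevice, stream);
    Scale<<<(n + 255) / 256, 256, 0, stream>>>(d_data, 2.0f, n);   // Ns = 0, S = stream
    cudaMemcpyAsync(h_data, d_data, sz, cudaMemcpyDeviceToHost, stream);

    cudaStreamSynchronize(stream);            // wait for the queued work to finish
    printf("h_data[0] = %f\n", h_data[0]);    // expect 2.0

    cudaStreamDestroy(stream);
    cudaFree(d_data);
    cudaFreeHost(h_data);
    return 0;
}
```

Because the copies use pinned host memory and all three operations are queued on the same stream, they execute in order on the device while the host thread stays free until `cudaStreamSynchronize`.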
