CUDA Learning - Defining a Unique ID
===

# Let's start with a one-dimensional vector

* This is an example of a ```<<<block_per_grid, thread_per_block>>>``` launch. Sending the data into CUDA and printing gives the result below, where block_per_grid and thread_per_block are:
    * ```dim3 block_per_grid(1);```
    * ```dim3 thread_per_block(8);```

![image](https://hackmd.io/_uploads/rJWNfX16C.png)

```cpp=
__global__ void print_with_pure_threadIDx(int *src)
{
    int tid = threadIdx.x;
    printf("blockIDx %d and threadIDx %d, data %d \n", blockIdx.x, threadIdx.x, src[tid]);
}

void test1()
{
    int size = 8;
    int byte_size = sizeof(int) * size;
    int host_data[] = {8, 10, 2, 5, 4, 23, 9, 7};
    int *device_data;
    cudaMalloc((void**)&device_data, byte_size);
    cudaMemcpy(device_data, host_data, byte_size, cudaMemcpyHostToDevice);

    dim3 block_per_grid(1);
    dim3 thread_per_block(8);
    print_with_pure_threadIDx<<<block_per_grid, thread_per_block>>>(device_data);
    cudaDeviceSynchronize();

    cudaFree(device_data);
    cudaDeviceReset();
}
```

![image](https://hackmd.io/_uploads/Sk1tEXk6A.png)

# Splitting the one-dimensional vector in half (changing the blocks)

![image](https://hackmd.io/_uploads/H1ZtMmkpR.png)

* This time we use the configuration below. The goal is to split the one-dimensional vector across two blocks, with 4 threads per block:
    * ```dim3 block_per_grid(2);```
    * ```dim3 thread_per_block(4);```

![image](https://hackmd.io/_uploads/H1FMSm1aC.png)

* The problem is that inside each block threadIdx.x still runs from 0 to 3 (the Index column in the figure), so both blocks print only host_data[0]~host_data[3].
* To fix this, we need to know which block we are in so we can compute the correct global ID (called the gid; shown as index_R in the table). We change the computation as follows:
    * Find the correct block:
        * Offset_block = blockDim.x * blockIdx.x
    * Find the correct thread:
        * gid = Offset_block + threadIdx.x

```cpp=
__global__ void print_unique_idx_1D(int *src)
{
    int tid = threadIdx.x;
    int gid = blockDim.x * blockIdx.x + tid;
    printf("blockIDx %d and threadIDx %d, data %d \n", blockIdx.x, threadIdx.x, src[gid]);
}
```

![image](https://hackmd.io/_uploads/S1gYLmyTC.png)

# Two-dimensional blocks

* Take the host_data from before and send it into CUDA with the following configuration:
    * ```dim3 block_per_grid(2,2);```
    * ```dim3 thread_per_block(2);```
* The data layout becomes:
    * ![image](https://hackmd.io/_uploads/ByVHu7Jp0.png)
* As you might expect, the gid changes again, so we now compute it as follows:
    * Find which row of blocks we are in:
        * Offset_block_row = gridDim.x * blockDim.x * blockIdx.y
    * Find the correct block:
        * Offset_block = blockDim.x * blockIdx.x
    * Find the correct thread:
        * gid = Offset_block_row + Offset_block + threadIdx.x

```cpp=
__global__ void print_unique_idx_2D(int *src)
{
    int tid = threadIdx.x;
    int offset_b = blockDim.x * blockIdx.x;
    int offset_r = blockDim.x * gridDim.x * blockIdx.y;
    int gid = offset_r + offset_b + tid;
    printf("BlockIDx.x %d , blockIDx.y %d, threadIDx %d, data %d \n", blockIdx.x, blockIdx.y, threadIdx.x, src[gid]);
}
```

* The output is shown below

![image](https://hackmd.io/_uploads/ry3WtmkpA.png)

# Going a step further

* Define a new host_data and send it into CUDA with the following configuration:
    * ```dim3 block_per_grid(2, 2);```
    * ```dim3 thread_per_block(2, 2);```

![image](https://hackmd.io/_uploads/BJaXgEk6A.png)

```cpp=
__global__ void print_unique_idx_2D_2D(int *src)
{
    int tid = blockDim.x * threadIdx.y + threadIdx.x;
    int thread_per_block = blockDim.x * blockDim.y;
    int offset_b = thread_per_block * blockIdx.x;
    int thread_per_row = thread_per_block * gridDim.x;
    int offset_r = thread_per_row * blockIdx.y;
    int gid = offset_r + offset_b + tid;
    printf("BlockIDx.x %d , blockIDx.y %d, threadIDx.x %d threadIDX.y %d data %d\n", blockIdx.x, blockIdx.y, threadIdx.x, threadIdx.y, src[gid]);
}
```

# Summary

* In a kernel launch, block_per_grid and thread_per_block are both dim3 variables, which expand into a tensor-like layout. As the dimensionality of the data changes, you must compute the correct gid to access the data accurately in CUDA.

[github](https://github.com/GU-Lin/Cuda_Practice/tree/main/3_Unique_index)

* ###### tags : `Cuda`