And ye shall know the truth, and the truth shall make you free.
Prologue
A week after a failed experiment, I was unwinding with a game when it struck me just how remarkable the GPU is as a device, churning through computation at astonishing speed.
What is the principle behind this? I was curious. It is also closely tied to optimizing large language models, so this time let's hand-write some CUDA code and try a few optimizations.
An Introduction to CUDA
A GPU has far more cores than a CPU. My CPU is an AMD Ryzen 7 5700X with 8 cores, while my GPU is an RTX 4060 with 3072 CUDA cores. Very roughly, that is 384 times as many parallel execution lanes (a crude proxy for parallel throughput, not a real speedup factor). Each GPU core, however, is much weaker than a CPU core, so the GPU is best suited to highly parallel, relatively simple tasks.
CUDA is NVIDIA's toolkit and programming model for parallel computing. Under CUDA, the logical unit that work is scheduled onto is the Streaming Multiprocessor (SM), while the physical execution units are the CUDA cores mentioned above. The RTX 4060 has 24 SMs; when a program runs, its work is distributed across these 24 SMs, and each SM contains 128 CUDA cores, which gives the 3072 cores we counted earlier.
Each SM contains not only CUDA cores but also a register file and a unified data cache, the latter comprising the L1 cache and shared memory. These are all forms of on-chip storage; that is the key point to remember.
SMs are themselves grouped: several SMs form a Graphics Processing Cluster (GPC), and a GPU contains several GPCs. The hardware hierarchy, from top to bottom, is therefore GPU → GPC → SM → CUDA core.
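To see some of these numbers for your own card, you can query the device properties through the CUDA runtime API. The following is a minimal sketch of my own (not part of the original listings): multiProcessorCount reports the number of SMs, while the number of cores per SM depends on the architecture and is not exposed directly.

```cpp
#include <cstdio>
#include "cuda_runtime.h"

int main() {
    cudaDeviceProp prop;
    // Query the properties of device 0 (the default GPU).
    cudaGetDeviceProperties(&prop, 0);

    printf("Device name:        %s\n", prop.name);
    printf("SM count:           %d\n", prop.multiProcessorCount);
    printf("Warp size:          %d\n", prop.warpSize);
    printf("Max threads/block:  %d\n", prop.maxThreadsPerBlock);
    printf("Shared mem per SM:  %zu bytes\n", prop.sharedMemPerMultiprocessor);
    return 0;
}
```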
That covers the hardware. On the software side, work is organized as Grid → Thread Block → Warp → Thread. When a kernel is launched, a grid is created for it, and a grid lives entirely on one device. The work is then carried out by thread blocks: each thread block is assigned to a single SM, while one SM can host several thread blocks. This is where software meets hardware: inside an SM, work is issued in warps, and a warp is a group of 32 threads that execute the same instructions.
For example, once we have written a CUDA function, at launch time CUDA first creates a grid for it, then distributes the work across thread blocks, then across warps, and finally across individual threads.
The grid is a factory, a thread block is a department, a warp is a project team, and a thread is an individual worker.
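As a small illustration of this hierarchy (my own sketch, not taken from the listings below), a kernel can report where it sits: blockIdx identifies the thread block, threadIdx the thread within the block, and dividing the in-block index by the built-in warpSize tells us which warp the thread belongs to.

```cpp
#include <cstdio>
#include "cuda_runtime.h"

// Each thread reports its position in the grid/block/warp hierarchy.
__global__ void whoAmI() {
    int global_id = blockIdx.x * blockDim.x + threadIdx.x;
    int warp_id   = threadIdx.x / warpSize;   // warp index within this block
    int lane_id   = threadIdx.x % warpSize;   // position within the warp
    printf("block %d, warp %d, lane %d -> global thread %d\n",
           blockIdx.x, warp_id, lane_id, global_id);
}

int main() {
    // 2 thread blocks of 64 threads each, i.e. 4 warps in total.
    whoAmI<<<2, 64>>>();
    cudaDeviceSynchronize();  // wait for the kernel (and its printf output) to finish
    return 0;
}
```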
Accelerating Matrix Initialization
The full code is as follows:
```cpp
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <stdio.h>
#include <ctime>

// CPU version: fill the matrix sequentially.
void initMatrix(float* matrix, int matrix_m, int matrix_n) {
    for (int i = 0; i < matrix_m * matrix_n; i++) {
        matrix[i] = (float)i;
    }
}

// GPU version: each thread fills one element.
__global__ void initMatrix_CUDA(float* matrix, int total_elems) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < total_elems) {
        matrix[i] = (float)i;
    }
}

int main() {
    int matrix_m = 8192, matrix_n = 8192;
    int total_elems = matrix_m * matrix_n;
    size_t total_size = total_elems * sizeof(float);

    float* matrix_cpu = new float[total_elems];

    // Time the CPU initialization.
    clock_t cpu_start, cpu_end;
    cpu_start = clock();
    initMatrix(matrix_cpu, matrix_m, matrix_n);
    cpu_end = clock();
    double cpu_duration = (cpu_end - cpu_start) / (double)CLOCKS_PER_SEC;
    printf("CPU Time: %.6f s\n", cpu_duration);

    // Allocate the matrix on the GPU and time the kernel with CUDA events.
    float* d_matrix_gpu;
    cudaMalloc(&d_matrix_gpu, total_size);

    cudaEvent_t gpu_start, gpu_end;
    cudaEventCreate(&gpu_start);
    cudaEventCreate(&gpu_end);

    cudaEventRecord(gpu_start);
    int block_size = 256;
    int grid_size = 32;   // 32 blocks x 256 threads: only the first 8192 elements get written
    initMatrix_CUDA<<<grid_size, block_size>>>(d_matrix_gpu, total_elems);
    cudaEventRecord(gpu_end);
    cudaEventSynchronize(gpu_end);

    float gpu_duration_ms;
    cudaEventElapsedTime(&gpu_duration_ms, gpu_start, gpu_end);
    double gpu_duration = gpu_duration_ms / 1000.0;
    printf("GPU Time: %.6f s\n", gpu_duration);

    delete[] matrix_cpu;
    cudaFree(d_matrix_gpu);
    cudaEventDestroy(gpu_start);
    cudaEventDestroy(gpu_end);
    return 0;
}
```
Here we can see that a CUDA function (properly called a kernel) is declared with __global__ void. Inside it, the thread index threadIdx, the block index blockIdx, and the threads-per-block count blockDim are combined to compute the element index this thread is responsible for. Allocating memory on the GPU also requires a CUDA API call, cudaMalloc. In main, the kernel is launched with triple angle brackets, CUDA's chevron syntax; the two arguments are the number of thread blocks in the grid and the number of threads per thread block.
The output of the above is:
```
CPU Time: 0.064000 s
GPU Time: 0.001471 s
```
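One thing worth noting about the launch configuration: with grid_size = 32 and block_size = 256, only 8192 threads are launched, so the bounds check if (i < total_elems) never actually excludes anything and only the first 8192 of the 8192 × 8192 elements are written. A common way to make such a kernel correct for any launch size is a grid-stride loop. The sketch below is my own variant of initMatrix_CUDA, not the original code:

```cpp
// Grid-stride loop: each thread handles every (gridDim.x * blockDim.x)-th element,
// so the whole matrix is covered no matter how many blocks are launched.
__global__ void initMatrix_CUDA_stride(float* matrix, int total_elems) {
    int stride = gridDim.x * blockDim.x;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < total_elems; i += stride) {
        matrix[i] = (float)i;
    }
}

// Launched with the same modest grid, correctness no longer depends on grid_size:
//   initMatrix_CUDA_stride<<<32, 256>>>(d_matrix_gpu, total_elems);
```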
Accelerating Matrix Addition
The full code is as follows:
```cpp
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <stdio.h>
#include <ctime>

void initMatrix(float* matrix, int matrix_m, int matrix_n) {
    for (int i = 0; i < matrix_m * matrix_n; i++) {
        matrix[i] = (float)i;
    }
}

__global__ void initMatrix_CUDA(float* matrix, int total_elems) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < total_elems) {
        matrix[i] = (float)i;
    }
}

// CPU version: element-wise addition in a single loop.
void addMatrix(float* matrix, float* matrix_a, float* matrix_b, int matrix_m, int matrix_n) {
    for (int i = 0; i < matrix_m * matrix_n; i++) {
        matrix[i] = matrix_a[i] + matrix_b[i];
    }
}

// GPU version: each thread adds one pair of elements.
__global__ void addMatrix_CUDA(float* matrix, float* matrix_a, float* matrix_b, int matrix_m, int matrix_n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < matrix_m * matrix_n) {
        matrix[i] = matrix_a[i] + matrix_b[i];
    }
}

// Print the first print_elem elements of the matrix.
void printMatrix(float* matrix, int matrix_m, int matrix_n, int print_elem) {
    if (print_elem > matrix_m * matrix_n) {
        print_elem = matrix_m * matrix_n;
    }
    for (int i = 0; i < print_elem; i++) {
        printf("%f ", matrix[i]);
    }
    printf("\n");
}

int main() {
    int matrix_m = 8192, matrix_n = 8192;
    int total_elems = matrix_m * matrix_n;
    size_t total_size = total_elems * sizeof(float);
    clock_t start, end;

    float* matrix_cpu_a = new float[total_elems];
    float* matrix_cpu_b = new float[total_elems];
    float* matrix_cpu_c = new float[total_elems];
    initMatrix(matrix_cpu_a, matrix_m, matrix_n);
    initMatrix(matrix_cpu_b, matrix_m, matrix_n);

    start = clock();
    addMatrix(matrix_cpu_c, matrix_cpu_a, matrix_cpu_b, matrix_m, matrix_n);
    end = clock();
    printf("CPU time: %f\n", (double)(end - start) / CLOCKS_PER_SEC);
    printMatrix(matrix_cpu_c, matrix_m, matrix_n, 100);

    cudaEvent_t start_gpu, end_gpu;
    cudaEventCreate(&start_gpu);
    cudaEventCreate(&end_gpu);

    float* matrix_gpu_a, * matrix_gpu_b, * matrix_gpu_c;
    cudaMalloc((void**)&matrix_gpu_a, total_size);
    cudaMalloc((void**)&matrix_gpu_b, total_size);
    cudaMalloc((void**)&matrix_gpu_c, total_size);

    int grid_size = 32 * 32;   // 1024 blocks x 256 threads: covers the first 262144 elements
    int block_size = 256;
    initMatrix_CUDA<<<grid_size, block_size>>>(matrix_gpu_a, total_elems);
    initMatrix_CUDA<<<grid_size, block_size>>>(matrix_gpu_b, total_elems);

    cudaEventRecord(start_gpu);
    addMatrix_CUDA<<<grid_size, block_size>>>(matrix_gpu_c, matrix_gpu_a, matrix_gpu_b, matrix_m, matrix_n);
    cudaEventRecord(end_gpu);

    cudaMemcpy(matrix_cpu_c, matrix_gpu_c, total_size, cudaMemcpyDeviceToHost);
    cudaEventSynchronize(end_gpu);

    float gpu_time;
    cudaEventElapsedTime(&gpu_time, start_gpu, end_gpu);
    printf("GPU time: %f\n", gpu_time);   // cudaEventElapsedTime reports milliseconds
    printMatrix(matrix_cpu_c, matrix_m, matrix_n, 100);
    return 0;
}
```
The results are as follows (note that the GPU time is reported in milliseconds, while the CPU time is in seconds):
```
CPU time: 0.105000
0.000000 2.000000 4.000000 6.000000 8.000000 10.000000 12.000000 14.000000 16.000000 18.000000 20.000000 22.000000 24.000000 26.000000 28.000000 30.000000 32.000000 34.000000 36.000000 38.000000 40.000000 42.000000 44.000000 46.000000 48.000000 50.000000 52.000000 54.000000 56.000000 58.000000 60.000000 62.000000 64.000000 66.000000 68.000000 70.000000 72.000000 74.000000 76.000000 78.000000 80.000000 82.000000 84.000000 86.000000 88.000000 90.000000 92.000000 94.000000 96.000000 98.000000 100.000000 102.000000 104.000000 106.000000 108.000000 110.000000 112.000000 114.000000 116.000000 118.000000 120.000000 122.000000 124.000000 126.000000 128.000000 130.000000 132.000000 134.000000 136.000000 138.000000 140.000000 142.000000 144.000000 146.000000 148.000000 150.000000 152.000000 154.000000 156.000000 158.000000 160.000000 162.000000 164.000000 166.000000 168.000000 170.000000 172.000000 174.000000 176.000000 178.000000 180.000000 182.000000 184.000000 186.000000 188.000000 190.000000 192.000000 194.000000 196.000000 198.000000
GPU time: 0.047008
0.000000 2.000000 4.000000 6.000000 8.000000 10.000000 12.000000 14.000000 16.000000 18.000000 20.000000 22.000000 24.000000 26.000000 28.000000 30.000000 32.000000 34.000000 36.000000 38.000000 40.000000 42.000000 44.000000 46.000000 48.000000 50.000000 52.000000 54.000000 56.000000 58.000000 60.000000 62.000000 64.000000 66.000000 68.000000 70.000000 72.000000 74.000000 76.000000 78.000000 80.000000 82.000000 84.000000 86.000000 88.000000 90.000000 92.000000 94.000000 96.000000 98.000000 100.000000 102.000000 104.000000 106.000000 108.000000 110.000000 112.000000 114.000000 116.000000 118.000000 120.000000 122.000000 124.000000 126.000000 128.000000 130.000000 132.000000 134.000000 136.000000 138.000000 140.000000 142.000000 144.000000 146.000000 148.000000 150.000000 152.000000 154.000000 156.000000 158.000000 160.000000 162.000000 164.000000 166.000000 168.000000 170.000000 172.000000 174.000000 176.000000 178.000000 180.000000 182.000000 184.000000 186.000000 188.000000 190.000000 192.000000 194.000000 196.000000 198.000000
```
How should we choose the grid size and the block size? Start from what they control: the grid size is the number of thread blocks, and the block size is the number of threads per block. The block size should normally be a multiple of the warp size (32), with values like 128 or 256 being common choices, and the grid should contain enough blocks to cover every element (as noted above, the fixed grids in the listings fall short of that) and to keep every SM busy, so the grid size should not be smaller than the number of SMs. Because an SM hides latency by switching among its resident warps, it is perfectly fine for the product of the two to exceed the number of CUDA cores; a sketch of computing the grid size from the problem size follows below. I will devote a separate post to choosing these parameters in more detail.
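Here is a minimal sketch of that calculation (my own addition; pickGrid is a hypothetical helper, reusing the variable names from the listings): compute the grid size from the element count with a ceiling division, and optionally clamp it to at least the SM count reported by the runtime.

```cpp
#include "cuda_runtime.h"

// Hypothetical helper: pick a grid size for an element-wise kernel.
// block_size is assumed to be a multiple of the warp size, e.g. 256.
inline dim3 pickGrid(int total_elems, int block_size) {
    // Ceiling division: enough blocks for one thread per element.
    int grid_size = (total_elems + block_size - 1) / block_size;

    // Optional sanity check: keep at least one block per SM.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    if (grid_size < prop.multiProcessorCount) {
        grid_size = prop.multiProcessorCount;
    }
    return dim3(grid_size);
}

// Usage with the kernels above (the guard `if (i < ...)` makes extra threads harmless):
//   int block_size = 256;
//   dim3 grid = pickGrid(total_elems, block_size);
//   addMatrix_CUDA<<<grid, block_size>>>(matrix_gpu_c, matrix_gpu_a, matrix_gpu_b, matrix_m, matrix_n);
```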
Afterword
At this point I have a basic grasp of CUDA. Honestly, it feels a lot like tuning hyperparameters in AI, especially the grid size and block size. The most important thing, though, is the CUDA way of thinking: split whatever can run in parallel apart by index and compute the pieces independently.