CUDA Optimization in Practice (2): Alignment and Coalescing



This post makes its case by counterexample: by showing experimentally that uncoalesced memory access is slow, it demonstrates that accesses to global memory should always be coalesced.

Introduction

Alignment and coalescing of global-memory accesses were introduced in two earlier posts in this series. While running alignment experiments, I kept hitting results I could not explain; in particular, misaligned accesses were not slower, which differs from what I learned in the books. Looking for clues in the official documentation, I found that current GPUs have optimized away the penalty for misaligned access: aligned or not, performance is essentially identical. My earlier experiments already showed this, and the experiment below confirms it again. So on a modern GPU such as the GTX 1050 Ti, optimizing for alignment is no longer necessary. What about coalescing? As the experiment below shows, coalescing still has a large impact on performance.

Experiment

This experiment is adapted from the example on NVIDIA's official site.

Code:

/* Adapted from NVIDIA's coalescing example.
 * Copyright (c) 1993-2015, NVIDIA CORPORATION. All rights reserved.
 * Released under the BSD 3-clause license (full text omitted here). */

#include <stdio.h>
#include <stdlib.h>  // atoi (missing from the original listing)
#include <string.h>  // strcmp/strncmp (missing from the original listing)
#include <assert.h>

// Convenience function for checking CUDA runtime API results.
// Can be wrapped around any runtime API call. No-op in release builds.
inline cudaError_t checkCuda(cudaError_t result)
{
#if defined(DEBUG) || defined(_DEBUG)
  if (result != cudaSuccess) {
    fprintf(stderr, "CUDA Runtime Error: %s\n", cudaGetErrorString(result));
    assert(result == cudaSuccess);
  }
#endif
  return result;
}

// Misaligned access: every thread's index is shifted by s elements,
// but neighboring threads still touch neighboring addresses.
template <typename T>
__global__ void offset(T* a, int s)
{
  int i = blockDim.x * blockIdx.x + threadIdx.x + s;
  a[i] = a[i] + 1;
}

// Strided access (uncoalesced for s > 1): neighboring threads touch
// addresses s elements apart.
template <typename T>
__global__ void stride(T* a, int s)
{
  int i = (blockDim.x * blockIdx.x + threadIdx.x) * s;
  a[i] = a[i] + 1;
}

template <typename T>
void runTest(int deviceId, int nMB)
{
  int blockSize = 256;
  float ms;

  T *d_a;
  cudaEvent_t startEvent, stopEvent;

  int n = nMB*1024*1024/sizeof(T);

  // NB: d_a is 33*nMB so that even stride 32 stays within bounds.
  checkCuda( cudaMalloc(&d_a, n * 33 * sizeof(T)) );

  checkCuda( cudaEventCreate(&startEvent) );
  checkCuda( cudaEventCreate(&stopEvent) );

  printf("Offset, Bandwidth (GB/s):\n");

  offset<<<n/blockSize, blockSize>>>(d_a, 0); // warm up

  for (int i = 0; i <= 32; i++) {
    checkCuda( cudaMemset(d_a, 0, n * sizeof(T)) );

    checkCuda( cudaEventRecord(startEvent,0) );
    offset<<<n/blockSize, blockSize>>>(d_a, i);
    checkCuda( cudaEventRecord(stopEvent,0) );
    checkCuda( cudaEventSynchronize(stopEvent) );

    checkCuda( cudaEventElapsedTime(&ms, startEvent, stopEvent) );
    // Each element is read and written once, hence the factor of 2.
    printf("%d, %f\n", i, 2*nMB/ms);
  }

  printf("\n");
  printf("Stride, Bandwidth (GB/s):\n");

  stride<<<n/blockSize, blockSize>>>(d_a, 1); // warm up

  for (int i = 1; i <= 32; i++) {
    checkCuda( cudaMemset(d_a, 0, n * sizeof(T)) );

    checkCuda( cudaEventRecord(startEvent,0) );
    stride<<<n/blockSize, blockSize>>>(d_a, i);
    checkCuda( cudaEventRecord(stopEvent,0) );
    checkCuda( cudaEventSynchronize(stopEvent) );

    checkCuda( cudaEventElapsedTime(&ms, startEvent, stopEvent) );
    printf("%d, %f\n", i, 2*nMB/ms);
  }

  checkCuda( cudaEventDestroy(startEvent) );
  checkCuda( cudaEventDestroy(stopEvent) );
  cudaFree(d_a);
}

int main(int argc, char **argv)
{
  int nMB = 4;
  int deviceId = 0;
  bool bFp64 = false;

  for (int i = 1; i < argc; i++) {
    if (!strncmp(argv[i], "dev=", 4))
      deviceId = atoi((char*)(&argv[i][4]));
    else if (!strcmp(argv[i], "fp64"))
      bFp64 = true;
  }

  cudaDeviceProp prop;

  checkCuda( cudaSetDevice(deviceId) );
  checkCuda( cudaGetDeviceProperties(&prop, deviceId) );
  printf("Device: %s\n", prop.name);
  printf("Transfer size (MB): %d\n", nMB);
  printf("%s Precision\n", bFp64 ? "Double" : "Single");

  if (bFp64) runTest<double>(deviceId, nMB);
  else       runTest<float>(deviceId, nMB);
}
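Assuming the listing is saved as coalescing.cu (the file name is my choice, not from the original post), it can be built and run roughly as follows; sm_61 matches the GTX 1050 Ti mentioned above:

nvcc -O3 -arch=sm_61 -DDEBUG coalescing.cu -o coalescing
./coalescing              run in single precision on device 0
./coalescing fp64 dev=0   run in double precision, explicit device

Adding -DDEBUG is worthwhile while experimenting, since checkCuda only reports errors when DEBUG or _DEBUG is defined.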

Results:

(The measured bandwidth figures for each offset and each stride appeared here in the original post.)

Analysis:

1. The array is allocated at 33 times the size reachable by the thread IDs alone, so even the largest stride never touches out-of-bounds memory.
2. In int i = (blockDim.x * blockIdx.x + threadIdx.x) * s; the position each thread accesses scales with s, so neighboring threads touch elements s apart (see the sketch below).
3. The run yields a minimum, a maximum, and an average time, because each kernel is launched once per offset or stride value, about 32 times in all.
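To get a feel for how the stride breaks coalescing, the following host-only sketch (my own illustration, not part of the original example) counts how many distinct 32-byte sectors a single warp of float accesses touches at each stride. The 32-byte transaction granularity is an assumption of the model, typical of recent GPUs, not a measured fact about any particular chip:

// Host-only sketch: estimate transaction efficiency for one warp of float
// accesses, assuming memory is fetched in 32-byte sectors.
#include <cstdio>
#include <set>

int main() {
    const int warpSize   = 32;
    const int wordBytes  = sizeof(float); // 4 bytes per element
    const int sectorBytes = 32;           // assumed transaction granularity

    printf("stride, sectors touched, useful/fetched bytes\n");
    for (int s = 1; s <= 32; s++) {
        std::set<long> sectors; // distinct 32-byte sectors hit by the warp
        for (int t = 0; t < warpSize; t++) {
            long byteAddr = (long)t * s * wordBytes; // mirrors i = tid * s
            sectors.insert(byteAddr / sectorBytes);
        }
        double efficiency = (double)(warpSize * wordBytes)
                          / (double)(sectors.size() * sectorBytes);
        printf("%2d, %2zu, %.1f%%\n", s, sectors.size(), 100.0 * efficiency);
    }
    return 0;
}

At stride 1 the warp's 128 bytes fit in 4 sectors, so every fetched byte is used; each doubling of the stride doubles the sector count until stride 8, after which every thread sits in its own sector and the useful fraction stays at 4/32 = 12.5%.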

Conclusion

Looking at the code, the first kernel targets the alignment question and the second targets coalescing. Across all 32 misaligned offsets the first kernel delivers essentially the same bandwidth, which confirms the point made above: modern GPUs optimize misaligned access internally. For the second kernel, as the stride grows, a warp's requests spread over more and more memory transactions, so bandwidth drops. But it does not drop to 50%, then 25%, and so on, as I had expected; the efficiency bottoms out at 12.5%. That floor matches the sector model sketched above: with 4-byte floats and 32-byte transactions, a fully scattered warp uses only 4 of every 32 bytes it fetches, i.e. 4/32 = 12.5%. In short, the picture of global-memory access given in the books no longer fits modern GPUs: misaligned access no longer costs performance, and the penalty for uncoalesced access falls off more gently than the old model predicts. One thing has not changed, though: raising memory-access efficiency still pays, and shared memory, which can turn scattered global accesses into coalesced ones, remains one of the most powerful weapons for optimizing CUDA.
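To illustrate that last point, here is a minimal sketch of the classic shared-memory tile pattern, a simplified form of NVIDIA's well-known transpose sample (the kernel name, tile size, and launch configuration here are my choices). A naive transpose must read or write global memory with a stride equal to the matrix dimension; staging a 32x32 tile in shared memory lets both the global read and the global write be coalesced:

#define TILE_DIM 32

__global__ void transposeTiled(float *out, const float *in,
                               int width, int height)
{
    // +1 column of padding avoids shared-memory bank conflicts on the
    // transposed read below.
    __shared__ float tile[TILE_DIM][TILE_DIM + 1];

    int x = blockIdx.x * TILE_DIM + threadIdx.x;
    int y = blockIdx.y * TILE_DIM + threadIdx.y;

    // Coalesced read: consecutive threads (threadIdx.x) read consecutive
    // global addresses within one input row.
    if (x < width && y < height)
        tile[threadIdx.y][threadIdx.x] = in[y * width + x];

    __syncthreads();

    // Swap block coordinates so the write is also coalesced: consecutive
    // threads write consecutive addresses within one output row. A naive
    // transpose would instead write with a stride of `height` floats.
    x = blockIdx.y * TILE_DIM + threadIdx.x;
    y = blockIdx.x * TILE_DIM + threadIdx.y;
    if (x < height && y < width)
        out[y * height + x] = tile[threadIdx.x][threadIdx.y];
}

// Launch with one 32x32 thread block per tile, e.g. for dimensions that
// are multiples of TILE_DIM:
//   dim3 grid(width / TILE_DIM, height / TILE_DIM);
//   dim3 block(TILE_DIM, TILE_DIM);
//   transposeTiled<<<grid, block>>>(d_out, d_in, width, height);

The uncoalesced traffic does not disappear; it is moved into shared memory, where scattered access is cheap, leaving global memory with only coalesced transfers.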

Please credit the original source when reposting: https://www.6miu.com/read-2100060.html
