I wrote the following code in CUDA C (Visual Studio 2015 on Win10, GPU device = Titan Xp) to compute the sum of all elements of a 1D array (flattened from a 2D matrix). The host version is straightforward: it sums all elements with += and returns the value. For the cuBLAS implementation I used the dot-product approach (take the dot product of the target array with an array of all ones, which returns the sum of all elements). The code works for small arrays (e.g., a 144-element array), but for large arrays (e.g., 512x512 = 262144 elements) it returns an incorrect value (though fairly close to the correct one). What am I missing or doing wrong? Thanks in advance. (Disclaimer: new cuBLAS user.)
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "book.h"
#include <cublas_v2.h>
void creatematrix(float *out, int nx, int ny)
{
    float ctr = 0;
    for (int i = 0; i < nx; ++i) {
        for (int j = 0; j < ny; ++j) {
            out[j * nx + i] = ctr / 1E+5;
            ctr = ctr + 1;
        }
    }
}
float add_arr_val(float *im, int N)
{
    float tmp = 0;
    for (int i = 0; i < N; ++i)
        tmp += im[i];
    float out = tmp;
    return out;
}
__global__ void init_ones(float *d_in, int N)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (i < N)
    {
        d_in[i] = 1.0;
    }
}
void main()
{
    // Define matrix size (using flattened array for most operations)
    int nx = 512;    // row size
    int ny = 512;    // column size
    int N = nx * ny; // total size of flattened array
    // CPU section ========================================
    float *M; M = (float*)malloc(N * sizeof(float)); // create array pointer and allocate memory
    creatematrix(M, nx, ny);           // create a test matrix of size nx * ny
    float cpu_out = add_arr_val(M, N); // CPU function
    // GPU and cuBLAS section ==============================
    float *d_M;
    HANDLE_ERROR(cudaMalloc(&d_M, N * sizeof(float)));
    HANDLE_ERROR(cudaMemcpy(d_M, M, N * sizeof(float), cudaMemcpyHostToDevice)); // copy original array M to device as d_M
    // create array of all ones, size N for dot product
    float *d_ones;
    cudaMalloc(&d_ones, N * sizeof(float));
    // Max potential blocksize
    int minGridSize, blockSize, gridSize;
    cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, init_ones, 0, N);
    gridSize = (N + blockSize - 1) / blockSize;
    init_ones << <gridSize, blockSize >> > (d_ones, N); // kernel launch to generate array of all 1's
    cudaDeviceSynchronize();
    float blas_out;                               // output on host variable
    cublasHandle_t handle; cublasCreate(&handle); // initialize CUBLAS context
    cublasSdot(handle, N, d_M, 1, d_ones, 1, &blas_out); // Perform cublas single-precision dot product of (d_M . d_ones)
    cudaDeviceSynchronize();
    // print output from cpu and gpu sections
    printf("native output = %lf\n", cpu_out);
    printf("cublas output = %lf\n", blas_out);
    cublasDestroy(handle);
    free(M);
    cudaFree(d_M);
    cudaFree(d_ones);
}
Output for an array with 262144 elements (a flattened 512x512 matrix):
native output = 343590.437500
cublas output = 343596.062500
Press any key to continue . . .
Output for an array with 144 elements (a flattened 12x12 matrix):
native output = 0.102960
cublas output = 0.102960
Press any key to continue . . .
Answer posted on 2020-08-16 09:28:38
You are running into the limits of float precision. A float really shouldn't be counted on for more than about 5 decimal digits of accuracy, and certain calculation patterns can make it even less accurate than that. In fact, the CUBLAS result is numerically closer to the correctly rounded result than the CPU implementation is. This is easy to demonstrate: all we need to do is perform the host summation in double, and we see that we get a different result:
$ cat t1784.cu
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <cublas_v2.h>
#define HANDLE_ERROR(x) x
void creatematrix(float *out, int nx, int ny)
{
    float ctr = 0;
    for (int i = 0; i < nx; ++i) {
        for (int j = 0; j < ny; ++j) {
            out[j * nx + i] = ctr / 1E+5;
            ctr = ctr + 1;
        }
    }
}
float add_arr_val(float *im, int N)
{
    float tmp = 0;
    for (int i = 0; i < N; ++i)
        tmp += im[i];
    float out = tmp;
    return out;
}
double add_arr_val_dbl(float *im, int N)
{
    double tmp = 0;
    for (int i = 0; i < N; ++i)
        tmp += (double)(im[i]);
    return tmp;
}
__global__ void init_ones(float *d_in, int N)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (i < N)
    {
        d_in[i] = 1.0;
    }
}
int main()
{
    // Define matrix size (using flattened array for most operations)
    int nx = 512;    // row size
    int ny = 512;    // column size
    int N = nx * ny; // total size of flattened array
    // CPU section ========================================
    float *M; M = (float*)malloc(N * sizeof(float)); // create array pointer and allocate memory
    creatematrix(M, nx, ny);           // create a test matrix of size nx * ny
    float cpu_out = add_arr_val(M, N); // CPU function
    double cpu_dbl_out = add_arr_val_dbl(M, N);
    // GPU and cuBLAS section ==============================
    float *d_M;
    HANDLE_ERROR(cudaMalloc(&d_M, N * sizeof(float)));
    HANDLE_ERROR(cudaMemcpy(d_M, M, N * sizeof(float), cudaMemcpyHostToDevice)); // copy original array M to device as d_M
    // create array of all ones, size N for dot product
    float *d_ones;
    cudaMalloc(&d_ones, N * sizeof(float));
    // Max potential blocksize
    int minGridSize, blockSize, gridSize;
    cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, init_ones, 0, N);
    gridSize = (N + blockSize - 1) / blockSize;
    init_ones << <gridSize, blockSize >> > (d_ones, N); // kernel launch to generate array of all 1's
    cudaDeviceSynchronize();
    float blas_out;                               // output on host variable
    cublasHandle_t handle; cublasCreate(&handle); // initialize CUBLAS context
    cublasSdot(handle, N, d_M, 1, d_ones, 1, &blas_out); // Perform cublas single-precision dot product of (d_M . d_ones)
    cudaDeviceSynchronize();
    // print output from cpu and gpu sections
    printf("native output = %f\n", cpu_out);
    printf("native double output = %f\n", cpu_dbl_out);
    printf("cublas output = %f\n", blas_out);
    cublasDestroy(handle);
    free(M);
    cudaFree(d_M);
    cudaFree(d_ones);
}
$ nvcc -o t1784 t1784.cu -lcublas
$ ./t1784
native output = 343590.437500
native double output = 343596.072960
cublas output = 343596.062500
$
The reason the cublas output is actually closer(*) is that it performs the additions in a different order than the host float code does. It works in threadblocks, adding partial sums together before producing the final result.
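To see how the grouping affects the result, here is a small host-side sketch of a blocked (partial-sum) reduction. The helper name add_arr_val_blocked and the block size are made up for this illustration, and this is not CUBLAS's actual internal algorithm; the point is only that summing in groups changes which rounding errors accumulate:

#include <stdio.h>
#include <stdlib.h>

// Illustrative only: blocked float reduction on the host.
// Each block of 'block' elements is summed into its own float,
// then the partial sums are combined.
float add_arr_val_blocked(const float *im, int N, int block)
{
    int nblocks = (N + block - 1) / block;
    float *partial = (float*)malloc(nblocks * sizeof(float));
    for (int b = 0; b < nblocks; ++b) {
        float tmp = 0.0f;
        int end = (b + 1) * block;
        if (end > N) end = N;
        for (int i = b * block; i < end; ++i)
            tmp += im[i];        // per-block partial sum stays relatively small
        partial[b] = tmp;
    }
    float out = 0.0f;
    for (int b = 0; b < nblocks; ++b)
        out += partial[b];       // combine the partial sums
    free(partial);
    return out;
}

Called on the same M as above, e.g. add_arr_val_blocked(M, N, 256), this will typically land closer to the double result than the pure running sum, because the running total never gets large relative to the values being added within a block.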
As an aside, note that there is no need to use the l modifier with the %f printf format specifier; %f is already designed to handle both double and float correctly.
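A minimal illustration of that point (plain C, using the values printed above):

#include <stdio.h>

int main(void)
{
    float  f = 343596.0625f;
    double d = 343596.072960;
    // float arguments to a variadic function are promoted to double,
    // so %f prints both correctly; in standard C the l length modifier
    // has no effect on %f.
    printf("%f\n%f\n", f, d);
    return 0;
}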
For a detailed treatment of how error arises in summation, you can read this paper starting on p. 238, in particular the section "Errors in summation".
(*) You should not assume this will always be the case, however. I do think the partial-sums approach is generally more robust than a pure running sum, but that is just an opinion, not something I have proven or wish to argue. In either case the error is data-dependent, and one could construct a particular data sequence that makes the running-sum method look very good from an accuracy standpoint. To perform large summations where the highest accuracy matters, you should probably use the highest precision available. Beyond that, you may want to read about Kahan summation in the paper linked above.
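For reference, here is a generic textbook sketch of Kahan (compensated) summation applied to this problem; it is not code from the linked paper, just the standard formulation:

// Kahan (compensated) summation: carry a correction term 'c' that
// captures the low-order bits lost each time 'sum' absorbs a new value.
float kahan_sum(const float *im, int N)
{
    float sum = 0.0f;
    float c = 0.0f;              // running compensation for lost low-order bits
    for (int i = 0; i < N; ++i) {
        float y = im[i] - c;     // apply the correction to the next element
        float t = sum + y;       // low-order bits of y may be lost here
        c = (t - sum) - y;       // algebraically recover what was lost
        sum = t;
    }
    return sum;
}

Note that aggressive floating-point optimizations (for example -ffast-math on the host, or nvcc's -use_fast_math for device code) may reorder these operations and defeat the compensation, so such code should be built with strict IEEE semantics.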
https://stackoverflow.com/questions/63431910