CUDA Error Handling

This post looks at methods of error checking and debugging GPU code, primarily by creating helper functions and macros that check for errors.

Author: Greg Gutmann
Affiliation: Tokyo Institute of Technology, Nvidia University Ambassador, Nvidia DLI 

This is a continuation of my posts on CUDA programming; for the previous post on thread indexing and memory, click here [Post 2]. In this post, we will look at methods of error checking and debugging GPU code. However, the CUDA API function calls are not explained in detail; for that, I recommend the CUDA documentation [Error Handling].

For beginners, it is not necessary to fully understand everything mentioned here, specifically the later sections on asynchronous and concurrent GPU debugging. But it is valuable to have a bit of familiarity with the topic for when it becomes relevant in the future.

Debugging with Print Statements

// 0_creatingErrors.cu
#include <stdio.h>

__global__ void kernelA(int * globalArray){
	int globalThreadId = blockIdx.y * blockDim.x + threadIdx.x; // Bug (intentional in this sample): blockIdx.y instead of blockIdx.x

	// If the problem is small or if printing a subset of the problem 
	// (inside conditional expression, etc...). 
	// Then using printf inside of a kernel can be a viable debugging approach.
	printf("blockIdx.x:%d * blockDim.x:%d + threadIdx.x:%d = globalThreadId:%dn", blockIdx.x, blockDim.x, threadIdx.x, globalThreadId);
	globalArray[globalThreadId] = globalThreadId;
}
 
int main()
{
	int elementCount = 32;
	int dataSize = elementCount * sizeof(int);
	
	cudaSetDevice(0);
	
	int * managedArray;
	cudaMallocManaged(&managedArray, dataSize);

	kernelA <<<4,8>>>(managedArray);

	cudaDeviceSynchronize(); 
	
	printf("\n");

	// Printing a portion of results can be another good debugging approach
	for(int i = 0; i < elementCount; i++){
		printf("%d%s", managedArray[i], (i < elementCount - 1) ? ", " : "n");
	}	
	
	cudaFree(managedArray);

	cudaDeviceReset();
 
	return 0;
}

One of the most basic methods of debugging, or confirming results, is to simply print out the results.

0, 1, 2, 3, 4, 5, 6, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0

By doing this we can see there is a problem with indices 8-31, and potentially all indices. But there is not a lot of information here; it would likely be even less enlightening if the kernel were more complex.

The second way to print is from the CUDA kernel itself.

Sample's Output from the GPU Kernel:
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:0 = globalThreadId:0
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:1 = globalThreadId:1
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:2 = globalThreadId:2
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:3 = globalThreadId:3
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:4 = globalThreadId:4
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:5 = globalThreadId:5
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:6 = globalThreadId:6
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:7 = globalThreadId:7
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:0 = globalThreadId:0
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:1 = globalThreadId:1
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:2 = globalThreadId:2
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:3 = globalThreadId:3
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:4 = globalThreadId:4
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:5 = globalThreadId:5
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:6 = globalThreadId:6
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:7 = globalThreadId:7
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:0 = globalThreadId:0
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:1 = globalThreadId:1
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:2 = globalThreadId:2
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:3 = globalThreadId:3
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:4 = globalThreadId:4
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:5 = globalThreadId:5
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:6 = globalThreadId:6
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:7 = globalThreadId:7
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:0 = globalThreadId:0
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:1 = globalThreadId:1
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:2 = globalThreadId:2
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:3 = globalThreadId:3
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:4 = globalThreadId:4
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:5 = globalThreadId:5
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:6 = globalThreadId:6
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:7 = globalThreadId:7

From this, we can see that the correct number of threads and blocks are launched, but something goes wrong when computing globalThreadId (blockIdx.y was used instead of blockIdx.x). If blockIdx.y were printed it would be more obvious what is happening (all zeros), but mistakes often appear in one spot and not another.

Another detail that is easy to see here is that, when running massively parallel code, print statements may quickly become overwhelming. However, targeted printing can still be useful.
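For example, a hypothetical variant of kernelA (a sketch, not part of the original samples) could print only from the first thread of each block, which keeps the output short and would also have exposed the blockIdx.y bug above:

__global__ void kernelA_debug(int * globalArray){
	int globalThreadId = blockIdx.x * blockDim.x + threadIdx.x;
	if (threadIdx.x == 0) { // one line per block instead of one per thread
		printf("block %d: globalThreadId %d, blockIdx.y %d\n",
			blockIdx.x, globalThreadId, blockIdx.y);
	}
	globalArray[globalThreadId] = globalThreadId;
}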

Part 1: Error Checking Helpers

It is often not convenient, or clean looking, to write error checking code after every operation or function. Thus it is common to write macros or functions that speed up writing code and make it look cleaner.

Below is an example of explicitly writing out error checking, taken from Visual Studio's CUDA startup project. On the plus side, this approach allows for very exact error reporting and suggested fixes.

// Choose which GPU to run on, change this on a multi-GPU system.
cudaStatus = cudaSetDevice(0);
if (cudaStatus != cudaSuccess) {
    fprintf(stderr, "cudaSetDevice failed!  Do you have a CUDA-capable GPU installed?");
    goto Error;
}

The helper samples in the following subsections have been broken up into .cu and .cuh files for simpler compilation (linking, etc.).

CUDA Error Checking Macros

Macros are a popular option as they can automatically collect and report information about the file, function, and line number. If you browse the samples included with the CUDA SDK, you will come across error checking macros. The sample code below originally came from there but has been modified quite a bit.

// errorCheckingMacro.cuh
#ifndef CHECK_CUDA_ERROR_M_H
#define CHECK_CUDA_ERROR_M_H

#define PRINT_ON_SUCCESS 1

// To be used around calls that return an error code, ex. cudaDeviceSynchronize or cudaMallocManaged
void checkError(cudaError_t code, char const * func, const char *file, const int line, bool abort = true);
#define checkCUDAError(val) { checkError((val), #val, __FILE__, __LINE__); }	// in-line regular function
#define checkCUDAError2(val) checkError((val), #val, __FILE__, __LINE__) // typical macro 

// To be used after calls that do not return an error code, ex. kernels to check kernel launch errors
void checkLastError(char const * func, const char *file, const int line, bool abort = true);
#define checkLastCUDAError(func) { checkLastError(func, __FILE__, __LINE__); }
#define checkLastCUDAError_noAbort(func) { checkLastError(func, __FILE__, __LINE__, 0); }

#endif // CHECK_CUDA_ERROR_M_H

In the above sample, checkCUDAError and checkCUDAError2 show two different ways of writing the same macro. I tend to use curly brackets since the macro then acts like a regular function when invoked. For more details on macros see this post: http://www.ebyte.it/library/codesnippets/WritingCppMacros.html#4
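A third common variant, not shown in the header above, wraps the body in do { ... } while (0). As a sketch, it behaves like the curly-bracket version but also stays safe when the macro is used in an if/else statement without braces:

// Sketch: same behavior as checkCUDAError, but safe inside unbraced if/else
#define checkCUDAError3(val) do { checkError((val), #val, __FILE__, __LINE__); } while (0)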

// errorCheckingMacro.cu
#include "errorCheckingMacro.cuh"
#include <stdio.h>

// Assumes single device when calling cudaDeviceReset(); and exit(code);
// In some cases a more lengthy program clean up / termination may be needed

void checkError(cudaError_t code, char const * func, const char *file, const int line, bool abort)
{
	if (code != cudaSuccess) 
	{
		const char * errorMessage = cudaGetErrorString(code);
		fprintf(stderr, "CUDA error returned from \"%s\" at %s:%d, Error code: %d (%s)\n", func, file, line, code, errorMessage);
		if (abort){
			cudaDeviceReset();
			exit(code);
		}
	}
	else if (PRINT_ON_SUCCESS)
	{
		const char * errorMessage = cudaGetErrorString(code);
		fprintf(stderr, "CUDA error returned from \"%s\" at %s:%d, Error code: %d (%s)\n", func, file, line, code, errorMessage);
	}
}

void checkLastError(char const * func, const char *file, const int line, bool abort)
{
	cudaError_t code = cudaGetLastError();
	if (code != cudaSuccess)
	{
		const char * errorMessage = cudaGetErrorString(code);
		fprintf(stderr, "CUDA error returned from \"%s\" at %s:%d, Error code: %d (%s)\n", func, file, line, code, errorMessage);
		if (abort) {
			cudaDeviceReset();
			exit(code);
		}
	}
	else if (PRINT_ON_SUCCESS)
	{
		const char * errorMessage = cudaGetErrorString(code);
		fprintf(stderr, "CUDA error returned from \"%s\" at %s:%d, Error code: %d (%s)\n", func, file, line, code, errorMessage);
	}
}

Above, the first function takes the CUDA error code as one of its parameters, then uses it to check whether an error occurred and, if so, what kind. The second function instead calls cudaGetLastError, which is needed for CUDA operations that do not return an error code (e.g. kernels).

CUDA Error Checking Functions with Added Functionality

It should be noted that the examples shown below could also be converted into macros. They are intended to show another approach, along with some additional methods for error checking asynchronous operations.

Non-macro based error checking cannot automatically gather the file name, function name, and line number for printing or other uses. As a workaround, I pass an identification string into the error checking function when calling it.

// errorChecking.cuh
#ifndef CHECK_CUDA_ERROR_H
#define CHECK_CUDA_ERROR_H

// This could be set with a compile time flag ex. DEBUG or _DEBUG
// But then would need to use #if / #ifdef not if / else if in code
#define FORCE_SYNC_GPU 0
#define PRINT_ON_SUCCESS 1

cudaError_t checkAndPrint(const char * name, int sync = 0);
cudaError_t checkCUDAError(const char * name, int sync = 0);

#endif // CHECK_CUDA_ERROR_H

The code below shows error checking that can be run with or without forced synchronization.

Pros: Forcing the code to synchronize ensures the operation just called has finished before errors are checked. Without this, errors from asynchronous calls may surface later in the code when checking errors for other operations, leading to confusion, or the error may never be reported at all.

Cons: Forced synchronization should only be used when debugging, as constant synchronization calls impose a very large performance hit. Forced synchronization may also change how the code runs, since it prevents most operations from overlapping; for example, the code might normally use many asynchronous calls or run operations concurrently on independent threads. This can lead to errors that show up in release builds but not while debugging.

// errorChecking.cu
#include "errorChecking.cuh"
#include <stdio.h>

cudaError_t checkAndPrint(const char * name, int sync) {
	cudaError_t err = cudaGetLastError();
	if (err != cudaSuccess)
	{
		const char * errorMessage = cudaGetErrorString(err);
		fprintf(stderr, "CUDA error check \"%s\" returned ERROR code: %d (%s) %s\n", name, err, errorMessage, (sync) ? "after sync" : "");
	}
	else if (PRINT_ON_SUCCESS) {
		printf("CUDA error check "%s" executed successfully %sn", name, (sync) ? "after sync" : "");
	}
	return err;
}

cudaError_t checkCUDAError(const char * name, int sync) {
	cudaError_t err = cudaSuccess;
	if (sync || FORCE_SYNC_GPU) {
		err = checkAndPrint(name, 0);
		cudaDeviceSynchronize();
		err = checkAndPrint(name, 1);
	}
	else {
		err = checkAndPrint(name, 0);
	}
	return err;
}

As seen above, the function checkCUDAError is designed to be called right after GPU operations, rather than directly taking the return value of functions that return an error code. This is because not all CUDA functions return an error value, and some calls are asynchronous, as mentioned.

For the asynchronous case, the error checking function checks for errors after invoking the operation and after the operation has completed. Example case: kernels can have pre-launch errors (incorrect configuration) and kernel execution errors (errors running the GPU code).

Asynchronous possibilities:

  • Memory copies with Async in the name
  • Kernels

Concurrency can also be increased by using CUDA streams. (A future post, or check google now 🙂 )
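As a small teaser, the sketch below shows per-stream error checking using the checkCUDAError helper above; managedArrayA and managedArrayB are hypothetical managed allocations:

cudaStream_t s1, s2;
cudaStreamCreate(&s1);
cudaStreamCreate(&s2);

kernelA <<<4, 8, 0, s1>>>(managedArrayA); // launches in different streams may overlap
kernelA <<<4, 8, 0, s2>>>(managedArrayB);

cudaStreamSynchronize(s1); // waits on one stream only
checkCUDAError("<main> kernelA stream1");
cudaStreamSynchronize(s2);
checkCUDAError("<main> kernelA stream2");

cudaStreamDestroy(s1);
cudaStreamDestroy(s2);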

Note: The error checking functions could also be written to wrap the CUDA calls, like a macro does; there are many possibilities. General approaches will work, but customizing them to your needs may become more desirable as your code becomes more complex.

cudaError_t checkCUDAError(cudaError_t err, const char * name, int sync = 0);
cudaError_t checkCUDAError(cudaError_t err);
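A minimal sketch of how the first wrapped overload might look, reusing checkAndPrint and FORCE_SYNC_GPU from the files above (one possibility, not the post's actual implementation):

cudaError_t checkCUDAError(cudaError_t err, const char * name, int sync) {
	if (err != cudaSuccess) {
		fprintf(stderr, "CUDA error check \"%s\" returned ERROR code: %d (%s)\n", name, err, cudaGetErrorString(err));
	}
	if (sync || FORCE_SYNC_GPU) {
		cudaDeviceSynchronize();
		err = checkAndPrint(name, 1); // picks up any error surfaced by the sync
	}
	return err;
}

It would then be invoked as, for example, checkCUDAError(cudaMallocManaged(&managedArray, dataSize), "<main> cudaMallocManaged");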

Part 2: Using the Error Checking Code Above

Next, we will look at some short samples that make use of the error checking topics just covered.

Use of Error Checking Macros

This sample shows the use of the error checking macros. It also includes other concepts, such as printing a subset of results or writing testing functions.

// 2_creatingErrorsMacro.cu
#include "errorCheckingMacro.cuh"
#include <stdio.h>
 
__global__ void kernelA(int * globalArray){
	int globalThreadId = blockIdx.x * blockDim.x + threadIdx.x;
	globalArray[globalThreadId] = globalThreadId;
}

// Testing function
void testResults(int * data, int N) {
	// For more information, incorrect values and their indices can be printed when found
	int testPass = true;
	for (int i = 0; i < N; i++) {
		if (data[i] != i) {
			testPass = false;
		}
	}
	printf("Result of test: %snn", (testPass) ? "passed!" : "uh oh...");
}

int main()
{
	int elementCount = 10240;
	int dataSize = elementCount * sizeof(int);
	
	checkCUDAError(cudaSetDevice(0));
	
	int * managedArray;
	checkCUDAError(cudaMallocManaged(&managedArray, dataSize));
 
	kernelA <<<4,1500>>>(managedArray); // Too many threads per block. 
	// Also, not enough threads for the amount of data but that is not the focus for this sample. 
 
	checkLastCUDAError_noAbort("kernelA");

	checkCUDAError(cudaDeviceSynchronize());
	

	// Can print a subset when problem size is larger
	int printStart = elementCount - 16;
	int printEnd = elementCount;
	printf("nChecking values[%d-%d): ", printStart, printEnd); // Interval notation: https://en.wikipedia.org/wiki/Bracket_(mathematics)#Intervals
	for(int i = printStart; i < printEnd; i++){
		printf("%d%s", managedArray[i], (i < elementCount - 1) ? ", " : "n");
	}	
	
	// Or better yet, write a testing function and let the computer test for you
	testResults(managedArray, elementCount);
	
	checkCUDAError(cudaFree(managedArray));
	
	checkCUDAError(cudaDeviceReset());
 
	return 0;
}
Sample's Output: 
CUDA error returned from "cudaSetDevice(0)" at 2_creatingErrorsMacro.cu:15, Error code: 0 (no error)
CUDA error returned from "cudaMallocManaged(&managedArray, dataSize)" at 2_creatingErrorsMacro.cu:18, Error code: 0 (no error)
CUDA error returned from "kernelA" at 2_creatingErrorsMacro.cu:22, Error code: 9 (invalid configuration argument)
CUDA error returned from "cudaDeviceSynchronize()" at 2_creatingErrorsMacro.cu:24, Error code: 0 (no error)
Checking values[10224-10240): 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Result of test: uh oh…
CUDA error returned from "cudaFree(managedArray)" at 2_creatingErrorsMacro.cu:46, Error code: 0 (no error)
CUDA error returned from "cudaDeviceReset()" at 2_creatingErrorsMacro.cu:48, Error code: 0 (no error)

The error in this sample (error code 9) was caused by trying to launch kernelA with 1500 threads per block. The result of this error can also be seen in the printed portion of the results and in the output of the testing function.

Testing functions are generally a better approach, as errors may occur in a portion of the results that was not printed. Printing often just serves to reassure the developer that the results are correct by letting them see the values personally.

One detail that might seem strange is that I added a feature that prevents the program from closing when an error occurs. From my experience creating simulations, an error does not always affect the code in a harmful way. Depending on the situation, it may be better to notify the user, or to silently log the error and program state, instead of closing the program immediately and frustrating the user.

Use of Error Checking Functions

The sample below was made to run on 32 elements but then modified to showcase errors when calling cudaSetDevice and cudaMallocManaged.

// 1_creatingErrors.cu
#include "errorChecking.cuh"
#include <stdio.h>

enum errorCheckFlag {
	NO_SYNC,
	SYNC
};

__global__ void kernelA(int * globalArray){
	int globalThreadId = blockIdx.x * blockDim.x + threadIdx.x;
	globalArray[globalThreadId] = globalThreadId;
}
 
void helperFunction(int * managedMem){
	kernelA <<<4,8>>>(managedMem);
	checkCUDAError("<helperFunction> kernelA", SYNC); 
	// Showing a possible naming convention "<location>" for tracking down error   
	// locations when not using macros. A bit of work but it will save time.
}

int main()
{
	int elementCount = 32;
	int dataSize = elementCount * sizeof(int);
	
	cudaSetDevice(5); // The PC I am on does not have 6 GPUs
	checkCUDAError("<main> cudaSetDevice");
	
	int * managedArray;
	cudaMallocManaged(&managedArray, dataSize * 1000000000); // My GPU do not have 32 GB
	checkCUDAError("<main> cudaMallocManaged");

	kernelA <<<4,8>>>(managedArray);
	checkCUDAError("<main> kernelA", SYNC);

    //cudaDeviceSynchronize(); // checkCUDAError will sync with flag SYNC
	
	cudaFree(managedArray);
	checkCUDAError("<main> cudaFree");

	cudaDeviceReset();
	checkCUDAError("<main> cudaDeviceReset");
 
	return 0;
}
Sample's Output as is:
CUDA error check "<main> cudaSetDevice" returned ERROR code: 10 (invalid device ordinal)
CUDA error check "<main> cudaMallocManaged" returned ERROR code: 2 (out of memory)
CUDA error check "<main> kernelA" executed successfully
CUDA error check "<main> kernelA" returned ERROR code: 77 (an illegal memory access was encountered) after sync
CUDA error check "<main> cudaFree" returned ERROR code: 77 (an illegal memory access was encountered)
CUDA error check "<main> cudaDeviceReset" executed successfully

The first error (10) was caused because there is no GPU with the device ID of 5 on my system.
The next error (2) was caused because the code attempted to allocate more memory than was available.
Then error (77), in both locations, was a result of trying to work with memory that was never successfully allocated.

Sample's Output if Corrected:
CUDA error check "<main> cudaSetDevice" executed successfully
CUDA error check "<main> cudaMallocManaged" executed successfully
CUDA error check "<main> kernelA" executed successfully
CUDA error check "<main> kernelA" executed successfully after sync
CUDA error check "<main> cudaFree" executed successfully
CUDA error check "<main> cudaDeviceReset" executed successfully

With the errors corrected (using a device ID of 0 and removing “* 1000000000”), everything reports running successfully. Printing on success is usually not done, though.

Conclusion

In this post, we covered the use of error checking functions and the various approaches needed for debugging synchronous and asynchronous GPU operations. Throughout the post, other common debugging methods were also mentioned that might prove useful when writing GPU code.

At this point, you should be able to write and debug simple GPU programs if you have also gone through [post 1] and [post 2] previously.

Additional Information

If you were curious about the naming convention: I found that starting folders or files with numbers, like 1_creatingErrors.cu, makes for easy use of tab autocomplete in terminal windows. It is only intended for initial work with code or samples, though.

This post became a bit longer than previous ones, so I am including my simple makefile. A future post may take a closer look at makefiles, but for now there are many pages about makefiles across the internet.

NVCC = nvcc
CUDAFLAGS = -arch=sm_61 -lcudart
OPT = -m64

all: zero one two

errorChecking.obj: errorChecking.cu errorChecking.cuh
	${NVCC} ${CUDAFLAGS} ${OPT} -c errorChecking.cu -o errorChecking.obj

errorCheckingMacro.obj: errorCheckingMacro.cu errorCheckingMacro.cuh
	${NVCC} ${CUDAFLAGS} ${OPT} -c errorCheckingMacro.cu -o errorCheckingMacro.obj

zero: 0_creatingErrors.cu
	${NVCC} ${CUDAFLAGS} ${OPT} -o zero 0_creatingErrors.cu

one: 1_creatingErrors.cu errorChecking.obj
	${NVCC} ${CUDAFLAGS} ${OPT} -o one 1_creatingErrors.cu errorChecking.obj

two: 2_creatingErrorsMacro.cu errorCheckingMacro.obj
	${NVCC} ${CUDAFLAGS} ${OPT} -o two 2_creatingErrorsMacro.cu errorCheckingMacro.obj
	
clean:
	${RM} *.o *.obj *.exp *.pdb *.exe zero one two

Contact me if you would like to use the contents of this post. Thanks 🙂
Copyright © 2019 by Gregory Gutmann

In this third post of the CUDA C/C++ series, we discuss various characteristics of the wide range of CUDA-capable GPUs, how to query device properties from within a CUDA C/C++ program, and how to handle errors.

Querying Device Properties

In our last post, about performance metrics, we discussed how to compute the theoretical peak bandwidth of a GPU. This calculation used the GPU’s memory clock rate and bus interface width, which we obtained from product literature. The following CUDA C++ code demonstrates a more general approach, calculating the theoretical peak bandwidth by querying the attached device (or devices) for the needed information.

#include <stdio.h> 

int main() {
  int nDevices;

  cudaGetDeviceCount(&nDevices);
  for (int i = 0; i < nDevices; i++) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, i);
    printf("Device Number: %dn", i);
    printf("  Device name: %sn", prop.name);
    printf("  Memory Clock Rate (KHz): %dn",
           prop.memoryClockRate);
    printf("  Memory Bus Width (bits): %dn",
           prop.memoryBusWidth);
    printf("  Peak Memory Bandwidth (GB/s): %fnn",
           2.0*prop.memoryClockRate*(prop.memoryBusWidth/8)/1.0e6);
  }
}

This code uses the function cudaGetDeviceCount(), which returns in the argument nDevices the number of CUDA-capable devices attached to this system. Then in a loop we calculate the theoretical peak bandwidth for each device. The body of the loop uses cudaGetDeviceProperties() to populate the fields of the variable prop, which is an instance of the struct cudaDeviceProp. The program uses only three of cudaDeviceProp's many members: name, memoryClockRate, and memoryBusWidth.

When I compile (using any recent version of the CUDA nvcc compiler, e.g. 4.2 or 5.0rc) and run this code on a machine with a single NVIDIA Tesla C2050, I get the following result.

Device Number: 0
  Device name: Tesla C2050
  Memory Clock Rate (KHz): 1500000
  Memory Bus Width (bits): 384
  Peak Memory Bandwidth (GB/s): 144.00

This is the same value for theoretical peak bandwidth that we calculated in the previous post. When I compile and run the same code on my laptop computer, I get the following output.

Device Number: 0
  Device name: NVS 4200M
  Memory Clock Rate (KHz): 800000
  Memory Bus Width (bits): 64
  Peak Memory Bandwidth (GB/s): 12.800000

There are many other fields in the cudaDeviceProp struct which describe the amounts of various types of memory, limits on thread block sizes, and many other characteristics of the GPU. We could extend the above code to print out all such data, but the deviceQuery code sample provided with the NVIDIA CUDA Toolkit already does this.

Compute Capability

We will discuss many of the device attributes contained in the cudaDeviceProp type in future posts of this series, but I want to mention two important fields here, major and minor. These describe the compute capability of the device, which is typically given in major.minor format and indicates the architecture generation. The first CUDA-capable device in the Tesla product line was the Tesla C870, which has a compute capability of 1.0. The first double-precision capable GPUs, such as Tesla C1060, have compute capability 1.3. GPUs of the Fermi architecture, such as the Tesla C2050 used above, have compute capabilities of 2.x, and GPUs of the Kepler architecture have compute capabilities of 3.x. Many limits related to the execution configuration vary with compute capability, as shown in the following table.

|                              | Tesla C870 | Tesla C1060 | Tesla C2050 | Tesla K10 | Tesla K20 |
|------------------------------|------------|-------------|-------------|-----------|-----------|
| Compute Capability           | 1.0        | 1.3         | 2.0         | 3.0       | 3.5       |
| Max Threads per Thread Block | 512        | 512         | 1024        | 1024      | 1024      |
| Max Threads per SM           | 768        | 1024        | 1536        | 2048      | 2048      |
| Max Thread Blocks per SM     | 8          | 8           | 8           | 16        | 16        |
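Querying these two fields at runtime takes only a few lines; a minimal sketch for device 0:

cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
printf("Compute capability: %d.%d\n", prop.major, prop.minor);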

In the first post of this series, we mentioned that the grouping of threads into thread blocks mimics how thread processors are grouped on the GPU. This group of thread processors is called a streaming multiprocessor, denoted SM in the table above. The CUDA execution model issues thread blocks on multiprocessors, and once issued they do not migrate to other SMs.

Multiple thread blocks can concurrently reside on a multiprocessor subject to available resources (on-chip registers and shared memory) and the limit shown in the last row of the table. The limits on threads and thread blocks in this table are associated with the compute capability and not just a particular device: all devices of the same compute capability have the same limits. There are other characteristics, however, such as the number of multiprocessors per device, that depend on the particular device and not the compute capability. All of these characteristics, whether defined by the particular device or its compute capability, can be obtained using the cudaDeviceProp type.

You can generate code for a specific compute capability by using the nvcc compiler option -arch=sm_xx, where xx indicates the compute capability (without the decimal point). To see a list of compute capabilities for which a particular version of nvcc can generate code, along with other CUDA-related compiler options, issue the command nvcc --help and refer to the -arch entry.

When you specify an execution configuration for a kernel, keep in mind (and query at run time) the limits in the table above. This is especially important for the second execution configuration parameter: the number of threads per thread block. If you specify too few threads per block, then the limit on thread blocks per multiprocessor will limit the amount of parallelism that can be achieved. If you specify too many threads per thread block, well, that brings us to the next section.
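One defensive pattern is to clamp the requested block size to the queried device limit, as in this sketch (myKernel, d_data, and N are hypothetical stand-ins):

cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
int threadsPerBlock = 1024;
if (threadsPerBlock > prop.maxThreadsPerBlock)
  threadsPerBlock = prop.maxThreadsPerBlock; // respect the per-block limit
int blocks = (N + threadsPerBlock - 1) / threadsPerBlock;
myKernel<<<blocks, threadsPerBlock>>>(d_data);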

Handling CUDA Errors

All CUDA C Runtime API functions have a return value which can be used to check for errors that occur during their execution.  In the example above, we can check for successful completion of cudaGetDeviceCount() like this:

cudaError_t err = cudaGetDeviceCount(&nDevices);
if (err != cudaSuccess) printf("%s\n", cudaGetErrorString(err));

We check to make sure cudaGetDeviceCount() returns the value cudaSuccess. If there is an error, then we call the function cudaGetErrorString() to get a character string describing the error.

Handling kernel errors is a bit more complicated because kernels execute asynchronously with respect to the host. To aid in error checking kernel execution, as well as other asynchronous operations, the CUDA runtime maintains an error variable that is overwritten each time an error occurs. The function cudaPeekAtLastError() returns the value of this variable, and the function cudaGetLastError() returns the value of this variable and also resets it to cudaSuccess.
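To illustrate the reset semantics with a sketch (the kernel and its launch configuration are hypothetical):

kernel<<<grid, block>>>(args);
cudaError_t e1 = cudaPeekAtLastError(); // reads the error, leaves it in place
cudaError_t e2 = cudaGetLastError();    // reads the same error and resets it
cudaError_t e3 = cudaGetLastError();    // now cudaSuccess, assuming no new errors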

We can check for errors in the saxpy kernel used in the first post of this series as follows.

saxpy<<<(N+255)/256, 256>>>(N, 2.0, d_x, d_y);
cudaError_t errSync  = cudaGetLastError();
cudaError_t errAsync = cudaDeviceSynchronize();
if (errSync != cudaSuccess) 
  printf("Sync kernel error: %sn", cudaGetErrorString(errSync));
if (errAsync != cudaSuccess)
  printf("Async kernel error: %sn", cudaGetErrorString(errAsync));

This code checks for both synchronous and asynchronous errors. Invalid execution configuration parameters, e.g. too many threads per thread block, are reflected in the value of errSync returned by cudaGetLastError(). Asynchronous errors that occur on the device after control is returned to the host, such as out-of-bounds memory accesses, require a synchronization mechanism such as cudaDeviceSynchronize(), which blocks the host thread until all previously issued commands have completed. Any asynchronous error is returned by cudaDeviceSynchronize(). We can also check for asynchronous errors and reset the runtime error state by modifying the last statement to call cudaGetLastError().

if (errAsync != cudaSuccess)
  printf("Async kernel error: %s\n", cudaGetErrorString(cudaGetLastError()));

Device synchronization is expensive, because it causes the entire device to wait, destroying any potential for concurrency at that point in your program. So use it with care. Typically, I use preprocessor macros to insert asynchronous error checking only in debug builds of my code, and not in release builds.
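One possible shape for such macros, as a sketch (not necessarily the exact macros I use):

#ifdef DEBUG
#define cudaCheckAsync() do {                                  \
  cudaError_t e = cudaDeviceSynchronize();                     \
  if (e != cudaSuccess)                                        \
    printf("Async kernel error: %s\n", cudaGetErrorString(e)); \
} while (0)
#else
#define cudaCheckAsync() ((void)0) // compiles away in release builds
#endif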

Summary

Now you know how to query CUDA device properties and handle errors in CUDA C and C++ programs. These are very important concepts for writing robust CUDA applications.

In the first three posts of this series, we have covered some of the basics of writing CUDA C/C++ programs, focusing on the basic programming model and the syntax of writing simple examples. We discussed timing code and performance metrics in the second post, but we have yet to use these tools in optimizing our code. That will change in the next post, where we will look at optimizing data transfers between the host and device.

Introduction

Proper CUDA error checking is critical for making CUDA program development smooth and successful. Missing or incorrectly identified CUDA errors can cause problems in production or waste a lot of time in debugging.

In this blog post, I would like to quickly discuss proper CUDA error checking.

CUDA Error Types

CUDA errors can be divided into synchronous and asynchronous errors, or sticky and non-sticky errors.

Synchronous Error VS Asynchronous Error

A CUDA kernel launch is asynchronous: when the host thread reaches the kernel launch code, say kernel<<<...>>>, it issues a request to execute the kernel on the GPU and then continues without waiting for the kernel to complete. The kernel might not begin to execute right away either.

There are two types of error for a CUDA kernel launch: synchronous errors and asynchronous errors.

A synchronous error happens when the host thread can tell immediately that the kernel launch is illegal or invalid. For example, when the thread block size or grid size is too large, a synchronous error results immediately after the kernel launch call, and it can be captured by a CUDA runtime error capturing API call, such as cudaGetLastError, placed right after the kernel launch.

An asynchronous error happens during kernel execution or during the execution of an asynchronous CUDA runtime API call on the GPU, and it might take a while before the error is encountered and sent back to the host thread. For example, a kernel, or an asynchronous runtime call such as cudaMemcpyAsync, might access an invalid memory address late in its execution; the GPU aborts the execution and then sends the error back to the host. Even if an error capturing call such as cudaGetLastError is placed right after the kernel launch, by the time the error reaches the host that call has already executed and found no error. An asynchronous error can be captured by explicitly synchronizing with the kernel using runtime API calls such as cudaDeviceSynchronize, cudaStreamSynchronize, or cudaEventSynchronize, and checking the error they return, or by capturing it afterward with an error capturing call such as cudaGetLastError. However, explicit synchronization usually affects performance and is therefore not recommended in production unless absolutely necessary.
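A minimal sketch of both checks around a hypothetical kernel launch:

kernel<<<grid, block>>>(args);                    // hypothetical launch
cudaError_t launch_err = cudaGetLastError();      // synchronous (launch) error, if any
cudaError_t exec_err   = cudaDeviceSynchronize(); // asynchronous (execution) error, if any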

Sticky VS Non-Sticky Error

CUDA runtime API calls return non-sticky errors, if any, whereas errors during CUDA kernel execution result in sticky errors.

A non-sticky error is recoverable, meaning subsequent CUDA runtime API calls behave normally; the CUDA context is not corrupted. For example, when we allocate memory using cudaMalloc and GPU memory is insufficient, a non-sticky error is returned.

A sticky error is not recoverable, meaning subsequent CUDA runtime API calls will always return the same error; the CUDA context is corrupted and remains so until the application's host process is terminated. For example, when a kernel tries to access an invalid memory address during execution, it results in a sticky error that will be captured and returned by all subsequent CUDA runtime API calls.
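As a sketch of the sticky behavior (badKernel is a hypothetical kernel that writes through a null pointer):

badKernel<<<1, 1>>>(nullptr);        // kernel execution triggers an illegal memory access
cudaDeviceSynchronize();             // returns the sticky cudaErrorIllegalAddress
float* p = nullptr;
cudaError_t err = cudaMalloc(&p, 4); // still returns the same sticky error; the context is corrupted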

CUDA Error Checking Best Practice

In a CUDA program, for both development and production code:

  • Always check the return value of each CUDA runtime API call, synchronous or asynchronous, to catch synchronous errors.
  • Always run a CUDA runtime error capturing call, such as cudaGetLastError, after kernel launch calls to catch synchronous launch errors.
  • Check for asynchronous errors in development by synchronizing and error checking after kernel launch calls, and disable this in production.

Quiz

There is a question on the NVIDIA developer forum; let's use it as a quiz. The user has the following code, with all calculations done on the default stream and one host thread. The cudaDeviceSynchronize call returns cudaSuccess, but the cudaGetLastError call returns an invalid device function error. How could this happen?

res = cudaDeviceSynchronize();

res = cudaGetLastError();

cudaGetLastError returns the last error produced by any runtime call in the same host thread and resets it to cudaSuccess. cudaDeviceSynchronize is a CUDA runtime API call, and it got no error; this means the kernel execution produced no asynchronous error. However, there could be errors from CUDA runtime API calls made before the kernel launch, or the kernel launch itself could have encountered a synchronous error, which were never properly checked. The last error produced by those calls is not reset until the cudaGetLastError call, even though other CUDA runtime API calls in between returned cudaSuccess.

For example,

last_error.cu
#include <cuda_runtime.h>
#include <iostream>

#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)
template <typename T>
void check(T err, const char* const func, const char* const file,
const int line)
{
if (err != cudaSuccess)
{
std::cerr << "CUDA Runtime Error at: " << file << ":" << line
<< std::endl;
std::cerr << cudaGetErrorString(err) << " " << func << std::endl;


}
}

#define CHECK_LAST_CUDA_ERROR() checkLast(__FILE__, __LINE__)
void checkLast(const char* const file, const int line)
{
cudaError_t err{cudaGetLastError()};
if (err != cudaSuccess)
{
std::cerr << "CUDA Runtime Error at: " << file << ":" << line
<< std::endl;
std::cerr << cudaGetErrorString(err) << std::endl;


}
}

int main()
{
float* p;

CHECK_CUDA_ERROR(cudaMalloc(&p, 1000000000000000 * sizeof(float)));

CHECK_CUDA_ERROR(cudaMalloc(&p, 10 * sizeof(float)));

CHECK_CUDA_ERROR(cudaFree(p));



CHECK_LAST_CUDA_ERROR();

CHECK_LAST_CUDA_ERROR();
}
$ nvcc last_error.cu -o last_error
$ ./last_error
CUDA Runtime Error at: last_error.cu:37
out of memory cudaMalloc(&p, 1000000000000000 * sizeof(float))
CUDA Runtime Error at: last_error.cu:45
out of memory

Fundamentally, it happened because the program's error checking did not follow the best practices mentioned previously.


Each CUDA Runtime function returns a cudaError_t, which takes one of the CUDA Error Types values. The table below lists common memory-related runtime functions and the error values they can return.

| Function | Description | Possible return values |
| --- | --- | --- |
| cudaMallocHost(void **ptr, size_t size) | Allocates page-locked memory on the host. | cudaSuccess, cudaErrorMemoryAllocation |
| cudaMallocManaged(void **devPtr, size_t size, unsigned int flags) | Allocates memory that will be automatically managed by the Unified Memory system. | cudaSuccess, cudaErrorMemoryAllocation, cudaErrorNotSupported, cudaErrorInvalidValue |
| cudaHostAlloc(void **pHost, size_t size, unsigned int flags) | Allocates page-locked memory on the host. | cudaSuccess, cudaErrorMemoryAllocation |
| cudaFreeHost(void *ptr) | Frees page-locked memory. | cudaSuccess, cudaErrorInitializationError |
| cudaMallocMipmappedArray(cudaMipmappedArray_t *mipmappedArray, const cudaChannelFormatDesc *desc, cudaExtent extent, unsigned int numLevels, unsigned int flags) | Allocates a mipmapped array on the device. | cudaSuccess, cudaErrorMemoryAllocation |
| cudaGetMipmappedArrayLevel(cudaArray_t *levelArray, cudaMipmappedArray_const_t mipmappedArray, unsigned int level) | Gets a mipmap level of a CUDA mipmapped array. | cudaSuccess, cudaErrorInvalidValue |
| cudaFreeMipmappedArray(cudaMipmappedArray_t mipmappedArray) | Frees a mipmapped array on the device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInitializationError |
| cudaGetSymbolAddress(void **devPtr, const void *symbol) | Finds the address associated with a CUDA symbol. | cudaSuccess, cudaErrorInvalidSymbol |
| cudaGetSymbolSize(size_t *size, const void *symbol) | Finds the size of the object associated with a CUDA symbol. | cudaSuccess, cudaErrorInvalidSymbol |
| cudaHostGetDevicePointer(void **pDevice, void *pHost, unsigned int flags) | Passes back the device pointer of mapped host memory allocated by cudaHostAlloc or registered by cudaHostRegister. | cudaSuccess, cudaErrorInvalidValue, cudaErrorMemoryAllocation |
| cudaHostGetFlags(unsigned int *pFlags, void *pHost) | Passes back the flags used to allocate pinned host memory allocated by cudaHostAlloc. | cudaSuccess, cudaErrorInvalidValue |
| cudaHostRegister(void *ptr, size_t size, unsigned int flags) | Registers an existing host memory range for use by CUDA. | cudaSuccess, cudaErrorInvalidValue, cudaErrorMemoryAllocation |
| cudaHostUnregister(void *ptr) | Unregisters a memory range that was registered with cudaHostRegister. | cudaSuccess, cudaErrorInvalidValue |
| cudaMallocPitch(void **devPtr, size_t *pitch, size_t width, size_t height) | Allocates pitched memory on the device. | cudaSuccess, cudaErrorMemoryAllocation |
| cudaMemcpy(void *dst, const void *src, size_t count, cudaMemcpyKind kind) | Copies data between host and device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy2D(void *dst, size_t dpitch, const void *src, size_t spitch, size_t width, size_t height, cudaMemcpyKind kind) | Copies data between host and device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidPitchValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy2DArrayToArray(cudaArray_t dst, size_t wOffsetDst, size_t hOffsetDst, cudaArray_const_t src, size_t wOffsetSrc, size_t hOffsetSrc, size_t width, size_t height, cudaMemcpyKind kind) | Copies data between host and device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy2DAsync(void *dst, size_t dpitch, const void *src, size_t spitch, size_t width, size_t height, cudaMemcpyKind kind, cudaStream_t stream) | Copies data between host and device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidPitchValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy2DFromArray(void *dst, size_t dpitch, cudaArray_const_t src, size_t wOffset, size_t hOffset, size_t width, size_t height, cudaMemcpyKind kind) | Copies data between host and device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidPitchValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy2DFromArrayAsync(void *dst, size_t dpitch, cudaArray_const_t src, size_t wOffset, size_t hOffset, size_t width, size_t height, cudaMemcpyKind kind, cudaStream_t stream) | Copies data between host and device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidPitchValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy2DToArray(cudaArray_t dst, size_t wOffset, size_t hOffset, const void *src, size_t spitch, size_t width, size_t height, cudaMemcpyKind kind) | Copies data between host and device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidPitchValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy2DToArrayAsync(cudaArray_t dst, size_t wOffset, size_t hOffset, const void *src, size_t spitch, size_t width, size_t height, cudaMemcpyKind kind, cudaStream_t stream) | Copies data between host and device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidPitchValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy3D(const cudaMemcpy3DParms *p) | Copies data between 3D objects. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidPitchValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy3DAsync(const cudaMemcpy3DParms *p, cudaStream_t stream) | Copies data between 3D objects. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidPitchValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy3DPeer(const cudaMemcpy3DPeerParms *p) | Copies memory between devices. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevice |
| cudaMemcpy3DPeerAsync(const cudaMemcpy3DPeerParms *p, cudaStream_t stream) | Copies memory between devices asynchronously. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevice |
| cudaMemcpyArrayToArray(cudaArray_t dst, size_t wOffsetDst, size_t hOffsetDst, cudaArray_const_t src, size_t wOffsetSrc, size_t hOffsetSrc, size_t count, cudaMemcpyKind kind) | Copies data between host and device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpyAsync(void *dst, const void *src, size_t count, cudaMemcpyKind kind, cudaStream_t stream) | Copies data between host and device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemcpyFromArray(void *dst, cudaArray_const_t src, size_t wOffset, size_t hOffset, size_t count, cudaMemcpyKind kind) | Copies data between host and device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemcpyFromArrayAsync(void *dst, cudaArray_const_t src, size_t wOffset, size_t hOffset, size_t count, cudaMemcpyKind kind, cudaStream_t stream) | Copies data between host and device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemcpyFromSymbol(void *dst, const void *symbol, size_t count, size_t offset, cudaMemcpyKind kind) | Copies data from the given symbol on the device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidSymbol, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemcpyFromSymbolAsync(void *dst, const void *symbol, size_t count, size_t offset, cudaMemcpyKind kind, cudaStream_t stream) | Copies data from the given symbol on the device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidSymbol, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemcpyPeer(void *dst, int dstDevice, const void *src, int srcDevice, size_t count) | Copies memory between two devices. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevice |
| cudaMemcpyPeerAsync(void *dst, int dstDevice, const void *src, int srcDevice, size_t count, cudaStream_t stream) | Copies memory between two devices asynchronously. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevice |
| cudaMemcpyToArray(cudaArray_t dst, size_t wOffset, size_t hOffset, const void *src, size_t count, cudaMemcpyKind kind) | Copies data between host and device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemcpyToArrayAsync(cudaArray_t dst, size_t wOffset, size_t hOffset, const void *src, size_t count, cudaMemcpyKind kind, cudaStream_t stream) | Copies data between host and device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemcpyToSymbol(const void *symbol, const void *src, size_t count, size_t offset, cudaMemcpyKind kind) | Copies data to the given symbol on the device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidSymbol, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemcpyToSymbolAsync(const void *symbol, const void *src, size_t count, size_t offset, cudaMemcpyKind kind, cudaStream_t stream) | Copies data to the given symbol on the device. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidSymbol, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemset(void *devPtr, int value, size_t count) | Initializes or sets device memory to a value. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer |
| cudaMemset2D(void *devPtr, size_t pitch, int value, size_t width, size_t height) | Initializes or sets device memory to a value. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer |
| cudaMemset2DAsync(void *devPtr, size_t pitch, int value, size_t width, size_t height, cudaStream_t stream) | Initializes or sets device memory to a value. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer |
| cudaMemset3D(cudaPitchedPtr pitchedDevPtr, int value, cudaExtent extent) | Initializes or sets device memory to a value. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer |
| cudaMemset3DAsync(cudaPitchedPtr pitchedDevPtr, int value, cudaExtent extent, cudaStream_t stream) | Initializes or sets device memory to a value. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer |
| cudaMemsetAsync(void *devPtr, int value, size_t count, cudaStream_t stream) | Initializes or sets device memory to a value. | cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer |

Each CUDA Runtime function returns a cudaError_t which takes one of CUDA Error Types values.

| cudaMallocHost
(void **ptr, size_t size) | Allocates page-locked memory on the host.

cudaError_t = cudaSuccess, cudaErrorMemoryAllocation |
| cudaMallocManaged
(void **devPtr, size_t size, unsigned int flags) | Allocates memory that will be automatically managed by the Unified Memory system.

cudaError_t = cudaSuccess, cudaErrorMemoryAllocation cudaErrorNotSupported cudaErrorInvalidValue |
| cudaHostAlloc
(void **pHost, size_t size, unsigned int flags) | Allocates page-locked memory on the host.

cudaError_t = cudaSuccess, cudaErrorMemoryAllocation |
| cudaFreeHost
(void *ptr) | Frees page-locked memory.

cudaError_t = cudaSuccess, cudaErrorInitializationError |
| cudaMallocMipmappedArray
(cudaMipmappedArray_t *mipmappedArray, const cudaChannelFormatDesc *desc, cudaExtent extent, unsigned int numLevels, unsigned int flags) | Allocate a mipmapped array on the device.

cudaError_t = cudaSuccess, cudaErrorMemoryAllocation |
| cudaGetMipmappedArrayLevel
(cudaArray_t *levelArray, cudaMipmappedArray_const_t mipmappedArray, unsigned int level) | Gets a mipmap level of a CUDA

mipmapped array.

cudaError_t = cudaSuccess, cudaErrorInvalidValue |
| cudaFreeMipmappedArray
(cudaMipmappedArray_t mipmappedArray) | Frees a mipmapped array on the device.

cudaError_t = cudaSuccess, cudaErrorInvalidValue, cudaErrorInitializationError |
| cudaGetSymbolAddress
(void **devPtr, const void *symbol) | Finds the address associated with a CUDA symbol.

cudaError_t = cudaSuccess, cudaErrorInvalidSymbol |
| cudaGetSymbolSize
(size_t *size, const void *symbol) | Finds the size of the object associated with a CUDA symbol.

cudaError_t = cudaSuccess, cudaErrorInvalidSymbol |
| cudaHostGetDevicePointer
(void **pDevice, void *pHost, unsigned int flags) | Passes back device pointer of mapped host memory allocated by cudaHostAlloc or registered by cudaHostRegister.

cudaError_t = cudaSuccess, cudaErrorInvalidValue, cudaErrorMemoryAllocation |
| cudaHostGetFlags
(unsigned int *pFlags, void *pHost) | Passes back flags used to allocate pinned host memory allocated by cudaHostAlloc.

cudaError_t = cudaSuccess, cudaErrorInvalidValue |
| cudaHostRegister
(void *ptr, size_t size, unsigned int flags) | Registers an existing host memory range for use by CUDA.

cudaError_t = cudaSuccess, cudaErrorInvalidValue, cudaErrorMemoryAllocation |
| cudaHostUnregister
(void *ptr) | Unregisters a memory range that was registered with cudaHostRegister.

cudaError_t = cudaSuccess, cudaErrorInvalidValue |
| cudaMallocPitch
(void **devPtr, size_t *pitch, size_t width, size_t height) | Allocates pitched memory on the device.

cudaError_t = cudaSuccess, cudaErrorMemoryAllocation |
| cudaMemcpy
(void *dst, const void *src, size_t count, cudaMemcpyKind kind) | Copies data between host and device.

cudaError_t = cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy2D
(void *dst, size_t dpitch, const void *src, size_t spitch, size_t width, size_t height, cudaMemcpyKind kind) | Copies data between host and device.

cudaError_t = cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidPitchValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy2DArrayToArray
(cudaArray_t dst, size_t wOffsetDst, size_t hOffsetDst, cudaArray_const_t src, size_t wOffsetSrc, size_t hOffsetSrc, size_t width, size_t height, cudaMemcpyKind kind) | Copies data between host and device.

cudaError_t = cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy2DAsync
(void *dst, size_t dpitch, const void *src, size_t spitch, size_t width, size_t height, cudaMemcpyKind kind, cudaStream_t stream) | Copies data between host and device.

cudaError_t = cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidPitchValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy2DFromArray
(void *dst, size_t dpitch, cudaArray_const_t src, size_t wOffset, size_t hOffset, size_t width, size_t height, cudaMemcpyKind kind) | Copies data between host and device.

cudaError_t = cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidPitchValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy2DFromArrayAsync
(void *dst, size_t dpitch, cudaArray_const_t src, size_t wOffset, size_t hOffset, size_t width, size_t height, cudaMemcpyKind kind, cudaStream_t stream) | Copies data between host and device.

cudaError_t = cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidPitchValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy2DToArray
(cudaArray_t dst, size_t wOffset, size_t hOffset, const void *src, size_t spitch, size_t width, size_t height, cudaMemcpyKind kind) | Copies data between host and device.

cudaError_t = cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidPitchValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy2DToArrayAsync
(cudaArray_t dst, size_t wOffset, size_t hOffset, const void *src, size_t spitch, size_t width, size_t height, cudaMemcpyKind kind, cudaStream_t stream) | Copies data between host and device.

cudaError_t = cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidPitchValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy3D
(const cudaMemcpy3DParms *p) | Copies data between 3D objects.

cudaError_t = cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidPitchValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy3DAsync
(const cudaMemcpy3DParms *p, cudaStream_t stream) | Copies data between 3D objects.

cudaError_t = cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidPitchValue, cudaErrorInvalidMemcpyDirection |
| cudaMemcpy3DPeer
(const cudaMemcpy3DPeerParms *p) | Copies memory between devices.

cudaError_t = cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevice |
| cudaMemcpy3DPeerAsync
(const cudaMemcpy3DPeerParms *p, cudaStream_t stream) | Copies memory between devices asynchronously.

Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevice

cudaMemcpyArrayToArray(cudaArray_t dst, size_t wOffsetDst, size_t hOffsetDst, cudaArray_const_t src, size_t wOffsetSrc, size_t hOffsetSrc, size_t count, cudaMemcpyKind kind)
Copies data between host and device.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidMemcpyDirection

cudaMemcpyAsync(void *dst, const void *src, size_t count, cudaMemcpyKind kind, cudaStream_t stream)
Copies data between host and device.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection

cudaMemcpyFromArray(void *dst, cudaArray_const_t src, size_t wOffset, size_t hOffset, size_t count, cudaMemcpyKind kind)
Copies data between host and device.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection

cudaMemcpyFromArrayAsync(void *dst, cudaArray_const_t src, size_t wOffset, size_t hOffset, size_t count, cudaMemcpyKind kind, cudaStream_t stream)
Copies data between host and device.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection

cudaMemcpyFromSymbol(void *dst, const void *symbol, size_t count, size_t offset, cudaMemcpyKind kind)
Copies data from the given symbol on the device.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidSymbol, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection

cudaMemcpyFromSymbolAsync(void *dst, const void *symbol, size_t count, size_t offset, cudaMemcpyKind kind, cudaStream_t stream)
Copies data from the given symbol on the device.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidSymbol, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection

cudaMemcpyPeer(void *dst, int dstDevice, const void *src, int srcDevice, size_t count)
Copies memory between two devices.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevice

cudaMemcpyPeerAsync(void *dst, int dstDevice, const void *src, int srcDevice, size_t count, cudaStream_t stream)
Copies memory between two devices asynchronously.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevice

cudaMemcpyToArray(cudaArray_t dst, size_t wOffset, size_t hOffset, const void *src, size_t count, cudaMemcpyKind kind)
Copies data between host and device.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection

cudaMemcpyToArrayAsync(cudaArray_t dst, size_t wOffset, size_t hOffset, const void *src, size_t count, cudaMemcpyKind kind, cudaStream_t stream)
Copies data between host and device.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection

cudaMemcpyToSymbol(const void *symbol, const void *src, size_t count, size_t offset, cudaMemcpyKind kind)
Copies data to the given symbol on the device.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidSymbol, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection

cudaMemcpyToSymbolAsync(const void *symbol, const void *src, size_t count, size_t offset, cudaMemcpyKind kind, cudaStream_t stream)
Copies data to the given symbol on the device.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidSymbol, cudaErrorInvalidDevicePointer, cudaErrorInvalidMemcpyDirection

cudaMemset(void *devPtr, int value, size_t count)
Initializes or sets device memory to a value.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer

cudaMemset2D(void *devPtr, size_t pitch, int value, size_t width, size_t height)
Initializes or sets device memory to a value.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer

cudaMemset2DAsync(void *devPtr, size_t pitch, int value, size_t width, size_t height, cudaStream_t stream)
Initializes or sets device memory to a value.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer

cudaMemset3D(cudaPitchedPtr pitchedDevPtr, int value, cudaExtent extent)
Initializes or sets device memory to a value.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer

cudaMemset3DAsync(cudaPitchedPtr pitchedDevPtr, int value, cudaExtent extent, cudaStream_t stream)
Initializes or sets device memory to a value.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer

cudaMemsetAsync(void *devPtr, int value, size_t count, cudaStream_t stream)
Initializes or sets device memory to a value.
Returns: cudaSuccess, cudaErrorInvalidValue, cudaErrorInvalidDevicePointer
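
Every function above reports failure through its cudaError_t return value, so the most direct check is to capture and compare it. A minimal sketch (dst and byteCount are placeholder names, not from the API):

cudaError_t err = cudaMemset(dst, 0, byteCount);
if (err != cudaSuccess) {
    fprintf(stderr, "cudaMemset failed: %s\n", cudaGetErrorString(err));
}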

Probably the best way to check for errors in runtime API code is to define an assert style handler function and wrapper macro like this:

#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
   if (code != cudaSuccess) 
   {
      fprintf(stderr,"GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
      if (abort) exit(code);
   }
}

You can then wrap each API call with the gpuErrchk macro, which will process the return status of the API call it wraps, for example:

gpuErrchk( cudaMalloc(&a_d, size*sizeof(int)) );

If there is an error in a call, a textual message describing the error and the file and line in your code where the error occurred will be emitted to stderr and the application will exit. You could conceivably modify gpuAssert to raise an exception rather than call exit() in a more sophisticated application if it were required.
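
For instance, a minimal sketch of such a throwing variant (the gpuAssertThrow name is illustrative, not part of any API):

#include <stdexcept>
#include <string>

inline void gpuAssertThrow(cudaError_t code, const char *file, int line)
{
   if (code != cudaSuccess)
   {
      // Raise a host-side exception instead of terminating the process
      throw std::runtime_error(std::string("GPUassert: ") + cudaGetErrorString(code)
                               + " at " + file + ":" + std::to_string(line));
   }
}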

A second related question is how to check for errors in kernel launches, which can’t be directly wrapped in a macro call like standard runtime API calls. For kernels, something like this:

kernel<<<1,1>>>(a);
gpuErrchk( cudaPeekAtLastError() );
gpuErrchk( cudaDeviceSynchronize() );

will first check for an invalid launch argument, then force the host to wait until the kernel stops, and check for an execution error. The synchronisation can be eliminated if you have a subsequent blocking API call like this:

kernel<<<1,1>>>(a_d);
gpuErrchk( cudaPeekAtLastError() );
gpuErrchk( cudaMemcpy(a_h, a_d, size * sizeof(int), cudaMemcpyDeviceToHost) );

in which case the cudaMemcpy call can return either errors which occurred during the kernel execution or those from the memory copy itself. This can be confusing for the beginner, and I would recommend using explicit synchronisation after a kernel launch during debugging to make it easier to understand where problems might be arising.
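
To keep the extra synchronisation out of release builds, the two checks can be folded into a single macro that compiles down to just the launch check when debugging is off. A sketch, assuming a user-defined DEBUG compile flag (the checkKernelLaunch name is illustrative):

#ifdef DEBUG
// Debug builds: verify the launch, then wait for the kernel and check execution
#define checkKernelLaunch() { gpuErrchk( cudaPeekAtLastError() ); gpuErrchk( cudaDeviceSynchronize() ); }
#else
// Release builds: only verify the launch arguments, no synchronisation
#define checkKernelLaunch() { gpuErrchk( cudaPeekAtLastError() ); }
#endif

Usage is then simply kernel<<<grid, block>>>(args); followed by checkKernelLaunch();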

Note that when using CUDA Dynamic Parallelism, a very similar methodology can and should be applied to any usage of the CUDA runtime API in device kernels, as well as after any device kernel launches:

#include <assert.h>
#define cdpErrchk(ans) { cdpAssert((ans), __FILE__, __LINE__); }
__device__ void cdpAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
   if (code != cudaSuccess)
   {
      printf("GPU kernel assert: %s %s %dn", cudaGetErrorString(code), file, line);
      if (abort) assert(0);
   }
}
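
Used from a parent kernel, this might look like the following sketch. childKernel is a placeholder, and note that device-side cudaDeviceSynchronize() was the pre-CUDA-12 way to wait on child grids; it has since been removed from the toolkit:

__global__ void childKernel(int *data); // assumed to be defined elsewhere

__global__ void parentKernel(int *data)
{
   childKernel<<<1, 32>>>(data);          // device-side launch
   cdpErrchk( cudaPeekAtLastError() );    // check the launch itself
   cdpErrchk( cudaDeviceSynchronize() );  // wait for the child grid and check execution (pre-CUDA 12)
}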

CUDA Fortran error checking is analogous. See here and here for typical function error return syntax. A method similar to CUDA C++ is used to collect errors related to kernel launches.

Error Handling

Most functions in the CUDA runtime library return a cudaError_t.

By checking these return values, some execution problems can be identified. A wrapper macro keeps the checks compact:

#define CUDA_CALL(x) do { cudaError_t err = (x); \
    if (err != cudaSuccess) { \
        printf("Error \"%s\"\n", cudaGetErrorString(err)); \
        exit(-1); \
    } } while (0)
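
Usage is then a one-line wrap around any runtime call, for example (dPtr and dataSize are placeholder names):

int *dPtr;
size_t dataSize = 1024 * sizeof(int);
CUDA_CALL(cudaMalloc(&dPtr, dataSize));
CUDA_CALL(cudaMemset(dPtr, 0, dataSize));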

Error codes are recorded in a special variable of the enum type cudaError_t, which holds the code of the most recent error.

cudaError_t cudaPeekAtLastError()
returns the current value of this variable.

cudaError_t cudaGetLastError()
returns the current value of this variable and resets it to cudaSuccess.

const char* cudaGetErrorString(cudaError_t error)
returns the textual description corresponding to an error code.
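
The difference between peeking and getting matters when the status is read more than once. A short sketch (kernelB and its launch configuration are placeholders):

kernelB<<<4, 8>>>(dPtr);
cudaError_t e1 = cudaPeekAtLastError(); // reads the error, leaves it set
cudaError_t e2 = cudaGetLastError();    // reads the error, then resets it to cudaSuccess
cudaError_t e3 = cudaGetLastError();    // now cudaSuccess, unless a new error occurred
printf("%s / %s / %s\n", cudaGetErrorString(e1), cudaGetErrorString(e2), cudaGetErrorString(e3));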

The simplest way to confirm that no CUDA error has occurred in a program is to add the following at the end of main():

std::cout << cudaGetErrorString( cudaGetLastError() );

Memory Errors

Unlike on the CPU, memory errors on the GPU are not identified automatically at run time.

The cuda-memcheck utility makes them much easier to find.
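
A typical invocation from the command line, assuming the binary is called app (the app name is illustrative; the -lineinfo flag embeds source line information so reports can point at source lines, and on recent toolkits the compute-sanitizer tool supersedes cuda-memcheck):

nvcc -lineinfo -o app app.cu
cuda-memcheck ./app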

Asynchrony in CUDA

Some CUDA calls are asynchronous: they submit a command to the device and immediately return control to the CPU. These include:

  • kernel launches (the <<<...>>> syntax)
  • the *Async memory copy functions
  • others

A sketch of the resulting host-side timeline follows this list.
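
A minimal sketch of what the host sees (doUnrelatedHostWork is a placeholder for any host-side code):

sum_kernel<<<blocks, threads>>>(aDev, bDev, cDev); // the launch returns to the CPU immediately
doUnrelatedHostWork();                             // runs while the GPU is still working
cudaDeviceSynchronize();                           // the host blocks here until the kernel finishes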

Why, then, does the following code work correctly?

// launch the kernel (asynchronous)
sum_kernel<<<blocks, threads>>>(aDev, bDev, cDev);

// copy the results back
cudaMemcpy(cHost, cDev, nb, cudaMemcpyDeviceToHost);

The CPU calls cudaMemcpy before the kernel has finished executing.

CUDA Streams

A CUDA stream (execution queue) is a sequence of commands for the GPU (kernel launches, memory copies, and so on) that is executed strictly in order: each command starts only after the previous one in the same stream completes.

Commands from different streams may execute in parallel; a stream does not depend on the execution of commands in other streams.

By default, all commands are placed in the default stream, which has the number 0.
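
A minimal two-stream sketch (kernelA, kernelB, grid, block, dA, and dB are placeholders):

cudaStream_t s1, s2;
cudaStreamCreate(&s1);
cudaStreamCreate(&s2);

kernelA<<<grid, block, 0, s1>>>(dA); // queued in s1
kernelB<<<grid, block, 0, s2>>>(dB); // queued in s2; may overlap with kernelA

cudaStreamSynchronize(s1); // host waits for all work queued in s1
cudaStreamSynchronize(s2); // then for all work queued in s2

cudaStreamDestroy(s1);
cudaStreamDestroy(s2);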

Returning to the question above: the kernel launch and the cudaMemcpy call are placed in the same queue, the default stream, so cudaMemcpy does not begin until the kernel has finished. This is why the code works correctly even though the launch itself is asynchronous.

2011-Mar-02 ⬩ Ashwin Nanjappa ⬩ cuda, error

Error checks in CUDA code can help catch CUDA errors at their source. There are two sources of errors in CUDA source code:

  1. Errors from CUDA API calls. For example, a call to cudaMalloc() might fail.

  2. Errors from CUDA kernel calls. For example, there might be invalid memory access inside a kernel.

All CUDA API calls return a cudaError value, so these calls are easy to check:

if ( cudaSuccess != cudaMalloc( &fooPtr, fooSize ) )
    printf( "Error!\n" );

CUDA kernel invocations do not return any value. Errors from a CUDA kernel call can be checked after its execution by calling cudaGetLastError():

fooKernel<<< x, y >>>(); // Kernel call
if ( cudaSuccess != cudaGetLastError() )
    printf( "Error!\n" );

These two types of checks can be elegantly wrapped up in two simple error-checking functions like this:

// Define this to turn on error checking
#define CUDA_ERROR_CHECK

#define CudaSafeCall( err ) __cudaSafeCall( err, __FILE__, __LINE__ )
#define CudaCheckError()    __cudaCheckError( __FILE__, __LINE__ )

inline void __cudaSafeCall( cudaError err, const char *file, const int line )
{
#ifdef CUDA_ERROR_CHECK
    if ( cudaSuccess != err )
    {
        fprintf( stderr, "cudaSafeCall() failed at %s:%i : %sn",
                 file, line, cudaGetErrorString( err ) );
        exit( -1 );
    }
#endif

    return;
}

inline void __cudaCheckError( const char *file, const int line )
{
#ifdef CUDA_ERROR_CHECK
    cudaError err = cudaGetLastError();
    if ( cudaSuccess != err )
    {
        fprintf( stderr, "cudaCheckError() failed at %s:%i : %sn",
                 file, line, cudaGetErrorString( err ) );
        exit( -1 );
    }

    // More careful checking. However, this will affect performance.
    // Comment away if needed.
    err = cudaDeviceSynchronize();
    if( cudaSuccess != err )
    {
        fprintf( stderr, "cudaCheckError() with sync failed at %s:%i : %sn",
                 file, line, cudaGetErrorString( err ) );
        exit( -1 );
    }
#endif

    return;
}

Using these error checking functions is easy:

CudaSafeCall( cudaMalloc( &fooPtr, fooSize ) );

fooKernel<<< x, y >>>(); // Kernel call
CudaCheckError();

These functions are derived from similar functions that used to be available in cutil.h in old CUDA SDKs.

Notice that the calls are inline functions, so absolutely no code is produced when CUDA_ERROR_CHECK is not defined. These utility functions can only prove their worth, catching errors as close as possible to the source, if they are used everywhere. So, use them to wrap all CUDA API calls and place a check after every kernel call 😊

Tried with: CUDA 5.5
