Checking CUDA Errors

This post looks at methods of error checking and debugging GPU code, primarily by creating helper functions and macros for checking for errors.

Author: Greg Gutmann
Affiliation: Tokyo Institute of Technology, Nvidia University Ambassador, Nvidia DLI 

This is a continuation of my posts on CUDA programming; for the previous post on thread indexing and memory, click here [Post 2]. In this post, we will look at methods of error checking and debugging GPU code. However, the CUDA API function calls are not explained in detail; for that, I recommend the CUDA documentation [Error Handling].

For beginners, it is not necessary to fully understand everything mentioned here, particularly the later material on asynchronous and concurrent GPU debugging. But it is valuable to have some familiarity with the topic for when it becomes relevant in the future.

Debugging with Print Statements

// 0_creatingErrors.cu
#include <stdio.h>

__global__ void kernelA(int * globalArray){
	// Bug (intentional in this sample): blockIdx.y is used where blockIdx.x was intended
	int globalThreadId = blockIdx.y * blockDim.x + threadIdx.x;

	// If the problem is small or if printing a subset of the problem 
	// (inside conditional expression, etc...). 
	// Then using printf inside of a kernel can be a viable debugging approach.
	printf("blockIdx.x:%d * blockDim.x:%d + threadIdx.x:%d = globalThreadId:%dn", blockIdx.x, blockDim.x, threadIdx.x, globalThreadId);
	globalArray[globalThreadId] = globalThreadId;
}
 
int main()
{
	int elementCount = 32;
	int dataSize = elementCount * sizeof(int);
	
	cudaSetDevice(0);
	
	int * managedArray;
	cudaMallocManaged(&managedArray, dataSize);

	kernelA <<<4,8>>>(managedArray);

	cudaDeviceSynchronize();

	printf("\n");

	// Printing a portion of results can be another good debugging approach
	for(int i = 0; i < elementCount; i++){
		printf("%d%s", managedArray[i], (i < elementCount - 1) ? ", " : "n");
	}	
	
	cudaFree(managedArray);

	cudaDeviceReset();
 
	return 0;
}

One of the most basic methods of debugging, or confirming results, is to simply print out the results.

0, 1, 2, 3, 4, 5, 6, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0

By doing this, we can see there is a problem with indices 8-31, and potentially with all indices. But there is not a lot of information here, and it would likely be even less enlightening if the kernel were more complex.

The second way to print is from the CUDA kernel itself.

Sample's Output from the GPU Kernel:
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:0 = globalThreadId:0
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:1 = globalThreadId:1
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:2 = globalThreadId:2
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:3 = globalThreadId:3
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:4 = globalThreadId:4
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:5 = globalThreadId:5
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:6 = globalThreadId:6
blockIdx.x:1 * blockDim.x:8 + threadIdx.x:7 = globalThreadId:7
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:0 = globalThreadId:0
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:1 = globalThreadId:1
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:2 = globalThreadId:2
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:3 = globalThreadId:3
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:4 = globalThreadId:4
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:5 = globalThreadId:5
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:6 = globalThreadId:6
blockIdx.x:3 * blockDim.x:8 + threadIdx.x:7 = globalThreadId:7
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:0 = globalThreadId:0
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:1 = globalThreadId:1
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:2 = globalThreadId:2
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:3 = globalThreadId:3
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:4 = globalThreadId:4
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:5 = globalThreadId:5
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:6 = globalThreadId:6
blockIdx.x:2 * blockDim.x:8 + threadIdx.x:7 = globalThreadId:7
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:0 = globalThreadId:0
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:1 = globalThreadId:1
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:2 = globalThreadId:2
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:3 = globalThreadId:3
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:4 = globalThreadId:4
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:5 = globalThreadId:5
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:6 = globalThreadId:6
blockIdx.x:0 * blockDim.x:8 + threadIdx.x:7 = globalThreadId:7

From this, we can see that the correct number of threads and blocks are launched, but something is wrong when computing globalThreadId (blockIdx.y was used instead of blockIdx.x). If blockIdx.y were printed, the problem would be more obvious, as it would show all zeros; but often a mistake appears in one spot and not another.

Another detail that is easy to see here is that, when running massively parallel code, print statements quickly become overwhelming. However, targeted printing can still be useful, as the sketch below shows.
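As a sketch of what targeted printing can look like, the kernel's printf can be guarded by a condition so that only a chosen subset of threads reports; the block and thread numbers below are arbitrary, and kernelAVerbose is a hypothetical variant of kernelA:

__global__ void kernelAVerbose(int * globalArray){
	int globalThreadId = blockIdx.x * blockDim.x + threadIdx.x;
	// Only the first few threads of one block print, keeping the output readable
	if (blockIdx.x == 2 && threadIdx.x < 4) {
		printf("block %d, thread %d, globalThreadId %d\n", blockIdx.x, threadIdx.x, globalThreadId);
	}
	globalArray[globalThreadId] = globalThreadId;
}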

Part 1: Error Checking Helpers

It is often inconvenient and cluttered-looking to write out error checking code after every operation or function call. Thus, it is common to write macros or helper functions that speed up writing code and keep it cleaner.

Below is an example of explicitly writing out error checking, taken from Visual Studio's CUDA startup project. On the plus side, this approach allows very exact error reporting along with possible fixes.

// Choose which GPU to run on, change this on a multi-GPU system.
cudaStatus = cudaSetDevice(0);
if (cudaStatus != cudaSuccess) {
    fprintf(stderr, "cudaSetDevice failed!  Do you have a CUDA-capable GPU installed?");
    goto Error;
}

The helper samples in the following subsections have been broken up into .cu and .cuh files for simpler compilation (linking, etc.).

CUDA Error Checking Macros

Macros are a popular option, as they can automatically collect and report information about the file, function, and line number. If you browse the samples included with the CUDA SDK, you will come across error checking macros. The sample code below originally came from there but has been modified quite a bit.

// errorCheckingMacro.cuh
#ifndef CHECK_CUDA_ERROR_M_H
#define CHECK_CUDA_ERROR_M_H

#define PRINT_ON_SUCCESS 1

// To be used around calls that return an error code, ex. cudaDeviceSynchronize or cudaMallocManaged
void checkError(cudaError_t code, char const * func, const char *file, const int line, bool abort = true);
#define checkCUDAError(val) { checkError((val), #val, __FILE__, __LINE__); }	// in-line block, acts like a regular function
#define checkCUDAError2(val) checkError((val), #val, __FILE__, __LINE__) // typical macro

// To be used after calls that do not return an error code, ex. kernels to check kernel launch errors
void checkLastError(char const * func, const char *file, const int line, bool abort = true);
#define checkLastCUDAError(func) { checkLastError(func, __FILE__, __LINE__); }
#define checkLastCUDAError_noAbort(func) { checkLastError(func, __FILE__, __LINE__, 0); }

#endif // CHECK_CUDA_ERROR_M_H

In the above header, the two macro definitions (checkCUDAError and checkCUDAError2) show two different ways of writing the same macro. I tend to use curly brackets since the macro then acts like a regular function when invoked. For more details on macros, see this post: http://www.ebyte.it/library/codesnippets/WritingCppMacros.html#4
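As an aside, a third common way to write the same macro is the do { ... } while (0) idiom, which also behaves as a single statement but additionally composes safely with if/else and a trailing semicolon. A minimal sketch (the name checkCUDAError3 is just for illustration):

#define checkCUDAError3(val) do { checkError((val), #val, __FILE__, __LINE__); } while (0)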

// errorCheckingMacro.cu
#include "errorCheckingMacro.cuh"
#include <stdio.h>

// Assumes single device when calling cudaDeviceReset(); and exit(code);
// In some cases a more lengthy program clean up / termination may be needed

void checkError(cudaError_t code, char const * func, const char *file, const int line, bool abort)
{
	if (code != cudaSuccess) 
	{
		const char * errorMessage = cudaGetErrorString(code);
		fprintf(stderr, "CUDA error returned from "%s" at %s:%d, Error code: %d (%s)n", func, file, line, code, errorMessage);
		if (abort){
			cudaDeviceReset();
			exit(code);
		}
	}
	else if (PRINT_ON_SUCCESS)
	{
		const char * errorMessage = cudaGetErrorString(code);
		fprintf(stderr, "CUDA error returned from "%s" at %s:%d, Error code: %d (%s)n", func, file, line, code, errorMessage);
	}
}

void checkLastError(char const * func, const char *file, const int line, bool abort)
{
	cudaError_t code = cudaGetLastError();
	if (code != cudaSuccess)
	{
		const char * errorMessage = cudaGetErrorString(code);
		fprintf(stderr, "CUDA error returned from "%s" at %s:%d, Error code: %d (%s)n", func, file, line, code, errorMessage);
		if (abort) {
			cudaDeviceReset();
			exit(code);
		}
	}
	else if (PRINT_ON_SUCCESS)
	{
		const char * errorMessage = cudaGetErrorString(code);
		fprintf(stderr, "CUDA error returned from "%s" at %s:%d, Error code: %d (%s)n", func, file, line, code, errorMessage);
	}
}

Above, the first function takes the CUDA error code as one of its parameters, then uses it to check whether an error occurred and, if so, what kind. The second function instead calls cudaGetLastError, which is needed for CUDA operations that do not return an error code (e.g., kernels).
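Putting the two together, a typical call site might look like the following sketch, using the sample kernel and buffer from earlier:

	checkCUDAError(cudaMallocManaged(&managedArray, dataSize));

	kernelA <<<4,8>>>(managedArray);
	checkLastCUDAError("kernelA");           // catches launch (pre-execution) errors
	checkCUDAError(cudaDeviceSynchronize()); // execution errors surface here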

CUDA Error Checking Functions with Added Functionality

It should be noted that the examples shown below could also be converted into macros. They are intended to show another approach, along with some additional methods of checking errors from asynchronous operations.

Non-macro error checking lacks the ability to automatically gather the file name, function name, and line number for printing or other uses. As a solution, I pass an identification string into the error checking function when calling it.

// errorChecking.cuh
#ifndef CHECK_CUDA_ERROR_H
#define CHECK_CUDA_ERROR_H

// This could be set with a compile time flag ex. DEBUG or _DEBUG
// But then would need to use #if / #ifdef not if / else if in code
#define FORCE_SYNC_GPU 0
#define PRINT_ON_SUCCESS 1

cudaError_t checkAndPrint(const char * name, int sync = 0);
cudaError_t checkCUDAError(const char * name, int sync = 0);

#endif // CHECK_CUDA_ERROR_H

The code below shows error checking that will check errors with or without forced synchronization.

Pros: Forcing the code to synchronize ensures the operation just called has finished before errors are checked. Without this, errors from asynchronous calls may surface later in the code, while checking errors for other operations, which is confusing; the error may even go unreported entirely.

Cons: Forced synchronization should only be used when debugging, as the code takes a large performance hit from the constant synchronization calls. Forced synchronization may also change how the code runs, since it prevents most operations from overlapping. For example, the code might normally use many asynchronous calls or run operations concurrently on independent threads. This can lead to errors that show up in release builds but not while debugging.
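As the comment in the header above suggests, FORCE_SYNC_GPU could instead be driven by a compile-time flag, so that debug builds synchronize and release builds do not. A minimal sketch, assuming a _DEBUG define supplied by the build:

#ifdef _DEBUG
#define FORCE_SYNC_GPU 1
#else
#define FORCE_SYNC_GPU 0
#endif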

// errorChecking.cu
#include "errorChecking.cuh"
#include <stdio.h>

cudaError_t checkAndPrint(const char * name, int sync) {
	cudaError_t err = cudaGetLastError();
	if (err != cudaSuccess)
	{
		const char * errorMessage = cudaGetErrorString(err);
		fprintf(stderr, "CUDA error check "%s" returned ERROR code: %d (%s) %s n", name, err, errorMessage, (sync) ? "after sync" : "");
	}
	else if (PRINT_ON_SUCCESS) {
		printf("CUDA error check "%s" executed successfully %sn", name, (sync) ? "after sync" : "");
	}
	return err;
}

cudaError_t checkCUDAError(const char * name, int sync) {
	cudaError_t err = cudaSuccess;
	if (sync || FORCE_SYNC_GPU) {
		err = checkAndPrint(name, 0);
		cudaDeviceSynchronize();
		err = checkAndPrint(name, 1);
	}
	else {
		err = checkAndPrint(name, 0);
	}
	return err;
}

As seen above, the function checkCUDAError is designed to be called right after GPU operations rather than taking the return value of functions that return an error code. This is because not all CUDA functions return an error value and, as mentioned, some calls are asynchronous.

For the asynchronous case, the error checking function checks for errors both after the operation is invoked and after the operation has completed. Example: kernels can have pre-launch errors (incorrect configuration) as well as execution errors (errors while running the GPU code).

Asynchronous possibilities:

  • Memory copies with Async in the name
  • Kernels

Concurrency can also be increased by using CUDA streams (a future post, or check Google now 🙂). The same two-step checking works there too, as sketched below.
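For instance, an asynchronous copy on a stream could be checked in the same way; the buffers and stream below are hypothetical and not part of the samples in this post:

	cudaStream_t stream;
	cudaStreamCreate(&stream);
	checkCUDAError("<main> cudaStreamCreate");

	cudaMemcpyAsync(deviceBuffer, hostBuffer, dataSize, cudaMemcpyHostToDevice, stream);
	checkCUDAError("<main> cudaMemcpyAsync", 1); // 1 forces a sync before the second check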

Note: The error checking functions could also be written to wrap the CUDA calls, the way the macros do; there are many possibilities. General approaches will work, but customizing the helpers to your needs may become desirable as your code grows more complex. For example, the signatures might look like:

cudaError_t checkCUDAError(cudaError_t err, const char * name, int sync = 0);
cudaError_t checkCUDAError(cudaError_t err);
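One way the first wrapper form might be implemented, reusing checkAndPrint from above, is sketched here:

cudaError_t checkCUDAError(cudaError_t err, const char * name, int sync) {
	if (err != cudaSuccess) {
		fprintf(stderr, "CUDA error check \"%s\" returned ERROR code: %d (%s)\n", name, err, cudaGetErrorString(err));
		return err;
	}
	// The call itself reported no error; still check the last-error state,
	// optionally synchronizing first to surface asynchronous errors
	return checkCUDAError(name, sync);
}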

Part 2: Using the Error Checking Code Above

Next, we will look at some short samples that make use of the error checking topics just covered.

Use of Error Checking Macros

This sample shows the use of the error checking macros. It also includes other concepts, such as printing a subset of results or writing testing functions.

// 2_creatingErrorsMacro.cu
#include "errorCheckingMacro.cuh"
#include <stdio.h>
 
__global__ void kernelA(int * globalArray){
	int globalThreadId = blockIdx.x * blockDim.x + threadIdx.x;
	globalArray[globalThreadId] = globalThreadId;
}

// Testing function
void testResults(int * data, int N) {
	// For more information, incorrect values and their indices can be printed when found
	int testPass = true;
	for (int i = 0; i < N; i++) {
		if (data[i] != i) {
			testPass = false;
		}
	}
	printf("Result of test: %snn", (testPass) ? "passed!" : "uh oh...");
}

int main()
{
	int elementCount = 10240;
	int dataSize = elementCount * sizeof(int);
	
	checkCUDAError(cudaSetDevice(0));
	
	int * managedArray;
	checkCUDAError(cudaMallocManaged(&managedArray, dataSize));
 
	kernelA <<<4,1500>>>(managedArray); // Too many threads per block. 
	// Also, not enough threads for the amount of data but that is not the focus for this sample. 
 
	checkLastCUDAError_noAbort("kernelA");

	checkCUDAError(cudaDeviceSynchronize());
	

	// Can print a subset when problem size is larger
	int printStart = elementCount - 16;
	int printEnd = elementCount;
	printf("nChecking values[%d-%d): ", printStart, printEnd); // Interval notation: https://en.wikipedia.org/wiki/Bracket_(mathematics)#Intervals
	for(int i = printStart; i < printEnd; i++){
		printf("%d%s", managedArray[i], (i < elementCount - 1) ? ", " : "n");
	}	
	
	// Or better yet, write a testing function and let the computer test for you
	testResults(managedArray, elementCount);
	
	checkCUDAError(cudaFree(managedArray));
	
	checkCUDAError(cudaDeviceReset());
 
	return 0;
}
Sample's Output: 
CUDA error returned from "cudaSetDevice(0)" at 2_creatingErrorsMacro.cu:15, Error code: 0 (no error)
CUDA error returned from "cudaMallocManaged(&managedArray, dataSize)" at 2_creatingErrorsMacro.cu:18, Error code: 0 (no error)
CUDA error returned from "kernelA" at 2_creatingErrorsMacro.cu:22, Error code: 9 (invalid configuration argument)
CUDA error returned from "cudaDeviceSynchronize()" at 2_creatingErrorsMacro.cu:24, Error code: 0 (no error)
Checking values[10224-10240): 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Result of test: uh oh…
CUDA error returned from "cudaFree(managedArray)" at 2_creatingErrorsMacro.cu:46, Error code: 0 (no error)
CUDA error returned from "cudaDeviceReset()" at 2_creatingErrorsMacro.cu:48, Error code: 0 (no error)

The error in this sample (error code 9, invalid configuration argument) was caused by trying to use 1500 threads per block, which exceeds the 1024 threads-per-block limit. The result of this error can also be seen in the printed portion of the results and in the output of the testing function.

Testing functions are generally the better approach, as errors may occur in the portion of results that was not printed. Printing is often just a way for the developer to gain extra confidence in the results by personally seeing them.
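For example, the testing function above could be extended to report the first mismatch it finds, which often points straight at the bug. A sketch (testResultsVerbose is a hypothetical variant, not part of the sample):

void testResultsVerbose(int * data, int N) {
	for (int i = 0; i < N; i++) {
		if (data[i] != i) {
			printf("Result of test: uh oh... first mismatch at index %d (expected %d, got %d)\n\n", i, i, data[i]);
			return;
		}
	}
	printf("Result of test: passed!\n\n");
}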

One detail that might seem strange is that I added a feature to prevent the program from closing when an error occurs. From my experience creating simulations, I have found that an error does not always affect the outcome of the code. Depending on the situation, it may be better to notify the user, or to silently log the error and program state, rather than closing the program immediately and frustrating the user.

Use of Error Checking Functions

The sample below was made to run on 32 elements but then modified to showcase errors when calling cudaSetDevice and cudaMallocManaged.

// 1_creatingErrors.cu
#include "errorChecking.cuh"
#include <stdio.h>

enum errorCheckFlag {
	NO_SYNC,
	SYNC
};

__global__ void kernelA(int * globalArray){
	int globalThreadId = blockIdx.x * blockDim.x + threadIdx.x;
	globalArray[globalThreadId] = globalThreadId;
}
 
void helperFunction(int * managedMem){
	kernelA <<<4,8>>>(managedMem);
	checkCUDAError("<helperFunction> kernelA", SYNC); 
	// Showing a possible naming convention "<location>" for tracking down error   
	// locations when not using macros. A bit of work but it will save time.
}

int main()
{
	int elementCount = 32;
	int dataSize = elementCount * sizeof(int);
	
	cudaSetDevice(5); // The PC I am on does not have 6 GPUs
	checkCUDAError("<main> cudaSetDevice");
	
	int * managedArray;
	cudaMallocManaged(&managedArray, (size_t)dataSize * 1000000000); // My GPU does not have 128 GB
	checkCUDAError("<main> cudaMallocManaged");

	kernelA <<<4,8>>>(managedArray);
	checkCUDAError("<main> kernelA", SYNC);

    //cudaDeviceSynchronize(); // checkCUDAError will sync with flag SYNC
	
	cudaFree(managedArray);
	checkCUDAError("<main> cudaFree");

	cudaDeviceReset();
	checkCUDAError("<main> cudaDeviceReset");
 
	return 0;
}
Sample's Output as is:
CUDA error check "<main> cudaSetDevice" returned ERROR code: 10 (invalid device ordinal)
CUDA error check "<main> cudaMallocManaged" returned ERROR code: 2 (out of memory)
CUDA error check "<main> kernelA" executed successfully
CUDA error check "<main> kernelA" returned ERROR code: 77 (an illegal memory access was encountered) after sync
CUDA error check "<main> cudaFree" returned ERROR code: 77 (an illegal memory access was encountered)
CUDA error check "<main> cudaDeviceReset" executed successfully

The first error (10) was caused because there is no GPU with the device ID of 5 on my system.
The next error (2) was caused because the code attempted to allocate more memory than was available.
Then error (77), in both locations, was a result of trying to work with memory that was never successfully allocated.

Sample's Output if Corrected:
CUDA error check "<main> cudaSetDevice" executed successfully
CUDA error check "<main> cudaMallocManaged" executed successfully
CUDA error check "<main> kernelA" executed successfully
CUDA error check "<main> kernelA" executed successfully after sync
CUDA error check "<main> cudaFree" executed successfully
CUDA error check "<main> cudaDeviceReset" executed successfully

With the errors corrected (a device ID of 0 and the "* 1000000000" removed), everything reports running successfully. In practice, though, printing on success is usually not done.

Conclusion

In this post, we covered the use of error checking functions and the approaches needed for debugging synchronous and asynchronous GPU operations. Throughout the post, other common debugging methods were also mentioned that may prove useful when writing GPU code.

At this point, you should be able to write and debug simple GPU programs if you have also gone through [post 1] and [post 2] previously.

Additional Information

If you were curious about the naming convention: starting folders or files with numbers, like 1_creatingErrors.cu, makes for easy use of tab autocompletion in terminal windows. It is only intended for initial work with code or samples, though.

This post became a bit longer than previous ones, so I am including my simple makefile. A future post may take a closer look at makefiles, but there are already many pages about them across the internet.

NVCC = nvcc
CUDAFLAGS = -arch=sm_61 -lcudart
OPT = -m64

all: zero one two

errorChecking.obj: errorChecking.cu errorChecking.cuh
	${NVCC} ${CUDAFLAGS} ${OPT} -c errorChecking.cu -o errorChecking.obj

errorCheckingMacro.obj: errorCheckingMacro.cu errorCheckingMacro.cuh
	${NVCC} ${CUDAFLAGS} ${OPT} -c errorCheckingMacro.cu -o errorCheckingMacro.obj

zero: 0_creatingErrors.cu
	${NVCC} ${CUDAFLAGS} ${OPT} -o zero 0_creatingErrors.cu

one: 1_creatingErrors.cu errorChecking.obj
	${NVCC} ${CUDAFLAGS} ${OPT} -o one 1_creatingErrors.cu errorChecking.obj

two: 2_creatingErrorsMacro.cu errorCheckingMacro.obj
	${NVCC} ${CUDAFLAGS} ${OPT} -o two 2_creatingErrorsMacro.cu errorCheckingMacro.obj
	
clean:
	${RM} *.o *.obj *.exp *.pdb *.exe zero one two

Contact me if you would like to use the contents of this post. Thanks 🙂
Copyright © 2019 by Gregory Gutmann

Introduction

Proper CUDA error checking is critical for making CUDA program development smooth and successful. Missing or misidentifying CUDA errors can cause problems in production or waste a lot of time in debugging.

In this blog post, I would like to quickly discuss proper CUDA error checking.

CUDA Error Types

CUDA errors can be divided into synchronous and asynchronous errors, or sticky and non-sticky errors.

Synchronous Error vs. Asynchronous Error

A CUDA kernel launch is asynchronous: when the host thread reaches the kernel launch, say kernel<<<...>>>, it issues a request to execute the kernel on the GPU and then continues without waiting for the kernel to complete. The kernel might not begin executing right away either.

There are two types of errors associated with a CUDA kernel launch: synchronous errors and asynchronous errors.

A synchronous error happens when the host thread can tell at launch time that the kernel launch is illegal or invalid. For example, when the thread block size or grid size is too large, a synchronous error results immediately after the kernel launch call, and it can be captured by a CUDA runtime error capturing API call, such as cudaGetLastError, placed right after the launch.

An asynchronous error happens during kernel execution or during the device-side execution of an asynchronous CUDA runtime API call. It might take a while for the error to be encountered and sent back to the host thread. For example, a kernel, or an asynchronous call such as cudaMemcpyAsync, might access an invalid memory address late in its execution; the execution is then aborted and the error is sent back to the host. Even if there is a cudaGetLastError call right after the kernel launch, by the time the error reaches the host that call has already executed and found no error. Asynchronous errors can be captured by explicitly synchronizing with the kernel using a CUDA runtime API call such as cudaDeviceSynchronize, cudaStreamSynchronize, or cudaEventSynchronize, and then checking the error returned by that call (or capturing it with cudaGetLastError). However, explicit synchronization usually hurts performance and is therefore not recommended in production unless absolutely necessary.
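In code, the two capture points might look like this sketch (grid, block, and the kernel arguments are placeholders):

kernel<<<grid, block>>>(args);
cudaError_t sync_err  = cudaGetLastError();       // catches synchronous (launch) errors
cudaError_t async_err = cudaDeviceSynchronize();  // catches asynchronous (execution) errors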

Sticky vs. Non-Sticky Error

CUDA runtime API calls return non-sticky errors, whereas errors arising from CUDA kernel execution are sticky.

A non-sticky error is recoverable, meaning subsequent CUDA runtime API calls can behave normally, and the CUDA context is not corrupted. For example, when we allocate memory using cudaMalloc and GPU memory is insufficient, the call returns a non-sticky out-of-memory error.

A sticky error is not recoverable, meaning subsequent CUDA runtime API calls will keep returning the same error, and the CUDA context is corrupted until the host process terminates. For example, when a kernel accesses an invalid memory address during execution, the result is a sticky error that will be captured and returned by all subsequent CUDA runtime API calls.
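A sketch of the difference (the pointers, sizes, and kernel here are placeholders):

// Non-sticky: the failed allocation reports the error, but the context survives
cudaError_t err = cudaMalloc(&p, huge_size); // returns an out-of-memory error
err = cudaMalloc(&p, small_size);            // can still succeed afterwards

// Sticky: after an illegal memory access during kernel execution,
// every subsequent runtime call in this process returns the same error
kernel<<<1, 1>>>(bad_ptr);
cudaDeviceSynchronize(); // returns an illegal-memory-access error
cudaFree(p);             // returns the same error again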

CUDA Error Checking Best Practice

In a CUDA program, in both development and production code: always check the return value of each CUDA runtime API call, synchronous or asynchronous, for synchronous errors; always run a CUDA runtime error capturing API call, such as cudaGetLastError, after each kernel launch to catch synchronous launch errors; and check for asynchronous errors in development by synchronizing and error checking after kernel launches, disabling that synchronization in production.
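One way to express that practice is to make the synchronizing check compile away in production, for example keyed off the standard NDEBUG define. A sketch, reusing the CHECK_CUDA_ERROR and CHECK_LAST_CUDA_ERROR macros from the listing further below:

#ifndef NDEBUG
#define CHECK_LAST_CUDA_ERROR_SYNC()               \
    do {                                           \
        CHECK_LAST_CUDA_ERROR();                   \
        CHECK_CUDA_ERROR(cudaDeviceSynchronize()); \
    } while (0)
#else
#define CHECK_LAST_CUDA_ERROR_SYNC() do { } while (0)
#endif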

Quiz

There is a question on the NVIDIA developer forum; let's use it as a quiz. The user has the following code, with all calculations done on the default stream from a single host thread. The cudaDeviceSynchronize call returns cudaSuccess, but the cudaGetLastError call that follows returns an invalid device function error. How can this happen?

res = cudaDeviceSynchronize(); // returns cudaSuccess
res = cudaGetLastError();      // returns an "invalid device function" error

cudaGetLastError returns the last error produced by any runtime call in the same host thread and resets the error state to cudaSuccess. cudaDeviceSynchronize is a CUDA runtime API call, and it reported no error, which means the kernel execution produced no asynchronous error. However, a CUDA runtime API call made before the kernel launch, or the kernel launch itself, could have produced a synchronous error that was never checked. That last error is not reset until the cudaGetLastError call, even if other CUDA runtime API calls in between returned cudaSuccess.

For example,

last_error.cu
#include <cuda_runtime.h>
#include <iostream>

#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)
template <typename T>
void check(T err, const char* const func, const char* const file,
           const int line)
{
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << " " << func << std::endl;
    }
}

#define CHECK_LAST_CUDA_ERROR() checkLast(__FILE__, __LINE__)
void checkLast(const char* const file, const int line)
{
    cudaError_t err{cudaGetLastError()};
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << std::endl;
    }
}

int main()
{
    float* p;

    // This allocation is far too large and fails with an out-of-memory error
    CHECK_CUDA_ERROR(cudaMalloc(&p, 1000000000000000 * sizeof(float)));

    // These calls succeed, but the last-error state still holds out-of-memory
    CHECK_CUDA_ERROR(cudaMalloc(&p, 10 * sizeof(float)));
    CHECK_CUDA_ERROR(cudaFree(p));

    // The first call reports (and resets) the stale out-of-memory error
    CHECK_LAST_CUDA_ERROR();

    // The second call finds no error
    CHECK_LAST_CUDA_ERROR();
}
$ nvcc last_error.cu -o last_error
$ ./last_error
CUDA Runtime Error at: last_error.cu:37
out of memory cudaMalloc(&p, 1000000000000000 * sizeof(float))
CUDA Runtime Error at: last_error.cu:45
out of memory

Fundamentally, this happened because the program's error checking did not follow the best practice described previously.


Probably the best way to check for errors in runtime API code is to define an assert style handler function and wrapper macro like this:

#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
   if (code != cudaSuccess) 
   {
      fprintf(stderr,"GPUassert: %s %s %dn", cudaGetErrorString(code), file, line);
      if (abort) exit(code);
   }
}

You can then wrap each API call with the gpuErrchk macro, which will process the return status of the API call it wraps, for example:

gpuErrchk( cudaMalloc(&a_d, size*sizeof(int)) );

If there is an error in a call, a textual message describing the error and the file and line in your code where the error occurred will be emitted to stderr and the application will exit. You could conceivably modify gpuAssert to raise an exception rather than call exit() in a more sophisticated application if it were required.

A second related question is how to check for errors in kernel launches, which can’t be directly wrapped in a macro call like standard runtime API calls. For kernels, something like this:

kernel<<<1,1>>>(a);
gpuErrchk( cudaPeekAtLastError() );
gpuErrchk( cudaDeviceSynchronize() );

will firstly check for an invalid launch argument, then force the host to wait until the kernel stops and check for an execution error. The synchronisation can be eliminated if you have a subsequent blocking API call like this:

kernel<<<1,1>>>(a_d);
gpuErrchk( cudaPeekAtLastError() );
gpuErrchk( cudaMemcpy(a_h, a_d, size * sizeof(int), cudaMemcpyDeviceToHost) );

in which case the cudaMemcpy call can return either errors which occurred during the kernel execution or those from the memory copy itself. This can be confusing for the beginner, and I would recommend using explicit synchronisation after a kernel launch during debugging to make it easier to understand where problems might be arising.

Note that when using CUDA Dynamic Parallelism, a very similar methodology can and should be applied to any usage of the CUDA runtime API in device kernels, as well as after any device kernel launches:

#include <assert.h>
#define cdpErrchk(ans) { cdpAssert((ans), __FILE__, __LINE__); }
__device__ void cdpAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
   if (code != cudaSuccess)
   {
      printf("GPU kernel assert: %s %s %dn", cudaGetErrorString(code), file, line);
      if (abort) assert(0);
   }
}

CUDA Fortran error checking is analogous. See here and here for typical function error return syntax. A method similar to CUDA C++ is used to collect errors related to kernel launches.

📅 2011-Mar-02 ⬩ ✍️ Ashwin Nanjappa ⬩ 🏷️ cuda, error

Error checks in CUDA code can help catch CUDA errors at their source. There are 2 sources of errors in CUDA source code:

  1. Errors from CUDA API calls. For example, a call to cudaMalloc() might fail.

  2. Errors from CUDA kernel calls. For example, there might be invalid memory access inside a kernel.

All CUDA API calls return a cudaError value, so these calls are easy to check:

if ( cudaSuccess != cudaMalloc( &fooPtr, fooSize ) )
    printf( "Error!n" );

CUDA kernel invocations do not return any value. Errors from a CUDA kernel call can be checked after its execution by calling cudaGetLastError():

fooKernel<<< x, y >>>(); // Kernel call
if ( cudaSuccess != cudaGetLastError() )
    printf( "Error!n" );

These two types of checks can be elegantly wrapped up in two simple error-checking functions like this:

// Define this to turn on error checking
#define CUDA_ERROR_CHECK

#define CudaSafeCall( err ) __cudaSafeCall( err, __FILE__, __LINE__ )
#define CudaCheckError()    __cudaCheckError( __FILE__, __LINE__ )

inline void __cudaSafeCall( cudaError err, const char *file, const int line )
{
#ifdef CUDA_ERROR_CHECK
    if ( cudaSuccess != err )
    {
        fprintf( stderr, "cudaSafeCall() failed at %s:%i : %sn",
                 file, line, cudaGetErrorString( err ) );
        exit( -1 );
    }
#endif

    return;
}

inline void __cudaCheckError( const char *file, const int line )
{
#ifdef CUDA_ERROR_CHECK
    cudaError err = cudaGetLastError();
    if ( cudaSuccess != err )
    {
        fprintf( stderr, "cudaCheckError() failed at %s:%i : %sn",
                 file, line, cudaGetErrorString( err ) );
        exit( -1 );
    }

    // More careful checking. However, this will affect performance.
    // Comment away if needed.
    err = cudaDeviceSynchronize();
    if( cudaSuccess != err )
    {
        fprintf( stderr, "cudaCheckError() with sync failed at %s:%i : %sn",
                 file, line, cudaGetErrorString( err ) );
        exit( -1 );
    }
#endif

    return;
}

Using these error checking functions is easy:

CudaSafeCall( cudaMalloc( &fooPtr, fooSize ) );

fooKernel<<< x, y >>>(); // Kernel call
CudaCheckError();

These functions are actually derived from similar functions which used to be available in the cutil.h in old CUDA SDKs.

Notice that the calls are inline functions, so absolutely no code is produced when CUDA_ERROR_CHECK is not defined. These utility functions can prove their worth in catching errors close to the source, but only if they are used everywhere. So, use them to wrap all CUDA API calls and place them after all your kernel calls 😊
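For example, the checking could then be toggled from the compile line rather than edited in the source (the file name here is a placeholder):

$ nvcc -DCUDA_ERROR_CHECK my_program.cu -o my_program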

Tried with: CUDA 5.5
