This section covers troubleshooting tips and solutions to common issues. If the following doesn’t fix your problem, please submit a support request via Codeplay’s community support website. We cannot provide any guarantees of support, but we will try to help. Please ensure that you are using the most recent stable release of the software before submitting a support request.
Bugs, performance, and feature requests can be reported via the oneAPI DPC++ compiler open-source repository.
Missing Devices in `sycl-ls` Output
sycl-ls does not list the expected devices within the system:
Check that there is a compatible version of the CUDA® or ROCm™ SDK installed on the system (for CUDA or HIP plugins respectively), as well as the compatible drivers.
Check that `nvidia-smi` or `rocm-smi` can correctly identify the devices.
Check that the plugins are correctly loaded. This can be done by setting the environment variable `SYCL_PI_TRACE` to `1` and running `sycl-ls` again.
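For example, in a Linux shell (assuming `sycl-ls` is on your `PATH`):

```shell
SYCL_PI_TRACE=1 sycl-ls
```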
You should see output similar to the following:
```
SYCL_PI_TRACE[basic]: Plugin found and successfully loaded: libpi_opencl.so [ PluginVersion: 11.15.1 ]
SYCL_PI_TRACE[basic]: Plugin found and successfully loaded: libpi_level_zero.so [ PluginVersion: 11.15.1 ]
SYCL_PI_TRACE[basic]: Plugin found and successfully loaded: libpi_cuda.so [ PluginVersion: 11.15.1 ]
[ext_oneapi_cuda:gpu:0] NVIDIA CUDA BACKEND, NVIDIA A100-PCIE-40GB 0.0 [CUDA 11.7]
```
If the plugin you’ve installed doesn’t show up in the `sycl-ls` output, you can run it again with `SYCL_PI_TRACE` this time set to `-1` (e.g. `SYCL_PI_TRACE=-1 sycl-ls`) to see more details of the error:
Within the output, which can be quite large, you may see errors like the following:
```
SYCL_PI_TRACE[-1]: dlopen(/opt/intel/oneapi/compiler/2024.0.0/linux/lib/libpi_hip.so) failed with <libamdhip64.so.4: cannot open shared object file: No such file or directory>
SYCL_PI_TRACE[all]: Check if plugin is present. Failed to load plugin: libpi_hip.so
```
The CUDA plugin requires `libcupti.so` from the CUDA SDK.
The HIP plugin requires `libamdhip64.so` from the ROCm installation.
Double-check your CUDA or ROCm installation and make sure that the environment is set up properly, i.e. `LD_LIBRARY_PATH` points to the correct locations to find the above libraries.
Check that there isn’t a device filtering environment variable set, such as `ONEAPI_DEVICE_SELECTOR` (`sycl-ls` will warn if this one is set) or `SYCL_DEVICE_ALLOWLIST`.
Check permissions. On POSIX systems, access to accelerator devices is typically gated on membership of the appropriate groups. For example, on Ubuntu Linux GPU access may require membership of the `video` or `render` groups, but this can vary depending on your configuration.
Dealing with Invalid Binary Errors
A common mistake is to execute a SYCL program using a platform for which the SYCL program does not have a compatible binary. For example, the SYCL program may have been compiled for a SPIR-V backend but then executed on a HIP device. In such a case, the error code `PI_RESULT_ERROR_INVALID_BINARY` will be reported. In this scenario, check the following points:
Make sure your target platform is listed in `-fsycl-targets` so that the program is compiled for the required platform(s).
Make sure that the program uses a SYCL platform or device selector that is compatible with the platforms for which the executable was compiled.
Correct Platform, Incorrect Device
When running SYCL™ applications targeting CUDA or HIP, under certain circumstances the application may fail and report an error about an invalid binary. For example, for CUDA it may report `CUDA_ERROR_NO_BINARY_FOR_GPU`.
This means that the SYCL device selected was provided with a binary for the correct platform but an incorrect architecture. In that scenario, check the following points:
Make sure your target is in `-fsycl-targets` and that the correct architecture, matching the available hardware, is specified with the flags:
Flags for CUDA: `-Xsycl-target-backend=nvptx64-nvidia-cuda --cuda-gpu-arch=sm_xx`
Flags for HIP: `-Xsycl-target-backend=amdgcn-amd-amdhsa --offload-arch=gfxXXX`
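Putting these together, a full compile invocation might look like the following sketch (`sm_80` and `gfx90a` are example architectures, and `app.cpp` is a placeholder; substitute the values matching your hardware and sources):

```shell
# CUDA: target the NVIDIA PTX backend and a specific compute capability
icpx -fsycl -fsycl-targets=nvptx64-nvidia-cuda \
     -Xsycl-target-backend=nvptx64-nvidia-cuda --cuda-gpu-arch=sm_80 \
     app.cpp -o app

# HIP: target the AMD backend and a specific GPU architecture
icpx -fsycl -fsycl-targets=amdgcn-amd-amdhsa \
     -Xsycl-target-backend=amdgcn-amd-amdhsa --offload-arch=gfx90a \
     app.cpp -o app
```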
Ensure that the correct SYCL device (matching the architecture that the application was built for) is selected at run-time. The environment variable `SYCL_PI_TRACE=1` can be used to display more information on which device was selected, for example:
```
SYCL_PI_TRACE[basic]: Plugin found and successfully loaded: libpi_opencl.so [ PluginVersion: 11.16.1 ]
SYCL_PI_TRACE[basic]: Plugin found and successfully loaded: libpi_level_zero.so [ PluginVersion: 11.16.1 ]
SYCL_PI_TRACE[basic]: Plugin found and successfully loaded: libpi_cuda.so [ PluginVersion: 11.16.1 ]
SYCL_PI_TRACE[all]: Requested device_type: info::device_type::automatic
SYCL_PI_TRACE[all]: Requested device_type: info::device_type::automatic
SYCL_PI_TRACE[all]: Selected device: -> final score = 1500
SYCL_PI_TRACE[all]:   platform: NVIDIA CUDA BACKEND
SYCL_PI_TRACE[all]:   device: NVIDIA GeForce GTX 1050 Ti
```
If an incorrect device is selected, the environment variable `ONEAPI_DEVICE_SELECTOR` may be used to help the SYCL device selector pick the correct one - see the Environment Variables section of the Intel® oneAPI DPC++/C++ Compiler documentation.
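As an illustration (`./app` is a placeholder binary; the full selector syntax is described in the documentation referenced above), restricting the runtime to CUDA GPU devices could look like:

```shell
# Only expose CUDA devices to the SYCL runtime for this run
ONEAPI_DEVICE_SELECTOR=cuda:* ./app
```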
Unresolved extern function ‘…’ / Undefined external symbol ‘…’
This may be caused by a number of things.
There is currently no support for `std::complex` in DPC++. Please use the SYCL complex extension (`sycl::ext::oneapi::experimental::complex`) instead.
The `icpx` compiler driver uses `-ffast-math` mode by default, which can currently lead to some issues resolving certain math functions such as `logf`. This can be worked around by disabling fast math, i.e. passing `-fno-fast-math` to the compiler.
See Install oneAPI for NVIDIA GPUs for more information.
Compiler Error: “cannot find libdevice”
If the CUDA SDK is not installed in a standard location, the compiler may fail to find it, leading to errors during compilation such as:
```
clang-17: error: cannot find libdevice for sm_50; provide path to different CUDA installation via '--cuda-path', or pass '-nocudalib' to build without linking with libdevice
```
To fix this issue, specify the path to your CUDA installation using the `--cuda-path` option.
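For example (a sketch; the installation path and `app.cpp` are placeholders for your actual CUDA root and sources):

```shell
# Point the compiler at a non-standard CUDA installation
icpx -fsycl -fsycl-targets=nvptx64-nvidia-cuda \
     --cuda-path=/usr/local/cuda-12.0 \
     app.cpp -o app
```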
Compiler Error: “needs target feature”
Some nvptx builtins that are used by the DPC++ runtime require a minimum
Compute Capability in order to compile. If you have not targeted a
sufficient Compute Capability for a builtin that you’re using in your
program (by using the compiler argument
-Xsycl-target-backend=nvptx64-nvidia-cuda --cuda-gpu-arch=sm_xx), an
error with the following pattern will be reported:
```
error: '__builtin_name' needs target feature (sm_70|sm_72|..),...
```
In order to avoid such an error, ensure that you are compiling for a device
with a sufficient Compute Capability.
If you are still getting such an error despite passing a supported Compute Capability to the compiler, this may be because you are passing the 32-bit triple, `nvptx-nvidia-cuda`, to `-fsycl-targets`. The 32-bit triple does not allow the compilation of target feature builtins and is not officially supported by DPC++. The 64-bit triple, `nvptx64-nvidia-cuda`, is supported by all modern NVIDIA® devices, so it is always recommended.
Compiler Warning: “CUDA version is newer than the latest supported version”
Depending on the CUDA version used with the release, the compiler may output the following warning:
```
clang++: warning: CUDA version is newer than the latest supported version 12.1 [-Wunknown-cuda-version]
```
In most cases this warning can safely be ignored. It simply means that DPC++ may not use some of the latest CUDA features, but it should still work perfectly fine in most scenarios.
Out of resources on kernel launch
- Too Many Resources Requested for Launch: `CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES` indicates that a launch did not occur because it did not have appropriate resources.
- Possible reasons are:
  - The maximum number of work-items (threads in CUDA) for the device has been exceeded.
  - The maximum work-group size (thread-block size in CUDA) for the device has been exceeded.
  - The kernel resources (i.e., registers or shared memory) exceed the device capabilities.
We can verify these possibilities by checking the device capabilities and resolve them by configuring the kernel launch with those limits in mind.
However, the maximum work-group size for a kernel launch is not always the same as the theoretical capability of the device; this is where we need to understand the register usage of our kernel and take it into account. High register pressure combined with large work-groups can lead to an invalid kernel launch due to exceeding hardware limitations, such as the number of available registers. More information can be found in the CUDA documentation table: technical-specifications-per-compute-capability.
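The relevant device limits can be queried from SYCL before configuring a launch. The following is a minimal sketch (it assumes a working DPC++ toolchain and at least one visible SYCL device; it is not taken from the original document):

```cpp
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  sycl::queue q; // default-selected device
  auto dev = q.get_device();

  // Upper bound on work-items per work-group for any kernel on this device.
  // Individual kernels may have a lower effective limit due to register usage.
  std::cout << "max_work_group_size: "
            << dev.get_info<sycl::info::device::max_work_group_size>() << "\n";

  // Per-dimension work-item limits.
  auto sizes = dev.get_info<sycl::info::device::max_work_item_sizes<3>>();
  std::cout << "max_work_item_sizes: " << sizes[0] << " x " << sizes[1]
            << " x " << sizes[2] << "\n";

  // Available local (shared) memory per work-group, in bytes.
  std::cout << "local_mem_size: "
            << dev.get_info<sycl::info::device::local_mem_size>() << "\n";
}
```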
Out of available registers:
The `CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES` error can also be a result of using too many registers per CUDA block.
To detect this problem, a quick check on the number of registers allocated for the kernel by `ptxas` can be performed by specifying the `-Xcuda-ptxas --verbose` option when compiling. This enables verbose mode, which prints code generation statistics, including register usage for the kernel(s) in the binary.
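A complete invocation might look like this sketch (file names and the `sm_75` architecture are placeholders):

```shell
# Ask ptxas for per-kernel code generation statistics, including register use
icpx -fsycl -fsycl-targets=nvptx64-nvidia-cuda \
     -Xsycl-target-backend=nvptx64-nvidia-cuda --cuda-gpu-arch=sm_75 \
     -Xcuda-ptxas --verbose \
     app.cpp -o app
```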
ptxas verbose output:
```
ptxas info : Compiling entry function 'my_kernel' for 'sm_75'
ptxas info : Function properties for my_kernel
    8192 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info : Used 100 registers, 256 bytes cmem, 1512 bytes cmem
```
The DPC++ runtime can, as of recent releases, also detect this case and throw a detailed exception stating that the kernel has exceeded the number of registers available on the hardware, including how many registers the kernel actually uses and the work-group size of the failing launch configuration.
This type of error is mapped to the `PI_ERROR_INVALID_WORK_GROUP_SIZE` error code from DPC++.
```
Exceeded the number of registers available on the hardware.
The number registers per work-group cannot exceed 65536 for this kernel on this device.
The kernel uses 100 registers per work-item for a total of 1024 work-items per work-group.
-54 (PI_ERROR_INVALID_WORK_GROUP_SIZE)
```
Usually, if the kernel has exceeded the number of registers available on the multiprocessor, one option is to lower the work-group size, which effectively reduces the number of threads executed in a CUDA block.
However, if this is not the desired solution, we can also instruct the compiler to lower the register pressure and spill values beyond a certain threshold, which can also result in a successful launch without having to size down the thread blocks. In DPC++, this can be achieved by:
1. Specifying the CUDA architecture or SM/compute capability of the target device, i.e. `-Xsycl-target-backend --cuda-gpu-arch=sm_86` for an NVIDIA GeForce RTX 3060/Ti.
2. Instructing the PTX backend to limit the number of registers used in the kernel. This is done with the `-Xcuda-ptxas --maxrregcount=<N>` option, added to the compile command.
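Combined, the two steps might look like the following sketch (`sm_86` matches the RTX 3060/Ti example; the register cap of 64 and the file names are illustrative placeholders):

```shell
# Cap register usage per thread at 64, spilling the rest
icpx -fsycl -fsycl-targets=nvptx64-nvidia-cuda \
     -Xsycl-target-backend=nvptx64-nvidia-cuda --cuda-gpu-arch=sm_86 \
     -Xcuda-ptxas --maxrregcount=64 \
     app.cpp -o app
```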
A downside to limiting register usage in the kernel via the `-Xcuda-ptxas --maxrregcount` compiler option is that the remaining values may be spilled into DRAM, which may impact performance.
Sub-group size issues in codes ported across platforms/architectures
Consider code that uses the kernel attribute `reqd_sub_group_size` to set a specific sub-group size, and that is then ported to a different platform or executed on a different architecture to the one it was originally written for. In such a case, if the requested sub-group size is not supported by the platform/architecture, a runtime error will be thrown:
```
Sub-group size x is not supported on the device
```
On the CUDA platform only a single sub-group size is supported, hence only a warning is given:
```
CUDA requires sub_group size 32
```
and the runtime will use the sub-group size of 32 instead of the requested
sub-group size. The `reqd_sub_group_size` kernel attribute is designed for
platforms/architectures that support multiple sub-group sizes. Note that some
SYCL code is not portable across different sub-group sizes. For example,
the result of the sub-group collective `reduce_over_group` will depend on the
sub-group size. Users that want to write code that is portable across
platforms/architectures which use different sub-group sizes should either:
Write code in a portable way such that the result does not depend on sub-group size.
Maintain different versions of the sub-group-size-sensitive parts of the code for different platforms/architectures, to account for the different sub-group sizes.
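The first approach can be sketched as follows: instead of hard-coding a sub-group size with `reqd_sub_group_size`, query what the device supports and read the actual size inside the kernel at run-time (a minimal sketch, not from the original document; it assumes a DPC++ toolchain and an available SYCL device):

```cpp
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  sycl::queue q;

  // Query the sub-group sizes the device actually supports, instead of
  // assuming a fixed size via [[sycl::reqd_sub_group_size(N)]].
  auto sizes =
      q.get_device().get_info<sycl::info::device::sub_group_sizes>();
  std::cout << "Supported sub-group sizes:";
  for (auto s : sizes) std::cout << ' ' << s;
  std::cout << '\n';

  q.parallel_for(sycl::nd_range<1>{128, 32}, [=](sycl::nd_item<1> it) {
     // A portable kernel reads the sub-group size at run-time rather than
     // depending on a specific value for correctness.
     auto sg = it.get_sub_group();
     size_t sg_size = sg.get_local_linear_range();
     (void)sg_size; // use sg_size to parameterize sub-group algorithms
   }).wait();
}
```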