Please note that this product is now discontinued and product support has ceased. For more information, please read the blog.

ComputeCpp™ Professional Edition, our Full-Featured SYCL™ Implementation

ComputeCpp is a SYCL 1.2.1 conformant implementation developed by Codeplay®. It compiles SYCL code for a range of platforms, such as Linux® and Windows®, and architectures, including x86_64 and AArch64.

Download Now | Getting Started

Accelerate Your Application with ComputeCpp

ComputeCpp, Codeplay's implementation of the open standard SYCL, enables you to integrate parallel computing into your application and accelerate your code across OpenCL™ devices such as GPUs. Applications that perform large numbers of common operations can gain huge performance improvements by running those operations in parallel on OpenCL devices. For example, the neural networks used in machine learning perform huge numbers of matrix calculations; ComputeCpp can run these operations in parallel, vastly increasing performance and reducing the power consumed by the application.
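To give a flavour of what this looks like in practice, the following is a minimal, illustrative SYCL 1.2.1 sketch, assuming the <CL/sycl.hpp> header that ComputeCpp supplies. It adds two vectors in parallel on whichever OpenCL device the default selector finds; the kernel name and sizes are placeholders, not part of any particular application.

#include <CL/sycl.hpp>
#include <vector>

int main() {
  constexpr size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  // Queue targeting whichever OpenCL device the default selector chooses.
  cl::sycl::queue queue{cl::sycl::default_selector{}};
  {
    // Buffers hand the host data to the SYCL runtime for the device to use.
    cl::sycl::buffer<float, 1> bufA{a.data(), cl::sycl::range<1>{N}};
    cl::sycl::buffer<float, 1> bufB{b.data(), cl::sycl::range<1>{N}};
    cl::sycl::buffer<float, 1> bufC{c.data(), cl::sycl::range<1>{N}};

    queue.submit([&](cl::sycl::handler& cgh) {
      auto A = bufA.get_access<cl::sycl::access::mode::read>(cgh);
      auto B = bufB.get_access<cl::sycl::access::mode::read>(cgh);
      auto C = bufC.get_access<cl::sycl::access::mode::write>(cgh);
      // One work-item per element; the same C++ source is compiled for host and device.
      cgh.parallel_for<class vector_add>(
          cl::sycl::range<1>{N},
          [=](cl::sycl::id<1> i) { C[i] = A[i] + B[i]; });
    });
  } // Leaving this scope destroys the buffers and copies results back to the host vectors.

  return c[0] == 3.0f ? 0 : 1;
}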

With ComputeCpp and SYCL you can write code once and execute it on a range of OpenCL-enabled devices, reducing your development effort. Develop with standard C++ and the SYCL open standard, re-using your existing C++ libraries. ComputeCpp is also building support for the C++17 Parallel STL, enabling parallelized library functions to run on accelerated processors, and it works with a number of frameworks including ParallelSTL and VisionCpp™.
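As a small sketch of that portability (again assuming only standard SYCL 1.2.1 APIs, not anything specific to one application), the same binary can enumerate the OpenCL devices available at run time and pick one to submit work to:

#include <CL/sycl.hpp>
#include <iostream>

int main() {
  // List every SYCL device the runtime can see on this machine.
  for (const auto& dev : cl::sycl::device::get_devices()) {
    std::cout << dev.get_info<cl::sycl::info::device::name>() << " ("
              << (dev.is_gpu() ? "GPU" : dev.is_cpu() ? "CPU" : "other") << ")\n";
  }

  // A queue built from a selector decides where kernels actually run;
  // the application code itself does not change between devices.
  cl::sycl::queue queue{cl::sycl::default_selector{}};
  std::cout << "Kernels will run on: "
            << queue.get_device().get_info<cl::sycl::info::device::name>()
            << std::endl;
  return 0;
}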

ComputeCpp Stack

Who is ComputeCpp for?

  • Portable Parallel Computing Applications

    OpenCL devices such as GPUs can be used to accelerate applications by running operations in parallel. Because ComputeCpp implements the SYCL open standard, developers can write single-source C++ and run their code in parallel across a range of OpenCL devices.

  • Using TensorFlow with ComputeCpp

    The machine learning framework TensorFlow performs large numbers of vector and matrix operations, and its performance and power consumption can be vastly improved through parallel computing. ComputeCpp enables developers to target OpenCL devices such as GPUs using modern C++ code.

  • Artificial Intelligence Applications

    Complex image processing operations can be accelerated using parallel computing. ComputeCpp provides high-level programmability for custom vision processors, enabling additional custom features on top of existing optimized hardware functions.

  • Complex Mathematical Applications

    The Eigen library is one of the most popular C++ libraries for linear algebra, matrix and vector operations and related algorithms. Eigen is integrated with ComputeCpp, enabling developers to run these operations on OpenCL devices and accelerate their applications by taking advantage of parallel architectures; a minimal kernel sketch follows this list.
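As a rough illustration of the kind of dense linear-algebra work described above (a hand-written sketch, not Eigen's actual SYCL back end), the following SYCL 1.2.1 kernel computes a naive N x N matrix multiply with one work-item per output element; the function and kernel names are illustrative only.

#include <CL/sycl.hpp>
#include <vector>

// Naive dense matrix multiply C = A * B, with A, B and C stored row-major
// as N*N element vectors. One work-item computes one element of C.
void matmul(const std::vector<float>& A, const std::vector<float>& B,
            std::vector<float>& C, size_t N, cl::sycl::queue& queue) {
  cl::sycl::range<2> size{N, N};
  cl::sycl::buffer<float, 2> bufA{A.data(), size};
  cl::sycl::buffer<float, 2> bufB{B.data(), size};
  cl::sycl::buffer<float, 2> bufC{C.data(), size};

  queue.submit([&](cl::sycl::handler& cgh) {
    auto a = bufA.get_access<cl::sycl::access::mode::read>(cgh);
    auto b = bufB.get_access<cl::sycl::access::mode::read>(cgh);
    auto c = bufC.get_access<cl::sycl::access::mode::write>(cgh);
    cgh.parallel_for<class matmul_kernel>(size, [=](cl::sycl::item<2> item) {
      const size_t row = item.get_id(0);
      const size_t col = item.get_id(1);
      float sum = 0.0f;
      for (size_t k = 0; k < N; ++k) {
        sum += a[cl::sycl::id<2>{row, k}] * b[cl::sycl::id<2>{k, col}];
      }
      c[item.get_id()] = sum;
    });
  });
  // The destructor of bufC, at the end of this function, copies the result
  // back into the host vector C.
}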

Who is Using ComputeCpp?

  • University of Münster

    living.knowledge

  • WIGNER Fizikai Kutatóközpont

Sokszínű Fizika

  • ONERA

    The French Aerospace Lab

  • Stellar Group

    Shaping a Scalable Future

  • UWS

    University of the West of Scotland

oneAPI

oneAPI is a cross-industry, open, standards-based unified programming model that delivers a common developer experience across accelerator architectures, for faster application performance, more productivity, and greater innovation.
