TensorFlow™ AMD Setup Guide

This guide was created for versions: v0.1.0 - v0.9.1

This guide will explain how to set up your machine to run the OpenCL version of TensorFlow using ComputeCpp, a SYCL implementation. ComputeCpp allows you to run your TensorFlow application using OpenCL devices to enable parallel computation.


TensorFlow and ComputeCpp have been tested with the AMD Radeon R9 Nano and AMD FirePro W8100 on Ubuntu 14.04 and 16.04 with the default kernel versions.

You'll need the following installed on your machine; step-by-step instructions for installing these on Ubuntu are available.

  • JDK 8
  • bazel
  • gcc or clang
  • build-essential
  • git
  • clinfo
  • python
  • OpenCL drivers
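Before continuing, you can confirm that these tools are on your PATH with a short script. This is an optional sketch of our own (the `check_tools` helper is not part of TensorFlow), and it assumes Python 3.3+ for `shutil.which`; on the Python 2 common in this era, `distutils.spawn.find_executable` is the equivalent.

```python
import shutil

def check_tools(tools):
    """Return the subset of `tools` that cannot be found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

missing = check_tools(["bazel", "gcc", "git", "clinfo", "python"])
if missing:
    print("Missing prerequisites: " + ", ".join(missing))
else:
    print("All prerequisites found")
```

If anything is reported missing, install it before moving on to the next steps.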

We are currently only testing specific versions of Linux (Ubuntu 14.04 and 16.04) with ComputeCpp and TensorFlow. This doesn't mean other platforms and hardware don't work. If you'd like to help us test other Linux versions and platforms, please get in touch and we'll help you set up your hardware.

Checking your OpenCL setup

This guide assumes you have already installed the OpenCL drivers for your machine; the following steps verify that everything is set up correctly.

To get information about your OpenCL setup and validate that it is correct, run the following command in a Terminal:

> clinfo

The output should look something like this:

Number of platforms: 1
Platform Profile: FULL_PROFILE
Platform Version: OpenCL 2.0 AMD-APP (1912.5)
Platform Name: AMD Accelerated Parallel Processing
Platform Vendor: Advanced Micro Devices, Inc.
Platform Extensions: cl_khr_icd cl_amd_event_callback cl_amd_offline_devices
Platform Name: AMD Accelerated Parallel Processing
Number of devices: 2
Vendor ID: 1002h
Compiler available: Yes
Execution capabilities:
Execute OpenCL kernels: Yes
Platform ID: 0x7f8a310c4a18
Name: Fiji
Vendor: Advanced Micro Devices, Inc.
Device OpenCL C version: OpenCL C 2.0
Driver version: 1912.5 (VM)
Version: OpenCL 2.0 AMD-APP (1912.5)
Extensions: cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_khr_gl_depth_images cl_ext_atomic_counters_32 cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_image2d_from_buffer cl_khr_spir cl_khr_subgroups cl_khr_gl_event cl_khr_depth_images cl_khr_mipmap_image cl_khr_mipmap_image_writes

Note: If you see "ICD loader reports no usable platforms" or "Number of platforms 0" then your OpenCL device and drivers are not set up.
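If you want to perform this check programmatically, a small script can pull the platform count out of the clinfo output. This is a sketch of our own (`parse_platform_count` is a hypothetical helper, not a real tool), and it assumes the "Number of platforms" line format shown above.

```python
import re

def parse_platform_count(clinfo_output):
    """Return the OpenCL platform count reported by clinfo, or 0 if absent."""
    match = re.search(r"Number of platforms:?\s+(\d+)", clinfo_output)
    return int(match.group(1)) if match else 0

sample = "Number of platforms: 1\nPlatform Name: AMD Accelerated Parallel Processing"
print(parse_platform_count(sample))  # prints 1
```

A result of 0 corresponds to the failure case described in the note above.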

Setting up ComputeCpp for TensorFlow

Download the ComputeCpp archive for your Linux version and extract it to /usr/local/computecpp:

> tar -xvf ComputeCpp-CE-<>.tar.gz
> sudo mv <>/ /usr/local/computecpp

Checking your ComputeCpp setup

To check that your ComputeCpp installation is set up correctly and can find the OpenCL devices on your machine run the following command in a Terminal window:

> /usr/local/computecpp/bin/computecpp_info

The expected output looks something like this:

ComputeCpp Info (CE 0.2.0)
Toolchain information:

GLIBCXX: 20150426
This version of libstdc++ is supported.
Device Info:
Discovered 1 devices matching:
  platform    : 
  device type : 
Device 0:

  Device is supported                     : YES - Tested internally by Codeplay Software Ltd.
  CL_DEVICE_NAME                          : Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
  CL_DEVICE_VENDOR                        : Intel(R) Corporation
  CL_DRIVER_VERSION                       :
  CL_DEVICE_TYPE                          : CL_DEVICE_TYPE_CPU

Note: If you see "error while loading shared libraries: libOpenCL.so" then you have not installed the OpenCL drivers needed to run ComputeCpp.
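You can also check from Python whether libOpenCL.so can be located at all. This optional sketch uses the standard-library ctypes module; note it only tells you the library is findable by the system linker, not that the drivers actually work.

```python
from ctypes.util import find_library

# find_library returns a library name (e.g. "libOpenCL.so.1") if the
# system linker can locate it, or None if it cannot.
opencl = find_library("OpenCL")
if opencl is None:
    print("libOpenCL.so not found - the OpenCL drivers are not installed")
else:
    print("Found OpenCL library: " + str(opencl))
```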

Get the source code for TensorFlow

In order to run TensorFlow with OpenCL it is necessary to build it from source.

Since this implementation is under development, the source code is available from several places while we propagate changes to the main TensorFlow branch.

First, clone the development branch of TensorFlow:

> git clone https://github.com/codeplaysoftware/tensorflow/

Now change to the TensorFlow source code folder using the following command in a Terminal:

> cd tensorflow

Before building the code, check out the correct branch:

> git checkout eigen_sycl

There is also a branch called "eigen_mehdi" that is where active development is happening. This branch will have more functionality but is likely to also be less stable.

Configuring TensorFlow

The configure script sets various options used when building the TensorFlow code. Some of these options enable ComputeCpp and OpenCL for TensorFlow.

In a Terminal enter the following command:

> ./configure

You'll be asked various questions; answer them as follows in order to use ComputeCpp with TensorFlow.

Do you wish to build TensorFlow with OpenCL support? [y/N] y

Please specify which C++ compiler should be used as the host C++ compiler. [Default is /usr/bin/clang++-3.6 ]: In this field, enter the path of your C++ compiler; this is usually /usr/bin/g++
Please specify which C compiler should be used as the host C compiler. [Default is /usr/bin/clang-3.6 ]: In this field, enter the path of your C compiler; this is usually /usr/bin/gcc

Please specify the location where ComputeCpp for SYCL 1.2 is installed. [Default is /usr/local/computecpp]: /usr/local/computecpp
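If you prefer to script the build, the configure step of this era can also read its answers from environment variables instead of interactive prompts. The variable names below (TF_NEED_OPENCL, HOST_CXX_COMPILER, HOST_C_COMPILER, COMPUTECPP_TOOLKIT_PATH) are an assumption based on the configure script of this TensorFlow branch and may differ between versions, so check your ./configure before relying on them.

```shell
# Assumed variable names - verify against your branch's ./configure
export TF_NEED_OPENCL=1
export HOST_CXX_COMPILER=/usr/bin/g++
export HOST_C_COMPILER=/usr/bin/gcc
export COMPUTECPP_TOOLKIT_PATH=/usr/local/computecpp
./configure
```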

Building TensorFlow source with ComputeCpp

Now that TensorFlow has been configured to use ComputeCpp you can build the TensorFlow source code.

In a Terminal enter the following command:

> bazel build -c opt --config=sycl //tensorflow/tools/pip_package:build_pip_package
> bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
> sudo pip install /tmp/tensorflow_pkg/tensorflow-1.0.0-cp27-none-linux_x86_64.whl

Note: The "sudo pip install" command takes the .whl built for your version of TensorFlow and Python, so the filename is likely to differ from the one above.
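As an illustration of how that filename is structured, the sketch below splits a wheel name into the five standard fields of the wheel format (distribution, version, Python tag, ABI tag, platform tag). `parse_wheel_name` is our own helper, not part of pip, and it assumes none of the fields themselves contain a hyphen.

```python
def parse_wheel_name(filename):
    """Split a wheel filename into its five standard fields."""
    stem = filename[:-len(".whl")]
    distribution, version, python_tag, abi_tag, platform_tag = stem.split("-")
    return {
        "distribution": distribution,
        "version": version,
        "python_tag": python_tag,   # e.g. cp27 means CPython 2.7
        "abi_tag": abi_tag,
        "platform_tag": platform_tag,
    }

info = parse_wheel_name("tensorflow-1.0.0-cp27-none-linux_x86_64.whl")
print(info["version"] + " " + info["python_tag"])  # prints "1.0.0 cp27"
```

The python tag must match the interpreter you run `pip install` with, which is why the filename on your machine may differ.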

Setting up the path

When running TensorFlow, it needs to know where to find the ComputeCpp library files:

> export LD_LIBRARY_PATH=/usr/local/computecpp/lib
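The export above only applies to the current shell session. To make it permanent, a common convention is to append it to your ~/.bashrc (adjust for your shell of choice):

```shell
# Persist the ComputeCpp library path for future shell sessions
echo 'export LD_LIBRARY_PATH=/usr/local/computecpp/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
```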

Running TensorFlow MNIST

MNIST is a simple computer vision dataset. You can find out more about MNIST on the TensorFlow website.

Now that TensorFlow has been built using ComputeCpp it is possible to run the MNIST tests using ComputeCpp and an OpenCL device.

Change to the parent directory of the tensorflow source folder.

> cd ..

Clone the TensorFlow models repository and run the convolutional sample:

> git clone http://github.com/tensorflow/models
> cd models/tutorials/image/mnist
> python convolutional.py

This will train the convolutional model on the MNIST dataset.

Running TensorFlow with ImageNet

The ImageNet tutorial sample in the models repository is a model that is pre-trained and can be used to make predictions on what an image contains.

Change to the "imagenet" folder in the same "tutorials" folder as the "mnist" one.

> cd ../imagenet

Now you can use a pre-trained model with images.

Enter the following command to run the Python script with the default image.

> python classify_image.py

During execution the terminal will show output that confirms the use of OpenCL.

The output for the default image will show the following in the terminal.

giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca (score = 0.89107)
indri, indris, Indri indri, Indri brevicaudatus (score = 0.0079)
lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens (score = 0.00296)

It is also possible to execute the script with a specific image using the --image_file flag.

For example, if you had a file called cat.jpg in the "imagenet" folder, the command would be:

> python classify_image.py --image_file cat.jpg