
TensorFlow ARM Setup


Codeplay and Arm have collaborated to bring TensorFlow support to Arm Mali™ via the SYCL™ and OpenCL™ open standards for heterogeneous computing. This guide describes how to build and run TensorFlow on an Arm Mali device.

If you would like to follow a more generic guide, we also detail how to build TensorFlow with SYCL; those instructions can potentially be adapted to platforms not listed in our documentation.

The supported platform for this release is the HiKey 960 development board, running Debian 9. For other platforms, please adapt the instructions below.

Configuration management

These instructions relate to the following versions:

  • TensorFlow: master
  • ComputeCpp: 1.1.1
  • CPU: 32- or 64-bit ARM CPU
  • GPUs: Arm Mali G71 MP8


  • For older or newer versions of TensorFlow, please contact Codeplay for updated build documentation.
  • If you are interested in the latest features you may try our experimental branch.
  • GPUs other than those listed above may work, but Codeplay does not support them at this time.


Pre-requisites

  • A development PC with Ubuntu 16.04.3 64-bit installed.
  • A HiKey 960 development board.
  • Please contact Arm to obtain an Arm Mali driver with support for OpenCL 1.2 with SPIR-V.
  • Install ComputeCpp (Select "Ubuntu 16.04" as the Operating System even if you are using another Linux distribution and "arm64" as the Architecture.)

Build TensorFlow for ARM Mali

The following steps have been verified on a clean installation of Ubuntu 16.04.3 64-bit.

Set up the environment for the ARM architecture that you want to target

  • For 32-bit ARM CPUs:
export TARGET_ARCH=armhf
  • For 64-bit ARM CPUs:
export TARGET_ARCH=arm64

Install dependency packages

sudo dpkg --add-architecture $TARGET_ARCH
echo "deb [arch=$TARGET_ARCH] http://ports.ubuntu.com/ubuntu-ports xenial main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list.d/arm.list
echo "deb [arch=$TARGET_ARCH] http://ports.ubuntu.com/ubuntu-ports xenial-updates main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list.d/arm.list
echo "deb [arch=$TARGET_ARCH] http://ports.ubuntu.com/ubuntu-ports xenial-security main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list.d/arm.list
echo "deb [arch=$TARGET_ARCH] http://ports.ubuntu.com/ubuntu-ports xenial-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list.d/arm.list
sudo sed -i 's#deb http#deb [arch=amd64] http#' /etc/apt/sources.list
sudo sed -i 's#deb-src http#deb-src [arch=amd64] http#' /etc/apt/sources.list
sudo apt-get update
sudo apt-get install -y git cmake libpython-all-dev:$TARGET_ARCH opencl-headers openjdk-8-jdk python python-pip zlib1g-dev:$TARGET_ARCH
pip install --user numpy==1.14.5 wheel==0.31.1 six==1.11.0 mock==2.0.0 enum34==1.1.6

Install toolchains

For 32-bit ARM CPUs:

  • Download the following ComputeCpp version: Ubuntu 14.04 > arm32 > computecpp-ce-1.1.1-ubuntu.14.04-arm32.tar.gz
tar -xf ComputeCpp-CE-1.1.1-Ubuntu.16.04-64bit.tar.gz
tar -xf ComputeCpp-CE-1.1.1-Ubuntu.14.04-ARM32.tar.gz
cp ComputeCpp-CE-1.1.1-Ubuntu-16.04-x86_64/bin/compute++ ComputeCpp-CE-1.1.1-Ubuntu-14.04-ARM_32/bin
tar -xf gcc-linaro-6.3.1-2017.05-x86_64_arm-linux-gnueabihf.tar.xz
mkdir -p $HOME/gcc-linaro-6.3.1-2017.05-x86_64_arm-linux-gnueabihf/arm-linux-gnueabihf/libc/usr/include/arm-linux-gnueabihf
ln -s /usr/include/arm-linux-gnueabihf/python2.7/ $HOME/gcc-linaro-6.3.1-2017.05-x86_64_arm-linux-gnueabihf/arm-linux-gnueabihf/libc/usr/include/arm-linux-gnueabihf
export COMPUTECPP_TOOLKIT_PATH=$HOME/ComputeCpp-CE-1.1.1-Ubuntu-14.04-ARM_32
export TF_SYCL_CROSS_TOOLCHAIN=$HOME/gcc-linaro-6.3.1-2017.05-x86_64_arm-linux-gnueabihf
export TF_SYCL_CROSS_TOOLCHAIN_NAME=arm-linux-gnueabihf
export CC_OPT_FLAGS="-march=armv7"

For 64-bit ARM CPUs:

  • Check the version of GCC that is installed on your development board (not your development PC):
gcc -v

If the GCC version is earlier than 5.0, replace "Ubuntu 16.04" with "Ubuntu 14.04" in all subsequent steps.

  • Download the following ComputeCpp version: Ubuntu 16.04 > arm64 > computecpp-ce-1.1.1-ubuntu.16.04-arm64.tar.gz

tar -xf ComputeCpp-CE-1.1.1-Ubuntu.16.04-64bit.tar.gz
tar -xf ComputeCpp-CE-1.1.1-Ubuntu.16.04-ARM64.tar.gz
cp ComputeCpp-CE-1.1.1-Ubuntu-16.04-x86_64/bin/compute++ ComputeCpp-CE-1.1.1-Ubuntu-16.04-ARM_64/bin
tar -xf gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu.tar.xz
mkdir -p $HOME/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu/aarch64-linux-gnu/libc/usr/include/aarch64-linux-gnu
ln -s /usr/include/aarch64-linux-gnu/python2.7/ $HOME/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu/aarch64-linux-gnu/libc/usr/include/aarch64-linux-gnu/
export COMPUTECPP_TOOLKIT_PATH=$HOME/ComputeCpp-CE-1.1.1-Ubuntu-16.04-ARM_64
export TF_SYCL_CROSS_TOOLCHAIN=$HOME/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu
export TF_SYCL_CROSS_TOOLCHAIN_NAME=aarch64-linux-gnu
export CC_OPT_FLAGS="-march=armv8-a"

Install Bazel

sudo apt install -y ./bazel_0.16.0-linux-x86_64.deb
bazel version

Check that the bazel version output from the above command is 0.16.0.

Build TensorFlow

git clone
cd tensorflow
export TMPDIR=~/tensorflow_temp
mkdir -p $TMPDIR

export PYTHON_BIN_PATH=/usr/bin/python
export TF_NEED_MKL=0
export TF_NEED_GCP=0
export TF_NEED_HDFS=0
export TF_ENABLE_XLA=0
export TF_NEED_CUDA=0
export TF_NEED_VERBS=0
export TF_NEED_MPI=0
export TF_NEED_GDR=0
export TF_NEED_AWS=0
export TF_NEED_S3=0
export TF_NEED_KAFKA=0
yes "" | ./configure


  • The possible values for TF_SYCL_BITCODE_TARGET are spir32, spir64, spirv32 or spirv64 depending on which intermediate language your OpenCL library supports. Check the device properties output by the clinfo command:
  • In the "Extensions" field, if cl_khr_spir is present, use spirXX, or if cl_khr_il_program is present, use spirvXX.
  • Substitute "XX" above for the value of the "Address bits" field. Note that issues can arise if the device's "Address bits" value does not match that of the host CPU e.g. a 64-bit CPU and 32-bit GPU.
  • TF_SYCL_USE_LOCAL_MEM is set to "0" to avoid generating kernels that use local memory, which reduces the online compilation time. In general, you can set it to "1" for better performance if the "Local memory type" field of clinfo is "Local" and the "Local memory size" is 4KiB or greater. If it is unset, both variants of each kernel are generated and the best one is picked at runtime.
  • TF_SYCL_USE_SERIAL_MEMOP must be set to "1" for this device. Setting it to "0" generates slightly more efficient kernels but is not supported with this device.
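Taken together, and assuming the HiKey 960's Mali-G71 as described in this guide (a 64-bit device whose driver offers cl_khr_il_program rather than cl_khr_spir), the notes above would translate into the following settings; verify each value against your own clinfo output before building:

```shell
# Settings implied by the notes above for the HiKey 960 / Mali-G71.
# Check "Address bits", the extensions list and the "Local memory"
# fields in your own clinfo output before adopting these values.
export TF_SYCL_BITCODE_TARGET=spirv64
export TF_SYCL_USE_LOCAL_MEM=0
export TF_SYCL_USE_SERIAL_MEMOP=1
```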
bazel --output_user_root=$TMPDIR build --config=sycl_arm -c opt --verbose_failures --copt=-DEIGEN_DONT_VECTORIZE_SYCL --copt=-Wno-c++11-narrowing //tensorflow/tools/pip_package:build_pip_package

Note the EIGEN_DONT_VECTORIZE_SYCL flag is an optimization for HiKey 960. If you are using a different platform, you will most likely want to remove this option. If you are using an Ubuntu 14.04 version of ComputeCpp, add --copt=-D_GLIBCXX_USE_CXX11_ABI=0 before //tensorflow/tools/pip_package:build_pip_package in the above command.
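For the Ubuntu 14.04 ComputeCpp case, the complete command would then read as follows (the same flags as above, plus the ABI option):

```shell
# Same build as above, with the pre-C++11 ABI flag required by the
# Ubuntu 14.04 builds of ComputeCpp
bazel --output_user_root=$TMPDIR build --config=sycl_arm -c opt --verbose_failures \
    --copt=-DEIGEN_DONT_VECTORIZE_SYCL --copt=-Wno-c++11-narrowing \
    --copt=-D_GLIBCXX_USE_CXX11_ABI=0 \
    //tensorflow/tools/pip_package:build_pip_package
```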

bazel-bin/tensorflow/tools/pip_package/build_pip_package $TMPDIR

Rename the wheel file for the target architecture:

  • For 32-bit ARM CPUs:
mv $TMPDIR/tensorflow-1.9.0rc0-cp27-cp27mu-linux_x86_64.whl $TMPDIR/tensorflow-1.9.0rc0-cp27-cp27mu-linux_arm.whl
  • For 64-bit ARM CPUs:
mv $TMPDIR/tensorflow-1.9.0rc0-cp27-cp27mu-linux_x86_64.whl $TMPDIR/tensorflow-1.9.0rc0-cp27-cp27mu-linux_aarch64.whl

Set up the development board

  • Install the operating system and Arm Mali driver according to Arm's instructions.
  • Copy ComputeCpp-CE-1.1.1-Ubuntu.16.04-ARM64.tar.gz and $TMPDIR/tensorflow-1.9.0rc0-cp27-cp27mu-linux_aarch64.whl to your device e.g. using the scp command.

All of the following commands should be run on the development board. Depending on how your development board's disk space has been partitioned, you may have to manage the available space carefully - the following requires at least 1.2GB free.

Install dependency packages

apt-get -y install clinfo git python-pip
# The following apt packages are required to build scipy from source but can be removed later
apt-get -y install gcc gfortran python-dev libopenblas-dev liblapack-dev cython

Many tests and benchmarks require more pip packages than the minimal set listed in the pre-requisites. The versions listed below are known to work with this build of TensorFlow:

pip install -U --user numpy==1.14.5 wheel==0.31.1 six==1.11.0 mock==2.0.0 enum34==1.1.6 portpicker==1.2.0
# Cython is required to build the next packages from source but can be removed later
pip install -U --user cython==0.29.1
pip install -U --user scipy==1.1.0
pip install -U --user scikit-learn==0.20.2
pip install -U --user --no-deps sklearn

Verify that the OpenCL installation is correct:

clinfo

  • If any errors are present, check the installation of the OpenCL driver.
  • It is important to have this step working correctly, otherwise you are likely to run into errors later when running TensorFlow.
  • For example, if the OpenCL driver cannot be found, ensure that LD_LIBRARY_PATH has been set correctly.
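As a sketch of that last point: if clinfo reports no platforms at all, the directory containing the Mali OpenCL driver library may simply be missing from the loader path. The path below is illustrative only, not the actual driver location on your board:

```shell
# Illustrative path -- point this at the directory that actually
# contains the Arm Mali OpenCL driver library on your board
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/aarch64-linux-gnu
```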

Install TensorFlow

pip install --user tensorflow-1.9.0rc0-cp27-cp27mu-linux_aarch64.whl

Set up ComputeCpp

tar -xf ComputeCpp-CE-1.1.1-Ubuntu.16.04-ARM64.tar.gz
export LD_LIBRARY_PATH+=:$HOME/ComputeCpp-CE-1.1.1-Ubuntu-16.04-ARM_64/lib

Run computecpp_info to check that the device is visible to ComputeCpp:

$HOME/ComputeCpp-CE-1.1.1-Ubuntu-16.04-ARM_64/bin/computecpp_info

The output should show that the Mali-G71 OpenCL driver has been found, and that it does not support SPIR - that is expected, as this build uses SPIR-V (see TF_SYCL_BITCODE_TARGET above).

Run benchmarks

To verify the installation, you can execute some of the standard TensorFlow benchmarks. The example below shows how to run AlexNet:

git clone
cd benchmarks
git checkout f5d85aef2851881001130b28385795bc4c59fa38
python scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py --num_batches=10 --local_parameter_device=sycl --device=sycl --batch_size=1 --forward_only=true --model=alexnet --data_format=NHWC

Setting a higher batch_size will increase GPU usage and give more inferences per second, but this is not always possible in real-world applications. You may see warnings about deprecated functions; they can be safely ignored.
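The number the benchmark reports is images (inferences) per second - roughly batch_size × num_batches divided by the elapsed time for a forward-only run - which is why a larger batch raises throughput. A toy illustration with made-up timings:

```python
# Hypothetical numbers, purely to illustrate the images/sec arithmetic;
# the benchmark measures and reports this figure itself.
batch_size = 1
num_batches = 10
elapsed_seconds = 5.0  # made-up wall-clock time for the whole run
images_per_sec = batch_size * num_batches / elapsed_seconds
print(images_per_sec)  # -> 2.0
```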

