I installed GPU-enabled TensorFlow from source on Ubuntu Server 16.04 LTS with CUDA 8 and a GeForce GTX 1080 GPU, but the steps should also work on Ubuntu Desktop 16.04 LTS.
In this tutorial I will walk through the process of building the latest TensorFlow from source for Ubuntu Server 16.04. TensorFlow now supports CUDA 8.0 & cuDNN 5.1, so you can also use the pip packages from their website for a much easier install.
In order to use TensorFlow with GPU support you must have an NVIDIA graphics card with a minimum compute capability of 3.0.
Getting started, I am going to assume you know some of the basics of using a terminal in Linux. (Check this post for commonly used Linux commands.)
1: Install Required Packages
Open a terminal by pressing Ctrl + Alt + T.
(Because this is Ubuntu Server 16.04, you need to install the required packages below; if you are on Ubuntu Desktop 16.04, most of these libraries already come with the OS installation.)
Paste each line one at a time (without the $) using Shift + Ctrl + V
$ sudo apt-get install openjdk-8-jdk git python-dev python3-dev python-numpy python3-numpy build-essential python-pip python3-pip python-virtualenv swig python-wheel libcurl3-dev
2: Update & Install NVIDIA Drivers
Note that if you have a monitor connected to your server, be sure to disconnect it before you start to install the NVIDIA drivers. Otherwise, it may cause trouble when you reboot your server after you install your NVIDIA drivers. You can reconnect your monitor after you successfully install the NVIDIA drivers.
You must also have the 367 (or later) NVIDIA drivers installed; this can easily be done from Ubuntu's built-in Additional Drivers tool after you update your driver packages. (You can check the latest driver version for your GPU on the NVIDIA downloads page; for example, mine is 375.)
$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt update
$ sudo apt-get install nvidia-375
(Note: use the following command if you encounter this error “sudo: add-apt-repository: command not found”)
$ sudo apt-get install software-properties-common
Once the driver is installed, restart your computer. You can use the command below to reboot the server from the command line.
$ sudo reboot -h now
If you experience any trouble booting Linux or logging in, try disabling fast boot and secure boot in your BIOS and modifying your GRUB boot options to enable nomodeset.
You can use the following command to get various diagnostics of the GTX 1080.
$ sudo nvidia-smi
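If you just want to confirm the driver version matches what you installed, nvidia-smi can also print selected fields (a minimal sketch, assuming the query options available in recent driver releases):
$ nvidia-smi --query-gpu=name,driver_version --format=csv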

3: Install NVIDIA CUDA Toolkit 8.0
Skip this step if you are not installing with GPU support.
(Note: If you have an older version of CUDA and cuDNN installed, see the post "How to uninstall CUDA Toolkit and cuDNN under Linux?" (02/16/2017) (pdf) for uninstallation instructions.)
(If you need to use the command line to transfer files from your client computer to your server, refer to the following scp commands.)
File Transfer: getting files to/from your Ubuntu server
To copy a file:
scp -p file_name username@yourserver_hostname:destination/directory
To copy a full directory tree:
scp -pr dir_name username@yourserver_hostname:destination/directory
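For example, copying the CUDA installer you are about to download from your client machine to the server might look like this (the username, hostname and destination directory are placeholders, not values from this setup):
$ scp -p cuda_8.0.44_linux.run username@yourserver_hostname:~/Downloads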
To install the NVIDIA CUDA Toolkit, download the base installation .run file from the NVIDIA website (download the .run file, NOT the .deb file!).

$ cd ~/Downloads # or directory to where you downloaded file
$ sudo sh cuda_8.0.44_linux.run # hold s to skip
This will install cuda into: /usr/local/cuda-8.0
MAKE SURE YOU SAY NO TO INSTALLING NVIDIA DRIVERS! (Very important: if you answer yes, the 375 driver you installed in step 2 will be overwritten.)
Also make sure you select yes to creating a symbolic link to your cuda directory.
(FYI, the following are the questions you will be asked.)
The following contains specific license terms and conditions
for four separate NVIDIA products. By accepting this
agreement, you agree to comply with all the terms and
conditions applicable to the specific product(s) included
herein.
Do you accept the previously read EULA?
accept/decline/quit: accept
Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 361.62?
(y)es/(n)o/(q)uit: n
Install the CUDA 8.0 Toolkit?
(y)es/(n)o/(q)uit: y
Enter Toolkit Location
[ default is /usr/local/cuda-8.0 ]:
Do you want to install a symbolic link at /usr/local/cuda?
(y)es/(n)o/(q)uit: y
Install the CUDA 8.0 Samples?
(y)es/(n)o/(q)uit: y
Enter CUDA Samples Location
[ default is /home/liping ]:
Installing the CUDA Toolkit in /usr/local/cuda-8.0 …
Installing the CUDA Samples in /home/liping …
Copying samples to /home/liping/NVIDIA_CUDA-8.0_Samples now…
Finished copying samples.
= Summary =
===========
Driver: Not Selected
Toolkit: Installed in /usr/local/cuda-8.0
Samples: Installed in /home/liping, but missing recommended libraries
Please make sure that
- PATH includes /usr/local/cuda-8.0/bin
- LD_LIBRARY_PATH includes /usr/local/cuda-8.0/lib64, or, add /usr/local/cuda-8.0/lib64 to /etc/ld.so.conf and run ldconfig as root
To uninstall the CUDA Toolkit, run the uninstall script in /usr/local/cuda-8.0/bin
Please see CUDA_Installation_Guide_Linux.pdf in /usr/local/cuda-8.0/doc/pdf for detailed information on setting up CUDA.
***WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 361.00 is required for CUDA 8.0 functionality to work.
To install the driver using this installer, run the following command, replacing <CudaInstaller> with the name of this run file:
sudo <CudaInstaller>.run -silent -driver
Logfile is /tmp/cuda_install_7169.log
4: Install NVIDIA cuDNN
Once the CUDA Toolkit is installed, download cuDNN v5.1 for CUDA 8.0 from the NVIDIA website (note that you will be asked to register for an NVIDIA Developer account in order to download), then extract it and copy the files into /usr/local/cuda via:
$ sudo tar -xzvf cudnn-8.0-linux-x64-v5.1.tgz
$ sudo cp cuda/include/cudnn.h /usr/local/cuda/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
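As an optional sanity check (not part of the official instructions), you can confirm the header and libraries ended up where TensorFlow will look for them:
$ ls /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*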
Then update your ~/.bashrc file:
$ nano ~/.bashrc
This will open your .bashrc file in a text editor. Scroll to the bottom and add these lines:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda
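The installer summary above also recommends that PATH include /usr/local/cuda-8.0/bin. It is not strictly required for the rest of this tutorial, but if you want nvcc on your path you can add this line as well (assuming you created the /usr/local/cuda symbolic link in step 3):
export PATH="$PATH:/usr/local/cuda/bin"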
Once you save and close the text file you can return to your original terminal and type this command to reload your .bashrc file:
$ source ~/.bashrc
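If you added the optional PATH line above, you can then confirm the toolkit is visible:
$ nvcc --version # should report Cuda compilation tools, release 8.0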
5: Install Bazel
Instructions also on Bazel website
$ echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
$ curl https://storage.googleapis.com/bazel-apt/doc/apt-key.pub.gpg | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install bazel
$ sudo apt-get upgrade bazel
6: Clone TensorFlow
$ cd ~
$ git clone https://github.com/tensorflow/tensorflow
7: Configure TensorFlow Installation
$ cd ~/tensorflow
$ ./configure
Use defaults by pressing enter for all except:
Please specify the location of python. [Default is /usr/bin/python]:
For Python 2 use the default, or if you wish to build for Python 3 enter:
$ /usr/bin/python3.5
Please input the desired Python library path to use. Default is [/usr/local/lib/python2.7/dist-packages]:
For Python 2 use the default, or if you wish to build for Python 3 enter:
$ /usr/local/lib/python3.5/dist-packages
Unless you have a Radeon graphics card you can say no to OpenCL support. (Has anyone tested this? Ping me if so!)
Please specify the Cuda SDK version you want to use, e.g. 7.0. [Leave empty to use system default]:
$ 8.0
Please specify the Cudnn version you want to use. [Leave empty to use system default]:
$ 5
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: “3.5,5.2”]: 5.2,6.1
……….
INFO: Starting clean (this may take a while). Consider using --expunge_async if the clean takes more than several minutes.
………
INFO: All external dependencies fetched successfully.
Configuration finished
If all was done correctly you should see:
INFO: All external dependencies fetched successfully.
Configuration finished.
8: Build TensorFlow
Warning: this build is resource intensive. I recommend having at least 8GB of memory.
(Note that your current path in the terminal should be ~/tensorflow.)
If you want to build TensorFlow with GPU support enter (Note that the command should be one line):
$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
For CPU only enter:
$ bazel build -c opt //tensorflow/tools/pip_package:build_pip_package
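If the build runs out of memory on a machine close to the 8GB minimum, one option (my suggestion, not something the TensorFlow docs require) is to limit Bazel's parallelism with the --jobs flag, for example:
$ bazel build -c opt --config=cuda --jobs=4 //tensorflow/tools/pip_package:build_pip_package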
9: Build & Install Pip Package
(Note that your current path in the terminal should be ~/tensorflow.)
This will build the pip package required for installing TensorFlow and place it in ~/tensorflow_pkg [you can change this directory to whichever one you like]:
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package ~/tensorflow_pkg
Remember that, at any time, you can manually force the project to be reconfigured (run ./configure from step 7 above) and rebuilt from scratch by emptying the directory ~/tensorflow_pkg.
Now cd into the directory where you built your TensorFlow pip package (in my case, ~/tensorflow_pkg), then issue one of the following commands depending on whether you are using Python 2 or Python 3.
To install using Python 3 (remove sudo if using a virtualenv):
$ sudo pip3 install tensorflow-0.12.1-cp35-cp35m-linux_x86_64.whl # a Python 3.5 build produces a cp35 wheel; the exact filename may differ
# tip: after you type tensorflow, you can hit Tab on your keyboard to autofill the name of the .whl file you just built
For Python 2 (remove sudo if using a virtualenv)
$ sudo pip install tensorflow-0.12.1-cp27-cp27mu-linux_x86_64.whl
# tip: after you type tensorflow, you can hit Tab on your keyboard to autofill the name of the .whl file you just built
Note that if you encounter this error:
The directory '/home/youraccountname/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Change the command above to
sudo -H pip install tensorflow-0.12.1-cp27-cp27mu-linux_x86_64.whl
If you see this warning:
You are using pip version 8.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
I would suggest just ignoring it; sometimes upgrading pip causes trouble because of dependencies.
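Before moving on to the full test in step 10, a quick one-line check that the wheel installed cleanly never hurts (the version printed will be whatever your checked-out source reports; use python instead of python3 for a Python 2 build):
$ python3 -c "import tensorflow as tf; print(tf.__version__)"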
10: Test Your Installation
Finally, time to test our installation.
To test the installation, open an interactive Python shell and import the TensorFlow module:
$ cd # this will return you to your home directory (~)
$ python # or python3
…
>>> import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcurand.so.8.0 locally
With the TensorFlow module imported, the next step to test the installation is to create a TensorFlow Session, which will initialize the available computing devices and provide a means of executing computation graphs:
>>> sess = tf.Session()
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:03:00.0
Total memory: 7.92GiB
Free memory: 7.81GiB
…
To manually control which devices are visible to TensorFlow, set the CUDA_VISIBLE_DEVICES environment variable when launching Python. For example, to force the use of only GPU 0:
$ CUDA_VISIBLE_DEVICES=0 python
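Setting the variable to an empty string hides all GPUs, which is handy for a quick CPU-only comparison (this is standard CUDA behaviour rather than anything TensorFlow-specific):
$ CUDA_VISIBLE_DEVICES="" python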
You should now be able to run a Hello World application:
>>> hello_world = tf.constant("Hello, TensorFlow!")
>>> print(sess.run(hello_world))
Hello, TensorFlow!
>>> print(sess.run(tf.constant(12) * tf.constant(3)))
36
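If you also want to confirm that operations are actually being placed on the GPU, a short sketch like the following should work with this TensorFlow version (log_device_placement and the '/gpu:0' device string are standard API, but treat the exact log output as approximate):
>>> with tf.device('/gpu:0'):
...     a = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # 2x2 matrix
...     b = tf.constant([[1.0, 0.0], [0.0, 1.0]])  # identity matrix
...     c = tf.matmul(a, b)
...
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
>>> print(sess.run(c))  # device placement log lines appear first
[[ 1.  2.]
 [ 3.  4.]]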
TensorFlow also has instructions on how to do a basic test and a list of common installation problems.
You should now have TensorFlow installed on your computer. This tutorial was tested on a fresh install of Ubuntu Server 16.04 with a GeForce GTX 1080.
Referenced posts (See this page for more TensorFlow setup links I collected):