[Paper published] Check out our new (deep) machine learning paper for flood detection

New machine/deep learning paper led by Liping: Analysis of remote sensing imagery for disaster assessment using deep learning: a case study of flooding event

A full-text view-only version of the paper can be found via the link: https://rdcu.be/bpUvx.

 

Check out this page for more of Liping's publications.

 

Install Keras with GPU TensorFlow as backend on Ubuntu 16.04

This post introduces how to install Keras with TensorFlow as its backend on Ubuntu Server 16.04 LTS with CUDA 8 and an NVIDIA TITAN X (Pascal) GPU, but it should also work for Ubuntu Desktop 16.04 LTS.

We gratefully acknowledge the support of NVIDIA Corporation, which awarded one Titan X Pascal GPU used for our machine learning and deep learning research.

Keras is a great choice for learning machine learning and deep learning. Keras has a simple syntax and can use Google TensorFlow, Microsoft CNTK, or Theano as its backend. Keras is essentially a wrapper around more complex numerical computation engines such as TensorFlow and Theano.

Keras abstracts away much of the complexity of building a deep neural network, leaving us with a very simple, nice, and easy to use interface to rapidly build, test, and deploy deep learning architectures.
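To give a feel for how compact that interface is, here is a minimal sketch of a small fully connected network trained on random data (assuming Keras 2.x with the TensorFlow backend; the layer sizes and data are made up purely for illustration):

# tiny_keras_example.py -- train a small dense network on random data
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# fake data: 1000 samples, 20 features, 10 classes
x_train = np.random.random((1000, 20))
y_train = np.random.randint(10, size=(1000,))

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=20))  # hidden layer
model.add(Dense(10, activation='softmax'))             # output layer

model.compile(optimizer='rmsprop',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=2, batch_size=32)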

TensorFlow is extremely flexible, allowing you to deploy network computation to multiple CPUs, GPUs, servers, or even mobile systems without having to change a single line of code.

This makes TensorFlow an excellent choice for training distributed deep learning networks in an architecture agnostic way.
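For example, with the TensorFlow 1.x session API you can pin an operation to a particular device and ask TensorFlow to log where each op actually runs (a small sketch; /gpu:0 assumes a GPU is present, use /cpu:0 otherwise):

import tensorflow as tf

# place the matrix multiply explicitly on the first GPU
with tf.device('/gpu:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    c = tf.matmul(a, b)

# log_device_placement=True prints which device each op was assigned to
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))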

Now let's start the installation of Keras with TensorFlow as its backend.

1: Set up a Python virtual environment

Check my post for more details about how to set up a Python virtual environment and why it is better to install Python libraries in a virtual environment.

  • Install pip and Virtualenv for Python 2 and Python 3:
$ sudo apt-get update
$ sudo apt-get install openjdk-8-jdk git python-dev python3-dev python-numpy python3-numpy build-essential python-pip python3-pip python-virtualenv swig python-wheel libcurl3-dev
  • Create a Virtualenv environment for Python 2 and Python 3:
#for python 2
virtualenv --system-site-packages -p python ~/keras-tf-venv

# for python 3 
virtualenv --system-site-packages -p python3 ~/keras-tf-venv3

(Note: To delete a virtual environment, just delete its folder. For example, in our case it would be rm -rf keras-tf-venv or rm -rf keras-tf-venv3.)

2: Update & Install NVIDIA Drivers (skip this if you do not need the GPU version of TensorFlow)

Check another post I wrote (steps 1-4 in that post) for detailed instructions on how to update and install the NVIDIA driver, CUDA 8.0, and cuDNN, as required for the GPU TensorFlow installation.

Note: If you have an old version of the NVIDIA driver installed, use the following to remove it first before installing the new driver.

Step 1: Remove older version of NVIDIA
sudo apt-get purge nvidia*

Step 2: Reboot the system

Test whether it is removed:

$ sudo nvidia-smi
sudo: nvidia-smi: command not found  # this output means the old driver was uninstalled

(Note: If you have an older version of CUDA and cuDNN installed, check this post for uninstallation: How to uninstall CUDA Toolkit and cuDNN under Linux? (02/16/2017) (pdf). The CUDA installer itself also prints uninstallation instructions for older versions; see Install CUDA 8.0 below.)

(Note: I tried to install the latest NVIDIA driver, the latest CUDA, and the latest cuDNN (i.e., v6.0), but that combination did not work for me when I installed TensorFlow. After some testing, I found that NVIDIA driver 375.82, cuda_8.0.61_375.26_linux.run, and cudnn-8.0-linux-x64-v5.1.tgz work.)

Install CUDA 8.0:

Toolkit: Installed in /usr/local/cuda-8.0
Samples: Installed in /home/liping, but missing recommended libraries

Please make sure that
– PATH includes /usr/local/cuda-8.0/bin
– LD_LIBRARY_PATH includes /usr/local/cuda-8.0/lib64, or, add /usr/local/cuda-8.0/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run the uninstall script in /usr/local/cuda-8.0/bin

Please see CUDA_Installation_Guide_Linux.pdf in /usr/local/cuda-8.0/doc/pdf for detailed information on setting up CUDA.

***WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 361.00 is required for CUDA 8.0 functionality to work.
To install the driver using this installer, run the following command, replacing <CudaInstaller> with the name of this run file:
sudo <CudaInstaller>.run -silent -driver

Logfile is /tmp/cuda_install_3813.log

 

3: Install TensorFlow

Before installing TensorFlow and Keras, be sure to activate your python virtual environment first.

# for python 2
$ source ~/keras-tf-venv/bin/activate  # If using bash
(keras-tf-venv)$  # Your prompt should change

# for python 3
$ source ~/keras-tf-venv3/bin/activate  # If using bash
(keras-tf-venv3)$  # Your prompt should change

 (keras-tf-venv)$ pip install --upgrade tensorflow   # Python 2.7; CPU support (no GPU support)
 (keras-tf-venv3)$ pip3 install --upgrade tensorflow   # Python 3.n; CPU support (no GPU support)
 (keras-tf-venv)$ pip install --upgrade tensorflow-gpu  # Python 2.7;  GPU support
 (keras-tf-venv3)$ pip3 install --upgrade tensorflow-gpu # Python 3.n; GPU support

Note: If the commands for installing TensorFlow given above failed (typically because you invoked a pip version lower than 8.1), install TensorFlow in the active virtualenv environment by issuing a command of the following format:

 (keras-tf-venv)$ pip install --upgrade TF_PYTHON_URL   # Python 2.7
 (keras-tf-venv3)$ pip3 install --upgrade TF_PYTHON_URL  # Python 3.N

where TF_PYTHON_URL identifies the URL of the TensorFlow Python package. The appropriate value of TF_PYTHON_URL depends on the operating system, Python version, and GPU support. Find the appropriate value of TF_PYTHON_URL for your system here. For example, if you are installing TensorFlow for Linux, Python 2.7, and CPU-only support, issue the following command to install TensorFlow in the active virtualenv environment (see below for examples; note that you should check here to get the latest version for your system):

#for python 2.7 -- CPU only
(keras-tf-venv)$ pip install --upgrade \
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.1.0-cp27-none-linux_x86_64.whl

#for python 2.7 -- GPU support
(keras-tf-venv)$ pip install --upgrade \
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.1.0-cp27-none-linux_x86_64.whl

# for python 3.5 -- CPU only
(keras-tf-venv3)$ pip3 install --upgrade \
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.1.0-cp35-cp35m-linux_x86_64.whl

# for python 3.5 -- GPU support
(keras-tf-venv3)$ pip3 install --upgrade \ 
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.1.0-cp35-cp35m-linux_x86_64.whl
  • Validate your TensorFlow installation. (I installed GPU TensorFlow, so if you install CPU TensorFlow the output will be slightly different.)

#For Python 2.7

(keras-tf-venv) :~$ python
Python 2.7.12 (default, Nov 19 2016, 06:48:10) 
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2017-08-01 14:28:31.257054: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 14:28:31.257090: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 14:28:31.257103: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 14:28:31.257114: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 14:28:31.257128: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 14:28:32.253475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties: 
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:03:00.0
Total memory: 11.90GiB
Free memory: 11.75GiB
2017-08-01 14:28:32.253512: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0 
2017-08-01 14:28:32.253519: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y 
2017-08-01 14:28:32.253533: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:03:00.0)
>>> print(sess.run(hello))
Hello, TensorFlow!
>>> exit()
(keras-tf-venv) :~$ 

#for python 3

(keras-tf-venv3) :~$ python3
Python 3.5.2 (default, Nov 17 2016, 17:05:23) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2017-08-01 13:54:30.458376: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 13:54:30.458413: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 13:54:30.458425: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 13:54:30.458436: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 13:54:30.458448: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 13:54:31.420661: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties: 
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:03:00.0
Total memory: 11.90GiB
Free memory: 11.75GiB
2017-08-01 13:54:31.420692: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0 
2017-08-01 13:54:31.420699: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y 
2017-08-01 13:54:31.420712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:03:00.0)
>>> print(sess.run(hello))
b'Hello, TensorFlow!'
>>> exit() 
(keras-tf-venv3) :~$

If you see output like the following, TensorFlow was installed correctly.

Hello, TensorFlow!

4: Install Keras

(Note: Be sure that you have activated your Python virtual environment before you install Keras.)

Installing Keras is even easier than installing TensorFlow.

First, let’s install a few dependencies:

#for python 2
$ pip install numpy scipy
$ pip install scikit-learn
$ pip install pillow
$ pip install h5py

#for python 3
$ pip3 install numpy scipy
$ pip3 install scikit-learn
$ pip3 install pillow
$ pip3 install h5py
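With the dependencies in place, install Keras itself (still inside the activated virtual environment):

#for python 2
$ pip install keras

#for python 3
$ pip3 install keras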

5: Verify that your keras.json file is configured correctly

Let's now check the contents of our keras.json configuration file. You can find this file at ~/.keras/keras.json.

Use nano to open and edit the file:

$ nano ~/.keras/keras.json

The default values should be something like this:

{
 "epsilon": 1e-07,
 "backend": "tensorflow",
 "floatx": "float32",
 "image_data_format": "channels_last"
}
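If you prefer to confirm these settings from Python rather than by reading the file, the Keras backend module exposes them (a quick check, assuming Keras 2.x):

>>> from keras import backend as K
Using TensorFlow backend.
>>> K.backend()
'tensorflow'
>>> K.image_data_format()
'channels_last'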

Can’t find your keras.json file?

On most systems the keras.json file (and associated subdirectories) will not be created until you open up a Python shell and directly import the keras package itself.

If you find that the ~/.keras/keras.json file does not exist on your system, simply open up a shell, (optionally) activate your Python virtual environment (if you are using virtual environments), and then import Keras:

#for python 2
$ python
>>> import keras
>>> quit()

#for python 3
$ python3
>>> import keras
>>> quit()

From there, you should see that your keras.json file now exists on your local disk.

If you see any errors when importing keras, go back to the top of step 5 and ensure your keras.json configuration file has been properly updated.

6: Test Keras + TensorFlow installation

To verify that Keras + TensorFlow have been installed, simply activate the keras-tf-venv (or keras-tf-venv3) environment as above, open up a Python shell, and import keras:
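Something like the following should appear (the exact prompt and version banner will differ on your machine):

(keras-tf-venv3) :~$ python3
>>> import keras
Using TensorFlow backend.
>>>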

Specifically, you should see the text Using TensorFlow backend displayed when importing Keras, which demonstrates that Keras has been installed with the TensorFlow backend.

Note: each time you would like to use Keras, you need to activate the virtual environment into which it was installed, and when you are done using Keras, deactivate the environment.

# for python 2
(keras-tf-venv)$ deactivate
$  # Your prompt should change back

#for python 3
(keras-tf-venv3)$ deactivate
$  # Your prompt should change back

Note: To delete a virtual environment, just delete its folder. (In this case, it would be rm -rf keras-tf-venv or rm -rf keras-tf-venv3.)

References:

Installing Keras with TensorFlow backend (November 14, 2016, in Deep Learning, Libraries, Tutorials)

Installing keras makes tensorflow can’t find GPU

Installing Nvidia, Cuda, CuDNN, TensorFlow and Keras

https://www.tensorflow.org/install/install_linux

Keras as a simplified interface to TensorFlow: tutorial

I: Calling Keras layers on TensorFlow tensors

II: Using Keras models with TensorFlow

III: Multi-GPU and distributed training

IV: Exporting a model with TensorFlow-serving

Install GPU TensorFlow from Source on Ubuntu Server 16.04 LTS

I installed GPU TensorFlow from source on Ubuntu Server 16.04 LTS with CUDA 8 and a GeForce GTX 1080 GPU, but it should work for Ubuntu Desktop 16.04 LTS.

In this tutorial I will go through the process of building the latest TensorFlow from source for Ubuntu Server 16.04. TensorFlow now supports CUDA 8.0 & cuDNN 5.1, so you can also use the pip packages from the TensorFlow website for a much easier install.

In order to use TensorFlow with GPU support, you must have an NVIDIA graphics card with a minimum compute capability of 3.0.

Getting started, I am going to assume you know some of the basics of using a terminal in Linux. (Check this post for commonly used Linux commands.)

1: Install Required Packages

Open a terminal by pressing Ctrl + Alt + T.

(Because this is Ubuntu Server 16.04, you need to install the required packages below; if you are on Ubuntu Desktop 16.04, most of these libraries already come with the OS installation.)

Paste each line one at a time (without the $) using Shift + Ctrl + V

$ sudo apt-get install openjdk-8-jdk git python-dev python3-dev python-numpy python3-numpy build-essential python-pip python3-pip python-virtualenv swig python-wheel libcurl3-dev

2: Update & Install NVIDIA Drivers

Note that if you have a monitor connected to your server, be sure to disconnect it before you start to install the NVIDIA drivers; otherwise it may cause trouble when you reboot the server after the installation. You can reconnect the monitor once the drivers are successfully installed.

You must also have the 367 (or later) NVIDIA drivers installed; this can easily be done from Ubuntu's built-in Additional Drivers tool after you update your driver packages. (You can check the latest driver version for your GPU on the NVIDIA downloads page; for example, mine is 375.)

$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt update
$ sudo apt-get install nvidia-375  

(Note: use the following command if you encounter this error “sudo: add-apt-repository: command not found”)

$ sudo apt-get install software-properties-common

Once the driver is installed, restart your computer. You can use the command below to reboot the server from the command line.

$ sudo reboot

If you experience any trouble booting Linux or logging in, try disabling fast & safe boot in your BIOS and modifying your GRUB boot options to enable nomodeset.

You can use the following command to get various diagnostics of the GTX 1080.

$ sudo nvidia-smi

 

3: Install NVIDIA CUDA Toolkit 8.0 

Skip if not installing with GPU support

(Note: If you have an older version of CUDA and cuDNN installed, check this post for uninstallation: How to uninstall CUDA Toolkit and cuDNN under Linux? (02/16/2017) (pdf))

(If you need to use the command line to transfer files from your client computer to your server, refer to the following scp commands.)

File Transfer: getting files to/from your Ubuntu server

copy file:

scp -p file_name username@yourserver_hostname:destination/directory

for a full directory tree:

scp -pr dir_name username@yourserver_hostname:destination/directory

 

To install the NVIDIA CUDA Toolkit, download the base installation .run file from the NVIDIA website (download the .run file, NOT THE DEB FILE!).

 

$ cd ~/Downloads # or directory to where you downloaded file
$ sudo sh cuda_8.0.44_linux.run  # hold s to skip

This will install cuda into: /usr/local/cuda-8.0

MAKE SURE YOU SAY NO TO INSTALLING NVIDIA DRIVERS! (Very important: if you answer yes, the GTX 1080's 375 driver will be overwritten.)

Also make sure you select yes to creating a symbolic link to your CUDA directory.

(FYI, the following are the questions you will be asked.)

The following contains specific license terms and conditions
for four separate NVIDIA products. By accepting this
agreement, you agree to comply with all the terms and
conditions applicable to the specific product(s) included
herein.

Do you accept the previously read EULA?
accept/decline/quit: accept

Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 361.62?
(y)es/(n)o/(q)uit: n

Install the CUDA 8.0 Toolkit?
(y)es/(n)o/(q)uit: y

Enter Toolkit Location
[ default is /usr/local/cuda-8.0 ]:

Do you want to install a symbolic link at /usr/local/cuda?
(y)es/(n)o/(q)uit: y

Install the CUDA 8.0 Samples?
(y)es/(n)o/(q)uit: y

Enter CUDA Samples Location
[ default is /home/liping ]:

Installing the CUDA Toolkit in /usr/local/cuda-8.0 …
Installing the CUDA Samples in /home/liping …
Copying samples to /home/liping/NVIDIA_CUDA-8.0_Samples now…
Finished copying samples.

 

= Summary =
===========

Driver:   Not Selected
Toolkit:  Installed in /usr/local/cuda-8.0
Samples:  Installed in /home/liping, but missing recommended libraries

Please make sure that
 –   PATH includes /usr/local/cuda-8.0/bin
 –   LD_LIBRARY_PATH includes /usr/local/cuda-8.0/lib64, or, add /usr/local/cuda-8.0/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run the uninstall script in /usr/local/cuda-8.0/bin

Please see CUDA_Installation_Guide_Linux.pdf in /usr/local/cuda-8.0/doc/pdf for detailed information on setting up CUDA.

***WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 361.00 is required for CUDA 8.0 functionality to work.
To install the driver using this installer, run the following command, replacing <CudaInstaller> with the name of this run file:
    sudo <CudaInstaller>.run -silent -driver

Logfile is /tmp/cuda_install_7169.log

 

4: Install NVIDIA cuDNN

Once the CUDA Toolkit is installed, download cuDNN v5.1 for CUDA 8.0 from the NVIDIA website (note that you will be asked to register for an NVIDIA developer account in order to download it) and extract it into /usr/local/cuda via:

$ sudo tar -xzvf cudnn-8.0-linux-x64-v5.1.tgz
$ sudo cp cuda/include/cudnn.h /usr/local/cuda/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*

Then update your bash file:

$ nano ~/.bashrc

This will open your bash file in a text editor; scroll to the bottom and add these lines:

export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda
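The CUDA installer summary above also asks that PATH include the CUDA bin directory. If nvcc is not already on your PATH, you can add this line as well (assuming the /usr/local/cuda symbolic link you created during the Toolkit install):

export PATH="/usr/local/cuda/bin:$PATH"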

Once you save and close the text file you can return to your original terminal and type this command to reload your .bashrc file:

$ source ~/.bashrc

5: Install Bazel

Instructions also on Bazel website

$ echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
$ curl https://storage.googleapis.com/bazel-apt/doc/apt-key.pub.gpg | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install bazel
$ sudo apt-get upgrade bazel
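You can optionally confirm that Bazel installed correctly before moving on:

$ bazel version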

6: Clone TensorFlow

$ cd ~
$ git clone https://github.com/tensorflow/tensorflow

7: Configure TensorFlow Installation

$ cd ~/tensorflow
$ ./configure

Use defaults by pressing enter for all except:

Please specify the location of python. [Default is /usr/bin/python]:

For Python 2 use the default, or if you wish to build for Python 3 enter:

$ /usr/bin/python3.5

Please input the desired Python library path to use. Default is [/usr/local/lib/python2.7/dist-packages]:

For Python 2 use the default, or if you wish to build for Python 3 enter:

$ /usr/local/lib/python3.5/dist-packages

Unless you have a Radeon graphics card, you can say no to OpenCL support. (Has anyone tested this? Ping me if so!)

Please specify the Cuda SDK version you want to use, e.g. 7.0. [Leave empty to use system default]:

$ 8.0

Please specify the Cudnn version you want to use. [Leave empty to use system default]:

$ 5

Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "3.5,5.2"]: 5.2,6.1
……….
INFO: Starting clean (this may take a while). Consider using --expunge_async if the clean takes more than several minutes.
………
INFO: All external dependencies fetched successfully.
Configuration finished

If all was done correctly you should see:

INFO: All external dependencies fetched successfully.
Configuration finished.

8: Build TensorFlow

Warning: this step is resource intensive. I recommend having at least 8 GB of memory.

(Note that your current path in the terminal should be ~/tensorflow.)

If you want to build TensorFlow with GPU support, enter the following (note that the command should be one line):

$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

For CPU only enter:

$ bazel build -c opt //tensorflow/tools/pip_package:build_pip_package

9: Build & Install Pip Package

(Note that your current path in the terminal should be ~/tensorflow.)

This will build the pip package required for installing TensorFlow and place it in ~/tensorflow_pkg [you can change this directory to any one you like]:

$ bazel-bin/tensorflow/tools/pip_package/build_pip_package ~/tensorflow_pkg

Remember that, at any time, you can manually force the project to be reconfigured (run ./configure in step 7 above to reconfigure) and built from scratch by emptying the directory ~/tensorflow_pkg (make sure you are inside that directory first) with:

rm -rf ./*

Now you can cd into the directory where you built your TensorFlow wheel; in my case it is ~/tensorflow_pkg.

Then issue the following command, depending on whether you are using Python 2 or Python 3.

To Install Using Python 3 (remove sudo if using a virtualenv)

$ sudo pip3 install tensorflow-0.12.1-cp35-cp35m-linux_x86_64.whl

# tip: after you type tensorflow, you can hit Tab on your keyboard to autofill the name of the .whl file you just built

For Python 2 (remove sudo if using a virtualenv)

$ sudo pip install tensorflow-0.12.1-cp27-cp27mu-linux_x86_64.whl

# tip: after you type tensorflow, you can hit Tab on your keyboard to autofill the name of the .whl file you just built

Note that if you meet this error:

The directory ‘/home/youraccountname/.cache/pip/http’ or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo’s -H flag. 

Change the command above to

 sudo -H pip install tensorflow-0.12.1-cp27-cp27mu-linux_x86_64.whl

If you meet this warning

You are using pip version 8.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

I would suggest just ignoring this; sometimes after upgrading there can be trouble because of dependencies.

10: Test Your Installation

Finally, time to test our installation.

To test the installation, open an interactive Python shell and import the TensorFlow module:

$ cd  # this will return you to your home directory ~
$ python  # or python3
… 
>>> import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcurand.so.8.0 locally

With the TensorFlow module imported, the next step to test the installation is to create a TensorFlow Session, which will initialize the available computing devices and provide a means of executing computation graphs:

>>> sess = tf.Session()
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: 
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:03:00.0
Total memory: 7.92GiB
Free memory: 7.81GiB
…

To manually control which devices are visible to TensorFlow, set the CUDA_VISIBLE_DEVICES environment variable when launching Python. For example, to force the use of only GPU 0:

$ CUDA_VISIBLE_DEVICES=0 python

You should now be able to run a Hello World application:

>>> hello_world = tf.constant("Hello, TensorFlow!")
>>> print(sess.run(hello_world))
Hello, TensorFlow!
>>> print(sess.run(tf.constant(12) * tf.constant(3)))
36
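If you want to double-check which devices TensorFlow can actually see, the device_lib helper lists them (a quick sanity check; the exact names and count depend on your hardware and driver setup):

>>> from tensorflow.python.client import device_lib
>>> [d.name for d in device_lib.list_local_devices()]
['/cpu:0', '/gpu:0']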

TensorFlow also has instructions on how to do a basic test and a list of common installation problems.

You should now have TensorFlow installed on your computer. This tutorial was tested on a fresh install of Ubuntu Server 16.04 with a GeForce GTX 1080.

 

Referenced posts (See this page for more TensorFlow setup links I collected):