Distributed TensorFlow

This page lists links to resources about distributed TensorFlow.

A brief tutorial on asynchronous, data-parallel training using three worker machines, each with a GTX 960 GPU (2 GB), and one parameter server with no GPU. The author trains a simple sigmoid network with a small learning rate on MNIST to measure performance differences. The goal is not to achieve high accuracy but to learn about TensorFlow's distribution capabilities.
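The skeleton such a setup builds on might look roughly like the sketch below, using TensorFlow 1.x's `tf.train.ClusterSpec`, `tf.train.Server`, and `tf.train.replica_device_setter` for between-graph, asynchronous replication. This is only an illustration: the host names, ports, flag handling, hidden-layer size, learning rate, and step count are placeholder assumptions, not taken from the tutorial.

```python
# A minimal sketch of the parameter-server pattern described above, using
# the classic TF 1.x distributed APIs. Host names, ports, and hyperparameter
# values here are illustrative assumptions, not the tutorial's exact code.
import numpy as np
import tensorflow as tf

# One parameter server (no GPU) and three workers, as in the tutorial.
cluster = tf.train.ClusterSpec({
    "ps":     ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222",
               "worker1.example.com:2222",
               "worker2.example.com:2222"],
})

# In practice each process would receive these via flags,
# e.g. --job_name=worker --task_index=0 (hypothetical flag names).
job_name, task_index = "worker", 0
server = tf.train.Server(cluster, job_name=job_name, task_index=task_index)

if job_name == "ps":
    server.join()  # parameter servers only host and serve the variables
else:
    # Pin variables to the PS; compute ops run on this worker's device.
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % task_index,
            cluster=cluster)):
        x = tf.placeholder(tf.float32, [None, 784])  # MNIST pixels
        y = tf.placeholder(tf.float32, [None, 10])   # one-hot labels
        hidden = tf.layers.dense(x, 100, activation=tf.sigmoid)
        logits = tf.layers.dense(hidden, 10)
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits_v2(labels=y,
                                                       logits=logits))
        global_step = tf.train.get_or_create_global_step()
        # Asynchronous updates: each worker applies its own gradients
        # independently, with a small learning rate as in the tutorial.
        train_op = tf.train.GradientDescentOptimizer(0.005).minimize(
            loss, global_step=global_step)

    # MonitoredTrainingSession coordinates initialization and recovery.
    hooks = [tf.train.StopAtStepHook(last_step=10000)]
    with tf.train.MonitoredTrainingSession(master=server.target,
                                           is_chief=(task_index == 0),
                                           hooks=hooks) as sess:
        while not sess.should_stop():
            # Random stand-in batch; the tutorial feeds real MNIST data.
            batch_x = np.random.rand(64, 784).astype(np.float32)
            batch_y = np.eye(10, dtype=np.float32)[
                np.random.randint(0, 10, size=64)]
            sess.run(train_op, feed_dict={x: batch_x, y: batch_y})
```

Each machine runs the same script with its own job name and task index; the parameter-server process blocks in `server.join()` while the workers train concurrently.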

Running TensorFlow on Spark in a scalable, fast, and compatible manner.