This page lists links to resources about distributed TensorFlow.
A brief tutorial on asynchronous, data-parallel training using three worker machines (each with a GTX 960 GPU, 2 GB) and one parameter server with no GPU. The author trains a simple sigmoid network on MNIST with a small learning rate to measure performance differences. The goal is not to achieve high accuracy but to learn about TensorFlow's distribution capabilities.
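The cluster described above (three workers, one parameter server) would be expressed in TensorFlow's between-graph replication style with a cluster specification roughly like the following sketch. The host names and ports are placeholders, not taken from the tutorial:

```python
# Hypothetical cluster specification for the setup described above:
# three worker machines (each with one GPU) and one parameter server.
# Host names and ports are placeholders, not from the original tutorial.
cluster_spec = {
    "ps": ["ps0.example.com:2222"],
    "worker": [
        "worker0.example.com:2222",
        "worker1.example.com:2222",
        "worker2.example.com:2222",
    ],
}

# Each process then starts a server for its own task (TF 1.x API), e.g.:
#   cluster = tf.train.ClusterSpec(cluster_spec)
#   server = tf.train.Server(cluster, job_name="worker", task_index=0)
# Variables live on the ps task (e.g. via tf.train.replica_device_setter),
# and in asynchronous training each worker applies its gradients to the
# shared variables independently, without waiting for the other workers.
```

In this layout the parameter server holds the model variables while each worker computes gradients on its own shard of the data, which is why the ps machine needs no GPU.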
- TensorOnSpark on GitHub (by liangfengsid)
Runs TensorFlow on Spark in a scalable, fast, and compatible style.
- Spark GPU Demo on GitHub (by )
- Glossary in Distributed TensorFlow (August 26, 2016 by weiwen.web)