This page lists official TensorFlow links, examples, and tutorials.
(Stay tuned: the list is growing over time.)
- TensorFlow official webpage
- TensorFlow has nice tutorials on TensorFlow basics and on using Convolutional Neural Networks – you can find them here.
- TensorFlow on Github
- TensorFlow’s repository of models
- TensorFlow Official Tutorials
- TensorFlow Glossary
- TensorFlow Guides (How To)
- Reading data
- More than 3,000 TensorFlow-related repositories listed on GitHub
- TensorFlow Wide & Deep Learning Tutorial (post, Github)
- TF Learn (Github)
TF Learn is a simplified interface for TensorFlow, to get people started on predictive analytics and data mining. The library covers a variety of needs: from linear models to Deep Learning applications like text and image understanding.
- TensorFlow official examples on Github by tensorflow
Contains image_retraining and tutorials sub-folders.
- How to Classify Images with TensorFlow (google research blog, tutorial)
- How to Retrain Inception’s Final Layer for New Categories
- Official resources recommended by TensorFlow.
- Open sourcing the Embedding Projector: a tool for visualizing high dimensional data
*****Visualization tools in TensorFlow*****
The computations you’ll use TensorFlow for – like training a massive deep neural network – can be complex and confusing. To make it easier to understand, debug, and optimize TensorFlow programs, TensorFlow has included a suite of visualization tools called TensorBoard.
TensorBoard is a suite of web applications for inspecting and understanding your TensorFlow runs and graphs. You can use TensorBoard to visualize your TensorFlow graph, plot quantitative metrics about the execution of your graph, and show additional data like images that pass through it. TensorBoard currently supports five visualizations: scalars, images, audio, histograms, and the graph.
The following are visualization tutorials using TensorBoard (The TensorBoard README has a lot more information on TensorBoard usage, including tips & tricks, and debugging information):
- TensorBoard: Visualizing Learning
- TensorBoard: Graph Visualization
- TensorBoard: Embedding Visualization
*****Defining, training and evaluating complex models in TensorFlow*****
TF-Slim is a lightweight library for defining, training and evaluating complex models in TensorFlow. Components of tf-slim can be freely mixed with native tensorflow, as well as other frameworks, such as tf.contrib.learn.
import tensorflow.contrib.slim as slim
TF-Slim is a library that makes building, training and evaluating neural networks simple:
- Allows the user to define models much more compactly by eliminating boilerplate code. This is accomplished through the use of argument scoping and numerous high level layers and variables. These tools increase readability and maintainability, reduce the likelihood of an error from copy-and-pasting hyperparameter values and simplify hyperparameter tuning.
- Makes developing models simple by providing commonly used regularizers.
- Several widely used computer vision models (e.g., VGG, AlexNet) have been developed in slim, and are available to users. These can either be used as black boxes, or can be extended in various ways, e.g., by adding “multiple heads” to different internal layers.
- Slim makes it easy to extend complex models, and to warm start training algorithms by using pieces of pre-existing model checkpoints.
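The argument-scoping idea above can be sketched in plain Python. This is a conceptual illustration of how scoped defaults eliminate copy-pasted hyperparameters, not TF-Slim's actual implementation; `arg_scope`, `scoped` and `conv` here are hypothetical stand-ins:

```python
# Conceptual sketch of argument scoping: set shared defaults once for a
# function, instead of repeating the same keyword arguments at every call.
import contextlib
import functools

_scoped_defaults = {}

@contextlib.contextmanager
def arg_scope(fn, **defaults):
    # Install shared default kwargs for `fn` while inside the `with` block.
    old = _scoped_defaults.get(fn)
    _scoped_defaults[fn] = defaults
    try:
        yield
    finally:
        if old is None:
            del _scoped_defaults[fn]
        else:
            _scoped_defaults[fn] = old

def scoped(fn):
    # Make `fn` consult the active scope for default keyword arguments.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        merged = dict(_scoped_defaults.get(wrapper, {}), **kwargs)
        return fn(*args, **merged)
    return wrapper

@scoped
def conv(inputs, num_outputs, padding="VALID", activation="none"):
    # Stand-in for a layer function; returns its arguments for inspection.
    return (inputs, num_outputs, padding, activation)

with arg_scope(conv, padding="SAME", activation="relu"):
    a = conv("x", 64)                       # picks up both scoped defaults
    b = conv("x", 128, activation="none")   # per-call override still wins

print(a)  # ('x', 64, 'SAME', 'relu')
print(b)  # ('x', 128, 'SAME', 'none')
```

Outside the `with` block the original defaults apply again, which mirrors how slim.arg_scope keeps hyperparameter choices in one place.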
TF-slim is a new lightweight high-level API of TensorFlow (tensorflow.contrib.slim) for defining, training and evaluating complex models. This directory contains code for training and evaluating several widely used Convolutional Neural Network (CNN) image classification models using TF-slim. It contains scripts that will allow you to train models from scratch or fine-tune them from pre-trained network weights. It also contains code for downloading standard image datasets, converting them to TensorFlow’s native TFRecord format and reading them in using TF-Slim’s data reading and queueing utilities. You can easily train any model on any of these datasets, as we demonstrate below. We’ve also included a Jupyter notebook, which provides working examples of how to use TF-Slim for image classification.
This process may take several days, depending on your hardware setup. See model_deploy for details about how to train a model on multiple GPUs and/or multiple CPUs, either synchronously or asynchronously.
An example of fine-tuning inception-v3 on flowers.
To evaluate the performance of a model (whether pretrained or your own), you can use the eval_image_classifier.py script.
This notebook will walk you through the basics of using TF-Slim to define, train and evaluate neural networks on various tasks. It assumes a basic knowledge of neural networks.
*****Serving system for machine learning models*****
*****From Google research blog*****
- Celebrating TensorFlow’s First Year (November 09, 2016, Google Research Blog)
- TF-Slim: A high level library to define complex models in TensorFlow (August 30, 2016, Google Research Blog)
- Text summarization with TensorFlow (August 24, 2016, Google Research Blog) – (GitHub repo, a good post using the textsum model based on TensorFlow)
- Announcing TensorFlow Fold: Deep Learning With Dynamic Computation Graphs (February 07, 2017, Google Research Blog) – (GitHub repo, Download and Setup, Quick Start Notebook, Documentation)
TensorFlow Fold is not an official Google product.
TensorFlow by itself was not designed to work with tree or graph structured data. It does not natively support any data types other than tensors, nor does it support the complex control flow, such as recursive functions, that are typically used to run models like tree-RNNs. When the input consists of trees (e.g. parse trees from a natural language model), each tree may have a different size and shape. A standard TensorFlow model consists of a fixed graph of operations, which cannot accommodate variable-shaped data. Fold overcomes this limitation by using the dynamic batching algorithm.
The input to a Fold model is a mini-batch of Python objects. These objects may be produced by deserializing a protocol buffer, JSON, XML, or a custom parser of some kind. The input objects are assumed to be tree-structured. The output of a Fold model is a set of TensorFlow tensors, which can be hooked up to a loss function and optimizers in the usual way.
Given a mini-batch of data structures as input, Fold will take care of traversing the input data, and combining and scheduling operations in a way that can be executed efficiently by TensorFlow. For example, if each node in a tree outputs a vector using a fully-connected layer with shared weights, then Fold will not simply traverse the tree and do a bunch of vector-matrix multiply operations. Instead, it will merge nodes at the same depth in the tree that can be executed in parallel into larger and more efficient matrix-matrix multiply operations, and then split up the output matrix into vectors again.
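The depth-wise merging described above can be illustrated with a toy sketch in plain Python, rather than the Fold API itself; `nodes_by_depth` and the `(value, children)` tuple encoding of trees are illustrative assumptions:

```python
# Toy illustration of the dynamic batching idea: nodes at the same depth,
# even across differently shaped trees, are grouped so each depth level can
# be processed as a single batched operation.
from collections import defaultdict

def nodes_by_depth(tree, depth=0, out=None):
    # tree is a (value, [children]) tuple; trees may differ in size and shape.
    out = defaultdict(list) if out is None else out
    value, children = tree
    out[depth].append(value)
    for child in children:
        nodes_by_depth(child, depth + 1, out)
    return out

# Two trees of different shapes in one mini-batch.
t1 = (1, [(2, []), (3, [(4, [])])])
t2 = (5, [(6, [])])

batches = defaultdict(list)
for tree in (t1, t2):
    nodes_by_depth(tree, out=batches)

# Each depth level becomes one batch, so one matrix-matrix multiply could
# replace many per-node vector-matrix multiplies at that level.
print(dict(batches))  # {0: [1, 5], 1: [2, 3, 6], 2: [4]}
```
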
The basic component of a Fold model is the td.Block. A block is essentially a function – it takes an object as input, and produces another object as output. The objects in question may be tensors, but they may also be tuples, lists, Python dictionaries, or combinations thereof. The types page describes the Fold type system in more detail.
Blocks are organized hierarchically into a tree, much like expressions in a programming language, where larger and more complex blocks are composed from smaller, simpler blocks. Note that the block structure must be a tree, not a DAG. In other words, each block (i.e. each instance of one of the block classes below) must have a unique position in the tree. The type-checking and compilation steps depend on this tree property.
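Conceptually, a block is a function composed with other blocks into a tree; that idea can be sketched in plain Python (this `Block`, `Map`, `scalar` and `negate` are illustrative stand-ins, not the actual td API):

```python
# Sketch of Fold-style blocks: each block maps an input object to an output
# object, and larger blocks are built by composing smaller ones.
class Block:
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, x):
        return self.fn(x)
    def __rshift__(self, other):
        # b1 >> b2 pipes b1's output into b2, mirroring Fold's composition.
        return Block(lambda x: other(self(x)))

def Map(block):
    # Composite block: apply an inner block to every element of a list input.
    return Block(lambda xs: [block(x) for x in xs])

scalar = Block(float)                 # leaf block: parse a scalar
negate = Block(lambda x: -x)          # leaf block: negate a number
pipeline = Map(scalar >> negate)      # a small tree of blocks

print(pipeline(["1", "2.5", "3"]))    # [-1.0, -2.5, -3.0]
```

Here `pipeline` has a unique position for each block instance, matching the requirement that the block structure be a tree rather than a DAG.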