TensorFlow: raise ValueError("GraphDef cannot be larger than 2GB.")

While I was using TensorFlow’s ImageNet-trained model to extract the last pooling layer’s features as representation vectors for a new dataset of images, it worked fine for around 21–22 images but then crashed with the following error:

File ".../lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2152, in _as_graph_def
    raise ValueError("GraphDef cannot be larger than 2GB.")
ValueError: GraphDef cannot be larger than 2GB.

The cause of the error: each call to run_inference_on_image() adds nodes to the same TensorFlow graph, which eventually exceeds the maximum serialized size of 2 GB.
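The growth is easy to see in a few lines. This is a minimal sketch using the graph-mode API via tf.compat.v1 (on the TensorFlow 1.x this post was written against, plain `import tensorflow as tf` behaves the same); the tf.constant() call is a stand-in for the nodes that create_graph() imports on each call:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # graph mode, as in TF 1.x

node_counts = []
# Stand-in for calling create_graph() once per image: every iteration adds
# nodes to the same default graph, so the serialized GraphDef keeps growing
# toward the hard 2 GB protobuf limit.
for i in range(3):
    tf.constant(0.0, name="imported_model_%d" % i)  # stand-in for imported model nodes
    node_counts.append(len(tf.get_default_graph().as_graph_def().node))
print(node_counts)  # strictly increasing
```

With the real model, each "iteration" imports millions of parameters' worth of constants, so the limit is reached after only a couple dozen images.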

An efficient solution:

Modify run_inference_on_image() to handle multiple images. Call sess.run() inside a for loop that iterates over your image files. This way, the model no longer needs to be reconstructed on every call, which also makes processing each image much faster.

See the snippet below for some hints:

Rewrite the function run_inference_on_image()

I renamed it to run_inference_on_multiple_images(path_to_your_image_files), which takes a single parameter: the path to your image directory.

In the main function:

directory = os.path.dirname(os.getcwd()) + "/path-to-images/"
run_inference_on_multiple_images(directory)

In the run_inference_on_multiple_images() function:

def run_inference_on_multiple_images(path_to_image_files):
  # Build the graph once, before the loop.
  create_graph()
  ...
  # Open a single session and reuse it for every image.
  with tf.Session() as sess:
    for filename in os.listdir(path_to_image_files):
      if filename.endswith(".jpg"):
        image = os.path.join(path_to_image_files, filename)
        ...
        # Only sess.run() executes per image; no new graph nodes are added.
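After the loop, each image’s pool_3 activation is typically a 1×1×1×2048 array. A small helper (hypothetical, not part of the original script) can flatten them into a feature matrix for the new dataset; here `features` is assumed to be a dict mapping filenames to those NumPy arrays:

```python
import numpy as np

def features_to_matrix(features):
    """Stack per-image pool_3 activations into an (n_images, 2048) matrix.

    `features` is assumed to map each filename to the 1x1x1x2048 array
    returned by sess.run() on the pool_3 tensor (hypothetical layout).
    """
    names = sorted(features)
    matrix = np.vstack([features[name].reshape(-1) for name in names])
    return names, matrix

# Example with dummy data in place of real activations:
fake = {"a.jpg": np.zeros((1, 1, 1, 2048)), "b.jpg": np.ones((1, 1, 1, 2048))}
names, matrix = features_to_matrix(fake)
print(names, matrix.shape)  # ['a.jpg', 'b.jpg'] (2, 2048)
```

Sorting the filenames keeps row order reproducible across runs, so the matrix rows can be matched back to images later.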

 
