When you download a pre-trained model from TensorFlow Hub (TF Hub), it saves all files to a cache directory on your C:\ drive. In practice, however, you usually want the model inside your working directory. In this tutorial, I will show you how to download and save a TF Hub model to a custom path (folder) and load it whenever you want.
What is TensorFlow Hub?
TensorFlow Hub is a library offered by Google. It provides pre-trained machine learning models for various tasks like image classification, neural style transfer, natural language processing, text classification, etc.
You can use these freely available pre-trained models to improve the accuracy of your machine learning projects, without having to start from scratch.
Course for You: Python for Computer Vision with OpenCV and Deep Learning
TensorFlow Hub Model URL
TensorFlow Hub (or TF Hub) provides many pre-trained deep learning models. So where do you find them? Follow the steps below.
- Go to the TensorFlow Hub website: https://tfhub.dev/
- Click on the “Browse” button in the top-left corner of the page
- You can find categories like Text, Image, Video, and Audio
- There are many models in each category. You can click on any model to view its details, such as its description, example code, etc.
- You can download any of these TF Hub models manually by clicking the “Download” button and save it to a custom local folder path
- You can also find the link for that model to download it from Python
Also Read: Neural Style Transfer using Tensorflow Hub
Now, the default and most popular way of loading and saving any TF Hub model is by using the model link. You just pass the model link in Python and TF Hub will automatically download the model for you. Let’s walk through the steps to download and save a TF Hub model using Python.
Download TF Hub model
To download a TF Hub model, you need to pass the model link to the hub.load() function. It will automatically download the model in the background.
import tensorflow_hub as hub

# Specify model url
model_url = "https://tfhub.dev/google/universal-sentence-encoder/4"

# Download TF Hub model from URL
model = hub.load(model_url)
Load & Run the model
# Run Tensorflow Hub model
embeddings = model(["This is an example sentence."])

# Print Output
print(embeddings)
Now the problem is that this code saves the model to your default cache location (in my case, C:\Users\Anindya\AppData\Local\Temp\tfhub_modules).
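You can check where TF Hub will cache models on your machine with a short sketch. This assumes the TFHUB_CACHE_DIR environment variable is not set; in that case TF Hub falls back to a tfhub_modules folder inside your system temp directory:

```python
import os
import tempfile

# TF Hub honours TFHUB_CACHE_DIR when it is set; otherwise it falls back
# to a "tfhub_modules" folder inside the system temp directory
default_cache = os.environ.get("TFHUB_CACHE_DIR") or os.path.join(
    tempfile.gettempdir(), "tfhub_modules"
)
print(default_cache)
```

On Windows the temp directory is typically something like C:\Users\&lt;you&gt;\AppData\Local\Temp, which matches the path above.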
If you want the TF Hub model saved inside a custom folder path instead, there are three ways to do it.
Method 1: Save TF Hub model to custom path
Let’s say you downloaded the model inside your default cache path (C:\Users\Anindya\AppData\Local\Temp\tfhub_modules). Now you can copy the entire model directory to your local working directory.
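One way to do that copy from Python is with shutil.copytree from the standard library. A minimal sketch, assuming the default cache location and a hypothetical destination folder (adjust both paths for your machine):

```python
import os
import shutil
import tempfile

# Default TF Hub cache location (assumes TFHUB_CACHE_DIR is not set)
cache_dir = os.path.join(tempfile.gettempdir(), "tfhub_modules")

# Hypothetical destination inside your working drive
target_dir = "D:/tf_hub_saved_model"

# Copy the whole cached model tree, if the cache exists
if os.path.isdir(cache_dir):
    shutil.copytree(cache_dir, target_dir, dirs_exist_ok=True)
```

Note that the dirs_exist_ok argument requires Python 3.8 or newer.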
The Python code below first downloads the model to your default cache path, then saves it to your custom folder path using tf.saved_model.save().
import tensorflow as tf
import tensorflow_hub as hub

# Download TF Hub model from URL
embed_model = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

# Path where to save the model
saved_model_path = "D:/tf_hub_saved_model"

# Save TF Hub model to custom folder path
tf.saved_model.save(embed_model, saved_model_path)
Load & Run the model
import tensorflow_hub as hub

# Saved model folder path
saved_model_path = "D:/tf_hub_saved_model"

# Load the TF Hub model from custom folder path
loaded_embed_model = hub.load(saved_model_path)

# Run the saved model
embeddings = loaded_embed_model(["This is an example sentence."])

# Print model output
print(embeddings)
Method 2: Change cache location
You can change the default cache location before downloading the model. TF Hub will then save the model inside your custom cache path.
In the Python code below, we update the cache path to D:\tfhub_cache before loading, so the model downloads directly into that directory.
import os
import tensorflow_hub as hub

# Update Tensorflow Hub cache path
os.environ["TFHUB_CACHE_DIR"] = "D:/tfhub_cache"

model_url = "https://tfhub.dev/google/universal-sentence-encoder/4"

# Download and load model from TF Hub
model = hub.load(model_url)
Load & Run the model
Note that TF Hub stores the cached model under a hashed sub-folder inside the cache directory, so you should not pass the cache folder itself to hub.load(). Instead, set the cache path again and load the model by its URL; TF Hub will then resolve it to the cached copy without re-downloading.

import os
import tensorflow_hub as hub

# Point TF Hub at the custom cache path again
os.environ["TFHUB_CACHE_DIR"] = "D:/tfhub_cache"

# Loading the same URL now resolves to the cached copy
loaded_embed_model = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

# Run the saved model
embeddings = loaded_embed_model(["This is an example sentence."])

# Print model output
print(embeddings)
Method 3: Download TF Hub models manually
If you know the link of the model, you can download it manually by adding “?tf-hub-format=compressed” after the model link.
For example, if your model URL is https://tfhub.dev/google/universal-sentence-encoder/4, then after adding that extra string the final URL should look like this: https://tfhub.dev/google/universal-sentence-encoder/4?tf-hub-format=compressed
Now you just need to copy this final URL, paste it into any web browser, and hit enter. The model will be downloaded as a tar file (.tar.gz) to your machine.
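If you prefer to script this step instead of using a browser, here is a rough sketch using only the Python standard library. The helper names and local paths are my own for illustration, not part of TF Hub:

```python
import tarfile
import urllib.request


def compressed_url(model_url):
    # tfhub.dev serves a tar.gz archive when this query string is appended
    return model_url + "?tf-hub-format=compressed"


def download_and_extract(model_url, archive_path, extract_dir):
    # Fetch the compressed model and unpack it into extract_dir
    urllib.request.urlretrieve(compressed_url(model_url), archive_path)
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(extract_dir)


# Example call (paths are hypothetical; the download can be large):
# download_and_extract(
#     "https://tfhub.dev/google/universal-sentence-encoder/4",
#     "use4.tar.gz",
#     "D:/unzip_tf_hub_model/universal-sentence-encoder_4",
# )
```

After extraction, the folder passed as extract_dir is what you hand to hub.load(), exactly as in the code below.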
Next, extract this tar file to your preferred location (in my case D:\unzip_tf_hub_model\universal-sentence-encoder_4). Then run the Python code below to load the model.
import tensorflow_hub as hub

# Extracted model folder path
model_path = "D:/unzip_tf_hub_model/universal-sentence-encoder_4"

model = hub.load(model_path)
Load & Run the model
import tensorflow_hub as hub

# Saved model folder path
saved_model_path = "D:/unzip_tf_hub_model/universal-sentence-encoder_4"

# Load the TF Hub model from custom folder path
loaded_embed_model = hub.load(saved_model_path)

# Run the saved model
embeddings = loaded_embed_model(["This is an example sentence."])

# Print model output
print(embeddings)
Error you may encounter
While running a TensorFlow Hub model, you may sometimes get the error below.
InternalError: Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run Identity: Dst tensor is not initialized. [Op:Identity]
This is simply a memory issue: TensorFlow failed to copy the tensor into GPU memory. To solve this error, you just need to restart your kernel in Jupyter Notebook.
When you download a TF Hub model, by default it is saved inside the C:\ drive (cache location). My system has only 5 GB of free space on the C:\ drive, which is why I had to find a way to save these pre-trained models to a custom folder path (on my D:\ drive).
After a little research, I found the methods I shared in this post. Please let me know in the comments below how helpful this tutorial was for you.
Hi there, I’m Anindya Naskar, Data Science Engineer. I created this website to show you what I believe is the best possible way to get your start in the field of Data Science.