Object Detection: From labeling images to tflite model

Emory Raphael Viana Freitas
4 min read · Dec 28, 2021

In this post, I will demonstrate how to build a tflite model from scratch and embed it in a mobile application.

A common way to create an AI model that detects objects in an image is the YOLO algorithm. So let's use what the YOLO authors have already built in order to accelerate our process.

Here is the link to the Colab notebook that we will use throughout this post. It is well commented and easy to adapt. I made changes so it can be run in Google Colab, but all the code you need is there.

Custom dataset

Let's create our own dataset of the object we want to detect in images. For this we need LabelImg.

After collecting images of the object we want our model to detect, let's split the dataset into two directories, train and test. Since the dataset can be large, splitting it by hand would be tedious. For this reason, here is a snippet that can do it for us.

Code to split dataset into train and test folders
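The embedded snippet did not survive the export, so here is a minimal sketch of such a split. The 80/20 ratio, the `.jpg` extension, and the folder layout are assumptions; adapt them to your dataset.

```python
import random
import shutil
from pathlib import Path

def split_dataset(source_dir, out_dir, train_ratio=0.8, seed=42):
    """Copy images from source_dir into out_dir/{train,test}/images.

    Assumes .jpg images; an 80/20 train/test split by default.
    """
    images = sorted(Path(source_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)  # fixed seed for reproducibility
    n_train = int(len(images) * train_ratio)
    for subset, files in (("train", images[:n_train]), ("test", images[n_train:])):
        dest = Path(out_dir) / subset / "images"
        dest.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, dest / f.name)
```

If your label files already exist, repeat the same copy for the matching `.txt` files into the `labels` folders.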

The folder, after all this processing, should look like this:

.
├── data.yaml
├── test
│   ├── images
│   └── labels
└── train
    ├── images
    └── labels

LabelImg is the tool we need to annotate our dataset. Open it and point it at your images folder.

LabelImg home screen

Now you can select Create RectBox and draw a box around the object in the image; you can create multiple boxes for multiple labels. Use Change Save Dir to save the box labels into the labels folder of the train/test directories.

Example of creating a RectBox
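If you choose the YOLO format in LabelImg, each image gets a `.txt` file with one line per box: the class index followed by the normalized box center and size. A small sketch of reading such a line (the label values below are hypothetical):

```python
def parse_yolo_label(line):
    """Parse one line of a YOLO-format label file.

    Format: "<class_id> <x_center> <y_center> <width> <height>",
    with all coordinates normalized to [0, 1].
    """
    parts = line.split()
    return int(parts[0]), [float(v) for v in parts[1:]]

# hypothetical label line for one bounding box
cls, box = parse_yolo_label("0 0.5 0.5 0.25 0.3")
```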

After labeling the train and test images, let's save them as a zip file and upload it to Google Drive. This will allow us to use them in our notebook easily: after running the cell below, our drive is mounted under /content.

from google.colab import drive
drive.mount('/content/gdrive')

Now, in the Files tab, you can open the gdrive directory and find the zip file that contains all your images. Then copy the file to /content and unzip it there. With the dataset living under /content, it is easy and fast to use during training.

%cp /content/gdrive/My\ Drive/<file_name>.zip /content/<file_name>.zip
!unzip <file_name>.zip; rm <file_name>.zip

Training model

Now comes what we could call the easier part. In the notebook, we start at the section where we choose the architecture to use. In our example, there are four options: small, medium, large and xlarge.

Let's choose the small one. Based on some experiments, I would say it takes longer to converge and extracts fewer features. On the other hand, it is smaller than the others, which is a good thing considering that our final destination is a mobile application. Follow the steps until you find the next cell:

!python3 train.py --img 416 --batch 90 --epochs 100 --data '../data.yaml' --cfg ./models/custom_yolov5s.yaml --weights '' --name <yolo>_results --cache

There are a bunch of parameters you can customize, but Google Colab may not be able to handle them if you increase values such as img and batch: they require more memory, which is limited on the free tier.

Hint: if it's your first time using Google Colab, check how to enable the GPU runtime to make training faster.

When all epochs finish, or when we observe that the metrics are no longer improving, a new file will be generated. So yeah, we've built our model!!! The file will be at runs/train/<yolo>_results/weights/best.pt. Copy the model to your drive so you can download it.

%cp /content/yolov5/runs/train/<yolo>_results/weights/best.pt /content/gdrive/MyDrive/best.pt

Convert to tflite model

We are very close to the end of our story; just three more steps to finish it.

First we need to convert from .pt to .onnx; the notebook code does this for us, and it requires sample images to infer the weights. The next step is converting the ONNX model into a TensorFlow model. The last one goes from TensorFlow to tflite. Here is some code showing how we do this:
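The conversion cells themselves did not survive the export, so below is a hedged sketch. The first two steps depend on the yolov5 repo and the onnx-tf package, and the exact script names and flags vary by version, so treat them as assumptions; the final TensorFlow-to-tflite step is shown runnable on a toy Keras model standing in for the converted network.

```python
# Step 1 (shell cell, yolov5 repo; flags are version-dependent): .pt -> .onnx
#   !python3 export.py --weights best.pt --include onnx --img 416
# Step 2 (shell cell, onnx-tf package): .onnx -> TensorFlow SavedModel
#   !onnx-tf convert -i best.onnx -o best_saved_model
# Step 3: TensorFlow -> .tflite with the TFLite converter.
import tensorflow as tf

# Toy model standing in for the converted YOLO network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(416, 416, 3)),
    tf.keras.layers.Conv2D(4, 3),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()  # returns the model as bytes

with open("best.tflite", "wb") as f:
    f.write(tflite_model)
```

For the real model you would point the converter at the SavedModel from step 2 with `tf.lite.TFLiteConverter.from_saved_model("best_saved_model")` instead.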

Conclusion

That's it!! We built our object detection model as a tflite file. I hope this tutorial helps you start your project.

Here is the repository with the code and jupyter notebook that I used.
