Model Training on Supervisely using AWS Cluster

Supervisely is a web platform that provides an environment for managing every aspect of the deep learning and computer vision project development process: data collection, annotation, model experiments, continuous model improvement, sharing, and collaboration.

In this project, my objective is to create a model and train it on an AWS agent.

LET'S START!!!

Initially, we will find a team with one workspace and one member (you). Other members can be added as well, but we won't be needing that here.

I created a workspace named mlop_task_6.

Once the workspace is created, a screen appears that explains the procedure for carrying out work on Supervisely.

For this project, I created a dataset using images from Google of people wearing masks. I tried my best to maintain variety in the dataset, from age and skin complexion to different types of masks.

Training a model with just 15 images does not make much sense on its own, but later we will perform image augmentation, which will multiply the number of images.

After uploading the data, I named the project covid_mask_detection.

Once the data is uploaded and the project is created, we need to annotate the images.

Now, what is Image Annotation!?

Image annotation is the process of labeling various components of an image. It's like telling the computer "this is a car", "this is a flower", and so on.

How Does Image Annotation Work?

To create annotated images you need three things:

  1. Images
  2. Someone to annotate the images
  3. A platform to annotate the images on

In this project, to train the model to detect whether someone is wearing a mask or not, I specified a face with a mask on as the region of interest using Polygonal Segmentation.

In Polygonal Segmentation, we mark the boundary of the object(s). The enclosed area, represented by the shaded region, is our target object. One image may contain multiple objects with different labels.
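To make this concrete, here is a minimal sketch of what a polygon annotation looks like in Supervisely's JSON annotation format. The class name mask, the image size, and the coordinates below are illustrative assumptions, not values from my actual project:

```json
{
  "description": "",
  "size": { "height": 800, "width": 600 },
  "tags": [],
  "objects": [
    {
      "classTitle": "mask",
      "geometryType": "polygon",
      "tags": [],
      "points": {
        "exterior": [[210, 380], [300, 360], [370, 420], [360, 520], [250, 540], [200, 470]],
        "interior": []
      }
    }
  ]
}
```

The exterior list traces the polygon's boundary vertex by vertex; interior would hold the boundaries of any holes cut out of the shaded region.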

Similarly, I have annotated all the images in the dataset.

Supervisely provides us with the amazing "ADD SMART" feature, in which we just have to mark the corner points of a rectangle around the ROI, and the object inside is segmented automatically, as shown in the following images.

ONLY 15 IMAGES….!!??

As noted earlier, training a model with just 15 images does not make sense.

But now we will perform image augmentation, which will multiply the number of images.

IMAGE AUGMENTATION!!?

Image augmentation is a technique for expanding the size of a training dataset.

HOW? By creating multiple variations of each image in the dataset: zooming in and out, tilting the images at various angles, grayscaling, blurring, flipping, and many other transformations.

Training deep learning models on more data generally results in more skillful models, and the augmented images improve a model's ability to generalize what it has learned to new images.

In Supervisely, the Data Transformation Language (DTL) is used for image augmentation.

Sample DTL code is also provided by Supervisely.

I have used this DTL code. (Source: VIMAL DAGA Sir's GitHub repository)

The flowchart makes it very clear how the data is augmented by the code.

Here, the images are resized in two different ways. Noise is then added to each image, and flipping the images creates yet more distinct variants, as the sketch below illustrates.
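A minimal DTL sketch of such a pipeline follows. It is modeled on Supervisely's published DTL examples; the layer names (data, resize, noise, flip, supervisely) are real DTL layers, but the project names and numeric settings here are illustrative assumptions rather than the exact values from the repository, and setting names may differ between platform versions:

```json
[
  { "action": "data", "src": ["covid_mask_detection/*"], "dst": "$data",
    "settings": { "classes_mapping": "default" } },

  { "action": "resize", "src": ["$data"], "dst": "$resized_a",
    "settings": { "width": 800, "height": 600, "aspect_ratio": { "keep": false } } },

  { "action": "resize", "src": ["$data"], "dst": "$resized_b",
    "settings": { "width": 600, "height": 800, "aspect_ratio": { "keep": false } } },

  { "action": "noise", "src": ["$resized_a", "$resized_b"], "dst": "$noisy",
    "settings": { "mean": 10, "std": 50 } },

  { "action": "flip", "src": ["$noisy"], "dst": "$flipped",
    "settings": { "axis": "vertical" } },

  { "action": "supervisely", "src": ["$noisy", "$flipped"], "dst": "covid_mask_augmented",
    "settings": {} }
]
```

Each image flows down two resize branches, gets noise added, and is then duplicated again by the flip layer, which is why the output project ends up with several times the original 15 images.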

One more folder, with the augmented images, is created.

We can see very clearly that the new folder contains many more images, and that the first image has been flipped vertically.

Now we will divide the dataset into training and testing sets. Supervisely provides a DTL code for this split as well, as shown in the images below.
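Here is a minimal sketch of such a split DTL, again following the layer names in Supervisely's documented examples. The 95/5 probability, the train/val tag names, and the project names are assumptions for illustration:

```json
[
  { "action": "data", "src": ["covid_mask_augmented/*"], "dst": "$data",
    "settings": { "classes_mapping": "default" } },

  { "action": "if", "src": ["$data"], "dst": ["$train", "$val"],
    "settings": { "condition": { "probability": 0.95 } } },

  { "action": "tag", "src": ["$train"], "dst": "$train_tagged",
    "settings": { "tag": "train", "action": "add" } },

  { "action": "tag", "src": ["$val"], "dst": "$val_tagged",
    "settings": { "tag": "val", "action": "add" } },

  { "action": "supervisely", "src": ["$train_tagged", "$val_tagged"], "dst": "covid_mask_train",
    "settings": {} }
]
```

The if layer routes roughly 95% of the images to the train branch and the rest to val; the tags are what the training plugin later reads to decide which images to train on and which to validate on.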

Now that I am done with my dataset, I need a neural network on which to do the training.

Supervisely provides us with various pre-built neural networks. From the “+ADD” option at the top, we can select the required neural network.

List of added Neural Networks.

To start the training, press the "TRAIN" button next to the model.

And then you will end up with the following pop-up screen.

WHY DID THIS HAPPEN!!!???

Supervisely provides the models, the DTL codes for augmentation and annotation, and many other things, but not the resources to run them.

Supervisely acts as the manager of the cluster, and we need to provide it with agents that have the resources (RAM, GPU, etc.).

In this project, we will train our model on a slave (agent) launched using AWS cloud services.

STEP1:

Create a new account if you don't have one already, and then go to the AWS Management Console.

STEP2:

Find the EC2 service.

Scroll down to the LAUNCH INSTANCE button and select the "Launch Instances" option.

STEP3:

A list of OS images (Amazon Machine Images: AMIs) will appear.

For Supervisely to train our model, the slave must run a Linux OS with an NVIDIA CUDA-capable GPU, Docker, and NVIDIA-Docker. Therefore, I made the final call on the highlighted option.

This image will be used as a slave for our master node.
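Once the instance is up and you can SSH in (see the steps below), a quick sanity check that the chosen AMI really ships this stack might look like the following. These are standard commands, though the exact output varies by AMI:

```bash
# Is the NVIDIA driver installed and a CUDA-capable GPU visible?
nvidia-smi

# Is Docker installed?
docker --version

# Is the NVIDIA container runtime registered with Docker?
docker info | grep -i runtime
```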

STEP4:

The next step after selecting an image is to select the instance type.

For the neural networks, we need an instance with a GPU, so I filtered the list accordingly.

I selected the g4dn.xlarge instance (4 vCPUs, 16 GiB RAM, and one NVIDIA T4 GPU).

If you don't have a key pair, create a new one; otherwise, select an existing one.

Then launch the instance.

NOTE:

If you are a new user, you might not have enough vCPU quota to launch the instance. Calculate the limit you need (a g4dn.xlarge uses 4 vCPUs, so request at least 4 for the G-instance quota) and then request the increase.

Now we need to connect our system to the launched instance over SSH, using the details given on the AWS page.

Add a slave/agent in Supervisely in the "Clusters" section, and a window will pop up with a command.

Copy the given command and run it in the SSH session on the instance to connect it with Supervisely.
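For reference, the connection step looks roughly like this. The key file name, user, and hostname are placeholders to replace with the values from the EC2 console's "Connect" page:

```bash
# Restrict the key file's permissions, or SSH will refuse to use it.
chmod 400 my-key-pair.pem

# Connect to the instance (the default user depends on the AMI,
# e.g. ubuntu or ec2-user).
ssh -i my-key-pair.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com

# Inside the SSH session, paste the exact command shown in the
# Supervisely "Clusters" pop-up. It is a docker run command for the
# supervisely/agent image and embeds your unique agent token, so it
# has to be copied from the UI rather than reproduced here.
```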

Now, in the Neural Networks section, press the TRAIN button next to the selected model. Then fill in the requested information and RUN the model.

There is an option to view the training chart.

Also, we can download the model and use it accordingly.

Remember to terminate the instance after using it, so you don't keep paying for idle resources.

These days, datasets are getting bigger and models are getting more complex every day. The more the complexity, the more resources are required to train the model.

The cloud is the life saver in this situation.

Hence, the cloud is one of the most crucial technologies for any tech enthusiast to learn, and this is my first step towards it.
