How does artificial intelligence assess your looks? – Building a deep learning model for the beauty evaluation problem

Tram Ho

How can you tell whether you are handsome or pretty without relying on other people's opinions?

The answer is that AI will do it for you. Today I will show you how to build a very interesting problem called "Beauty Evaluate": we train a deep learning model on the SCUT-FBP5500 dataset to get an "ultimate" model that predicts how beautiful a face is. My code is linked at the bottom of the article… Now, let's begin =))


A few words about the dataset

The SCUT-FBP5500 dataset contains a total of 5500 faces: men and women, Asians and Caucasians, old and young, with diverse labels (facial landmarks, beauty scores on a 5-point scale, and the distribution of beauty scores), enabling different computational models for facial beauty prediction. The dataset can be divided into four subsets by race and gender: 2000 Asian females (AF), 2000 Asian males (AM), 750 Caucasian females (CF) and 750 Caucasian males (CM). Most images in SCUT-FBP5500 were collected from the Internet.

You can read more details in the paper here and download the dataset here.

Training / Testing set split

The downloaded dataset comes pre-split for two experiments:

  • 5-fold cross validation, where in each fold 80% of the samples (4400 images) are used for training and the remaining 20% (1100 images) are used for testing.
  • A single split with 60% of the samples for training and 40% for testing.

This is the label file of the training set; as you can see, each image is listed with its beauty score next to it.

And here are some images for each respective label:


Okay, that's it for the theory; now comes the part I like writing the most =))

Split data

Use this function to parse ALL_labels.txt, separating the image paths from their beauty scores.

Here I use the 60% training / 40% testing split.
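Since the source code is only linked at the end, here is a minimal sketch of the two steps above. It assumes each line of ALL_labels.txt has the form "image_name score"; the function names `parse_labels` and `train_test_split` and the seed are my own, not taken from the author's code:

```python
import random

def parse_labels(lines):
    """Parse lines of the form '<image_name> <score>' into (path, score) pairs."""
    pairs = []
    for line in lines:
        name, score = line.split()
        pairs.append((name, float(score)))
    return pairs

def train_test_split(pairs, train_frac=0.6, seed=42):
    """Shuffle and split into 60% training / 40% testing."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    cut = int(len(pairs) * train_frac)
    return pairs[:cut], pairs[cut:]

# With the real file this would be:
# with open("ALL_labels.txt") as f:
#     pairs = parse_labels(f)
sample = ["AF1.jpg 3.2", "CM7.jpg 2.5", "AM3.jpg 4.1", "CF2.jpg 3.8", "AF9.jpg 1.9"]
train, test = train_test_split(parse_labels(sample))
print(len(train), len(test))  # 3 2
```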

Use cv2.imread to read the input images and keep them in RAM as arrays. Note: I only do it this way because it is convenient :v, but I encourage you to save the arrays to a file so the next training run can skip this step. Also, if you use Google Colab you will understand why :v.

Finally, reshape the data so it can enter the network with size 224×224 and 1 channel:

Loss function

Here I will use RMSE as the loss because it suits the problem, or, to be blunt, because it follows the paper :v. You can read about RMSE here.
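Keras has no built-in RMSE loss, so one common way to define it (a sketch, not necessarily the author's exact code) is via the backend:

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def rmse(y_true, y_pred):
    """Root mean squared error: sqrt of the mean squared difference."""
    return K.sqrt(K.mean(K.square(y_pred - y_true)))
```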

Build Model

  • Model input is a 224×224 image with 1 channel
  • 7 Conv2D layers, with up to 1024 filters per layer
  • 7 2D MaxPool layers
  • 2 fully connected (FC) layers

The network I used comes from another author (linked at the end, in the references), but I adjusted it to my liking by adding a Conv2D layer, setting dropout and resizing the input image to 224×224. The original network has 7 million parameters; after my adjustments it grew to 30 million :v
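A minimal Keras sketch matching the description above could look like this. The exact filter counts, kernel sizes, FC width and dropout rate are my assumptions (the author's linked code may differ), so the parameter count will not match the 30 million quoted:

```python
from tensorflow.keras import layers, models

def build_model(size=224):
    """CNN sketch: 7 Conv2D + 7 MaxPool blocks, then 2 fully connected layers."""
    m = models.Sequential()
    m.add(layers.Input(shape=(size, size, 1)))  # 224x224 grayscale input
    for filters in (32, 64, 128, 256, 512, 1024, 1024):  # up to 1024 filters
        m.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
        m.add(layers.MaxPooling2D((2, 2)))
    m.add(layers.Flatten())
    m.add(layers.Dense(1024, activation="relu"))  # FC layer 1
    m.add(layers.Dropout(0.5))
    m.add(layers.Dense(1))                        # FC layer 2: the beauty score
    return m
```

With seven 2×2 poolings the 224×224 input shrinks to a 1×1 feature map, and the final Dense layer outputs a single regression value (the predicted score).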


I also use SGD instead of Adam because of its stability (at first I tried Adam, but its loss jumped around too much), set EarlyStopping so that training stops by itself when val_loss has not improved for more than 10 epochs, and set a checkpoint to save the most optimal model.
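The training setup above can be sketched as follows. The learning rate, momentum, checkpoint filename and the tiny stand-in model are my own illustrative choices, not the author's values:

```python
import numpy as np
from tensorflow.keras import layers, models, backend as K
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

def rmse(y_true, y_pred):
    return K.sqrt(K.mean(K.square(y_pred - y_true)))

# Tiny stand-in model; in the article this is the 224x224 CNN.
model = models.Sequential([layers.Input(shape=(4,)), layers.Dense(1)])
model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss=rmse)

callbacks = [
    # stop when val_loss has not improved for 10 consecutive epochs
    EarlyStopping(monitor="val_loss", patience=10),
    # keep only the weights with the best val_loss seen so far
    ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),
]

X, y = np.random.rand(20, 4), np.random.rand(20)
history = model.fit(X, y, validation_split=0.2, epochs=3,
                    batch_size=4, callbacks=callbacks, verbose=0)
```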

This is the result after training: the loss is about 0.28 and the model is quite good :v. Some of you may wonder why I did not augment the images to get better training results; the answer is that I tried it and it did not improve much ?)


After we have the model, we need to write code that loads it and shows the results. I use OpenCV to detect faces because it is fast =)). Here each face is scored on a scale of 1 to 5 (1 is the least attractive, 5 is the most attractive :v).

Some results obtained:

Princess Thuy Tung Tung Son: v

Tom Hiddleston with a pretty high score

As for this lady, I give up; she is a model I found on Google :v

You can go here to test it out:

Link source code : Here

My article ends here. If anything is wrong, I hope you will comment below, and please upvote for a better future between us. Thank you very much, and see you in the next article ?)



Source : Viblo