Matrix Factorization: A recommendation method based on matrix decomposition (Part 2)

Tram Ho

As with the previous post, after covering the theory in Part 1, in this part I will walk through a demo of the algorithm. Let's dive in!

1. Building the MF class

Initialization function

Input parameters:

  • Y : the utility matrix, stored as three columns; each row holds three values: user_id, item_id, rating.
  • n_factors : the number of latent dimensions shared between users and items; default n_factors = 2.
  • X : the item latent-factor matrix.
  • W : the user latent-factor matrix (the product of X and W approximates the ratings).
  • lamda : the weight of the regularization term in the loss function, used to avoid overfitting; default lamda = 0.1.
  • learning_rate : the step size of gradient descent, which controls how fast the model learns; default learning_rate = 2.
  • n_epochs : the number of training iterations; default n_epochs = 50.
  • top : the number of items suggested per user; default top = 10.
  • filename : the file in which evaluation results are stored.
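As a sketch of how this initialization might look (following the convention of the referenced machinelearningcoban post, where X holds item factors, W holds user factors, and their product approximates the utility matrix; the attribute names mirror the parameter list above):

```python
import numpy as np

class MF:
    """Matrix factorization recommender (sketch; names follow the post)."""

    def __init__(self, Y, n_factors=2, X=None, W=None, lamda=0.1,
                 learning_rate=2, n_epochs=50, top=10, filename=None):
        self.Y = Y                          # ratings: rows of (user_id, item_id, rating)
        self.n_factors = n_factors          # number of latent factors
        self.lamda = lamda                  # regularization weight
        self.learning_rate = learning_rate  # gradient-descent step size
        self.n_epochs = n_epochs            # number of training iterations
        self.top = top                      # items to recommend per user
        self.filename = filename            # where evaluation results are stored
        self.n_users = int(np.max(Y[:, 0])) + 1
        self.n_items = int(np.max(Y[:, 1])) + 1
        # latent factors: X for items, W for users (random init when not given)
        self.X = np.random.randn(self.n_items, n_factors) if X is None else X
        self.W = np.random.randn(n_factors, self.n_users) if W is None else W
```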

 

 

By changing these weights, you can observe their influence on the algorithm's evaluation results.

get_user_rated_item() and get_item_rated_by_user()

get_user_rated_item(i) returns the list of users who have rated item i.

get_item_rated_by_user(u) returns the list of items rated by user u.

We will use these two functions to optimize the two matrices X and W.
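Under the same assumption that each row of Y is (user_id, item_id, rating), the two helpers might be implemented as follows (shown here as standalone functions taking Y explicitly; in the class they would be methods reading self.Y):

```python
import numpy as np

def get_user_rated_item(Y, i):
    """Return the ids of the users who rated item i, plus their ratings."""
    ids = np.where(Y[:, 1] == i)[0]
    return Y[ids, 0].astype(int), Y[ids, 2]

def get_item_rated_by_user(Y, u):
    """Return the ids of the items rated by user u, plus the ratings."""
    ids = np.where(Y[:, 0] == u)[0]
    return Y[ids, 1].astype(int), Y[ids, 2]
```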

The functions that update X and W:

These two functions optimize X and W by gradient descent; the number of training iterations is fixed at 50.
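A gradient-descent sketch of the two update steps (the function names, the Y ≈ X·W convention, and the loss 0.5·Σ(error²)/n + 0.5·lamda·(‖X‖² + ‖W‖²) are my assumptions, following the referenced post). Each call makes one pass over all items or all users; the main loop then repeats them n_epochs times:

```python
import numpy as np

def update_X(Y, X, W, lamda, lr):
    """One gradient step on every item's factor vector, with W held fixed."""
    n = Y.shape[0]
    for i in range(X.shape[0]):
        ids = np.where(Y[:, 1] == i)[0]          # rows of Y that rate item i
        if ids.size == 0:
            continue
        users = Y[ids, 0].astype(int)
        ratings = Y[ids, 2]
        Wi = W[:, users]                         # factors of the rating users
        error = X[i] @ Wi - ratings              # prediction error on item i
        grad = error @ Wi.T / n + lamda * X[i]   # regularized gradient
        X[i] -= lr * grad
    return X

def update_W(Y, X, W, lamda, lr):
    """One gradient step on every user's factor vector, with X held fixed."""
    n = Y.shape[0]
    for u in range(W.shape[1]):
        ids = np.where(Y[:, 0] == u)[0]          # rows of Y rated by user u
        if ids.size == 0:
            continue
        items = Y[ids, 1].astype(int)
        ratings = Y[ids, 2]
        Xu = X[items]                            # factors of the rated items
        error = Xu @ W[:, u] - ratings
        grad = Xu.T @ error / n + lamda * W[:, u]
        W[:, u] -= lr * grad
    return W
```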

 

Main algorithm
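Putting the pieces together, the main loop alternates the two gradient steps for n_epochs iterations. The compact, vectorized sketch below is self-contained; `fit_mf` and its defaults are illustrative, not the post's exact code:

```python
import numpy as np

def fit_mf(Y, n_factors=2, lamda=0.1, lr=0.5, n_epochs=50, seed=0):
    """Alternate gradient steps on item factors X and user factors W."""
    rng = np.random.default_rng(seed)
    users = Y[:, 0].astype(int)
    items = Y[:, 1].astype(int)
    r = Y[:, 2]
    n = len(r)
    X = rng.standard_normal((items.max() + 1, n_factors)) * 0.1
    W = rng.standard_normal((n_factors, users.max() + 1)) * 0.1
    for _ in range(n_epochs):
        # gradient step on X (item factors), W fixed
        err = np.einsum('ij,ji->i', X[items], W[:, users]) - r
        gX = np.zeros_like(X)
        np.add.at(gX, items, err[:, None] * W[:, users].T)
        X -= lr * (gX / n + lamda * X)
        # gradient step on W (user factors), X fixed
        err = np.einsum('ij,ji->i', X[items], W[:, users]) - r
        gW = np.zeros_like(W)
        np.add.at(gW.T, users, err[:, None] * X[items])
        W -= lr * (gW / n + lamda * W)
    return X, W
```

A predicted rating for user u and item i is then simply `X[i] @ W[:, u]`, and the top recommendations for u are the unrated items with the highest predictions.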

2. Evaluation

As with the previous two methods, I use two metrics here, RMSE and Precision/Recall (PR):
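A minimal sketch of the two metrics (the function names are my own; how "relevant" items are chosen, e.g. by a rating threshold, and how the top-N list is built are left to the caller):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between true and predicted ratings."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def precision_recall(recommended, relevant):
    """Precision and recall of a recommendation list against the relevant set."""
    rec, rel = set(recommended), set(relevant)
    hits = len(rec & rel)
    precision = hits / len(rec) if rec else 0.0
    recall = hits / len(rel) if rel else 0.0
    return precision, recall
```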

3. Demo with the MovieLens dataset
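A sketch of loading a MovieLens-style ratings file (the tab-separated columns user_id, item_id, rating, timestamp follow the standard ml-100k `u.data` format; the path is an assumption, and a small in-memory sample stands in for the real file here):

```python
import io
import numpy as np

def load_ratings(path_or_buf):
    """Load tab-separated (user, item, rating, timestamp) rows into a Y matrix."""
    data = np.loadtxt(path_or_buf, delimiter='\t')
    Y = data[:, :3].astype(float)   # keep only (user_id, item_id, rating)
    Y[:, :2] -= 1                   # MovieLens ids are 1-based; make them 0-based
    return Y

sample = io.StringIO("1\t10\t4\t0\n1\t20\t3\t0\n2\t10\t5\t0\n")
Y = load_ratings(sample)            # in practice: load_ratings('ml-100k/u.data')
```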

The results I obtained are:

Vary the weights to find the best-performing set.

Source code and references:

Code

https://machinelearningcoban.com/2017/05/31/matrixfactorization/


Source : Viblo