How to build a simple chatbot

Tram Ho

Preamble

On a nice day, I sat down to write a tutorial on building a simple chatbot, saying no to chatbot frameworks like Rasa. I will code this bot purely on the NLU side: no stories, no dialogue memory, no slot filling, no actions, and so on.

Language: Python
Operating system: Ubuntu 16.04
Techniques: Database, Intent Classification

Database

DB: Postgres

To make the code easy to set up and run, I created a shell file, run.sh, that runs the installation commands automatically, as below:
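The original script isn't shown here, so below is a minimal sketch of what run.sh might contain, assuming Ubuntu 16.04's apt packages for Postgres:

```bash
#!/bin/sh
# Install PostgreSQL, then the Python dependencies (package names are assumptions)
sudo apt-get update
sudo apt-get install -y postgresql postgresql-contrib python3-pip
pip3 install -r requirements.txt
```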

The requirements.txt file lists the Python packages; run.sh installs Postgres first and then installs these packages:
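The exact package list isn't shown in the original, but a plausible requirements.txt for this tutorial would be:

```text
# Assumed package list for this tutorial
psycopg2-binary
scikit-learn
```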

To run run.sh, use this command: sh run.sh

After installation completes, you create a database; I also wrote a shell file, create_db.sh, to make this convenient:
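A minimal sketch of create_db.sh, assuming a funnybot role and database (the credentials are placeholders, not from the original):

```bash
#!/bin/sh
# Create a database user and the funnybot database (credentials are placeholders)
sudo -u postgres psql -c "CREATE USER funnybot WITH PASSWORD 'funnybot';"
sudo -u postgres psql -c "CREATE DATABASE funnybot OWNER funnybot;"
```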

After creating the DB, write a function that connects to the funnybot database:
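The original connection code isn't shown; here is a minimal sketch using psycopg2, with credentials matching the assumed create_db.sh above:

```python
import psycopg2

def connect_db():
    # Open a connection to the funnybot database
    return psycopg2.connect(
        host="localhost",
        dbname="funnybot",
        user="funnybot",
        password="funnybot",
    )
```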

After connecting to the funnybot DB, create the tables:
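The article's schema isn't shown, so here is one plausible layout matching the four insert steps below (all table and column names are assumptions):

```sql
-- Assumed schema: a bot has intents; each intent has training sentences and answers
CREATE TABLE bots (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);

CREATE TABLE intents (
    id SERIAL PRIMARY KEY,
    bot_id INTEGER REFERENCES bots(id),
    name TEXT NOT NULL
);

CREATE TABLE training_data (
    id SERIAL PRIMARY KEY,
    intent_id INTEGER REFERENCES intents(id),
    sentence TEXT NOT NULL
);

CREATE TABLE answers (
    id SERIAL PRIMARY KEY,
    intent_id INTEGER REFERENCES intents(id),
    text TEXT NOT NULL
);
```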

Add some data to the tables (a sketch of the corresponding inserts follows this list):

  1. Add a bot

  2. Add intents

  3. Add training data for the intents

  4. Add the answers
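Illustrative inserts for the four steps, using the assumed schema above (all values are made up):

```sql
-- 1. Add a bot
INSERT INTO bots (name) VALUES ('funnybot');

-- 2. Add intents
INSERT INTO intents (bot_id, name) VALUES (1, 'greeting'), (1, 'goodbye');

-- 3. Add training data for the intents
INSERT INTO training_data (intent_id, sentence) VALUES
    (1, 'hello'), (1, 'hi there'),
    (2, 'bye'), (2, 'see you later');

-- 4. Add the answers
INSERT INTO answers (intent_id, text) VALUES
    (1, 'Hello! How can I help you?'),
    (2, 'Goodbye, see you soon!');
```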

You may feel that data this small doesn't need a DB, but on a real project you will run into plenty of data problems: large volumes, confidentiality, storage capacity, structure, access control, and so on. It's not just a matter of importing a CSV file. OK, that was a bit rambling; now that we have data, we need to process it and feed it into a Machine Learning model to classify the user's intent.

Intent Classification

Bear with me: I won't walk through fetching the data from Postgres, or this article would get long. In general, to retrieve data you just run SELECT * FROM table_name; with table_name replaced by your actual table.

Import the packages needed. Here I use the MultinomialNB probability model and the text-to-vector techniques (Bag of Words, TF-IDF) from scikit-learn:
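These are the scikit-learn pieces covering both techniques: CountVectorizer for Bag of Words, TfidfTransformer for TF-IDF, and MultinomialNB for the classifier:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
```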

Data preparation

After preparing the raw data above, we turn it into training examples and train the model.
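A minimal sketch of that preparation, assuming the connect_db() helper and the training_data table from the schema sketched earlier:

```python
# Load training sentences and their intent labels from the database
conn = connect_db()
cur = conn.cursor()
cur.execute("SELECT sentence, intent_id FROM training_data;")
rows = cur.fetchall()
cur.close()
conn.close()

X_train = [sentence for sentence, _ in rows]    # raw text
y_train = [intent_id for _, intent_id in rows]  # intent labels
```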

Here I use sklearn's Pipeline module to streamline preprocessing and model definition: the model only needs to fit the data, and the data flows through the pipeline. Once training finishes, prediction takes just a few more lines of logic:
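A sketch of that pipeline and the prediction logic (the 0.5 confidence threshold is my assumption, not from the article):

```python
# Bag of Words -> TF-IDF weighting -> Multinomial Naive Bayes, as one pipeline
model = Pipeline([
    ("bow", CountVectorizer()),
    ("tfidf", TfidfTransformer()),
    ("clf", MultinomialNB()),
])

model.fit(X_train, y_train)

# Predict the intent of a new message, with a simple confidence check
message = "hi there"
probs = model.predict_proba([message])[0]
best = probs.argmax()
if probs[best] >= 0.5:
    print("intent:", model.classes_[best])
else:
    print("intent unclear, fall back to a default answer")
```

With the intent predicted, the remaining logic is just looking up an answer for that intent in the answers table.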

Result

Conclusion

That's about it. I had planned to add Entity Extraction as well, but I've been too busy and have set it aside; maybe I'll get to it in the future.

Thank you for reading this article.


Source: Viblo