Chapter 1. Introducing Lifelong Machine Learning

Tram Ho

Machine learning (ML) is the tool that advances both data analysis and artificial intelligence (AI). In recent years, as computing power has risen to new heights and major technology firms have collected vast amounts of data, ML has taken a great leap forward and a new field called deep learning was born. ML algorithms have been applied in almost all areas of computer science, the natural sciences, engineering, the social sciences, and beyond. Without ML algorithms, many industries, such as e-commerce and web search, could not exist or grow. However, the current ML paradigm has some weaknesses. This chapter outlines the classic ML paradigm and its weaknesses, and then introduces lifelong learning (LL) as a promising new direction.

1. Classic machine learning model

The current ML paradigm is to run an ML algorithm on a dataset to create a model, which is then applied to real-world tasks. We call this paradigm isolated learning because it does not consider any information or knowledge learned before. An isolated learner does not retain and accumulate the knowledge learned in the past and use it to learn and solve problems in the future, which contrasts sharply with how humans learn. Because there is no accumulation of prior knowledge, ML algorithms need a large number of training examples to learn effectively, and the learning environment is usually closed and static. For supervised learning, data labeling is typically done manually, which is laborious and time-consuming. Worse still, everything around us changes constantly. Even with unsupervised learning, collecting large amounts of data may not be possible in many cases.
We humans, on the contrary, learn the other way. We accumulate and maintain the knowledge learned from previous tasks and use it seamlessly when learning and solving new tasks. Faced with new knowledge, we can adjust our past knowledge to deal with new situations and also learn from them. Lifelong machine learning (LL) aims to capture this human learning process and ability. Because everything around us is so closely linked, knowledge learned about some topics can help us understand and learn others.

Two examples

1.1 Automatic driving

There are two approaches to learning to drive: rule-based and learning-based. With the rule-based approach, it is difficult to enumerate and generalize all the situations on the road. The learning-based approach has a similar problem because the environment is highly volatile and complex. Take the perception system as an example: it is used to detect and identify all types of objects on the road in order to predict hazards and situations, and training it on labeled datasets is very difficult. Ideally, the system could learn continuously while driving, and also learn objects' behavior and how dangerous they are by using past knowledge and feedback from the environment.
For example, when the vehicle discovers a black patch on the road that it has never seen before, it must first recognize that this is an unknown object, then gradually learn to recognize it in the future and assess its danger level. If other cars have driven over it (environmental feedback), the patch is probably not dangerous. In fact, on the road, a car can learn a great deal from the cars around it. This learning process is unsupervised and never stops; over time, the car gets smarter.
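The black-patch scenario can be sketched as a toy open-world recognizer: a model that rejects unfamiliar inputs as "unknown" and then, using environmental feedback, adds them as a new class. The nearest-centroid model, the one-dimensional "features", and the threshold below are all illustrative assumptions, not part of any real driving system.

```python
# Toy open-world recognition for the driving example: reject unfamiliar
# inputs as "unknown", then learn a new class from environmental feedback.
# The nearest-centroid model and threshold are illustrative stand-ins.

def classify(centroids, x, threshold=1.0):
    """Return the nearest class label, or 'unknown' if nothing is close."""
    best, d = None, float("inf")
    for label, c in centroids.items():
        dist = abs(x - c)
        if dist < d:
            best, d = label, dist
    return best if d <= threshold else "unknown"

centroids = {"car": 0.0, "pedestrian": 5.0}
x = 12.0                          # the black patch: far from all known classes
print(classify(centroids, x))     # prints "unknown": a novel object is detected

# Environmental feedback (other cars drove over it safely) lets the system
# add the patch as a new, low-danger class and recognize it next time.
centroids["road-patch"] = x
print(classify(centroids, 11.5))  # prints "road-patch"
```

The key point is the two-step behavior the text describes: first detect that an input belongs to no known class, then incorporate it so future encounters are recognized.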

1.2 Chatbots

Chatbots are becoming more and more popular due to their wide application in performing targeted tasks (customer support, product purchasing, etc.), and people also use conversation to relieve stress. However, current chatbots still have many limitations that restrict their scope of application. A serious weakness is that they cannot learn new knowledge from conversations: their knowledge is provided in advance and cannot be updated during chatting. People, by contrast, learn a great deal through conversation.

2. Definition of lifelong learning

The initial definition of LL is as follows. At any given time, the system has learned to perform N tasks. When faced with task N+1, it uses the knowledge gained from the N past tasks to help learn task N+1. We extend this definition with additional details and features: first, an explicit knowledge base (KB) is added to retain the knowledge learned from previous tasks; second, the ability to discover new learning tasks during model application; third, learning while working (learning on the job) is incorporated.
Definition 1.1. Lifelong learning (LL) is a continuous learning process. At any given time, the learner has performed a sequence of learning tasks T1, T2, …, TN; these N tasks, also called the past tasks, have corresponding datasets D1, D2, …, DN. The tasks may vary and come from different domains. When faced with the new task TN+1 with its data DN+1, the learner can leverage the knowledge in the knowledge base (KB) to help learn TN+1. The objective of LL is usually to optimize the performance of the new task TN+1, but it can also optimize any past task. Ideally, an LL learner can:

  1. Learn and operate in an open environment, where it not only applies learned models to solve problems but also discovers new tasks to learn.
  2. Learn to improve model performance during the application or testing of a learned model.

Note that this definition is neither formal nor complete. Below, we add some comments.

  1. The definition indicates five main characteristics of LL:
  • a continuous learning process;
  • knowledge accumulation and maintenance in the KB;
  • the ability to use accumulated knowledge to help future learning;
  • the ability to discover new tasks;
  • the ability to learn while working, i.e., to learn on the job.
  2. Because knowledge is accumulated and used, we must first consider what knowledge is and what role it plays in LL.
  3. We distinguish two types of tasks:
  • Independent tasks: each task Ti is independent of the others and can be learned on its own, although the tasks may still be similar and share knowledge.
  • Dependent tasks: each task Ti has some dependency on the others.
  4. Tasks are not necessarily from the same domain.
  5. The shift to a new task may occur suddenly or gradually.
  6. LL may require a systematic approach that combines multiple learning algorithms and knowledge representation schemes.

Based on Definition 1.1, we can outline a general LL process and an LL system architecture, which are very different from the current isolated paradigm with only a single task T and dataset D.

The LL system architecture is shown in Figure 1.2. The main components are described below (not all existing systems use all of these components; in fact, most current systems are much simpler).

  1. Knowledge Base (KB): stores previously learned knowledge. It has several sub-components:
    (a) Past Information Store (PIS): stores the information resulting from past learning (models, patterns, or other forms of results).
    (b) Meta-Knowledge Miner (MKM): performs meta-knowledge mining from the PIS and from the meta-knowledge store.
    (c) Meta-Knowledge Store (MKS): stores the knowledge mined or consolidated from the PIS and also from the MKS itself.
    (d) Knowledge Reasoner (KR): makes inferences based on the knowledge in the MKS and PIS to generate more knowledge (an important component).
  2. Knowledge-Based Learner (KBL): the learner must be able to use prior knowledge in learning, i.e., knowledge-based learning that leverages learned knowledge to perform new tasks.
  3. Task-based Knowledge Miner (TKM): a module that mines knowledge from the KB specifically for the new task.
  4. Model: the learned model, which can be a prediction model, a classifier, etc.
  5. Application: the real-world application of the model. Importantly, while applying the model, the system can still learn new knowledge and may discover new tasks to learn. The application can also provide feedback to the KBL to help improve the model.
  6. Task Manager (TM): receives and manages the tasks arriving at the system, handles task shifts, and presents new learning tasks to the KBL.

A typical LL process begins with the Task Manager assigning a new task to the KBL. The KBL then learns the task with the help of the past knowledge stored in the KB, producing an output model for the user and sending the information or knowledge worth retaining to the KB for future use. The system can also discover new tasks and learn on the job; some knowledge gained in the application may likewise be retained to help learn future tasks.
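This flow can be made concrete with a minimal sketch: a Task Manager hands tasks to a Knowledge-Based Learner, which mines task-relevant knowledge from the KB, outputs a model, and retains new knowledge for later tasks. The component names follow the text, but their APIs and the trivial "return everything" miner are invented here for illustration.

```python
# Minimal sketch of the LL process: TM -> KBL <-> KB. The classes mirror
# the components in the text; their methods are illustrative assumptions.

class KnowledgeBase:
    def __init__(self):
        self.store = []                  # simplified Past Information Store

    def retain(self, knowledge):
        self.store.append(knowledge)

    def mine_for(self, task):
        # Task-based Knowledge Miner, trivially returning all past knowledge.
        return list(self.store)

class KnowledgeBasedLearner:
    def __init__(self, kb):
        self.kb = kb

    def learn(self, task):
        hints = self.kb.mine_for(task)               # use past knowledge
        model = f"model({task}, hints={len(hints)})"  # output model
        self.kb.retain(f"knowledge from {task}")      # retain for the future
        return model

class TaskManager:
    def __init__(self, learner):
        self.learner = learner

    def submit(self, task):
        return self.learner.learn(task)

tm = TaskManager(KnowledgeBasedLearner(KnowledgeBase()))
for t in ["spam", "spam-v2", "sentiment"]:
    model = tm.submit(t)
print(model)  # prints "model(sentiment, hints=2)": two past tasks helped
```

Each new task sees more accumulated knowledge than the last, which is exactly the property the isolated paradigm lacks.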

3. Main types of knowledge and challenges

There are mainly two types of knowledge that are used and shared in learning new tasks:

  1. Global knowledge: many current LL methods assume that there is a global latent structure shared by all tasks. This global structure can be learned and leveraged in new tasks. Such knowledge is more suitable for similar tasks in the same domain, because such tasks usually have high correlation or very similar distributions.
  2. Local knowledge: many other methods assume that different new tasks can use different pieces of knowledge learned from different previous tasks. We call these pieces of knowledge local because they are local to their past tasks and are not assumed to form a coherent global structure.
    LL methods based on local knowledge usually focus on optimizing the performance of the current task with the help of past knowledge. They can also improve the performance of any previous task by treating it as a new task. Their advantage is flexibility: any piece of past knowledge useful for the new task can be selected.
    The advantage of global knowledge is that the methods are usually approximately optimal across all tasks, both previous and current. However, there are also many difficulties; two fundamental challenges are:
  • the correctness of the knowledge;
  • the applicability of the knowledge.
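The contrast between the two knowledge types can be illustrated with a toy numeric sketch: global knowledge blends all past tasks into one shared structure (here, a mean parameter vector), while local knowledge selects only the past pieces most relevant to the new task. The parameter vectors and the distance-based selection rule are invented for illustration.

```python
# Toy contrast of global vs. local knowledge (illustrative numbers only).

def global_knowledge(past_params):
    # Global: one shared structure, here the mean of past task parameters.
    n, dims = len(past_params), len(past_params[0])
    return [sum(p[d] for p in past_params) / n for d in range(dims)]

def local_knowledge(past_params, new_task_hint, k=1):
    # Local: pick the k past pieces closest to the new task.
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, new_task_hint))
    return sorted(past_params, key=dist)[:k]

past = [[1.0, 0.0], [0.9, 0.1], [-1.0, 2.0]]   # three past tasks' parameters
print(global_knowledge(past))                  # blends all tasks, even dissimilar ones
print(local_knowledge(past, [1.0, 0.0]))       # keeps only the most similar piece
```

The third past task here is dissimilar to the other two; the global average is pulled toward it, while the local selection simply ignores it. That is the flexibility the text attributes to local knowledge.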

4. Methods of evaluation and the role of big data

Evaluation is usually done through the following steps:

  1. Run on the data from the previous tasks.
  2. Run on the new task's data.
  3. Run baseline algorithms: there are usually two types of baselines. The first type is algorithms that learn in isolation on the new data, without using any past knowledge. The second type is existing LL algorithms.
  4. Analyze the results: compare the results from steps 2 and 3 and analyze them to draw observations (for example, whether the results of step 2 are superior to those of the baselines in step 3).
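The four steps above can be sketched as a tiny protocol. The accuracy functions below are toy stand-ins, not a real benchmark; the point is only the shape of the comparison: accumulate past tasks, run the baseline and the LL learner on the new task, then compare.

```python
# Hedged sketch of the four evaluation steps. The accuracy numbers are
# invented stand-ins; only the evaluation protocol itself is the point.

def isolated_accuracy(new_task_data):
    # Baseline of the first type: learns from the new task's data alone.
    return 0.60

def ll_accuracy(new_task_data, n_past_tasks):
    # LL learner: each past task contributes a small, capped gain.
    return min(0.60 + 0.02 * n_past_tasks, 0.90)

past_tasks = [f"D{i}" for i in range(1, 6)]   # step 1: five past task datasets
new_task = "D6"                               # step 2: the new task's data
base = isolated_accuracy(new_task)            # step 3: isolated baseline
ll = ll_accuracy(new_task, len(past_tasks))   # step 3: the LL algorithm
improvement = ll - base                       # step 4: analyze / compare
print(f"baseline={base:.2f} LL={ll:.2f} gain={improvement:.2f}")
# prints "baseline=0.60 LL=0.70 gain=0.10"
```

In a real evaluation, steps 1 and 2 would involve actual training runs, and step 4 would typically include significance testing rather than a single difference.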

Some additional considerations in the LL evaluation process:

  1. A large number of tasks: a large number of tasks and datasets are needed to evaluate an LL algorithm. This is because knowledge from only a few tasks may not improve the learning of the new task much; each past task may contribute only a small amount of knowledge useful to the new task, and the data for the new task is often quite small.
  2. Task sequence: the order of the tasks can matter, i.e., different task sequences can produce different results, because LL algorithms are usually not guaranteed to be optimal for all task orderings. To account for the sequence effect, one can generate several random task orderings, run the algorithm on each, and aggregate the results for comparison.
  3. Progressive experiments: as more previous tasks are learned, more knowledge accumulates, which should allow the LL algorithm to produce increasingly better results on new tasks.
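The task-sequence consideration can be demonstrated with a toy example: run over every ordering of a few tasks and aggregate. The `score` function below is an invented stand-in in which a task benefits only from related tasks seen before it, so different orderings genuinely score differently.

```python
import itertools
import statistics

# Toy demonstration that task order matters: knowledge flows A -> B and
# B -> C, so a task scores higher if a helpful task arrived before it.
# The scoring rule is an invented stand-in, not a real LL metric.

related = {("A", "B"), ("B", "C")}

def score(order):
    total = 0.0
    for i, t in enumerate(order):
        helped = any((p, t) in related for p in order[:i])
        total += 1.0 if helped else 0.5
    return total / len(order)

orders = list(itertools.permutations("ABC"))
scores = [score(o) for o in orders]
print(min(scores), max(scores))           # different orderings, different results
print(round(statistics.mean(scores), 3))  # aggregate over random/all orderings
```

Here the best ordering (A, B, C) lets every later task benefit from an earlier one, while the worst (C, B, A) lets none; averaging over orderings gives the order-insensitive comparison the text recommends.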

The above covers the basic concepts needed to work with LL.
In the next part we will study “Part 2: Learning models related to lifelong learning”.


Source: Viblo