Chapter 2. Machine Learning Paradigms Related to Lifelong Machine Learning

Tram Ho

As described in Chapter 1: Introduction to Lifelong Machine Learning, lifelong learning (LL) has three main characteristics: it learns continuously, it accumulates knowledge, and it uses the knowledge learned in the past to help with new tasks. In addition, it can discover new tasks and learn them incrementally, adding to its knowledge to improve the model. Several machine learning (ML) paradigms share related characteristics. This chapter discusses the ML paradigms most relevant to lifelong learning: transfer learning, multi-task learning (MTL), online learning, reinforcement learning, and meta-learning. The first two are the most closely related to LL because both involve transferring knowledge across domains or tasks, but they neither learn continually nor accumulate learned knowledge. Online learning and reinforcement learning involve continuous learning, but they focus on a single learning task (domain) over time. Meta-learning also deals with many tasks, but it focuses primarily on one-shot or few-shot learning.

1. Transfer learning

Transfer learning is quite common in machine learning and data mining. In natural language processing it is also known as domain adaptation. Transfer learning usually involves two domains: a source domain and a target domain. There may be more than one source domain, but most existing research uses only one.

  • The source domain usually has a large amount of labeled training data.
  • The target domain has little or no labeled training data.

The goal of transfer learning is to use the labeled data in the source domain to help learning in the target domain. Many types of knowledge can be transferred from the source to the target to aid learning in the target domain.
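As a minimal sketch of this idea (not an algorithm from the chapter), the code below trains a linear classifier on plentiful labeled source-domain data and then continues updating it with the few labeled target-domain examples; the synthetic data, the choice of scikit-learn's SGDClassifier, and the amount of domain shift are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Assumed synthetic data: a large labeled source domain and a tiny labeled target domain.
X_source = rng.normal(size=(1000, 20))
y_source = (X_source[:, 0] + X_source[:, 1] > 0).astype(int)
X_target = rng.normal(loc=0.5, size=(20, 20))   # small labeled set from a slightly shifted domain
y_target = (X_target[:, 0] + X_target[:, 1] > 0.5).astype(int)

clf = SGDClassifier(random_state=0)

# Step 1: learn from the source domain, where labeled data is plentiful.
clf.partial_fit(X_source, y_source, classes=np.array([0, 1]))

# Step 2: transfer by continuing to update the same model on the few target-domain labels.
for _ in range(10):
    clf.partial_fit(X_target, y_target)
```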

1.1 Structural Correspondence Learning (SCL)

SCL is one of the most common transfer learning techniques used in text classification. The algorithm works as follows:

  • Input: labeled data from the source domain, and unlabeled data from both the source and target domains.
  • Output: a set of pivot features that have similar characteristics and behave similarly in both domains.

How it works:

  1. Select a set of pivot features that occur frequently in both domains (they should also be good predictors of the labels).
  2. SCL then computes the correlations of the non-pivot features with the pivot features in both domains, producing a correlation matrix W in which row i is the vector of correlation values of the non-pivot features with pivot feature i. A positive value indicates that a non-pivot feature is positively correlated with pivot feature i in the source or the target domain, which establishes a feature correspondence between the two domains (a rough sketch follows this list).
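A rough sketch of this pivot-prediction step is shown below. It assumes a nonnegative feature matrix X_unlabeled pooled from the unlabeled data of both domains and a list of pivot column indices; using linear classifiers to predict each pivot and compressing W with an SVD follows the usual SCL recipe, but the details (predictor choice, dimension k) are illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def scl_projection(X_unlabeled, pivot_idx, k=25):
    """For each pivot feature, learn a linear predictor from the non-pivot features,
    stack the weight vectors into W (one row per pivot), and compress W with an SVD
    to obtain a projection into a feature space shared by both domains."""
    n_features = X_unlabeled.shape[1]
    nonpivot_idx = np.setdiff1d(np.arange(n_features), pivot_idx)
    W = np.zeros((len(pivot_idx), len(nonpivot_idx)))

    for i, p in enumerate(pivot_idx):
        # Binary target: does pivot feature p occur in this unlabeled example?
        # (Assumes each pivot both occurs and is absent somewhere in the pooled data.)
        y = (X_unlabeled[:, p] > 0).astype(int)
        clf = SGDClassifier(random_state=0)
        clf.fit(X_unlabeled[:, nonpivot_idx], y)
        W[i, :] = clf.coef_.ravel()   # correlations of non-pivot features with pivot i

    # Top-k right singular vectors give a low-dimensional shared representation.
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    return nonpivot_idx, Vt[:k]

def augment(X, nonpivot_idx, theta):
    """Append the shared-space representation to the original feature vector."""
    return np.hstack([X, X[:, nonpivot_idx] @ theta.T])
```

A classifier trained on the source domain with these augmented features can then be applied in the target domain, since the appended dimensions behave similarly in both domains.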

1.2 Differences from lifelong learning

Because transfer learning research is very broad, the differences described here may not apply to every transfer learning method.

  1. No continuous learning or knowledge accumulation.
    Transfer of information or knowledge from the source domain to the target domain usually happens only once; the knowledge is not retained for future tasks. In LL, by contrast, continuous learning and the retention and accumulation of knowledge are essential.
  2. Transfer learning is unidirectional.
    It only transfers knowledge from the source domain to the target domain. In LL, what is learned from new domains or tasks can also be used to improve learning in previous domains or tasks if needed.
  3. Number of domains.
    Transfer learning typically involves only two domains, one source and one target (although there are cases with multiple source domains). It assumes that the source domain is very similar to the target domain, and these two similar domains are usually chosen by the user. LL, on the other hand, considers a large (possibly unbounded) number of domains; when facing a new problem, the learner must decide which past knowledge is relevant to the new task.
  4. Identifying new learning tasks.
    Unlike LL, transfer learning cannot discover new tasks during learning.

2. Multi-task learning

Multi-task learning (MTL) learns multiple tasks at once, aiming for better performance by exploiting related information shared across the tasks. It also helps prevent overfitting on individual tasks and therefore generalizes better. The setting discussed here is also known as batch multi-task learning.
Definition: Multi-task learning (MTL) learns multiple tasks T = {1, 2, ..., N} simultaneously. Each task t in T has its own training data D_t. The goal is to maximize performance across all tasks. Most current research considers only supervised MTL.
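One common (assumed) way to realize this definition is hard parameter sharing: a shared encoder with one output head per task, trained by summing the losses on a mini-batch drawn from each D_t. The sketch below uses PyTorch; the layer sizes and random batches are placeholders.

```python
import torch
import torch.nn as nn

class SharedBottomMTL(nn.Module):
    """Hard parameter sharing: one shared encoder plus one classification head per task."""
    def __init__(self, in_dim, hidden_dim, n_tasks, n_classes):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, n_classes) for _ in range(n_tasks))

    def forward(self, x, task_id):
        return self.heads[task_id](self.shared(x))

model = SharedBottomMTL(in_dim=10, hidden_dim=32, n_tasks=3, n_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for one mini-batch from each task's training data D_t.
batches = [(torch.randn(16, 10), torch.randint(0, 2, (16,))) for _ in range(3)]

# One optimization step over the summed per-task losses (all tasks learned at once).
optimizer.zero_grad()
total_loss = sum(loss_fn(model(x, t), y) for t, (x, y) in enumerate(batches))
total_loss.backward()
optimizer.step()
```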
Comparison between multi-task learning and lifelong learning

  • Similarities:
    Both aim to use information shared among tasks to help learning.
  • Differences:
    MTL still follows the traditional paradigm, except that instead of optimizing a single task it optimizes several tasks at once. It does not accumulate knowledge over time and has no concept of continuous learning, which are defining characteristics of LL. If knowledge is retained so that an ever-growing number of tasks can be learned with the help of knowledge gained from earlier tasks, we get online or incremental MTL, which can be regarded as a form of LL.

3. Online learning

Online learning is a learning paradigm in which training data points arrive in sequential order. When a new data point arrives, the existing model is quickly updated to produce the best model so far. It is typically used when it is computationally infeasible to train over the entire data set, or when the application cannot wait until a large amount of training data has been collected. This contrasts with classical batch learning, where all training data is available from the start.
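As a minimal illustration, the sketch below runs a perceptron over a synthetic stream, updating the weight vector immediately whenever a newly arrived example is misclassified; the stream, the hidden target concept, and the learning rate are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(5)                       # the current model, updated as data arrives
learning_rate = 0.1

def stream():
    """Synthetic endless stream of (x, y) pairs arriving one at a time."""
    target = np.array([1.0, -1.0, 0.5, 0.0, 0.0])   # hidden concept, unknown to the learner
    while True:
        x = rng.normal(size=5)
        yield x, 1 if x @ target > 0 else -1

for _, (x, y) in zip(range(1000), stream()):
    if y * (w @ x) <= 0:              # mistake on the new point: update the model right away
        w += learning_rate * y * x
# w always reflects everything seen so far; no pass over a stored data set is needed.
```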

  • Differences between online learning and lifelong learning
    Although online learning also deals with data arriving in a stream or in sequential order, its goal is very different from that of LL. Online learning still performs the same learning task over time; its goal is to learn more efficiently as data arrives incrementally. LL, in contrast, aims to learn a sequence of different tasks, retain the knowledge learned so far, and use that knowledge to help learn future tasks.

4. Reinforcement Learning

In reinforcement learning, an agent learns through trial-and-error interaction with a dynamic environment. At each interaction step, the agent receives the current state of the environment and chooses an action from a set of possible actions. The action changes the state of the environment, and the agent receives a value for this state transition, which may be a reward or a penalty. The process repeats until the agent learns a trajectory of actions that optimizes its objective. The goal of reinforcement learning is to learn an optimal policy mapping states to actions that maximizes the expected long-term cumulative reward.
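The sketch below makes this loop concrete with tabular Q-learning on an assumed five-state chain environment where only the last state yields a reward; the environment, the hyperparameters, and the epsilon-greedy exploration rule are illustrative choices rather than a method from the chapter.

```python
import numpy as np

n_states, n_actions = 5, 2            # toy chain: states 0..4, actions 0 (left) / 1 (right)
Q = np.zeros((n_states, n_actions))   # estimated long-term value of each state-action pair
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def env_step(state, action):
    """The action changes the environment's state; reaching the last state gives reward 1."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for _ in range(500):                  # repeated trial-and-error episodes
    state, done = 0, False
    while not done:
        # Mostly exploit the current estimates, occasionally explore a random action.
        action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
        next_state, reward, done = env_step(state, action)
        # Move the value of this state transition toward reward + discounted future value.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

policy = Q.argmax(axis=1)             # the learned mapping from states to actions
```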

  • Differences between reinforcement learning and lifelong learning
    Reinforcement learning learns by trial and error through interaction with the environment, but the feedback or reward given to the agent is limited to a single task and a single environment. There is no concept of accumulating knowledge to help future learning.

5. Meta-learning

Meta-learning aims to learn a new task with only a few training examples, using a model trained on many very similar tasks. It is typically used to solve one-shot or few-shot learning problems. A meta-learning system usually has two learning components: a base learner (or fast learner) and a meta learner (or slow learner).

  • The base learner is trained within a task and performs quick updates.
  • The meta learner operates in a meta space across tasks, with the goal of transferring knowledge between them. The model learned by the meta learner enables the base learner to be effective with only a very small set of training examples.

With this two-level architecture, meta-learning is often described as “learning to learn”. Essentially, meta-learning treats the N learning tasks as learning examples.
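The sketch below illustrates the two-level loop with Reptile, one simple meta-learning algorithm chosen here only for brevity (the chapter does not prescribe a specific method), on an assumed family of sine-regression tasks; the network, step sizes, and task distribution are illustrative.

```python
import copy
import random
import torch
import torch.nn as nn

def sample_task():
    """An assumed task family: regress y = A * sin(x + b) with random amplitude and phase."""
    amp, phase = random.uniform(0.1, 5.0), random.uniform(0.0, 3.14)
    def batch(k=10):
        x = torch.rand(k, 1) * 10 - 5
        return x, amp * torch.sin(x + phase)
    return batch

meta_model = nn.Sequential(nn.Linear(1, 40), nn.Tanh(), nn.Linear(40, 1))
inner_lr, meta_lr, inner_steps = 0.02, 0.1, 5
loss_fn = nn.MSELoss()

for _ in range(1000):
    task = sample_task()
    # Base (fast) learner: a few quick updates on one task, starting from the meta-parameters.
    learner = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        x, y = task()
        opt.zero_grad()
        loss_fn(learner(x), y).backward()
        opt.step()
    # Meta (slow) learner: move the shared initialization toward the adapted weights,
    # so that a few examples from a new task are enough to adapt quickly.
    with torch.no_grad():
        for p_meta, p_task in zip(meta_model.parameters(), learner.parameters()):
            p_meta += meta_lr * (p_task - p_meta)
```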

  • Differences between meta-learning and lifelong learning
    Meta-learning trains a meta model from a large number of tasks so that it can quickly adapt to a new task with only a few examples. A key assumption made by most meta-learning techniques is that the training tasks and the new test tasks come from the same distribution, which is a major weakness and limits the scope of application of meta-learning, because in most real-life situations we expect many new tasks to differ fundamentally from the old ones. When evaluating meta-learning algorithms, past tasks are often constructed to have the same distribution as the new test tasks. LL generally does not make this assumption: an LL system must choose the appropriate past knowledge to apply to a new problem, and if none is applicable, no prior knowledge is used. Meta-learning is related to LL in that it uses N tasks to help learn a new task.

Summary

The main characteristics of LL are continuous learning, accumulating knowledge in a knowledge base (KB), and using past knowledge to help future learning, with more advanced capabilities that include learning while working and discovering new problems while performing a task. The related ML paradigms do not have all of these features. In short, LL essentially tries to imitate the human learning process in order to overcome the limitations of the current isolated learning paradigm. In the following chapters, we look at existing LL research directions, algorithms, and representative techniques.


Source: Viblo