Item – Theses Canada
OCLC number
1369199489
Link(s) to full text
LAC copy
Author
Ngo, Quan.
Title
Sequential consolidation using multiple task learning, sweep rehearsal and CVAE generated pseudo examples.
Degree
Master of Science -- Acadia University, 2022
Publisher
[Wolfville, Nova Scotia] : Acadia University, 2022
Description
1 online resource
Abstract
A key component for a Lifelong Learning Agent is the integration or consolidation of new task knowledge with prior task knowledge. Consolidation requires a solution to several problems, most notably the catastrophic forgetting problem, where the development of a representation for a new task reduces the accuracy of prior tasks. This research extends our prior work on consolidation using multiple task learning (MTL) networks and a task rehearsal, or replay, approach. The goal is to maintain functional stability of the MTL network models for prior tasks, while providing representational plasticity to integrate new task knowledge into the same network. Our approach uses (1) a conditional variational autoencoder (CVAE) to generate accurate pseudo-examples (PEs) of prior tasks, (2) sweep rehearsal, requiring only a small number of PEs for each training iteration, (3) appropriate weighting of PEs to ensure consolidation of new task knowledge with prior tasks, and (4) a novel network architecture we call MTL with Context inputs (MTLc), which combines the best of standard MTL and context-sensitive MTL (csMTL) architectures. Sequential learning of twenty classification tasks using a combination of the MNIST and Fashion-MNIST datasets shows that our CVAE-based approach to generating accurate PEs is promising and that MTLc performs better than either MTL or csMTL, with minimal loss of task accuracy over the sequence of tasks.
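The rehearsal scheme in the abstract combines points (2) and (3): each training iteration mixes a few CVAE-generated pseudo-examples per prior task into the new-task batch, with a smaller rehearsal weight on the PEs. The following is a minimal sketch of that batch composition only, not the thesis's actual implementation; the function name, the `pe_weight` parameter, and the data layout are all illustrative assumptions, and the CVAE sampling step is abstracted away as a pre-built pool of pseudo-examples.

```python
import random

def make_sweep_batch(new_examples, pseudo_examples_by_task,
                     pes_per_task=2, pe_weight=0.5, rng=None):
    """Compose one training batch for sweep-style rehearsal (illustrative).

    new_examples: list of (x, y) pairs for the current task.
    pseudo_examples_by_task: dict mapping a prior-task id to a pool of
        (x, y) pseudo-examples (e.g. generated by a CVAE).
    Each batch item is (task_id, x, y, weight): new-task examples carry
    weight 1.0, while pseudo-examples carry a smaller rehearsal weight so
    prior-task knowledge is retained without dominating the update.
    """
    rng = rng or random.Random(0)
    batch = [("new", x, y, 1.0) for x, y in new_examples]
    for task_id, pool in pseudo_examples_by_task.items():
        # Sweep rehearsal: draw only a few PEs per prior task per iteration.
        for x, y in rng.sample(pool, min(pes_per_task, len(pool))):
            batch.append((task_id, x, y, pe_weight))
    rng.shuffle(batch)
    return batch
```

Because each iteration draws a fresh small sample from every prior task's pool, the pseudo-example pools are gradually "swept" over the course of training while keeping any single batch small.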
Other link(s)
scholar.acadiau.ca
Subject
LE3 .A278 2022
Date modified:
2022-09-01