Continual Learning


Autonomous robots should be able to learn multiple tasks incrementally over their lifetime. However, (deep) neural networks suffer from a serious problem known as "catastrophic forgetting": previously learned knowledge is easily overwritten when new content is learned, so they cannot be expected to achieve "continual learning" as-is.

In this study, to mitigate catastrophic forgetting, we have developed two techniques (illustrative sketches follow the list):

  • Pseudo-modularization of tasks using a fractal network structure
  • Regularization that preserves important network parameters and re-initializes unnecessary ones
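
The section does not detail the fractal structure, so the following is a minimal sketch, not the authors' implementation: a recursively built fractal block in the style of FractalNet (Larsson et al., 2017), where routing each task through a different subset of branches would approximately isolate task-specific sub-networks (the "pseudo-modularization" idea). The class and parameter names are hypothetical.

```python
# A minimal sketch, NOT the authors' method: a FractalNet-style block whose
# recursive branches can serve as pseudo-modules for different tasks.
import torch
import torch.nn as nn


class FractalBlock(nn.Module):
    """Depth-k fractal: f_1 = layer; f_k(x) = mean(layer(x), f_{k-1}(f_{k-1}(x)))."""

    def __init__(self, dim: int, depth: int):
        super().__init__()
        self.shallow = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        if depth > 1:
            self.deep1 = FractalBlock(dim, depth - 1)
            self.deep2 = FractalBlock(dim, depth - 1)
        else:
            self.deep1 = self.deep2 = None

    def forward(self, x: torch.Tensor, drop_deep: bool = False) -> torch.Tensor:
        short = self.shallow(x)
        if self.deep1 is None or drop_deep:
            return short
        # Join the shallow and deep branches by averaging. Dropping one
        # branch per task (cf. FractalNet's drop-path) would give each task
        # an approximately private sub-path through the network.
        return 0.5 * (short + self.deep2(self.deep1(x)))


block = FractalBlock(dim=64, depth=3)
y = block(torch.randn(8, 64))  # (8, 64)
```

The second bullet is likewise unspecified; the sketch below assumes an EWC-style quadratic penalty (Kirkpatrick et al., 2017) with a per-parameter importance estimate such as the diagonal Fisher information, plus a re-initialization step for low-importance weights to free capacity for future tasks. The importance measure and threshold are assumptions, not the authors' stated choices.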
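```python
# A minimal sketch, assuming an EWC-style importance measure; the authors'
# exact regularizer is not given, so treat this as illustrative only.
import torch
import torch.nn as nn


def consolidation_loss(model: nn.Module,
                       old_params: dict,
                       importance: dict,
                       lam: float = 1.0) -> torch.Tensor:
    """Quadratic penalty anchoring important parameters near their old values."""
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (importance[name] * (p - old_params[name]) ** 2).sum()
    return lam * loss


@torch.no_grad()
def reinit_unimportant(model: nn.Module, importance: dict, thresh: float = 1e-4):
    """Re-initialize parameters whose importance falls below `thresh`,
    freeing capacity for new tasks (the second bullet above)."""
    for name, p in model.named_parameters():
        mask = importance[name] < thresh
        p[mask] = torch.randn_like(p)[mask] * 0.01


# Typical usage after finishing a task: snapshot the parameters, estimate
# importance (e.g. from squared gradients on the old task's data), then add
# consolidation_loss to the new task's objective and re-init the rest.
# old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
```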

By applying these techniques, we are also working on hierarchical learning of quadrupedal walking control and imitation learning of hand-written characters.