
Continual learning of context-dependent processing in neural networks

Guanxiong Zeng 1,2,*, Yang Chen 1,*, Bo Cui 1,2 and Shan Yu 1,2,3

* These authors contributed equally to this work. 

1Brainnetome Center and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, 100190 Beijing, China. 

2University of Chinese Academy of Sciences, 100049 Beijing, China. 

3CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, 100190 Beijing, China. 

Abstract:

Deep neural networks are powerful tools for learning sophisticated but fixed mapping rules between inputs and outputs, which limits their application in more complex and dynamic situations where the mapping rules are not fixed but change according to context. To lift this limit, we developed an approach combining a learning algorithm, called orthogonal weights modification (OWM), with a context-dependent processing (CDP) module. We demonstrated that, with OWM to overcome catastrophic forgetting and the CDP module to learn how to reuse a feature representation and a classifier across different contexts, a single network could acquire numerous context-dependent mapping rules in an online and continual manner, with as few as approximately ten samples needed to learn each. Our approach should enable highly compact systems to gradually learn myriad regularities of the real world and eventually behave appropriately within it.
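The key mechanism behind OWM can be summarized concisely: the weight update for a new task is first projected onto the subspace orthogonal to the inputs of all previously learned tasks, so new learning cannot interfere with established input-output mappings. Below is a minimal NumPy sketch of this idea; the class and method names, and the recursive-least-squares-style update of the projector, are our own illustrative reconstruction, not the authors' released code:

```python
import numpy as np

class OWMProjector:
    """Minimal sketch of the orthogonal weights modification (OWM) projector.

    For one layer, P starts at the identity and is shrunk along every input
    direction the layer has already consolidated; multiplying a gradient by
    P on its input side keeps new weight updates (approximately) orthogonal
    to old inputs, which is what prevents catastrophic forgetting.
    """

    def __init__(self, dim, alpha=1e-3):
        self.alpha = alpha            # small regularizer, illustrative value
        self.P = np.eye(dim)          # projector onto the still-unused input subspace

    def update(self, x_mean):
        """Fold one (mean) input vector into the projector, RLS-style."""
        x = np.asarray(x_mean, dtype=float).reshape(-1, 1)
        Px = self.P @ x
        self.P -= (Px @ Px.T) / (self.alpha + float(x.T @ Px))

    def project(self, grad):
        """Project a backprop gradient dL/dW (shape: out_dim x in_dim);
        P acts on the input (column) space of the weights."""
        return grad @ self.P
```

With P initialized to the identity and shrunk by every consolidated input, `grad @ P` leaves gradients untouched along directions the network has never used and suppresses them along directions already committed to earlier tasks.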


Figure 1. Schematic diagram of orthogonal weights modification (OWM) 


Figure 2. Online learning with small sample size achieved by OWM in recognizing Chinese characters 
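To give a concrete, if toy-scale, picture of the online regime the figure refers to, the loop below drives the `OWMProjector` sketch above with a stream of small synthetic batches: each batch contributes one projected gradient step, after which its mean input is folded into the projector. Sizes and data here are synthetic stand-ins, not the Chinese-character task:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_cls, lr = 8, 3, 0.1                 # toy sizes, for illustration only
W = rng.normal(scale=0.1, size=(n_cls, dim))
proj = OWMProjector(dim)                   # projector from the sketch above

for _ in range(20):                        # a stream of small batches
    x = rng.normal(size=(5, dim))          # synthetic inputs
    y = np.eye(n_cls)[rng.integers(n_cls, size=5)]
    grad = (x @ W.T - y).T @ x / len(x)    # squared-error gradient, linear map
    W -= lr * proj.project(grad)           # orthogonalised update
    proj.update(x.mean(axis=0))            # then consolidate this batch's inputs
```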


Figure 3. Achieving context-dependent sequential learning via the OWM algorithm and the CDP module
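The CDP module complements OWM by handling the context dependence itself: an encoder turns the context signal into a multiplicative gate on an otherwise fixed feature vector, so a single shared classifier can realize many context-dependent mappings. A minimal sketch, assuming a sigmoid gate and illustrative weight names (not the authors' implementation):

```python
import numpy as np

def cdp_forward(features, context, W_enc, W_cls):
    """Sketch of a context-dependent processing (CDP) layer: the context
    signal is encoded into a multiplicative gate that modulates a shared
    feature vector before one shared classifier."""
    gate = 1.0 / (1.0 + np.exp(-(W_enc @ context)))  # sigmoid control signal
    return W_cls @ (features * gate)                 # gated features -> output

# Toy usage: the same feature vector, read out differently per context.
rng = np.random.default_rng(1)
feat_dim, ctx_dim, n_out = 16, 4, 3                  # illustrative sizes
W_enc = rng.normal(size=(feat_dim, ctx_dim))
W_cls = rng.normal(size=(n_out, feat_dim))
features = rng.normal(size=feat_dim)                 # fixed representation
for context in np.eye(ctx_dim):                      # one-hot context signals
    print(cdp_forward(features, context, W_enc, W_cls))
```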

Article: Continual learning of context-dependent processing in neural networks (Nature Machine Intelligence, 2019)
