Online Transfer Learning: Negative Transfer and Effect of Prior Knowledge
Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu, University of Melbourne, Australia
D4-S1-T3: Online & Meta-Learning
Thursday, 15 July, 22:00 - 22:20
Transfer learning is a machine learning paradigm in which knowledge from one task is used to solve a problem in a related task. On the one hand, it is conceivable that knowledge from one task could be useful for solving a related problem. On the other hand, it is also recognized that, if not executed properly, transfer learning algorithms can in fact impair learning performance instead of improving it - a phenomenon commonly known as "negative transfer". In this paper, we study the online transfer learning problem, where the source samples are given offline while the target samples arrive sequentially. We define the expected regret of the online transfer learning problem and provide upper bounds on the regret using information-theoretic quantities. We also obtain exact expressions for the bounds when the sample size becomes large. Examples show that the derived bounds are accurate even for small sample sizes. Furthermore, the obtained bounds give valuable insight into the effect of prior knowledge on transfer learning in our formulation. In particular, we formally characterize the conditions under which negative transfer occurs.
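The setting described above — an offline source dataset combined with sequentially arriving target samples, evaluated by cumulative regret — can be illustrated with a toy numerical sketch. The sketch below is not the paper's formulation: the mean-estimation task, the decaying convex-combination rule, and all parameter values are illustrative assumptions chosen only to show how a mismatched source task can inflate the regret (negative transfer).

```python
import random

# Toy setting (illustrative, not the paper's model): estimate the mean of a
# Gaussian target task online, helped by an offline source dataset whose
# mean may differ from the target's (task shift).

def cumulative_regret(source_mean, target_mean,
                      n_source=100, n_target=200, alpha=5.0):
    """Online mean estimation that mixes a fixed offline source estimate
    with a running target estimate. Returns the cumulative excess squared
    loss relative to an oracle that always predicts target_mean - a simple
    proxy for the expected regret in the abstract."""
    random.seed(0)  # same noise realization for every run, for comparison
    source = [random.gauss(source_mean, 1.0) for _ in range(n_source)]
    theta_src = sum(source) / n_source  # offline source estimate

    regret, tgt_sum = 0.0, 0.0
    for t in range(1, n_target + 1):
        theta_tgt = tgt_sum / (t - 1) if t > 1 else 0.0
        w = alpha / (alpha + t)  # weight on source knowledge decays with t
        pred = w * theta_src + (1 - w) * theta_tgt
        y = random.gauss(target_mean, 1.0)  # next target sample arrives
        regret += (pred - y) ** 2 - (target_mean - y) ** 2
        tgt_sum += y
    return regret

# Closely related source task: transfer tends to help (small regret).
r_close = cumulative_regret(source_mean=0.1, target_mean=0.0)
# Badly mismatched source task: negative transfer (much larger regret).
r_far = cumulative_regret(source_mean=5.0, target_mean=0.0)
```

In this sketch the gap between `r_far` and `r_close` is driven entirely by the source-target mismatch, since both runs see the same noise; this is the intuition behind characterizing when negative transfer occurs, though the paper does so via information-theoretic bounds rather than a fixed mixing rule.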