Remembering images is an innate human capability. Camera images are captured by different people under widely varying environmental conditions, which leads to highly diverse image memorability scores. However, the factors that make an image more or less memorable remain unclear, and it is still unknown how such factors can be used to predict image memorability more accurately. In this work, we propose a
novel framework called Multi-view Transfer Learning from External Sources (MTLES) to predict image memorability. In this framework, we simultaneously leverage different types of visual feature sets and multiple types of predefined image attributes derived from external sources. In particular, to enhance the representational ability of the visual features, we construct connections between the visual feature sets and higher-level image attributes by transferring attribute knowledge from external sources. MTLES integrates weak learning through external sources, transfer learning, and a multi-view consistency loss over the different types of feature sets into a joint framework. To solve the resulting joint optimization problem, we further develop an alternating iterative algorithm. Experiments performed on the publicly available LaMem dataset demonstrate the effectiveness of the proposed scheme.
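For intuition only, the sketch below shows one way an alternating iterative scheme with a multi-view consistency term could be implemented: each view's regressor is updated with the consensus prediction fixed, then the consensus is refit from the per-view predictions. The objective, the function name `fit_mtles_sketch`, and all parameters are illustrative assumptions and not the authors' actual MTLES formulation.

```python
import numpy as np

def fit_mtles_sketch(views, y, lam=1.0, gamma=0.1, n_iters=20):
    """Hypothetical alternating minimization of
        sum_v ||X_v w_v - y||^2 + lam * ||X_v w_v - z||^2 + gamma * ||w_v||^2
    where w_v are per-view weights and z is a consensus prediction.

    views : list of (n_samples, d_v) feature matrices, one per view
    y     : (n_samples,) memorability scores
    """
    ws = [np.zeros(X.shape[1]) for X in views]
    z = y.copy()  # consensus prediction, initialized at the targets
    for _ in range(n_iters):
        # Step 1: fix z, solve a ridge-style problem for each view's weights.
        for v, X in enumerate(views):
            A = (1.0 + lam) * (X.T @ X) + gamma * np.eye(X.shape[1])
            b = X.T @ (y + lam * z)
            ws[v] = np.linalg.solve(A, b)
        # Step 2: fix the weights; the consistency term is minimized by the
        # mean of the per-view predictions.
        z = np.mean([X @ w for X, w in zip(views, ws)], axis=0)
    return ws, z

# Toy usage: two synthetic "views" of 100 images with random memorability scores.
rng = np.random.default_rng(0)
views = [rng.normal(size=(100, 16)), rng.normal(size=(100, 32))]
y = rng.uniform(size=100)
ws, consensus = fit_mtles_sketch(views, y)
```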