CuratorNet: Visually-aware recommendation of art images
Keywords: Neural networks; Recommender systems; Visual art
Abstract
Although there are several visually-aware recommendation models in domains like fashion or even movies, the art domain lacks the same level of research attention, despite the recent growth of the online artwork market. To reduce this gap, in this article we introduce CuratorNet, a neural network architecture for visually-aware recommendation of art images. CuratorNet is designed from the ground up to maximize generalization: the network has a fixed set of parameters that need to be trained only once, and thereafter the model can generalize to new users or items never seen before, without further training. This is achieved by leveraging visual content: items are mapped to item vectors through visual embeddings, and users are mapped to user vectors by aggregating the visual content of the items they have consumed. Besides the model architecture, we also introduce novel triplet sampling strategies to build a training set for rank learning in the art domain, resulting in more effective learning than naive random sampling. In an evaluation over a real-world dataset of physical paintings, we show that CuratorNet achieves the best performance among several baselines, including the state-of-the-art model VBPR. CuratorNet is motivated and evaluated in the art domain, but its architecture and training scheme could be adapted to recommend images in other areas.
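The core ideas in the abstract can be sketched in a few lines: build user vectors by aggregating the visual embeddings of consumed items (so new users require no retraining), score user-item pairs, and train on (user, positive, negative) triplets with a BPR-style ranking loss. The sketch below is a minimal illustration of that scheme, not the actual CuratorNet implementation; the embedding dimensions, the mean-pooling aggregation, and the plain dot-product scorer are assumptions standing in for the learned network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical visual embeddings for 5 artworks, e.g. extracted by a
# pretrained CNN (dimension 8 is arbitrary, chosen for illustration).
item_embeddings = rng.normal(size=(5, 8))

def user_vector(consumed_item_ids, embeddings):
    # Aggregate (mean-pool) the visual embeddings of the items a user has
    # consumed; new users are representable without any further training.
    return embeddings[consumed_item_ids].mean(axis=0)

def score(user_vec, item_vec):
    # Dot-product preference score (a stand-in for the learned scorer).
    return float(user_vec @ item_vec)

def bpr_triplet_loss(user_vec, pos_vec, neg_vec):
    # BPR-style loss on a (user, positive, negative) triplet: minimize
    # -log(sigmoid(score_pos - score_neg)), pushing the positive item's
    # score above the negative item's.
    diff = score(user_vec, pos_vec) - score(user_vec, neg_vec)
    return float(-np.log(1.0 / (1.0 + np.exp(-diff))))

u = user_vector([0, 1], item_embeddings)          # user who consumed items 0 and 1
loss = bpr_triplet_loss(u, item_embeddings[2], item_embeddings[3])
```

The triplet sampling strategies described in the paper would decide which (positive, negative) pairs feed this loss; random sampling is the naive baseline they improve upon.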
More information
| Title according to SCOPUS: | CuratorNet: Visually-aware recommendation of art images |
| Journal title: | CEUR Workshop Proceedings |
| Volume: | 2697 |
| Publisher: | CEUR-WS |
| Publication date: | 2020 |
| Language: | English |
| Notes: | SCOPUS |