VETE: improving visual embeddings through text descriptions for eCommerce search engines
Abstract
A search engine is a critical component in the success of eCommerce. Searching for a particular product can be frustrating when users want specific product features that cannot be easily expressed through a simple text search or catalog filter. Thanks to advances in artificial intelligence and deep learning, content-based visual search engines are now being included in eCommerce search bars. A visual search is instantaneous: just take a picture and search. It also fully captures image details. However, visual search in eCommerce still suffers from a large semantic gap. Traditionally, visual search models are trained in a supervised manner with large collections of images that do not represent well the semantics of a target eCommerce catalog. Therefore, we propose VETE (Visual Embedding modulated by TExt), which boosts visual embeddings in eCommerce by leveraging textual information of products in the target catalog, and we evaluate it with real eCommerce data. Our proposal improves the baseline visual space for global and fine-grained categories in real-world eCommerce data. We achieved an average improvement of 3.48% for catalog-like queries and 3.70% for noisy ones.
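The abstract does not detail the fusion architecture, but the core idea of modulating a visual embedding with text from the product catalog can be illustrated with a minimal sketch. The module below is an illustrative assumption, not the paper's published model: a text vector (e.g., averaged word embeddings of a product description) acts as a gate over the projected image features, and all layer names and dimensions are hypothetical.

```python
# Minimal sketch (assumed fusion scheme, not the paper's architecture):
# a text embedding gates the visual embedding so that the resulting space
# better reflects catalog semantics.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextModulatedVisualEmbedding(nn.Module):
    def __init__(self, visual_dim=512, text_dim=300, embed_dim=256):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, embed_dim)  # project image features
        self.text_gate = nn.Linear(text_dim, embed_dim)      # text-driven gate

    def forward(self, visual_feat, text_feat):
        v = self.visual_proj(visual_feat)
        # The text acts as a multiplicative gate that re-weights visual
        # dimensions (illustrative choice of modulation).
        g = torch.sigmoid(self.text_gate(text_feat))
        return F.normalize(v * g, dim=-1)  # unit-norm embedding for cosine retrieval

# Toy usage: random tensors stand in for CNN image features and for
# averaged word vectors of a product description.
model = TextModulatedVisualEmbedding()
image_features = torch.randn(4, 512)
text_features = torch.randn(4, 300)
embeddings = model(image_features, text_features)
print(embeddings.shape)  # torch.Size([4, 256])
```

At retrieval time, only the resulting image-side embeddings would be indexed, so query images can still be searched without accompanying text.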
More information
Title according to WOS: VETE: improving visual embeddings through text descriptions for eCommerce search engines
Title according to SCOPUS: SCOPUS_ID: 85151253168
Journal: MULTIMEDIA TOOLS AND APPLICATIONS
Publisher: Springer
Publication date: 2023
DOI: 10.1007/s11042-023-14595-8
Notes: ISI, SCOPUS