Towards an Efficient Segmentation Algorithm for Near-Infrared Eyes Images
Abstract
Semantic segmentation has been widely used for several applications, including the detection of eye structures. This is used in tasks such as eye-tracking and gaze estimation, which are useful techniques for human-computer interfaces, saliency detection, and virtual reality (VR), amongst others. Most state-of-the-art techniques achieve high accuracy but with a considerable number of parameters. This article explores alternatives to improve the efficiency of the state-of-the-art method, namely DenseNet Tiramisu, when applied to NIR image segmentation. This task is not trivial; reducing the number of blocks and layers also affects the number of feature maps. The growth rate (k) of the feature maps regulates how much new information each layer contributes to the global state; therefore the trade-off amongst growth rate (k), IOU, and the number of layers needs to be carefully studied. The main goal is to achieve a lightweight and efficient network with fewer parameters than traditional architectures so that it can be used in mobile device applications. As a result, a DenseNet with only three blocks and ten layers is proposed (DenseNet10). Experiments show that this network achieves higher IOU rates than Encoder-Decoder, DenseNet56-67-103, Mask R-CNN, and DeepLabV3+ models on the Facebook database. Furthermore, this method reached 8th place in the Facebook semantic segmentation challenge with a mean IOU of 0.94293 and 202,084 parameters, for a final score of 0.97147. This score is only 0.001 lower than the first place in the competition. The sclera was identified as the most challenging structure to segment.
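To make the two quantities the abstract reasons about more concrete, the sketch below illustrates a DenseNet-style block whose growth rate k fixes how many new feature maps each layer adds, together with a mean IOU computation. This is a minimal illustration, not the authors' released DenseNet10 code; the channel sizes, layer counts, and class labels used here are assumptions.

```python
# Minimal sketch (PyTorch) of dense connectivity with growth rate k and of
# the mean IOU metric. Hyperparameters below are illustrative assumptions,
# not the published DenseNet10 configuration.
import torch
import torch.nn as nn


class DenseLayer(nn.Module):
    """BN -> ReLU -> 3x3 conv producing exactly `growth_rate` new feature maps."""

    def __init__(self, in_channels: int, growth_rate: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(torch.relu(self.bn(x)))


class DenseBlock(nn.Module):
    """Each layer sees the concatenation of all previous outputs, so the block
    output has in_channels + num_layers * growth_rate channels. This is the
    trade-off between k and depth the abstract refers to."""

    def __init__(self, in_channels: int, num_layers: int, growth_rate: int):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(DenseLayer(channels, growth_rate))
            channels += growth_rate

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # dense connectivity
        return x


def mean_iou(pred: torch.Tensor, target: torch.Tensor, num_classes: int) -> float:
    """Mean intersection-over-union over classes for integer label maps
    (e.g. background, sclera, iris, pupil)."""
    ious = []
    for c in range(num_classes):
        inter = ((pred == c) & (target == c)).sum().item()
        union = ((pred == c) | (target == c)).sum().item()
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)


# Example: a block with 3 layers and k = 4 turns 16 input channels
# into 16 + 3 * 4 = 28 output channels.
block = DenseBlock(in_channels=16, num_layers=3, growth_rate=4)
```

Keeping both the number of layers per block and k small is what keeps the parameter count in the low hundreds of thousands, which is the regime the abstract reports for DenseNet10.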
More information
Title according to WOS: | Towards an Efficient Segmentation Algorithm for Near-Infrared Eyes Images |
Title according to SCOPUS: | ID SCOPUS_ID:85102826657 Not found in local SCOPUS DB |
Journal title: | IEEE Access |
Volume: | 8 |
Publication date: | 2020 |
Start page: | 171598 |
End page: | 171607 |
DOI: | 10.1109/ACCESS.2020.3025195 |
Notes: | ISI, SCOPUS |