Evaluating Anytime Performance on NAS-Bench-101

Vieira, Carlos; Perez Caceres, Leslie; Bezerra, Leonardo C. T.

Abstract

Neural architecture search (NAS) is a field where computational effort poses a significant challenge, requiring large computing clusters and specialized hardware. Furthermore, the lack of common experimental guidelines often compromises NAS comparisons or induces premature conclusions. In recent work, NAS-Bench-101 was proposed to help mitigate these factors, providing both a common benchmark and experimental guidelines for its use. In this work, we discuss the design choices in NAS-Bench-101 and propose improvements that increase the potential of the benchmark. First, we bridge NAS and the research on anytime performance, showing how a bi-objective formulation of NAS can improve the insights provided by NAS-Bench-101. Then, we discuss choices made in the design of the benchmark, namely (i) the fixed-size encoding; (ii) the effects of the limited variability available; (iii) the assessment of algorithms only from a TPU time perspective; and (iv) the number of repetitions proposed. We demonstrate our contributions by assessing the best-performing algorithms originally benchmarked on NAS-Bench-101, as well as irace, one of the best-performing algorithm configurators from the literature. Results indicate that (i) the anytime performance methodology enriches the insights obtained from the assessment on the original NAS-Bench-101; (ii) algorithm comparison is strongly affected by the design choices discussed; and (iii) the performance of SMAC on this benchmark is significantly improved by our alternative setups.
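The anytime-performance view described in the abstract tracks, for each search algorithm, the best validation accuracy found so far as a function of the cumulative training cost spent. A minimal sketch of that bookkeeping is shown below; the function name and the (cost, accuracy) pairs are hypothetical illustrations, not NAS-Bench-101 data or API.

```python
# Sketch of anytime-performance bookkeeping for a NAS run.
# All names and numbers here are illustrative assumptions,
# not part of the NAS-Bench-101 benchmark itself.

def anytime_trace(evaluations):
    """Given (training_cost, validation_accuracy) pairs in the order an
    algorithm evaluated them, return the incumbent trace: the list of
    (cumulative_cost, best_accuracy_so_far) points where the incumbent
    improved. Plotting this trace gives the anytime-performance curve."""
    trace = []
    total_cost = 0.0
    best_acc = float("-inf")
    for cost, acc in evaluations:
        total_cost += cost          # cost axis: e.g. TPU time spent so far
        if acc > best_acc:          # record only incumbent improvements
            best_acc = acc
            trace.append((total_cost, best_acc))
    return trace

# Hypothetical run: each tuple is (training cost, validation accuracy).
run = [(100.0, 0.90), (120.0, 0.88), (90.0, 0.93), (110.0, 0.92)]
print(anytime_trace(run))  # [(100.0, 0.9), (310.0, 0.93)]
```

Comparing algorithms on such traces (rather than only on the final incumbent) is what allows the bi-objective accuracy-versus-cost analysis the paper advocates.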

More information

Title according to WOS: Evaluating Anytime Performance on NAS-Bench-101
Journal title: 2021 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC 2021)
Publisher: IEEE
Publication date: 2021
Start page: 1249
End page: 1256
DOI: 10.1109/CEC45853.2021.9504902

Notes: ISI