Poster Title

P-03 Exploring the Efficiency of Neural Architecture Search (NAS) Modules

Department

Computing

Abstract

Machine learning models are opaque and expensive to develop. NAS automates this development by learning to construct high-performing ML networks. NAS is an exploding field, and most research focuses on a particular combination of its three components: the search space, the search strategy, and the performance estimation strategy. Although this practice regularly achieves state-of-the-art results, it sacrifices computing time and hardware for slight increases in accuracy, and it obstructs comparison across different combinations of modules. To identify efficient modules, my research graphs the tradeoff between accuracy and compute time across module combinations. A subset of the possible combinations, derived from leading research, is evaluated; each combination produces an ML model that is tested on CIFAR-10, a standard image-recognition dataset.
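The evaluation protocol the abstract describes can be illustrated with a short sketch. The code below is a minimal illustration, not the poster's actual implementation: it assumes PyTorch and torchvision are available, uses a hypothetical hand-written candidate network standing in for a NAS-produced architecture, trains it briefly on CIFAR-10, and records the (compute time, test accuracy) pair that the tradeoff graphs would plot.

import time
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

def make_candidate() -> nn.Module:
    # Hypothetical candidate architecture; a NAS module combination
    # would emit a model like this for evaluation.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, 10),
    )

def evaluate_tradeoff(model, epochs=1, device="cpu"):
    """Train briefly on CIFAR-10 and return (compute_seconds, test_accuracy)."""
    tf = T.ToTensor()
    train = torchvision.datasets.CIFAR10("data", train=True, download=True, transform=tf)
    test = torchvision.datasets.CIFAR10("data", train=False, download=True, transform=tf)
    train_dl = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)
    test_dl = torch.utils.data.DataLoader(test, batch_size=256)

    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Time only the training loop: this is the compute cost half of the tradeoff.
    start = time.perf_counter()
    model.train()
    for _ in range(epochs):
        for x, y in train_dl:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    compute_seconds = time.perf_counter() - start

    # Measure test accuracy: the other half of the tradeoff.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in test_dl:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return compute_seconds, correct / total

if __name__ == "__main__":
    secs, acc = evaluate_tradeoff(make_candidate())
    print(f"compute time: {secs:.1f}s, test accuracy: {acc:.3f}")

Repeating this measurement for each module combination yields the points for an accuracy-versus-compute-time plot, on which the most efficient combinations appear toward the upper left.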

Acknowledgments

Advisor: Rodney Summerscales, Computing

Location

Buller Hall, Student Lounge

Start Date

3-11-2022 1:30 PM

End Date

3-11-2022 3:30 PM
