What is neural architecture search?


Deep neural networks have a huge advantage: they replace “feature engineering,” a difficult and time-consuming stage of the classic machine learning pipeline, with an end-to-end process that learns to extract features automatically.

However, finding the right deep learning architecture for your application can be challenging. There are numerous ways you can structure and configure a neural network, using different layer types and sizes, activation functions, and operations. Each architecture has its strengths and weaknesses. And depending on the application and environment in which you want to deploy your neural networks, you might have special requirements, such as memory and computational constraints.
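To get a feel for how quickly those choices multiply, here is a toy sketch (the design choices and their options are hypothetical, not from any particular study): even a handful of independent decisions yields more candidate architectures than anyone could reasonably train and compare by hand.

```python
from itertools import product

# A hypothetical, deliberately tiny search space: each key is one
# design decision, each list holds the options for that decision.
search_space = {
    "num_layers": [2, 4, 8],
    "layer_width": [64, 128, 256, 512],
    "activation": ["relu", "tanh", "gelu"],
    "kernel_size": [3, 5, 7],
}

# Every combination of choices is a distinct candidate architecture.
configurations = list(product(*search_space.values()))
print(f"{len(configurations)} candidate architectures")  # 3*4*3*3 = 108
```

And this counts only four decisions with a few options each; realistic search spaces, which vary choices per layer and include connectivity patterns, grow far faster.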

The classic way to find a suitable deep learning architecture is to start with a promising model and iteratively modify, train, and evaluate it until you settle on the best configuration. However, this process can be very slow, given the number of possible configurations and the cost of each round of training and testing.

An alternative to manual design is “neural architecture search” (NAS), a family of machine learning techniques that automatically discover high-performing neural networks for a given problem. Neural architecture search is an active area of research and holds a lot of promise for future applications of deep learning.
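As a rough intuition for what a NAS algorithm automates, here is a minimal sketch of the simplest possible strategy: random search over a toy search space. The `evaluate` function here is a hypothetical stand-in for the expensive step of building, training, and validating each candidate; real NAS methods replace both the sampling strategy and the evaluation with far more efficient techniques.

```python
import random

# Hypothetical search space for illustration only.
search_space = {
    "num_layers": [2, 4, 8],
    "layer_width": [64, 128, 256, 512],
    "activation": ["relu", "tanh", "gelu"],
}

def sample_architecture(space):
    """Randomly pick one option for each design decision."""
    return {name: random.choice(options) for name, options in space.items()}

def evaluate(architecture):
    """Stand-in for the expensive step: in a real system, this would
    build the network from the spec, train it, and return validation
    accuracy. Stubbed with a random score so the sketch runs end to end."""
    return random.random()

best_arch, best_score = None, float("-inf")
for _ in range(20):  # search budget: 20 candidate evaluations
    arch = sample_architecture(search_space)
    score = evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score

print(f"best architecture: {best_arch} (score={best_score:.3f})")
```

Even this naive loop captures the three ingredients every NAS method must define: a search space of candidate architectures, a search strategy for exploring it, and a way to estimate each candidate’s performance.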

Search spaces for deep learning