Abstract
In this talk, I will describe a simple, fast algorithm for hyperparameter optimization inspired by techniques from the analysis of Boolean functions. We focus on the high-dimensional regime, where the canonical example is training a neural network with a large number of hyperparameters. The algorithm, an iterative application of compressed sensing techniques for orthogonal polynomials, requires only uniform sampling of the hyperparameters and is thus easily parallelizable.
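To make the idea concrete, here is a minimal sketch (not the authors' code) of one stage of this approach under simplifying assumptions: hyperparameters are encoded as +/-1 values, configurations are sampled uniformly, a sparse low-degree polynomial in the parity (Fourier) basis is fit via Lasso, and the most influential hyperparameters are read off from the recovered coefficients. The function names and the toy objective are hypothetical stand-ins for an actual training run.

```python
import itertools
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def parity_features(X, degree=2):
    """Expand +/-1 configurations into all parity (monomial) features
    x_S = prod_{i in S} x_i for subsets S of size up to `degree`."""
    n = X.shape[1]
    subsets = [s for d in range(1, degree + 1)
               for s in itertools.combinations(range(n), d)]
    feats = np.column_stack([X[:, list(s)].prod(axis=1) for s in subsets])
    return feats, subsets

def toy_objective(x):
    # Hypothetical stand-in for "train a network with hyperparameters x
    # and return its validation loss".
    return 1.5 * x[0] * x[2] - 0.8 * x[4] + 0.1 * rng.standard_normal()

n_hyperparams, n_samples = 10, 200
# Uniformly sampled configurations; evaluations are independent, so this
# loop is trivially parallelizable.
X = rng.choice([-1.0, 1.0], size=(n_samples, n_hyperparams))
y = np.array([toy_objective(x) for x in X])

# Compressed-sensing step: sparse recovery of low-degree Fourier coefficients.
Phi, subsets = parity_features(X, degree=2)
model = Lasso(alpha=0.05).fit(Phi, y)

# The largest coefficients identify the most influential hyperparameter
# subsets; iterating on these restricted variables gives the next stage.
order = np.argsort(-np.abs(model.coef_))
important = [subsets[i] for i in order[:5] if abs(model.coef_[i]) > 1e-3]
print("most influential hyperparameter subsets:", important)
```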
Experiments on training deep nets on CIFAR-10 show that, compared to state-of-the-art tools (e.g., Hyperband and Spearmint), our algorithm finds significantly better solutions, in some cases matching what is attainable by hand-tuning. In terms of overall running time (i.e., time required to sample various settings of hyperparameters plus additional computation time), we are at least an order of magnitude faster than Hyperband and even more so compared to Bayesian Optimization. We also outperform Random Search 5x, a hard-to-beat benchmark.
Additionally, our method comes with provable guarantees and yields the first quasi-polynomial time algorithm for learning decision trees under the uniform distribution with polynomial sample complexity, the first improvement in over two decades.
This is joint work with Elad Hazan (Princeton) and Adam Klivans (UT Austin).
Time
2017-06-14 13:45 ~ 14:30
Speaker
Yang Yuan, Cornell University
Room
Room 102, No.100 Wudong Road, School of Information Management & Engineering, Shanghai University of Finance & Economics