Hacker News

yorwba · yesterday at 8:08 AM · 2 replies

One problem with testing one change at a time is that if each experiment requires many GPU hours, you can only run a small number of experiments, and therefore test only a small number of changes. If you can come up with and implement new changes much more easily than you can test them, it would be more efficient to test multiple changes at a time and use some form of Bayesian optimization to find the best combination with as few experiments as possible.
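A minimal sketch of the joint-testing idea, under made-up assumptions: six hypothetical on/off changes, an invented `run_experiment` standing in for an expensive training run, and an ordinary linear least-squares surrogate as a crude stand-in for a proper Bayesian model. The point is only that a few randomized joint runs can estimate every change's marginal effect at once, rather than spending one run per change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: the marginal effect of each of 6 changes.
# In reality this is unknown; it exists here only to simulate runs.
true_effects = np.array([0.8, -0.3, 0.5, 0.0, 0.2, -0.1])

def run_experiment(combo):
    """Stand-in for an expensive training run: returns a noisy metric."""
    return combo @ true_effects + rng.normal(scale=0.05)

# Budget of 10 joint experiments; each run toggles a random subset
# of the 6 changes instead of testing one change at a time.
X = rng.integers(0, 2, size=(10, 6)).astype(float)
y = np.array([run_experiment(x) for x in X])

# Fit a linear surrogate (intercept appended) to estimate each
# change's marginal effect from the joint runs.
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
effects = coef[:-1]

# Predicted-best combination: keep the changes with positive
# estimated effect.
best_combo = (effects > 0).astype(float)
print(effects, best_combo)
```

A real Bayesian-optimization loop would additionally use the surrogate's uncertainty to pick which combination to run next; this sketch only shows the fit-then-select step.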


Replies

ImageXav · yesterday at 12:05 PM

Agreed. One-at-a-time (OAT) testing has been outdated for almost a century at this point. Factorial and fractional factorial experiments have been around for that long, and they give detailed insights into not just the effect of single changes but the interactions between changes, which lets you get far more out of each run, since many variables in DL do in fact interact.
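To illustrate (a toy sketch, not from the comment): a half-fraction 2^(4-1) design for four hypothetical two-level factors needs 8 runs instead of the 16 a full factorial would take, at the cost of aliasing factor D with the three-way interaction ABC (defining relation I = ABCD).

```python
from itertools import product

# Hypothetical factors A, B, C, D, each at two levels coded -1/+1.
factors = "ABCD"
runs = []
for a, b, c in product([-1, 1], repeat=3):
    d = a * b * c          # generator D = ABC aliases D with the ABC interaction
    runs.append((a, b, c, d))

# 8 distinct runs covering all main effects and two-factor interactions
# (two-factor interactions are aliased in pairs, e.g. AB with CD).
for run in runs:
    print(dict(zip(factors, run)))
```

The trade-off is exactly what makes fractional designs attractive when runs are expensive: half the experiments, with only high-order interactions confounded.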

Or use more modern Bayesian methods if you're more interested in getting the best result from a given hyperparameter sweep.

However, none of that detracts from the excellent effort made here and the great science being investigated. Write-ups like this offer so much gold to the community.

empiko · yesterday at 3:40 PM

The number of runs you can afford is not enough to perform Bayesian optimization. Count how many different options they explored in the text, then take a guess at how many samples you would need to start modeling the hyperparameter space.