Hyperparameter tuning, Batch Normalization, Programming Frameworks
Question 1. If searching among a large number of hyperparameters, you should try values in a grid rather than random values, so that you can carry out the search more systematically and not rely on chance. True or False?
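For intuition, here is a small numpy sketch (my own addition, with made-up ranges and names) contrasting the two strategies: with a 5×5 grid, 25 trials only ever test 5 distinct values of each hyperparameter, while 25 random samples test 25 distinct values of each.

```python
import numpy as np

np.random.seed(0)

# Grid search: 25 trials, but only 5 distinct values per hyperparameter.
learning_rates = np.logspace(-4, 0, 5)      # 5 candidate learning rates
hidden_units = np.linspace(50, 250, 5)      # 5 candidate layer sizes
grid = [(lr, h) for lr in learning_rates for h in hidden_units]

# Random search: 25 trials, 25 distinct values per hyperparameter.
random_trials = [(10 ** np.random.uniform(-4, 0),   # learning rate on a log scale
                  np.random.randint(50, 251))        # layer size
                 for _ in range(25)]

print(len(set(lr for lr, _ in grid)))            # 5 distinct learning rates tried
print(len(set(lr for lr, _ in random_trials)))   # 25 distinct learning rates tried
```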
Question 2. Every hyperparameter, if set poorly, can have a huge negative impact on training, and so all hyperparameters are about equally important to tune well. True or False?
Question 3. During hyperparameter search, whether you try to babysit one model (“Panda” strategy) or train a lot of models in parallel (“Caviar”) is largely determined by:
1) Whether you use batch or mini-batch optimization
2) The presence of local minima (and saddle points) in your neural network
3) The number of hyperparameters you have to tune
4) The amount of computational power you can access
Question 4. If you think $\beta$ (the hyperparameter for momentum) is between 0.9 and 0.99, which of the following is the recommended way to sample a value for $\beta$?
1) r = np.random.rand(); beta = r*0.9 + 0.09
2) r = np.random.rand(); beta = 1 - 10**(-r + 1)
3) r = np.random.rand(); beta = 1 - 10**(-r - 1)
4) r = np.random.rand(); beta = r*0.09 + 0.9
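As a quick sanity check (this snippet is my own addition, not part of the quiz), you can compute the interval of beta that each candidate formula produces as r runs over [0, 1]:

```python
import numpy as np

# Interval of beta produced by each candidate as r runs from 0 to 1.
candidates = {
    "r*0.9 + 0.09":     lambda r: r * 0.9 + 0.09,
    "1 - 10**(-r + 1)": lambda r: 1 - 10 ** (-r + 1),
    "1 - 10**(-r - 1)": lambda r: 1 - 10 ** (-r - 1),
    "r*0.09 + 0.9":     lambda r: r * 0.09 + 0.9,
}

for name, f in candidates.items():
    print(f"{name}: beta in [{f(0.0):.2f}, {f(1.0):.2f}]")

# Two of the formulas land in [0.9, 0.99]; the 10**(-r - 1) version additionally
# samples 1 - beta on a log scale, which is what the lectures recommend.
```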
Question 5. Finding good hyperparameter values is very time-consuming. So typically you should do it once at the start of the project, and try to find very good hyperparameters so that you don’t ever have to revisit tuning them again. True or False?
Question 6. In batch normalization as presented in the videos, if you apply it on the $l$-th layer of your neural network, what are you normalizing?
1) $b^{[l]}$
2) $z^{[l]}$
3) $a^{[l]}$
4) $W^{[l]}$
Question 7. In the normalization formula $z_{norm}^{(i)} = \frac{z^{(i)} - \mu}{\sqrt{\sigma^2 + \varepsilon}}$, why do we use $\varepsilon$?
1) To speed up convergence
2) To avoid division by zero
3) To have a more accurate normalization
4) In case $\sigma^2$ is too small
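A minimal numpy illustration of the point (my own example values): if every z in the mini-batch is identical, the variance is 0 and the $\varepsilon$ term is what keeps the normalization well defined.

```python
import numpy as np

eps = 1e-8

# A unit whose pre-activations are identical across the mini-batch: variance is 0.
z = np.array([3.0, 3.0, 3.0, 3.0])
mu, var = z.mean(), z.var()

z_norm_no_eps = (z - mu) / np.sqrt(var)        # 0/0 -> nan (division by zero)
z_norm = (z - mu) / np.sqrt(var + eps)         # well defined

print(z_norm_no_eps)   # [nan nan nan nan], with a runtime warning
print(z_norm)          # [0. 0. 0. 0.]
```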
Question 8. Which of the following statements about $\gamma$ and $\beta$ in Batch Norm are true?
1) The optimal values are $\gamma = \sqrt{\sigma^2 + \varepsilon}$ and $\beta = \mu$.
2) They can be learned using Adam, Gradient descent with momentum, or RMSprop, not just with gradient descent.
3) There is one global value of $\gamma$ and one global value of $\beta$ for each layer, which applies to all the hidden units in that layer.
4) They set the mean and variance of the linear variable $z^{[l]}$ of a given layer.
5) $\gamma$ and $\beta$ are hyperparameters of the algorithm, which we tune via random sampling.
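For reference, a short sketch (variable names and values are my own) of how $\gamma$ and $\beta$ act on the normalized values: after normalization the pre-activations have mean 0 and variance 1, and $\gamma$ and $\beta$ then set the mean and variance of $\tilde{z}^{[l]}$.

```python
import numpy as np

np.random.seed(1)
eps = 1e-8

# Pre-activations of one hidden unit over a mini-batch of 256 examples.
z = np.random.randn(256) * 4.0 + 7.0
z_norm = (z - z.mean()) / np.sqrt(z.var() + eps)   # mean 0, variance 1

gamma, beta = 2.0, 0.5        # in practice learned with gradient descent / Adam / RMSprop
z_tilde = gamma * z_norm + beta

print(z_tilde.mean(), z_tilde.std())   # approximately 0.5 and 2.0
```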
Question 9. After training a neural network with Batch Norm, at test time, to evaluate the neural network on a new example you should:
1) Skip the step where you normalize using $\mu$ and $\sigma^2$ since a single test example cannot be normalized.
2) Use the most recent mini-batch’s value of $\mu$ and $\sigma^2$ to perform the needed normalizations.
3) If you implemented Batch Norm on mini-batches of (say) 256 examples, then to evaluate on one test example, duplicate that example 256 times so that you’re working with a mini-batch the same size as during training.
4) Perform the needed normalizations, use $\mu$ and $\sigma^2$ estimated using an exponentially weighted average across mini-batches seen during training.
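A sketch of what option 4 describes (names and constants are my own, single-unit case): running estimates of $\mu$ and $\sigma^2$ are accumulated with an exponentially weighted average during training and reused to normalize a single test example.

```python
import numpy as np

np.random.seed(2)
momentum, eps = 0.9, 1e-8
running_mu, running_var = 0.0, 1.0

# Training: update exponentially weighted averages of mu and sigma^2 per mini-batch.
for _ in range(100):
    z_batch = np.random.randn(256) * 3.0 + 5.0     # this mini-batch's pre-activations
    running_mu = momentum * running_mu + (1 - momentum) * z_batch.mean()
    running_var = momentum * running_var + (1 - momentum) * z_batch.var()

# Test time: normalize a single example with the running estimates.
z_test = np.array([4.2])
z_test_norm = (z_test - running_mu) / np.sqrt(running_var + eps)
print(z_test_norm)
```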
Question 10. Which of these statements about deep learning programming frameworks are true? (Check all that apply)
1) A programming framework allows you to code up deep learning algorithms with typically fewer lines of code than a lower-level language such as Python.
2) Deep learning programming frameworks require cloud-based machines to run.
3) Even if a project is currently open source, good governance of the project helps ensure that it remains open even in the long term, rather than becoming closed or modified to benefit only one company.