1) Gradient descent finds the global minimum when:
A) The objective is convex (and the step size is small enough); for a convex function, any stationary point is a global minimum.
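A minimal sketch of the idea, using the convex function f(x) = (x - 3)^2 (the function, step size, and iteration count are illustrative choices): because f is convex, its single stationary point x = 3 is the global minimum, and plain gradient descent reaches it from any start.

```python
# Gradient descent on the convex f(x) = (x - 3)**2, whose gradient is 2*(x - 3).
def grad_descent(grad, x0, lr=0.1, steps=200):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)          # step against the gradient
    return x

x_star = grad_descent(lambda x: 2 * (x - 3), x0=-10.0)
print(round(x_star, 4))  # 3.0: the unique, hence global, minimizer
```

On a non-convex function the same loop would stop at whichever stationary point the initialization leads to.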
2) Dropout primarily reduces:
A) Variance; it acts as a regularizer that prevents co-adaptation of units, so it mainly combats overfitting.
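A minimal sketch of the mechanism, assuming the common "inverted dropout" variant with rate p = 0.5 (the rate and array size are illustrative): at train time, activations are randomly zeroed and the survivors rescaled so the expected activation is unchanged; at test time the layer is the identity.

```python
import numpy as np

def dropout(x, p=0.5, train=True, rng=None):
    if not train or p == 0.0:
        return x                          # test time: identity
    if rng is None:
        rng = np.random.default_rng(0)    # fixed seed for a reproducible demo
    mask = rng.random(x.shape) >= p       # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)           # inverted dropout: rescale survivors

x = np.ones(10000)
mean_preserved = abs(dropout(x).mean() - 1.0) < 0.05
print(mean_preserved)  # True: E[output] matches the input activations
```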
3) XOR cannot be solved by a single perceptron because:
A) It is not linearly separable; no single hyperplane puts (0,1) and (1,0) on one side and (0,0) and (1,1) on the other.
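A minimal check of the claim (the grid of candidate weights is a purely illustrative assumption): for sign(w1*x1 + w2*x2 + b) to realize XOR, four inequalities must hold at once, and they are contradictory. A coarse search over weights accordingly finds no fit.

```python
from itertools import product

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def fits(w1, w2, b):
    # A single linear threshold unit matches XOR only if all four points agree.
    return all((w1 * x1 + w2 * x2 + b > 0) == bool(y)
               for (x1, x2), y in XOR.items())

grid = [i / 2 for i in range(-8, 9)]
solvable = any(fits(w1, w2, b) for w1, w2, b in product(grid, repeat=3))
print(solvable)  # False: no linear threshold realizes XOR
```

Adding a hidden layer (or a product feature x1*x2) makes the problem separable, which is why multilayer networks solve it.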
4) Does increasing model complexity always reduce training error?
A) Yes, for nested model classes fit exactly: the larger class contains the smaller one, so training error is non-increasing (though in practice optimization may fail to realize this).
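A minimal sketch of the nested-class argument (the data and degrees are illustrative assumptions): fitting polynomials of increasing degree to one fixed noisy sample, the exact least-squares training error never goes up with degree, because each degree's hypothesis class contains the previous one.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(8)

# Training MSE of the least-squares polynomial fit, for degrees 0..6.
errors = [float(np.mean((y - np.polyval(np.polyfit(x, y, deg), x)) ** 2))
          for deg in range(7)]
monotone = all(b <= a + 1e-9 for a, b in zip(errors, errors[1:]))
print(monotone)  # True: training error is non-increasing in capacity
```

Test error, of course, follows no such guarantee; that is the whole bias-variance story.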
5) Regularization increases bias and decreases variance:
A) True; this is the bias-variance trade-off, and the aim is a penalty strength at which the variance saved outweighs the bias added.
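A minimal sketch of the trade-off using ridge regression on y = 2x + noise (all constants here are illustrative assumptions): across many resampled datasets, a larger penalty shrinks the slope estimate toward 0 (more bias) while making it vary less between datasets (less variance).

```python
import numpy as np

def ridge_slope(x, y, lam):
    # Closed-form ridge estimate for a single slope with no intercept.
    return float(x @ y / (x @ x + lam))

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 20)

def stats(lam, trials=500):
    est = np.array([ridge_slope(x, 2 * x + rng.standard_normal(20), lam)
                    for _ in range(trials)])
    return abs(est.mean() - 2.0), est.var()   # (bias magnitude, variance)

bias0, var0 = stats(lam=0.0)   # ordinary least squares
bias5, var5 = stats(lam=5.0)   # heavily regularized
print(bias5 > bias0 and var5 < var0)  # True: more bias, less variance
```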
6) Double descent suggests:
A) Test error may decrease again after the interpolation threshold, so beyond the classical U-shaped curve, still-larger models can generalize better.
7) Can a model be accurate but unfair?
A) Yes; high overall accuracy can coexist with very different error rates across subgroups, which aggregate metrics hide.
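A minimal toy illustration (every number below is invented for the example): a classifier that is 90% accurate overall can still miss every positive case in a small subgroup, so aggregate accuracy alone cannot certify fairness.

```python
# (group, true_label, predicted_label) for 20 hypothetical individuals.
records = ([("A", 1, 1)] * 9 + [("A", 0, 0)] * 9 +
           [("B", 1, 0)] * 2)   # every positive in group B is misclassified

def accuracy(rows):
    return sum(t == p for _, t, p in rows) / len(rows)

overall = accuracy(records)
group_b = accuracy([r for r in records if r[0] == "B"])
print(overall, group_b)  # 0.9 overall, 0.0 for group B
```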
8) Is cross-validation unbiased?
A) Only approximately, under assumptions (i.i.d. data); the k-fold estimate is mildly pessimistic because each fold's model trains on only (k-1)/k of the data.
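A minimal k-fold splitting sketch in pure Python (the strided fold assignment is an illustrative choice): each point is held out exactly once, so the CV estimate averages over models trained on (k-1)/k of the data, which is the source of its slight pessimism.

```python
def kfold(n, k):
    idx = list(range(n))
    folds = [idx[i::k] for i in range(k)]   # k disjoint test folds
    for test in folds:
        held = set(test)
        train = [j for j in idx if j not in held]
        yield train, test

held_out = [j for _, test in kfold(10, 5) for j in test]
print(sorted(held_out) == list(range(10)))  # True: each point tested once
```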
9) K-Means guarantees:
A) Convergence to a local minimum of the within-cluster sum of squares, not the global one; the result depends on initialization.
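A minimal sketch of Lloyd's algorithm on four rectangle corners (a classic illustrative configuration, chosen here as an assumption): one initialization converges to the optimal left/right split, another gets stuck in a stable but far worse top/bottom split.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [4, 0], [4, 1]], dtype=float)

def kmeans_sse(X, centers, iters=20):
    # Plain Lloyd iterations: assign to nearest center, then recompute means.
    for _ in range(iters):
        d = ((X[:, None] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        centers = np.array([X[labels == j].mean(0)
                            for j in range(len(centers))])
    return float(((X - centers[labels]) ** 2).sum())

good = kmeans_sse(X, np.array([[0.0, 0.5], [4.0, 0.5]]))  # left/right init
bad = kmeans_sse(X, np.array([[2.0, 0.0], [2.0, 1.0]]))   # top/bottom init
print(good, bad)  # 1.0 16.0: same data, two different fixed points
```

This is why practical k-means runs multiple random restarts (or k-means++ seeding) and keeps the best solution.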
10) Training accuracy 98%, validation 60% suggests:
A) High variance (overfitting); the large train-validation gap is the classic symptom.
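A minimal sketch of the same symptom in regression (the target function, noise level, and degree are illustrative assumptions): a degree-9 polynomial on 10 noisy points fits the training data almost perfectly while doing far worse on fresh validation data.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(2 * np.pi * x)
x_tr = np.linspace(0, 1, 10)
y_tr = f(x_tr) + 0.3 * rng.standard_normal(10)
x_va = rng.random(200)
y_va = f(x_va) + 0.3 * rng.standard_normal(200)

coeffs = np.polyfit(x_tr, y_tr, 9)   # degree 9 interpolates the 10 points
mse = lambda x, y: float(np.mean((np.polyval(coeffs, x) - y) ** 2))
train_mse, val_mse = mse(x_tr, y_tr), mse(x_va, y_va)
print(train_mse < 1e-4, val_mse > train_mse)  # near-zero train, large gap
```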
11) Is logistic regression a linear model?
A) It is linear in its parameters: the log-odds is a linear function of the inputs and the decision boundary is a hyperplane, even though the predicted probability passes through a nonlinear sigmoid.
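A minimal sketch of that distinction (the weights and input are arbitrary illustrative values): the output sigmoid(w . x + b) is nonlinear in the score, but taking the log-odds log(p / (1 - p)) recovers exactly the linear score, and the p = 0.5 boundary is the hyperplane w . x + b = 0.

```python
import numpy as np

w, b = np.array([2.0, -1.0]), 0.5   # arbitrary fixed parameters

def predict_proba(x):
    z = x @ w + b                    # linear score
    return 1.0 / (1.0 + np.exp(-z))  # nonlinear sigmoid squashing

x = np.array([0.3, 1.2])
p = predict_proba(x)
log_odds = float(np.log(p / (1 - p)))
linear_in_params = bool(np.isclose(log_odds, x @ w + b))
print(linear_in_params)  # True: the log-odds is exactly the linear score
```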
12) Can PCA reduce overfitting?
A) Sometimes, depending on the noise: dropping low-variance directions helps when they are mostly noise, but hurts when they carry real signal.
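A minimal sketch of the favorable case (the dimensions, noise scale, and rank-1 signal are illustrative assumptions): a 1-D signal embedded in 20 dimensions plus small isotropic noise. Keeping only the top principal component discards mostly-noise directions, so the reconstruction is closer to the clean signal than the raw data are; if the discarded directions carried signal instead, the same step would hurt.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
direction = rng.standard_normal(d)
direction /= np.linalg.norm(direction)
clean = rng.standard_normal((n, 1)) * direction        # rank-1 signal
X = clean + 0.1 * rng.standard_normal((n, d))          # noisy observations

Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
recon = Xc @ Vt[:1].T @ Vt[:1] + X.mean(0)             # keep 1 component

raw_err = float(np.mean((X - clean) ** 2))
pca_err = float(np.mean((recon - clean) ** 2))
print(pca_err < raw_err)  # True: the dropped directions were mostly noise
```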
