CS446: Machine Learning

Spring 2017


Quiz 4

Note: answers are bolded
  1. Stochastic gradient descent, when used with the hinge loss, leads to which update rule?
    1. Winnow
    2. Widrow's Adaline
    3. **Perceptron**
    4. AdaGrad
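
Why Perceptron: applying SGD to the hinge loss at zero margin (the "perceptron criterion", max(0, -y w·x)) recovers exactly the Perceptron update. A short derivation sketch follows; with the margin form max(0, 1 - y w·x), the same argument gives a margin-style Perceptron update.

```latex
% SGD on the zero-margin hinge loss (perceptron criterion) for one
% example (x, y), y in {-1, +1}:
\[
L(\mathbf{w}) = \max\bigl(0,\; -y\,\mathbf{w}^\top\mathbf{x}\bigr),
\qquad
\nabla_{\mathbf{w}} L =
\begin{cases}
  -y\,\mathbf{x} & \text{if } y\,\mathbf{w}^\top\mathbf{x} < 0 \text{ (a mistake)}\\[2pt]
  \mathbf{0}     & \text{otherwise}
\end{cases}
\]
% The SGD step  w <- w - eta * grad  is therefore
\[
\mathbf{w} \leftarrow \mathbf{w} + \eta\, y\,\mathbf{x}
\quad \text{on a mistake, and no change otherwise,}
\]
% which is exactly the Perceptron update rule.
```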

  2. In a mistake-driven algorithm, if we make a mistake on example x_i with label y_i, we update the weights w so that we now predict y_i correctly.
    1. True
    2. **False**
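
Why this is false: a mistake-driven update moves w toward classifying (x_i, y_i) correctly, but a single update need not cross the decision boundary. A minimal numeric check, assuming the Perceptron update w <- w + y·x with learning rate 1 (values are hypothetical):

```python
# One Perceptron update need not fix the prediction on the mistaken example.
# Hypothetical values: w = [-5.0], x = [1.0], y = +1, learning rate 1.
w = [-5.0]
x = [1.0]
y = 1

print(sum(wi * xi for wi, xi in zip(w, x)))  # -5.0 -> predicts -1: a mistake
w = [wi + y * xi for wi, xi in zip(w, x)]    # mistake-driven update: w <- w + y*x
print(sum(wi * xi for wi, xi in zip(w, x)))  # -4.0 -> still predicts -1
```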

  3. Which of the following properties is true about the (original) Perceptron algorithm?
    1. The Perceptron always converges to the best linear separator for a given dataset.
    2. The convergence criterion for Perceptron depends on the initial value of the weight vector.
    3. **If the dataset is not linearly separable, the Perceptron algorithm does not converge and keeps cycling between some sets of weights.**
    4. If the dataset is not linearly separable, the Perceptron algorithm learns the linear separator with the fewest misclassifications.
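
A toy illustration of the cycling behavior, assuming the standard mistake condition y(w·x) <= 0: on a dataset that is not linearly separable (here, one point appearing with both labels), the weights revisit the same values forever.

```python
# Perceptron on a non-separable toy set: the same point appears with both
# labels, so no linear separator exists and the weights cycle forever.
data = [([1.0], +1), ([1.0], -1)]
w = [0.0]

history = []
for _ in range(3):                                          # 3 passes over the data
    for x, y in data:
        if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:   # mistake
            w = [wi + y * xi for wi, xi in zip(w, x)]       # w <- w + y*x
        history.append(w[0])

print(history)  # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0] -- the weights cycle
```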

  4. Assume we use the standard Averaged Perceptron algorithm for training and testing (prediction), and that it makes k mistakes on the training data. How many weight vectors do we need to predict the label for a test instance?
    1. **O(1)**
    2. O(k)
    3. O(k^2)
    4. Not enough information.
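
Why O(1): the Averaged Perceptron predicts with the average of the weight vectors seen during training, and that average can be accumulated as a single running vector, so test-time prediction needs only one weight vector regardless of k. A minimal sketch with hypothetical toy data:

```python
# Averaged Perceptron: keep a running sum of weight vectors during training,
# so prediction needs only ONE averaged vector -- O(1), independent of the
# number of mistakes k. (Toy 2-D data; details are illustrative.)
def train_averaged_perceptron(data, epochs=10):
    n = len(data[0][0])
    w = [0.0] * n
    w_sum = [0.0] * n              # running sum of w after each example
    count = 0
    for _ in range(epochs):
        for x, y in data:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
            w_sum = [s + wi for s, wi in zip(w_sum, w)]
            count += 1
    return [s / count for s in w_sum]   # the single averaged weight vector

def predict(w_avg, x):             # test time: one dot product, one vector
    return 1 if sum(wi * xi for wi, xi in zip(w_avg, x)) > 0 else -1

data = [([1.0, 1.0], +1), ([-1.0, -0.5], -1)]
w_avg = train_averaged_perceptron(data)
print(predict(w_avg, [2.0, 1.0]))  # +1
```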

  5. Winnow has a better mistake bound than Perceptron when only k of n features are relevant to the prediction and k << n.
    1. **True**
    2. False
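
The standard mistake bounds behind this question, for learning a k-literal monotone disjunction over n Boolean features (a sketch of the usual statements; exact constants vary by formulation):

```latex
% Mistake bounds for a k-literal monotone disjunction over n Boolean
% features (usual statements; constants omitted):
\[
\text{Winnow: } M = O(k \log n)
\qquad
\text{Perceptron: } M = O(k\,n)
\]
% When k << n, we have k log n << k n, so Winnow's bound is far better.
```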

Dan Roth