CS446: Machine Learning

Spring 2017


Quiz 6

Note: the correct answer to each question is stated and explained below it
  1. In the AdaBoost algorithm, if the final hypothesis makes no mistakes on the training data, which of the following is correct?
    1. The individual weak learners also make zero error on the training data.
    2. Additional rounds of training always lead to worse performance on unseen data.
    3. Additional rounds of training can help reduce the errors made on unseen data.
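
  Answer: (3). Even after the combined hypothesis reaches zero training error, further boosting rounds continue to increase the margins on the training examples, and larger margins tend to improve performance on unseen data (the margin-based explanation of boosting). Option (1) fails because AdaBoost only requires each weak learner to beat chance, not to be perfect. Below is a minimal sketch, assuming scikit-learn and a synthetic dataset (the dataset, base learner, and round count are illustrative choices, not part of the quiz), that tracks test error after training error first hits zero:

      # Sketch: AdaBoost's test error can keep dropping after train error hits 0.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

      clf = AdaBoostClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
      tr = [np.mean(p != ytr) for p in clf.staged_predict(Xtr)]  # per-round train error
      te = [np.mean(p != yte) for p in clf.staged_predict(Xte)]  # per-round test error

      t0 = next((t for t, e in enumerate(tr) if e == 0.0), None)  # first perfect round
      if t0 is None:
          print("training error never reached 0; try more rounds")
      else:
          print(f"round {t0 + 1}: train error 0, test error {te[t0]:.3f}")
          print(f"round {len(te)}: test error {te[-1]:.3f}")  # often lower than at t0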


  2. You are given a set of examples that is not linearly separable over the original feature set X. A classifier trained on these examples using the explicitly blown-up feature space Φ(X) always performs worse than one trained using the kernel-based method.
    1. True
    2. False
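
  Answer: False. For a fixed feature map Φ, the kernel trick computes exactly the same inner products ⟨Φ(x), Φ(x')⟩ as working with the blown-up features explicitly, so both routes produce the same classifier; they differ only in computational cost. A quick numerical check, assuming the homogeneous degree-2 polynomial map on 2-d inputs (an illustrative choice):

      # Sketch: explicit feature map vs. kernel give identical inner products.
      import numpy as np

      def phi(x):
          # Explicit degree-2 blow-up of a 2-d input: all degree-2 monomials.
          x1, x2 = x
          return np.array([x1 * x1, np.sqrt(2) * x1 * x2, x2 * x2])

      def k(x, z):
          # Homogeneous degree-2 polynomial kernel: (x . z)^2.
          return np.dot(x, z) ** 2

      rng = np.random.default_rng(0)
      X = rng.normal(size=(5, 2))
      G_explicit = np.array([[phi(a) @ phi(b) for b in X] for a in X])
      G_kernel = np.array([[k(a, b) for b in X] for a in X])
      print(np.allclose(G_explicit, G_kernel))  # True: same Gram matrix

  Since the Gram matrices match, any algorithm that touches the data only through inner products (Perceptron, SVM) makes identical predictions either way.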

  3. Given a pre-existing kernel k(x, x'), which of the following is not guaranteed to be a valid kernel?
    1. k'(x,x') = exp(c*k(x,x')), where c is a constant
    2. k'(x,x') = log(x)k(x,x')log(x')
    3. k'(x,x') = (k(x,x'))²
    4. k'(x,x') = k(x,x') + xᵀAx', where A is an upper triangular matrix
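
  Answer: (4). The standard closure rules cover the others: f(x)k(x,x')f(x') is a kernel for any function f (option 2, with f = log), products of kernels are kernels (option 3), and exp(c·k) is a kernel for c ≥ 0 via its Taylor expansion, all of whose coefficients are nonnegative (option 1, presumably with c ≥ 0 intended). But xᵀAx' with an upper triangular A need not even be a symmetric function, let alone positive semi-definite, so adding it can break the kernel property. A numerical spot-check, with randomly drawn data and A (illustrative assumptions):

      # Sketch: a Gram matrix built from k(x,z) + x^T A z with upper-triangular A
      # can fail to be symmetric PSD, so option 4 is not guaranteed to be a kernel.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(6, 3))
      A = np.triu(rng.normal(size=(3, 3)))   # upper triangular, not symmetric

      def base_k(x, z):
          return np.dot(x, z)                # a known-valid linear kernel

      G = np.array([[base_k(x, z) + x @ A @ z for z in X] for x in X])
      print("symmetric:", np.allclose(G, G.T))  # typically False
      # Eigenvalues of the symmetrized part can be negative (not PSD):
      print("min eigenvalue:", np.linalg.eigvalsh((G + G.T) / 2).min())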

  4. What does the generalization ability (i.e., the mistake bound) of the kernel Perceptron depend on?
    1. The size of the original feature space
    2. The size of the corresponding blown up feature space
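
  Answer: (2). The Perceptron mistake bound depends on the geometry of the data in the space where the inner products are taken, which for a kernel method is the blown-up space Φ(X); the kernel only changes how those inner products are computed, not the space they live in. In standard Novikoff-style notation (a well-known result, stated here for Φ(X)):

      % Perceptron mistake bound, applied in the blown-up space \Phi(X):
      % if some unit vector u separates the data with margin \gamma, i.e.
      %   y_i \langle u, \Phi(x_i) \rangle \ge \gamma \quad \text{for all } i,
      % and R = \max_i \lVert \Phi(x_i) \rVert, then the number of mistakes M obeys
      M \le \left( \frac{R}{\gamma} \right)^{2}
      % Both R and \gamma are measured in \Phi(X), not in the original space X.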

  5. For which of the following classes of functions will the kernel Perceptron with a polynomial kernel give a significant improvement in accuracy over the regular Perceptron algorithm?
    1. l-of-m-of-n class of functions
    2. Class of functions where only positive examples are enclosed by an ellipse
    3. k-disjunctions
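
  Answer: (2). l-of-m-of-n functions and k-disjunctions are linear threshold functions, so the regular Perceptron can already represent them and kernelizing buys little. An elliptical decision boundary, however, is not linearly separable in the original space but becomes linearly separable under a degree-2 polynomial map. A minimal kernel Perceptron sketch on ellipse data (the data generator, kernel, and epoch count are illustrative assumptions):

      # Sketch: a degree-2 kernel Perceptron fits ellipse-enclosed positives.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.uniform(-2, 2, size=(200, 2))
      y = np.where((X[:, 0] / 1.5) ** 2 + X[:, 1] ** 2 <= 1.0, 1, -1)  # +1 inside

      def kernel(a, b, degree=2):
          return (a @ b.T + 1.0) ** degree   # inhomogeneous polynomial kernel

      def kernel_perceptron(X, y, epochs=20):
          alpha, K = np.zeros(len(X)), kernel(X, X)
          for _ in range(epochs):
              for i in range(len(X)):
                  if np.sign((alpha * y) @ K[:, i]) != y[i]:
                      alpha[i] += 1.0        # mistake-driven dual update
          return alpha

      alpha = kernel_perceptron(X, y)
      pred = np.sign((alpha * y) @ kernel(X, X))
      print("training accuracy:", np.mean(pred == y))  # typically close to 1.0

  No single line separates the inside of an ellipse from the outside, so the regular Perceptron on the raw features cannot fit this data exactly, while the degree-2 kernel makes it separable.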

Dan Roth