1. Failure of k-fold cross validation. Consider a case in which the label y is chosen at random according to P[y = 1] = P[y = 0] = 1/2. Consider a learning algorithm that outputs the constant predictor h(x) = 1 if the parity of the labels on the training set is 1, and otherwise outputs the constant predictor h(x) = 0. Prove that the difference between the leave-one-out estimate and the true error in such a case is always 1/2.
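
To see the claim concretely, here is a minimal Python simulation sketch; the sample size m and the helper name parity_predictor are illustrative assumptions, not part of the exercise:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 20  # sample size; the claim holds for any m (illustrative choice)

# The label is an unbiased coin and the features are irrelevant,
# so we only draw labels.
y = rng.integers(0, 2, size=m)

def parity_predictor(labels):
    """The algorithm from the exercise: a constant predictor equal to
    the parity of the training labels."""
    return int(labels.sum() % 2)

# Leave-one-out estimate: for each i, train on all labels except y_i
# and test the resulting constant predictor on y_i.
loo_folds = [int(parity_predictor(np.delete(y, i)) != y[i]) for i in range(m)]
loo_estimate = np.mean(loo_folds)

# True error: a constant predictor errs on exactly half the
# distribution, since the label is an unbiased coin independent of x.
true_error = 0.5

# Removing y_i flips the training parity by y_i, so fold i errs iff the
# total parity of all m labels is 1: every fold errs or none does, and
# the LOO estimate is exactly 0 or 1.
print(loo_estimate, abs(loo_estimate - true_error))  # 0.0 or 1.0, then 0.5
```

For any seed this prints a leave-one-out estimate of exactly 0 or 1 against a true error of 1/2, which is the gap the exercise asks you to prove.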

2. Let H_1, ..., H_k be k hypothesis classes. Suppose you are given m i.i.d. training examples and you would like to learn the class H = H_1 ∪ ... ∪ H_k. Consider two alternative approaches:

• Learn H on the m examples using the ERM rule.
• Divide the m examples into a training set of size (1 − α)m and a validation set of size αm, for some α ∈ (0, 1), and apply the approach of model selection using validation: first, train each class H_i on the (1 − α)m training examples using the ERM rule with respect to H_i, and let ĥ_1, ..., ĥ_k be the resulting hypotheses; second, apply the ERM rule with respect to the finite class {ĥ_1, ..., ĥ_k} on the αm validation examples.

Describe scenarios in which the first method is better than the second and vice versa.
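
As a concrete point of reference, here is a minimal sketch of both procedures under an illustrative assumption that is not part of the exercise: each H_i is the class of threshold classifiers on coordinate i (so k = d), and ERM is a brute-force threshold search.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: d features, with the label driven by feature 0 plus 10%
# label noise (an illustrative assumption, not part of the exercise).
m, d = 200, 5
X = rng.normal(size=(m, d))
y = (X[:, 0] > 0.3).astype(int)
y = np.where(rng.random(m) < 0.1, 1 - y, y)

def erm_threshold(X, y, i):
    """ERM within H_i = {x -> 1[x_i > t]}: try each data point as t."""
    errs = [(np.mean((X[:, i] > t).astype(int) != y), t) for t in X[:, i]]
    err, t = min(errs)
    return t, err

# Approach 1: ERM over the union H = H_1 ∪ ... ∪ H_d on all m examples.
union = []
for i in range(d):
    t, err = erm_threshold(X, y, i)
    union.append((err, i, t))
err1, i1, t1 = min(union)

# Approach 2: model selection with a validation split.
alpha = 0.2
n_val = int(alpha * m)
X_tr, y_tr = X[:-n_val], y[:-n_val]
X_val, y_val = X[-n_val:], y[-n_val:]

finalists = []
for i in range(d):
    t, _ = erm_threshold(X_tr, y_tr, i)  # ERM within H_i on (1 - alpha)m examples
    val_err = np.mean((X_val[:, i] > t).astype(int) != y_val)
    finalists.append((val_err, i, t))
err2, i2, t2 = min(finalists)  # ERM over the k finalists on the validation set

print(f"union ERM:            feature {i1}, empirical error {err1:.3f}")
print(f"validation selection: feature {i2}, validation error {err2:.3f}")
```

Intuitively, the first approach uses all m examples but pays the estimation cost of the richer class H, while the second trains each H_i on fewer examples yet selects among only k finalists, which is cheap in sample complexity.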
