
1. Consider the problem of learning halfspaces with the hinge loss. We limit our domain to the Euclidean ball with radius $R$. That is, $\mathcal{X} = \{\mathbf{x} : \|\mathbf{x}\|_2 \le R\}$. The label set is $\mathcal{Y} = \{\pm 1\}$ and the loss function $\ell$ is defined by $\ell(\mathbf{w}, (\mathbf{x}, y)) = \max\{0,\, 1 - y\langle \mathbf{w}, \mathbf{x}\rangle\}$. We already know that the loss function is convex. Show that it is $R$-Lipschitz.
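Before attempting the proof, it can help to probe the claim numerically. The sketch below (an illustration, not a proof; the function names `hinge` and the sampling scheme are my own) estimates the worst slope of $\mathbf{w} \mapsto \max\{0, 1 - y\langle\mathbf{w},\mathbf{x}\rangle\}$ over random pairs of weight vectors, with $\mathbf{x}$ on the sphere of radius $R$; the ratio should never exceed $R$.

```python
import numpy as np

# Empirical sanity check (not a proof): for f(w) = max(0, 1 - y*<w, x>)
# with ||x||_2 <= R, the slope |f(w) - f(u)| / ||w - u|| should be <= R.
rng = np.random.default_rng(0)
R = 2.0
d = 5

def hinge(w, x, y):
    """Hinge loss of the halfspace w on the example (x, y)."""
    return max(0.0, 1.0 - y * np.dot(w, x))

worst_ratio = 0.0
for _ in range(10_000):
    x = rng.normal(size=d)
    x *= R / np.linalg.norm(x)          # place x on the sphere of radius R
    y = rng.choice([-1.0, 1.0])
    w, u = rng.normal(size=d), rng.normal(size=d)
    num = abs(hinge(w, x, y) - hinge(u, x, y))
    worst_ratio = max(worst_ratio, num / np.linalg.norm(w - u))

print(worst_ratio)  # empirically bounded by R = 2.0
```

The key step in the actual proof mirrors what the experiment suggests: the hinge difference is bounded by $|\langle\mathbf{w} - \mathbf{u}, \mathbf{x}\rangle|$, and Cauchy–Schwarz then gives the factor $\|\mathbf{x}\|_2 \le R$.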

2. (*) Convex-Lipschitz-Boundedness Is Not Sufficient for Computational Efficiency: In the next chapter we show that, from the statistical perspective, all convex-Lipschitz-bounded problems are learnable (in the agnostic PAC model). However, our main motivation to learn such problems came from the computational perspective – convex optimization is often efficiently solvable. The goal of this exercise is to show that convexity alone is not sufficient for efficiency. We show that even for the case $d = 1$, there is a convex-Lipschitz-bounded problem that cannot be learned by any computable learner.

   Let the hypothesis class be $\mathcal{H} = [0,1]$ and let the example domain, $Z$, be the set of all Turing machines. Define the loss function as follows. For every Turing machine $T \in Z$, let $\ell(0, T) = 1$ if $T$ halts on the input 0 and $\ell(0, T) = 0$ if $T$ doesn't halt on the input 0. Similarly, let $\ell(1, T) = 0$ if $T$ halts on the input 0 and $\ell(1, T) = 1$ if $T$ doesn't halt on the input 0. Finally, for $h \in (0,1)$, let $\ell(h, T) = h\,\ell(0, T) + (1 - h)\,\ell(1, T)$.
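The loss above is easy to express in code once the undecidable part is abstracted away. In the hypothetical sketch below, a Turing machine is represented only by a boolean flag `halts` saying whether it halts on input 0 (this flag is exactly the information no computable learner can extract); the formula itself is the one from the exercise.

```python
def loss(h, halts):
    """ell(h, T) = h * ell(0, T) + (1 - h) * ell(1, T), for h in (0, 1).

    `halts` stands in for the (undecidable) fact of whether the Turing
    machine T halts on the input 0.
    """
    ell0 = 1.0 if halts else 0.0   # ell(0, T)
    ell1 = 0.0 if halts else 1.0   # ell(1, T)
    return h * ell0 + (1 - h) * ell1

# The loss is linear in h, hence convex, with slope of magnitude 1,
# hence 1-Lipschitz -- so the problem is convex-Lipschitz-bounded.
print(loss(0.25, True))    # 0.25
print(loss(0.25, False))   # 0.75
```

Note that for a fixed machine, the loss interpolates linearly between the two endpoint values, so the convexity and Lipschitzness parts of the exercise reduce to checking a linear function of one variable.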

 
