
1. In this exercise, we present several classes for which finding an ERM classifier is computationally hard. First, we introduce the class of n-dimensional halfspaces, HS_n, for the domain X = R^n. This is the class of all functions of the form h_{w,b}(x) = sign(⟨w, x⟩ + b), where w, x ∈ R^n, ⟨w, x⟩ is their inner product, and b ∈ R. See a detailed description in

2. Show that ERM over the class HS_n of linear predictors is computationally hard. More precisely, we consider the sequence of problems in which the dimension n grows linearly and the number of examples m is set to be some constant times n. Hint: You can prove the hardness by a reduction from the following problem:

Max FS: Given a system of linear inequalities, Ax > b, with A ∈ R^{m×n} and b ∈ R^m (that is, a system of m linear inequalities in n variables, x = (x_1, ..., x_n)), find a subsystem containing as many inequalities as possible that has a solution (such a subsystem is called feasible).

It has been shown (Sankaran 1993) that the problem Max FS is NP-hard. Show that any algorithm that finds an ERM_{HS_n} hypothesis for any training sample S ∈ (R^n × {+1, −1})^m can be used to solve the Max FS problem of size m, n. Hint: Define a mapping that transforms linear inequalities in n variables into labeled points in R^n, and a mapping that transforms vectors in R^n into halfspaces, such that a vector w satisfies an inequality if and only if the labeled point corresponding to that inequality is classified correctly by the halfspace corresponding to w. Conclude that the problem of empirical risk minimization for halfspaces is also NP-hard (that is, if it can be solved in time polynomial in the sample size, m, and the Euclidean dimension, n, then every problem in the class NP can be solved in polynomial time).
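The correspondence the hint asks for can be sketched in code. This is our own illustration, not part of the exercise: the function names are hypothetical, and we use the standard homogeneous embedding, mapping each inequality ⟨a_i, x⟩ > b_i to the positively labeled point (a_i, −b_i) in R^{n+1}, so that a candidate vector w corresponds to the homogeneous halfspace given by w' = (w, 1). The point of the sketch is that the number of inequalities w satisfies equals the number of points its halfspace classifies correctly, so minimizing empirical risk maximizes the feasible subsystem.

```python
def sign(z):
    return 1 if z > 0 else -1

def inequalities_to_sample(A, b):
    """Map each inequality <a_i, x> > b_i to the labeled point ((a_i, -b_i), +1)."""
    return [(tuple(a) + (-bi,), +1) for a, bi in zip(A, b)]

def satisfied(A, b, w):
    """Count the inequalities <a_i, w> > b_i that w satisfies."""
    return sum(1 for a, bi in zip(A, b)
               if sum(ai * wi for ai, wi in zip(a, w)) > bi)

def correct(sample, w_prime):
    """Count the labeled points that the homogeneous halfspace
    x -> sign(<w', x>) classifies correctly."""
    return sum(1 for p, y in sample
               if sign(sum(pi * wi for pi, wi in zip(p, w_prime))) == y)

# Example: 3 inequalities in 2 variables. For any w, the satisfied-inequality
# count matches the correctly-classified-point count of its halfspace.
A = [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]
b = [0.5, 0.5, 0.0]
w = (1.0, 1.0)
sample = inequalities_to_sample(A, b)
assert satisfied(A, b, w) == correct(sample, w + (1.0,))  # here both equal 2
```

Note that ERM over halfspaces in R^{n+1} also considers vectors whose last coordinate is not 1; handling that detail (and strict versus non-strict inequalities) is part of the exercise itself.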

 

Q35.

1. Let X = R^n and let H_k^n be the class of all intersections of k-many linear halfspaces in R^n. In this exercise, we wish to show that ERM_{H_k^n} is computationally hard for every k ≥ 3. Precisely, we consider a sequence of problems where k ≥ 3 is a constant and n grows linearly. The training set size, m, also grows linearly with n. Toward this goal, consider the k-coloring problem for graphs, defined as follows: Given a graph G = (V, E) and a number k, determine whether there exists a function f : V → {1, ..., k} so that for every (u, v) ∈ E, f(u) ≠ f(v). The k-coloring problem is known to be NP-hard for every k ≥ 3 (Karp 1972). We wish to reduce the k-coloring problem to ERM_{H_k^n}: that is, to prove that if there is an algorithm that solves the ERM_{H_k^n} problem in time polynomial in k, n, and the sample size m, then there is a polynomial time algorithm for the graph k-coloring problem. Given a graph G = (V, E), let {v_1, ..., v_n} be the vertices in V. Construct a sample S(G) ∈ (R^n × {±1})^m, where m = |V| + |E|, as follows:

- For every v_i ∈ V, construct an instance e_i with a negative label.
- For every edge (v_i, v_j) ∈ E, construct an instance (e_i + e_j)/2 with a positive label.

(Here e_i denotes the i-th standard basis vector in R^n.)
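The construction of S(G) above is mechanical and can be sketched directly; this is a minimal illustration with our own (hypothetical) function names, taking e_i to be the i-th standard basis vector.

```python
def basis_vector(i, n):
    """Standard basis vector e_i in R^n (1 in coordinate i, 0 elsewhere)."""
    return tuple(1.0 if j == i else 0.0 for j in range(n))

def sample_from_graph(n, edges):
    """Build S(G): one negative example e_i per vertex v_i,
    and one positive example (e_i + e_j)/2 per edge (v_i, v_j)."""
    S = [(basis_vector(i, n), -1) for i in range(n)]
    for i, j in edges:
        ei, ej = basis_vector(i, n), basis_vector(j, n)
        S.append((tuple((a + c) / 2 for a, c in zip(ei, ej)), +1))
    return S

# Example: a triangle on 3 vertices yields m = |V| + |E| = 6 labeled examples.
S = sample_from_graph(3, [(0, 1), (1, 2), (0, 2)])
assert len(S) == 6
```

The remainder of the exercise is then to show that S(G) is realizable by an intersection of k halfspaces if and only if G is k-colorable.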
