Compute the information gain of the term “elections” according to Eq. 5.7. (January 24, 2021)
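As a sketch of the computation (not the book's Eq. 5.7 numbers, which depend on the corpus), the information gain of a term can be computed as the drop in class entropy after splitting the documents on term presence. The counts below are hypothetical:

```python
import math

def entropy(probs):
    """Shannon entropy in bits, skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gain(n_ct, n_c, n_t, n):
    """Information gain of a term for a binary class split.

    n_ct: positive-class documents containing the term
    n_c : positive-class documents
    n_t : documents containing the term
    n   : total documents
    """
    h_class = entropy([n_c / n, 1 - n_c / n])
    p_t = n_t / n
    h_given_t = entropy([n_ct / n_t, 1 - n_ct / n_t]) if n_t else 0.0
    n_not_t = n - n_t
    h_given_not_t = (entropy([(n_c - n_ct) / n_not_t,
                              1 - (n_c - n_ct) / n_not_t])
                     if n_not_t else 0.0)
    return h_class - p_t * h_given_t - (1 - p_t) * h_given_not_t

# Hypothetical counts: the term appears in 3 of 4 positive documents
# and 1 of 4 negative documents.
print(round(information_gain(n_ct=3, n_c=4, n_t=4, n=8), 4))  # -> 0.1887
```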
Predict the probabilities of categories Cat and Car of Test2 on the toy corpus example in Sect. 5.3.5.2. You can use the multinomial naïve Bayes model with the same level of smoothing as used in the example in the book. Return normalized probabilities that sum to 1 over the two categories.

Naïve Bayes is a generative model in which each class corresponds to one mixture component. Design a fully supervised generalization of the naïve Bayes model in which each of the k classes contains exactly b > 1 mixture components, for a total of b · k mixture components. How would you perform parameter estimation in this model?
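For the prediction exercise, here is a minimal sketch of Laplace-smoothed multinomial naïve Bayes with normalized class probabilities. The toy documents and labels below are hypothetical stand-ins (the book's Sect. 5.3.5.2 corpus and Test2 are not reproduced here):

```python
from collections import Counter
import math

def train_multinomial_nb(docs, labels, alpha=1.0):
    """Train a multinomial naive Bayes model with Laplace smoothing alpha."""
    classes = sorted(set(labels))
    vocab = sorted({w for d in docs for w in d})
    priors, cond = {}, {}
    for c in classes:
        class_docs = [d for d, y in zip(docs, labels) if y == c]
        priors[c] = len(class_docs) / len(docs)
        counts = Counter(w for d in class_docs for w in d)
        total = sum(counts.values())
        cond[c] = {w: (counts[w] + alpha) / (total + alpha * len(vocab))
                   for w in vocab}
    return priors, cond

def predict_proba(doc, priors, cond):
    """Return class probabilities normalized to sum to 1 (log-space)."""
    log_scores = {}
    for c, prior in priors.items():
        log_scores[c] = math.log(prior) + sum(
            math.log(cond[c][w]) for w in doc if w in cond[c])
    m = max(log_scores.values())
    exp_scores = {c: math.exp(s - m) for c, s in log_scores.items()}
    z = sum(exp_scores.values())
    return {c: v / z for c, v in exp_scores.items()}

# Hypothetical toy corpus standing in for the book's example:
docs = [["cat", "pet"], ["cat", "meow"], ["car", "drive"], ["car", "engine"]]
labels = ["Cat", "Cat", "Car", "Car"]
priors, cond = train_multinomial_nb(docs, labels)
probs = predict_proba(["cat", "drive"], priors, cond)
print({c: round(p, 3) for c, p in sorted(probs.items())})  # -> {'Car': 0.4, 'Cat': 0.6}
```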
What are the possible advantages and disadvantages of using such an approach?
What is the impact of the lexicon size and average document size on various classifiers?
Discuss the advantages of rule-based learners over decision trees when the amount of data is limited.
Formulate a variation of regularized least-squares classification in which L1-loss is used instead of L2-loss.
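One way to write such a formulation, assuming labels $y_i \in \{-1, +1\}$ and coefficient vector $w$ as in regularized least-squares classification, is to replace the squared error with an absolute error:

```latex
\min_{w} \; J(w) = \sum_{i=1}^{n} \left| y_i - w \cdot x_i \right| + \lambda \, \|w\|_2^2
```

Unlike the L2-loss objective, this loss is non-differentiable wherever $y_i = w \cdot x_i$, so optimization requires subgradients rather than gradients.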
Derive stochastic gradient-descent steps for the variation of L1-loss classification introduced in Exercise 6.
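A sketch of the resulting updates, assuming the objective $\sum_i |y_i - w \cdot x_i| + \lambda \|w\|^2$: the subgradient of the absolute error with respect to $w$ is $-\mathrm{sign}(y_i - w \cdot x_i)\, x_i$, so each step adds $\eta \, \mathrm{sign}(\text{err}) \, x_i$ after shrinking $w$ by the regularization term. The data and hyperparameters below are illustrative, and points are cycled deterministically for reproducibility:

```python
def sgd_l1_loss(X, y, lam=0.01, lr=0.05, epochs=50):
    """Incremental subgradient descent for L1-loss regularized least squares.

    Per-point update:
        w <- w * (1 - 2*lr*lam) + lr * sign(y_i - w.x_i) * x_i
    """
    d = len(X[0])
    w = [0.0] * d
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = sum(wj * xj for wj, xj in zip(w, xi))
            err = yi - pred
            sign = (err > 0) - (err < 0)  # subgradient of |err|, 0 at err == 0
            w = [wj * (1 - 2 * lr * lam) + lr * sign * xj
                 for wj, xj in zip(w, xi)]
    return w

# Tiny hypothetical example with labels in {-1, +1}:
X = [[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]]
y = [1, 1, -1, -1]
w = sgd_l1_loss(X, y)
print(w)
```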
Discuss the effect on the bias and variance of making the following changes to a classification algorithm:
Implement a subsampling ensemble in combination with a 1-nearest neighbor classifier.
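One possible implementation sketch: each ensemble member is a 1-nearest-neighbor classifier trained on a random subsample (drawn without replacement), and predictions are combined by majority vote. The toy clusters, subsample fraction, and ensemble size are illustrative choices:

```python
import random
from collections import Counter

def nn1_predict(train_X, train_y, x):
    """Predict with a 1-nearest-neighbor classifier (squared Euclidean)."""
    best = min(range(len(train_X)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(train_X[i], x)))
    return train_y[best]

def subsampling_ensemble_predict(X, y, x, n_members=25, frac=0.5, seed=0):
    """Majority vote over 1-NN classifiers, each trained on a random
    subsample of the data drawn without replacement."""
    rng = random.Random(seed)
    m = max(1, int(frac * len(X)))
    votes = []
    for _ in range(n_members):
        idx = rng.sample(range(len(X)), m)
        votes.append(nn1_predict([X[i] for i in idx],
                                 [y[i] for i in idx], x))
    return Counter(votes).most_common(1)[0][0]

# Hypothetical 2-D data: two well-separated clusters.
X = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2], [3.0, 3.1], [3.2, 3.0], [3.1, 2.9]]
y = ["a", "a", "a", "b", "b", "b"]
print(subsampling_ensemble_predict(X, y, [0.1, 0.1]))  # expected to vote "a"
print(subsampling_ensemble_predict(X, y, [3.0, 3.0]))  # expected to vote "b"
```

Subsampling without replacement (rather than bootstrapping) keeps each member's training set small and diverse, which is what gives the variance reduction with an otherwise zero-bias, high-variance learner like 1-NN.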
Show how to use a factorization machine to perform undirected link prediction in a social network with content.
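A sketch of the key idea: represent a candidate pair (i, j) with a single shared block of node indicators plus the pair's content features, so that x(i, j) = x(j, i) and the factorization machine score is symmetric by construction, which is what "undirected" requires. All names here are illustrative, the parameters are random, and a real solution would learn w0, w, and V from observed links:

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Second-order factorization machine score, using the standard
    O(nk) identity for the pairwise term:
      y(x) = w0 + w.x + 0.5 * (||V^T x||^2 - sum_i ||V_i||^2 x_i^2)
    """
    vx = V.T @ x
    pairwise = 0.5 * (vx @ vx - ((V ** 2).T @ (x ** 2)).sum())
    return w0 + w @ x + pairwise

def link_feature(i, j, content, n_nodes):
    """Feature vector for an undirected pair (i, j): both node indicators
    live in one shared block, so x(i, j) == x(j, i)."""
    x = np.zeros(n_nodes + len(content))
    x[i] = 1.0
    x[j] = 1.0
    x[n_nodes:] = content  # shared content features of the candidate pair
    return x

rng = np.random.default_rng(0)
n_nodes, d_content, k = 5, 3, 4
w0 = 0.0
w = rng.normal(size=n_nodes + d_content)       # illustrative parameters
V = rng.normal(size=(n_nodes + d_content, k))  # (would be learned in practice)
content = np.array([1.0, 0.0, 1.0])
x_ij = link_feature(1, 3, content, n_nodes)
x_ji = link_feature(3, 1, content, n_nodes)
print(np.isclose(fm_score(x_ij, w0, w, V),
                 fm_score(x_ji, w0, w, V)))  # -> True: score is symmetric
```

The pairwise term $\langle v_i, v_j \rangle$ between the two active node indicators plays the role of a learned node-node affinity, while the interactions between node indicators and content features let shared content raise or lower the link score.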