# Statistické strojové učení (Statistical Machine Learning)

## Exam

### Cheat sheet

A cheat sheet is allowed for the exam, as described in the course note from 2020-01-13:

You are allowed to prepare & use one A4 page with handwritten notes (one sided).

Example cheat sheet:

### 18.1.2019

Exams from past years that I managed to obtain:

### 18.1.2019

A midterm test that we could work through as a practice run this year.

### 20.1.2020

Solutions by bartefil:

1. Regular Perceptron algorithm. See slide 4 in svm1_ws2019.pdf.
2. Assignment 2:
   - a) Evaluate ER for each h. Select the h that maximizes ER.
   - b) 5000 log(4000)
3. Assignment 3:
   - a) \alpha(k) = \frac{p(x, k)}{\sum_{k'} p(x, k')}
   - b) Partial solution: \max_\pi \sum_{l=1}^m \sum_{k=0}^n \alpha_l(k) \log \pi_k
4. Assignment 4:
   - Algorithm: for i in range(n): for k in K: p(s_i = k) := \sum_{k' \in K} p(s_i = k | s_{i-1} = k') p(s_{i-1} = k')
   - Complexity: O(n |K|^2)
5. See slide 38 in ensembling-ws2019.pdf. Discussion missing.
6. Assignment 5:
   - Auxiliary: y_j = \sum_i x_i w_{i,j}
   - Forward: z_k = \max(y_k, a_k y_k)
   - Backward: dz_k / dx_i = ([y_k > a_k y_k] + [y_k \leq a_k y_k] a_k) w_{i,k}
   - Parameters:
     - dz_k / da_p = [p = k] [y_k \leq a_k y_k] y_k
     - dz_k / dw_{l,m} = [k = m] ([y_k > a_k y_k] + [y_k \leq a_k y_k] a_k) x_l
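The Perceptron algorithm from item 1 can be sketched as follows. This is a minimal NumPy illustration, not the slide's exact pseudocode; the function name, the convergence cap `max_epochs`, and the toy data are my own, and labels are assumed to be in {-1, +1} with a bias handled by a constant 1 feature:

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Regular Perceptron: sweep over the data and, whenever a sample
    is misclassified (y_i * <w, x_i> <= 0), update w := w + y_i * x_i.
    Assumes labels y_i in {-1, +1}; a bias is modeled by appending a
    constant 1 feature to each x_i."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for x_i, y_i in zip(X, y):
            if y_i * np.dot(w, x_i) <= 0:  # misclassified (or on the boundary)
                w += y_i * x_i
                errors += 1
        if errors == 0:  # converged: every sample is correctly classified
            return w
    return w

# Toy linearly separable problem (last feature is the constant bias term).
X = np.array([[2.0, 1.0, 1.0], [1.0, 2.0, 1.0],
              [-1.0, -1.0, 1.0], [-2.0, -0.5, 1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron(X, y)
print(all(np.sign(X @ w) == y))  # True: the learned w separates the training set
```

On separable data the loop terminates as soon as a full pass makes no update, which is the standard Perceptron convergence criterion.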
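The E-step/M-step pair in item 3 can be checked numerically. A minimal sketch, assuming a made-up table of joint probabilities p(x_l, k); the array names are illustrative only, not from the course:

```python
import numpy as np

# E-step: responsibilities alpha_l(k) = p(x_l, k) / sum_{k'} p(x_l, k'),
# where joint[l, k] stands for p(x_l, k) = pi_k * p(x_l | k).
joint = np.array([[0.10, 0.30],
                  [0.20, 0.20],
                  [0.05, 0.45]])
alpha = joint / joint.sum(axis=1, keepdims=True)

# M-step for the mixing weights: maximizing
# sum_l sum_k alpha_l(k) log pi_k  subject to  sum_k pi_k = 1
# gives pi_k proportional to sum_l alpha_l(k).
pi = alpha.sum(axis=0) / alpha.shape[0]
print(pi)  # a valid distribution: the components sum to 1
```

The normalization by m = `alpha.shape[0]` is exactly the Lagrange-multiplier solution of the constrained maximization in part b).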
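The recursion in item 4 is just repeated multiplication of the state distribution by the transition matrix, which makes the O(n |K|^2) complexity visible (each of the n steps costs |K|^2 multiply-adds). A minimal sketch with an assumed two-state chain:

```python
import numpy as np

def forward_marginals(p0, T, n):
    """Marginal recursion from Assignment 4:
    p(s_i = k) = sum_{k'} p(s_i = k | s_{i-1} = k') p(s_{i-1} = k').
    p0 is the initial distribution over the |K| states and T[k', k]
    is the transition probability p(s_i = k | s_{i-1} = k').
    One step is a |K| x |K| sum, so the total cost is O(n |K|^2)."""
    p = p0
    for _ in range(n):
        p = p @ T  # one recursion step: O(|K|^2)
    return p

p0 = np.array([1.0, 0.0])
T = np.array([[0.9, 0.1],
              [0.5, 0.5]])
p = forward_marginals(p0, T, 3)
print(p)  # remains a valid distribution at every step
```

Written as `p @ T`, the inner two loops of the pseudocode collapse into one matrix-vector product per position i.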

### 14.2.2023

**Literature**

Most of the SSU topics are explained in an accessible way here: http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf

**SVM**

MIT lecture on SVM: https://www.youtube.com/watch?v=_PwhiWxHK8o

https://www.youtube.com/watch?v=IOetFPgsMUc + continuation in parts II and III

**Neural nets + convolutional networks**

3Blue1Brown: Neural Networks (YouTube playlist). A nice basic explanation of how neural networks work; chapters 3 and 4 explain backpropagation efficiently with good visualizations.

https://www.youtube.com/watch?v=vT1JzLTH4G4&list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv A whole course on neural nets and convolutional networks. Very comprehensive lectures, explained from the basic concepts, with nice motivating examples.

**MLE**

First, what is likelihood?
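As a quick self-contained illustration (my own example, not from the course materials): for i.i.d. Bernoulli observations the likelihood is L(\theta) = \prod_l \theta^{x_l} (1 - \theta)^{1 - x_l}, and maximizing the log-likelihood gives the sample mean as the MLE:

```python
import numpy as np

# Likelihood of i.i.d. Bernoulli data: L(theta) = prod theta^x (1-theta)^(1-x).
# Setting d/dtheta log L = 0 gives the closed-form MLE theta_hat = mean(x).
x = np.array([1, 0, 1, 1, 0, 1, 1, 1])

def log_likelihood(theta, x):
    return np.sum(x * np.log(theta) + (1 - x) * np.log(1 - theta))

theta_hat = x.mean()  # closed-form MLE
grid = np.linspace(0.01, 0.99, 99)
best = grid[np.argmax([log_likelihood(t, x) for t in grid])]
print(theta_hat, best)  # the grid maximum agrees with the closed form
```

The grid search is only a sanity check; the point is that the likelihood is a function of the parameter with the data held fixed.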

**EM + Gaussian mixture**

Andrew Ng: Lecture on clustering, mixtures of Gaussians, Jensen's inequality, and the EM algorithm (CS 229, Stanford University): video, lecture notes

**Bayes learning**

1) Note that pages 5 and 6 duplicate pages 1 and 2 respectively.
courses/be4m33ssu.txt · Last modified: 2023/06/30 13:37 by pedro