Biologicky inspirované algoritmy (Biologically Inspired Algorithms)

Labs

Semester project:

topics here: http://archive.ics.uci.edu/ml/datasets.html

Tools:

used in the labs:

other options:

Schedule:

  • assignment handed out in week 4
  • submission already in week 6, so hurry up! :)

Lecture notes by Doc. Šnorek: http://www.uloz.to/xW51TGT/neurony-kniha-zip

Mid-semester test 2016:

Mid-semester test 2012:

credit test, 60 min, 20 points (minimum of 10 needed), 2 groups

A:
1) multiple choice: GAs are -> probabilistic, they will most likely find a decent solution in a reasonable time
2) draw the minimal RBF network able to learn XOR (solution: http://i.imgur.com/hoT76Sv.png)
3) how the topographic error is computed in a SOM
4) neural networks: explain what each symbol in the following update rule means (a short code sketch follows below)
  w_i(t+1) = w_i(t) + alpha*(d(t) - y(t))*x_i(t)
5) f(A)=1, f(B)=3, f(C)=2, f(D)=5, f(E)=9 - draw the corresponding roulette wheel. When selecting 10 samples, what distribution will we most likely get? What values can we get in theory?
6) multi-objective GA: what is dominance, what do we aim for in a multi-objective GA, mark the non-dominated solutions in a graph
7) genetic programming: describe terminals and functions and explain them on a symbolic regression example

(download here: http://uloz.to/xAgovEPD/test-varianta2012-a-pdf)
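
A minimal sketch of the update rule from question 4 (the same rule appears again in the 2011 test below). Everything except the rule itself - the AND training data, the learning rate of 0.1 and the function name - is an illustrative assumption, not part of the test.

  # Perceptron learning rule: w_i(t+1) = w_i(t) + alpha*(d(t) - y(t))*x_i(t)
  # alpha = learning rate, d(t) = desired output, y(t) = actual output, x_i(t) = i-th input.
  import numpy as np

  def train_perceptron(X, d, alpha=0.1, epochs=20):
      """X: input vectors with a bias column appended, d: desired outputs (0/1)."""
      w = np.zeros(X.shape[1])                           # w_i(0)
      for _ in range(epochs):
          for x_t, d_t in zip(X, d):
              y_t = 1.0 if np.dot(w, x_t) >= 0 else 0.0  # y(t): thresholded output
              w += alpha * (d_t - y_t) * x_t             # the update rule itself
      return w

  # Toy example (assumed): learn logical AND, the last input column is the bias.
  X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
  d = np.array([0, 0, 0, 1], dtype=float)
  print(train_perceptron(X, d))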

Mid-semester test 2011:

credit test, 60 min, 20 points (minimum of 10 needed), 2 groups

A:
1) multiple choice: GAs are -> probabilistic, they will most likely find a decent solution in a reasonable time
2) what the 1/5 constant is and how it is used in ES (the 1/5 success rule for adapting the mutation step size)
3) what SA (simulated annealing) is and how its behaviour depends on the temperature
4) neural networks: explain what each symbol in the following update rule means
  w_i(t+1) = w_i(t) + alpha*(d(t) - y(t))*x_i(t)
5) describe the MIA neuron in GMDH: what it looks like, the 2 activation functions, ...
6) what linearly separable means, show an example of linearly (non-)separable sets
7) two strategies for computing the BMU in a SOM (a small code sketch follows after this list):
  - min_i{ || x - w_i || }  and  max_i{ (x · w_i) }
8) 
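
A small sketch of the two BMU strategies from question 7. The 5x5 grid size and the random data are illustrative assumptions only; the two functions correspond to the two formulas above.

  # BMU (best matching unit) of a SOM for an input vector x; W holds one weight vector per neuron.
  import numpy as np

  def bmu_euclidean(x, W):
      """argmin_i ||x - w_i|| -- works for arbitrary weight vectors."""
      return int(np.argmin(np.linalg.norm(W - x, axis=1)))

  def bmu_dot_product(x, W):
      """argmax_i (x . w_i) -- equivalent to the above only when x and all w_i are normalised."""
      return int(np.argmax(W @ x))

  rng = np.random.default_rng(0)
  W = rng.random((25, 3))    # e.g. a 5x5 SOM grid over 3-D inputs (assumed for illustration)
  x = rng.random(3)
  print(bmu_euclidean(x, W), bmu_dot_product(x, W))
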
B:

Exam

2011

1.5 h, just about enough time, a single variant, 10 questions, max 40 points

1/ GMDH
draw and describe all inputs/weights of a MIA network that computes AND of two binary inputs.
Ivakhnenko neuron: y = a*xi^2 + b*xi*xj + c*xj^2 + d*xi + e*xj + f. Determine when the algorithm stops.
.. it is just a single unit connected to the two inputs; coefficients e.g. b=1, the rest 0; it stops because err == 0 (quick check below)
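
A quick check of the answer above, assuming the Ivakhnenko polynomial written out as y = a*xi^2 + b*xi*xj + c*xj^2 + d*xi + e*xj + f:

  # With b = 1 and all other coefficients 0 the polynomial reduces to y = xi*xj,
  # which is exactly AND on binary inputs, so the training error is 0 and the algorithm stops.
  for xi in (0, 1):
      for xj in (0, 1):
          y = 1 * xi * xj          # b = 1, a = c = d = e = f = 0
          assert y == (xi and xj)
  print("a single MIA unit with b=1 reproduces AND with zero error")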


2/ LSTM: draw and describe it
the diamond-shaped cell with the internal loop (CEC), forget, input and output gates, ... picture in the slides

3/ HyperNEAT: what it is, what representation it uses, what the CPPN does, how it differs from NEAT, what the "substrate" is

4/ describe the steps of a plain GA

5/ perceptron: what kind of boundary it separates with, draw a picture in 2D

6/ Sammon's projection

7/ fitness assignment in NSGA, in what way it is worse than NSGA-II, the purpose of fitness sharing

8/ "linear fitness spread" nebo tak, ze pri samplovani zohlednime: pocet_jedincu * jejich_fitness

9/ the S metric, its advantages and disadvantages ... the covered area

10/ describe the particle velocity update equation in PSO (a sketch of the standard formula follows below)
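
The question almost certainly refers to the standard ("global best") PSO update; a hedged reconstruction in the usual notation:

  % v_i -- velocity of particle i, x_i -- its position, p_i -- its personal best,
  % g   -- the global (or neighbourhood) best, w -- inertia weight,
  % c_1, c_2 -- acceleration constants, r_1, r_2 -- uniform random numbers from (0,1).
  \[
  v_i(t+1) = w\,v_i(t) + c_1 r_1 \bigl(p_i - x_i(t)\bigr) + c_2 r_2 \bigl(g - x_i(t)\bigr),
  \qquad
  x_i(t+1) = x_i(t) + v_i(t+1).
  \]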

2012

1/ Draw the smallest MLP network that solves XOR with zero error, including the weights and thresholds of the neurons
2/ Describe SANE, its fitness function, ...
3/ Describe the network "with time windows" (time-delay inputs), the difference from recurrent networks
4/ Describe what a SOM is and how it works
5/ Given two schemata (*1*100*1* and another one I do not remember), which one is more prone to disruption
6/ Describe the C metric, advantages, disadvantages
7/ Explain the variables and constants in the formula for selecting the next city in Ant Colony Optimization
8/ SPEA, fitness computation, what the "strength" is

2013, 1st term

1. MIA GMDH, Ivakhnenko polynomial, draw the XOR network
2. NEAT
3. Sammon's projection
4. ESN (echo state networks)
5. NSGA
6. C metric

2013, 2nd term

1. given a synchronous recurrent NN and its connection weights, express the outputs as equations and compute the outputs at times 1 and 2
2. GNARL
3. MIA GMDH
4. describe the RBF neuron - what it expresses + equation and picture
5. write the network energy (error) function used during backpropagation
6. SPEA2
7. given a point in the objective space, mark where the dominating, dominated and non-dominated solutions lie
8. describe the path selection probability equation in Ant Colony Optimization + when it is used
9. memory-based immigrants scheme
10. linear scaling + draw it in a graph (a sketch of the standard formulation follows below)
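
A sketch of what "linear scaling" usually means here, stated as an assumption about the standard formulation: raw fitness f is mapped by an affine function chosen so that the average fitness is preserved and the best individual gets a fixed multiple C of the average (typically C around 1.2-2):

  \[
  f' = a f + b, \qquad f'_{\mathrm{avg}} = f_{\mathrm{avg}}, \qquad f'_{\mathrm{max}} = C\, f_{\mathrm{avg}}.
  \]

In a graph of f' versus f this is simply a straight line passing through the point (f_avg, f_avg) with slope a.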

2014, 2nd term

B:
1. Sammon's projection - what is it used for, write the equation
2. GMDH MIA - write the architecture, ...
3. Echo State network
4. NEAT - structural mutations, selection, benefits of complexification
5. indirect gene encoding, use in hyperNEAT
6. NSGA2 / SPEA2 - how is the density information used
7. Roulette wheel - solve example (same as above), write expected values and actual value range
8. C-metric - advantages, disadvantages, draw some example
9. Discrete PSO - position, velocity vectors; describe velocity change when Vmax is used
10. Memory-based immigration scheme

2015, 1. 6.

variant A:
1. RBF (4p)
2. Mutation and crossover in neuro-evolution. (3p)
3. What dataset preprocessing approaches do you know? Give a short explanation for each. (3p)
4. HyperNEAT (4p)
5. GMDH MIA (3p)
6. Relation between PCA and autoencoders. How would you train a Stacked Auto-Encoder? (3p)
7. Multi-Objective optimization  (4p)
  * Domination
  * Describe two goals of multi-objective optimization. Two desired properties of the final set of solutions.
  * Draw the Pareto-optimal set
  * Draw the first 4 non-dominated fronts (a small dominance-check sketch follows after this question list)
8. ACOR  (4p)
9. Linear scaling + draw a graph.  (4p)
10. SPEA2 - describe fitness assignment scheme. (3p)
  * Strength value
  * Raw fitness
  * Density information
11. Dynamic Optimizations - GARB  (3p)
  * Redundant representation
  * Gene-strength adaptation
12. Describe Strongly-Typed Genetic Programming.  (2p)
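
A minimal dominance check for question 7, assuming minimization of all objectives; the helper names and the example points are ours:

  def dominates(a, b):
      """a dominates b iff a is no worse in every objective and strictly better in at least one."""
      return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

  def non_dominated(points):
      """First non-dominated front: points not dominated by any other point."""
      return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

  pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]   # illustrative objective vectors (o1, o2)
  print(non_dominated(pts))                        # -> [(1, 5), (2, 3), (4, 1)]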

2016, 2. 6.

Explain/define/describe: 
a) k-fold cross-validation
b) BPTT
c) NEAT
d) CNN
e) Domination in MOO
f) Memory-based Approaches: Explicit Memory
g) Evolution scheme in NSGA-II
h) Differential evolution - donor vector + crossover
i) Velocity in PSO
j) Linear scaling (use graph)
Draw the first 4 Pareto (non-dominated) fronts into a graph (min-max objectives)
Competing Conventions Problem - i inputs, h hidden neurons, o outputs. Number of symmetries?
Only neurons with a linear activation function. Write the equation for an ANN with a single hidden layer.
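
For the last two items: permuting the h hidden neurons of a fully connected single-hidden-layer network yields h! functionally equivalent weight assignments (the usual answer to the competing conventions question), and with purely linear activations the whole network collapses to a single affine map:

  \[
  y = W_2 \bigl( W_1 x + b_1 \bigr) + b_2 = \underbrace{W_2 W_1}_{W} x + \underbrace{W_2 b_1 + b_2}_{b},
  \]

so a linear hidden layer adds no expressive power over a single linear layer.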

2016, 16. 6.

1. Self-Organizing Maps (SOM) [3 pts]
• Explain SOM network architecture and learning.
• Draw the SOM for 2D input and 5x5 grid of neurons.

2. Recurrent Neural Network (RNN) [3 pts]
• Describe fully-connected RNN architecture.
• Describe evaluation and learning of RNN.
• What is synchronous/asynchronous network evaluation?

3. Neuro-evolution [3 pts]
• Discuss mutation and crossover in neuro-evolution.

4. Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) [4 pts]
• Describe HyperNEAT algorithm.
• What is the difference between HyperNEAT and standard evolution of ANNs?

5. Overfitting [4 pts]
• How can you prevent overfitting of artificial neural network?

6. Perceptron vs Radial Basis Function (RBF) type neurons [3 pts]
• Describe differences between perceptron and RBF neurons.
• Draw an illustrative picture of how they divide an input space.

7. EAs for Multi-Objective Optimizations [4 pts]
• Write a definition of domination.
• Describe two goals of the multi-objective optimization. What are the two desired properties of the final set of solutions?

• Let's assume the set of all feasible solutions in the left figure below. Draw the complete Pareto-optimal set in the figure, given the minimization objective o1 and the minimization objective o2.
• Let's assume the set of candidate solutions in the right figure. Draw the first four fronts of non-dominated solutions, given the minimization objective o1 and the minimization objective o2.
{ two pictures, first figure - space, second figure - points }

8. Ant Colony Optimization for Continuous Domain (ACOr) [3 pts]
• Describe the ACOr algorithm
• Describe the way the Gaussian kernel probability density function (PDF) is used to model the pheromone.
• Describe how the Gaussian kernel PDF is modelled and how its parameters are estimated.

9. Roulette Wheel [4 pts]
• Describe the roulette wheel selection method.
• Given a population of 5 individuals with the following fitness values - fitness(A)=1, fitness(B)=2, fitness(C)=3, fitness(D)=5, fitness(E)=9 - determine the expected number of copies that solutions A and E receive among ten solutions sampled using roulette wheel selection. What is the range of the actual number of copies of A and E out of the ten sampled solutions?
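
A worked version of the expected-value part (total fitness is 1 + 2 + 3 + 5 + 9 = 20):

  \[
  E[\#A] = 10 \cdot \tfrac{1}{20} = 0.5, \qquad E[\#E] = 10 \cdot \tfrac{9}{20} = 4.5,
  \]

while the actual number of copies of either A or E in ten independent draws can be anything from 0 to 10.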

10. Dynamic Optimizations [3 pts]
• Describe the principles of the GA with Real-coded Binary Representation (GARB).
• Redundant representation
• Gene-strength adaptation

11. Ant Colony Optimization (ACO) [3 pts]
• Describe the following formula used for probabilistic decision making in the ACO algorithm. How is it used in the ACO algorithm?
• Explain the meaning of the symbols tau, eta, alpha, beta and tabu_k.

{ p_ij^k = ... formula }
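
The formula itself was only given as a picture; the standard Ant System transition rule it almost certainly shows is:

  % tau_ij -- pheromone on edge (i,j), eta_ij -- heuristic desirability (e.g. 1/d_ij),
  % alpha, beta -- their relative weights, tabu_k -- the cities ant k has already visited.
  \[
  p_{ij}^{k} =
  \begin{cases}
  \dfrac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}{\sum_{l \notin \mathrm{tabu}_k} \tau_{il}^{\alpha}\,\eta_{il}^{\beta}}
    & \text{if } j \notin \mathrm{tabu}_k,\\[1ex]
  0 & \text{otherwise.}
  \end{cases}
  \]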

12. Performance Metrics for Multi-Objective Optimizations [3 pts]
• Describe the S metric (i.e. the size of the space covered) used to assess the quality of a set of non-dominated solutions.
• What are its advantages and disadvantages?
• Illustrate it on an example.