CPSC 636-600 Homework 4 (Total of 100 points)

See the course web page for the due date and submission info.
Instructor: Yoonsuck Choe
April 6, 2015
1 SOM
Problem 1 (Program: 10 pts): Fill in the two lines in the som.m skeleton code (marked "......") from the course web page.
http://courses.cs.tamu.edu/choe/15spring/636/src/som.m
Problem 2 (Written: 10 pts): With som.m, run a space-filling curve experiment: 1D lattice, 2D input (distributed uniformly at random within the unit square). Visualize the map in input space at 0, 20, 100, 1000, 10000, and 25000 iterations.
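The details belong in the som.m skeleton, but a rough stand-alone sketch of the same experiment may help as a reference point (the lattice size, learning-rate schedule, and neighborhood schedule below are made up for illustration and are not taken from the skeleton):

n = 50;                          % neurons in the 1D lattice
w = rand(n, 2);                  % one 2D weight vector per neuron
T = 25000;
snapshots = [20 100 1000 10000 25000];
plot(w(:,1), w(:,2), 'o-'); axis([0 1 0 1]); title('iteration 0');
for t = 1:T
  x = rand(1, 2);                % uniform random input in the unit square
  d = sum((w - repmat(x, n, 1)).^2, 2);
  [dummy, win] = min(d);         % winning neuron (closest weight vector)
  eta   = 0.3 * exp(-t / T);                 % decaying learning rate
  sigma = max(1, (n/2) * exp(-5 * t / T));   % decaying neighborhood radius
  h = exp(-((1:n)' - win).^2 / (2 * sigma^2));
  w = w + eta * repmat(h, 1, 2) .* (repmat(x, n, 1) - w);
  if any(t == snapshots)
    figure; plot(w(:,1), w(:,2), 'o-'); axis([0 1 0 1]);
    title(sprintf('iteration %d', t));
  end
end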
Problem 3 (Written: 10 pts): (1) Generate 20 random (x, y) points, and run a 1D-lattice SOM with 200 neurons as a traveling salesman experiment. Adjust the learning-rate and neighborhood-radius update schedules appropriately.
(2) Write a simple function to calculate the tour distance based on the learned weights (a sketch is given after this problem). Hint: Simply go through the weight matrix w from the top row to the bottom row and sum the Euclidean distances between w(n,:) and w(n+1,:).
(3) Run the SOM with different initial weights (repeat this 5 times) on the same 20 inputs. Calculate the tour distance for each run and compare the results. Discuss how close to optimal the SOM tours get, based on these results.
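A minimal tour-distance function following the hint could look like the sketch below (the weight-matrix name w and its one-row-per-neuron layout are assumed from the hint; closing the tour into a loop is optional and shown commented out):

function d = tourdist(w)
  % w: n-by-2 matrix of learned weights, one (x,y) position per row.
  d = 0;
  for n = 1:size(w, 1) - 1
    d = d + norm(w(n,:) - w(n+1,:));   % Euclidean distance between neighbors
  end
  % To treat the tour as a closed loop, one could also add:
  % d = d + norm(w(end,:) - w(1,:));
end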
2 Neurodynamics
Problem 4 (Written: 10 pts):
Consider the following dynamical system:

    dx/dt = Ax

where the matrix A = [-4, -2; 5, 1] and x is the state vector (a column vector, as usual).
1. Calculate the eigenvalues of the matrix A and guess what the behavior of the dynamical system would be (diverge? converge?). A = [-4, -2; 5, 1]; eig(A) in Octave or Matlab will give you the eigenvalues. See slide08.pdf, page 12.
2. Given an initial condition x(0) of (-2, 1)', compute the trajectory and plot it. What is the equilibrium point? You can use a simple numerical integration method based on the Taylor series expansion:

    x(t + ∆t) = x(t) + ∆t · dx/dt |_{x=x(t)} = x(t) + ∆t A x(t)

where ∆t is a scalar value that indicates the integration step size. Try ∆t = 0.01 and iterate 1000 times. Collect the data points and plot them. (A short sketch of this computation appears after this list.)
3. Try various other initial conditions and see how the system works. Try plotting all the trajectories in
one plot.
4. Optional: experiment with different A matrices. Try to guess what kind of matrices will give different
categories of eigenvalues.
5. Optional: experiment with different ∆t (0.001, 0.1, 0.5) and compare the accuracy (the smaller the
more accurate). You may have to increase the number of iterations for smaller ∆t.
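The eigenvalue check and the integration loop referenced in item 2 can be sketched as follows (variable names are illustrative; the step size and iteration count are those suggested above):

A = [-4 -2; 5 1];
eig(A)                          % inspect the eigenvalues first

dt = 0.01;                      % integration step size
x = [-2; 1];                    % initial condition x(0)
traj = zeros(2, 1000);
for k = 1:1000
  x = x + dt * A * x;           % x(t + dt) = x(t) + dt * A * x(t)
  traj(:, k) = x;
end
plot(traj(1,:), traj(2,:));
xlabel('x_1'); ylabel('x_2');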
Problem 5 (Program: 10 pts): Fill in the two lines in the hopfield.m skeleton code from the course
web page, run the three examples (noisy input and partial input), and report your results.
http://courses.cs.tamu.edu/choe/15spring/636/src/hopfield.m
There are several other files that you may need. Important: for Matlab, use a dogfilter of size 7 instead of 17. Or, better yet,
1. Fix the last line in dogfilter.m to M = real(conv2(X,g,'valid'));.
2. When calling dogfilter, use dogfilter(rand(36,36),17) instead of dogfilter(rand(20,20),17). This is when you construct the input matrix.
A typical pattern you should get is like this (figure omitted):
Problem 6 (Written: 10 pts): Test your implementation and see how much noise the Hopfield network
can tolerate. Change the noise level and see beyond which point the network does not settle to the correct
attractor.
Adjusting the threshold (0.6) in the inequality allows you to control the noise level: (rand(1,20*20)<0.6).
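The exact data flow is in the hopfield.m skeleton, but the kind of sweep intended here can be illustrated with a tiny self-contained Hopfield network (the single random pattern, network size, and synchronous update loop below are assumptions for illustration, not the skeleton's code):

N = 400;                               % 20x20 units with +/-1 states
p = sign(randn(1, N)); p(p == 0) = 1;  % one stored pattern
W = (p' * p - eye(N)) / N;             % Hebbian weights, zero diagonal
for noise = 0:0.1:0.6
  flip = rand(1, N) < noise;           % same construct as in the skeleton
  s = p; s(flip) = -s(flip);           % corrupt a fraction of the bits
  for it = 1:20                        % iterate until (roughly) settled
    s = sign(s * W); s(s == 0) = 1;
  end
  fprintf('noise %.1f -> overlap %.2f\n', noise, sum(s == p) / N);
end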
Problem 7 (Written: 10 pts): Test how much omitted information the Hopfield network can tolerate.
The line inp(1:10,:)=zeros(10,20); controls how much of the input to eliminate.
Problem 8 (Written: 10 pts): Try increasing the number of memory patterns and check whether you obtain the αc value determined in slide09.pdf, pages 25–26.
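A rough, self-contained capacity experiment along these lines might look like the following (illustrative only; the pattern counts and recall criterion are arbitrary choices, and the commonly cited critical value is around αc ≈ 0.138):

N = 400;
for P = [10 20 40 55 70 90]
  X = sign(randn(P, N)); X(X == 0) = 1;   % P random +/-1 patterns
  W = (X' * X) / N; W(1:N+1:end) = 0;     % Hebbian weights, zero diagonal
  ok = 0;
  for m = 1:P
    s = X(m, :);
    for it = 1:20
      s = sign(s * W); s(s == 0) = 1;     % synchronous updates
    end
    ok = ok + (sum(s == X(m, :)) / N > 0.95);
  end
  fprintf('alpha = %.3f: %d of %d patterns stable\n', P / N, ok, P);
end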
3 PCA
Problem 9 (Written: 20 pts):
Write a simple function to do PCA, using cov(...), eig(...) and other standard Octave/Matlab
functions.
Implement Oja's rule and compare the result (the first principal component vector) against the basic PCA implementation above.
Use the following data set:
inp=[randn(800,2)/9+0.5;randn(1000,2)/6+ones(1000,2)];
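A sketch of both approaches on the suggested data set is given below (the learning rate and epoch count for Oja's rule are arbitrary choices; the data are centered before running Oja's rule so that both methods estimate the same direction):

inp = [randn(800,2)/9 + 0.5; randn(1000,2)/6 + ones(1000,2)];

% (1) PCA via the covariance matrix and eig().
C = cov(inp);
[V, D] = eig(C);
[dummy, idx] = max(diag(D));
pc1 = V(:, idx);                 % first principal component vector

% (2) Oja's rule: w <- w + eta * y * (x - y*w), with y = w' * x.
X = inp - repmat(mean(inp), size(inp, 1), 1);   % zero-mean the data
w = randn(2, 1); w = w / norm(w);
eta = 0.01;
for epoch = 1:50
  for i = 1:size(X, 1)
    x = X(i, :)';
    y = w' * x;
    w = w + eta * y * (x - y * w);
  end
end
w = w / norm(w);

disp([pc1 w])    % the two columns should match up to sign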