### Problem Set 1 (15.4 - 22.4.07)

This problem set deals with Hebbian learning, as it was introduced at the end of the last semester (see the lecture notes).

#### Exercise 1

- Implement Hebbian learning of a single linear unit on a set of input vectors (so that the whole data set can be represented as a two-dimensional NumPy array). Implement both explicit normalisation and Oja's rule; a minimal sketch of both rules follows this list.
- Test your implementation on a two-dimensional Gaussian cloud of data points (e.g. 100 points). How fast does it converge?
- Plot the convergence of the weight vector (e.g. the scalar product with the final weight vector).
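The sketch below shows how the two update rules might look; it is not the example solution linked under the solution heading, and all names and parameter values (`train`, `eta`, the covariance matrix, matplotlib for the plot) are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the course solution): a single linear
# unit y = w.x trained with Hebbian learning in two variants.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Two-dimensional Gaussian cloud (100 points), anisotropic so that a
# clear principal direction exists for the weight vector to converge to.
data = rng.multivariate_normal(mean=[0.0, 0.0],
                               cov=[[3.0, 1.0], [1.0, 1.0]],
                               size=100)

def train(data, rule="oja", eta=0.01, epochs=50):
    """Train one unit; rule is 'explicit' (renormalise w) or 'oja'."""
    w = rng.normal(size=data.shape[1])
    w /= np.linalg.norm(w)
    history = [w.copy()]
    for _ in range(epochs):
        for x in data:
            y = w @ x
            if rule == "explicit":
                w += eta * y * x            # plain Hebbian step ...
                w /= np.linalg.norm(w)      # ... with explicit normalisation
            else:
                w += eta * y * (x - y * w)  # Oja's rule (implicit normalisation)
        history.append(w.copy())
    return w, np.array(history)

w_final, history = train(data, rule="oja")

# Convergence plot: scalar product of each intermediate weight vector with
# the final one; it approaches +1 (or -1) as the direction stabilises.
plt.plot(history @ w_final)
plt.xlabel("epoch")
plt.ylabel("w(t) . w_final")
plt.show()
```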
#### Solution

Python program (v1.2) / plot 1 / plot 2

This is the example solution that I wrote. Note the use of derived classes (though object orientation does not make a big difference for such a simple example) and lambda expressions.
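Since the program itself sits behind the link above, here is a purely hypothetical illustration of the pattern being described: a base class holding the generic training loop, a derived class supplying the update rule, and a lambda expression serving as a learning-rate schedule. The linked solution will differ in its details.

```python
# Hypothetical illustration only; the actual example solution may be
# organised quite differently.
import numpy as np

class HebbianUnit:
    """Base class: weight vector plus the generic training loop."""

    def __init__(self, dim, eta=lambda t: 0.01):
        self.w = np.random.randn(dim)
        self.w /= np.linalg.norm(self.w)
        self.eta = eta  # learning rate as a function of the time step

    def update(self, x, eta):
        raise NotImplementedError  # supplied by derived classes

    def train(self, data, epochs=50):
        t = 0
        for _ in range(epochs):
            for x in data:
                self.update(x, self.eta(t))
                t += 1

class OjaUnit(HebbianUnit):
    """Derived class: Oja's rule keeps the weight vector normalised."""

    def update(self, x, eta):
        y = self.w @ x
        self.w += eta * y * (x - y * self.w)

# A lambda expression gives a compact decaying learning-rate schedule.
unit = OjaUnit(dim=2, eta=lambda t: 0.1 / (1.0 + 0.01 * t))
unit.train(np.random.randn(100, 2))
```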

I tried to adhere to coding conventions (though there are some violations, like the longer line lengths) and to document my code. This is the level of convention and documentation that I would also like to see in your code.

#### Exercise 2 (optional)

- a) Generate an image sequence with an underlying Gaussian variance (e.g. different gratings; you can use PIL). Train your Hebbian unit from Exercise 1 on this data. Then create a picture of the optimal stimulus.
- b) Generate small image patches from a larger natural image and do the same analysis as in a).
- c) Use multiple decorrelated units (e.g. using asymmetric inhibitory lateral connections) on the data used in b) to learn several different principal components. A minimal sketch for parts a) and c) follows this list.
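The following sketch is again only illustrative, not the course solution: it builds a grating sequence directly in NumPy (PIL is used only to save the optimal-stimulus image), and it uses Sanger's rule as one concrete way to realise the asymmetric inhibitory lateral connections; all sizes and learning parameters are made-up assumptions.

```python
# Illustrative sketch for parts a) and c), not the course solution.
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
SIZE = 16  # images are SIZE x SIZE pixels

def grating(orientation, phase, freq=2.0, size=SIZE):
    """One sinusoidal grating, returned as a flat vector."""
    y, x = np.mgrid[0:size, 0:size] / size
    u = np.cos(orientation) * x + np.sin(orientation) * y
    return np.sin(2.0 * np.pi * freq * u + phase).ravel()

# Part a): image sequence of gratings with random orientation and phase.
data = np.array([grating(rng.uniform(0, np.pi), rng.uniform(0, 2 * np.pi))
                 for _ in range(500)])
data -= data.mean(axis=0)  # centre the data

# Part c): several units trained with Sanger's rule, in which each unit
# receives inhibition only from the units before it (the asymmetry).
n_units, eta = 3, 0.001
W = rng.normal(size=(n_units, data.shape[1])) * 0.1
for _ in range(20):
    for x in data:
        y = W @ x
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# The optimal stimulus of a linear unit is its weight vector, reshaped
# back into image form; rescale to 8 bit and save it with PIL.
img = W[0].reshape(SIZE, SIZE)
img = (255 * (img - img.min()) / (np.ptp(img) + 1e-12)).astype(np.uint8)
Image.fromarray(img).save("optimal_stimulus.png")
```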