Chapter 7: Deep Learning: Exercises
\[ % Latex macros
\newcommand{\mat}[1]{\begin{pmatrix} #1 \end{pmatrix}}
\newcommand{\p}[2]{\frac{\partial #1}{\partial #2}}
\renewcommand{\b}[1]{\boldsymbol{#1}}
\newcommand{\w}{\boldsymbol{w}}
\newcommand{\x}{\boldsymbol{x}}
\newcommand{\y}{\boldsymbol{y}}
\newcommand{\z}{\boldsymbol{z}}
\]
1. Try back-propagation learning on different function approximation or classification tasks. Observe how the choices of the number of layers \(L\) and units \(M^l\), as well as the data size \(N\) and the learning rate \(\alpha\), affect learning.
2. Implement deep neural networks with ReLU activation functions for the hidden units and/or softmax for the output units.
3. Run the restricted Boltzmann machine with different input patterns, numbers of hidden units \(M_h\), and numbers of contrastive divergence steps \(K\).
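For Exercises 1 and 2, a minimal starting point is sketched below: a two-layer network with ReLU hidden units and a softmax output, trained by back-propagation on the XOR classification task. The network size \(M\), learning rate \(\alpha\), epoch count, and the toy task are illustrative choices, not prescribed by the exercise.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

def train(X, T, M=16, alpha=0.5, epochs=5000):
    """Train a 1-hidden-layer ReLU/softmax net; X: (N,D) inputs, T: (N,C) one-hot targets."""
    N, D = X.shape
    C = T.shape[1]
    W1 = rng.normal(0.0, 0.5, (D, M)); b1 = np.zeros(M)
    W2 = rng.normal(0.0, 0.5, (M, C)); b2 = np.zeros(C)
    for _ in range(epochs):
        # forward pass
        H = relu(X @ W1 + b1)
        Y = softmax(H @ W2 + b2)
        # backward pass: softmax + cross-entropy gives the output error Y - T
        d2 = (Y - T) / N
        d1 = (d2 @ W2.T) * (H > 0)          # ReLU derivative is the 0/1 mask
        # gradient-descent updates
        W2 -= alpha * H.T @ d2; b2 -= alpha * d2.sum(axis=0)
        W1 -= alpha * X.T @ d1; b1 -= alpha * d1.sum(axis=0)
    return W1, b1, W2, b2

def predict(X, params):
    W1, b1, W2, b2 = params
    return softmax(relu(X @ W1 + b1) @ W2 + b2)

# XOR: a task that no linear classifier can solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.eye(2)[[0, 1, 1, 0]]
params = train(X, T)
acc = (predict(X, params).argmax(axis=1) == T.argmax(axis=1)).mean()
print(acc)
```

Varying `M`, `alpha`, the number of epochs, or the data set here directly exercises the parameter sweeps asked for in Exercise 1; adding further hidden layers extends it toward Exercise 2.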
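For Exercise 3, the following is a hedged sketch of a binary restricted Boltzmann machine trained by \(K\)-step contrastive divergence (CD-\(K\)). The symbols `M_h` and `K` follow the exercise; the pattern set, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(V, M_h=8, K=1, alpha=0.2, epochs=2000):
    """Train a binary RBM by CD-K; V: (N, M_v) binary input patterns."""
    N, M_v = V.shape
    W = rng.normal(0.0, 0.1, (M_v, M_h))
    a = np.zeros(M_v)   # visible biases
    b = np.zeros(M_h)   # hidden biases
    for _ in range(epochs):
        # positive phase: hidden probabilities given the data
        ph0 = sigmoid(V @ W + b)
        vk, phk = V, ph0
        for _ in range(K):
            # K steps of block Gibbs sampling for the negative phase
            hk = (rng.random(phk.shape) < phk).astype(float)
            pvk = sigmoid(hk @ W.T + a)
            vk = (rng.random(pvk.shape) < pvk).astype(float)
            phk = sigmoid(vk @ W + b)
        # CD-K update: data correlations minus model (K-step) correlations
        W += alpha * (V.T @ ph0 - vk.T @ phk) / N
        a += alpha * (V - vk).mean(axis=0)
        b += alpha * (ph0 - phk).mean(axis=0)
    return W, a, b

def reconstruct(V, W, a, b):
    """One deterministic up-down pass through the RBM."""
    h = sigmoid(V @ W + b)
    return sigmoid(h @ W.T + a)

# two simple 6-bit patterns for the RBM to capture
V = np.array([[1, 1, 1, 0, 0, 0],
              [0, 0, 0, 1, 1, 1]], dtype=float)
W, a, b = train_rbm(V, M_h=8, K=1)
err = np.abs(reconstruct(V, W, a, b) - V).mean()
print(err)
```

Rerunning with different input patterns, `M_h`, and `K` and comparing the reconstruction error is one concrete way to carry out the comparisons the exercise asks for.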