This program demonstrates how to train a simple neural network from a randomized initial condition and how to write the initial network and the trained network to separate JSON files. The network has two hidden layers, and the input, hidden, and output layers are each two nodes wide. The training data has outputs that identically match the corresponding inputs, so the desired network represents an identity mapping. With ReLU activation functions, the desired network therefore has weights corresponding to identity matrices and biases that vanish everywhere. The initial condition perturbs every weight and bias of the desired network by a random variable uniformly distributed on the interval [0, 0.1].
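The procedure above can be sketched outside the Fortran source. The following is a minimal, self-contained Python/NumPy sketch of the same idea, not the program's actual implementation: a 2-2-2-2 ReLU network is initialized as identity weights and zero biases perturbed uniformly on [0, 0.1], serialized to JSON before and after training, and fit by plain gradient descent on identity-mapping data (all layer sizes, the learning rate, and the JSON layout are illustrative assumptions).

```python
import json
import numpy as np

rng = np.random.default_rng(0)


def init_layer(n):
    # Desired layer: identity weights, zero biases; perturb both
    # by uniform random values on [0, 0.1].
    w = np.eye(n) + rng.uniform(0.0, 0.1, (n, n))
    b = rng.uniform(0.0, 0.1, n)
    return w, b


def relu(x):
    return np.maximum(x, 0.0)


def forward(layers, x):
    # Return the activations of every layer, input included.
    acts = [x]
    for w, b in layers:
        acts.append(relu(acts[-1] @ w + b))
    return acts


def to_json(layers):
    # Hypothetical JSON layout, chosen for illustration only.
    return json.dumps(
        [{"weights": w.tolist(), "biases": b.tolist()} for w, b in layers]
    )


# Network: input(2) -> hidden(2) -> hidden(2) -> output(2),
# i.e. three weight/bias pairs.
layers = [init_layer(2) for _ in range(3)]
initial_json = to_json(layers)  # would be written to the first JSON file

# Training data: outputs identically match the inputs.
X = rng.uniform(0.0, 1.0, (64, 2))
Y = X.copy()

lr = 0.1
for _ in range(2000):
    acts = forward(layers, X)
    grad = 2.0 * (acts[-1] - Y) / len(X)  # d(MSE)/d(output)
    for i in reversed(range(len(layers))):
        w, b = layers[i]
        pre = acts[i] @ w + b      # pre-activation of layer i
        grad = grad * (pre > 0)    # ReLU derivative
        gw = acts[i].T @ grad      # weight gradient
        gb = grad.sum(axis=0)      # bias gradient
        grad = grad @ w.T          # propagate to previous layer
        layers[i] = (w - lr * gw, b - lr * gb)

trained_json = to_json(layers)  # would be written to the second JSON file
```

With positive inputs and a small positive perturbation, every ReLU stays active at initialization, so gradient descent can drive the weights back toward identity matrices and the biases back toward zero.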