Network
This module contains the lenet network class and all of the architecture design.

lenet.network.apply_adam(var_list, obj, learning_rate=0.0001)[source]
Sets up the ADAM optimizer.
Parameters:  var_list – List of variables to optimize over.
 obj – Node of the objective to minimize.
Notes
learning_rate: The learning rate to run with (default = 0.0001). Set with LR.
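The helper delegates to TensorFlow's ADAM optimizer; the per-parameter update rule it applies can be sketched as follows. This is an illustrative scalar version, not code from the module, and the hyperparameter defaults beyond the learning rate are ADAM's usual values, assumed here:

```python
def adam_step(theta, grad, m, v, t, lr=0.0001, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update for a scalar parameter theta given gradient grad.

    m, v are the running first/second moment estimates; t is the step
    count (starting at 1), used for bias correction.
    """
    m = b1 * m + (1 - b1) * grad           # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2      # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v
```

With the module's default learning rate of 0.0001, a first step with gradient 0.5 moves the parameter by roughly the learning rate, since the bias-corrected ratio is close to 1.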

lenet.network.apply_gradient_descent(var_list, obj)[source]
Sets up the gradient descent optimizer.
Parameters:  var_list – List of variables to optimize over.
 obj – Node of the objective to minimize.
Notes
learning_rate: The learning rate to run with (default = 0.01). Set with LR.
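The gradient descent update configured here is the simplest of the three optimizers; as a sketch (illustrative scalar form, not module code):

```python
def sgd_step(theta, grad, lr=0.01):
    """One vanilla gradient-descent update: step against the gradient,
    scaled by the learning rate (the module's default LR is 0.01)."""
    return theta - lr * grad
```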

lenet.network.apply_l1(var_list, name='l1')[source]
This method applies L1 regularization to all weights and adds it to the objectives collection.
Parameters:  var_list – List of variables to apply L1 to.
 name – For the tensorflow scope.
Notes
The coefficient of the L1 penalty is set by L1_COEFF (default = 0.0001).
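The L1 term being added to the objectives collection is the coefficient times the sum of absolute weight values. A minimal sketch of that computation on plain Python lists (the module itself builds this as a TensorFlow node):

```python
def l1_penalty(var_list, l1_coeff=0.0001):
    """L1 regularization term: L1_COEFF times the sum of the absolute
    values of every weight in every variable of var_list."""
    return l1_coeff * sum(abs(w) for var in var_list for w in var)
```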

lenet.network.apply_regularizer(var_list)[source]
This method applies regularization to all weights and adds it to the objectives collection.
Parameters: var_list – List of variables to apply regularization to.
Notes
The coefficient of the L1 penalty is set by L1_COEFF (default = 0.0001).

lenet.network.apply_rmsprop(var_list, obj)[source]
Sets up the RMSProp optimizer.
Parameters:  var_list – List of variables to optimize over.
 obj – Node of the objective to minimize.
Notes
 learning_rate: The learning rate to run with (default = 0.001). Set with LR.
 momentum: The weight for momentum to run with (default = 0.7). Set with MOMENTUM.
 decay: The rate at which the squared-gradient average decays (default = 0.95). Set with DECAY.
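The RMSProp update with momentum, using the three defaults listed above, can be sketched per scalar parameter like this (illustrative form following TensorFlow's RMSProp formulation, not code from the module):

```python
def rmsprop_step(theta, grad, ms, mom,
                 lr=0.001, decay=0.95, momentum=0.7, eps=1e-8):
    """One RMSProp-with-momentum update for a scalar parameter.

    ms is the running mean of squared gradients (decayed by DECAY);
    mom is the momentum accumulator (weighted by MOMENTUM).
    """
    ms = decay * ms + (1 - decay) * grad ** 2        # running mean square
    mom = momentum * mom + lr * grad / (ms + eps) ** 0.5
    theta = theta - mom
    return theta, ms, mom
```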

lenet.network.apply_weight_decay(var_list, name='weight_decay')[source]
This method applies L2 regularization (weight decay) to all weights and adds it to the objectives collection.
Parameters:  name – For the tensorflow scope.
 var_list – List of variables to apply weight decay to.
Notes
The coefficient of the L2 penalty is set by WEIGHT_DECAY_COEFF (default = 0.0001).
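The L2 term is the coefficient times the sum of squared weights. A minimal sketch on plain Python lists; the 0.5 factor here follows the convention of tf.nn.l2_loss and is an assumption about the module's exact scaling:

```python
def l2_penalty(var_list, weight_decay_coeff=0.0001):
    """L2 (weight decay) regularization term: WEIGHT_DECAY_COEFF times
    half the sum of squared weights (tf.nn.l2_loss convention, assumed)."""
    return weight_decay_coeff * sum(0.5 * w ** 2 for var in var_list for w in var)
```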

class lenet.network.lenet5(images)[source]
Definition of the lenet class of networks.
Notes
 Produces the lenet model and returns the weights. A typical lenet has two convolutional layers with filter sizes 5x5 and 3x3. These are followed by two fully-connected layers and a softmax layer. This network model reproduces that network, to be trained on MNIST images of size 28x28.
 Most of the important parameters are stored in global_definitions in the file global_definitions.py.
Parameters: images – Placeholder for images.
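The 5x5 and 3x3 convolutions on 28x28 inputs imply the following spatial sizes, assuming 'VALID' convolutions with stride 1 and non-overlapping 2x2 pooling after each one; the padding and pooling details are assumptions, since the docstring does not state them:

```python
def lenet_feature_sizes(image_size=28, conv_filters=(5, 3), pool=2):
    """Trace the spatial side length through each convolution (assumed
    'VALID', stride 1) followed by pooling (assumed non-overlapping 2x2).

    Returns the list of side lengths after the input and after each
    conv / pool stage, e.g. 28 -> 24 -> 12 -> 10 -> 5.
    """
    sizes = [image_size]
    size = image_size
    for f in conv_filters:
        size = size - f + 1      # valid convolution shrinks by f - 1
        sizes.append(size)
        size = size // pool      # pooling divides the side length
        sizes.append(size)
    return sizes
```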
images[source]
This is the placeholder for images. This needs to be fed in from lenet.dataset.mnist.

logits[source]
Output node of the softmax layer, before the softmax is applied. This is an output from a lenet.layers.dot_product_layer().

back_prop[source]
Backprop is an optimizer node. This node will be used by a lenet.trainer.trainer later.

labels[source]
Placeholder for labels; needs to be fed in. This is fed in from the dataset class.

accuracy[source]
Tensor for accuracy. This node measures the accuracy for the mini batch.

cook(labels)[source]
Prepares the network for training.
Parameters: labels – Placeholder for labels.
Notes
 Each optimizer has many parameters; to change them, modify the code directly. Most take no inputs and run as-is. Some parameters, such as learning rates, play a significant role in learning and are good choices to experiment with.
 Which optimizer to run with (default = sgd); other options include 'rmsprop' and 'adam'. Set with OPTIMIZER.