bob.machine.MLP
class bob.machine.MLP((object)self, (MLP)other) → None

Bases: Boost.Python.instance

An MLP object is a representation of a Multi-Layer Perceptron. This implementation is feed-forward and fully-connected. The implementation allows setting of input normalization values and a global activation function. For a reference on fully-connected feed-forward networks, see Bishop's Pattern Recognition and Machine Learning, Chapter 5; Figure 5.1 illustrates the architecture meant here.

MLPs are normally multi-layered systems with one or more hidden layers. As a special case, this implementation also supports connecting the input directly to the output by means of a single weight matrix. This is equivalent to a LinearMachine, with the advantage that it can be trained by MLP trainers.
Initializes a new MLP, copying data from another instance.

- __init__( (object)arg1, (object)shape) -> object :
    Builds a new MLP from a shape containing the number of inputs (first element), the number of outputs (last element) and the number of neurons in each hidden layer (the elements between the first and the last of the given tuple). The default activation function is set to the hyperbolic tangent.

- __init__( (object)self, (HDF5File)config) -> None :
    Constructs a new MLP from a configuration file. The dimensionalities of weights and biases are checked against each other for consistency.

- __init__( (object)self, (MLP)machine) -> None :
    Copy-constructs an MLP machine.
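A minimal usage sketch of the first and last constructor variants above (the layer sizes are arbitrary example values):

    import bob.machine

    # 4 inputs, one hidden layer with 3 neurons, 2 outputs
    machine = bob.machine.MLP((4, 3, 2))

    # copy-construct an independent machine with the same weights and biases
    clone = bob.machine.MLP(machine)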
Methods

__init__((object)self, (MLP)other)
    Initializes a new MLP, copying data from another instance.
forward((MLP)self, (object)input, (object)output)
    Projects the input through the weights and biases and saves the results on the output.
forward_((MLP)self, (object)input, ...)
    Projects the input through the weights and biases and saves the results on the output.
is_similar_to((MLP)self, (MLP)other[, ...])
    Compares this MLP with the 'other' one for approximate equality.
load((MLP)self, (HDF5File)config)
    Loads the weights, biases and other configuration parameters from a configuration file.
randomize((MLP)self)
    Sets all weights and biases of this MLP to random values in [-0.1, 0.1), as advised in textbooks.
save((MLP)self, (HDF5File)config)
    Saves the weights and biases to a configuration file.

Attributes

biases
    A set of biases for each layer in the MLP.
hidden_activation
    The activation function (for all hidden layers) - by default, the hyperbolic tangent function.
input_divide
    Input division factor, applied before feeding data through the MLP.
input_subtract
    Input subtraction factor, applied before feeding data through the MLP.
output_activation
    The output activation function (only for the last, output layer) - by default, the hyperbolic tangent function.
shape
    A tuple holding the size of the input vector, followed by the number of neurons in each hidden layer of the MLP and, finally, the size of the output vector, in the format (input, hidden0, hidden1, ..., hiddenN, output).
weights
    A set of weights for the synapses connecting each layer in the MLP.
__call__((MLP)self, (object)input, (object)output) → None :
    Projects the input through the weights and biases and saves the results on the output. You can pass an input with either 1 or 2 dimensions. If 2D, this is the same as running the 1D case once for every row of the input matrix.

- __call__( (MLP)self, (object)input) -> object :
    Projects the input through the weights and biases and returns the output. This method implies copying out the output data and is therefore less efficient than the counterpart that writes into an output passed as parameter. If you have a tight loop, consider using that variant instead of this one. You can pass an input with either 1 or 2 dimensions. If 2D, this is the same as running the 1D case once for every row of the input matrix.
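A sketch of the returning variant (layer sizes and inputs are arbitrary example values); a 2D input is handled row by row:

    import numpy
    import bob.machine

    machine = bob.machine.MLP((4, 3, 2))
    machine.randomize()                       # fill weights and biases with random values

    single = machine(numpy.random.rand(4))    # 1D input -> 1D output with 2 elements
    batch = machine(numpy.random.rand(5, 4))  # 2D input: 5 rows -> output of shape (5, 2)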
biases
    A set of biases for each layer in the MLP. This is represented by a standard tuple containing the biases as 1D numpy.ndarray's of double-precision floating-point elements. Each of the ndarrays has a number of elements equal to the number of neurons in the respective layer. Note that, by definition, the input layer is not subject to biasing. If you need biasing on the input layer, use the input_subtract and input_divide attributes of this MLP.
forward((MLP)self, (object)input, (object)output) → None :
    Projects the input through the weights and biases and saves the results on the output. You can pass an input with either 1 or 2 dimensions. If 2D, this is the same as running the 1D case once for every row of the input matrix.

- forward( (MLP)self, (object)input) -> object :
    Projects the input through the weights and biases and returns the output. This method implies copying out the output data and is therefore less efficient than the counterpart that writes into an output passed as parameter. If you have a tight loop, consider using that variant instead of this one. You can pass an input with either 1 or 2 dimensions. If 2D, this is the same as running the 1D case once for every row of the input matrix.
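For tight loops, the variant that writes into a pre-allocated output buffer avoids copying the output on every call; a sketch, assuming a float64 buffer sized to match the output layer:

    import numpy
    import bob.machine

    machine = bob.machine.MLP((4, 3, 2))
    machine.randomize()

    output = numpy.zeros((2,), dtype='float64')  # reused across iterations
    for _ in range(1000):
        machine.forward(numpy.random.rand(4), output)  # results written into `output`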
forward_((MLP)self, (object)input, (object)output) → None :
    Projects the input through the weights and biases and saves the results on the output. You can pass an input with either 1 or 2 dimensions. If 2D, this is the same as running the 1D case once for every row of the input matrix.

hidden_activation
    The activation function (for all hidden layers) - by default, the hyperbolic tangent function. The output provided by the activation function is passed, unchanged, to the user.
input_divide
    Input division factor, applied before feeding data through the MLP. The division is applied just after subtraction - by default, it is set to 1.0.
input_subtract
    Input subtraction factor, applied before feeding data through the MLP. The subtraction is the first operation applied in the processing chain - by default, it is set to 0.0.
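Taken together, the two attributes make the network see (input - input_subtract) / input_divide instead of the raw input. A sketch that stores per-feature training statistics on the machine, assuming both attributes accept 1D arrays sized like the input (as suggested by the biases documentation above); the data below is purely illustrative:

    import numpy
    import bob.machine

    train = numpy.random.rand(100, 4)        # stand-in for real training data
    machine = bob.machine.MLP((4, 3, 2))
    machine.randomize()

    # normalize every input before it reaches the first weight matrix
    machine.input_subtract = train.mean(axis=0)
    machine.input_divide = train.std(axis=0)

    out = machine(train[0])                  # subtraction, then division, then the MLP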
is_similar_to((MLP)self, (MLP)other[, (float)r_epsilon=1e-05[, (float)a_epsilon=1e-08]]) → bool :
    Compares this MLP with the 'other' one for approximate equality, within the given relative and absolute tolerances.
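For example (sketch), a fresh copy compares as similar under the default tolerances, while re-randomized weights almost surely do not:

    import bob.machine

    machine = bob.machine.MLP((4, 3, 2))
    machine.randomize()

    clone = bob.machine.MLP(machine)
    assert machine.is_similar_to(clone)       # identical weights and biases

    clone.randomize()                         # new random weights
    assert not machine.is_similar_to(clone)   # almost surely different now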
load((MLP)self, (HDF5File)config) → None :
    Loads the weights, biases and other configuration parameters from a configuration file.
output_activation
    The output activation function (only for the last, output layer) - by default, the hyperbolic tangent function. The output provided by the activation function is passed, unchanged, to the user.
randomize((MLP)self) → None :
    Sets all weights and biases of this MLP to random values in [-0.1, 0.1), as advised in textbooks.
    Values are drawn using the boost::uniform_real class. The seed is picked using a time-based algorithm; calls spaced by at least 1 microsecond (machine clock) will be seeded differently. Values are taken from the range [lower_bound, upper_bound) according to the boost::random documentation.

- randomize( (MLP)self, (float)lower_bound, (float)upper_bound) -> None :
    Sets all weights and biases of this MLP to random values in [lower_bound, upper_bound).
    Values are drawn using the boost::uniform_real class. The seed is picked using a time-based algorithm; calls spaced by at least 1 microsecond (machine clock) will be seeded differently. Values are taken from the range [lower_bound, upper_bound) according to the boost::random documentation.

- randomize( (MLP)self, (mt19937)rng) -> None :
    Sets all weights and biases of this MLP to random values in [-0.1, 0.1), as advised in textbooks.
    Values are drawn using the boost::uniform_real class. In this variant you pass the generator yourself and may seed it however you like. Values are taken from the range [lower_bound, upper_bound) according to the boost::random documentation.

- randomize( (MLP)self, (mt19937)rng, (float)lower_bound, (float)upper_bound) -> None :
    Sets all weights and biases of this MLP to random values in [lower_bound, upper_bound).
    Values are drawn using the boost::uniform_real class. In this variant you pass your own random number generator as well as the limits from which the random numbers will be drawn. Values are taken from the range [lower_bound, upper_bound) according to the boost::random documentation.
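A sketch of the reproducible variant; the mt19937 generator is assumed here to come from bob.core.random in the same Bob release (the module path and its seeding API are assumptions of this example):

    import bob.core.random
    import bob.machine

    machine = bob.machine.MLP((4, 3, 2))

    rng = bob.core.random.mt19937()          # seed it as you wish for reproducibility
    machine.randomize(rng, -0.5, 0.5)        # weights and biases drawn from [-0.5, 0.5)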
save((MLP)self, (HDF5File)config) → None :
    Saves the weights and biases to a configuration file.
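A round-trip sketch using save() and the HDF5File constructor variant documented above; HDF5File is assumed to live in bob.io in the same Bob release, and the open-mode strings are assumptions of this example:

    import bob.io
    import bob.machine

    machine = bob.machine.MLP((4, 3, 2))
    machine.randomize()

    # write weights, biases and configuration to an HDF5 file
    machine.save(bob.io.HDF5File('mlp.hdf5', 'w'))

    # restore a machine from the same file
    restored = bob.machine.MLP(bob.io.HDF5File('mlp.hdf5', 'r'))
    assert machine.is_similar_to(restored)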
shape
    A tuple holding the size of the input vector, followed by the number of neurons in each hidden layer of the MLP and, finally, the size of the output vector, in the format (input, hidden0, hidden1, ..., hiddenN, output). If you set this attribute, the network is automatically resized and should be considered uninitialized.
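For instance (sketch), assigning a new shape reallocates the weights and biases, so the machine needs to be re-initialized afterwards:

    import bob.machine

    machine = bob.machine.MLP((4, 3, 2))
    machine.shape = (4, 5, 5, 2)   # now two hidden layers with 5 neurons each
    machine.randomize()            # contents are undefined after the resize; re-initialize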
weights
    A set of weights for the synapses connecting each layer in the MLP. This is represented by a standard tuple containing the weights as 2D numpy.ndarray's of double-precision floating-point elements. Each of the ndarrays has a number of rows equal to the input received by that layer and a number of columns equal to the output fed to the next layer.
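Given a shape (input, hidden0, ..., output), the k-th weight matrix therefore has shape[k] rows and shape[k+1] columns, and the k-th bias vector has shape[k+1] elements; a quick check (sketch):

    import bob.machine

    machine = bob.machine.MLP((4, 3, 2))

    print([w.shape for w in machine.weights])   # [(4, 3), (3, 2)]
    print([b.shape for b in machine.biases])    # [(3,), (2,)]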