Implements multi-layer perceptron (MLP) training

This is a legacy algorithm: the platform API has changed since it was implemented, so new versions and forks will need to be updated to the current API.

Algorithms have at least one input and one output. All algorithm endpoints are organized in groups. Groups are used by the platform to indicate which inputs and outputs are synchronized together. The first group is automatically synchronized with the channel defined by the block in which the algorithm is deployed.

Group: main

Endpoint Name   Data Format               Nature
class_id        system/uint64/1           Input
image           system/array_2d_floats/1  Input
model           tutorial/mlp/1            Output
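Because both inputs belong to the main group, the platform delivers them synchronized: each processing step sees one image together with its matching class_id. The sketch below illustrates this accumulation pattern only; the exact BEAT interface is an assumption here, with inputs modeled as a mapping of endpoint names to objects exposing a `.data` attribute.

```python
class Algorithm:
    """Accumulates synchronized (image, class_id) pairs for later training.

    Sketch only: the real BEAT interface may differ. `inputs` is assumed
    to be a mapping from endpoint name to an object with a `.data`
    attribute; `outputs` is unused until the model is finally written.
    """

    def __init__(self):
        self.images = []  # flattened training images
        self.labels = []  # matching class identifiers

    def process(self, inputs, outputs):
        image = inputs['image'].data      # 2D array of 64-bit floats
        label = inputs['class_id'].data   # uint64 class identifier
        # Flatten the 2D image into a single feature vector
        self.images.append([v for row in image for v in row])
        self.labels.append(label)
        return True  # signal to the platform that this step succeeded
```

Training itself would then run once all synchronized pairs have been consumed, producing the model written to the output endpoint.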

Parameters allow users to change the configuration of an algorithm when scheduling an experiment

Name                    Description                         Type    Default
number-of-iterations    Number of training iterations       uint32  50
seed                    Random number generator seed        uint32  0
number-of-hidden-units  Number of hidden units in the MLP   uint32  10
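When scheduling an experiment, these parameters are supplied as name-to-value pairs. The block below simply restates the defaults from the table; the dictionary layout is illustrative, not the platform's actual configuration syntax.

```python
# Defaults from the parameter table above; the dict form itself is only
# an illustration, not the platform's configuration format.
DEFAULT_PARAMETERS = {
    "number-of-iterations": 50,    # uint32
    "seed": 0,                     # uint32
    "number-of-hidden-units": 10,  # uint32
}
```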

The code for this algorithm is written in Python.

This algorithm implements the training procedure for a multi-layer perceptron (MLP) [Bishop] [Duda], a neural network architecture with well-defined characteristics such as a feed-forward structure.

This implementation relies on the Bob library.

The inputs are:

  • image: a two-dimensional array of floats (64 bits), which is flattened for the training procedure
  • class_id: an identifier for the class of the image, such that supervised training is possible

The output, model, is the trained MLP model (data format tutorial/mlp/1).
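To make the flattening and supervised-training steps concrete, here is a standard-library-only sketch of one-hidden-layer MLP training with backpropagation. It is a stand-in for the Bob-based implementation, not the algorithm's actual code: the function name, the learning rate, and the squared-error loss are all illustrative assumptions.

```python
import math
import random

def train_mlp(samples, labels, n_hidden=10, n_iter=50, seed=0, lr=0.1):
    """Minimal one-hidden-layer MLP trained with plain gradient descent.

    `samples` are flattened images (lists of floats) and `labels` are
    integer class identifiers. Returns the two weight matrices, which
    together play the role of the trained "model".
    """
    rng = random.Random(seed)
    n_in = len(samples[0])
    n_out = max(labels) + 1
    # Small random initial weights
    w1 = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_hidden)]
    w2 = [[rng.uniform(-0.1, 0.1) for _ in range(n_hidden)] for _ in range(n_out)]
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))
    for _ in range(n_iter):
        for x, y in zip(samples, labels):
            # Forward pass: input -> hidden -> output, sigmoid activations
            h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in w1]
            o = [sig(sum(w * hi for w, hi in zip(row, h))) for row in w2]
            t = [1.0 if k == y else 0.0 for k in range(n_out)]  # one-hot target
            # Backward pass: squared-error loss, sigmoid derivatives
            d_o = [(o[k] - t[k]) * o[k] * (1 - o[k]) for k in range(n_out)]
            d_h = [h[j] * (1 - h[j]) * sum(d_o[k] * w2[k][j] for k in range(n_out))
                   for j in range(n_hidden)]
            # Gradient-descent updates (output layer first, using d_h
            # computed from the pre-update weights)
            for k in range(n_out):
                for j in range(n_hidden):
                    w2[k][j] -= lr * d_o[k] * h[j]
            for j in range(n_hidden):
                for i in range(n_in):
                    w1[j][i] -= lr * d_h[j] * x[i]
    return w1, w2
```

The real algorithm delegates this loop to Bob's MLP training machinery; the sketch only shows the shape of the computation the parameters above control.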

[Bishop] C.M. Bishop, Pattern Recognition and Machine Learning, chapter 5
[Duda] Duda, Hart and Stork, Pattern Classification, chapter 6
Platform status: no experiments use this algorithm, and it has never been executed.
BEAT platform version 2.2.1b0 | © Idiap Research Institute, 2013-2024