The main purpose of the BEAT platform is to allow researchers to construct Experiments. An Experiment is a specific combination of a dataset, a Toolchain, and a set of relevant, appropriately parameterised Algorithms. Each Experiment produces a set of numerical and graphical results.
Each experiment uses different resources available on the BEAT platform, such as different databases and algorithms. Each experiment has its own Toolchain, which cannot be changed after the experiment is created. Experiments can be shared and forked to ensure maximum re-usability.
3.1. Displaying an existing experiment
To see the list of existing Experiments, click the User Resources tab on your home page and select Experiments from the drop-down menu. You will see a webpage similar to the following image:
You can use the various filters, as well as the free-text Search box, to narrow down your search. For each Experiment shown in the list, additional information is also displayed:
a gold medal: indicating whether an attestation has already been generated for this experiment,
a green tick: indicating that the last execution of the experiment completed successfully,
the database used in this experiment,
the analyzers used in this experiment.
Clicking on any Experiment leads to a new page displaying its configuration and results:
This page consists of several tabs, including Results, Execution Details, and Referers. Of these, the first two tabs are the most useful. By default, the Results tab is open, showing the results of the experiment.
The contents of the Results tab depend on the configuration of the Analyzer in the Toolchain. Typically, numerical values, such as various kinds of error rates, as well as graphical elements, such as ROC curves for different data-sets, are displayed in this tab.
In the Execution Details tab, a graphical representation of the Toolchain is displayed. This tab also displays the parameters selected for each block in the Toolchain, as well as information about the execution of each block (queuing time and execution time).
Icons for several actions are provided in the top-right region of the Experiment page. The list of icons should be similar to that shown in the following image. These icons represent the following options (from left to right):
green arrow: share the (currently private) experiment with other users
red cross: delete the experiment
blue tag: rename the experiment
gold medal: request attestation
circular arrow: reset the experiment (if some of the blocks in the experiment have been run before, the platform will use the cached outputs of those blocks)
fork: fork a new, editable copy of this experiment
page: add experiment to report
blue lens: search for similar experiments
(Placing the mouse over an icon will also display a tool-tip indicating the function of the icon.) The exact list of options provided will depend on what kind of experiment you are looking at. For example, the gold medal will appear on the page only if you are permitted to request attestation for this particular experiment (i.e., if you are the owner of this experiment and it has been run successfully).
Clicking the blue lens (Similar experiments) opens a new tab where experiments using the same toolchain, analyzer or database are shown:
3.3. Running an experiment
Currently, the way to start running an experiment on the platform is by using the beat command on the command line:
$ beat exp <experiment_full_name> run
The run status of the experiment can be followed either by passing the --monitor option to the run subcommand above, or by using the runstatus subcommand:
$ beat exp <experiment_full_name> runstatus
This subcommand can be used to monitor the experiment run at any time.
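For example, launching an experiment and following its progress in a single step can be done by combining run with --monitor. The experiment name below is a hypothetical placeholder; substitute your own experiment's full name:

```shell
# Launch the experiment and keep printing its status until it finishes
# (the experiment name shown here is only an example)
$ beat exp jdoe/jdoe/my_toolchain/1/my-experiment run --monitor
```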
Cancelling and resetting an experiment can also be done from the command line with the corresponding subcommands:
$ beat exp <experiment_full_name> cancel
$ beat exp <experiment_full_name> reset
Note that resetting an experiment can only be done after a run has completed (successfully or not) or after a cancellation has finished.
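Putting these subcommands together, a typical way to abandon a running experiment and start it again from scratch might look as follows (again, the experiment name is a hypothetical placeholder):

```shell
# Stop the running experiment
$ beat exp jdoe/jdoe/my_toolchain/1/my-experiment cancel

# Once the cancellation has completed, reset the experiment and re-run it
$ beat exp jdoe/jdoe/my_toolchain/1/my-experiment reset
$ beat exp jdoe/jdoe/my_toolchain/1/my-experiment run
```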