.. vim: set fileencoding=utf-8 :
.. Copyright (c) 2016 Idiap Research Institute, http://www.idiap.ch/
.. Contact: beat.support@idiap.ch
..
.. This file is part of the beat.web module of the BEAT platform.
..
.. Commercial License Usage
.. Licensees holding valid commercial BEAT licenses may use this file in
.. accordance with the terms contained in a written agreement between you
.. and Idiap. For further information contact tto@idiap.ch
..
.. Alternatively, this file may be used under the terms of the GNU Affero
.. Public License version 3 as published by the Free Software Foundation and
.. appearing in the file LICENSE.AGPL included in the packaging of this file.
..
.. The BEAT platform is distributed in the hope that it will be useful, but
.. WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
.. or FITNESS FOR A PARTICULAR PURPOSE.
..
.. You should have received a copy of the GNU Affero Public License along
.. with the BEAT platform. If not, see http://www.gnu.org/licenses/.


.. _validation-web:

Validation Procedure for the web interface
==========================================

This section describes a "scripted" validation procedure for
presentations and tutorials of the BEAT platform. It should be executed ahead
of any presentation. Automated testing, e.g. with Selenium
(http://www.seleniumhq.org/), would be a plus; a minimal sketch of such a
test appears after the Search section below.


.. _validation-web-front-page:

Front page
----------

* Hit the front page; all components should load (video tutorial, right bar
  with statistics)
* A yellow banner draws the user's attention to our terms of service
* The video tutorial should be playable
* The user should be able to browse all public resources on the platform from
  this page, selecting the link from the top-bar button
* The user should be able to search all public experiments on the platform
  from this page, using the search box
* The user should be able to login or sign-up using the buttons on this page;
  in particular, access to the user "tutorial" should be possible through the
  login page
* The user should be able to click on the "User Guide" button and a user
  guide should open


.. _validation-web-user-page:

User page
---------

* By clicking on the "Home" button on the front page, after login, the user
  is taken to the "User page"
* The page should display 3 columns: on the left, shared material; in the
  centre, monitored searches; and on the right, own contributions
* The values for own contributions should make sense w.r.t. material
  developed by that user
* Clicking on one of the leaderboards should take the user to the related
  search


.. _validation-web-search:

Search
------

* The omni-search should work from any page
* A search for "eigenface" should lead to a page full of experiments
* It should be possible to configure this search to visualize aggregated
  results from this page, by narrowing down the analyzer list and hitting
  "Update" or the corresponding keyboard key
* Once narrowed down, it should be possible to display aggregated plots
* It should be possible to sort columns by any of the headers in the result
  table, and the sorting must work correctly
* It should be possible to register this search for repeating it later
* It should be possible to repeat a stored search
* It should be possible to register to a search, creating a leaderboard
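As a starting point for the automation mentioned in the introduction, the
sketch below drives a browser through the front-page and omni-search checks
with Selenium's Python bindings. It is only a sketch under assumptions: the
base URL, the search-box name ``q`` and the ``.experiment`` CSS class are
guesses that must be adapted to the actual deployment and page markup.

.. code-block:: python

   # Minimal smoke test with Selenium (Python bindings, Selenium 4 API).
   # The base URL and the element selectors are assumptions and must be
   # adjusted to the deployment under test.

   from selenium import webdriver
   from selenium.webdriver.common.by import By
   from selenium.webdriver.common.keys import Keys

   BASE_URL = "https://www.beat-eu.org/platform/"  # assumed deployment URL

   driver = webdriver.Firefox()  # requires geckodriver on the PATH
   try:
       # Front page: the page should load and carry the platform title
       driver.get(BASE_URL)
       assert "BEAT" in driver.title

       # Omni-search: type "eigenface" and submit; the name "q" for the
       # search box is an assumption and may need to be changed
       search_box = driver.find_element(By.NAME, "q")
       search_box.send_keys("eigenface")
       search_box.send_keys(Keys.RETURN)

       # The result page should list at least one experiment; the CSS class
       # ".experiment" is an assumption about the result markup
       results = driver.find_elements(By.CSS_SELECTOR, ".experiment")
       assert len(results) > 0, "no experiments found for 'eigenface'"
   finally:
       driver.quit()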
.. _validation-web-experiments:

Experiments
-----------

* It should be possible to fork an existing experiment (I typically use this
  one:
  https://www.beat-eu.org/platform/experiments/tutorial/tutorial/eigenface/1/atnt-eigenfaces-5-comp/)
  and run it by adjusting some parameter and the experiment name, then
  clicking on the "Run" button
* It should be possible to go to the toolchain tab on the experiment
  configurator page and show the associated toolchain
* That experiment should run in a reasonable amount of time (less than 5
  minutes)
* Once the experiment finishes, it should take the user to the experiment
  "done" page
* The user should receive an e-mail notification that the experiment finished
* It should be possible to go from tab to tab in the experiment page
* It should be possible to click on the "Search similar" button on that page
  and search for similar experiments. When this happens, there should be a
  list of experiments to which the recently run experiment can be compared
* The last point implies that updates to the environments, databases or
  related components should be followed by a test execution of at least the
  following experiments:
  tutorial/tutorial/eigenface/1/atnt-eigenfaces-5-comp/ and its variants with
  6, 7, 8, 9 and 10 components
* It should be possible to parallelize blocks in experiments
* It should be possible to change the software environment either globally or
  on a per-block basis
* There must be more than one environment available for selection (typically,
  a couple of Python-based environments and a C++-based one)
* It should be possible to change the processing queue either globally or
  locally in each block
* It should be possible to configure block details on a per-block basis,
  besides queue and software environment (e.g. parameters)
* Once an experiment is "Done", it should be possible to change its privacy
  settings
* Changing the privacy settings of an experiment affects all underlying
  components correctly
* It should be possible to share an experiment with a group or with users, or
  to make it public
* It should be possible to attest (certify) an experiment
* It should be possible to browse to the attestation of an attested
  experiment by clicking on the icon on the left of the experiment name (at
  the top of the page)
* If you re-run an experiment that was just executed, the execution of the
  new experiment is cached and it completes immediately
* During tutorials, it should be possible to run multiple experiments at
  once, since multiple users will try to run concurrently
* Here is a list of experiments that should be checked whenever underlying
  changes are made to the platform (a reachability sketch follows this list):

  * https://www.beat-eu.org/platform/experiments/tutorial/tutorial/eigenface/1/atnt-eigenfaces-5-comp/
  * https://www.beat-eu.org/platform/experiments/tutorial/tutorial/eigenface_with_preprocessing/1/eigenface-with-preproc-15/
  * https://www.beat-eu.org/platform/experiments/anjos/ivana7c/simple-antispoofing-updated/1/face-antipoofing-lbp-histogram-comparison/#exec
  * https://www.beat-eu.org/platform/experiments/smarcel/tutorial/digit/2/mnist-mlp-nhu10-niter100-seed2001/
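A quick way to verify that the reference experiments above are still in place
is to fetch their pages anonymously. The sketch below assumes the pages are
public and that an HTTP 200 answer to an unauthenticated GET is a sufficient
success criterion; it does not check the experiment results themselves.

.. code-block:: python

   # Reachability check for the tutorial experiments listed above.
   # Assumes the pages are public, so an anonymous GET should return 200.

   import requests

   EXPERIMENTS = [
       "https://www.beat-eu.org/platform/experiments/tutorial/tutorial/eigenface/1/atnt-eigenfaces-5-comp/",
       "https://www.beat-eu.org/platform/experiments/tutorial/tutorial/eigenface_with_preprocessing/1/eigenface-with-preproc-15/",
       "https://www.beat-eu.org/platform/experiments/anjos/ivana7c/simple-antispoofing-updated/1/face-antipoofing-lbp-histogram-comparison/",
       "https://www.beat-eu.org/platform/experiments/smarcel/tutorial/digit/2/mnist-mlp-nhu10-niter100-seed2001/",
   ]

   for url in EXPERIMENTS:
       response = requests.get(url, timeout=30)
       status = "OK" if response.status_code == 200 else "FAIL (%d)" % response.status_code
       print("%s %s" % (status, url))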
.. _validation-web-toolchains:

Toolchains
----------

* It should be possible to jump to the associated toolchain from the
  corresponding experiment page
* The chosen toolchain is normally this one:
  https://www.beat-eu.org/platform/toolchains/tutorial/eigenface/1/
* The toolchain should be displayed; the colors on the links between blocks
  should be clearly visible
* It should be possible to visit all tabs on the toolchain page
* It should be possible to click on the "[compare]" button on one of the
  history links
* The documentation for the toolchain should display correctly
* The associated list of experiments should be non-empty
* It should be possible to fork the displayed toolchain
* The toolchain editor should work flawlessly. It should at least be possible
  to add a single block to the canvas
* It should be possible to create a new toolchain from scratch
* It should be possible to share a toolchain with a group or with users, or
  to make it public
* Toolchains we normally use for tutorials:

  * https://www.beat-eu.org/platform/toolchains/tutorial/eigenface_with_preprocessing/1/
  * https://www.beat-eu.org/platform/toolchains/tutorial/eigenface/1/
  * https://www.beat-eu.org/platform/toolchains/tutorial/digit/2/


.. _validation-web-algorithms:

Algorithms
----------

* It should be possible to list all algorithms
* It should be possible to fork an algorithm or create one from scratch
* It should be possible to edit an algorithm in Python
* It should be possible to upload a C++ algorithm
* It should be possible to share an algorithm with a group or with users, or
  to make it public
* Algorithms we normally use for tutorials:

  * All from this experiment:
    https://www.beat-eu.org/platform/experiments/tutorial/tutorial/eigenface_with_preprocessing/1/eigenface-with-preproc-15/#exec


.. _validation-web-databases:

Databases
---------

* It should be possible to select "Home" -> "Databases" and display all
  available databases on the platform
* The list should be non-empty and contain a number of pre-inserted datasets
* Typically, I use this one: https://www.beat-eu.org/platform/databases/atnt/4/
* The documentation should be correctly displayed, as well as the protocols
  page
* The sharing tab should be "Public"
* Databases we normally use for tutorials:

  * https://www.beat-eu.org/platform/databases/atnt/4/


.. _validation-web-attestations:

Attestations
------------

* All assets related to a certified experiment should be either public or
  "executable" (for algorithms)
* It should be possible to "unlock" an attestation
* It should be possible to delete a "locked" attestation
* The perma-link to a locked attestation should be visitable by an anonymous
  party (see the sketch below)
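The anonymous-access requirement on attestation perma-links can be checked
without a browser. The sketch below is only illustrative: the URL pattern and
the attestation number are placeholders and must be replaced with the
perma-link of an actual locked attestation on the target deployment.

.. code-block:: python

   # Anonymous-access check for an attestation perma-link.  The URL below is
   # a placeholder: substitute the perma-link of a real locked attestation.

   import requests

   ATTESTATION_URL = "https://www.beat-eu.org/platform/attestations/123456789/"  # placeholder

   # A fresh, cookie-less request simulates an anonymous visitor
   response = requests.get(ATTESTATION_URL, timeout=30)
   assert response.status_code == 200, (
       "anonymous access to the attestation failed with HTTP %d"
       % response.status_code
   )
   print("attestation perma-link is reachable anonymously")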
.. _validation-web-reports:

Reports
-------

(Considering we have moved the features from staging to production)

* It should be possible to create a new report and to populate it from the
  experiment list
* It should be possible to fill in documentation
* It should be possible to create groups and associate experiments to these
  groups with aliases
* It should be possible to add documentation cells arbitrarily in each group
* It should be possible to add tables and figures to each group
* It should be possible to lock a report and ready it for publication
* It should be possible to unlock a report and make it return to editable
  mode
* It should be possible to make a report public and lock it permanently
* Reports we normally mention in tutorials (they must always be working):

  * https://www.beat-eu.org/platform/reports/429641009/
  * https://www.beat-eu.org/platform/reports/1229989776/
  * https://www.beat-eu.org/platform/reports/990149671/

* Make sure that a merge request **DOES NOT** touch the API in a way that
  breaks the logic, which is well tested
* Check that a `locked` report **DOES NOT** show the full names of its
  experiments: reviewers go through locked reports at some point, and showing
  full names would completely break the `"blind"` review, since they could
  hint a reviewer into accepting or rejecting a report because of a possible
  friendship (or not) with the person who created it (see the sketch after
  this list)
* On a `locked` report, people shouldn't be able to click on the experiment
  names at all; only the alias should be visible
* When creating a table, verify that only the experiment alias is visible for
  `locked` reports
* When going to the experiments page and adding experiments to a report from
  there, a pop-up asks if we wish to go to the report, with a `View Report`
  button. When clicking on that button, check that you are sent to the page
  `https://www.beat-eu.org/platform/reports///`
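Part of the blind-review check above can be automated by fetching the locked
report page anonymously and making sure no experiment full name appears in
the HTML. In the sketch below, both the report URL and the full name being
searched for are assumptions: use a report that is actually locked and the
full names of the experiments it really contains.

.. code-block:: python

   # Blind-review check for a locked report: the page should expose only the
   # experiment aliases, never an experiment full name.  The report URL and
   # the full name below are placeholders to be adapted.

   import requests

   REPORT_URL = "https://www.beat-eu.org/platform/reports/429641009/"  # assumed locked report
   FULL_NAME = "tutorial/tutorial/eigenface/1/atnt-eigenfaces-5-comp"  # assumed member experiment

   # Anonymous GET, as a reviewer without special privileges would see it
   html = requests.get(REPORT_URL, timeout=30).text
   assert FULL_NAME not in html, (
       "locked report leaks the experiment full name '%s'" % FULL_NAME
   )
   print("no experiment full name leaked on the locked report page")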