Scoring

Gold standard

For each sub-challenge, the submissions will be scored by comparing the predictions to a “gold standard” that remains unseen by the participants. The gold standard corresponds to the true class/group labels of the samples in the test and verification sets. The experimental data used to generate the gold standard were processed and normalized in the same way as the data for the training and verification sets provided to the participants.
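
In essence, scoring compares each team's predicted labels against the hidden true labels. The sketch below is purely illustrative: the toy labels and the simple fraction-correct measure are assumptions, not the challenge's actual data or metrics.

```python
# Illustrative only: comparing predicted class labels against the
# hidden gold-standard labels. The labels and the fraction-correct
# measure here are assumptions; the real metrics are not disclosed
# until scoring is complete.

gold_standard = ["exposed", "control", "exposed", "control", "exposed"]
predictions   = ["exposed", "exposed", "exposed", "control", "control"]

correct = sum(g == p for g, p in zip(gold_standard, predictions))
accuracy = correct / len(gold_standard)
print(f"Fraction correct vs. gold standard: {accuracy:.2f}")  # 0.60
```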

Scoring methodology

To establish a fair and meaningful score that is not biased toward any single performance measure, multiple metrics will be used and aggregated. The scoring methods and metrics will be selected by the Scoring Review Panel on scientific grounds before the opening of the challenge, and will be disclosed once the scoring is completed, in accordance with the Challenge Rules. Each sub-challenge is self-contained and will be scored independently.
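
As an illustration of how several metrics can be combined without favoring any single one, the sketch below assumes a mean-rank aggregation over three hypothetical metrics. The team names, scores, and the aggregation scheme itself are assumptions; the actual methods and metrics chosen by the Scoring Review Panel are disclosed only after scoring.

```python
# Hypothetical mean-rank aggregation over three made-up metrics.
# Team names and scores are invented for illustration.

metric_scores = {
    "team_1": (0.91, 0.72, 0.88),
    "team_2": (0.85, 0.80, 0.90),
    "team_3": (0.78, 0.65, 0.70),
}

n_metrics = 3
rank_sums = {team: 0 for team in metric_scores}
for i in range(n_metrics):
    # Rank the teams on metric i (rank 1 = highest score).
    ordered = sorted(metric_scores, key=lambda t: metric_scores[t][i], reverse=True)
    for rank, team in enumerate(ordered, start=1):
        rank_sums[team] += rank

# A lower mean rank indicates better all-round performance.
for team in sorted(rank_sums, key=rank_sums.get):
    print(team, rank_sums[team] / n_metrics)
```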

Tie resolution

If several teams obtain the same score, the Scoring Review Panel will perform a scientific review of the available information to distinguish the submissions, as described in the Challenge Rules. If the tie persists, the incentives will be allocated according to the Challenge Rules.

Scorers and Scoring Review Panel

A team of researchers from Philip Morris R&D in Neuchâtel (Switzerland) will establish a scoring methodology and perform the scoring on the blinded submissions under the review of an independent Scoring Review Panel.

The sbv IMPROVER Scoring Review Panel* consists of the following experts in the field of systems biology:

  • Prof. Dr. Leonidas Alexopoulos, National Technical University of Athens
  • Prof. Dr. Rudiyanto Gunawan, ETH Zurich
  • Dr. Alberto de la Fuente, Leibniz Institute for Farm Animal Biology

The sbv IMPROVER Scoring Review Panel will review the scoring strategy and process for the Markers of Exposure Response Identification challenge to ensure fairness and transparency.

*Additional members may be added to the Scoring Review Panel during the open phase of the challenge.

Procedures

Blinded scoring

Submissions will be anonymized before scoring, so that neither the scorers nor the Scoring Review Panel has access to the identity of the participating teams or their members. To help maintain this anonymity, submissions must not include any information about the identity or affiliations of the team or its members.
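
A minimal sketch of one way such blinding could be implemented is given below. The blind-code format, function, and file names are assumptions for illustration; the organizers' actual procedure is not specified here.

```python
# Sketch of blinding submissions before scoring. The code format and
# data layout are assumptions; only the organizers would keep the
# mapping from blind codes back to team identities.

import secrets

def anonymize(submissions):
    """Replace each team name with a random blind code."""
    mapping, blinded = {}, []
    for team, prediction_file in submissions:
        code = "SUB-" + secrets.token_hex(4)  # e.g. 'SUB-3f9a1c2b'
        mapping[code] = team                  # kept by organizers only
        blinded.append((code, prediction_file))
    return blinded, mapping

blinded, key = anonymize([("Team Alpha", "alpha_predictions.csv"),
                          ("Team Beta", "beta_predictions.csv")])
print([code for code, _ in blinded])
```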

Submissions and significance

For each sub-challenge, a minimum of five submissions is required, and at least one submission must be statistically significant in at least one metric at a significance level of P = 0.05. If these requirements are not met for a particular sub-challenge, the challenge organizers reserve the right not to declare a best performer for that sub-challenge, in accordance with the Challenge Rules.
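
The rules do not state how the P-value is computed. The sketch below shows one common possibility, a label-permutation test, applied to hypothetical data; whether the organizers use this exact procedure is an assumption.

```python
# One common way to attach a P-value to a classification score is a
# label-permutation test; whether the organizers use this exact
# procedure is not stated, and all data below are hypothetical.

import random

def permutation_p_value(gold, predicted, n_perm=10_000, seed=0):
    """Fraction of shuffled gold labels scoring at least as well
    as the actual submission (with add-one correction)."""
    rng = random.Random(seed)

    def accuracy(labels):
        return sum(g == p for g, p in zip(labels, predicted)) / len(labels)

    observed = accuracy(gold)
    shuffled = list(gold)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if accuracy(shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

gold = ["a", "b"] * 10              # 20 hidden labels
pred = ["a", "b"] * 9 + ["b", "a"]  # 18 of 20 correct
print(permutation_p_value(gold, pred))  # small P => significant at 0.05
```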

Timelines

The scoring process will start as soon as the relevant sub-challenge has closed (timings are given in the Challenge Rules). If all conditions are met and the open period of the sub-challenge is not extended, the anonymized ranking of the participating teams will be disclosed, and the best performers of each sub-challenge will be notified by email in May 2016.
