Scoring

Gold Standard

Each submission will be scored by comparing the predictions to the “gold standard”, i.e., the true class labels of the subjects from whom the microbiome samples were collected.

Scoring Methodology

Predefined metrics will be applied to score the anonymized participants’ predictions. Complementary metrics may be used to evaluate different aspects of the submitted predictions, and the resulting scores will be aggregated to produce the final ranking of participants.
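Purely as an illustration (the actual metrics and aggregation scheme remain undisclosed until scoring is complete, as noted below), one common way to aggregate several metrics is to rank teams on each metric and average the ranks. In this Python sketch, the team names, metric names, and scores are all hypothetical:

    import pandas as pd

    # Hypothetical per-team scores on two complementary metrics (higher is better).
    scores = pd.DataFrame(
        {"metric_a": [0.82, 0.75, 0.90], "metric_b": [0.61, 0.70, 0.58]},
        index=["team_1", "team_2", "team_3"],
    )

    # Rank teams on each metric (1 = best), then average the ranks across metrics.
    ranks = scores.rank(ascending=False)
    final_ranking = ranks.mean(axis=1).sort_values()
    print(final_ranking)  # lower mean rank = better overall standing

Averaging ranks rather than raw scores keeps metrics that live on different scales comparable before they are combined.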

To prevent methods from being optimized toward maximizing specific scoring metrics, the scoring methods and metrics will be disclosed only once the scoring is completed, in accordance with the Challenge Rules.

Once all submissions have been scored and the ranking established, teams will be matched to their respective submission numbers and the winners announced.

Tie Resolution

If several teams obtain the same aggregated score, incentives will be allocated according to the Challenge Rules.

Scorers and Scoring Review Panel

A team of researchers from PMI R&D, Philip Morris Products S.A., in Neuchâtel (Switzerland) will establish a scoring methodology and perform the scoring on the blinded submissions under the review of an independent Scoring Review Panel.

The sbv IMPROVER Scoring Review Panel:

  • will consist of experts in the field of metagenomics and systems biology, whose names will be disclosed during the open phase of the Challenge;
  • will review the scoring strategy and procedure for the Challenge to ensure fairness and transparency.

Procedures

Blinded scoring: Submissions will be anonymized before scoring so that the scoring team does not have access to the identity of the participating teams or their members. To help maintain this anonymity, submissions (e.g., prediction files and write-up) must not include any information regarding the identity or affiliations of the team or its members.

Submissions and significance: A submission consists of one .zip archive containing all the required files in the specified format (see the Template files and the Technical description). At least one submission must be significantly better than random prediction on at least one metric. The threshold score above which a prediction is considered significant will be defined as the 95th percentile of the distribution of random prediction scores: predictions will be scored with the predefined metric(s) and compared against this random score distribution to assess whether they outperform chance. If these requirements are not met, the Challenge organizers retain the right not to declare a best performer, in accordance with the Challenge Rules.
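The following sketch shows how such a significance threshold can be estimated empirically; the choice of metric (Matthews correlation coefficient), the binary class labels, and the number of random draws are assumptions made for illustration only:

    import numpy as np
    from sklearn.metrics import matthews_corrcoef

    rng = np.random.default_rng(42)

    # Hypothetical gold standard: binary class labels for 100 subjects.
    gold = rng.integers(0, 2, size=100)

    # Null distribution: score a large number of uniformly random predictions.
    null_scores = np.array([
        matthews_corrcoef(gold, rng.integers(0, 2, size=gold.size))
        for _ in range(10_000)
    ])

    # The 95th percentile of the null distribution is the significance threshold:
    # a submission scoring above it is considered better than random.
    threshold = np.percentile(null_scores, 95)
    print(f"Significance threshold (95th percentile): {threshold:.3f}")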

Timelines 

The scoring process will start as soon as the Challenge closes (see the Challenge Rules). If all conditions are met, the anonymized ranking of the participating teams will be disclosed, and the best performers will be informed by email.

Awards & Opportunities 

Contribute to helping the scientific community benchmark computational methods objectively and establish standards and best practices in computational metagenomics data analysis.

  • Win a 2,000 USD cash prize, awarded to the three best-performing teams of each sub-challenge (see Rules).
  • Contribute to writing peer-reviewed scientific article(s) describing the outcome of the Challenge.

In addition, solving this Challenge provides you with an opportunity to:

  • Show your data science skills
  • Receive an independent assessment of your methods
  • Collaborate with other researchers and grow your professional network
