Crowdsourcing Adverse Test Sets to Help Surface AI Blindspots
Participate

To start participating, sign up and accept the terms and conditions.

To participate, you need to submit a sample set of images drawn from the target images for the target labels.

  • Each submission should be a CSV file;
  • Each row of the CSV file should contain only an image ID, the corresponding target label, and a brief text explaining the discovery rationale (see the CSV sketch after this list);
  • Each submission should contain only image IDs from the OID dataset and labels from the target label set; check the example submission file;
  • Synthetically manipulated images are not accepted;
  • Each submission should not exceed the participant's submission quota (see Submission for an explanation of the quota);
  • Participants earn points by submitting adverse examples: image-label pairs for which human verification disagrees with the machine prediction (see the classification sketch after this list), e.g.
    • human verification = Y, machine prediction = N (false negative);
    • human verification = N, machine prediction = Y (false positive).
  • If multiple participants submit the same image-label pair, the point is awarded to the first participant to submit it (based on the timestamp in the submitted-images queue);
  • The image-label pairs submitted by all participants will be published on the challenge website and will be visible on the Queue page;
  • Participants earn bonus submission quota for every image-label pair that earns them a point;
  • The Leaderboard is updated continuously as the image-label pairs are verified.
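
As a rough illustration of the expected submission format, the sketch below writes a three-column CSV. The image IDs, labels, and rationale strings are placeholders, and the exact column layout (including whether a header row is required) should be taken from the challenge's example submission file.

```python
import csv

# Placeholder rows: image IDs, target labels, and rationales are illustrative only.
# The authoritative format is the challenge's example submission file.
rows = [
    ("0001eeaf4aed83f9", "Bicycle", "Partially occluded bicycle; likely missed by the model"),
    ("000a1249af2bc5f0", "Dog", "Dog costume on a person; likely false positive"),
]

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for image_id, label, rationale in rows:
        # One image ID, its target label, and a brief discovery rationale per row.
        writer.writerow([image_id, label, rationale])
```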
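
As a minimal sketch of how an adverse example is defined, the helper below classifies an image-label pair as a false negative or false positive based on the disagreement between human verification and the machine prediction. The function name is illustrative and not part of the challenge tooling.

```python
from typing import Optional

def adverse_example_type(human_verification: bool, machine_prediction: bool) -> Optional[str]:
    """Classify an image-label pair by human/machine disagreement.

    Returns None when the human verification and the machine prediction agree,
    i.e. the pair is not an adverse example and earns no point.
    """
    if human_verification and not machine_prediction:
        return "false negative"  # human verification = Y, machine prediction = N
    if machine_prediction and not human_verification:
        return "false positive"  # human verification = N, machine prediction = Y
    return None

# Example: the model misses a label that a human confirms -> "false negative".
print(adverse_example_type(human_verification=True, machine_prediction=False))
```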

Participants are free to use any creative sampling method for discovering images in the OID dataset, including generating synthetic manipulations of the original OID images to help identify both false positives and false negatives; note, however, that only original, unmanipulated OID image IDs may be submitted.

Check the Overview page to learn about the types of adverse examples.

Check the Rules page for the full rules of this challenge.