Participants in CATS4ML will discover and submit adverse images, which we call AI blindspots. Participants will invent new and creative ways to explore an existing publicly available benchmark dataset and discover blindspot examples guided by a list of preselected target labels.
AI blindspots are not the typical adversarial examples defined in the ML literature by Goodfellow et al. (2015): machine-manipulated images designed to fool models while remaining imperceptible to humans. Instead, AI blindspots are unmanipulated (real) images on which humans can reliably agree on a label but which most AI models would mislabel.
AI blindspots are unknown unknowns, e.g. images with visual patterns that cannot be easily distinguished by AI models.
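The definition above can be made concrete as a simple filter: an image is a candidate blindspot when independent human annotators strongly agree on a label that the model contradicts. The sketch below is purely illustrative; the function name, the annotator-label input, and the 0.8 agreement threshold are assumptions, not part of the CATS4ML challenge specification.

```python
from collections import Counter

def is_blindspot_candidate(human_labels, model_label, agreement_threshold=0.8):
    """Flag an image where humans reliably agree but the model disagrees.

    human_labels: labels from independent annotators (hypothetical input format).
    model_label: the model's predicted label for the same image.
    """
    # Find the majority human label and its share of all annotations.
    top_label, count = Counter(human_labels).most_common(1)[0]
    agreement = count / len(human_labels)
    # Candidate blindspot: strong human consensus that the model contradicts.
    return agreement >= agreement_threshold and model_label != top_label

# Nine of ten annotators say "cat", the model says "dog" -> candidate blindspot.
print(is_blindspot_candidate(["cat"] * 9 + ["dog"], "dog"))  # → True
```

The threshold trades off precision against recall: raising it keeps only images with near-unanimous human consensus.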
In the CATS4ML challenge, finding AI blindspots relies on human intuition to surface the unknown unknowns, unlike traditional active learning techniques, which use signals such as model confidence to explore what the model “thinks” it does not know (i.e. the known unknowns).
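To make the contrast concrete, a standard active learning baseline ranks unlabeled images by model uncertainty (least-confidence sampling) and can only ever surface known unknowns: images the model itself flags as uncertain. A confidently wrong prediction, the blindspot case, ranks last under this scheme. This is a minimal sketch of least-confidence sampling, not any specific system used by CATS4ML; the function name and toy probabilities are invented for illustration.

```python
import numpy as np

def least_confidence_ranking(probs: np.ndarray) -> np.ndarray:
    """Rank examples by model uncertainty (the model's "known unknowns").

    probs: (n_examples, n_classes) array of predicted class probabilities.
    Returns example indices sorted from most to least uncertain.
    """
    # Uncertainty = 1 minus the probability of the top predicted class.
    uncertainty = 1.0 - probs.max(axis=1)
    return np.argsort(-uncertainty)

# Toy predictions for three images (rows) over three classes (columns).
probs = np.array([
    [0.98, 0.01, 0.01],  # confident; if wrong, this is a blindspot it never flags
    [0.40, 0.35, 0.25],  # low confidence -> a "known unknown"
    [0.85, 0.10, 0.05],
])
print(least_confidence_ranking(probs))  # → [1 2 0]
```

Note that the confidently mislabeled image (row 0) ranks last, which is exactly why human search is needed to find blindspots.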
Check Participate to learn how to start contributing.