Learning from Contrastive Examples.

Sandra Zilles
University of Regina

Machine learning can greatly benefit from providing learning algorithms with pairs of contrastive training examples---typically pairs of instances that differ only slightly (under some metric), yet have different class labels. Intuitively, the difference in the instances serves as a means of explaining the difference in the class labels. This presentation proposes a theoretical framework in which the effect of various types of contrastive examples can be studied formally and empirically. The focus is on the sample complexity, i.e., the worst-case number of training examples required for achieving highly accurate classification, and how this complexity is influenced by the choice of contrastive examples. For certain choices of metrics, contrastive examples are shown to reduce the sample complexity. On the other hand, metric-independent lower bounds on the sample complexity demonstrate the limitations of learning with contrastive examples. (Joint work with Farnam Mansouri, Valentio Iverson, Mohamadsadegh Khosravani, Hans U. Simon, Adish Singla, and Yuxin Chen.)
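To make the notion concrete, here is a minimal illustrative sketch (not from the talk itself): a contrastive pair for a conjunction concept over binary vectors, using Hamming distance as the metric. The target concept, the chosen instances, and the function names are all hypothetical, invented only to illustrate how two minimally different instances with different labels can point to a relevant feature.

```python
def hamming(x, y):
    """Hamming distance between two equal-length binary tuples."""
    return sum(a != b for a, b in zip(x, y))

def conjunction(x, relevant=(0, 2)):
    """Hypothetical target concept: label 1 iff all relevant coordinates are 1."""
    return int(all(x[i] == 1 for i in relevant))

# A contrastive pair: identical except in coordinate 2.
x_pos = (1, 0, 1, 1)
x_neg = (1, 0, 0, 1)

assert hamming(x_pos, x_neg) == 1                 # minimally different instances
assert conjunction(x_pos) != conjunction(x_neg)   # yet differently labeled
# The single differing coordinate (index 2) is thereby revealed to be
# relevant to the target conjunction -- the intuition behind a contrastive
# pair "explaining" the difference in class labels.
```

In this toy setting, each such pair immediately identifies one relevant variable, which is the intuition behind contrastive examples reducing sample complexity for suitable metrics.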