Current Research

Explainable Artificial Intelligence

This is a DARPA-funded project I am working on at the CoDaS Laboratory (link).

Using Bayesian teaching, our goal is to build a system that explains opaque models to people through examples, where the examples are small, carefully chosen subsets of the original training data. We evaluate the efficacy of this approach to model explanation through human experiments run on Amazon Mechanical Turk.
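The core idea of Bayesian teaching is to select the subset of training data that maximizes the probability a Bayesian learner would infer the target model (or label) after seeing only that subset. The toy sketch below illustrates this selection principle on made-up numbers; the hypotheses, likelihoods, and pool are all illustrative assumptions, not the project's actual models or data.

```python
import itertools
import math

# Toy setup: two candidate hypotheses (models) and a pool of labeled
# examples. All names and numbers here are illustrative only.
hypotheses = {
    "target": {"cat": 0.9, "dog": 0.1},        # P(example | hypothesis)
    "alternative": {"cat": 0.4, "dog": 0.6},
}
pool = ["cat", "cat", "dog", "cat", "dog"]     # candidate teaching examples

def learner_posterior(subset, target="target"):
    """Posterior probability a Bayesian learner assigns to the target
    hypothesis after seeing `subset`, under a uniform prior."""
    scores = {h: math.prod(lik[x] for x in subset)
              for h, lik in hypotheses.items()}
    return scores[target] / sum(scores.values())

# Bayesian teaching: choose the size-2 subset that maximizes the
# learner's posterior belief in the target hypothesis.
best = max(itertools.combinations(pool, 2), key=learner_posterior)
print(best, round(learner_posterior(best), 3))
```

In this toy case the best teaching set is two "cat" examples, since they are far more likely under the target hypothesis than under the alternative. Real applications replace the enumerated hypotheses with trained classifiers and the exhaustive subset search with approximate inference.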

I design and evaluate teaching models that make neural-network image classifiers (e.g., ResNet) interpretable to human users at the category, datum, and feature levels.