Machine Learning
I co-founded Google's People + AI Research initiative for understanding and improving human/AI interaction. Key goals of my research are to broaden participation in machine learning and to ensure it reflects our values. I'm fascinated by the questions of interpretability, controllability, and transparency that underlie many contemporary concerns about AI.
The work described here was done in collaboration
with multiple teams at Google.
Publications
Visualizing and Measuring the Geometry of BERT
Human-centered tools for coping with imperfect algorithms during medical decision-making
The What-If Tool: Interactive Probing of Machine Learning Models
Deep learning of aftershock patterns following large earthquakes
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
SmoothGrad: removing noise by adding noise
Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation
Embedding Projector: Interactive Visualization and Interpretation of Embeddings
Direct-Manipulation Visualization of Deep Networks
How to Use t-SNE Effectively
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
Ad Click Prediction: a View from the Trenches
As part of the Google Brain team, my colleagues and I have worked to create tools for inspecting the inner workings of ML models. We have also seen that training data is often a key to understanding, a point of view summarized in the slogan, "Don't just debug the model, debug the data." The image above shows the Embedding Projector, a visualization tool for rich interactive exploration of the high-dimensional data sets that are common in machine learning. My colleagues and I have used this tool as a kind of scientific instrument, leading to insights into state-of-the-art systems. For example, an investigation of a machine translation model found suggestions of a language-independent representation of meaning.
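To give a flavor of this kind of exploration, here is a minimal sketch, not the Projector's own code, that projects synthetic 128-dimensional vectors down to two dimensions with t-SNE (one of the projections the Projector offers) using scikit-learn and matplotlib; the embeddings and labels are placeholder data.

```python
# Minimal sketch of embedding exploration in the spirit of the
# Embedding Projector. The random vectors below are placeholders
# for real model activations or learned word vectors.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 128))  # stand-in for learned 128-D vectors
labels = rng.integers(0, 5, size=500)     # stand-in for class labels

# t-SNE maps the 128-D points to 2-D while trying to preserve local
# neighborhoods; perplexity controls the effective neighborhood size.
coords = TSNE(n_components=2, perplexity=30,
              random_state=0).fit_transform(embeddings)

plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=8, cmap="tab10")
plt.title("2-D t-SNE view of 128-D embeddings")
plt.show()
```

As the "How to Use t-SNE Effectively" article listed above discusses, apparent cluster sizes and distances in such plots depend heavily on the perplexity setting and should be read with care.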
A second challenge is to explore the actions of complex ML models in the real world. The power, and the challenge, of ML systems is that their behavior is not predefined by a human. The image below shows an application we created for monitoring changes in a large-scale, mission-critical ML system. Here we attacked the problem of understanding how a difference between two versions of a model corresponds to a change in its output.
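The sketch below captures the underlying idea in miniature; it is not our internal tooling. It treats two versions of a model as black-box scoring functions (the names old_model and new_model are hypothetical) and flags the evaluation examples where their outputs diverge the most.

```python
# Minimal sketch: score the same evaluation inputs under two model
# versions and summarize where their outputs diverge. `old_model` and
# `new_model` are hypothetical stand-ins for any callables that return
# a per-example score.
import numpy as np

def compare_models(old_model, new_model, inputs, threshold=0.1):
    """Return the fraction of inputs whose score moved by more than
    `threshold`, plus the indices of those inputs for closer inspection."""
    old_scores = np.asarray([old_model(x) for x in inputs])
    new_scores = np.asarray([new_model(x) for x in inputs])
    deltas = np.abs(new_scores - old_scores)
    changed = np.flatnonzero(deltas > threshold)
    return len(changed) / len(inputs), changed

# Toy usage with made-up scoring functions standing in for model versions.
inputs = np.linspace(0.0, 1.0, 100)
frac, idx = compare_models(lambda x: x,
                           lambda x: x + 0.2 * np.sin(20 * x),
                           inputs)
print(f"{frac:.1%} of examples changed by more than 0.1")
```

Surfacing which examples changed, rather than only an aggregate metric, is what makes this kind of monitoring useful for debugging.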