@@ -159,7 +160,7 @@ A Multimodal Automated Interpretability
Understanding a neural model can take many forms. For instance, we might want to know when and how the system relies on sensitive or spurious features, identify systematic errors in its predictions, or learn how to modify the training data and model architecture to improve accuracy and robustness. Today, answering these types of questions often involves significant human effort—researchers must formalize their question, formulate hypotheses about a model’s decision-making process, design datasets on which to evaluate model behavior, then use these datasets to refine and validate hypotheses. As a result, this type of understanding is slow and expensive to obtain, even about the most widely used models.
Automated Interpretability approaches have begun to address the issue of scale. Recently, such approaches have used pretrained language models like GPT-4 (in Bills et al. 2023) or Claude (in Bricken et al. 2023) to generate feature explanations. In earlier work, we introduced MILAN (Hernandez et al. 2022), a captioner model trained on human feature annotations that takes as input a feature visualization and outputs a description of that feature. But automated approaches that use learned models to label features leave something to be desired: they are primarily tools for one-shot hypothesis generation (Huang et al. 2023) rather than causal explanation, they characterize behavior on a limited set of inputs, and they are often low precision.
- We introduce the M ultimodal A utomated I nterpretability A gent (MAIA), aiming to help users understand models. MAIA combines the scalability of automated techniques with the flexibility of human experimentation—It iteratively generates hypotheses, runs experiments that test these hypotheses, observes experimental outcomes, and updates hypotheses until it can answer the user query. MAIA is based on the recent success of our Automated Interpretability Agent (AIA) paradigm (Schwettmann et al. 2023 ) where an LM-based agent interactively probes systems to explain their behavior. We expand this by equipping MAIA with a vision-language model backbone and an API of tools for designing experiments on other systems [add a link to the webpage section describing the tools]. With simple modifications to the user query to the agent, the same modular system can field both "macroscopic" questions like identifying systematic biases in model predictions (see the tench example above), as well as "microscopic" questions like describing individual features (see example below).
+ Our current line of research aims to build tools that help users understand models, while combining the flexibility of human experimentation with the scalability of automated techniques. We introduce the Multimodal Automated Interpretability Agent (MAIA), which designs experiments to answer user queries about components of AI systems. MAIA iteratively generates hypotheses, runs experiments that test these hypotheses, observes experimental outcomes, and updates hypotheses until it can answer the user query. MAIA builds on the Automated Interpretability Agent (AIA) paradigm we introduced in Schwettmann et al. 2023, where an LM-based agent interactively probes systems to explain their behavior. MAIA is equipped with a vision-language model backbone and an API of tools for designing interpretability experiments. With simple modifications to the user query, the same modular system can field both "macroscopic" questions like identifying systematic biases in model predictions (see the tench example above) and "microscopic" questions like describing individual features (see the example below).
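At its core this is a simple control loop: propose an experiment, run it, observe the outcome, and update the working hypothesis. The sketch below illustrates one way such a loop could be written; the function names, the string-based experiment format, and the (experiment, observation) history are illustrative assumptions, not MAIA's actual interface.

```python
# A minimal, self-contained sketch of the hypothesize -> experiment -> update
# loop described above. All names here are illustrative assumptions, not
# MAIA's actual interface.
from typing import Callable, List, Tuple


def interpret(
    user_query: str,
    propose_experiment: Callable[[str, List[Tuple[str, str]]], str],
    run_experiment: Callable[[str], str],
    is_final_answer: Callable[[str], bool],
    max_rounds: int = 10,
) -> str:
    """Iteratively probe the target system until the user query can be answered."""
    history: List[Tuple[str, str]] = []  # (experiment, observation) pairs
    for _ in range(max_rounds):
        # 1. The agent proposes the next experiment, conditioned on the query
        #    and everything observed so far.
        step = propose_experiment(user_query, history)
        if is_final_answer(step):
            return step  # confident enough to answer the query
        # 2. Execute the experiment against the target model component.
        observation = run_experiment(step)
        # 3. Record the outcome so the next hypothesis can be refined.
        history.append((step, observation))
    return "No conclusive answer within the experiment budget."
```

In MAIA, the proposer role is played by the vision-language backbone, and the experiments it writes are programs that call into its tool API.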
@@ -187,7 +188,7 @@ MAIA
- We introduce MAIA, a Multimodal Automated Interpretability Agent. MAIA is a system that uses neural models to automate neural model understanding tasks like feature interpretation and failure mode discovery.
+ MAIA is a system that uses neural models to automate neural model understanding tasks like feature interpretation and failure mode discovery.
It equips a pre-trained vision-language model with a set of tools that support iterative experimentation on subcomponents of other models in order to explain their behavior. These include tools commonly used by human interpretability researchers: tools for synthesizing and editing inputs, for computing maximally activating exemplars from real-world datasets, and for summarizing and describing experimental results.
Interpretability experiments proposed by MAIA compose these tools to describe and explain system behavior.
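To make that composition concrete, below is a hedged sketch of what such a tool API could look like, together with one experiment that uses it. All names and signatures here are assumptions for illustration; they are not the actual MAIA API, and a real implementation would wrap an image generator, the target model, and a describer model behind these calls.

```python
# A hypothetical tool API in the spirit of the tools listed above. Names and
# signatures are assumptions for illustration, not the actual MAIA API.
from dataclasses import dataclass
from typing import List, Protocol


@dataclass
class Exemplar:
    image_path: str    # an image drawn from a real-world dataset
    activation: float  # how strongly it activates the unit under study


class ExperimentTools(Protocol):
    def synthesize_image(self, prompt: str) -> str:
        """Generate a synthetic test input from a text prompt."""

    def edit_image(self, image_path: str, instruction: str) -> str:
        """Apply a controlled edit to an existing test input."""

    def dataset_exemplars(self, unit: str, k: int = 15) -> List[Exemplar]:
        """Return the k most strongly activating real-world images for a unit."""

    def activations(self, unit: str, image_paths: List[str]) -> List[float]:
        """Run the target model and return the unit's activation on each image."""

    def describe(self, image_paths: List[str]) -> str:
        """Summarize what a set of images has in common."""


def contrastive_experiment(tools: ExperimentTools, unit: str, concept: str) -> str:
    """One way an experiment could compose the tools: test whether a unit is
    selective for a concept by removing that concept from a synthetic image."""
    with_concept = tools.synthesize_image(f"a photo of {concept}")
    without_concept = tools.edit_image(with_concept, f"remove the {concept}")
    act_with, act_without = tools.activations(unit, [with_concept, without_concept])
    return (f"activation with '{concept}': {act_with:.2f}; "
            f"after removing it: {act_without:.2f}")
```

Keeping each tool small and composable like this lets the agent assemble many different experiments (synthetic probes, dataset comparisons, targeted edits) from the same handful of primitives.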
@@ -200,7 +201,7 @@
MAIA
-