From 7fb98eb8e3359269ca7cb9925795742be3a3174e Mon Sep 17 00:00:00 2001
From: Sarah Schwettmann
Date: Tue, 9 Apr 2024 22:38:52 -0400
Subject: [PATCH] Update index.html

---
 index.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/index.html b/index.html
index cb6adad..fe00460 100644
--- a/index.html
+++ b/index.html
@@ -100,7 +100,7 @@

A Multimodal Automated Interpretability
-
+

Understanding an AI system can take many forms. For instance, we might want to know when and how the system relies on sensitive or spurious features, identify systematic errors in its predictions, or learn how to modify the training data and model architecture to improve accuracy and robustness. Today, answering these types of questions often requires significant effort from researchers, who must synthesize the outcomes of different experiments performed with a variety of tools.