From 97f4d58982069f3e2d91f2f0c8052929e795c4ad Mon Sep 17 00:00:00 2001
From: Ashima Suvarna <31371748+asuvarna31@users.noreply.github.com>
Date: Mon, 9 Dec 2024 14:40:46 -0800
Subject: [PATCH] Update index.html

---
 index.html | 92 +++++++++++++++++++++++++++++++++---------------------
 1 file changed, 56 insertions(+), 36 deletions(-)

diff --git a/index.html b/index.html
index 4cf3579..5576ebb 100644
--- a/index.html
+++ b/index.html
@@ -42,40 +42,6 @@

UCLA NLP Seminar Series

-        <h2>Talk Schedule for Fall 2024</h2>
-        <table>
-          <tr><th>Date</th><th>Speaker</th><th>Title</th></tr>
-          <tr><td>Oct 25</td><td>Elisa Kreiss</td><td>Translating images into words: From truthful to useful</td></tr>
-          <tr><td>Nov 1</td><td>Jieyu Zhao</td><td>Building Accountable NLP Models for Social Good</td></tr>
-          <tr><td>Nov 5</td><td>Robin Jia</td><td>Auditing, Understanding, and Leveraging Large Language Models</td></tr>
-        </table>

Talk Schedule for Winter 2025

@@ -89,7 +55,7 @@

Talk Schedule for Winter 2025

- + @@ -122,9 +88,63 @@

Talk Schedule for Winter 2025

- +

🚀 Upcoming Talks

+
+
+
+
JAN
+
10
+
+
+ + Prof. Swabha Swayamdipta + +
+
+

Ensuring Safety and Accountability in LLMs, Pre- and Post-Training

+

Person IconSwabha Swayamdipta

+

Clock IconJan 10, 2025, 2:00 PM

+

Location Icon289, Engineering VI

+ +
+ +
+
+

Speaker Bio: Swabha Swayamdipta is an Assistant Professor of Computer Science and a Gabilan Assistant Professor at the University of Southern California. Her research interests lie in natural language processing and machine learning, with a primary focus on the evaluation of generative models of language, understanding the behavior of language models, and designing language technologies for societal good. At USC, Swabha leads the Data, Interpretability, Language, and Learning (DILL) Lab. She received her PhD from Carnegie Mellon University, followed by a postdoctoral position at the Allen Institute for AI. Her work has received awards at EMNLP, ICML, NeurIPS, and ACL. Her research is supported by awards from the National Science Foundation, the Allen Institute for AI, and a Rising Star Award from Intel Labs.

+

Abstract: As large language models have become ubiquitous, it has proven increasingly challenging to ensure their accountability and safe deployment. In this talk, I will discuss the importance of ensuring the safety, responsibility, and accountability of Large Language Models (LLMs) throughout all stages of their development: pre-training, post-training evaluation, and deployment. First, I will present the idea of a unique LLM signature that can identify the model to ensure accountability. Next, I will present our recent work on reliably evaluating LLMs through our novel formulation of generation separability, and how this could lead to more reliable generation. Finally, I will present some ongoing work that demonstrates LLMs' ability to understand, but not generate, unsafe or untrustworthy content.

+
+
+

Organizing Committee

Jan 10 Swabha Swayamdipta Ensuring Safety and Accountability in LLMs, Pre- and Post-Training
Jan 17