diff --git a/_pages/schedule.md b/_pages/schedule.md
index a5ca99c..5bec174 100644
--- a/_pages/schedule.md
+++ b/_pages/schedule.md
@@ -8,25 +8,31 @@ nav_order: 4
 # Schedule
 
-_:warning: This is a tentative schedule, the organizers are still finializing the details._
-
-
 |               |    |
 |-------------|:-------------|
 |__AM__                    ||
-| 8:45-9:00 | Opening Remarks |
-| 9:00-9:30 | Invited Talk 1 |
-| 9:30-10:00 | Invited Talk 2 |
+| 8:40-9:00 | Opening Remarks |
+| 9:00-9:30 | Invited Talk 1 - Tatsunori Hashimoto |
+| 9:30-10:00 | Invited Talk 2 - Nazneen Rajani |
 | 10:00-10:15 | Break |
-| 10:15-10:45 | Invited Talk 3 |
-| 10:45-11:30 | Panel 1 |
+| 10:15-10:45 | Invited Talk 3 - Fei Xia |
+| 10:45-11:30 | Panel 1: Key Techniques, Insights, and Challenges in Building Instruction-following Models |
+| | Panelists: Alex Tamkin, Albert Webson, Fei Xia |
 
 |-------------|:-------------|
 |__PM__||
-| 13:00-14:00 | Poster |
-| 14:00-14:30 | Invited Talk 4 |
-| 14:30-15:00 | Invited Talk 5 |
+| 13:00-14:00 | Poster Session |
+| 14:00-14:30 | Invited Talk 4 - Sara Hooker |
+| 14:30-15:00 | Invited Talk 5 - Alex Tamkin |
 | 15:00-15:15 | Break |
-| 15:15-15:45 | Invited Talk 6 |
-| 15:45-16:30 | Panel 2 |
-| 16:30-17:00 | Contributed Talks |
-| 17:00-17:15 | Closing Remarks |
+| 15:15-16:00 | Panel 2: Open and Collaborative Strategies for Large Language Model Adaptation |
+| | Panelists: Nazneen Rajani, Colin Raffel, Hao Zhang, Tatsunori Hashimoto |
+| 16:00-17:20 | Oral Presentations (10 min each) |
+| | 1. Understanding Hidden Context in Preference Learning: Consequences for RLHF |
+| | 2. Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game |
+| | 3. Understanding the Effects of RLHF on LLM Generalisation and Diversity |
+| | 4. Learning Interactive Real-World Simulators |
+| | 5. Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks |
+| | 6. Self-RAG: Self-reflective Retrieval Augmented Generation |
+| | 7. Delve into PPO: Implementation Matters for Stable RLHF |
+| | 8. FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets |
+| 17:20-17:30 | Closing Remarks |