- ReAct: Synergizing Reasoning and Acting in Language Models
- Repository-Level Prompt Generation for Large Language Models of Code
- CodePlan: Repository-level Coding using LLMs and Planning
- Guiding Language Models of Code with Global Context using Monitors
- https://github.com/microsoft/monitors4codegen
- Highlights: Some recent approaches use static analysis (Shrivastava et al., 2022; Ding et al., 2022; Pei et al., 2023) or retrieval (Zhang et al., 2023) to extract relevant code fragments from the global context. These approaches expand the prompt (Shrivastava et al., 2022; Pei et al., 2023; Zhang et al., 2023) or require architecture modifications (Ding et al., 2022) and additional training (Ding et al., 2022; Pei et al., 2023). In comparison, we provide token-level guidance to a frozen LM by invoking static analysis on demand. Our method is complementary to these approaches as they condition the generation by modifying the input to the LM, whereas we apply output-side constraints by reshaping the logits.
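  A minimal sketch of the "output-side constraints by reshaping the logits" idea from the highlight above. Everything here is illustrative: the toy vocabulary, the logit values, and the `static_analysis_allowed_tokens` stub are assumptions standing in for the paper's actual language-server-backed machinery, not the monitors4codegen API.

  ```python
  # Sketch: guide a frozen LM at one decoding step by masking logits to the
  # identifiers that static analysis says are valid in the current context.
  import math

  def softmax(logits):
      m = max(logits.values())
      exps = {tok: math.exp(v - m) for tok, v in logits.items()}
      z = sum(exps.values())
      return {tok: e / z for tok, e in exps.items()}

  def static_analysis_allowed_tokens(context):
      # Hypothetical stand-in for on-demand static analysis; in the paper this
      # role is played by monitors querying the repository's global context.
      return {"read_csv", "read_json"}

  def reshape_logits(logits, allowed):
      # Output-side constraint: keep the frozen LM's scores for allowed tokens,
      # push everything else to -inf so it cannot be sampled.
      return {tok: (v if tok in allowed else float("-inf")) for tok, v in logits.items()}

  # Toy next-token logits from a frozen LM after a prefix like "df = pd."
  raw_logits = {"read_csv": 2.1, "read_json": 1.3, "load": 2.8, "open": 0.4}
  allowed = static_analysis_allowed_tokens(context="df = pd.")
  guided = softmax(reshape_logits(raw_logits, allowed))
  print(guided)  # probability mass now falls only on statically valid identifiers
  ```

  The point of the sketch is the contrast drawn in the highlight: prompt-expansion and retrieval methods change the LM's input, whereas this kind of guidance leaves the input and the model untouched and only filters the output distribution at each step.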
- RepoFusion: Training Code Models to Understand Your Repository
- RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture
- Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design, or: How I learned to start worrying about prompt formatting
- LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning