We read every piece of feedback, and take your input very seriously.
To see all available qualifiers, see our documentation.
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback Active Prompting with Chain-of-Thought for Large Language Models More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models Guiding Large Language Models via Directional Stimulus Prompting When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment
The text was updated successfully, but these errors were encountered:
https://arxiv.org/pdf/2302.12813.pdf Augment with search, also rl
https://arxiv.org/pdf/2302.12246.pdf CoT bad for complex reasoning Selecting exemplars, seems complex
https://arxiv.org/pdf/2302.11520.pdf Generate partial, helping stimulus prompt, not LP
https://arxiv.org/pdf/2210.01478.pdf Moral decision making, maybe add reference in reliability section
https://arxiv.org/pdf/2302.12173.pdf Very cool, about different types of injections, direct, indirect, etc.
https://arxiv.org/pdf/2303.11315.pdf Good fit for learn prompting Deal w knowledge conflict and prediction with abstention Through prompting
Sorry, something went wrong.
No branches or pull requests
Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback
Active Prompting with Chain-of-Thought for Large Language Models
More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models
Guiding Large Language Models via Directional Stimulus Prompting
When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment
The text was updated successfully, but these errors were encountered: