Popular repositories
- LLMs-Finetuning-Safety (Python), forked from LLM-Tuning-Safety/LLMs-Finetuning-Safety
  We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI’s APIs. (A sketch of this fine-tuning workflow follows the list.)
- YichenBC.github.io (SCSS), forked from RayeRen/acad-homepage.github.io
  AcadHomepage: A Modern and Responsive Academic Personal Homepage
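
Below is a minimal sketch of the kind of fine-tuning workflow the first repository's description refers to, assuming the current OpenAI Python SDK (openai>=1.0). The file name `train.jsonl` and the training data are placeholders, not the repository's actual adversarial examples; see the repo itself for the real pipeline.

```python
# Sketch: launching a gpt-3.5-turbo fine-tuning job via OpenAI's API.
# Assumes openai>=1.0 is installed and OPENAI_API_KEY is set in the environment.
# "train.jsonl" is a placeholder; the repo's 10 adversarial examples are not reproduced here.
from openai import OpenAI

client = OpenAI()

# Upload a JSONL training file in the chat fine-tuning format:
# each line is {"messages": [{"role": "...", "content": "..."}, ...]}
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job; with only ~10 short examples, the
# repository reports a total cost of less than $0.20.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```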