- Full Fine-Tuning
- Adapter-Tuning
- LoRAs
  - Initialization
    - PiSSA
    - OLoRA
    - EVA
    - LoftQ
  - rsLoRA
  - DoRA
  - LoHa
  - LoKr
  - AdaLoRA
  - X-LoRA
  - OFT
  - BOFT
  - Llama-Adapter
  - HRA
  - Bone
- Prompting
  - Prompt tuning
  - Prefix tuning
  - P-tuning
  - Multitask prompt tuning
  - Context-Aware Prompt Tuning (CPT)
- Dreambooth
Large Model Fine-Tuning
Adapter-Tuning
Small adapter modules are inserted at specific layers or positions of a pretrained model. During fine-tuning, training mainly updates the parameters of these adapters while most of the pretrained model's parameters stay frozen. This leverages the pretrained model's strong representations to adapt it to a specific task at a much lower computational and data cost, improving performance on the new task while also mitigating overfitting to some extent and improving generalization.
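To make this concrete, here is a minimal PyTorch sketch of adapter-tuning. The `Adapter` class, the `bottleneck_dim` parameter, and the `FrozenLayerWithAdapter` wrapper are illustrative names chosen for this example, not any particular library's API; the point is the core pattern of freezing the pretrained weights and training only a small bottleneck module.

```python
# Minimal adapter-tuning sketch (assumed names, not a library API).
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()
        # Initialize the up-projection to zero so the adapter starts as an
        # identity mapping and does not disturb the pretrained representations
        # at the beginning of fine-tuning.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

class FrozenLayerWithAdapter(nn.Module):
    """Wraps a pretrained sublayer; only the adapter's parameters train."""
    def __init__(self, pretrained: nn.Module, hidden_dim: int):
        super().__init__()
        self.pretrained = pretrained
        for p in self.pretrained.parameters():
            p.requires_grad = False  # keep the pretrained weights fixed
        self.adapter = Adapter(hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.pretrained(x))

# Usage: a Linear layer stands in for one pretrained transformer sublayer.
layer = FrozenLayerWithAdapter(nn.Linear(768, 768), hidden_dim=768)
trainable = [p for p in layer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)  # only adapter params

x = torch.randn(4, 768)
loss = layer(x).pow(2).mean()  # dummy loss for illustration
loss.backward()
optimizer.step()
```

In a full transformer, such adapters are typically inserted after the attention and/or feed-forward sublayer of each block, so the number of trainable parameters stays a small fraction of the model's total.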
Prompting