Release codes of incremental learning and DoubleAdapt #1560
base: main
Conversation
@MogicianXD please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.
Contributor License Agreement
This Contribution License Agreement ("Agreement") is agreed to by the party signing below ("You"),
@microsoft-github-policy-service agree
Add a new baseline for benchmarks_dynamic.
Description
This PR adds DoubleAdapt, a new online retraining algorithm based on incremental learning.
The paper has been accepted by KDD 2023. [arXiv]
Motivation and Context
The existing baselines in benchmarks_dynamic, RR and DDG-DA, incur heavy time and space costs: their training time grows with the ever-enlarging training data, which in turn makes hyperparameter tuning and retraining-algorithm selection unbearably slow.
An alternative, known as incremental learning, is to fine-tune the model only on the latest incremental data.
However, as the stock market evolves dynamically, the distribution of future data can differ from that of the incremental data, which undermines the effectiveness of incremental updates. To address this challenge, we propose DoubleAdapt to mitigate the effects of such distribution shifts.
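To make the incremental setting concrete, here is a minimal sketch (not code from this PR) of the naive rolling fine-tune loop that incremental learning performs; the stand-in linear model, data shapes, and toy task stream are illustrative assumptions. DoubleAdapt would wrap each such update with meta-learned data and model adapters to counter the distribution shift noted above.

```python
# Minimal sketch of naive incremental learning: fine-tune only on each new
# incremental batch instead of retraining on the whole enlarged history.
# Model, shapes, and the task stream are toy assumptions for illustration.
import torch
import torch.nn as nn

def naive_incremental_update(model, optimizer, X_incr, y_incr, n_epochs=1):
    """Fine-tune `model` only on the latest incremental data (X_incr, y_incr)."""
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(n_epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(X_incr).squeeze(-1), y_incr)
        loss.backward()
        optimizer.step()

# Toy rolling tasks: (incremental features, incremental labels, test features).
torch.manual_seed(0)
task_stream = [(torch.randn(64, 16), torch.randn(64), torch.randn(32, 16))
               for _ in range(5)]

model = nn.Linear(16, 1)  # stand-in for a forecast model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for X_incr, y_incr, X_test in task_stream:
    # Under distribution shift, updating on X_incr alone may transfer poorly
    # to X_test; this gap is what DoubleAdapt's adapters aim to close.
    naive_incremental_update(model, optimizer, X_incr, y_incr)
    model.eval()
    with torch.no_grad():
        preds = model(X_test)
```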
In this PR, we also provide the implementation of naive incremental learning via the argument --naive True.
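For illustration, a hypothetical sketch of how such a --naive switch could dispatch between the naive baseline and DoubleAdapt; the trainer class names are assumptions, not this PR's actual API.

```python
# Hypothetical dispatch on `--naive True` / `--naive False` (passed as a
# string, matching the usage above); class names are illustrative stand-ins.
import argparse

class NaiveIncrementalTrainer:   # hypothetical stand-in, not this PR's class
    def run(self):
        print("plain fine-tuning on each incremental batch")

class DoubleAdaptTrainer:        # hypothetical stand-in, not this PR's class
    def run(self):
        print("meta-learned data/model adaptation around each update")

parser = argparse.ArgumentParser(description="incremental-learning benchmark")
parser.add_argument("--naive", type=lambda s: s.lower() == "true", default=False)
args = parser.parse_args()

trainer = NaiveIncrementalTrainer() if args.naive else DoubleAdaptTrainer()
trainer.run()
```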
How Has This Been Tested?
Run pytest qlib/tests/test_all_pipeline.py under the parent directory of qlib.
Screenshots of Test Results (if appropriate):
Types of changes