OSPP: Development of Federated Incremental Learning for Label Scarcity: Based on KubeEdge-Ianvs #143
Conversation
force-pushed from 5a1a796 to 1b9b881
Signed-off-by: Marchons <[email protected]>
new file: core/testcasecontroller/algorithm/paradigm/federated_learning/federeated_learning.py
force-pushed from 32f7aff to 8eaab3c
Three CI issues remain to be resolved, mainly Pylint failures; see https://github.com/kubeedge/ianvs/actions/runs/11137579503?pr=143
[APPROVALNOTIFIER] This PR is NOT APPROVED.
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Signed-off-by: Marchons <[email protected]>
Signed-off-by: Yu Fan <[email protected]>
Signed-off-by: Marchons <[email protected]>
def augment_for_cifar(self, images: np.ndarray):
    return self.augment_shift(self.augment_mirror(images), 4)

def augment_for_svhn(self, images: np.ndarray):
Unused dataset augmentation code should be removed
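For context, the bodies of `augment_mirror` and `augment_shift` are not shown in this diff; the following is a minimal NumPy sketch of what mirror-and-shift augmentation of this kind typically does. The pad-then-random-crop behaviour and the exact semantics are assumptions, not the project's actual implementation:

```python
import numpy as np

def augment_mirror(images: np.ndarray) -> np.ndarray:
    """Randomly flip each image horizontally with probability 0.5 (assumed behaviour)."""
    flip = np.random.rand(len(images)) < 0.5
    out = images.copy()
    out[flip, :, ::-1] = out[flip]  # reverse the width axis of (N, H, W, C) batches
    return out

def augment_shift(images: np.ndarray, pad: int) -> np.ndarray:
    """Pad each image by `pad` pixels and take a random crop at the original size."""
    n, h, w, _ = images.shape
    padded = np.pad(images, ((0, 0), (pad, pad), (pad, pad), (0, 0)), mode="reflect")
    out = np.empty_like(images)
    for i in range(n):
        top = np.random.randint(0, 2 * pad + 1)
        left = np.random.randint(0, 2 * pad + 1)
        out[i] = padded[i, top:top + h, left:left + w]
    return out
```

Composed as `augment_shift(augment_mirror(batch), 4)`, this mirrors the CIFAR snippet above, where a 4-pixel shift is the conventional choice for 32×32 images.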
self.rand_augment = Rand_Augment()
if self.dataset_name in ["cifar10", "cifar100", "svhn"]:
    self.input_shape = (32, 32, 3)
elif self.dataset_name == "stl10":
Unused dataset augmentation code should be removed
information needed to compute system metrics.
"""
# init client, wait for connection
# self.init_client()
Annotated code should be removed
if not len(clients):
    return self.weights
self.total_size = sum([c.num_samples for c in clients])
# print(next(iter(clients)).weights)
Annotated code should be removed
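The snippet above guards against an empty client list and computes the total sample count, which suggests a sample-weighted (FedAvg-style) aggregation. A minimal sketch of that aggregation under assumed structures — `ClientInfo`, its field names, and plain NumPy weight arrays are illustrative, not the project's actual types:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ClientInfo:
    num_samples: int
    weights: list  # one NumPy array per model layer

def aggregate(clients: list) -> list:
    """Average client weights, each client weighted by its share of the samples."""
    total_size = sum(c.num_samples for c in clients)
    n_layers = len(clients[0].weights)
    agg = []
    for layer in range(n_layers):
        # weighted sum over clients for this layer
        layer_avg = sum(
            c.weights[layer] * (c.num_samples / total_size) for c in clients
        )
        agg.append(layer_avg)
    return agg
```

With one client holding 1 sample at weights `[0, 0]` and another holding 3 samples at `[4, 8]`, the aggregate is `[3, 6]`, i.e. the larger client dominates in proportion to its data.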
prob = tf.nn.softmax(pred, axis=1)
pred = tf.argmax(prob, axis=1)
pred = tf.cast(pred, dtype=tf.int32)
# pred = tf.cast(tf.argmax(logits, axis=1), tf.int32)
Annotated code should be removed
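Since softmax is monotonic, taking the `argmax` of the probabilities gives the same result as taking the `argmax` of the raw logits, which is why the commented-out one-liner in the snippet is equivalent to the three-line version. A NumPy sketch of the same logits-to-predictions conversion:

```python
import numpy as np

def predict_classes(logits: np.ndarray) -> np.ndarray:
    """Convert (N, num_classes) logits to int32 class predictions."""
    # Numerically stabilised softmax; monotonic, so the argmax is unchanged.
    shifted = logits - logits.max(axis=1, keepdims=True)
    prob = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    return prob.argmax(axis=1).astype(np.int32)
```

For any input, `predict_classes(logits)` equals `logits.argmax(axis=1)`, so in production the softmax step can simply be dropped, as the review comment implies.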
# print(f' grad_diff shape {grad.shape} and type(grad) {type(grad)}')
opt.apply_gradients(zip([grad], [dummy_data]))

# if iter == self.Iteration - 1:
Annotated code should be removed
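Note that `opt.apply_gradients(zip([grad], [dummy_data]))` takes an optimizer step on `dummy_data` itself rather than on model weights — the core move in gradient-inversion style reconstruction. A toy NumPy sketch of that idea, descending a dummy tensor under a quadratic objective (the objective is purely illustrative, not the project's loss):

```python
import numpy as np

def optimise_dummy(target: np.ndarray, lr: float = 0.1, steps: int = 200) -> np.ndarray:
    """Gradient-descend a dummy tensor toward `target` under ||dummy - target||^2."""
    dummy = np.zeros_like(target)
    for _ in range(steps):
        grad = 2.0 * (dummy - target)  # gradient of the quadratic loss w.r.t. dummy
        dummy -= lr * grad             # what apply_gradients does for plain SGD
    return dummy
```

Each step shrinks the error by a constant factor (0.8 here), so after a few hundred iterations the dummy tensor converges to the target, analogous to dummy data converging toward the inputs that produced the observed gradients.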
# print(f"in evalute data: {data}")
for i, (x, y) in enumerate(data):
    logits = self.model(x, training=False)
    # prob = tf.nn.softmax(logits, axis=1)
Annotated code should be removed
self.fc = Dense(num_classes, activation='softmax')

def call(self, inputs):
    # print(type(self.feature))
Annotated code should be removed
import tensorflow as tf
import keras
# import keras
Annotated code should be removed
class_labels = np.unique(y_train)  # get all class labels
train_class_dict = {label: [] for label in class_labels}
test_class_dict = {label: [] for label in class_labels}
# train_cnt = 0
Annotated code should be removed
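The snippet above builds one empty bucket per class for the train and test splits. A short NumPy sketch of the pattern; filling the buckets with sample indices is an assumption about what the subsequent (unshown) code does:

```python
import numpy as np

def split_indices_by_class(y: np.ndarray) -> dict:
    """Group sample indices by class label."""
    class_labels = np.unique(y)                    # all class labels present
    class_dict = {label: [] for label in class_labels}
    for idx, label in enumerate(y):
        class_dict[label].append(idx)
    return class_dict
```

Per-class buckets like this are the usual first step for building non-IID federated splits, where each client is assigned samples from only a subset of the classes.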
@chenhaorui0768: changing LGTM is restricted to collaborators.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What type of PR is this?
/kind feature
What this PR does / why we need it:
This PR adds two main features:
In addition, this PR adds examples of both paradigms.
Which issue(s) this PR fixes:
Related issue: #97
Fixes #