Generating embeddings for new training samples #3
How to generate embeddings for my training set?
Generating embeddings for new training datasets is currently not supported out of the box, but it should not require much code. The following snippet runs detection on an image, crops each detection, and post-processes the embedding/image to produce a single annotated sample per detection.

```python
import torchvision

from fba.anonymizer.cse import CSEDetector
from fba.anonymizer.post_process import process_cse_detections

input_path = "coco_val2017_000000001000.jpg"
im = torchvision.io.read_image(input_path, mode=torchvision.io.ImageReadMode.RGB)

# Run the CSE detector on the image.
detector = CSEDetector()
detections = detector(im)
if detections is None:  # No person detected in the image.
    exit()
# Shared CSE embedding map returned alongside the per-detection outputs.
embed_map = detections["embed_map"]

# Post-processing configuration: target crop size and bounding-box expansion/filtering.
post_process_cfg = dict(
    target_imsize=(288, 160),
    exp_bbox_cfg=dict(percentage_background=0.3, axis_minimum_expansion=.1),
    exp_bbox_filter=dict(minimum_area=32*32, min_bbox_ratio_inside=0, aspect_ratio_range=[0, 99999]),
)
detections = process_cse_detections(
    im, **detections, **post_process_cfg
)

# Inspect the post-processed samples: one dict per detection.
for i, sample in enumerate(detections):
    print("Detection:", i)
    for key, value in sample.items():
        if key == "exp_bbox":
            print(key, value)
        elif key == "N":
            continue
        else:
            print(key, value.shape, value.dtype)
```

Hope this helps out!
@hukkelas Great! Thank you for your quick answer!
My pleasure :)