
Openpose model with hand/face #5

Open
huchenlei opened this issue Apr 22, 2024 · 5 comments

@huchenlei

Hi folks,

Awesome work on improving the control map alignment! I think the openpose full control type should benefit the most from this new work, as previously the control over hands/faces was not very good. Is there any plan to train such an openpose model soon?

@liming-ai
Owner


Thank you for raising this question. We are working hard to support more conditions (including 2D human pose) and to train an SDXL version of the model. Please stay tuned for subsequent updates!

@huchenlei
Author


I think one more thing worth mentioning is that for openpose we could probably use the difference between detected openpose JSON keypoints as the reward/loss function, since the rendered control map often cannot show enough detail for hand skeleton keypoints / face keypoints.
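To illustrate the idea, here is a minimal sketch of such a reward: the negative mean L2 distance between two sets of OpenPose-style keypoints (as parsed from the detector's JSON output). The function name, array layout, and confidence threshold are assumptions for illustration, not part of any released codebase.

```python
import numpy as np

def keypoint_reward(pred_kpts, target_kpts, conf_thresh=0.1):
    """Hypothetical reward: negative mean L2 distance between keypoint sets.

    pred_kpts / target_kpts: (N, 3) arrays of (x, y, confidence), e.g.
    body + hand + face keypoints parsed from OpenPose JSON. Keypoints
    below the confidence threshold in either set are skipped, so missed
    detections do not dominate the reward.
    """
    pred = np.asarray(pred_kpts, dtype=float)
    target = np.asarray(target_kpts, dtype=float)
    valid = (pred[:, 2] > conf_thresh) & (target[:, 2] > conf_thresh)
    if not valid.any():
        return 0.0  # no comparable keypoints; contribute nothing
    dists = np.linalg.norm(pred[valid, :2] - target[valid, :2], axis=1)
    return -float(dists.mean())
```

Unlike a pixel-space comparison of rendered skeletons, this operates directly on keypoint coordinates, so small hand/face keypoint errors still produce a signal.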

@liming-ai
Owner


A great addition. What do you think of heatmaps? As far as I know, some methods turn 2D poses into multi-channel heatmaps to compute an MSE loss. What do you think about such a paradigm, and might it also fail to capture enough detail?

@huchenlei
Author

The heatmap method should work as long as it has more channels per pixel than RGB to reflect the movement of a keypoint. The main purpose is simply to make the loss function smoother, as a small movement of a hand/face keypoint might not be reflected in the rendered control map.
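A rough sketch of that paradigm (illustrative only; the resolution, sigma, and function names are assumptions): render each keypoint as its own Gaussian channel and compare stacks with MSE, so a small keypoint shift changes the loss gradually instead of flipping a few hard pixels.

```python
import numpy as np

def render_heatmaps(kpts, size=64, sigma=2.0):
    """Render one Gaussian heatmap channel per keypoint.

    kpts: (K, 2) array of (x, y) in pixel coordinates; output is
    (K, size, size). Each keypoint becomes a smooth Gaussian bump,
    so the representation is differentiable in keypoint position.
    """
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    maps = np.empty((len(kpts), size, size))
    for k, (x, y) in enumerate(kpts):
        maps[k] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return maps

def heatmap_mse(pred_kpts, target_kpts, size=64, sigma=2.0):
    """MSE between two heatmap stacks -- smooth in keypoint position."""
    a = render_heatmaps(pred_kpts, size, sigma)
    b = render_heatmaps(target_kpts, size, sigma)
    return float(((a - b) ** 2).mean())
```

Because each keypoint gets its own channel, the one-channel-per-keypoint layout avoids the RGB bottleneck mentioned above, and a 1-pixel shift yields a smaller loss than a 10-pixel shift rather than a near-identical rendered map.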

@liming-ai
Owner


Thanks for the advice and explanation, I will try this in the near future! Looking forward to further discussions with you!
