Future experiments #85

Open
vitalwarley opened this issue Jul 28, 2024 · 6 comments

@vitalwarley
Owner

  • Y-aware Contrastive Loss: contrastive loss with a Y-aware margin, where Y is some property of the samples and/or pairs (see the sketch below).
    • Inspired by the modifications of the softmax loss:
      • ArcFace: introduced a fixed margin
      • MagFace: introduced a margin dependent on the embedding magnitude (a proxy for image quality)
      • AdaFace: introduced an adaptive margin function of the embedding magnitude (a proxy for image quality)
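
A minimal sketch of how a Y-aware margin could enter a pairwise contrastive loss, assuming Y is a per-pair scalar normalized to [0, 1] (e.g. embedding magnitude as a quality proxy, or age gap); all function and parameter names here are hypothetical, not a fixed design:

```python
import torch
import torch.nn.functional as F

def y_aware_contrastive_loss(z1, z2, is_kin, y, base_margin=0.5, scale=0.5):
    """z1, z2: (B, D) L2-normalized embeddings; is_kin: (B,) float in {0, 1};
    y: (B,) pair property in [0, 1] that modulates the margin."""
    dist = (z1 - z2).norm(dim=1)                        # Euclidean distance per pair
    margin = base_margin + scale * y                    # margin grows with Y (assumed design)
    pos = is_kin * dist.pow(2)                          # pull related pairs together
    neg = (1 - is_kin) * F.relu(margin - dist).pow(2)   # push unrelated pairs past m(y)
    return (pos + neg).mean()
```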
@vitalwarley vitalwarley self-assigned this Jul 28, 2024
@vitalwarley
Owner Author

  • Contrastive Loss using the family as the label, instead of the kinship relation.
    • Rationale: pulling members of the same family together seems more coherent than modeling the mere existence of kinship.
    • Difficulty: forming negative examples.
    • Possible solution: change the contrastive loss formulation to consider N families per batch and compute the numerator and denominator for each family (e.g. num_i for family i containing only valid pairs, den_i for family i containing only invalid pairs with the other families); see the sketch below.
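A hypothetical sketch of this per-family formulation, in the spirit of supervised contrastive learning: for each anchor, the numerator collects same-family pairs and the denominator adds cross-family pairs. Names and details are assumptions, not a settled design:

```python
import torch
import torch.nn.functional as F

def family_contrastive_loss(z, family, temperature=0.1):
    """z: (N, D) embeddings from a batch sampled to contain several families;
    family: (N,) integer family ids."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (family.unsqueeze(0) == family.unsqueeze(1)) & ~self_mask
    neg_mask = family.unsqueeze(0) != family.unsqueeze(1)

    exp_sim = sim.exp()
    num = (exp_sim * pos_mask).sum(dim=1)        # valid (same-family) pairs
    den = num + (exp_sim * neg_mask).sum(dim=1)  # plus cross-family pairs
    valid = pos_mask.any(dim=1)                  # anchors with at least one positive
    return -(num[valid] / den[valid]).log().mean()
```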

@vitalwarley
Owner Author

Optimize the contrastive loss via $L_{align}$ and $L_{uniformity}$, as presented in Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere.

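For reference, the two losses as given in the paper's PyTorch snippet, for L2-normalized embeddings `x`, `y` of positive pairs:

```python
import torch

def align_loss(x, y, alpha=2):
    # mean distance between positive pairs, raised to alpha
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    # log of the average pairwise Gaussian potential over the batch
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
```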

@vitalwarley
Owner Author

vitalwarley commented Sep 16, 2024

One way to add to my results is to demonstrate robustness to facial expressions, i.e., kinship recognition performance is maintained regardless of facial expression. There will hardly be time for this now, but it is a possibility for future work.

Obtain facial expression labels with some other model (e.g. facetorch, HSEmotion).
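
A sketch of how per-image expression labels could be obtained with HSEmotion, so that verification accuracy can later be stratified by expression. The `hsemotion` API below (`HSEmotionRecognizer`, `predict_emotions`) is quoted from memory and should be checked against the package docs; the model name is one of the pretrained checkpoints the package ships:

```python
import numpy as np
from hsemotion.facial_emotions import HSEmotionRecognizer

# assumes pre-cropped, aligned RGB face images as numpy arrays
fer = HSEmotionRecognizer(model_name='enet_b0_8_best_vgaf', device='cpu')

def expression_label(face_rgb: np.ndarray) -> str:
    emotion, _scores = fer.predict_emotions(face_rgb, logits=False)
    return emotion  # e.g. 'Happiness', 'Neutral', ...
```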

See the conversation with Claude below for other types of robustness, as well as for improvements against intraclass variations (i.e., maintaining performance regardless of class and of challenges such as age, gender, etc.).

What would be a result that would guarantee that we improved on addressing intraclass variations and model robustness?

To demonstrate improvements in addressing intraclass variations and model robustness, you would want to focus on specific metrics and experiments that highlight these aspects. Here are some results that would provide strong evidence for these improvements:

  1. Intraclass Variations:

    a) Improved performance on age-gap kinship pairs:

    • Higher accuracy on parent-child or grandparent-grandchild pairs compared to baseline models.
    • Example result: "Our model achieved 85% accuracy on father-son pairs with >30 years age difference, compared to 75% for the baseline."

    b) Consistent performance across gender combinations:

    • Similar accuracy levels for mother-daughter, father-son, mother-son, and father-daughter pairs.
    • Example result: "Our model maintained >80% accuracy across all parent-child gender combinations, with less than 2% variation between them."

    c) Robustness to facial expressions:

    • Maintain high accuracy when verifying kin pairs with different facial expressions.
    • Example result: "Our model achieved 82% accuracy on kin pairs with different facial expressions, compared to 73% for the baseline."
  2. Model Robustness:

    a) Cross-dataset performance:

    • Maintain high accuracy when tested on a different kinship dataset without fine-tuning.
    • Example result: "Our model, trained on FIW, achieved 79% accuracy on the KinFaceW-II dataset without any fine-tuning, demonstrating strong generalization."

    b) Performance on hard negatives:

    • Higher accuracy on pairs that are visually similar but not related.
    • Example result: "On the subset of hard negative pairs, our model achieved 85% accuracy compared to 70% for the baseline."

    c) Consistency across different image qualities:

    • Maintain performance when tested on lower quality or partially occluded images.
    • Example result: "Our model's accuracy only dropped by 3% when tested on low-resolution images, compared to a 10% drop for the baseline."

    d) Improved performance on rare or underrepresented kinship types:

    • Higher accuracy on less common relationships in the dataset.
    • Example result: "On grandparent-grandchild pairs, which comprise only 5% of the dataset, our model achieved 76% accuracy compared to 65% for the baseline."

To present these results effectively, you could include a table or graph showing comparisons between your model and baselines across these specific scenarios. Additionally, you might consider creating a subsection in your results specifically focused on intraclass variations and robustness, where you dive deeper into these aspects of your model's performance.

Remember to tie these results back to your initial objectives and the challenges you identified in facial kinship verification. This will help emphasize how your approach addresses key issues in the field.
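
As a concrete starting point, a minimal sketch of the stratified evaluation suggested above, assuming a hypothetical per-pair results DataFrame with a boolean `correct` column and grouping keys such as `kin_type` or `expression`:

```python
import pandas as pd

def accuracy_by_group(df: pd.DataFrame, key: str) -> pd.Series:
    # mean of the boolean 'correct' column per subgroup = per-group accuracy
    return df.groupby(key)['correct'].mean().sort_values()

# Example: accuracy_by_group(results, 'kin_type') gives per-relationship
# accuracy, whose spread quantifies robustness to intraclass variations.
```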

@vitalwarley
Owner Author

Hierarchical Representation Learning for Kinship Verification + Kinship Representation Learning with Face Componential Relation

Somehow make facial regions attend to each other, instead of using global features.
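
A hypothetical sketch of that idea using standard multi-head attention, where per-region embeddings of one image query the regions of the other; region extraction (e.g. eyes, nose, mouth crops) is assumed to happen upstream:

```python
import torch
import torch.nn as nn

class RegionCrossAttention(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, regions_a, regions_b):
        """regions_a, regions_b: (B, R, D) per-region embeddings of each image.
        Each region of image A queries all regions of image B."""
        attended, _ = self.attn(regions_a, regions_b, regions_b)
        return attended.mean(dim=1)  # pooled cross-attended representation
```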

@vitalwarley
Owner Author

> Hierarchical Representation Learning for Kinship Verification + Kinship Representation Learning with Face Componential Relation
>
> Somehow make facial regions attend to each other, instead of using global features.

Learning Part-Aware Attention Networks for Kinship Verification

lmao

@vitalwarley
Owner Author

Masked Image Modelling
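
A minimal sketch of the masking step such a self-supervised pretraining stage would need (hypothetical, not tied to any specific MIM paper): random face patches are zeroed and the model is trained to reconstruct them.

```python
import torch

def mask_patches(imgs, patch=16, ratio=0.6):
    """imgs: (B, C, H, W). Zeroes out a random subset of patches and returns
    the masked images plus the boolean pixel mask for the reconstruction loss."""
    b, c, h, w = imgs.shape
    ph, pw = h // patch, w // patch
    mask = torch.rand(b, ph, pw, device=imgs.device) < ratio  # True = masked
    pixel_mask = mask.repeat_interleave(patch, 1).repeat_interleave(patch, 2)
    masked = imgs * (~pixel_mask).unsqueeze(1)
    return masked, pixel_mask

# Training would minimize e.g. MSE between the model's reconstruction and the
# original pixels on masked locations only.
```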
