Request for demo on using emotion2vec with Speech + Text modality #42

Open · misaka14514 opened this issue Aug 7, 2024 · 3 comments

@misaka14514

Hello there! I'm currently trying to use emotion2vec for sentiment analysis tasks, and I appreciate your work. After reading the related papers and documentation, I noticed that you provide instructions for running predictions on speech or text data separately.

However, I am also interested in how to combine speech and text data (i.e., Speech + Text) for multimodal emotion prediction. From what I have seen in the literature, this appears to be an important application scenario.

Could you therefore provide a simple example demonstrating how to integrate these two modalities and run the model? I believe this would be highly beneficial for other users as well. For concreteness, a rough sketch of the kind of pipeline I have in mind is below.
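To be clear, this is my own late-fusion sketch, not something from the emotion2vec repo: it extracts an utterance-level speech embedding with emotion2vec via FunASR, gets a sentence embedding from a generic text encoder (BERT here is just a placeholder), and concatenates the two for a small classifier head. The fusion head, text model, file names, and dimensions are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from funasr import AutoModel as FunASRAutoModel
from transformers import AutoTokenizer, AutoModel as HFAutoModel

# Speech branch: emotion2vec utterance-level embedding (768-d for the base model).
speech_model = FunASRAutoModel(model="iic/emotion2vec_base")

def speech_embedding(wav_path: str) -> torch.Tensor:
    # With extract_embedding=True, the result's "feats" field holds the
    # utterance embedding (if I read the README correctly).
    res = speech_model.generate(wav_path, granularity="utterance", extract_embedding=True)
    return torch.tensor(res[0]["feats"])  # shape: (768,)

# Text branch: any sentence encoder would do; mean-pooled BERT as a stand-in.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text_model = HFAutoModel.from_pretrained("bert-base-uncased")

def text_embedding(sentence: str) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = text_model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)                 # (768,)

# Fusion head: concatenate both embeddings and classify (to be trained on
# labeled data; num_classes is a placeholder).
class LateFusionClassifier(nn.Module):
    def __init__(self, speech_dim=768, text_dim=768, num_classes=4):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(speech_dim + text_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, speech_emb, text_emb):
        return self.head(torch.cat([speech_emb, text_emb], dim=-1))

clf = LateFusionClassifier()
logits = clf(speech_embedding("example.wav"), text_embedding("I am so happy today!"))
print(logits.softmax(dim=-1))
```

Is this roughly the right direction, or do you fuse the modalities differently (e.g., at the feature level inside the model)?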

Thank you!

@June1124

June1124 commented Aug 9, 2024

I was wondering the same thing. Are there any updates yet?

@ddlBoJack
Owner

You can refer to Shi et al.'s (2020) and (2023) papers. We reproduced their methods and matched their reported numbers.

@misaka14514
Author

Is there a plan to open-source the speech + text model?
