Hello there! I'm currently trying to use emotion2vec for sentiment analysis tasks and appreciate your work. After reading the related papers and documentation, I saw that you provide instructions for predicting from speech or text data separately.
However, I'm also interested in how to combine speech and text (i.e., Speech + Text) for multimodal emotion prediction. From the literature, this appears to be an important application scenario.
Could you please provide a simple example demonstrating how to integrate these two modalities and run the model? I believe this would be highly beneficial for other users as well.
Thank you!
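In the meantime, here is a minimal late-fusion sketch of one possible approach: extract an utterance-level speech embedding with emotion2vec (via FunASR's `AutoModel` interface) and a sentence embedding from a generic text encoder, then concatenate them and pass the result to a small classification head. The model ID `iic/emotion2vec_base`, the file `example.wav`, the `bert-base-uncased` text encoder, and the fusion head are all assumptions for illustration; the head is untrained and would need to be fit on your own labeled multimodal data. This is not an official emotion2vec multimodal pipeline.

```python
# Late-fusion sketch: emotion2vec speech embedding + a generic text embedding.
# Assumes FunASR's AutoModel interface for emotion2vec and a Hugging Face text
# encoder; the fusion classifier below is untrained and purely illustrative.
import torch
import torch.nn as nn
from funasr import AutoModel as FunASRAutoModel
from transformers import AutoTokenizer, AutoModel as HFAutoModel

# 1) Utterance-level speech embedding from emotion2vec.
speech_model = FunASRAutoModel(model="iic/emotion2vec_base")  # model ID is an assumption
rec = speech_model.generate("example.wav", granularity="utterance", extract_embedding=True)
speech_emb = torch.tensor(rec[0]["feats"]).float()            # e.g. shape (768,)

# 2) Text embedding from any sentence encoder (BERT used here only as an example).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text_model = HFAutoModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("I can't believe this happened!", return_tensors="pt")
with torch.no_grad():
    text_emb = text_model(**inputs).last_hidden_state.mean(dim=1).squeeze(0)  # (768,)

# 3) Late fusion: concatenate both embeddings and classify with a small MLP.
#    This head must be trained on your own labeled speech+text data.
num_emotions = 4  # e.g. angry / happy / neutral / sad
fusion_head = nn.Sequential(
    nn.Linear(speech_emb.numel() + text_emb.numel(), 256),
    nn.ReLU(),
    nn.Linear(256, num_emotions),
)
logits = fusion_head(torch.cat([speech_emb, text_emb], dim=-1))
print(logits.softmax(dim=-1))
```

Concatenation (late fusion) is just the simplest option; attention-based or cross-modal fusion may work better depending on the dataset.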