Any suggestions to handle longer text? #46
Hello!

I'm trying to do predictions with the pre-trained model, and I keep running into an issue when the text I pass is longer than 512 tokens: the prediction fails. I understand this is because the string is too long. Other than chopping off the string, are there any suggestions for dealing with this within the package?

Thank you

Comments
Hi Laura, I see. Thanks for the insights. I was actually thinking about splitting the post and then averaging the results. Just wanted to check if there is a built-in way to handle it.
Just a suggestion: taking the max over the splits, perhaps breaking at sentence boundaries, would likely work better than averaging. The model tends to act as a detector, so objectionable content in any part of the document should disqualify the whole document.
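Here's a rough sketch of that idea. The `model.predict` call is only a stand-in for whatever scoring function the package actually exposes (assumed here to return a dict of per-label scores), and the sentence splitting is a naive regex rather than a proper sentence segmenter:

```python
import re

def split_into_chunks(text, max_words=200):
    """Greedily pack sentences into chunks of roughly max_words words."""
    sentences = re.split(r'(?<=[.!?])\s+', text)
    chunks, current, count = [], [], 0
    for sent in sentences:
        n = len(sent.split())
        if current and count + n > max_words:
            chunks.append(' '.join(current))
            current, count = [], 0
        current.append(sent)
        count += n
    if current:
        chunks.append(' '.join(current))
    return chunks

def score_long_text(model, text):
    """Score each chunk separately and keep the max score per label."""
    chunk_scores = [model.predict(chunk) for chunk in split_into_chunks(text)]
    return {label: max(s[label] for s in chunk_scores)
            for label in chunk_scores[0]}
```

With max-pooling, a single toxic sentence buried in an otherwise clean post still drives the document score up, which matches the detector behaviour described above.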
Another way to increase the limit a little would be to implement stopword removal before the text is turned into a token sequence.
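For example, with NLTK's stopword list (assuming the `stopwords` corpus has been downloaded). One caveat: this also shifts the input away from what the model saw at training time, so scores may change:

```python
import nltk
from nltk.corpus import stopwords

nltk.download('stopwords', quiet=True)  # one-time corpus download
STOPWORDS = set(stopwords.words('english'))

def drop_stopwords(text):
    """Remove common English stopwords to shorten the token sequence."""
    return ' '.join(w for w in text.split() if w.lower() not in STOPWORDS)
```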
Hi, I'd like to open this up again, as the initial fixes fail when the text passed in is not English. A plain Python length check reports fewer than 512, but the parser counts more than that. This is the case for non-English text such as Mandarin. Do you have any idea what kind of unicode conversion is happening, or how to filter out such cases before they go to prediction?
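I'm not sure which tokenizer the package loads internally, but if it wraps a HuggingFace model you can measure the real sequence length before predicting. The checkpoint name below is only a guess; substitute whichever one the package actually uses:

```python
from transformers import AutoTokenizer

# Checkpoint name is a guess; use the one the package actually loads.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

def token_length(text):
    """Length in model tokens, including the [CLS]/[SEP] special tokens."""
    return len(tokenizer.encode(text))

sample = "这是一条很长的评论。" * 60   # 600 characters, no spaces
print(len(sample.split()))            # 1   -- whitespace word count is useless here
print(len(sample))                    # 600 characters
print(token_length(sample))           # ~600 tokens: roughly one per Chinese character
```

The mismatch comes from the tokenizer, not unicode conversion: BERT-style tokenizers emit roughly one token per Chinese character (and several subword tokens per rare word), so character or word counts in Python say nothing about the model's sequence length. Gating on `token_length(text) > 512`, and chunking when it trips, should catch these cases before prediction.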