Say you have a mapping of typos to their corrected spellings (a training set, in ML terms), along with the number of occurrences of each typo. If you can extract the edit operations required to transform each typo into its correction (this library can't do that, but other libraries can), you have effectively gathered the probability of each character being inserted, deleted, substituted, or transposed. From there you can derive the weights.

Your training set is therefore very important, and it will differ depending on the problem domain.

This is just a rough outline of one solution; I'm sure there are many possible ways to go about this.

> On Aug. 5, 2019, 2:49 a.m., jtlz2 <[email protected]> wrote:
> Do you know of a way to calculate a probability (cf Levenshtein ratio) for the weighted Levenshtein distance, preferably using your code? Thanks!
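A minimal sketch of the weight-derivation step described above, in pure Python. The alignment here is a hand-rolled edit-distance backtrace (in practice something like python-Levenshtein's `editops()` could supply it), and the training triples are made-up examples, not real data:

```python
# Sketch: count edit operations over a training set of
# (typo, correction, occurrences) triples, from which per-character
# operation probabilities (and hence weights) can be derived.
from collections import Counter

def editops(src, dst):
    """Return one minimal sequence of (op, src_char, dst_char) edits."""
    n, m = len(src), len(dst)
    # dp[i][j] = edit distance between src[:i] and dst[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if src[i - 1] == dst[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # delete src[i-1]
                           dp[i][j - 1] + 1,         # insert dst[j-1]
                           dp[i - 1][j - 1] + cost)  # substitute/match
    ops = []
    i, j = n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and src[i - 1] == dst[j - 1]
                and dp[i][j] == dp[i - 1][j - 1]):
            i, j = i - 1, j - 1  # characters match: no operation
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            ops.append(('substitute', src[i - 1], dst[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            ops.append(('delete', src[i - 1], ''))
            i -= 1
        else:
            ops.append(('insert', '', dst[j - 1]))
            j -= 1
    return ops[::-1]

def operation_counts(training_set):
    """training_set: iterable of (typo, correction, occurrences)."""
    counts = Counter()
    for typo, correction, occurrences in training_set:
        for op in editops(typo, correction):
            counts[op] += occurrences
    return counts

# Hypothetical training data for illustration only.
training = [('teh', 'the', 40), ('recieve', 'receive', 25), ('adres', 'address', 10)]
counts = operation_counts(training)
```

Normalizing `counts` (e.g. dividing by the total number of operations, or taking a negative log) gives per-operation probabilities or weights, which is where the domain-specific training set matters.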
Do you know of a way to calculate a probability (cf Levenshtein ratio) for the weighted Levenshtein distance, preferably using your code? Thanks!
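One possible answer to this question, as a sketch rather than anything the library provides: normalize the weighted distance against a worst case (delete every source character, insert every target character) to get a ratio in [0, 1]. The cost dictionaries below are illustrative and not the library's API:

```python
# Sketch: a weighted Levenshtein distance plus a normalized ratio.
# Costs default to 1.0; per-character overrides come from the dicts.

def weighted_lev(src, dst, ins=None, dele=None, sub=None):
    """Weighted edit distance. sub is keyed by (src_char, dst_char)."""
    ins, dele, sub = ins or {}, dele or {}, sub or {}
    n, m = len(src), len(dst)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + dele.get(src[i - 1], 1.0)
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + ins.get(dst[j - 1], 1.0)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if src[i - 1] == dst[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]  # match costs nothing
            else:
                dp[i][j] = min(
                    dp[i - 1][j] + dele.get(src[i - 1], 1.0),
                    dp[i][j - 1] + ins.get(dst[j - 1], 1.0),
                    dp[i - 1][j - 1] + sub.get((src[i - 1], dst[j - 1]), 1.0),
                )
    return dp[n][m]

def weighted_ratio(src, dst, ins=None, dele=None, sub=None):
    """1.0 for identical strings, 0.0 for a worst-case full rewrite."""
    ins, dele = ins or {}, dele or {}
    worst = (sum(dele.get(c, 1.0) for c in src)
             + sum(ins.get(c, 1.0) for c in dst))
    if worst == 0:
        return 1.0
    return 1.0 - weighted_lev(src, dst, ins, dele, sub) / worst
```

Because the DP always has the delete-everything/insert-everything path available, the distance never exceeds `worst`, so the ratio stays in [0, 1]. With all costs at 1.0 this reduces to an unweighted Levenshtein ratio.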