Predict which residues are the most important #113
Hi All!
The straightforward way to do this is to mask the token under consideration (there is some existing discussion on how to do this, a repo update to make this easier is upcoming too).
Then with `result = model(masked_sequences)`, you'll find `result['logits']`; per sequence it'll be size `L x K` (seqlen x alphabet_size). To make it a probability distribution, use `F.softmax`. Then you can compute, for example, the per-position entropy: `-(p * p.log()).sum(-1)`.
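Putting those steps together, here is a minimal sketch. It assumes the fair-esm package with the ESM-1b checkpoint (`esm.pretrained.esm1b_t33_650M_UR50S`); the example sequence and variable names are illustrative, not from the original thread.

```python
import torch
import torch.nn.functional as F
import esm  # the fair-esm package

# Load a pretrained model; ESM-1b is used here, other checkpoints work too.
model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
batch_converter = alphabet.get_batch_converter()
model.eval()

seq = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGY"
_, _, tokens = batch_converter([("query", seq)])  # (1, L+2): BOS + seq + EOS

# Build one masked copy of the sequence per residue position.
L = len(seq)
masked_sequences = tokens.repeat(L, 1)    # (L, L+2)
positions = torch.arange(1, L + 1)        # residue columns, skipping the BOS token
masked_sequences[torch.arange(L), positions] = alphabet.mask_idx

with torch.no_grad():                     # chunk this batch for long sequences
    result = model(masked_sequences)      # result['logits']: (L, L+2, K)

# Distribution over the alphabet at each masked position (L x K).
logits = result['logits'][torch.arange(L), positions]
p = F.softmax(logits, dim=-1)

# Per-position entropy, -sum(p * log p) over the alphabet: low entropy means
# the model is confident about what residue belongs at that position.
entropy = -(p * p.log()).sum(-1)          # (L,)
print(entropy)
```

A related score, under the same assumptions, is the probability the model assigns to the wild-type residue at each masked position; positions where that probability is high and the entropy is low are natural candidates for "important" (e.g. conserved) residues.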