This repository has been archived by the owner on Aug 1, 2024. It is now read-only.
Why does the kernel always die when computing representations? #308
Unanswered
Leo-T-Zang asked this question in Q&A
Replies: 1 comment 2 replies
-
Are you running on CPU or GPU? Your kernel will die if you run out of resources (probably memory). Maybe you're feeding a sequence that's too long?
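Since this kind of kernel death is almost always an out-of-memory crash, one common workaround is to feed the model a few sequences at a time instead of the whole list in one batch (dropping `return_contacts=True` when contacts aren't needed also reduces memory, since it keeps attention maps). A minimal sketch of a chunking helper; the `batch_size` of 8 is an arbitrary assumption to tune to your hardware, and the ESM-2 usage in the comments mirrors the code in the question:

```python
def batched(data, batch_size):
    """Yield successive slices of `data` with at most `batch_size` items each."""
    for start in range(0, len(data), batch_size):
        yield data[start : start + batch_size]

# Hypothetical usage with the ESM-2 pipeline from this thread
# (model, alphabet, batch_converter, data as in the question's code):
#
# sequence_representations = []
# for chunk in batched(data, batch_size=8):
#     _, _, tokens = batch_converter(chunk)
#     with torch.no_grad():
#         results = model(tokens, repr_layers=[33])  # no return_contacts: saves memory
#     reps = results["representations"][33]
#     for i, (_, seq) in enumerate(chunk):
#         sequence_representations.append(reps[i, 1 : len(seq) + 1].mean(0))
```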
2 replies
-
Hi,
I am trying to use ESM-2 to compute representations of some peptides in Jupyter Lab, but most of the time I can only get results for 30 peptides; any more than that and the kernel dies.
I use exactly the code provided:
```python
import torch
import esm

model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()  # disables dropout for deterministic results

data = [
    ("protein1", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"),
    ("protein2", "KALTARQQEVFDLIRDHISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE"),
    # data can only have 30 protein sequences; more than that, the kernel dies
]
batch_labels, batch_strs, batch_tokens = batch_converter(data)

with torch.no_grad():
    results = model(batch_tokens, repr_layers=[33], return_contacts=True)
token_representations = results["representations"][33]

sequence_representations = []
for i, (_, seq) in enumerate(data):
    sequence_representations.append(token_representations[i, 1 : len(seq) + 1].mean(0))
```