This repository has been archived by the owner on Jul 4, 2022. It is now read-only.
Hi,
I have been experiencing large numbers of unclassified reads, especially with the pHMM backend. With the edlib backend I get more full-length transcripts, but the number of unclassified reads is still significantly high. I sequenced using the PCB109 kit, which multiplexes 12 samples, but it has VNP and SSP primers that I assume are the same. How can I rescue more reads, given that my samples are multiplexed and I am only interested in unmapped reads in the long run? Going only with the reads recommended by pychopper, I might end up with very little data to work with, since coverage is greatly reduced by multiplexing. Here is the HTML of my sequencing summary, and one of the pychopper reports.
Thanks
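For reference, this is roughly the command I have been running, a sketch of a pychopper invocation with the edlib backend that keeps rescued and unclassified reads in separate files for inspection (flag names reflect my reading of pychopper's help output and may differ between versions, so please check `pychopper --help`):

```shell
# Rerun classification with the edlib backend.
# -m: alignment backend (edlib or phmm)
# -r: classification report
# -u: reads where no primers were found
# -w: reads rescued from fused/partial primer hits
pychopper -m edlib -r report.pdf -u unclassified.fq -w rescued.fq \
    input.fastq full_length.fastq
```

The rescued reads in `rescued.fq` can then be inspected or merged back in, depending on how strict you want to be about full-length evidence.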
I used DCS109 without multiplexing and my results are similar to yours: I got 32% primers found and 65% unusable. Does that mean that 65% of my reads are missing primers? I would also appreciate a better explanation of the outputs in the README...