new File breaks cloud execution in bcl_demultiplex #5612
Comments
@k1sauce @Aratz @edmundmiller Hey, how are you doing? An idea to solve this issue is to have a parameter called --log_skipped_fastqs so that the user can decide whether the file gets created or not; if false, the file is not created. What do you think? I tested it and opened two pull requests, one for demultiplex.nf and one for the module counterpart.
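As an illustration only (the parameter name comes from the comment above, while the channel contents and the log file name are assumptions), such a flag could be wired up in Nextflow roughly like this:

```nextflow
// Sketch only: gate creation of the skipped-FASTQ log behind a pipeline flag.
nextflow.enable.dsl = 2

params.log_skipped_fastqs = true

workflow {
    // Stand-in for the real channel of skipped/empty FASTQ names.
    ch_skipped = Channel.of('sample1_S1_L001_R1_001.fastq.gz')

    if (params.log_skipped_fastqs) {
        // Only write the log when the flag is enabled.
        ch_skipped
            .collectFile(name: 'skipped_fastqs.log', newLine: true)
            .view { "wrote ${it}" }
    }
}
```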
@k1sauce how are you doing? Would it be possible to ask you to run a test with the branch I am working on, to check whether it fixes the S3 issue? The idea behind it is that one can set the flag to true or false.
@glichtenstein Thanks for working on this. I spent some time thinking more about this too, and I think I would like to tackle it a different way; here are my thoughts:
That being said, I also have a branch where I am testing out some changes that would address this. If you are in agreement, I can open a PR after a bit more development and we can review that one instead?
@glichtenstein I don't want to get ahead of myself but I am thinking of something simple like this
@glichtenstein Ok, how does this look: #5720? If it's ok with you, I think this may be the right approach. Then we can add a filter on file size before Falco in the demultiplex workflow.
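As a rough sketch of what such a size filter before Falco could look like (the channel shape [meta, reads] and the glob pattern are assumptions for illustration, not the actual workflow code):

```nextflow
// Sketch only: drop samples whose FASTQ files are empty before QC.
nextflow.enable.dsl = 2

workflow {
    Channel
        .fromPath('results/**/*.fastq.gz')
        .map { fq -> [ [id: fq.simpleName], [fq] ] }              // [meta, reads]
        .filter { meta, reads -> reads.every { it.size() > 0 } }  // keep non-empty files only
        .view { meta, reads -> "passing ${meta.id} on to Falco" }
}
```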
@k1sauce, how are you? I've worked on this proposal to fix the issue.
@matthdsm @k1sauce @SPPearce How are you guys? Can we decide on which route to take to fix the reported issue? I've been using my branch so far when possible, but if it is going to be dropped and closed I want to know, so that I can deliver reproducible results to the core customers; my supervisor is not keen on using dev branches. In brief, it may be fixed by #5720 or #5781.

I am getting this error quite often when empty fastqs appear while running nf-core/demultiplex 1.4.1:

-- Check script '/home/hw-m-sylvesterportal/.nextflow/assets/nf-core/demultiplex/./workflows/../subworkflows/nf-core/bcl_demultiplex/main.nf' at line: 125 or see '.nextflow.log' file for more details

For the sake of time in production, we usually end up running bclconvert in BaseSpace to get the fastqs to the customer, without Falco or MultiQC, when nf-core fails on us. So if we could know which files have wrong barcodes after the demux, it would help us a lot. I see the point: if you don't like a new log file being created, we could also use the MultiQC and bclconvert reports to track the bad fastqs, but the workflow needs to complete so we can get that information straightforwardly; right now one has to dig into the workDir and search for the files manually.
@glichtenstein Hey, I have updated the test but now I need to debug the GitHub Actions workflow. I am having trouble with the test in Docker even though the Singularity test passes. You can take a look here and let me know if you have any ideas: https://github.com/nf-core/modules/actions/runs/9782057279/job/27007581785?pr=5720 Also, when I run the Docker test locally it passes, so I'm inferring it has something to do with the GitHub Action.
@glichtenstein Should be ready to go now; see my comment here on why the test was failing: #5720 (comment)
Have you checked the docs?
Description of the bug
This bug relates to the bcl_demultiplex subworkflow. `new File` will not work with cloud storage such as S3 (see nf-core/tools#354 for reference); `file` needs to be used instead.
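For illustration (the s3:// path below is a placeholder), the difference inside a Nextflow script is:

```nextflow
// java.io.File only knows the local filesystem, so an s3:// URI is treated
// as a plain local path and breaks in cloud execution.
def broken = new File('s3://my-bucket/run1/sample1_S1_L001_R1_001.fastq.gz')

// Nextflow's file() returns a java.nio.file.Path backed by the matching
// filesystem provider (local, S3, GCS, ...), so the same call works on S3.
def ok = file('s3://my-bucket/run1/sample1_S1_L001_R1_001.fastq.gz')
// ok.size(), ok.exists(), etc. then behave the same for local and remote paths.
```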
Command used and terminal output
No response
Relevant files
No response
System information
No response