Hi!
I'm a first-timer at bioinformatics (and a recovering luddite), attempting to use the QIIME 2 Galaxy interface for my workflow.
I'm following this tutorial for setting everything up with Docker: Running QIIME 2 inside Galaxy (alpha release version) - YouTube.
It was going well until I went to upload the Atacama metadata file as part of setting up the conda environment. The job of simply uploading this tiny file has now been "waiting to run" for close to half an hour. I'm not getting any error messages per se, so I don't know what's wrong, and my internet connection seems fine. What are common sources of long wait times when the job is this simple? Are there things I can check, troubleshoot, or tinker with that might speed this up? I'll attach the log from when the task began, in case that helps diagnose the problem.
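From some searching around, it sounds like I could peek at the Slurm scheduler inside the container with something like the commands below. I'm guessing at the container name (`galaxy`) — `docker ps` should show the real one — so please correct me if this is the wrong direction:

```shell
# Open a shell in the running Galaxy container
# (the container name "galaxy" is a guess -- check `docker ps` for the real one)
docker exec -it galaxy /bin/bash

# Inside the container: is the upload job actually queued, or stuck pending?
squeue            # jobs in "PD" (pending) state are waiting on the scheduler
sinfo             # nodes marked "drain" or "down" will not accept new jobs

# Watch the Galaxy job handler log while the upload sits in the queue
tail -f /home/galaxy/logs/handler1.log
```

Is that roughly the right thing to look at, or is there something better?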
Thanks!

==> /home/galaxy/logs/handler1.log <==
galaxy.tool_util.deps.resolvers.conda DEBUG 2023-03-06 18:38:52,190 [pN:handler1,p:3215,tN:SlurmRunner.work_thread-0] Removing failed conda install of [CondaTarget[bcftools,unversioned]]
galaxy.tool_util.deps.conda_util DEBUG 2023-03-06 18:38:52,191 [pN:handler1,p:3215,tN:SlurmRunner.work_thread-0] Executing command: /export/tool_deps/_conda/bin/conda create -y --quiet --override-channels --channel 2022.2-core-passed --channel iuc --channel conda-forge --channel bioconda --channel defaults --name __bcftools@uv bcftools
galaxy.tool_util.deps.resolvers.conda DEBUG 2023-03-06 18:40:17,456 [pN:handler1,p:3215,tN:SlurmRunner.work_thread-0] Removing failed conda install of bcftools, version 'None'
galaxy.jobs.runners DEBUG 2023-03-06 18:40:17,465 [pN:handler1,p:3215,tN:SlurmRunner.work_thread-0] (3) command is: mkdir -p working outputs configs
if [ -d _working ]; then
rm -rf working/ outputs/ configs/; cp -R _working working; cp -R _outputs outputs; cp -R _configs configs
else
cp -R working _working; cp -R outputs _outputs; cp -R configs _configs
fi
cd working; /bin/bash /export/galaxy-central/database/job_working_directory/000/3/tool_script.sh > ../outputs/tool_stdout 2> ../outputs/tool_stderr; return_code=$?; cd '/export/galaxy-central/database/job_working_directory/000/3';
[ "GALAXY_VIRTUAL_ENV" = "None" ] && GALAXY_VIRTUAL_ENV="_GALAXY_VIRTUAL_ENV"; _galaxy_setup_environment True
python "metadata/set.py"; sh -c "exit $return_code"
galaxy.jobs.runners.drmaa DEBUG 2023-03-06 18:40:17,494 [pN:handler1,p:3215,tN:SlurmRunner.work_thread-0] (3) submitting file /export/galaxy-central/database/job_working_directory/000/3/galaxy_3.sh
galaxy.jobs.runners.drmaa DEBUG 2023-03-06 18:40:17,495 [pN:handler1,p:3215,tN:SlurmRunner.work_thread-0] (3) native specification is: --ntasks=1 --share
galaxy.jobs.runners.drmaa INFO 2023-03-06 18:40:17,524 [pN:handler1,p:3215,tN:SlurmRunner.work_thread-0] (3) queued as 3

==> /home/galaxy/logs/slurmctld.log <==
[2023-03-06T18:40:17.523] _slurm_rpc_submit_batch_job: JobId=3 InitPrio=4294901758 usec=1047

==> /home/galaxy/logs/handler1.log <==
galaxy.jobs.runners.drmaa DEBUG 2023-03-06 18:40:18,150 [pN:handler1,p:3215,tN:SlurmRunner.monitor_thread] (3/3) state change: job is running

==> /home/galaxy/logs/slurmctld.log <==
[2023-03-06T18:40:17.631] backfill: Started JobID=3 in debug on 807433fd76d1

==> /home/galaxy/logs/slurmd.log <==
[2023-03-06T18:40:17.634] _run_prolog: run job script took usec=8
[2023-03-06T18:40:17.634] _run_prolog: prolog with lock for job 3 ran for 0 seconds
[2023-03-06T18:40:17.634] Launching batch job 3 for UID 1450

==> /home/galaxy/logs/handler1.log <==
galaxy.jobs.runners.drmaa DEBUG 2023-03-06 18:40:25,348 [pN:handler1,p:3215,tN:SlurmRunner.monitor_thread] (3/3) state change: job finished normally
galaxy.model.metadata DEBUG 2023-03-06 18:40:25,418 [pN:handler1,p:3215,tN:SlurmRunner.work_thread-1] loading metadata from file for: HistoryDatasetAssociation 2
galaxy.jobs INFO 2023-03-06 18:40:25,569 [pN:handler1,p:3215,tN:SlurmRunner.work_thread-1] Collecting metrics for Job 3 in /export/galaxy-central/database/job_working_directory/000/3

==> /home/galaxy/logs/slurmctld.log <==
[2023-03-06T18:40:24.716] _job_complete: JobID=3 State=0x1 NodeCnt=1 WEXITSTATUS 0
[2023-03-06T18:40:24.716] _job_complete: JobID=3 State=0x8003 NodeCnt=1 done

==> /home/galaxy/logs/slurmd.log <==
[2023-03-06T18:40:24.715] [3.batch] sending REQUEST_COMPLETE_BATCH_SCRIPT, error:0 status 0
[2023-03-06T18:40:24.717] [3.batch] done with job

==> /home/galaxy/logs/handler1.log <==
galaxy.jobs DEBUG 2023-03-06 18:40:25,597 [pN:handler1,p:3215,tN:SlurmRunner.work_thread-1] job_wrapper.finish for job 3 executed (205.609 ms)

==> /home/galaxy/logs/uwsgi.log <==
172.17.0.1 - - [06/Mar/2023:18:53:17 +0000] "GET /datasets/f597429621d6eb2b/show_params HTTP/1.1" 200 - "http://localhost:8080/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"
[pid: 4133|app: 0|req: 76/147] 172.17.0.1 () {58 vars in 1201 bytes} [Mon Mar 6 18:53:17 2023] GET /datasets/f597429621d6eb2b/show_params => generated 7042 bytes in 538 msecs (HTTP/1.1 200) 2 headers in 88 bytes (1 switches on core 1)
172.17.0.1 - - [06/Mar/2023:18:53:18 +0000] "GET /api/datasets/f597429621d6eb2b/storage HTTP/1.1" 200 - "http://localhost:8080/datasets/f597429621d6eb2b/show_params" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"
[pid: 4133|app: 0|req: 77/148] 172.17.0.1 () {54 vars in 1071 bytes} [Mon Mar 6 18:53:18 2023] GET /api/datasets/f597429621d6eb2b/storage => generated 95 bytes in 75 msecs (HTTP/1.1 200) 3 headers in 139 bytes (1 switches on core 2)
172.17.0.1 - - [06/Mar/2023:18:53:18 +0000] "GET /api/datasets/f597429621d6eb2b/parameters_display?hda_ldda=hda HTTP/1.1" 200 - "http://localhost:8080/datasets/f597429621d6eb2b/show_params" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"
[pid: 4136|app: 0|req: 74/149] 172.17.0.1 () {54 vars in 1118 bytes} [Mon Mar 6 18:53:18 2023] GET /api/datasets/f597429621d6eb2b/parameters_display?hda_ldda=hda => generated 452 bytes in 301 msecs (HTTP/1.1 200) 3 headers in 139 bytes (1 switches on core 2)
172.17.0.1 - - [06/Mar/2023:18:53:18 +0000] "GET /api/jobs/1cd8e2f6b131e891?full=true HTTP/1.1" 200 - "http://localhost:8080/datasets/f597429621d6eb2b/show_params" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"
[pid: 4136|app: 0|req: 74/150] 172.17.0.1 () {54 vars in 1066 bytes} [Mon Mar 6 18:53:18 2023] GET /api/jobs/1cd8e2f6b131e891?full=true => generated 1172 bytes in 400 msecs (HTTP/1.1 200) 3 headers in 139 bytes (1 switches on core 0)
172.17.0.1 - - [06/Mar/2023:18:53:18 +0000] "GET /api/datasets/f597429621d6eb2b HTTP/1.1" 200 - "http://localhost:8080/datasets/f597429621d6eb2b/show_params" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"
[pid: 4136|app: 0|req: 74/151] 172.17.0.1 () {54 vars in 1025 bytes} [Mon Mar 6 18:53:18 2023] GET /api/datasets/f597429621d6eb2b => generated 103782 bytes in 300 msecs (HTTP/1.1 200) 3 headers in 139 bytes (1 switches on core 1)