Gneiss problem - index 1 is out of bounds for axis 0 with size 1

Dear users and developers,

I am using Gneiss for a project and I am getting this error message from the plugin qiime gneiss balance-taxonomy:

Plugin error from gneiss:

index 1 is out of bounds for axis 0 with size 1

The debug file returns the following information:

"/opt/miniconda3/envs/qiime2-2018.8/lib/python3.5/site-packages/gneiss-0.4.4-py3.5.egg/gneiss/util.py:245: FutureWarning: '.reindex_axis' is deprecated and will be removed in a future version. Use '.reindex' instead.
Traceback (most recent call last):
File "/opt/miniconda3/envs/qiime2-2018.8/lib/python3.5/site-packages/q2cli/commands.py", line 274, in __call__
results = action(**arguments)
File "", line 2, in balance_taxonomy
File "/opt/miniconda3/envs/qiime2-2018.8/lib/python3.5/site-packages/qiime2/sdk/action.py", line 231, in bound_callable
output_types, provenance)
File "/opt/miniconda3/envs/qiime2-2018.8/lib/python3.5/site-packages/qiime2/sdk/action.py", line 424, in callable_executor
ret_val = self._callable(output_dir=temp_dir, **view_args)
File "/opt/miniconda3/envs/qiime2-2018.8/lib/python3.5/site-packages/q2_gneiss/plot/_plot.py", line 168, in balance_taxonomy
right_group = dcat.value_counts().index[1]
File "/opt/miniconda3/envs/qiime2-2018.8/lib/python3.5/site-packages/pandas/core/indexes/base.py", line 1743, in __getitem__
return getitem(key)
IndexError: index 1 is out of bounds for axis 0 with size 1"

Does any of you know what is going on?

Thank you very much

Hi @Bruno_Andrade, it's not immediately clear from the error message – how did you process the data?
Could you provide the commands that you ran along with the datasets?

Also, have you tried running this on the most up-to-date qiime2 version?

Hi!
Could you double check which column you indicated in

--m-metadata-column

This error can be due to a lack of variability in the values of that column, in which case it can't plot your data.
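For illustration, here is a minimal pandas sketch of how a metadata column with only one distinct value triggers this exact IndexError (the column name and values are made up):

```python
import pandas as pd

# Hypothetical metadata column where every sample has the same value
dcat = pd.Series(["high_fat", "high_fat", "high_fat"], name="Diet")

counts = dcat.value_counts()
print(counts.index[0])   # the only group present

# With a single group there is no second entry to index, which is
# exactly what balance-taxonomy attempts with .index[1]:
try:
    counts.index[1]
except IndexError as e:
    print(e)   # index 1 is out of bounds for axis 0 with size 1
```

A column with at least two distinct values avoids the error, which is why checking --m-metadata-column is the first thing to try.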


Thank you for your quick response. I was able to track down the error: my input was missing part of the experimental group, so the indices of the table did not match the metadata in the Diet column at all.

Anyway, I did what you suggested: I updated QIIME 2 to version 2019.4 and ran my analysis again. The input has 52 columns with 4233 amplicon sequence variants. When I tried to run ols-regression I got the following error:

"During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File “/opt/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/q2cli/commands.py”, line 311, in call
results = action(**arguments)
File “</opt/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/decorator.py:decorator-gen-278>”, line 2, in ols_regression
File “/opt/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/qiime2/sdk/action.py”, line 231, in bound_callable
output_types, provenance)
File “/opt/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/qiime2/sdk/action.py”, line 427, in callable_executor
ret_val = self._callable(output_dir=temp_dir, **view_args)
File “/opt/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/q2_gneiss/regression/_regression.py”, line 34, in ols_regression
ols_summary(output_dir, res, tree)
File “/opt/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/gneiss/plot/_regression_plot.py”, line 290, in ols_summary
_deposit_results(model, output_dir)
File “/opt/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/gneiss/plot/_regression_plot.py”, line 251, in _deposit_results
header=True, index=True)
File “/opt/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/pandas/core/generic.py”, line 3020, in to_csv
formatter.save()
File “/opt/miniconda3/envs/qiime2-2019.4/lib/python3.6/site-packages/pandas/io/formats/csvs.py”, line 187, in save
f.close()
OSError: [Errno 28] No space left on device"

This happens even though my cluster has 1.6 TB of space available. I really don't know how to solve this – is it a bug?

hmm… strange.

It looks like the model was able to run - and it is failing when trying to write the results.

If I had to guess, it is possible that this is a cluster specific problem.
Things that would help debug include:

  1. Double-checking how much space you have on your personal allocation and clearing out space if necessary
  2. Rerunning this analysis on scratch space (if you have any)
  3. Trying it on another machine (4k features is not that big)
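As a quick sanity check for item 1, a short standard-library Python snippet can report the free space both where temporary files are written and on the home allocation – the two locations that matter here:

```python
import os
import shutil
import tempfile

# Free space where temporary files go vs. the home allocation;
# an OSError [Errno 28] means the *temp* location filled up, even if
# the home filesystem has terabytes free
for path in (tempfile.gettempdir(), os.path.expanduser("~")):
    usage = shutil.disk_usage(path)
    print(f"{path}: {usage.free / 2**30:.1f} GiB free "
          f"of {usage.total / 2**30:.1f} GiB")
```

If the temp directory's total size is much smaller than the cluster's advertised storage, that points to the /tmp issue discussed below rather than a quota problem.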

@mortonjt I just redid the analysis from scratch with an old build, 2018.8, and everything was fine. The output is 10 MB in size, so it's not big at all… I think the developers should take a look, because it looks like a bug in this new build.

Hey @Bruno_Andrade,

I have an idea: if you run df -lh, do you see that the /tmp partition uses the tmpfs filesystem?

If so, that means your /tmp drive is actually made of RAM (and swap) which is obviously more limited (especially when actually doing anything).

This is newish with systemd-based Linux distributions (and overridden in a few distros). If you do have a tmpfs /tmp, you can create an /etc/fstab entry as always, or there is a newer systemd config path which can mount /tmp differently.

You can set $TMPDIR to somewhere other than /tmp to get everything running ASAP if you don't want to configure /tmp to use a different filesystem (if it is indeed tmpfs).
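A sketch of those two checks from Python (normally you would just `export TMPDIR=...` in your shell before running qiime; the ~/qiime2-tmp path is only an example):

```python
import os
import tempfile

# On Linux, /proc/mounts shows whether /tmp is a tmpfs (RAM-backed)
# mount; no matching line means /tmp is part of the root filesystem
try:
    with open("/proc/mounts") as f:
        for line in f:
            device, mountpoint, fstype = line.split()[:3]
            if mountpoint == "/tmp":
                print(f"/tmp filesystem: {fstype}")
except FileNotFoundError:
    pass  # not a Linux system

# Redirect temporary files to a larger filesystem (example path)
os.environ["TMPDIR"] = os.path.expanduser("~/qiime2-tmp")
os.makedirs(os.environ["TMPDIR"], exist_ok=True)
tempfile.tempdir = None           # make tempfile re-read TMPDIR
print(tempfile.gettempdir())      # now points at the new location
```

Setting TMPDIR this way only affects the current process; for a qiime run, set it in the shell that launches the command.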

This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.