Hi there @valzip!
The short answer is that the error model is trained on the first "n" reads in your dataset — the default is 1,000,000. If you change the ordering of your samples, the model will be trained on slightly different data, which is probably what you're observing here. A related post:
I also want to make sure you're aware that DADA2 expects to be executed on a per-sequencing-run basis. It's okay to subdivide a run into multiple execution batches, but combining multiple sequencing runs in a single DADA2 execution should be avoided!
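If it helps, here's a sketch of how that read count can be set explicitly via the q2-dada2 CLI using the `--p-n-reads-learn` parameter (the file names and truncation length below are placeholders — swap in your own):

```shell
# Denoise one sequencing run; the error model is learned from the
# first --p-n-reads-learn reads (1,000,000 is the default).
qiime dada2 denoise-single \
  --i-demultiplexed-seqs demux.qza \
  --p-trunc-len 150 \
  --p-n-reads-learn 1000000 \
  --o-table table.qza \
  --o-representative-sequences rep-seqs.qza \
  --o-denoising-stats stats.qza
```

Raising `--p-n-reads-learn` makes the trained model less sensitive to sample ordering (at the cost of runtime), since a larger share of the dataset contributes to error learning either way.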
I hope that helps.
:qiime2: