I've re-read the host-removal steps in the moshpit tutorial to get more clarity.
You can basically differentiate the two approaches as: 1. filter out reads at the sequence level by mapping them against a (host) reference before classification, and 2. filter out hits/classifications from your feature table after classification.
The host-removal part of the moshpit tutorial shows two variants of the same approach 1.
"Removal of contaminating reads" section is, let's call it, the generic way. Here you import a reference genome (most often the host), create an indexed version of this genome, as this is the way mapping tools work, and you map your reads against it for negative filtering. Here you can use any genome.
Then the "Human host reads" section explains the filter-reads-pangenome action, which is something you called a convenient wrapper. I never tried the filter-reads-pangenome action, but from what I read, it's better, as this will use human pangenome reference data instead of just GRCh38. I assume it will be more thorough. On the negative side, this is a utility action only for human host (possibly the most common host/contamination in the research area), and it will probably take more time to finish, as "under the hood", it has to download something, create the index, and then perform the mapping and filtering. But on the positive side, you have 3 separate commands running together, and next time when you have a similar set of samples and you want to perform host reads removal, you can use this previously created index from filter-reads-pangenome with filter-reads action (will save time).
The three steps that you describe are technically and logically correct, but I don't think anybody refers to the second step that way; in my head it is something that happens by default. Say I mainly want to classify bacteria with Kraken2 and I use the Standard DB (because I already have it, or our design requires it, or for the sake of comparison with other results). That is a case where I do not want to classify reads against viruses and human, yet some may still end up in my feature table. As @llenzi said, it's way faster to download a pre-compiled database (ready to use with Kraken2) than to go through the hassle myself. ALSO, this is where the 3rd filtering step you described comes in: Kraken2 may still classify some reads as human, and you then have the option to remove them afterwards, something like a double protection.
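To make that "double protection" concrete, a hedged sketch: the classify-kraken2 call is from the moshpit/annotate plugin, and its exact parameters may differ by version (hence `--output-dir`), while `qiime taxa filter-table` is a standard QIIME 2 action; the table and taxonomy artifacts you feed it would come from converting the Kraken2 results to features first:

```bash
# Classify the host-filtered reads against the pre-compiled Standard DB
# (parameter names are assumptions; see the action's --help)
qiime moshpit classify-kraken2 \
  --i-seqs reads-host-removed.qza \
  --i-kraken2-db kraken2-standard-db.qza \
  --p-threads 8 \
  --output-dir kraken2-results

# Third filtering step: drop anything that was still classified as human
qiime taxa filter-table \
  --i-table feature-table.qza \
  --i-taxonomy taxonomy.qza \
  --p-exclude "Homo sapiens" \
  --o-filtered-table table-no-human.qza
```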
You're now asking: if my classification database doesn't include human genomes, will the human-derived reads remain unclassified and be naturally filtered out (as in your second step), or is there a risk that they could be misclassified as something else?
I do not know to what extent that could actually happen, but I strongly believe they will simply remain unclassified; somebody correct me here or add more info. If you want only highly reliable classifications, you can increase Kraken2's confidence threshold, and this hypothetical scenario is resolved.
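For reference, in plain Kraken2 that is the `--confidence` option (a fraction between 0 and 1, default 0.0); I would expect the QIIME 2 wrapper to expose the same setting as a parameter, but check its `--help` to be sure:

```bash
# Only accept a classification when at least half of a read's k-mers
# support the assigned taxon; reads below the threshold stay unclassified
kraken2 --db kraken2_standard_db \
        --confidence 0.5 \
        --threads 8 \
        --report sample.k2report \
        --output sample.k2out \
        sample.fastq
```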
I also remember when I tried to build Kraken2 DBs myself. That was a nightmare: parts of it used only 1 core, the build ran for days, and it kept throwing errors because it downloaded from the wrong ftp link (I don't think they have resolved that even now). On top of that, you have to constantly re-download and rebuild the databases to stay up to date. Ever since I discovered that website with regularly updated, pre-built versions of the databases, I use them all the time. Sadly, there is no bacteria-only DB, so I do all the filtering steps.
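For context, the manual route I'm describing is the stock kraken2-build workflow, which looks roughly like this (the database name is a placeholder), and every step has to be repeated whenever you want a fresher database:

```bash
# Download the NCBI taxonomy (this is where the ftp/rsync errors used to bite)
kraken2-build --download-taxonomy --db my_kraken2_db

# Pull each reference library you care about, one by one
kraken2-build --download-library bacteria --db my_kraken2_db
kraken2-build --download-library archaea --db my_kraken2_db

# Build the actual database; this is the step that can run for days
kraken2-build --build --db my_kraken2_db --threads 8

# Optionally reclaim disk space once the build is done
kraken2-build --clean --db my_kraken2_db
```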
Excuse me for mixing up mapping and aligning.