Integration example 3: Package for MAGs (Binning-level) with extdb

This tutorial will guide you through the integration of an example package that requires an external database and belongs to a MAGs module. If this is the first time you are reading a guide in our contribute section, we suggest you read the other two examples first.

What we need for this integration

  • Understand the stream level: in this case, Binning-based

  • Module name: mags_func_annotation

  • Package name: mags_kofam_scan

  • Know the conda dependencies for this package, or the conda package name.

  • Know the actual code needed to integrate and execute the package.

Step 1: Clone/fork the repository and install Geomosaic

Since the final goal is to make a pull request to the main repository, we suggest forking our repo and then cloning it (via SSH):

git clone git@github.com:<YOURNAME>/Geomosaic.git

Install the Geomosaic conda environment. You can follow the Installation Guide.

Remember to replace <YOURNAME> with your GitHub user account.

Once you have cloned the repository, enter the directory created by the clone and create a new branch named after the package you are going to integrate:

git checkout -b mags_kofam_scan

Step 2: Create the module folder (if it does not exist)

Note

As the name suggests, KOfam Scan is a package for functional annotation, in this case of MAGs. Before this integration, the only package that belonged to this module was DRAM. However, DRAM usually takes as input a folder containing all the fasta files of the MAGs, which is different from the input of KOfam Scan, which takes the predicted ORFs of a single MAG.

Due to this difference, the two packages cannot live in the same module, since their dependencies differ. We therefore decided to move DRAM to a module called mags_metabolism_annotation and to place mags_kofam_scan in mags_func_annotation, which depends on mags_orf_prediction. Module names simply describe what their packages do; it would have worked just as well to use mags_func_annotation for DRAM and mags_func_annotation_2 for KOfam Scan.

Step 3: Create the package folder

We need to create the package folder inside the corresponding module folder, which in this case is mags_func_annotation. Since we are going to integrate the program KOfam Scan for MAGs, we create a folder called mags_kofam_scan, as shown below.
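For example (replace <modules_path> with the directory where the existing module folders live in the repository; the exact layout may differ, so check where the other modules are):

mkdir -p <modules_path>/mags_func_annotation/mags_kofam_scan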

Warning

Do not use any special characters or insert spaces in the name.

Highlight

Use only underscores and lower-case characters.

Step 4: Create package’s snakefiles

Now we need to create the three files where we are going to implement all the necessary code:

  • Snakefile.smk

  • Snakefile_target.smk

  • param.yaml

For now you can leave them empty.
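From inside the new package folder, you can create the three empty files like this:

touch Snakefile.smk Snakefile_target.smk param.yaml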

Important

The names of these files are standard and identical for every package. Do not change the filenames.

Step 5: Create the corresponding conda env file

For this step, we don’t need to create the corresponding conda env file, as we already created it in Integration Example 2.

Now we can write our code inside the Snakefile.

Step 6: How to organize the code for the external database (extdb)

This section is covered in Integration example 2.

Step 7: Write the actual code

To our knowledge, for the binning-based stream level there are two types of tools:

  • the first type are tools that take as input all the fasta files inside a folder, like DRAM or GTDB-Tk; their dependency is therefore mags_retrieval.

  • the second type are tools that take one fasta file at a time, like Prodigal or KOfam Scan; the latter takes as input the predicted ORFs (a fasta file) of one MAG at a time.

In this latter case, it is essential to first retrieve all the MAGs we have and then refer to the fasta file of each MAG. Many examples are already integrated in Geomosaic, so we can start from those.

We can use the mags_recognizer package from the mags_domain_annotation module as a template.

Step 7.1 Snakefile: input/output section

We need to rename the rule to the package name, prefixed with run_.

  • Our input section is fine as it is, since we only need the predicted ORFs of each MAG (from the mags_orf_prediction module).

  • In the output section we usually declare an output folder whose name must match the package name. However, if you know that your package produces a specific output file, you can make this section more detailed by also declaring that file. A minimal sketch of the whole section follows below.
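Here is a minimal sketch of how the rule could start; the path layout and wildcard names are assumptions, so adapt them to the mags_recognizer template you copied:

rule run_mags_kofam_scan:
    input:
        # predicted ORFs produced by the mags_orf_prediction module (path is an assumption)
        orfs_folder="{wdir}/{sample}/mags_orf_prediction"
    output:
        # the output folder must have the same name as the package
        folder=directory("{wdir}/{sample}/mags_kofam_scan")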

Step 7.2 Snakefile: threads section

The threads section is fine as it is. If we know that our package cannot be parallelized, we can set this section to 1; otherwise we can leave it unchanged.
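For example, for a tool that cannot run in parallel the section would simply be:

    threads: 1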

Step 7.3 Snakefile: conda section

In this section, we only need to put our package name.
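As a sketch, assuming the template resolves the conda environment through a config mapping keyed by the package name (the exact mechanism is an assumption; mirror whatever the template you copied does):

    conda: config["ENVS"]["mags_kofam_scan"]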

Step 7.4 Snakefile: params section

In each package we define at least one param variable, called user_params, which reads the param.yaml file that we created in Step 4. The code that reads the user parameters is almost always the same (so you don’t need to modify it):


user_params=(lambda x: " ".join(filter(None, yaml.safe_load(open(x, "r"))["mags_kofam_scan"])))(config["USER_PARAMS"]["mags_kofam_scan"])

Just replace mags_kofam_scan with your package name.

Since this package accepts a profiles database for the annotation, we have inserted another param called mags_kofam_scan_profiles (see Step 9), which is read by the following line:


user_kofam_profiles=(lambda x: yaml.safe_load(open(x, "r"))["mags_kofam_scan_profiles"])(config["USER_PARAMS"]["mags_kofam_scan"])
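Putting the two lines together, the params section of the rule would look like the sketch below. Note that yaml must be imported at the top of the Snakefile if the template does not already do so; extdb_folder is a hypothetical param name for the external database path from Step 6, used again in Step 7.5:

    params:
        user_params=(lambda x: " ".join(filter(None, yaml.safe_load(open(x, "r"))["mags_kofam_scan"])))(config["USER_PARAMS"]["mags_kofam_scan"]),
        user_kofam_profiles=(lambda x: yaml.safe_load(open(x, "r"))["mags_kofam_scan_profiles"])(config["USER_PARAMS"]["mags_kofam_scan"]),
        # extdb_folder=...  # path to the KOfam external database, organized as in Step 6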

Step 7.5 Snakefile: shell section

This is the section in which we put the actual code to execute our program.

(The complete shell section is in the package’s Snakefile.smk in the Geomosaic repository.)
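As an illustration only, the shell section could look roughly like the sketch below. The loop over MAGs, the .faa extension, the .hal naming of the profile subsets, and the extdb_folder param are all assumptions, so check the real Snakefile.smk; exec_annotation with its -p, -k, --cpu, and -o options is the actual KOfam Scan command-line interface:

shell:
    """
    mkdir -p {output.folder}

    # annotate the predicted ORFs of each MAG, one fasta file at a time
    for orfs in {input.orfs_folder}/*.faa; do
        mag=$(basename $orfs .faa)
        exec_annotation --cpu {threads} \
            -p {params.extdb_folder}/profiles/{params.user_kofam_profiles}.hal \
            -k {params.extdb_folder}/ko_list \
            {params.user_params} \
            -o {output.folder}/$mag.txt $orfs
    done
    """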

Step 8: Snakefile Target

In our Snakefile_target.smk we only need to write a few lines. First, the name of the rule must be the package name with the all_ prefix. Then we need to adapt the input section, specifying the same output folder, since that was the only output we declared in Snakefile.smk.

(The complete target rule is in the package’s Snakefile_target.smk in the Geomosaic repository.)
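A minimal sketch; the expand() arguments and the config keys (WDIR, SAMPLES) are assumptions based on typical Snakemake target rules, so mirror the template you copied:

rule all_mags_kofam_scan:
    input:
        # the same output folder declared in Snakefile.smk, expanded over all samples
        expand("{wdir}/{sample}/mags_kofam_scan", wdir=config["WDIR"], sample=config["SAMPLES"])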

Step 9: param.yaml file

param.yaml is the file in which the user, before executing the workflow, can insert all the optional parameters of the package as bullet points. In this case, we only need to open the file and add the following lines:

(The exact content is in the package’s param.yaml in the Geomosaic repository.)
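A sketch of the lines to add; the profiles value is an assumption (the KOfam database ships profile subsets such as prokaryote):

mags_kofam_scan:
  - ""                                 # optional exec_annotation flags, one per bullet
mags_kofam_scan_profiles: prokaryote   # profile subset used for the annotation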

Test the integration

Now we should test the integrated package. Activate the Geomosaic conda environment and update geomosaic by running:

pip install .

Once everything has been tested, we can commit the changes and create the pull request.
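For example, using the branch created in Step 1 (the commit message is just illustrative):

git add .
git commit -m "Integrate mags_kofam_scan"
git push origin mags_kofam_scan

The pull request can then be opened from your fork on GitHub.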