Ipyrad
Introduction
The software ipyrad is an interactive toolkit for assembly and analysis of restriction-site associated genomic data sets (e.g., RAD, ddRAD, GBS) for population genetic and phylogenetic studies. [1]
At the moment, there is no central installation of ipyrad, however, you can easily install it yourself using Anaconda3 as described below.
Installation
To install ipyrad you first need to load a module for Anaconda3. In this example, we use Anaconda3/2020.02 which can be found in hpc-env/8.3 (if you want to use a different version/environment you can search with module av Anaconda3 or module spider Anaconda3):
[carl]$ module load hpc-env/8.3
[carl]$ module load Anaconda3/2020.02
The next step is to create a new environment for ipyrad with the command:
[carl]$ conda create --name ipyrad
Collecting package metadata (current_repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: /user/abcd1234/.conda/envs/ipyrad

Proceed ([y]/n)?

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
The name of the environment can be freely chosen, and it will be created after you confirm with y (and Enter). You may see a warning about an outdated conda, which you can safely ignore (or, if you wish, you can switch to a newer module of Anaconda3 if available).
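If you want to check that the creation succeeded, you can list all conda environments; the new ipyrad environment should show up with the location printed above:

[carl]$ conda env list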
The new environment can now be activated. We recommend using the command(*):
[carl]$ source activate ipyrad
(ipyrad) [carl]$
You will notice the change of the command-line prompt to indicate the active environment. Packages that are now installed with conda install will be installed in this environment and not interfere with other software installations.
(*) The alternative conda activate requires you to run conda init bash first, which modifies your .bashrc and more or less forces you to always use the same version of Anaconda3.
Now you can install ipyrad along with the package mpi4py for parallel computing:
(ipyrad) [carl]$ conda install ipyrad -c bioconda
(ipyrad) [carl]$ conda install mpi4py -c conda-forge
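These commands will take a moment to complete. Once they finish, a quick sanity check is to confirm that the ipyrad executable resolves into the new environment and prints its version (the path shown assumes the environment location from the conda create output above):

(ipyrad) [carl]$ which ipyrad
/user/abcd1234/.conda/envs/ipyrad/bin/ipyrad
(ipyrad) [carl]$ ipyrad --version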
After that, ipyrad is ready to use. The next time you log in, or in a job script, you only need the commands
[carl]$ module load hpc-env/8.3
[carl]$ module load Anaconda3/2020.02
[carl]$ source activate ipyrad
to get started. If you want to leave the environment you can always type
(ipyrad) [carl]$ conda deactivate
which should return you to the normal command-line prompt.
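If you use ipyrad regularly, you could collect the three setup commands in a small file, e.g. a hypothetical ipyrad_env.sh in your home directory, and source it whenever needed:

## contents of ipyrad_env.sh (hypothetical helper; use with: source ~/ipyrad_env.sh)
module load hpc-env/8.3
module load Anaconda3/2020.02
source activate ipyrad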
Using ipyrad on CARL
Preparations and First Tests
Following the Introductory tutorial to the CLI you can start by downloading some test data and creating a parameter file in a new directory under $WORK:
(ipyrad) [carl]$ mkdir $WORK/ipyrad
(ipyrad) [carl]$ cd $WORK/ipyrad
(ipyrad) [carl]$ curl -LkO https://eaton-lab.org/data/ipsimdata.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 11.8M  100 11.8M    0     0  8514k      0  0:00:01  0:00:01 --:--:-- 8508k
(ipyrad) [carl]$ tar -xzf ipsimdata.tar.gz
(ipyrad) [carl]$ ipyrad -n iptest
  New file 'params-iptest.txt' created in /gss/work/abcd1234/ipyrad
The resulting file params-iptest.txt has to be opened in a text editor to add the locations of the raw non-demultiplexed fastq file and the barcodes file. With the test data the first couple of lines should look like this:
(ipyrad) [carl]$ head params-iptest.txt
------- ipyrad params file (v.0.9.53)-------------------------------------------
iptest                         ## [0] [assembly_name]: Assembly name. Used to name output directories for assembly steps
/gss/work/abcd1234/ipyrad      ## [1] [project_dir]: Project dir (made in curdir if not present)
/gss/work/abcd1234/ipyrad/ipsimdata/rad_example_R1_.fastq.gz ## [2] [raw_fastq_path]: Location of raw non-demultiplexed fastq files
/gss/work/abcd1234/ipyrad/ipsimdata/rad_example_barcodes.txt ## [3] [barcodes_path]: Location of barcodes file
                               ## [4] [sorted_fastq_path]: Location of demultiplexed/sorted fastq files
denovo                         ## [5] [assembly_method]: Assembly method (denovo, reference)
We recommend using absolute file names, including the full path to the file, which allows you to move the parameter file to other locations (e.g. a job-specific directory). If you prefer to script this step instead of editing by hand, see the sketch below.
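As a sketch of such a scriptable alternative (assuming the test data location from above and GNU sed; adjust the paths for your own data), the two file locations could be prepended to the still-empty value fields of the [2] and [3] lines like this:

## prepend the fastq and barcodes paths to the lines for parameters [2] and [3]
(ipyrad) [carl]$ sed -i "/## \[2\] /s|^|$WORK/ipyrad/ipsimdata/rad_example_R1_.fastq.gz |" params-iptest.txt
(ipyrad) [carl]$ sed -i "/## \[3\] /s|^|$WORK/ipyrad/ipsimdata/rad_example_barcodes.txt |" params-iptest.txt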
A simple test can be performed with this command:
(ipyrad) [carl]$ ipyrad -p params-iptest.txt -s 1 -c 8

 -------------------------------------------------------------
  ipyrad [v.0.9.53]
  Interactive assembly and analysis of RAD-seq data
 -------------------------------------------------------------
  Parallel connection | hpcl004: 8 cores

  Step 1: Demultiplexing fastq data to Samples
The program performs the first step of the workflow (-s 1) using a total of 8 cores (-c 8) on the login node. We can now remove the newly created data files and directories with
(ipyrad) [carl]$ rm -r iptest_fastqs/ iptest.json
to avoid error messages in the next steps.
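Before deleting iptest.json you could also inspect what step 1 produced: ipyrad's -r flag prints a status summary for the assembly referenced by the params file (it reads the iptest.json record, so it has to run before the cleanup above):

(ipyrad) [carl]$ ipyrad -p params-iptest.txt -r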
Job Script for Using Multiple Cores on a Single Compute Node
With the preparations from the previous step, we can now use a job script to run ipyrad on the compute nodes. The first example uses a single compute node with multiple cores. The job script could look like this:
#!/bin/bash
#SBATCH --partition carl.p
#SBATCH --nodes 1
#SBATCH --tasks-per-node 1
#SBATCH --cpus-per-task 12
#SBATCH --time 0-01:00:00
#SBATCH --mem-per-cpu 5000
#SBATCH --job-name ipyrad
#SBATCH --output ipyrad_output_%j.txt

## assembly name
assembly_name="iptest"

## load modules and activate conda environment
module load hpc-env/8.3
module load Anaconda3/2020.02
source activate ipyrad

## change into the directory where your params file resides
cd $WORK/ipyrad

## create, prepare and change to a job-specific dir
jobdir="ipyrad_${SLURM_JOB_ID}"
params="params-${assembly_name}.txt"
mkdir $jobdir
sed "s#$(pwd) #$(pwd)/$jobdir#" $params > $jobdir/$params
cd $jobdir

## setting the number of available cores
cores=1
if [ -z "$SLURM_CPUS_PER_TASK" ]
then
   cores=${SLURM_NTASKS}
else
   cores=$((SLURM_NTASKS*SLURM_CPUS_PER_TASK))
fi

## call ipyrad on your params file and perform 7 steps from the workflow
cmd="ipyrad -p $params -s 1234567 -c $cores"
echo "== starting ipyrad on $(hostname) at $(date) =="
echo "== command: $cmd"
eval $cmd
retval=$?
if [ $retval -ne 0 ]
then
   echo "Warning: exit code for command $cmd is non-zero (=$retval)"
fi
echo "== completed ipyrad on $(hostname) at $(date) =="
exit $retval
Some explanations:
- the job script requests a single task with a total of 12 CPU cores (--cpus-per-task 12). Depending on the partition, the number of cores can be chosen between 1 and the maximum number of cores available; for carl.p this is 24. The number of cores selected is automatically passed to the ipyrad command near the end of the script.
- other resources can be requested as needed, here e.g. 5000 MB of RAM per core (--mem-per-cpu).
- the script assumes a base directory ($WORK/ipyrad) where the file params-iptest.txt and the job script are located. A job-specific subdirectory is created and params-iptest.txt is copied there (by the sed command, which also changes the project dir). If you do not want a job-specific directory, you can comment out the lines that contain jobdir (with or without the $).
When you save the job script as ipyrad_job.sh you can submit a job with
[carl]$ sbatch ipyrad_job.sh
Note that you do not need to activate the conda environment before submitting; this is (and has to be) done within the job script itself.
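After submission, the job can be monitored with the usual SLURM tools, and the output file (named after the --output pattern, with %j replaced by the job ID) can be followed while the job runs:

[carl]$ squeue -u $USER
[carl]$ tail -f ipyrad_output_<jobid>.txt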
Job Script for Using Multiple Compute Nodes
ipyrad can also distribute its parallel workers across several compute nodes using MPI; this is what the mpi4py package installed above is needed for. The script below is a sketch based on the single-node script: the resource requests are changed to 12 single-core tasks on each of two nodes, and ipyrad's --MPI flag is added to the command so that cores on all allocated nodes are used. The node and task counts are example values you should adapt to your data:

#!/bin/bash
#SBATCH --partition carl.p
#SBATCH --nodes 2
#SBATCH --tasks-per-node 12
#SBATCH --cpus-per-task 1
#SBATCH --time 0-01:00:00
#SBATCH --mem-per-cpu 5000
#SBATCH --job-name ipyrad
#SBATCH --output ipyrad_output_%j.txt

## assembly name
assembly_name="iptest"

## load modules and activate conda environment
module load hpc-env/8.3
module load Anaconda3/2020.02
source activate ipyrad

## change into the directory where your params file resides
cd $WORK/ipyrad

## create, prepare and change to a job-specific dir
jobdir="ipyrad_${SLURM_JOB_ID}"
params="params-${assembly_name}.txt"
mkdir $jobdir
sed "s#$(pwd) #$(pwd)/$jobdir#" $params > $jobdir/$params
cd $jobdir

## setting the number of available cores (total over all nodes)
cores=1
if [ -z "$SLURM_CPUS_PER_TASK" ]
then
   cores=${SLURM_NTASKS}
else
   cores=$((SLURM_NTASKS*SLURM_CPUS_PER_TASK))
fi

## call ipyrad on your params file and perform 7 steps from the workflow;
## --MPI lets ipyrad connect to cores on all allocated nodes
cmd="ipyrad -p $params -s 1234567 -c $cores --MPI"
echo "== starting ipyrad on $(hostname) at $(date) =="
echo "== command: $cmd"
eval $cmd
retval=$?
if [ $retval -ne 0 ]
then
   echo "Warning: exit code for command $cmd is non-zero (=$retval)"
fi
echo "== completed ipyrad on $(hostname) at $(date) =="
exit $retval

With --nodes 2 and --tasks-per-node 12, SLURM_NTASKS is 24, so ipyrad is started with -c 24 and spreads its workers over both nodes.