Same as MPI version.
Some environment variables are used. The first two are mandatory.

The number of threads to run in a process:
$ export MAFFT_N_THREADS_PER_PROCESS="8"

Location of mpirun/mpiexec and its options:
$ export MAFFT_MPIRUN="/somewhere/bin/mpirun -n 12 -npernode 2 -bind-to none ..." (for OpenMPI)
$ export MAFFT_MPIRUN="/somewhere/bin/mpirun -n 12 -perhost 2 -binding none ..." (for MPICH)
mpirun or mpiexec must come from the same library as the mpicc that was used for compiling. Depending on the configuration of your cluster, it may also be necessary to set LD_LIBRARY_PATH:
$ export LD_LIBRARY_PATH="/somewhere/lib"

(Optional) Location of a temporary directory (see details):
$ export MAFFT_TMPDIR="/location/of/shared/filesystem/"

MAFFT_N_THREADS_PER_PROCESS and MAFFT_MPIRUN specify the computational resources (number of machines, threads, etc.) to use for the calculation. Environment-specific options for mpirun/mpiexec are also set through MAFFT_MPIRUN. These options differ between MPI environments; consult the administrator of your cluster about appropriate settings.

In the example above, 8 threads run in each process and two processes run on each of 6 (=12/2) machines. In total, 12 processes with 96 (=12×8) threads run across 6 machines. The number of threads per machine (8×2=16) must be less than or equal to the number of physical cores in a machine.
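As an illustration of how to adapt these numbers (hypothetical values; the path to mpirun and any extra options are placeholders to be adjusted for your site), a cluster of 2 machines with 16 physical cores each could run 2 processes per machine with 8 threads per process, i.e. 4 processes and 32 threads in total:
$ export MAFFT_N_THREADS_PER_PROCESS="8"
$ export MAFFT_MPIRUN="/somewhere/bin/mpirun -n 4 -npernode 2 -bind-to none" (for OpenMPI)
Here 2×8=16 threads run on each machine, which does not exceed the assumed 16 physical cores per machine.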
To avoid typing these commands each time, use a batch script (see below), in which these parameters are easily set.
Add "--mpi --large" to the normal command of G-INS-1, L-INS-1 or E-INS-1.G-large-INS-1: $ mafft --mpi --large --globalpair --threadtb 16 input L-large-INS-1: $ mafft --mpi --large --localpair --threadtb 16 input E-large-INS-1: $ mafft --mpi --large --genafpair --threadtb 16 input E-large-INS-1 (old parameters): $ mafft --mpi --large --oldgenafpair --thread 16 inputThe --threadtb flag specifies the number of threads (16 in these examples) used in step 2 (see below). It must be less than or equal to the number of physical cores in a single machine.
To set the environment variables and run the command in one step, a batch script can be used:
$ sh simple.noscheduler
Edit the simple.noscheduler script according to your cluster's environment, and run it. Detailed information about the variables is given in the batch file itself.

For job schedulers, edit and run one of the following templates. If unsuccessful, first try the above no-scheduler script with small input sequence data, to identify the cause.
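Roughly, a no-scheduler script of this kind boils down to the following sketch; all paths, host settings and counts are placeholders, and the simple.noscheduler file distributed with MAFFT is the authoritative template.
#!/bin/sh
# Sketch only (placeholder values); see simple.noscheduler for the real template.
# Resources: 12 MPI processes x 8 threads = 96 threads across 6 machines, 2 processes per machine.
export MAFFT_N_THREADS_PER_PROCESS="8"
export MAFFT_MPIRUN="/somewhere/bin/mpirun -n 12 -npernode 2 -bind-to none -hostfile hosts.txt"  # OpenMPI; -hostfile hosts.txt (a file listing the machines) is one example of the site-specific options
export LD_LIBRARY_PATH="/somewhere/lib"               # only if required by the cluster
export MAFFT_TMPDIR="/location/of/shared/filesystem/" # optional
mafft --mpi --large --localpair --threadtb 16 input > output
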
LSF:
$ bsub < simple.lsf
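For reference, a hypothetical LSF script for the same 12-process × 8-thread example might look roughly like this; the job name, slot counts and file names are placeholders, and the simple.lsf template distributed with MAFFT is the authoritative version.
#!/bin/sh
#BSUB -J mafft-mpi            # job name (placeholder)
#BSUB -n 96                   # total slots = 12 processes x 8 threads (placeholder)
#BSUB -R "span[ptile=16]"     # slots per host (placeholder)
#BSUB -o mafft.%J.log         # log file (placeholder)
export MAFFT_N_THREADS_PER_PROCESS="8"
export MAFFT_MPIRUN="/somewhere/bin/mpirun -n 12 -npernode 2 -bind-to none"  # OpenMPI; placeholder path and options
mafft --mpi --large --localpair --threadtb 16 input > output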

SGE/UGE:
SGE/UGE has no common option for MPI+multithread hybrid programs, but many cluster systems have a parallel environment (PE) configured for this purpose. The example below uses a PE named mpi16, but the name of the PE differs between systems (a rough sketch of a PE-based script is given at the end of this subsection).
$ qsub simple.uge
Ask your system administrator whether there is an appropriate PE for MPI/multithread hybrid programs. If there is none, disable multithreading:
$ qsub singlethread.uge
In this case, step 2 runs serially.
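For reference, a hypothetical SGE/UGE script using such a PE might look roughly like this; the PE name mpi16 and the slot count are placeholders, and the simple.uge template distributed with MAFFT is the authoritative version.
#!/bin/sh
#$ -S /bin/sh
#$ -cwd
#$ -pe mpi16 96               # request 96 slots through the mpi16 PE (placeholder values)
export MAFFT_N_THREADS_PER_PROCESS="8"
export MAFFT_MPIRUN="/somewhere/bin/mpirun -n 12 -npernode 2 -bind-to none"  # OpenMPI; placeholder path and options
mafft --mpi --large --localpair --threadtb 16 input > output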

PBS:
$ qsub simple.pbs

SLURM:
In preparation.
See MPI version.
Comments are welcome.