Atlas is a Cray CS500 Linux cluster.
Atlas is composed of 244 nodes, each containing two 2.40 GHz Intel Xeon Platinum 8260 (2nd Generation Scalable) processors with 24 cores apiece, for a total of 48 cores per node.
$ ssh <SCINet UserID>@Atlas-login.hpc.msstate.edu
$ ssh <SCINet UserID>@Atlas-dtn.hpc.msstate.edu
The Atlas data transfer nodes are also available through the Globus endpoint msuhpc2#Atlas-dtn.
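Files can also be copied through the data transfer nodes from the command line. This is a sketch only; the local file name and the destination path are placeholders, not paths from this guide.

```shell
$ scp my_reads.fastq.gz <SCINet UserID>@Atlas-dtn.hpc.msstate.edu:<destination path>
```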
Atlas uses Lmod as its environment module system.
$ module load blast

or

$ ml blast
Some applications are available as containers: self-contained execution environments that bundle the software and its dependencies.
These container-backed applications appear in the output of module avail once the user has loaded the singularity module.

Atlas uses the Slurm Workload Manager as a scheduler and resource manager.
Slurm has three primary job allocation commands (salloc, srun, and sbatch), all of which accept almost identical options.
In this exercise, we will use srun to run one step of an analysis and sbatch to submit a job array that runs the second step of the analysis several times on different data.

The training material is located in /home/adam.thrash/training/.
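The job-array step can be sketched as a batch script. This is a minimal sketch, assuming the scinet account, the container-backed salmon module, and paired-end reads under data/DRR0161 plus a task number; the array range, time limit, and file names are illustrative placeholders, not values from the exercise.

```shell
#!/bin/bash
# Hypothetical sketch of the second step: quantify several samples as a job array.
#SBATCH --job-name=salmon-quant
#SBATCH --account=scinet           # account name taken from the exercise description
#SBATCH --ntasks=1
#SBATCH --time=01:00:00            # illustrative time limit
#SBATCH --array=25-40              # illustrative range; one task per DRR0161NN sample

# salmon is provided as a container, so the singularity module is loaded first.
module load singularity
module load salmon

# Each array task works on the sample whose number matches its task ID.
sample="DRR0161${SLURM_ARRAY_TASK_ID}"

# Quantify against the index built in the first step (index and read file names
# are illustrative; -l A lets salmon infer the library type).
salmon quant -i athal_index -l A \
    -1 data/${sample}/${sample}_1.fastq.gz \
    -2 data/${sample}/${sample}_2.fastq.gz \
    -o quants/${sample}_quant
```

Each array task receives its own SLURM_ARRAY_TASK_ID, so one script covers every sample directory.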
The reference transcriptome is athal.fa.gz, and the input reads are in directories named data/DRR0161 + the number. Jobs for the exercise are submitted under the scinet account, and the analysis uses the singularity module and the salmon module. The salmon module is accessible as a container, so singularity has to be loaded first. The first job runs the salmon indexing command on the reference; the job array then runs salmon on the reads in rna_files.
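The indexing step could be launched as a single Slurm step. This is a sketch, assuming the scinet account and the reference file named above; the index directory name athal_index is an illustrative choice, not one fixed by the exercise.

```shell
# salmon is container-backed, so load singularity before salmon.
module load singularity
module load salmon

# Build the salmon index from the reference transcriptome in one Slurm step
# (athal_index is an illustrative output directory name).
srun --account=scinet --ntasks=1 \
    salmon index -t athal.fa.gz -i athal_index
```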