
Cluster Mode: Batch Scheduling Cell Ranger ATAC

Cell Ranger ATAC can be run in cluster mode, using SGE or LSF to run the stages via batch scheduling. This allows highly parallelizable stages to utilize hundreds or thousands of cores concurrently, dramatically reducing time to solution.

Running pipelines in cluster mode requires the following:

  1. Cell Ranger ATAC is installed in the same location on all nodes of the cluster. For example, /opt/cellranger-atac-1.1.0 or /net/apps/cellranger-atac-1.1.0
  2. Cell Ranger ATAC pipelines will be run on a shared file system that is accessible to all nodes of the cluster. NFS-mounted directories are the most common solution to this requirement.
  3. The cluster will accept both single-core and multithreaded (shared-memory) jobs.

Configuring Cluster Integration

Installing the Cell Ranger ATAC software on a cluster is identical to the installation procedure for local-mode (non-cluster) operation. After you have confirmed that the cellranger-atac pipelines can run in local mode, you must configure the job submission template that Cell Ranger ATAC will use to submit jobs to your cluster. Assuming you installed Cell Ranger ATAC to /opt/cellranger-atac-1.1.0, the process is as follows.

Step 1. Navigate to the Martian runtime's jobmanagers/ directory, which contains example jobmanager templates.

$ cd /opt/cellranger-atac-1.1.0/martian-cs/3.2.1/jobmanagers
$ ls
bsub.template.example  config.json  sge.template.example

Step 2. In this jobmanagers/ directory, copy your cluster's example template (SGE or LSF) to the same file name without the .example suffix (e.g., sge.template).

$ cp -v sge.template.example sge.template
`sge.template.example' -> `sge.template'
$ ls
bsub.template.example  config.json  sge.template  sge.template.example

Step 3. Edit the template file, making any modifications required by your specific cluster. At a minimum, the template should specify the following information:

  1. Job name
  2. Number of required threads
  3. Amount of required memory
  4. Where to direct the standard output and error streams
  5. Any commands the cluster requires to submit or otherwise handle the job
$ nano sge.template
...
 
$ cat sge.template
#$ -N __MRO_JOB_NAME__
#$ -V
#$ -pe threads __MRO_THREADS__
#$ -l mem_free=__MRO_MEM_GB__G
#$ -cwd
#$ -o __MRO_STDOUT__
#$ -e __MRO_STDERR__
 
__MRO_CMD__

If you are using an SGE cluster, you MUST modify the #$ -pe <pe_name> line of the example template to reflect the name of your cluster's multithreaded parallel environment (e.g., threads in the above example). You can view a list of your cluster's parallel environments using the qconf -spl command.
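
For example, a cluster might report parallel environments like the following (these names are hypothetical; yours will differ):

$ qconf -spl
make
smp
threads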

Beyond these required fields, the most common modifications to the job submission template are additional lines specifying options your cluster requires, such as the queue to submit to or the project or account to charge.

These job submission templates contain a number of special variables that are substituted by the Martian runtime when each stage is being submitted. Specifically, the following variables will be expanded when a pipeline is submitting jobs to the cluster:

Variable                    Must be present?  Description
__MRO_JOB_NAME__            Yes               Job name composed of the sample ID and stage being executed
__MRO_THREADS__             No                Number of threads required by the stage
__MRO_MEM_GB__ /
__MRO_MEM_MB__              No                Amount of memory (in GB or MB) required by the stage
__MRO_MEM_GB_PER_THREAD__ /
__MRO_MEM_MB_PER_THREAD__   No                Amount of memory (in GB or MB) required per thread in multi-threaded stages
__MRO_STDOUT__ /
__MRO_STDERR__              Yes               Paths to the _stdout and _stderr metadata files for the stage
__MRO_CMD__                 Yes               Bourne shell command to run the stage code

It is critical that the special variables listed as required are present in the final template you create. If you are unsure of how this template should appear for your cluster, consult your cluster's administrator or help desk.
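
To make the mechanics concrete, the substitution is roughly equivalent to the following (a hypothetical sketch with made-up values; the actual substitution happens inside the Martian runtime, not via sed):

$ sed -e 's/__MRO_JOB_NAME__/ID.sample1.SOME_STAGE/' \
      -e 's/__MRO_THREADS__/4/' \
      -e 's/__MRO_MEM_GB__/16/' \
      -e 's|__MRO_STDOUT__|/path/to/stage/_stdout|' \
      -e 's|__MRO_STDERR__|/path/to/stage/_stderr|' \
      -e 's|__MRO_CMD__|/path/to/stage/command|' \
      sge.template > job.script
$ qsub job.script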

Validating Template Configuration

To run a Cell Ranger ATAC pipeline in cluster mode, simply add the --jobmode=sge or --jobmode=lsf command-line option when using the cellranger-atac commands. The pipeline orchestration will still occur on your local machine, but individual stages will be submitted to your cluster as they become eligible to execute.

To validate that cluster mode is properly configured, you can follow the same validation instructions given for cellranger-atac in the Installation page but add --jobmode=sge or --jobmode=lsf.

$ cellranger-atac mkfastq --run=./tiny-bcl --samplesheet=./tiny-sheet.csv --jobmode=sge
 
Martian Runtime - 1.1.0
 
Running preflight checks (please wait)...
2016-09-13 12:00:00 [runtime] (ready)           ID.HAWT7ADXX.MAKE_FASTQS_CS.MAKE_FASTQS.PREPARE_SAMPLESHEET
2016-09-13 12:00:00 [runtime] (split_complete)  ID.HAWT7ADXX.MAKE_FASTQS_CS.MAKE_FASTQS.PREPARE_SAMPLESHEET
...

Once the preflight checks have finished, check your job queue and you will begin to see stages queuing up:

$ qstat
job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID
-----------------------------------------------------------------------------------------------------------------
8675309 0.56000 ID.HAWT7AD jdoe         qw    09/13/2016 12:00:00 all.q@cluster.university.edu       1
8675310 0.55500 ID.HAWT7AD jdoe         qw    09/13/2016 12:00:00 all.q@cluster.university.edu       1

If you encounter a pipeline failure, an error message will appear:

[error] Pipestance failed. Please see log at:
HAWT7ADXX/MAKE_FASTQS_CS/MAKE_FASTQS/MAKE_FASTQS_PREFLIGHT/fork0/chnk0/_errors
 
Saving diagnostics to HAWT7ADXX/HAWT7ADXX.debug.tgz
For assistance, upload this file to 10x by running:
 
uploadto10x <your_email> HAWT7ADXX/HAWT7ADXX.debug.tgz

The _errors file will contain a jobcmd error:

$ cat HAWT7ADXX/MAKE_FASTQS_CS/MAKE_FASTQS/MAKE_FASTQS_PREFLIGHT/fork0/chnk0/_errors
 
jobcmd error:
exit status 1

The most likely reason for this failure is an invalid job submission template, which causes the job submission via the qsub or bsub command to fail.
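
One way to isolate a template problem is to submit a trivial job by hand using the same options as your template, with the variables filled in manually (a hypothetical sanity check; substitute your own parallel environment and resource names):

$ qsub -N template_test -V -pe threads 1 -l mem_free=1G -cwd \
    -o test.out -e test.err -b y 'echo hello'

If this submission fails, the error message from qsub will usually point to the offending option.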

Cluster Mode Mechanics

After configuring Cell Ranger ATAC for cluster mode, the cellranger-atac pipelines can be run with --jobmode=sge or --jobmode=lsf. This will make the underlying Martian pipeline framework launch each stage through the qsub or bsub commands when running in SGE or LSF modes, respectively. As stages' jobs are queued, launched, and completed, the pipeline framework will track their status using the metadata files that each stage maintains in the pipeline output directory.

Like local-mode pipelines, cluster-mode pipelines can be restarted after failure. They maintain the same order of execution for dependent subsections of the pipeline. All executed stage code is identical to local mode, and the quantitative results will be identical to the limit of each stage's reproducibility.

In addition, the Cell Ranger ATAC UI can still be used with cluster mode. Because the Martian pipeline framework runs on the node from which the command was issued, the UI will also run from that node.

Memory Requests and Consumption

Each stage in the Cell Ranger ATAC pipelines requests a specific number of cores and amount of memory to aid with resource management. These values are used to prevent oversubscription of the computing system when running pipelines in local (non-cluster) mode. How CPU and memory requests are handled in cluster mode is defined by:

  1. How the __MRO_THREADS__ and __MRO_MEM_GB__ variables are used within the job template.
  2. How your specific cluster's job manager schedules resources.

SGE / Grid Engine

SGE supports requesting memory via the mem_free resource natively, although your cluster may have another mechanism for requesting memory. To pass each stage's memory request through to SGE, add an additional line to your sge.template that requests mem_free, h_vmem, h_rss, or the custom memory resource defined by your cluster:

$ cat sge.template
#$ -N __MRO_JOB_NAME__
#$ -V
#$ -pe threads __MRO_THREADS__
#$ -l mem_free=__MRO_MEM_GB__G
#$ -cwd
#$ -o __MRO_STDOUT__
#$ -e __MRO_STDERR__
 
__MRO_CMD__

Note that h_vmem (virtual memory) and mem_free/h_rss (physical memory) represent two different quantities, and that the __MRO_MEM_GB__ requests made by Cell Ranger ATAC stages are expressed as physical memory. As such, using h_vmem in your job template may cause certain stages to be killed unnecessarily if their virtual memory consumption is substantially larger than their physical memory consumption. For this reason, we do not recommend using h_vmem.
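
If you are unsure which memory resources your cluster defines, you can inspect the configured resource complexes with qconf (the exact output varies by site):

$ qconf -sc | grep -E 'mem_free|h_rss|h_vmem'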

Platform LSF

LSF supports job memory requests through the -M and -R "rusage[mem=...]" options, but these requests generally must be expressed in MB, not GB. As such, your LSF job template should use the __MRO_MEM_MB__ variable rather than __MRO_MEM_GB__. For example,

$ cat bsub.template
#BSUB -J __MRO_JOB_NAME__
#BSUB -n __MRO_THREADS__
#BSUB -o __MRO_STDOUT__
#BSUB -e __MRO_STDERR__
#BSUB -R "rusage[mem=__MRO_MEM_MB__]"
#BSUB -R span[hosts=1]
 
__MRO_CMD__
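
Note that whether LSF treats rusage[mem] as a per-job or per-slot reservation depends on your site's LSF configuration. If your cluster reserves memory per slot, the per-thread variable may be the more appropriate choice, e.g.:

#BSUB -R "rusage[mem=__MRO_MEM_MB_PER_THREAD__]"

Check with your cluster administrator to determine which convention applies to your site.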

Requesting Memory via Cores

For clusters whose job managers do not support memory requests, it is possible to request memory in the form of cores via the --mempercore command-line option. This option scales up the number of threads requested via the __MRO_THREADS__ variable according to how much memory a stage requires, given the memory-per-core ratio of your nodes.

For example, given a cluster whose nodes have 16 cores and 128 GB of memory (8 GB per core), the following pipeline invocation command

$ cellranger-atac mkfastq --run=./tiny-bcl --samplesheet=./tiny-sheet.csv --jobmode=sge --mempercore=8

will issue the following resource requests:

  1. A stage requesting 1 thread and 4 GB of memory will request 1 core.
  2. A stage requesting 4 threads and 16 GB of memory will request 4 cores.
  3. A stage requesting 1 thread and 64 GB of memory will request 8 cores, even though it will use only one of them.

As the final item illustrates, this mode can result in wasted CPU cycles and is only provided for clusters that cannot allocate memory as an independent resource.
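
The scaling rule can be summarized as follows (a sketch of this behavior as we understand it, not the exact implementation):

$ mempercore=8 threads=1 mem_gb=64
$ cores=$(( (mem_gb + mempercore - 1) / mempercore ))   # round memory up to whole cores
$ [ "$cores" -lt "$threads" ] && cores=$threads          # never request fewer cores than threads
$ echo $cores
8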

Every cluster configuration is different, so if you are unsure of how your cluster resource management is configured, please contact your cluster administrator or help desk.

Rate Limiting Job Submissions

Some Cell Ranger ATAC pipeline stages are divided into hundreds of jobs. By default, the rate at which these jobs are submitted to the cluster is throttled to at most 64 concurrent jobs, with at least 100 ms between submissions, to avoid running into limits on clusters that impose quotas on the total number of pending jobs a user can submit.

If your cluster does not have such limits or is not shared with other users, you can control how the Martian pipeline runner sends job submissions to your cluster by using the --maxjobs and --jobinterval parameters.

You can increase the cap on the number of concurrent jobs to 200 with the --maxjobs parameter:

$ cellranger-atac count --id=sample ... --jobmode=sge --maxjobs=200

You may also change the rate limit on how often the Martian pipeline runner sends submissions to the cluster. To add a five-second pause between job submissions, use the --jobinterval parameter:

$ cellranger-atac count --id=sample ... --jobmode=sge --jobinterval=5000

The job interval parameter is in milliseconds. The minimum allowable value is 1.
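
Both options can be combined in a single invocation, for example:

$ cellranger-atac count --id=sample ... --jobmode=sge --maxjobs=200 --jobinterval=5000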