
bsub

submits a batch job to LSF

Synopsis

bsub [options] command [arguments] 
bsub [-h | -V]  

Option List

-B

-H

-I | -Ip | -Is

-K

-N

-r | -rn

-ul

-x

-a esub_application

-app application_profile_name

-b [[month:]day:]hour:minute

-C core_limit

-c [hour:]minute[/host_name | /host_model]

-cwd "current_working_directory"

-D data_limit

-E "pre_exec_command [arguments ...]"

-Ep "post_exec_command [arguments ...]"

-e error_file

-eo error_file

-ext[sched] "external_scheduler_options"

-F file_limit

-f "local_file operator [remote_file]" ...

-G user_group

-g job_group_name

-i input_file | -is input_file

-J job_name | -J "job_name[index_list]%job_slot_limit"

-jsdl file_name | -jsdl_strict file_name

-k "checkpoint_dir [init=initial_checkpoint_period] [checkpoint_period] [method=method_name]"

-L login_shell

-Lp ls_project_name

-M mem_limit

-m "host_name[@cluster_name][[!] | +[pref_level]] | host_group[[!] | +[pref_level]] ..."

-mig migration_threshold

-n min_proc[,max_proc]

-o output_file

-oo output_file

-P project_name

-p process_limit

-Q "[exit_code ...] [EXCLUDE(exit_code ...)]"

-q "queue_name ..."

-R "res_req" [-R "res_req" ...]

-S stack_limit

-s signal

-sla service_class_name

-sp priority

-T thread_limit

-t [[month:]day:]hour:minute

-U reservation_ID

-u mail_user

-v swap_limit

-W [hour:]minute[/host_name | /host_model]

-We [hour:]minute[/host_name | /host_model]

-w 'dependency_expression'

-wa 'signal'

-wt '[hour:]minute'

-Zs

-h

-V

Description

Submits a job for batch execution and assigns it a unique numerical job ID.

Runs the job on a host that satisfies all requirements of the job, when all conditions on the job, host, queue, application profile, and cluster are satisfied. If LSF cannot run all jobs immediately, LSF scheduling policies determine the order of dispatch. Jobs are started and suspended according to the current system load.

Sets the user's execution environment for the job, including the current working directory, file creation mask, and all environment variables, and sets LSF environment variables before starting the job.

When a job is run, the command line and stdout/stderr buffers are stored in the directory home_directory/.lsbatch on the execution host. If this directory is not accessible, /tmp/.lsbtmp<user_ID> is used as the job's home directory. If the current working directory is under the home directory on the submission host, then the current working directory is also set to be the same relative directory under the home directory on the execution host.

By default, if the current working directory is not accessible on the execution host, the job runs in /tmp. If the environment variable LSB_EXIT_IF_CWD_NOTEXIST is set to Y and the current working directory is not accessible on the execution host, the job exits with the exit code 2.

If no command is supplied, bsub prompts for the command from the standard input. On UNIX, the input is terminated by entering CTRL-D on a new line. On Windows, the input is terminated by entering CTRL-Z on a new line.
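
For example, a session like the following (queue name, commands, and job ID are illustrative) submits several command lines entered at the bsub> prompt as a single job:

bsub -q simulation
bsub> cd /work/data
bsub> myjob arg1 arg2
bsub> (CTRL-D)
Job <1234> is submitted to queue <simulation>.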

To kill a batch job submitted with bsub, use bkill.

Use bmod to modify jobs submitted with bsub. bmod takes similar options to bsub.

Jobs submitted to a chunk job queue with certain options (for example, a CPU limit or run limit longer than 30 minutes) are not chunked; they are dispatched individually.

To submit jobs from UNIX to display GUIs through Microsoft Terminal Services on Windows, submit the job with bsub and define the environment variables LSF_LOGON_DESKTOP=1 and LSB_TSJOB=1 on the UNIX host. Use tssub to submit a Terminal Services job from Windows hosts. See Using Platform LSF on Windows for more details.

If the parameter LSB_STDOUT_DIRECT in lsf.conf is set to Y or y, and you use the -o or -oo option, the standard output of a job is written to the file you specify as the job runs. If LSB_STDOUT_DIRECT is not set, and you use -o or -oo, the standard output of a job is written to a temporary file and copied to the specified file after the job finishes. LSB_STDOUT_DIRECT is not supported on Windows.

Default Behavior

LSF assumes that uniform user names and user ID spaces exist among all the hosts in the cluster. That is, a job submitted by a given user runs under the same user's account on the execution host. For situations where nonuniform user names and user ID spaces exist, account mapping must be used to determine the account used to run a job.

bsub uses the command name as the job name. Quotation marks are significant.

Options related to file names and job spooling directories support paths that contain up to 4094 characters for UNIX and Linux, or up to 255 characters for Windows.

Options related to command names and job names can contain up to 4094 characters for UNIX and Linux, or up to 255 characters for Windows.

Options for the following resource usage limits are specified in KB: -C (core file size), -D (data segment size), -F (file size), -M (memory), -S (stack size), and -v (swap space).

Use LSF_UNIT_FOR_LIMITS in lsf.conf to specify a larger unit for the limit (MB, GB, TB, PB, or EB).

If fairshare is defined and you belong to multiple user groups, the job is scheduled under the user group that allows the quickest dispatch.

The job is not checkpointable.

bsub automatically selects an appropriate queue. If you defined a default queue list by setting LSB_DEFAULTQUEUE environment variable, the queue is selected from your list. If LSB_DEFAULTQUEUE is not defined, the queue is selected from the system default queue list specified by the LSF administrator with the DEFAULT_QUEUE parameter in lsb.params.

LSF tries to obtain resource requirement information for the job from the remote task list that is maintained by the load sharing library. If the job is not listed in the remote task list, the default resource requirement is to run the job on a host or hosts that are of the same host type as the submission host.

bsub assumes only one processor is requested.

bsub does not start a login shell but runs the job file under the execution environment from which the job was submitted.

The input file for the batch job is /dev/null (no input).

bsub sends mail to you when the job is done. The default destination is defined by LSB_MAILTO in lsf.conf. The mail message includes the job report, the job output (if any), and the error message (if any).

bsub charges the job to the default project. The default project is the project you define by setting the environment variable LSB_DEFAULTPROJECT. If you do not set LSB_DEFAULTPROJECT, the default project is the project specified by the LSF administrator with DEFAULT_PROJECT parameter in lsb.params. If DEFAULT_PROJECT is not defined, then LSF uses default as the default project name.

Options

-B

Sends mail to you when the job is dispatched and begins execution.

-H

Holds the job in the PSUSP state when the job is submitted. The job is not scheduled until you tell the system to resume the job (see bresume(1)).

-I | -Ip | -Is

Submits a batch interactive job. A new job cannot be submitted until the interactive job is completed or terminated.

Sends the job's standard output (or standard error) to the terminal. Does not send mail to you when the job is done unless you specify the -N option.

Terminal support is available for a batch interactive job.

When you specify the -Ip option, submits a batch interactive job and creates a pseudo-terminal when the job starts. Some applications (for example, vi) require a pseudo-terminal in order to run correctly.

When you specify the -Is option, submits a batch interactive job and creates a pseudo-terminal with shell mode support when the job starts. This option should be specified for submitting interactive shells, or applications which redefine the CTRL-C and CTRL-Z keys (for example, jove).

If the -i input_file option is specified, you cannot interact with the job's standard input via the terminal.

If the -o out_file option is specified, sends the job's standard output to the specified output file. If the -e err_file option is specified, sends the job's standard error to the specified error file.

You cannot use -I, -Ip, or -Is with the -K option.

Interactive jobs cannot be checkpointed.

Interactive jobs cannot be rerunnable (bsub -r).

The options that create a pseudo-terminal (-Ip and -Is) are not supported on Windows.
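
For example (command names are illustrative), the first command runs ls interactively and displays its output at the terminal; the second starts an interactive C shell with a pseudo-terminal and shell mode support:

bsub -I ls
bsub -Is csh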

-K

Submits a batch job and waits for the job to complete. Sends the message "Waiting for dispatch" to the terminal when you submit the job. Sends the message "Job is finished" to the terminal when the job is done.

You are not able to submit another job until the job is completed. This is useful when completion of the job is required before proceeding, such as in a job script. If the job needs to be rerun due to transient failures, bsub returns after the job finishes successfully. bsub exits with the same exit code as the job so that job scripts can take appropriate actions based on the exit codes. bsub exits with value 126 if the job was terminated while pending.

You cannot use the -K option with the -I, -Ip, or -Is options.
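
For example, a job script can use -K to run steps in sequence; each bsub returns only when its job completes and exits with the job's exit code, so the script can act on failures (step commands are illustrative):

#!/bin/sh
# each bsub -K blocks until its job finishes
bsub -K prepare_step || exit 1
bsub -K analysis_step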

-N

Sends the job report to you by mail when the job finishes. When used without any other options, behaves the same as the default.

Use only with -o, -oo, -I, -Ip, and -Is options, which do not send mail, to force LSF to send you a mail message when the job is done.

-r | -rn

If the execution host becomes unavailable while a job is running, specifies that the job be rerun on another host. LSF requeues the job in the same job queue with the same job ID. When an available execution host is found, reruns the job as if it were submitted new, even if the job has been checkpointed. You receive a mail message informing you of the host failure and requeuing of the job.

If the system goes down while a job is running, specifies that the job is requeued when the system restarts.

Reruns a job if the execution host or the system fails; it does not rerun a job if the job itself fails.

-rn specifies that the job is never rerunnable. bsub -rn disables job rerun if the job was submitted to a rerunnable queue or application profile with job rerun configured. The command level job rerunnable setting overrides the application profile and queue level setting. bsub -rn is different from bmod -rn, which cannot override the application profile and queue level rerunnable job setting.

Members of a chunk job can be rerunnable. If the execution host becomes unavailable, rerunnable chunk job members are removed from the queue and dispatched to a different execution host.

Interactive jobs (bsub -I) cannot be rerunnable.

-ul

Passes the current operating system user shell limits for the job submission user to the execution host. User limits cannot override queue hard limits. If user limits exceed queue hard limits, the job is rejected.

restriction:  
UNIX and Linux only. -ul is not supported on Windows.

The following bsub options for job-level resource usage limits override the value of the user shell limits: -C (core), -c (CPU time), -D (data), -F (file size), -M (memory), -p (processes), -S (stack), -T (threads), and -v (swap).

LSF collects the user limit settings from the user's running environment that are supported by the operating system, and applies those values to the corresponding submission options if the value is not unlimited. If the operating system has other kinds of shell limits, LSF does not collect them. LSF collects only the operating system user limits that correspond to these submission options.

-x

Puts the host running your job into exclusive execution mode.

In exclusive execution mode, your job runs by itself on a host. It is dispatched only to a host with no other jobs running, and LSF does not send any other jobs to the host until the job completes.

To submit a job in exclusive execution mode, the queue must be configured to allow exclusive jobs.

When the job is dispatched, bhosts(1) reports the host status as closed_Excl, and lsload(1) reports the host status as lockU.

Until your job is complete, the host is not selected by LIM in response to placement requests made by lsplace(1), lsrun(1) or lsgrun(1) or any other load sharing applications.

You can force other batch jobs to run on the host by using the -m host_name option of brun(1) to explicitly specify the locked host.

You can force LIM to run other interactive jobs on the host by using the -m host_name option of lsrun(1) or lsgrun(1) to explicitly specify the locked host.

-a esub_application

Specifies one or more application-specific esub executables that you want LSF to associate with the job.

The value of -a must correspond to the application name of an actual esub file. For example, to use bsub -a fluent, the file esub.fluent must exist in LSF_SERVERDIR.

For example, to submit a job that invokes two application-specific esub executables named esub.license and esub.fluent, enter:

bsub -a "license fluent" my_job 

mesub uses the method name license to invoke the esub named LSF_SERVERDIR/esub.license, and the method name fluent to invoke the esub named LSF_SERVERDIR/esub.fluent.

The name of an application-specific esub program is passed to the master esub. The master esub program (LSF_SERVERDIR/mesub) handles job submission requirements of the application. Application-specific esub programs can specify their own job submission requirements. The value of -a is set in the LSB_SUB_ADDITIONAL option in the LSB_SUB_PARM file used by esub.

If an LSF administrator specifies one or more mandatory esub executables using the parameter LSB_ESUB_METHOD, LSF invokes the mandatory executables first, followed by the executable named esub (without .esub_application in the file name) if it exists in LSF_SERVERDIR, and then any application-specific esub executables (with .esub_application in the file name) specified by -a.

The name of the esub program must be a valid file name. It can contain only alphanumeric characters, underscore (_) and hyphen (-).

restriction:  
After LSF version 5.1, the value of -a and LSB_ESUB_METHOD must correspond to an actual esub file in LSF_SERVERDIR. For example, to use bsub -a fluent, the file esub.fluent must exist in LSF_SERVERDIR.
-app application_profile_name

Submits the job to the specified application profile. You must specify an existing application profile. If the application profile does not exist in lsb.applications, the job is rejected.

-b [[month:]day:]hour:minute

Dispatches the job for execution on or after the specified date and time. The date and time are in the form of [[month:]day:]hour:minute where the number ranges are as follows: month 1-12, day 1-31, hour 0-23, minute 0-59.

At least two fields must be specified. These fields are assumed to be hour:minute. If three fields are given, they are assumed to be day:hour:minute, and four fields are assumed to be month:day:hour:minute.
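
For example (job name is illustrative), the first command dispatches the job on or after 8:00 p.m. and the second on or after 2:00 p.m. on December 25:

bsub -b 20:00 myjob
bsub -b 12:25:14:00 myjob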

-C core_limit

Sets a per-process (soft) core file size limit for all the processes that belong to this batch job (see getrlimit(2)).

By default, the limit is specified in KB. Use LSF_UNIT_FOR_LIMITS in lsf.conf to specify a larger unit for the limit (MB, GB, TB, PB, or EB).

The behavior of this option is platform-specific and depends on the UNIX or Linux system.

In some cases, the process is sent a SIGXFSZ signal if the job attempts to create a core file larger than the specified limit. The SIGXFSZ signal normally terminates the process.

In other cases, the writing of the core file terminates at the specified limit.

-c [hour:]minute[/host_name | /host_model]

Limits the total CPU time the job can use. This option is useful for preventing runaway jobs or jobs that use up too many resources. When the total CPU time for the whole job has reached the limit, a SIGXCPU signal is first sent to the job, then SIGINT, SIGTERM, and SIGKILL.

If LSB_JOB_CPULIMIT in lsf.conf is set to n, LSF-enforced CPU limit is disabled and LSF passes the limit to the operating system. When one process in the job exceeds the CPU limit, the limit is enforced by the operating system.

The CPU limit is in the form of [hour:]minute. The minutes can be specified as a number greater than 59. For example, three and a half hours can either be specified as 3:30, or 210.

The CPU time you specify is the normalized CPU time. This is done so that the job does approximately the same amount of processing for a given CPU limit, even if it is sent to host with a faster or slower CPU. Whenever a normalized CPU time is given, the actual time on the execution host is the specified time multiplied by the CPU factor of the normalization host then divided by the CPU factor of the execution host.

Optionally, you can supply a host name or a host model name defined in LSF. You must insert a slash (/) between the CPU limit and the host name or model name. (See lsinfo(1) to get host model information.) If a host name or model name is not given, LSF uses the default CPU time normalization host defined at the queue level (DEFAULT_HOST_SPEC in lsb.queues) if it has been configured, otherwise uses the default CPU time normalization host defined at the cluster level (DEFAULT_HOST_SPEC in lsb.params) if it has been configured, otherwise uses the submission host.

Jobs submitted to a chunk job queue are not chunked if the CPU limit is greater than 30 minutes.
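
For example (host and job names are illustrative), both of the following commands set a CPU limit of three and a half hours; the second normalizes the limit using the CPU factor of hostA:

bsub -c 3:30 myjob
bsub -c 210/hostA myjob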

-cwd "current_working_directory"

Specifies the current working directory for the job.

By default, if the current working directory is not accessible on the execution host, the job runs in /tmp. If the environment variable LSB_EXIT_IF_CWD_NOTEXIST is set to Y and the current working directory is not accessible on the execution host, the job exits with the exit code 2.
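
For example (directory and job names are illustrative):

bsub -cwd "/scratch/user1/project" myjob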

-D data_limit

Sets a per-process (soft) data segment size limit for each of the processes that belong to the batch job (see getrlimit(2)). The limit is specified in KB.

This option affects calls to sbrk() and brk(). An sbrk() or malloc() call to extend the data segment beyond the data limit returns an error.

note:  
Linux does not use sbrk() and brk() within its calloc() and malloc(). Instead, it uses mmap() to create memory. DATALIMIT therefore cannot be enforced on Linux applications that allocate memory through calloc() and malloc().
-E "pre_exec_command [arguments ...]"

Runs the specified pre-execution command on the execution host before actually running the job. For a parallel job, the pre-execution command runs on the first host selected for the parallel job. If you want the pre-execution command to run on a specific first execution host, specify one or more first execution host candidates at the job level using -m, at the queue level with PRE_EXEC in lsb.queues, or at the application level with PRE_EXEC in lsb.applications.

If the pre-execution command returns a zero (0) exit code, LSF runs the job on the selected host. Otherwise, the job and its associated pre-execution command go back to PEND status and are rescheduled. LSF keeps trying to run pre-execution commands and pending jobs. After the pre-execution command runs successfully, LSF runs the job. You must ensure that the pre-execution command can run multiple times without causing side effects, such as reserving the same resource more than once.

The standard input and output for the pre-execution command are directed to the same files as the job. The pre-execution command runs under the same user ID, environment, home, and working directory as the job. If the pre-execution command is not in the user's usual execution path (the $PATH variable), the full path name of the command must be specified.

-Ep "post_exec_command [arguments ...]"

Runs the specified post-execution command on the execution host after the job finishes.

If both application-level (POST_EXEC in lsb.applications) and job-level post-execution commands are specified, job level post-execution overrides application-level post-execution commands. Queue-level post-execution commands (POST_EXEC in lsb.queues) run after application-level post-execution and job-level post-execution commands.

-e error_file

Specify a file path. Appends the standard error output of the job to the specified file.

If the parameter LSB_STDOUT_DIRECT in lsf.conf is set to Y or y, the standard error output of a job is written to the file you specify as the job runs. If LSB_STDOUT_DIRECT is not set, it is written to a temporary file and copied to the specified file after the job finishes. LSB_STDOUT_DIRECT is not supported on Windows.

If you use the special character %J in the name of the error file, then %J is replaced by the job ID of the job. If you use the special character %I in the name of the error file, then %I is replaced by the index of the job in the array if the job is a member of an array. Otherwise, %I is replaced by 0 (zero).

If the current working directory is not accessible on the execution host after the job starts, LSF writes the standard error output file to /tmp/.

note:  
The file path can contain up to 4094 characters for UNIX and Linux, or up to 255 characters for Windows, including the directory, file name, and expanded values for %J (job_ID) and %I (index_ID).
-eo error_file

Specify a file path. Overwrites the standard error output of the job to the specified file.

If the parameter LSB_STDOUT_DIRECT in lsf.conf is set to Y or y, the standard error output of a job is written to the file you specify as the job runs, which occurs every time the job is submitted with the overwrite option, even if it is requeued manually or by the system. If LSB_STDOUT_DIRECT is not set, it is written to a temporary file and copied to the specified file after the job finishes. LSB_STDOUT_DIRECT is not supported on Windows.

If you use the special character %J in the name of the error file, then %J is replaced by the job ID of the job. If you use the special character %I in the name of the error file, then %I is replaced by the index of the job in the array if the job is a member of an array. Otherwise, %I is replaced by 0 (zero).

If the current working directory is not accessible on the execution host after the job starts, LSF writes the standard error output file to /tmp/.

note:  
The file path can contain up to 4094 characters for UNIX and Linux, or up to 255 characters for Windows, including the directory, file name, and expanded values for %J (job_ID) and %I (index_ID).
-ext[sched] "external_scheduler_options"

Application-specific external scheduling options for the job.

To enable jobs to accept external scheduler options, set LSF_ENABLE_EXTSCHEDULER=y in lsf.conf.

You can abbreviate the -extsched option to -ext.

You can specify only one type of external scheduler option in a single -extsched string.

For example, SGI IRIX hosts and AlphaServer SC hosts running RMS can exist in the same cluster, but they accept different external scheduler options. Use external scheduler options to define job requirements for either IRIX cpusets OR RMS, but not both. Your job runs either on IRIX or RMS. If external scheduler options are not defined, the job may run on IRIX but it does not run on an RMS host.

The options set by -extsched can be combined with the queue-level MANDATORY_EXTSCHED or DEFAULT_EXTSCHED parameters. If -extsched and MANDATORY_EXTSCHED set the same option, the MANDATORY_EXTSCHED setting is used. If -extsched and DEFAULT_EXTSCHED set the same options, the -extsched setting is used.

Use DEFAULT_EXTSCHED in lsb.queues to set default external scheduler options for a queue.

To make certain external scheduler options mandatory for all jobs submitted to a queue, specify MANDATORY_EXTSCHED in lsb.queues with the external scheduler options you need for your jobs.

See Using Platform LSF HPC for information about specific external scheduler options.

-F file_limit

Sets a per-process (soft) file size limit for each of the processes that belong to the batch job (see getrlimit(2)). The limit is specified in KB.

If a job process attempts to write to a file that exceeds the file size limit, then that process is sent a SIGXFSZ signal. The SIGXFSZ signal normally terminates the process.

-f "local_file operator [remote_file]" ...

Copies a file between the local (submission) host and the remote (execution) host. Specify absolute or relative paths, including the file names. You should specify the remote file as a file name with no path when running in non-shared systems.

If the remote file is not specified, it defaults to the local file, which must be given. Use multiple -f options to specify multiple files.

operator

An operator that specifies whether the file is copied to the remote host, or whether it is copied back from the remote host. The operator must be surrounded by white space.

The following describes the operators:

> Copies the local file to the remote file before the job starts. Overwrites the remote file if it exists.

< Copies the remote file to the local file after the job completes. Overwrites the local file if it exists.

<< Appends the remote file to the local file after the job completes. The local file must exist.

>< Copies the local file to the remote file before the job starts. Overwrites the remote file if it exists. Then copies the remote file to the local file after the job completes. Overwrites the local file.

<> Copies the local file to the remote file before the job starts. Overwrites the remote file if it exists. Then copies the remote file to the local file after the job completes. Overwrites the local file.

If you use the -i input_file option, then you do not have to use the -f option to copy the specified input file to the execution host. LSF does this for you, and removes the input file from the execution host after the job completes.

If you use the -o out_file, -e err_file, -oo out_file, or the -eo err_file option, and you want the specified file to be copied back to the submission host when the job completes, then you must use the -f option.

If the submission and execution hosts have different directory structures, you must make sure that the directory where the remote file and local file are placed exists.

If the local and remote hosts have different file name spaces, you must always specify relative path names. If the local and remote hosts do not share the same file system, you must make sure that the directory containing the remote file exists. It is recommended that only the file name be given for the remote file when running in heterogeneous file systems. This places the file in the job's current working directory. If the file is shared between the submission and execution hosts, then no file copy is performed.

LSF uses lsrcp to transfer files (see lsrcp(1) command). lsrcp contacts RES on the remote host to perform the file transfer. If RES is not available, rcp is used (see rcp(1)). The user must make sure that the rcp binary is in the user's $PATH on the execution host.

Jobs that are submitted from LSF client hosts should specify the -f option only if rcp is allowed. Similarly, rcp must be allowed if account mapping is used.
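
For example (file and job names are illustrative), the following copies data.in to the execution host before the job starts and copies data.out back to the submission host after the job completes:

bsub -f "data.in > data.in" -f "data.out < data.out" myjob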

-G user_group

Only useful with fairshare scheduling.

Associates the job with the specified group. Specify any group that you belong to that does not contain any subgroups. You must be a direct member of the specified user group.

-g job_group_name

Submits jobs in the job group specified by job_group_name. The job group does not have to exist before submitting the job. For example:

bsub -g /risk_group/portfolio1/current myjob
Job <105> is submitted to default queue. 

Submits myjob to the job group /risk_group/portfolio1/current.

If group /risk_group/portfolio1/current exists, job 105 is attached to the job group.

Job group names can be up to 512 characters long.

If group /risk_group/portfolio1/current does not exist, LSF checks its parent recursively, and if no groups in the hierarchy exist, all three job groups are created with the specified hierarchy and the job is attached to the group /risk_group/portfolio1/current.

You can use -g with -sla. All jobs in a job group attached to a service class are scheduled as SLA jobs. It is not possible to have some jobs in a job group not part of the service class. Multiple job groups can be created under the same SLA. You can submit additional jobs to the job group without specifying the service class name again.

For example, the following attaches the job to the service class named opera, and the group /risk_group/portfolio1/current:

bsub -sla opera -g /risk_group/portfolio1/current myjob 

To submit another job to the same job group, you can omit the SLA name:

bsub -g /risk_group/portfolio1/current myjob2 
-i input_file | -is input_file

Gets the standard input for the job from specified file. Specify an absolute or relative path. The input file can be any type of file, though it is typically a shell script text file.

Unless you use -is, you can use the special characters %J and %I in the name of the input file. %J is replaced by the job ID. %I is replaced by the index of the job in the array, if the job is a member of an array, otherwise by 0 (zero). The special characters %J and %I are not valid with the -is option.

note:  
The file path can contain up to 4094 characters for UNIX and Linux, or up to 255 characters for Windows, including the directory, file name, and expanded values for %J (job_ID) and %I (index_ID).
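
For example (file and job names are illustrative), each element of the following job array reads its own input file, input.1 through input.10:

bsub -J "sim[1-10]" -i "input.%I" myjob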

If the file exists on the execution host, LSF uses it. Otherwise, LSF attempts to copy the file from the submission host to the execution host. For the file copy to be successful, you must allow remote copy (rcp) access, or you must submit the job from a server host where RES is running. The file is copied from the submission host to a temporary file in the directory specified by the JOB_SPOOL_DIR parameter in lsb.params, or your $HOME/.lsbatch directory on the execution host. LSF removes this file when the job completes.

By default, the input file is spooled to LSB_SHAREDIR/cluster_name/lsf_indir. If the lsf_indir directory does not exist, LSF creates it before spooling the file. LSF removes the spooled file when the job completes. Use the -is option if you need to modify or remove the input file before the job completes. Removing or modifying the original input file does not affect the submitted job.

If JOB_SPOOL_DIR is specified, the -is option spools the input file to the specified directory and uses the spooled file as the input file for the job.

JOB_SPOOL_DIR can be any valid path up to a maximum length of 4094 characters on UNIX and Linux, or up to 255 characters for Windows.

JOB_SPOOL_DIR must be readable and writable by the job submission user, and it must be shared by the master host and the submission host. If the specified directory is not accessible or does not exist, bsub -is cannot write to the default directory LSB_SHAREDIR/cluster_name/lsf_indir and the job fails.

-J job_name | -J "job_name[index_list]%job_slot_limit"

Assigns the specified name to the job, and, for job arrays, specifies the indices of the job array and optionally the maximum number of jobs that can run at any given time.

The job name does not need to be unique.

Job names can contain up to 4094 characters for UNIX and Linux, or up to 255 characters for Windows.

To specify a job array, enclose the index list in square brackets, as shown, and enclose the entire job array specification in quotation marks, as shown. The index list is a comma-separated list whose elements have the syntax start[-end[:step]] where start, end and step are positive integers. If the step is omitted, a step of one is assumed. The job array index starts at one.

By default, the maximum number of jobs in a job array is 1000, which means the maximum size of a job array (that is, the maximum job array index) can never exceed 1000 jobs.

To change the maximum job array value, set MAX_JOB_ARRAY_SIZE in lsb.params to any positive integer between 1 and 2147483646. The maximum number of jobs in a job array cannot exceed the value set by MAX_JOB_ARRAY_SIZE.

You may also use a positive integer to specify the system-wide job slot limit (the maximum number of jobs that can run at any given time) for this job array.

All jobs in the array share the same job ID and parameters. Each element of the array is distinguished by its array index.

After a job is submitted, you use the job name to identify the job. Specify "job_ID[index]" to work with elements of a particular array. Specify "job_name[index]" to work with elements of all arrays with the same name. Since job names are not unique, multiple job arrays may have the same name with a different or same set of indices.
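
For example (array and job names are illustrative), the following submits a job array with indices 1, 3, 5, ..., 99 and allows at most 10 elements to run at the same time:

bsub -J "myArray[1-100:2]%10" myjob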

-jsdl file_name | -jsdl_strict file_name

Submits a job using a JSDL file to specify job submission options.

LSF provides an extension to the JSDL specification so that you can submit jobs using LSF features not defined in the JSDL standard schema. The JSDL schema (jsdl.xsd), the POSIX extension (jsdl-posix.xsd), and the LSF extension (jsdl-lsf.xsd) are located in the LSF_LIBDIR directory.

note:  
For a detailed mapping of JSDL elements to LSF submission options, and for a complete list of supported and unsupported elements, see the chapter "Submitting Jobs Using JSDL" in Administering Platform LSF.

If you specify duplicate or conflicting job submission parameters, LSF resolves the conflict by applying the following rules:

  1. The parameters specified in the command line override all other parameters.
  2. A job script or user input for an interactive job overrides parameters specified in the JSDL file.
-k "checkpoint_dir [init=initial_checkpoint_period] [checkpoint_period] [method=method_name]"

Makes a job checkpointable and specifies the checkpoint directory. Specify a relative or absolute path name. The quotes (") are required if you specify a checkpoint period, initial checkpoint period, or custom checkpoint and restart method name.

When a job is checkpointed, the checkpoint information is stored in checkpoint_dir/job_ID/file_name. Multiple jobs can checkpoint into the same directory. The system can create multiple files.

The checkpoint directory is used for restarting the job (see brestart(1)). The checkpoint directory can be any valid path.

Optionally, specifies a checkpoint period in minutes. Specify a positive integer. The running job is checkpointed automatically every checkpoint period. The checkpoint period can be changed using bchkpnt(1). Because checkpointing is a heavyweight operation, you should choose a checkpoint period greater than half an hour.

Optionally, specifies an initial checkpoint period in minutes. Specify a positive integer. The first checkpoint does not happen until the initial period has elapsed. After the first checkpoint, the job checkpoint frequency is controlled by the normal job checkpoint interval.

Optionally, specifies a custom checkpoint and restart method to use with the job. Use method=default to indicate to use the default LSF checkpoint and restart programs for the job, echkpnt.default and erestart.default.

The echkpnt.method_name and erestart.method_name programs must be in LSF_SERVERDIR or in the directory specified by LSB_ECHKPNT_METHOD_DIR (environment variable or set in lsf.conf).

If a custom checkpoint and restart method is already specified with LSB_ECHKPNT_METHOD (environment variable or in lsf.conf), the method you specify with bsub -k overrides this.

Process checkpointing is not available on all host types, and may require linking programs with special libraries (see libckpt.a(3)). LSF invokes echkpnt (see echkpnt(8)) found in LSF_SERVERDIR to checkpoint the job. You can override the default echkpnt for the job by defining LSB_ECHKPNT_METHOD and LSB_ECHKPNT_METHOD_DIR (as environment variables or in lsf.conf) to point to your own echkpnt. This allows you to use other checkpointing facilities, including application-level checkpointing.

The checkpoint method directory should be accessible by all users who need to run the custom echkpnt and erestart programs.

Only running members of a chunk job can be checkpointed.
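
For example (directory, periods, and job name are illustrative), the following makes the job checkpointable, takes the first checkpoint after 60 minutes, checkpoints every 120 minutes afterward, and uses the default LSF checkpoint and restart programs:

bsub -k "my_ckpt_dir init=60 120 method=default" myjob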

-L login_shell

Initializes the execution environment using the specified login shell. The specified login shell must be an absolute path. This is not necessarily the shell under which the job is executed.

Login shell is not supported on Windows.

-Lp ls_project_name

Assigns the job to the specified License Scheduler project.

-M mem_limit

Sets a per-process (soft) memory limit for all the processes that belong to this batch job (see getrlimit(2)).

By default, the limit is specified in KB. Use LSF_UNIT_FOR_LIMITS in lsf.conf to specify a larger unit for the limit (MB, GB, TB, PB, or EB).

If LSB_MEMLIMIT_ENFORCE or LSB_JOB_MEMLIMIT are set to y in lsf.conf, LSF kills the job when it exceeds the memory limit. Otherwise, LSF passes the memory limit to the operating system. UNIX operating systems that support RUSAGE_RSS for setrlimit() can apply the memory limit to each process.

The following operating systems do not support the memory limit at the OS level:

- Windows

- Sun Solaris 2.x

-m "host_name[@cluster_name][[!] | +[pref_level]] | host_group[[!] |+[pref_level]] ..."

Runs the job on one of the specified hosts.

By default, if multiple hosts are candidates, runs the job on the least-loaded host.

To change the order of preference, put a plus (+) after the names of hosts or host groups that you would prefer to use, optionally followed by a preference level. For preference level, specify a positive integer, with higher numbers indicating greater preferences for those hosts. For example, -m "hostA groupB+2 hostC+1" indicates that groupB is the most preferred and hostA is the least preferred.

The keyword others can be specified with or without a preference level to refer to other hosts not otherwise listed. The keyword others must be specified with at least one host name or host group; it cannot be specified by itself. For example, -m "hostA+ others" means that hostA is preferred over all other hosts.

If you also use -q, the specified queue must be configured to include all the hosts in your host list. Otherwise, the job is not submitted. To find out what hosts are configured for the queue, use bqueues -l.

If the host group contains the keyword all, LSF dispatches the job to any available host, even if the host is not defined for the specified queue.

To display configured host groups, use bmgroup.

For the MultiCluster job forwarding model, you cannot specify a remote host by name.

For parallel jobs, specify first execution host candidates when you want to ensure that a host has the required resources or runtime environment to handle processes that run on the first execution host.

To specify one or more hosts or host groups as first execution host candidates, add the (!) symbol after the host name, as shown in the following example:

bsub -n 2 -m "host1 host2! hostgroupA! host3 host4" my_parallel_job 

LSF runs my_parallel_job according to the following steps:

  1. LSF selects either host2 or a host defined in hostgroupA as the first execution host for the parallel job.

note:  
First execution host candidates specified at the job level (command line) override candidates defined at the queue level (in lsb.queues).

  2. If any of the first execution host candidates have enough processors to run the job, the entire job runs on the first execution host, and not on any other hosts. In the example, if host2 or a member of hostgroupA has two or more processors, the entire job runs on the first execution host.

  3. If the first execution host does not have enough processors to run the entire job, LSF selects additional hosts that are not defined as first execution host candidates.

Follow these guidelines when you specify first execution host candidates:
In a MultiCluster environment, insert the (!) symbol after the cluster name, as shown in the following example:

bsub -n 2 -m "host2@cluster2! host3@cluster2" my_parallel_job 
-mig migration_threshold

Enables automatic job migration for checkpointable or rerunnable jobs and specifies the migration threshold, in minutes. A value of 0 (zero) specifies that a suspended job should be migrated immediately.

Command-level job migration threshold overrides application profile and queue-level settings.

Where a host migration threshold is also specified, and is lower than the job value, the host value is used.

-n min_proc[,max_proc]

Submits a parallel job and specifies the number of processors required to run the job (some of the processors may be on the same multiprocessor host).

You can specify a minimum and maximum number of processors to use. The job can start if at least the minimum number of processors is available. If you do not specify a maximum, the number you specify represents the exact number of processors to use.

If PARALLEL_SCHED_BY_SLOT=Y in lsb.params, this option specifies the number of slots required to run the job, not the number of processors.

Jobs that request fewer slots than the minimum PROCLIMIT defined for the queue or application profile to which the job is submitted, or more slots than the maximum PROCLIMIT are rejected. If the job requests minimum and maximum job slots, the maximum slots requested cannot be less than the minimum PROCLIMIT, and the minimum slots requested cannot be more than the maximum PROCLIMIT.

For example, if the queue defines PROCLIMIT=4 8, a job that requests fewer than 4 slots (for example, -n 3) or more than 8 slots (for example, -n 9) is rejected; a request of -n 4,8, or any range within those limits, is accepted.

See the PROCLIMIT parameter in lsb.queues(5) and lsb.applications(5) for more information.

In a MultiCluster environment, if a queue exports jobs to remote clusters (see the SNDJOBS_TO parameter in lsb.queues(5)), then the process limit is not imposed on jobs submitted to this queue.

Once the required number of processors is available, the job is dispatched to the first host selected. The list of selected host names for the job is specified in the environment variables LSB_HOSTS and LSB_MCPU_HOSTS. The job itself is expected to start parallel components on these hosts and establish communication among them, optionally using RES.

Specify first execution host candidates using the -m option when you want to ensure that a host has the required resources or runtime environment to handle processes that run on the first execution host.

If you specify one or more first execution host candidates, LSF looks for a first execution host that satisfies the resource requirements. If the first execution host does not have enough processors or job slots to run the entire job, LSF looks for additional hosts.

-o output_file

Specify a file path. Appends the standard output of the job to the specified file. Sends the output by mail if the file does not exist, or the system has trouble writing to it.

If only a file name is specified, LSF writes the output file to the current working directory. If the current working directory is not accessible on the execution host after the job starts, LSF writes the standard output file to /tmp/.

If you use the special character %J in the name of the output file, then %J is replaced by the job ID of the job. If you use the special character %I in the name of the output file, then %I is replaced by the index of the job in the array, if the job is a member of an array. Otherwise, %I is replaced by 0 (zero).

note:  
The file path can contain up to 4094 characters for UNIX and Linux, or up to 255 characters for Windows, including the directory, file name, and expanded values for %J (job_ID) and %I (index_ID).

If the parameter LSB_STDOUT_DIRECT in lsf.conf is set to Y or y, the standard output of a job is written to the file you specify as the job runs. If LSB_STDOUT_DIRECT is not set, it is written to a temporary file and copied to the specified file after the job finishes. LSB_STDOUT_DIRECT is not supported on Windows.

If you use -o without -e or -eo, the standard error of the job is stored in the output file.

If you use -o without -N, the job report is stored in the output file as the file header.

If you use both -o and -N, the output is stored in the output file and the job report is sent by mail. The job report itself does not contain the output, but the report advises you where to find your output.
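
For example (file and job names are illustrative), each element of the following job array appends its standard output to a file named from the job ID and the array index:

bsub -J "render[1-20]" -o "output.%J.%I" myjob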

-oo output_file

Specify a file path. Overwrites the standard output of the job to the specified file if it exists, or sends the output to a new file if it does not exist. Sends the output by mail if the system has trouble writing to the file.

If only a file name is specified, LSF writes the output file to the current working directory. If the current working directory is not accessible on the execution host after the job starts, LSF writes the standard output file to /tmp/.

If you use the special character %J in the name of the output file, then %J is replaced by the job ID of the job. If you use the special character %I in the name of the output file, then %I is replaced by the index of the job in the array, if the job is a member of an array. Otherwise, %I is replaced by 0 (zero).

note:  
The file path can contain up to 4094 characters for UNIX and Linux, or up to 255 characters for Windows, including the directory, file name, and expanded values for %J (job_ID) and %I (index_ID).

If the parameter LSB_STDOUT_DIRECT in lsf.conf is set to Y or y, the standard output of a job overwrites the output file you specify as the job runs, which occurs every time the job is submitted with the overwrite option, even if it is requeued manually or by the system. If LSB_STDOUT_DIRECT is not set, the output is written to a temporary file that overwrites the specified file after the job finishes. LSB_STDOUT_DIRECT is not supported on Windows.

If you use -oo without -e or -eo, the standard error of the job is stored in the output file.

If you use -oo without -N, the job report is stored in the output file as the file header.

If you use both -oo and -N, the output is stored in the output file and the job report is sent by mail. The job report itself does not contain the output, but the report advises you where to find your output.

-P project_name

Assigns the job to the specified project.

On IRIX 6, you must be a member of the project as listed in /etc/project(4). If you are a member of the project, then /etc/projid(4) maps the project name to a numeric project ID. Before the submitted job executes, a new array session (newarraysess(2)) is created and the project ID is assigned to it using setprid(2).

-p process_limit

Sets the limit of the number of processes to process_limit for the whole job. The default is no limit. Exceeding the limit causes the job to terminate.

-Q "[exit_code ...] [EXCLUDE(exit_code ...)]"

Specify automatic job requeue exit values. Use spaces to separate multiple exit codes. The reserved keyword all specifies all exit codes. Exit codes are typically between 0 and 255. Use a tilde (~) to exclude specified numbers from the list.

exit_code has the following form:

"[all] [~number ...] | [number ...]" 

Job level exit values override application-level and queue-level values.

Jobs running with the specified exit code share the same application and queue with other jobs.

Define an exit code as EXCLUDE(exit_code) to enable exclusive job requeue. Exclusive job requeue does not work for parallel jobs.

If mbatchd is restarted, it does not remember the previous hosts from which the job exited with an exclusive requeue exit code. In this situation, it is possible for a job to be dispatched to hosts on which the job has previously exited with an exclusive exit code.
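
For example (exit codes and job name are illustrative), the first command requeues the job if it exits with 5 or 10 and enables exclusive requeue for exit code 9; the second requeues the job on any exit code except 30:

bsub -Q "5 10 EXCLUDE(9)" myjob
bsub -Q "all ~30" myjob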

-q "queue_name ..."

Submits the job to one of the specified queues. Quotes are optional for a single queue. The specified queues must be defined for the local cluster. For a list of available queues in your local cluster, use bqueues.

When a list of queue names is specified, LSF selects the most appropriate queue in the list for your job based on the job's resource limits, and other restrictions, such as the requested hosts, your accessibility to a queue, queue status (closed or open), etc. The order in which the queues are considered is the same order in which these queues are listed. The queue listed first is considered first.

-R "res_req" [-R "res_req" ...]

Runs the job on a host that meets the specified resource requirements. A resource requirement string describes the resources a job needs. LSF uses resource requirements to select hosts for job execution.

The size of the resource requirement string cannot exceed 512 characters. If you need to include a hyphen (-) or other non-alphabet characters within the string, enclose the text in single quotation marks, for example, bsub -R "select[hname!='host06-x12']".

A resource requirement string is divided into the following sections. Each section has a different syntax.

The resource requirement string sections have the following syntax:

select[selection_string] order[order_string] rusage[usage_string 
[, usage_string][|| usage_string] ...] span[span_string] 
same[same_string] 

The square brackets must be typed as shown for each section.

If the select keyword and square brackets are omitted from the selection string, then the entire string is treated as a selection string (select[selection_string]). A selection string that omits the select keyword must be the first string in the resource requirement string.

When LSF_STRICT_RESREQ=Y in lsf.conf, LSF rejects resource requirement strings where an rusage section contains a non-consumable resource.

Any resource for run queue length, such as r15s, r1m or r15m, specified in the resource requirements refers to the normalized run queue length.

By default, memory (mem) and swap (swp) limits in select[] and rusage[] sections are specified in MB. Use LSF_UNIT_FOR_LIMITS in lsf.conf to specify a larger unit for these limits (MB, GB, TB, PB, or EB).

For example, to submit a job that runs on Solaris 7 or Solaris 8:

bsub -R "sol7 || sol8" myjob 

The following command runs the job called myjob on an HP-UX host that is lightly loaded (CPU utilization) and has at least 15 MB of swap memory available.

bsub -R "swp > 15 && hpux order[ut]" myjob 

bsub also accepts multiple -R options for the order, same, rusage, and select sections. You can specify multiple strings instead of using the && operator:

bsub -R "select[swp > 15]" -R "select[hpux] order[r15m]" -R 
rusage[mem=100]" -R "order[ut]" -R "same[type]" -R 
rusage[tmp=50:duration=60]" -R "same[model]" myjob 

LSF merges the multiple -R options into one string and selects a host that meets all of the resource requirements. The number of -R option sections is unlimited, up to a maximum of 512 characters for the entire string.

remember:  
Use multiple -R options only with the order, same, rusage, and select sections of the resource requirements string and with the bsub and bmod commands.

You defined a resource called bigmem in lsf.shared and defined it as an exclusive resource for hostE in lsf.cluster.mycluster. Use the following command to submit a job that runs on hostE:

bsub -R "bigmem" myjob 

or

bsub -R "defined(bigmem)" myjob 

You configured a static shared resource for licenses for the Verilog application as a resource called verilog_lic. To submit a job that runs on a host when there is a license available:

bsub -R "select[defined(verilog_lic)] rusage[verilog_lic=1]" myjob 

The following job requests 20 MB memory for the duration of the job, and 1 license for 2 minutes:

bsub -R "rusage[mem=20, license=1:duration=2]" myjob 

The following job requests 20 MB of memory and 50 MB of swap space for 1 hour, and 1 license for 2 minutes:

bsub -R "rusage[mem=20:swp=50:duration=1h, license=1:duration=2]" myjob 

The following job requests 20 MB of memory for the duration of the job, 50 MB of swap space for 1 hour, and 1 license for 2 minutes.

bsub -R "rusage[mem=20,swp=50:duration=1h, license=1:duration=2]" myjob 

The following job requests 50 MB of swap space, linearly decreasing the amount reserved over a duration of 2 hours, and requests 1 license for 2 minutes:

bsub -R "rusage[swp=50:duration=2h:decay=1, license=1:duration=2]" myjob 

The following job requests two resources with same duration but different decay:

bsub -R "rusage[mem=20:duration=30:decay=1, lic=1:duration=30]" myjob 

You are running an application version 1.5 as a resource called app_lic_v15 and the same application version 2.0.1 as a resource called app_lic_v201. The license key for version 2.0.1 is backward compatible with version 1.5, but the license key for version 1.5 does not work with 2.0.1.
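
For example, a job that can run with either license key could request one resource or the other in the rusage section (illustrative):

bsub -R "rusage[app_lic_v201=1 || app_lic_v15=1]" myjob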

Job-level resource requirement specifications that use the || operator take precedence over any queue-level resource requirement specifications.

-S stack_limit

Sets a per-process (soft) stack segment size limit for each of the processes that belong to the batch job (see getrlimit(2)).

By default, the limit is specified in KB. Use LSF_UNIT_FOR_LIMITS in lsf.conf to specify a larger unit for the limit (MB, GB, TB, PB, or EB).

-s signal

Sends the specified signal when a queue-level run window closes.

By default, when the window closes, LSF suspends jobs running in the queue (job state becomes SSUSP) and stops dispatching jobs from the queue.

Use -s to specify a signal number; when the run window closes, the job is signalled by this signal instead of being suspended.

-sla service_class_name

Specifies the service class where the job is to run.

If the SLA does not exist or the user is not a member of the service class, the job is rejected.

If EGO-enabled SLA scheduling is configured with ENABLE_DEFAULT_EGO_SLA in lsb.params, jobs submitted without -sla are attached to the configured default SLA.

You can use -g with -sla. All jobs in a job group attached to a service class are scheduled as SLA jobs. It is not possible to have some jobs in a job group not part of the service class. Multiple job groups can be created under the same SLA. You can submit additional jobs to the job group without specifying the service class name again.

tip:  
You should submit your jobs with a runtime limit (-W option) or you should specify a run time limit in a queue or application profile (RUNLIMIT in the queue definition in lsb.queues or RUNLIMIT in the application profile definition in lsb.applications). If you do not specify a run time limit, LSF automatically adjusts the optimum number of running jobs according to the observed run time of finished jobs.

Use bsla to display the properties of service classes configured in LSB_CONFDIR/cluster_name/configdir/lsb.serviceclasses (see lsb.serviceclasses(5)) and dynamic information about the state of each service class.

-sp priority

Specifies a user-assigned job priority that allows users to order their jobs in a queue. Valid values for priority are any integers between 1 and MAX_USER_PRIORITY (configured in lsb.params, displayed by bparams -l). Job priorities that are not valid are rejected. LSF and queue administrators can specify priorities beyond MAX_USER_PRIORITY.

The job owner can change the priority of their own jobs. LSF and queue administrators can change the priority of all jobs in a queue.

Job order is the first consideration to determine job eligibility for dispatch. Jobs are still subject to all scheduling policies regardless of job priority. Jobs with the same priority are ordered first come first served.

User-assigned job priority can be configured with automatic job priority escalation to automatically increase the priority of jobs that have been pending for a specified period of time (JOB_PRIORITY_OVER_TIME in lsb.params).

When absolute priority scheduling is configured in the submission queue (APS_PRIORITY in lsb.queues), the user-assigned job priority is used for the JPRIORITY factor in the APS calculation.

-T thread_limit

Sets the limit of the number of concurrent threads to thread_limit for the whole job. The default is no limit.

Exceeding the limit causes the job to terminate. The system sends the following signals in sequence to all processes that belong to the job: SIGINT, SIGTERM, and SIGKILL.

-t [[month:]day:]hour:minute

Specifies the job termination deadline.

If a UNIX or Linux job is still running at the termination time, the job is sent a SIGUSR2 signal, and is killed if it does not terminate within ten minutes.

If a Windows job is still running at the termination time, it is killed immediately. (For a detailed description of how these jobs are killed, see bkill.)

In the queue definition, a TERMINATE action can be configured to override the bkill default action (see the JOB_CONTROLS parameter in lsb.queues(5)).

In an application profile definition, a TERMINATE_CONTROL action can be configured to override the bkill default action (see the TERMINATE_CONTROL parameter in lsb.applications(5)).

The format for the termination time is [[month:]day:]hour:minute where the number ranges are as follows: month 1-12, day 1-31, hour 0-23, minute 0-59.

At least two fields must be specified. These fields are assumed to be hour:minute. If three fields are given, they are assumed to be day:hour:minute, and four fields are assumed to be month:day:hour:minute.
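
For example (job name is illustrative), the first command sets the termination deadline to 2:30 p.m. and the second to 2:30 p.m. on December 31:

bsub -t 14:30 myjob
bsub -t 12:31:14:30 myjob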

-U reservation_ID

If an advance reservation has been created with the brsvadd command, the -U option makes use of the reservation.

For example, if the following command was used to create the reservation user1#0,

brsvadd -n 1024 -m hostA -u user1 -b 13:0 -e 18:0
Reservation "user1#0" is created 

The following command uses the reservation:

bsub -U user1#0 myjob 

The job can only use hosts reserved by the reservation user1#0. LSF only selects hosts in the reservation. You can use the -m option to specify particular hosts within the list of hosts reserved by the reservation, but you cannot specify other hosts not included in the original reservation.

If you do not specify hosts (bsub -m) or resource requirements (bsub -R), the default resource requirement is to select hosts that are of any host type (LSF assumes "type==any" instead of "type==local" as the default select string).

If you later delete the advance reservation while it is still active, any pending jobs still keep the "type==any" attribute.

A job can only use one reservation. There is no restriction on the number of jobs that can be submitted to a reservation; however, the number of slots available on the hosts in the reservation may run out. For example, reservation user2#0 reserves 128 slots on hostA. When all 128 slots on hostA are used by jobs referencing user2#0, hostA is no longer available to other jobs using reservation user2#0. Any single user or user group can have a maximum of 100 reservation IDs.

Jobs referencing the reservation are killed when the reservation expires. LSF administrators can prevent running jobs from being killed when the reservation expires by changing the termination time of the job using the reservation (bmod -t) before the reservation window closes.

To use an advance reservation on a remote host, submit the job and specify the remote advance reservation ID. For example:

bsub -U user1#01@cluster1 

In this example, we assume the default queue is configured to forward jobs to the remote cluster.

-u mail_user

Sends mail to the specified email destination. To specify a Windows user account, include the domain name in uppercase letters and use a single backslash (DOMAIN_NAME\user_name) in a Windows command line or a double backslash (DOMAIN_NAME\\user_name) in a UNIX command line.
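For example, the following sends job mail to a Windows user account when submitting from a UNIX command line (DOMAIN_NAME and user_name are placeholders):

bsub -u DOMAIN_NAME\\user_name myjob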

-v swap_limit

Sets the total process virtual memory limit to swap_limit for the whole job. The default is no limit. Exceeding the limit causes the job to terminate.

By default, the limit is specified in KB. Use LSF_UNIT_FOR_LIMITS in lsf.conf to specify a larger unit for the limit (MB, GB, TB, PB, or EB).
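For example, assuming the default unit of KB (LSF_UNIT_FOR_LIMITS is not set), the following limits the total process virtual memory of myjob to about 1 GB:

bsub -v 1000000 myjob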

-W [hour:]minute[/host_name | /host_model]

Sets the runtime limit of the batch job. If a UNIX or Linux job runs longer than the specified run limit, the job is sent a SIGUSR2 signal, and is killed if it does not terminate within ten minutes. If a Windows job runs longer than the specified run limit, it is killed immediately. (For a detailed description of how these jobs are killed, see bkill.)

In the queue definition, a TERMINATE action can be configured to override the bkill default action (see the JOB_CONTROLS parameter in lsb.queues(5)).

In an application profile definition, a TERMINATE_CONTROL action can be configured to override the bkill default action (see the TERMINATE_CONTROL parameter in lsb.applications(5)).

If you want to provide LSF with an estimated run time without killing jobs that exceed this value, submit the job with -We, or define the RUNTIME parameter in lsb.applications and submit the job to that application profile. LSF uses the estimated runtime value for scheduling purposes only.

The run limit is in the form of [hour:]minute. The minutes can be specified as a number greater than 59. For example, three and a half hours can either be specified as 3:30, or 210.

The run limit you specify is the normalized run time. This is done so that the job does approximately the same amount of processing, even if it is sent to a host with a faster or slower CPU. Whenever a normalized run time is given, the actual time on the execution host is the specified time multiplied by the CPU factor of the normalization host, then divided by the CPU factor of the execution host.
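As an illustration (the CPU factors here are hypothetical): if you submit a job with a run limit of 60 normalized minutes, the normalization host has a CPU factor of 10, and the execution host has a CPU factor of 5, the actual limit enforced on the execution host is 60 * 10 / 5 = 120 minutes.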

If ABS_RUNLIMIT=Y is defined in lsb.params, the runtime limit and the runtime estimate are not normalized by the host CPU factor. Absolute wall-clock run time is used for all jobs submitted with a runtime limit or runtime estimate.

Optionally, you can supply a host name or a host model name defined in LSF. You must insert '/' between the run limit and the host name or model name. (See lsinfo(1) to get host model information.)
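For example, the following submits myjob with a run limit of 3 hours and 30 minutes normalized to the hypothetical host hostA:

bsub -W 3:30/hostA myjob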

If no host or host model is given, LSF uses the default runtime normalization host defined at the queue level (DEFAULT_HOST_SPEC in lsb.queues) if it has been configured; otherwise, LSF uses the default CPU time normalization host defined at the cluster level (DEFAULT_HOST_SPEC in lsb.params) if it has been configured; otherwise, LSF uses the submission host.

For MultiCluster jobs, if no other CPU time normalization host is defined and information about the submission host is not available, LSF uses the host with the largest CPU factor (the fastest host in the cluster).

If the job also has termination time specified through the bsub -t option, LSF determines whether the job can actually run for the specified length of time allowed by the run limit before the termination time. If not, then the job is aborted.

If the IGNORE_DEADLINE parameter is set in lsb.queues(5), this behavior is overridden and the run limit is ignored.

Jobs submitted to a chunk job queue are not chunked if the run limit is greater than 30 minutes.

-We [hour:]minute[/host_name | /host_model]

Specifies an estimated run time for the job. LSF uses the estimated value for job scheduling purposes only, and does not kill jobs that exceed this value unless the jobs also exceed a defined runtime limit. The format of runtime estimate is same as run limit set by the -W option.
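For example, the following gives LSF an estimated run time of 30 minutes for scheduling purposes while still enforcing a hard run limit of 120 minutes (myjob is a placeholder command):

bsub -We 30 -W 120 myjob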

Use JOB_RUNLIMIT_RATIO in lsb.params to limit the runtime estimate users can set. If JOB_RUNLIMIT_RATIO is set to 0 no restriction is applied to the runtime estimate.

The job-level runtime estimate setting overrides the RUNTIME setting in an application profile in lsb.applications.

-w 'dependency_expression'

LSF does not place your job unless the dependency expression evaluates to TRUE. If you specify a dependency on a job that LSF cannot find (such as a job that has not yet been submitted), your job submission fails.

The dependency expression is a logical expression composed of one or more dependency conditions. To build a dependency expression from multiple conditions, use the following logical operators:

&& (AND)

|| (OR)

! (NOT)

Use parentheses to indicate the order of operations, if necessary.

Enclose the dependency expression in single quotes (') to prevent the shell from interpreting special characters (space, any logic operator, or parentheses). If you use single quotes for the dependency expression, use double quotes (") for quoted items within it, such as job names.
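For example, the following combines several conditions and operators; the job ID 1234 and the job names JobA and JobB are placeholders:

bsub -w 'done(1234) && (started("JobA") || exit("JobB", > 1))' myjob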

In dependency conditions, job names specify only your own jobs, unless you are the LSF administrator. By default, if you use the job name to specify a dependency condition, and more than one of your jobs has the same name, all of your jobs that have that name must satisfy the test. If JOB_DEP_LAST_SUB in lsb.params is set to 1, the test is done on the job submitted most recently.

Use double quotes (") around job names that begin with a number. In the job name, specify the wildcard character asterisk (*) at the end of a string, to indicate all jobs whose name begins with the string. For example, if you use jobA* as the job name, it specifies jobs named jobA, jobA1, jobA_test, jobA.log, etc.

Use the * with dependency conditions to define a one-to-one dependency among job array elements, such that each element of one array depends on the corresponding element of another array. The two job arrays must be the same size.

For example:

bsub -w "done(myarrayA[*])" -J "myArrayB[1-10]" myJob2  

indicates that before element 1 of myArrayB can start, element 1 of myarrayA must be completed, and so on.

You can also use the * to establish one-to-one array element dependencies with bmod after an array has been submitted.

If you want to specify array dependency by array name, set JOB_DEP_LAST_SUB in lsb.params. If you do not have this parameter set, the job is rejected if one of your previous arrays has the same name but a different index.

In dependency conditions, the variable op represents one of the following relational operators:

>

>=

<

<=

==

!=

Use the following conditions to form the dependency expression.

done(job_ID |"job_name" ...)

The job state is DONE.

LSF refers to the oldest job of job_name in memory.

ended(job_ID | "job_name")

The job state is EXIT or DONE.

exit(job_ID | "job_name" [,[operator] exit_code])

The job state is EXIT, and the job's exit code satisfies the comparison test.

If you specify an exit code with no operator, the test is for equality (== is assumed).

If you specify only the job, any exit code satisfies the test.

external(job_ID | "job_name", "status_text")

The job has the specified job status.

If you specify the first word of the message description (no spaces), the text of the job's status begins with the specified word. Only the first word is evaluated.

job_ID | "job_name"

If you specify a job without a dependency condition, the test is for the DONE state (LSF assumes the "done" dependency condition by default).

numdone(job_ID, operator number | *)

For a job array, the number of jobs in the DONE state satisfies the test. Use * (with no operator) to specify all the jobs in the array.
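For example, the following starts myjob only after at least 10 elements of the job array with job ID 22345 (a hypothetical array) have reached the DONE state:

bsub -w 'numdone(22345, >= 10)' myjob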

numended(job_ID, operator number | *)

For a job array, the number of jobs in the DONE or EXIT states satisfies the test. Use * (with no operator) to specify all the jobs in the array.

numexit(job_ID, operator number | *)

For a job array, the number of jobs in the EXIT state satisfies the test. Use * (with no operator) to specify all the jobs in the array.

numhold(job_ID, operator number | *)

For a job array, the number of jobs in the PSUSP state satisfies the test. Use * (with no operator) to specify all the jobs in the array.

numpend(job_ID, operator number | *)

For a job array, the number of jobs in the PEND state satisfies the test. Use * (with no operator) to specify all the jobs in the array.

numrun(job_ID, operator number | *)

For a job array, the number of jobs in the RUN state satisfies the test. Use * (with no operator) to specify all the jobs in the array.

numstart(job_ID, operator number | *)

For a job array, the number of jobs in the RUN, USUSP, or SSUSP states satisfies the test. Use * (with no operator) to specify all the jobs in the array.

post_done(job_ID | "job_name")

The job state is POST_DONE (post-execution processing of the specified job has completed without errors).

post_err(job_ID | "job_name")

The job state is POST_ERR (post-execution processing of the specified job has completed with errors).

started(job_ID | "job_name")

The job state is USUSP, SSUSP, DONE, or EXIT, or the job state is RUN if the job has a pre-execution command (queue-level or job-level) that has completed successfully.

-wa 'signal'

Specifies the job action to be taken before a job control action occurs.

A job warning action must be specified with a job action warning time in order for job warning to take effect.

If -wa is specified, LSF sends the warning action to the job before the actual control action is taken. This allows the job time to save its result before being terminated by the job control action.

The warning action specified by -wa option overrides JOB_WARNING_ACTION in the queue. JOB_WARNING_ACTION is used as the default when no command line option is specified.

For example, the following specifies that 2 minutes before the job reaches its run time limit, the URG signal is sent to the job:

bsub -W 60 -wt '2' -wa 'URG' myjob 

-wt '[hour:]minute'

Specifies the amount of time before a job control action occurs that a job warning action is to be taken. Job action warning time is not normalized.

A job action warning time must be specified with a job warning action in order for job warning to take effect.

The warning time specified by the bsub -wt option overrides JOB_ACTION_WARNING_TIME in the queue. JOB_ACTION_WARNING_TIME is used as the default when no command line option is specified.

For example, the following specifies that 2 minutes before the job reaches its run time limit, the URG signal is sent to the job:

bsub -W 60 -wt '2' -wa 'URG' myjob 

-Zs

Spools a job command file to the directory specified by the JOB_SPOOL_DIR parameter in lsb.params, and uses the spooled file as the command file for the job.

By default, the command file is spooled to LSB_SHAREDIR/cluster_name/lsf_cmddir. If the lsf_cmddir directory does not exist, LSF creates it before spooling the file. LSF removes the spooled file when the job completes.

If JOB_SPOOL_DIR is specified, the -Zs option spools the command file to the specified directory and uses the spooled file as the input file for the job.

JOB_SPOOL_DIR can be any valid path, up to a maximum length of 4094 characters on UNIX and Linux or 255 characters on Windows.

JOB_SPOOL_DIR must be readable and writable by the job submission user, and it must be shared by the master host and the submission host. If the specified directory is not accessible or does not exist, bsub -Zs cannot write to the default directory LSB_SHAREDIR/cluster_name/lsf_cmddir and the job fails.
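For example, assuming JOB_SPOOL_DIR in lsb.params is set to a shared, writable directory such as the hypothetical /share/lsf/spool, the following spools the job command file to that directory and runs the job from the spooled copy:

bsub -Zs myjob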

The -Zs option is not supported for embedded job commands because LSF is unable to determine the first command to be spooled in an embedded job command.

-h

Prints command usage to stderr and exits.

-V

Prints LSF release version to stderr and exits.

command [argument]

The job can be specified by a command line argument command, or through the standard input if the command is not present on the command line. The command can be anything that is provided to a UNIX Bourne shell (see sh(1)). command is assumed to begin with the first word that is not part of a bsub option. All arguments that follow command are provided as the arguments to the command.

The job command can be up to 4094 characters long on UNIX and Linux or up to 255 characters on Windows. If no job name is specified with -J, bjobs, bhist, and bacct display the command as the job name.

If the batch job is not given on the command line, bsub reads the job commands from standard input. If the standard input is a controlling terminal, the user is prompted with bsub> for the commands of the job. The input is terminated by entering CTRL-D on a new line. You can submit multiple commands through standard input.

The commands are executed in the order in which they are given. bsub options can also be specified in the standard input if the line begins with #BSUB; for example, #BSUB -x. If an option is given both on the bsub command line and in the standard input, the command line option overrides the option in the standard input. The user can specify the shell to run the commands by specifying the shell path name in the first line of the standard input, such as #!/bin/csh. If the shell is not given in the first line, the Bourne shell is used. The standard input facility can be used to spool a user's job script, for example bsub < script.

See Examples for examples of specifying commands through standard input.

Output

If the job is successfully submitted, bsub displays the job ID and the queue to which the job has been submitted.

Examples

bsub sleep 100 

Submit the UNIX command sleep together with its argument 100 as a batch job.

bsub -q short -o my_output_file "pwd; ls"  

Submit the UNIX commands pwd and ls as a batch job to the queue named short and store the job output in the file my_output_file.

bsub -m "host1 host3 host8 host9" my_program 

Submit my_program to run on one of the candidate hosts: host1, host3, host8 and host9.

bsub -q "queue1 queue2 queue3" -c 5 my_program 

Submit my_program to one of the candidate queues (queue1, queue2, and queue3), which are selected according to the CPU time limit specified by -c 5.

bsub -I ls 

Submit a batch interactive job which displays the output of ls at the user's terminal.

bsub -Ip vi myfile 

Submit a batch interactive job to edit myfile.

bsub -Is csh 

Submit a batch interactive job that starts csh as an interactive shell.

bsub -b 20:00 -J my_job_name my_program 

Submit my_program to run after 8 p.m. and assign it the job name my_job_name.

bsub my_script 

Submit my_script as a batch job. Since my_script is specified as a command line argument, the my_script file is not spooled. Later changes to the my_script file before the job completes may affect this job.

bsub < default_shell_script 

where default_shell_script contains:

sim1.exe
sim2.exe 

The file default_shell_script is spooled, and the commands are run under the Bourne shell since a shell specification is not given in the first line of the script.

bsub < csh_script  

where csh_script contains:

#!/bin/csh
sim1.exe
sim2.exe 
csh_script is spooled and the commands are run under /bin/csh.  

bsub -q night < my_script 

where my_script contains:

#!/bin/sh
#BSUB -q test
#BSUB -o outfile -e errfile # my default stdout, stderr files
#BSUB -m "host1 host2" # my default candidate hosts
#BSUB -f "input > tmp" -f "output << tmp"
#BSUB -D 200 -c 10/host1
#BSUB -t 13:00
#BSUB -k "dir 5"
sim1.exe
sim2.exe 

The job is submitted to the night queue instead of test, because the command line overrides the script.

bsub -b 20:00 -J my_job_name  
bsub> sleep 1800
bsub> my_program
bsub> CTRL-D 

The job commands are entered interactively.

bsub -T 4 myjob 

Submits myjob with a maximum number of concurrent threads of 4.

bsub -W 15 -sla Kyuquot sleep 100 

Submit the UNIX command sleep together with its argument 100 as a batch job to the service class named Kyuquot.

Limitations

When using account mapping, the command bpeek(1) does not work. File transfer via the -f option to bsub(1) requires rcp(1) to be working between the submission and execution hosts. Use the -N option to request mail, and/or the -o and -e options to specify an output file and error file, respectively.

See also

bjobs, bkill, bqueues, bhosts, bmgroup, bmod, bchkpnt, brestart, bgadd, bgdel, bjgroup, sh, getrlimit, sbrk, libckpt.a, lsb.users, lsb.queues, lsb.params, lsb.hosts, lsb.serviceclasses, mbatchd

