aiida.scheduler.plugins package

Submodules

Plugin for direct execution.

class aiida.scheduler.plugins.direct.DirectJobResource(**kwargs)[source]

Bases: aiida.scheduler.datastructures.NodeNumberJobResource

__module__ = 'aiida.scheduler.plugins.direct'
class aiida.scheduler.plugins.direct.DirectScheduler[source]

Bases: aiida.scheduler.Scheduler

Support for direct execution, bypassing any batch scheduler.

__abstractmethods__ = frozenset([])
__module__ = 'aiida.scheduler.plugins.direct'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 119
_abc_registry = <_weakrefset.WeakSet object>
_convert_time(string)[source]

Convert a string in the format HH:MM:SS to a number of seconds.
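
A minimal sketch of this conversion, assuming well-formed HH:MM:SS input (the error handling below is an illustrative assumption, not necessarily the plugin's exact behaviour):

    import re

    def convert_time(string):
        """Convert an 'HH:MM:SS' string to an integer number of seconds."""
        match = re.match(r'^(\d+):(\d{2}):(\d{2})$', string.strip())
        if match is None:
            raise ValueError("invalid time string '{}'".format(string))
        hours, minutes, seconds = (int(piece) for piece in match.groups())
        return hours * 3600 + minutes * 60 + seconds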

_features = {'can_query_by_user': True}
_get_joblist_command(jobs=None, user=None)[source]

The command to report full information on existing jobs.

TODO: in the case of job arrays, decide what to do (i.e., whether we want to pass the -t option to list each subjob).

_get_kill_command(jobid)[source]

Return the command to kill the job with specified jobid.
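
Since the direct scheduler's job id is simply the process id of the backgrounded script, a plain kill suffices; a hedged sketch (the exact signal sent by the plugin is an assumption):

    def get_kill_command(jobid):
        # jobid is the PID echoed at submission time
        return 'kill {}'.format(jobid)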

_get_submit_command(submit_script)[source]

Return the string to execute to submit a given script.

Note

One needs to redirect stdout and stderr to /dev/null otherwise the daemon remains hanging for the script to run

Parameters: submit_script – the path of the submit script, relative to the working directory. IMPORTANT: submit_script should already be escaped.
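
A sketch of what such a command can look like, following the note above (the exact command line is an assumption, not the verbatim plugin string): run the script in the background, silence stdout/stderr so the daemon does not hang, and echo the PID of the background process so it can serve as the job id:

    def get_submit_command(submit_script):
        # '$!' expands to the PID of the job just sent to the background.
        return 'bash -e {} > /dev/null 2>&1 & echo $!'.format(submit_script)
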
_get_submit_script_header(job_tmpl)[source]

Return the submit script header, using the parameters from the job_tmpl.

Args:
job_tmpl: a JobTemplate instance with relevant parameters set.

_job_resource_class

alias of DirectJobResource

_logger = <celery.utils.log.ProcessAwareLogger object>
_parse_joblist_output(retval, stdout, stderr)[source]

Parse the queue output string, as returned by executing the command returned by the _get_joblist_command method (ps, in the case of this plugin).

Return a list of JobInfo objects, one for each job, with the relevant parameters filled in.

Note

depending on the scheduler configuration, finished jobs may either appear here or not. This function will only return one element for each job found in the joblist output; missing jobs (for whatever reason) simply will not appear here.

_parse_kill_output(retval, stdout, stderr)[source]

Parse the output of the kill command.

To be implemented by the plugin.

Returns: True if everything seems ok, False otherwise.
_parse_submit_output(retval, stdout, stderr)[source]

Parse the output of the submit command, as returned by executing the command returned by the _get_submit_command method.

To be implemented by the plugin.

Return a string with the JobID.
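
For this plugin the submit command sketched earlier echoes the PID, so parsing can reduce to stripping whitespace; a minimal sketch (the error type is an assumption; the real plugin raises its own scheduler-specific exceptions):

    def parse_submit_output(retval, stdout, stderr):
        if retval != 0:
            # the real plugin logs stderr before raising
            raise ValueError('submission failed, retval={}'.format(retval))
        return stdout.strip()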

getJobs(jobs=None, user=None, as_dict=False)[source]

Overrides the parent method in order to list missing processes as DONE.

Plugin for LSF. This has been tested on the CERN lxplus cluster (LSF 9.1.3).

class aiida.scheduler.plugins.lsf.LsfJobResource(**kwargs)[source]

Bases: aiida.scheduler.datastructures.JobResource

An implementation of JobResource for LSF that supports the optional specification of a parallel environment (a string) plus the total number of processors.

‘parallel_env’ should contain a string of the form “host1 host2! hostgroupA! host3 host4” where the “!” symbol indicates the first execution host candidates. Other hosts are added only if the number of processors asked is more than those of the first execution host. See https://www-01.ibm.com/support/knowledgecenter/SSETD4_9.1.2/lsf_command_ref/bsub.1.dita?lang=en for more details about the parallel environment definition (the -m option of bsub).
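
A hypothetical construction sketch (the host names are made up; the only valid keys are those returned by get_valid_keys()):

    # Request 16 MPI processes in total; hosts marked with '!' are the
    # first execution host candidates (see the bsub -m documentation).
    resources = LsfJobResource(
        parallel_env='host1 host2! hostgroupA! host3 host4',
        tot_num_mpiprocs=16,
    )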

__init__(**kwargs)[source]

Initialize the job resources from the passed arguments (the valid keys can be obtained with the function self.get_valid_keys()).

Raises:
  • ValueError – on invalid parameters.
  • TypeError – on invalid parameters.
  • ConfigurationError – if default_mpiprocs_per_machine was set for this computer, since LsfJobResource cannot accept this parameter.
__module__ = 'aiida.scheduler.plugins.lsf'
_default_fields = ('parallel_env', 'tot_num_mpiprocs', 'default_mpiprocs_per_machine')
classmethod accepts_default_mpiprocs_per_machine()[source]

Return True if this JobResource accepts a ‘default_mpiprocs_per_machine’ key, False otherwise.

get_tot_num_mpiprocs()[source]

Return the total number of cpus of this job resource.

class aiida.scheduler.plugins.lsf.LsfScheduler[source]

Bases: aiida.scheduler.Scheduler

Support for the IBM LSF scheduler (https://www-01.ibm.com/support/knowledgecenter/SSETD4_9.1.2/lsf_welcome.html).

__abstractmethods__ = frozenset([])
__module__ = 'aiida.scheduler.plugins.lsf'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 119
_abc_registry = <_weakrefset.WeakSet object>
_features = {'can_query_by_user': False}
_get_detailed_jobinfo_command(jobid)[source]

Return the command to run to get the detailed information on a job, even after the job has finished.

The output text is just retrieved, and returned for logging purposes.

_get_joblist_command(jobs=None, user=None)[source]

The command to report full information on existing jobs.

Separate the fields with the _field_separator string, in this order: jobnum, state, walltime, queue[=partition], user, numnodes, numcores, title.

_get_kill_command(jobid)[source]

Return the command to kill the job with specified jobid.

_get_submit_command(submit_script)[source]

Return the string to execute to submit a given script.

Parameters: submit_script – the path of the submit script, relative to the working directory. IMPORTANT: submit_script should already be escaped.

_get_submit_script_footer(job_tmpl)[source]

Return the submit script final part, using the parameters from the job_tmpl.

Parameters: job_tmpl – a JobTemplate instance with relevant parameters set.

_get_submit_script_header(job_tmpl)[source]

Return the submit script header, using the parameters from the job_tmpl. See the following manual https://www-01.ibm.com/support/knowledgecenter/SSETD4_9.1.2/lsf_command_ref/bsub.1.dita?lang=en for more details about the possible options to bsub, in particular for the parallel environment definition (with the -m option).

Parameters: job_tmpl – a JobTemplate instance with relevant parameters set.

_job_resource_class

alias of LsfJobResource

_joblist_fields = ['id', 'stat', 'exit_reason', 'exec_host', 'user', 'slots', 'max_req_proc', 'exec_host', 'queue', 'finish_time', 'start_time', '%complete', 'submit_time', 'name']
_logger = <celery.utils.log.ProcessAwareLogger object>
_parse_joblist_output(retval, stdout, stderr)[source]

Parse the queue output string, as returned by executing the command returned by the _get_joblist_command method; here it is implemented as a list of lines, one per job, with _field_separator as separator. The order is described in the _get_joblist_command function.

Return a list of JobInfo objects, one for each job, with the relevant parameters filled in.

Note: depending on the scheduler configuration, finished jobs may either appear here or not. This function will only return one element for each job found in the joblist output; missing jobs (for whatever reason) simply will not appear here.

_parse_kill_output(retval, stdout, stderr)[source]

Parse the output of the kill command.

Returns: True if everything seems ok, False otherwise.
_parse_submit_output(retval, stdout, stderr)[source]

Parse the output of the submit command, as returned by executing the command returned by the _get_submit_command method.

To be implemented by the plugin.

Return a string with the JobID.

_parse_time_string(string, fmt='%b %d %H:%M')[source]

Parse a time string and return a datetime object. Example formats: ‘Feb 2 07:39’ or ‘Feb 2 07:39 L’.
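
A minimal sketch, assuming the format above; the missing year and the optional trailing ‘ L’ flag are handled with simplifying assumptions:

    import datetime
    import time

    def parse_time_string(string, fmt='%b %d %H:%M'):
        if string.endswith(' L'):          # drop the optional trailing flag
            string = string[:-2]
        struct = time.strptime(string, fmt)
        # The format carries no year, so assume the current one.
        year = datetime.datetime.now().year
        return datetime.datetime(year, *struct[1:6])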

Base classes for PBSPro and PBS/Torque plugins.

class aiida.scheduler.plugins.pbsbaseclasses.PbsBaseClass[source]

Bases: object

Base class with support for the PBSPro scheduler (http://www.pbsworks.com/) and for PBS and Torque (http://www.adaptivecomputing.com/products/open-source/torque/).

Only a few properties need to be redefined; see the pbspro and torque plugins for examples.

__module__ = 'aiida.scheduler.plugins.pbsbaseclasses'
__weakref__

list of weak references to the object (if defined)

_convert_time(string)[source]

Convert a string in the format HH:MM:SS to a number of seconds.

_features = {'can_query_by_user': False}
_get_detailed_jobinfo_command(jobid)[source]

Return the command to run to get the detailed information on a job, even after the job has finished.

The output text is just retrieved, and returned for logging purposes.

_get_joblist_command(jobs=None, user=None)[source]

The command to report full information on existing jobs.

TODO: in the case of job arrays, decide what to do (i.e., whether we want to pass the -t option to list each subjob).

_get_kill_command(jobid)[source]

Return the command to kill the job with specified jobid.

_get_resource_lines(num_machines, num_mpiprocs_per_machine, num_cores_per_machine, max_memory_kb, max_wallclock_seconds)[source]

Return a list of lines (possibly empty) with the header lines relative to:

  • num_machines
  • num_mpiprocs_per_machine
  • num_cores_per_machine
  • max_memory_kb
  • max_wallclock_seconds

This is done in an external function because it may change in different subclasses.
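
An illustrative sketch of what such lines can look like (the Torque-style nodes/ppn syntax below is an assumption for illustration; the exact directives differ between PBSPro and Torque, which is precisely why subclasses override this method):

    def get_resource_lines(num_machines, num_mpiprocs_per_machine,
                           num_cores_per_machine, max_memory_kb,
                           max_wallclock_seconds):
        lines = ['#PBS -l nodes={}:ppn={}'.format(
            num_machines, num_mpiprocs_per_machine)]
        if max_wallclock_seconds is not None:
            hours, rest = divmod(max_wallclock_seconds, 3600)
            minutes, seconds = divmod(rest, 60)
            lines.append('#PBS -l walltime={:02d}:{:02d}:{:02d}'.format(
                hours, minutes, seconds))
        if max_memory_kb is not None:
            lines.append('#PBS -l mem={}kb'.format(max_memory_kb))
        return lines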

_get_submit_command(submit_script)[source]

Return the string to execute to submit a given script.

Args:
submit_script: the path of the submit script, relative to the working directory. IMPORTANT: submit_script should already be escaped.

_get_submit_script_header(job_tmpl)[source]

Return the submit script header, using the parameters from the job_tmpl.

Args:
job_tmpl: a JobTemplate instance with relevant parameters set.

TODO: truncate the title if too long

_job_resource_class

alias of PbsJobResource

_logger = <celery.utils.log.ProcessAwareLogger object>
_map_status = {'B': u'RUNNING', 'C': u'DONE', 'E': u'RUNNING', 'F': u'DONE', 'H': u'QUEUED_HELD', 'M': u'UNDETERMINED', 'Q': u'QUEUED', 'R': u'RUNNING', 'S': u'SUSPENDED', 'T': u'QUEUED', 'U': u'SUSPENDED', 'W': u'QUEUED', 'X': u'DONE'}
_parse_joblist_output(retval, stdout, stderr)[source]

Parse the queue output string, as returned by executing the command returned by the _get_joblist_command method (qstat -f).

Return a list of JobInfo objects, one for each job, with the relevant parameters filled in.

Note: depending on the scheduler configuration, finished jobs may either appear here or not. This function will only return one element for each job found in the qstat output; missing jobs (for whatever reason) simply will not appear here.

_parse_kill_output(retval, stdout, stderr)[source]

Parse the output of the kill command.

To be implemented by the plugin.

Returns: True if everything seems ok, False otherwise.
_parse_submit_output(retval, stdout, stderr)[source]

Parse the output of the submit command, as returned by executing the command returned by the _get_submit_command method.

To be implemented by the plugin.

Return a string with the JobID.

_parse_time_string(string, fmt='%a %b %d %H:%M:%S %Y')[source]

Parse a time string in the format returned from qstat -f and return a datetime object.

class aiida.scheduler.plugins.pbsbaseclasses.PbsJobResource(*args, **kwargs)[source]

Bases: aiida.scheduler.datastructures.NodeNumberJobResource

__init__(*args, **kwargs)[source]

It extends the base class init method and calculates the num_cores_per_machine field to pass to PBS-like schedulers.

Checks that num_cores_per_machine is a multiple of num_cores_per_mpiproc and/or num_mpiprocs_per_machine.

Check sequence (see the worked example after this entry):

  1. If both num_cores_per_mpiproc and num_cores_per_machine are specified, check that they are consistent with each other.
  2. If only num_cores_per_mpiproc is passed, calculate num_cores_per_machine.
  3. If only num_cores_per_machine is passed, use it.

__module__ = 'aiida.scheduler.plugins.pbsbaseclasses'
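
A worked example of the consistency condition (hypothetical numbers):

    # With 4 MPI processes per machine and 2 cores per MPI process,
    # the only consistent core count per machine is 8.
    num_mpiprocs_per_machine = 4
    num_cores_per_mpiproc = 2
    num_cores_per_machine = 8
    assert num_cores_per_machine == num_cores_per_mpiproc * num_mpiprocs_per_machine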

Plugin for PBSPro. This has been tested on PBSPro v. 12.

class aiida.scheduler.plugins.pbspro.PbsproScheduler[source]

Bases: aiida.scheduler.plugins.pbsbaseclasses.PbsBaseClass, aiida.scheduler.Scheduler

Subclass to support the PBSPro scheduler (http://www.pbsworks.com/).

I redefine only what needs to change from the base class.

__abstractmethods__ = frozenset([])
__module__ = 'aiida.scheduler.plugins.pbspro'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 119
_abc_registry = <_weakrefset.WeakSet object>
_get_resource_lines(num_machines, num_mpiprocs_per_machine, num_cores_per_machine, max_memory_kb, max_wallclock_seconds)[source]

Return the lines for machines, memory and wallclock relative to pbspro.

_logger = <celery.utils.log.ProcessAwareLogger object>

Plugin for SGE. This has been tested on GE 6.2u3.

Plugin originally written by Marco Dorigo. Email: marco(DOT)dorigo(AT)rub(DOT)de

class aiida.scheduler.plugins.sge.SgeJobResource(**kwargs)[source]

Bases: aiida.scheduler.datastructures.ParEnvJobResource

__module__ = 'aiida.scheduler.plugins.sge'
class aiida.scheduler.plugins.sge.SgeScheduler[source]

Bases: aiida.scheduler.Scheduler

Support for the Sun Grid Engine scheduler and its variants/forks (Son of Grid Engine, Oracle Grid Engine, …)

__abstractmethods__ = frozenset([])
__module__ = 'aiida.scheduler.plugins.sge'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 119
_abc_registry = <_weakrefset.WeakSet object>
_features = {'can_query_by_user': True}
_get_detailed_jobinfo_command(jobid)[source]

Return the command to run to get detailed information on a job. This is typically called after the job has finished, to retrieve the most detailed information possible about it. This is needed because most schedulers simply make finished jobs disappear from the ‘qstat’ output, while it is sometimes useful to retrieve more detailed information about the job exit status, etc.
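
A plausible sketch using SGE's accounting database (the use of qacct here is an assumption about this plugin, though qacct -j is the standard SGE way to query finished jobs):

    def get_detailed_jobinfo_command(jobid):
        # qacct reads the accounting file, so it still works after the
        # job has left the qstat listing.
        return 'qacct -j {}'.format(jobid)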

_get_joblist_command(jobs=None, user=None)[source]

The command to report full information on existing jobs.

TODO: in the case of job arrays, decide what to do (i.e., whether we want to pass the -t option to list each subjob).

!!!ALL COPIED FROM PBSPRO!!! TODO: understand whether it is worth escaping the username, or rather leaving it unescaped so that $USER can be passed.

_get_kill_command(jobid)[source]

Return the command to kill the job with specified jobid.

_get_submit_command(submit_script)[source]

Return the string to execute to submit a given script.

Args:
submit_script: the path of the submit script, relative to the working directory. IMPORTANT: submit_script should already be escaped.

_get_submit_script_header(job_tmpl)[source]

Return the submit script header, using the parameters from the job_tmpl.

Args:
job_tmpl: a JobTemplate instance with relevant parameters set.

TODO: truncate the title if too long

_job_resource_class

alias of SgeJobResource

_logger = <celery.utils.log.ProcessAwareLogger object>
_parse_joblist_output(retval, stdout, stderr)[source]

Parse the joblist output (‘qstat’), as returned by executing the command returned by the _get_joblist_command method.

To be implemented by the plugin.

Return a list of JobInfo objects, one for each job, each with at least its default parameters implemented.

_parse_kill_output(retval, stdout, stderr)[source]

Parse the output of the kill command.

To be implemented by the plugin.

Returns: True if everything seems ok, False otherwise.
_parse_submit_output(retval, stdout, stderr)[source]

Parse the output of the submit command, as returned by executing the command returned by the _get_submit_command method.

To be implemented by the plugin.

Return a string with the JobID.

_parse_time_string(string, fmt='%Y-%m-%dT%H:%M:%S')[source]

Parse a time string in the format returned from qstat -xml -ext and return a datetime object. Example format: 2013-06-13T11:53:11.

Plugin for SLURM. This has been tested on SLURM 14.03.7 on the CSCS.ch machines.

class aiida.scheduler.plugins.slurm.SlurmJobResource(*args, **kwargs)[source]

Bases: aiida.scheduler.datastructures.NodeNumberJobResource

__init__(*args, **kwargs)[source]

It extends the base class init method and calculates the num_cores_per_mpiproc field to pass to Slurm schedulers.

Checks that num_cores_per_machine is a multiple of num_cores_per_mpiproc and/or num_mpiprocs_per_machine.

Check sequence:

  1. If both num_cores_per_mpiproc and num_cores_per_machine are specified, check that they are consistent with each other.
  2. If only num_cores_per_machine is passed, calculate num_cores_per_mpiproc, which must always be an integer value.
  3. If only num_cores_per_mpiproc is passed, use it.

__module__ = 'aiida.scheduler.plugins.slurm'
class aiida.scheduler.plugins.slurm.SlurmScheduler[source]

Bases: aiida.scheduler.Scheduler

Support for the SLURM scheduler (http://slurm.schedmd.com/).

__abstractmethods__ = frozenset([])
__module__ = 'aiida.scheduler.plugins.slurm'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 119
_abc_registry = <_weakrefset.WeakSet object>
_convert_time(string)[source]

Convert a string in the format DD-HH:MM:SS to a number of seconds.

_features = {'can_query_by_user': False}
_get_detailed_jobinfo_command(jobid)[source]

Return the command to run to get the detailed information on a job, even after the job has finished.

The output text is just retrieved, and returned for logging purposes. The --parsable option separates the fields with a pipe (|), adding a pipe also at the end.
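
A hedged sketch of such a command built on sacct (the field list below is illustrative, not necessarily the plugin's exact choice):

    def get_detailed_jobinfo_command(jobid):
        # sacct queries the accounting database, so it also covers
        # jobs that have already finished.
        return ('sacct --format=AllocCPUS,Account,State,Elapsed '
                '--parsable --jobs={}'.format(jobid))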

_get_joblist_command(jobs=None, user=None)[source]

The command to report full information on existing jobs.

Separate the fields with the _field_separator string, in this order: jobnum, state, walltime, queue[=partition], user, numnodes, numcores, title.

_get_kill_command(jobid)[source]

Return the command to kill the job with specified jobid.

_get_submit_command(submit_script)[source]

Return the string to execute to submit a given script.

Args:
submit_script: the path of the submit script, relative to the working directory. IMPORTANT: submit_script should already be escaped.

_get_submit_script_header(job_tmpl)[source]

Return the submit script header, using the parameters from the job_tmpl.

Args:
job_tmpl: a JobTemplate instance with relevant parameters set.

TODO: truncate the title if too long

_job_resource_class

alias of SlurmJobResource

_logger = <celery.utils.log.ProcessAwareLogger object>
_parse_joblist_output(retval, stdout, stderr)[source]

Parse the queue output string, as returned by executing the command returned by the _get_joblist_command method; here it is implemented as a list of lines, one per job, with _field_separator as separator. The order is described in the _get_joblist_command function.

Return a list of JobInfo objects, one for each job, with the relevant parameters filled in.

Note: depending on the scheduler configuration, finished jobs may either appear here or not. This function will only return one element for each job found in the joblist output; missing jobs (for whatever reason) simply will not appear here.

_parse_kill_output(retval, stdout, stderr)[source]

Parse the output of the kill command.

To be implemented by the plugin.

Returns: True if everything seems ok, False otherwise.
_parse_submit_output(retval, stdout, stderr)[source]

Parse the output of the submit command, as returned by executing the command returned by the _get_submit_command method.

To be implemented by the plugin.

Return a string with the JobID.

_parse_time_string(string, fmt='%Y-%m-%dT%H:%M:%S')[source]

Parse a time string in the format returned by the SLURM commands (e.g. 2013-06-13T11:53:11) and return a datetime object.

fields = [('%i', 'job_id'), ('%t', 'state_raw'), ('%r', 'annotation'), ('%B', 'executing_host'), ('%u', 'username'), ('%D', 'number_nodes'), ('%C', 'number_cpus'), ('%R', 'allocated_machines'), ('%P', 'partition'), ('%l', 'time_limit'), ('%M', 'time_used'), ('%S', 'dispatch_time'), ('%j', 'job_name'), ('%V', 'submission_time')]
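
A sketch of how the fields above can drive the squeue invocation: join the format codes into the -o argument, then split each output line on the same separator when parsing (the separator value and exact flags are assumptions for illustration):

    separator = '^^^'
    fields = [('%i', 'job_id'), ('%t', 'state_raw'), ('%u', 'username')]  # abbreviated
    format_string = separator.join(code for code, _name in fields)
    command = "squeue --noheader -o '{}'".format(format_string)
    # Each stdout line then splits into one value per logical field:
    # line.split(separator) -> [job_id, state_raw, username]
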
class aiida.scheduler.plugins.test_direct.TestParserGetJobList(methodName='runTest')[source]

Bases: unittest.case.TestCase

Tests to verify that the function _parse_joblist_output behaves correctly. The test parses a string defined above, so that it can be run offline.

__module__ = 'aiida.scheduler.plugins.test_direct'
test_parse_linux_joblist_output()[source]

Test whether _parse_joblist can parse the ps output on Linux.

test_parse_mac_joblist_output()[source]

Test whether _parse_joblist can parse the ps output on Mac.

test_parse_mac_wrong()[source]

Test the behaviour of _parse_joblist on malformed ps output on Mac.

class aiida.scheduler.plugins.test_lsf.TestParserBjobs(methodName='runTest')[source]

Bases: unittest.case.TestCase

Tests to verify that the function _parse_joblist_output behaves correctly. The test parses a string defined above, so that it can be run offline.

__module__ = 'aiida.scheduler.plugins.test_lsf'
test_parse_common_joblist_output()[source]

Test whether _parse_joblist can parse the bjobs output

class aiida.scheduler.plugins.test_lsf.TestParserBkill(methodName='runTest')[source]

Bases: unittest.case.TestCase

__module__ = 'aiida.scheduler.plugins.test_lsf'
test_kill_output()[source]

Test the parsing of the output of the kill command.

class aiida.scheduler.plugins.test_lsf.TestParserSubmit(methodName='runTest')[source]

Bases: unittest.case.TestCase

__module__ = 'aiida.scheduler.plugins.test_lsf'
test_submit_output()[source]

Test the parsing of the output of the submission command

class aiida.scheduler.plugins.test_lsf.TestSubmitScript(methodName='runTest')[source]

Bases: unittest.case.TestCase

__module__ = 'aiida.scheduler.plugins.test_lsf'
test_submit_script()[source]

Test the creation of a simple submission script.

test_submit_script_with_num_machines()[source]

Test that script generation fails if we specify only num_machines.

class aiida.scheduler.plugins.test_pbspro.TestParserQstat(methodName='runTest')[source]

Bases: unittest.case.TestCase

Tests to verify that the function _parse_joblist_output behaves correctly. The test parses a string defined above, so that it can be run offline.

__module__ = 'aiida.scheduler.plugins.test_pbspro'
test_parse_common_joblist_output()[source]

Test whether _parse_joblist can parse the qstat -f output

test_parse_with_unexpected_newlines()[source]

Test whether _parse_joblist can parse the qstat -f output also when there are unexpected newlines

class aiida.scheduler.plugins.test_pbspro.TestSubmitScript(methodName='runTest')[source]

Bases: unittest.case.TestCase

__module__ = 'aiida.scheduler.plugins.test_pbspro'
test_submit_script()[source]

Test that the script works fine with default options.

test_submit_script_bad_shebang()[source]

Test script generation when a non-default (‘bad’) shebang line is supplied.

test_submit_script_with_num_cores_per_machine()[source]

Test that the script works fine if we specify only the num_cores_per_machine value.

test_submit_script_with_num_cores_per_machine_and_mpiproc1()[source]

Test that the script works fine if we pass consistent values for both num_cores_per_machine and num_cores_per_mpiproc. It should pass the check: res.num_cores_per_mpiproc * res.num_mpiprocs_per_machine == res.num_cores_per_machine.

test_submit_script_with_num_cores_per_machine_and_mpiproc2()[source]

Test that the script fails if we pass inconsistent values for num_cores_per_machine and num_cores_per_mpiproc, violating the check: res.num_cores_per_mpiproc * res.num_mpiprocs_per_machine == res.num_cores_per_machine.

test_submit_script_with_num_cores_per_mpiproc()[source]

Test that the script works fine if we pass only the num_cores_per_mpiproc value.

class aiida.scheduler.plugins.test_sge.TestCommand(methodName='runTest')[source]

Bases: unittest.case.TestCase

__module__ = 'aiida.scheduler.plugins.test_sge'
_parse_time_string(string, fmt='%Y-%m-%dT%H:%M:%S')[source]

Parse a time string in the format returned from qstat -xml -ext and return a datetime object. Example format: 2013-06-13T11:53:11.

test_detailed_jobinfo_command()[source]
test_get_joblist_command()[source]
test_get_submit_command()[source]
test_parse_joblist_output()[source]
test_parse_submit_output()[source]
test_submit_script()[source]
class aiida.scheduler.plugins.test_slurm.TestParserSqueue(methodName='runTest')[source]

Bases: unittest.case.TestCase

Tests to verify that the function _parse_joblist_output behaves correctly. The test parses a string defined above, so that it can be run offline.

__module__ = 'aiida.scheduler.plugins.test_slurm'
test_parse_common_joblist_output()[source]

Test whether _parse_joblist can parse the squeue output.

class aiida.scheduler.plugins.test_slurm.TestSubmitScript(methodName='runTest')[source]

Bases: unittest.case.TestCase

__module__ = 'aiida.scheduler.plugins.test_slurm'
test_submit_script()[source]

Test the creation of a simple submission script.

test_submit_script_bad_shebang()[source]
test_submit_script_with_num_cores_per_machine()[source]

Test that the script works fine if we specify only the num_cores_per_machine value.

test_submit_script_with_num_cores_per_machine_and_mpiproc1()[source]

Test that the script works fine if we pass consistent values for both num_cores_per_machine and num_cores_per_mpiproc. It should pass the check: res.num_cores_per_mpiproc * res.num_mpiprocs_per_machine == res.num_cores_per_machine.

test_submit_script_with_num_cores_per_machine_and_mpiproc2()[source]

Test that the script fails if we pass inconsistent values for num_cores_per_machine and num_cores_per_mpiproc, violating the check: res.num_cores_per_mpiproc * res.num_mpiprocs_per_machine == res.num_cores_per_machine.

test_submit_script_with_num_cores_per_mpiproc()[source]

Test that the script works fine if we pass only the num_cores_per_mpiproc value.

class aiida.scheduler.plugins.test_slurm.TestTimes(methodName='runTest')[source]

Bases: unittest.case.TestCase

__module__ = 'aiida.scheduler.plugins.test_slurm'
test_time_conversion()[source]

Test conversion of (relative) times.

From the SLURM docs, acceptable time formats include “minutes”, “minutes:seconds”, “hours:minutes:seconds”, “days-hours”, “days-hours:minutes” and “days-hours:minutes:seconds”.
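
Hand-computed examples of these formats and the corresponding number of seconds (illustrative values, not taken from the test data):

    # '10'        -> 10 minutes            ->   600 seconds
    # '10:30'     -> 10 min 30 s           ->   630 seconds
    # '2:10:30'   -> 2 h 10 min 30 s       ->  7830 seconds
    # '1-2'       -> 1 day 2 h             -> 93600 seconds
    # '1-2:10'    -> 1 day 2 h 10 min      -> 94200 seconds
    # '1-2:10:30' -> 1 day 2 h 10 min 30 s -> 94230 seconds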

class aiida.scheduler.plugins.test_torque.TestParserQstat(methodName='runTest')[source]

Bases: unittest.case.TestCase

Tests to verify that the function _parse_joblist_output behaves correctly. The test parses a string defined above, so that it can be run offline.

__module__ = 'aiida.scheduler.plugins.test_torque'
test_parse_common_joblist_output()[source]

Test whether _parse_joblist can parse the qstat -f output

test_parse_with_unexpected_newlines()[source]

Test whether _parse_joblist can parse the qstat -f output also when there are unexpected newlines

class aiida.scheduler.plugins.test_torque.TestSubmitScript(methodName='runTest')[source]

Bases: unittest.case.TestCase

__module__ = 'aiida.scheduler.plugins.test_torque'
test_submit_script()[source]

Test that the script works fine with default options.

test_submit_script_with_num_cores_per_machine()[source]

Test that the script works fine if we specify only the num_cores_per_machine value.

test_submit_script_with_num_cores_per_machine_and_mpiproc1()[source]

Test that the script works fine if we pass consistent values for both num_cores_per_machine and num_cores_per_mpiproc. It should pass the check: res.num_cores_per_mpiproc * res.num_mpiprocs_per_machine == res.num_cores_per_machine.

test_submit_script_with_num_cores_per_machine_and_mpiproc2()[source]

Test that the script fails if we pass inconsistent values for num_cores_per_machine and num_cores_per_mpiproc, violating the check: res.num_cores_per_mpiproc * res.num_mpiprocs_per_machine == res.num_cores_per_machine.

test_submit_script_with_num_cores_per_mpiproc()[source]

Test that the script works fine if we pass only the num_cores_per_mpiproc value.

Plugin for PBS/Torque. This has been tested on Torque v.2.4.16 (from Ubuntu).

class aiida.scheduler.plugins.torque.TorqueScheduler[source]

Bases: aiida.scheduler.plugins.pbsbaseclasses.PbsBaseClass, aiida.scheduler.Scheduler

Subclass to support the Torque scheduler.

I redefine only what needs to change from the base class.

__abstractmethods__ = frozenset([])
__module__ = 'aiida.scheduler.plugins.torque'
_abc_cache = <_weakrefset.WeakSet object>
_abc_negative_cache = <_weakrefset.WeakSet object>
_abc_negative_cache_version = 119
_abc_registry = <_weakrefset.WeakSet object>
_get_resource_lines(num_machines, num_mpiprocs_per_machine, num_cores_per_machine, max_memory_kb, max_wallclock_seconds)[source]

Return the lines for machines, memory and wallclock relative to Torque.

_logger = <celery.utils.log.ProcessAwareLogger object>