aiida.engine package

Module with all the internals that make up the engine of aiida-core.

aiida.engine.run(process, *args, **inputs)[source]

Run the process with the supplied inputs in a local runner that will block until the process is completed.

Parameters
  • process (aiida.engine.Process) – the process class or process function to run

  • inputs (dict) – the inputs to be passed to the process

Returns

the outputs of the process

Return type

dict

aiida.engine.run_get_pk(process, *args, **inputs)[source]

Run the process with the supplied inputs in a local runner that will block until the process is completed.

Parameters
  • process (aiida.engine.Process) – the process class or process function to run

  • inputs (dict) – the inputs to be passed to the process

Returns

tuple of the outputs of the process and process node pk

Return type

(dict, int)

aiida.engine.run_get_node(process, *args, **inputs)[source]

Run the process with the supplied inputs in a local runner that will block until the process is completed.

Parameters
  • process (aiida.engine.Process) – the process class or process function to run

  • inputs (dict) – the inputs to be passed to the process

Returns

tuple of the outputs of the process and the process node

Return type

(dict, aiida.orm.ProcessNode)

aiida.engine.submit(process, **inputs)[source]

Submit the process with the supplied inputs to the daemon, immediately returning control to the interpreter.

Parameters
  • process (aiida.engine.Process) – the process class to submit

  • inputs (dict) – the inputs to be passed to the process

Returns

the calculation node of the process

Return type

aiida.orm.ProcessNode
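The differing return values of the launchers above can be illustrated with a toy stand-in (all names here, such as FakeNode and the toy_run* functions, are hypothetical; the real launchers execute AiiDA processes and return stored provenance nodes):

```python
import itertools

# Hypothetical stand-ins: the real launchers run aiida.engine.Process
# subclasses or process functions and return stored provenance nodes.
_pk_counter = itertools.count(1)

class FakeNode:
    """Minimal stand-in for aiida.orm.ProcessNode with a pk attribute."""
    def __init__(self):
        self.pk = next(_pk_counter)

def toy_run(process, **inputs):
    """Block until the process completes and return its outputs dict."""
    return process(**inputs)

def toy_run_get_pk(process, **inputs):
    """Like toy_run, but also return the pk of the process node."""
    node = FakeNode()
    return process(**inputs), node.pk

def toy_run_get_node(process, **inputs):
    """Like toy_run, but also return the process node itself."""
    node = FakeNode()
    return process(**inputs), node

def add(x, y):
    # stand-in for a process function returning an outputs dictionary
    return {'sum': x + y}

outputs = toy_run(add, x=4, y=5)                 # outputs dict only
outputs, pk = toy_run_get_pk(add, x=4, y=5)      # outputs plus integer pk
outputs, node = toy_run_get_node(add, x=4, y=5)  # outputs plus node object
```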

class aiida.engine.ProcessBuilder(process_class)[source]

Bases: aiida.engine.processes.builder.ProcessBuilderNamespace

A process builder that helps setting up the inputs for creating a new process.

__abstractmethods__ = frozenset({})
__init__(process_class)[source]

Construct a ProcessBuilder instance for the given Process class.

Parameters

process_class – the Process subclass

__module__ = 'aiida.engine.processes.builder'
_abc_impl = <_abc_data object>
property process_class
class aiida.engine.ProcessBuilderNamespace(port_namespace)[source]

Bases: collections.abc.MutableMapping

Input namespace for the ProcessBuilder.

Dynamically generates the getters and setters for the input ports of a given PortNamespace

__abstractmethods__ = frozenset({})
__delitem__(item)[source]
__dir__()[source]

Default dir() implementation.

__getitem__(item)[source]
__init__(port_namespace)[source]

Dynamically construct the get and set properties for the ports of the given port namespace.

For each port in the given port namespace a get and set property will be constructed dynamically and added to the ProcessBuilderNamespace. The docstring for these properties will be defined by calling str() on the Port, which should return the description of the Port.

Parameters

port_namespace (str) – the inputs PortNamespace for which to construct the builder

__iter__()[source]
__len__()[source]
__module__ = 'aiida.engine.processes.builder'
__repr__()[source]

Return repr(self).

__setattr__(attr, value)[source]

Assign the given value to the port with key attr.

Note

Any attribute set without a leading underscore corresponds to an input and should hence be validated against the corresponding input port of the process spec

Parameters
  • attr (str) – attribute

  • value – value

__setitem__(item, value)[source]
__weakref__

list of weak references to the object (if defined)

_abc_impl = <_abc_data object>
_inputs(prune=False)[source]

Return the entire mapping of inputs specified for this builder.

Parameters

prune – boolean, when True, will prune nested namespaces that contain no actual values whatsoever

Returns

mapping of inputs ports and their input values.

_prune(value)[source]

Prune a nested mapping from all mappings that are completely empty.

Note

a nested mapping that is completely empty means it contains at most other empty mappings. Other null values, such as None or empty lists, should not be pruned.

Parameters

value – a nested mapping of port values

Returns

the same mapping but without any nested namespace that is completely empty.
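The pruning rule described above can be expressed as a short recursive function (a sketch of the documented semantics, not the actual implementation):

```python
from collections.abc import Mapping

def prune(value):
    """Recursively drop nested mappings that contain nothing but other
    empty mappings; other null values (None, [], '') are preserved."""
    if isinstance(value, Mapping):
        result = {key: prune(sub) for key, sub in value.items()}
        # keep entries that are non-mappings or non-empty mappings
        return {key: sub for key, sub in result.items()
                if not (isinstance(sub, Mapping) and not sub)}
    return value

nested = {'a': {'b': {}, 'c': None}, 'd': {}, 'e': []}
pruned = prune(nested)   # {'a': {'c': None}, 'e': []}
```

Note that `'c': None` and `'e': []` survive: only mappings that are empty after pruning are removed.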

_update(*args, **kwds)[source]

Update the values of the builder namespace passing a mapping as argument or individual keyword value pairs.

The method is prefixed with an underscore in order to not reserve the name for a potential port, but in principle the method functions just as collections.abc.MutableMapping.update.

Parameters
  • args (list) – a single mapping that should be mapped on the namespace

  • kwds (dict) – keyword value pairs that should be mapped onto the ports
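The dual attribute/mapping access of ProcessBuilderNamespace can be sketched with a minimal MutableMapping (a simplified model under the assumption of a fixed set of port names; the real class also validates values against the port specification):

```python
from collections.abc import MutableMapping

class ToyBuilderNamespace(MutableMapping):
    """Simplified model: attribute access is forwarded to an internal dict,
    one entry per input port; names with a leading underscore stay regular
    attributes, mirroring the underscore-prefixed methods of the real class."""
    def __init__(self, port_names):
        object.__setattr__(self, '_data', {})
        object.__setattr__(self, '_ports', set(port_names))

    def __setattr__(self, attr, value):
        if attr.startswith('_'):
            object.__setattr__(self, attr, value)
        elif attr in self._ports:
            self._data[attr] = value
        else:
            raise AttributeError(f'no input port named {attr!r}')

    def __getattr__(self, attr):
        try:
            return self._data[attr]
        except KeyError:
            raise AttributeError(attr) from None

    def __getitem__(self, key): return self._data[key]
    def __setitem__(self, key, value): setattr(self, key, value)
    def __delitem__(self, key): del self._data[key]
    def __iter__(self): return iter(self._data)
    def __len__(self): return len(self._data)

builder = ToyBuilderNamespace({'x', 'y'})
builder.x = 4          # attribute style
builder['y'] = 5       # mapping style, routed through the same validation
```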

class aiida.engine.CalcJob(*args, **kwargs)[source]

Bases: aiida.engine.processes.process.Process

Implementation of the CalcJob process.

__abstractmethods__ = frozenset({})
__init__(*args, **kwargs)[source]

Construct a CalcJob instance.

Construct the instance only if it is a sub class of CalcJob, otherwise raise InvalidOperation.

See documentation of aiida.engine.Process.

__module__ = 'aiida.engine.processes.calcjobs.calcjob'
_abc_impl = <_abc_data object>
_node_class

alias of aiida.orm.nodes.process.calculation.calcjob.CalcJobNode

_spec_class

alias of aiida.engine.processes.process_spec.CalcJobProcessSpec

classmethod define(spec)[source]
classmethod get_state_classes()[source]
on_terminated()[source]

Clean up the node by deleting the calculation job state.

Note

This has to be done before calling the super method, because that will seal the node, after which it can no longer be changed

property options

Return the options of the metadata that were specified when this process instance was launched.

Returns

options dictionary

Return type

dict

parse(retrieved_temporary_folder=None)[source]

Parse a retrieved job calculation.

This is called once the calculation has finished and the data has been retrieved.

prepare_for_submission(folder)[source]

Prepare files for submission of calculation.

presubmit(folder)[source]

Prepares the calculation folder with all inputs, ready to be copied to the cluster.

Parameters

folder (aiida.common.folders.Folder) – a SandboxFolder that can be used to write calculation input files and the scheduling script.

Returns

the CalcInfo object containing the information needed by the daemon to handle operations.

Return type

aiida.common.CalcInfo

run()[source]

Run the calculation job.

This means invoking the presubmit and storing the temporary folder in the node’s repository. Then we move the process into the Wait state, waiting for the UPLOAD transport task to be started.

spec_options = <aiida.engine.processes.ports.PortNamespace object>
class aiida.engine.ExitCode[source]

Bases: aiida.engine.processes.exit_code.ExitCode

A simple data class to define an exit code for a Process.

When an instance of this class is returned from a Process._run() call, it will be interpreted to mean that the Process should be terminated and that the exit status and message of the namedtuple should be set to the corresponding attributes of the node.

Note

this class explicitly sub-classes a namedtuple to not break backwards compatibility and to have it behave exactly as a tuple.

Parameters
  • status (int) – positive integer exit status, where a non-zero value indicates the process failed, default is 0

  • message (str) – optional message with more details about the failure mode

  • invalidates_cache (bool) – optional flag, indicating that a process should not be used in caching

__module__ = 'aiida.engine.processes.exit_code'
format(**kwargs)[source]

Create a clone of this exit code where the template message is replaced by the keyword arguments.

Parameters

kwargs – replacement parameters for the template message

Returns

ExitCode
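Since ExitCode behaves exactly as a namedtuple, its use and the format method can be sketched as follows (ToyExitCode is an illustrative stand-in, not the real class):

```python
from collections import namedtuple

class ToyExitCode(namedtuple('ToyExitCode', 'status message invalidates_cache')):
    """Stand-in mirroring the documented behaviour: a namedtuple with a
    format() method that substitutes placeholders in the message template."""
    __slots__ = ()

    def __new__(cls, status=0, message=None, invalidates_cache=False):
        # default status 0, i.e. success, matching the documented default
        return super().__new__(cls, status, message, invalidates_cache)

    def format(self, **kwargs):
        # return a clone with the template message filled in
        return self._replace(message=self.message.format(**kwargs))

error = ToyExitCode(418, 'the file `{filename}` is missing')
concrete = error.format(filename='results.out')
```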

class aiida.engine.ExitCodesNamespace(dictionary=None)[source]

Bases: aiida.common.extendeddicts.AttributeDict

A namespace of ExitCode instances that can be accessed through getattr as well as getitem.

Additionally, the collection can be called with an identifier that can either reference the integer status of the ExitCode that needs to be retrieved or the key in the collection.

__call__(identifier)[source]

Return a specific exit code identified by either its exit status or label.

Parameters

identifier (str) – the identifier of the exit code. If the type is integer, it will be interpreted as the exit code status, otherwise it will be interpreted as the exit code label

Returns

an ExitCode instance

Return type

aiida.engine.ExitCode

Raises

ValueError – if no exit code with the given label is defined for this process

__module__ = 'aiida.engine.processes.exit_code'
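The dual lookup by status or label can be sketched with a dict subclass (an illustrative model of the documented __call__ behaviour; exit codes are shown as plain (status, message) tuples):

```python
class ToyExitCodesNamespace(dict):
    """Attribute access plus __call__ lookup by label or integer status."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name) from None

    def __call__(self, identifier):
        if isinstance(identifier, int):
            # interpret the identifier as the exit status
            for code in self.values():
                if code[0] == identifier:
                    return code
            raise ValueError(f'no exit code with status {identifier}')
        try:
            return self[identifier]   # interpret it as the label
        except KeyError:
            raise ValueError(f'no exit code with label {identifier!r}') from None

codes = ToyExitCodesNamespace({
    'ERROR_MISSING_OUTPUT': (11, 'The process did not register a required output.'),
})
```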
aiida.engine.calcfunction(function)[source]

A decorator to turn a standard python function into a calcfunction. Example usage:

>>> from aiida.orm import Int
>>>
>>> # Define the calcfunction
>>> @calcfunction
... def sum(a, b):
...     return a + b
>>>
>>> # Run it with some input
>>> r = sum(Int(4), Int(5))
>>> print(r)
9
>>> r.get_incoming().all()
[Neighbor(link_type='', link_label='result',
node=<CalcFunctionNode: uuid: ce0c63b3-1c84-4bb8-ba64-7b70a36adf34 (pk: 3567)>)]
>>> r.get_incoming().get_node_by_label('result').get_incoming().all_nodes()
[4, 5]

Parameters

function (callable) – The function to decorate.

Returns

The decorated function.

Return type

callable

aiida.engine.workfunction(function)[source]

A decorator to turn a standard python function into a workfunction. Example usage:

>>> from aiida.orm import Int
>>>
>>> # Define the workfunction
>>> @workfunction
... def select(a, b):
...     return a
>>>
>>> # Run it with some input
>>> r = select(Int(4), Int(5))
>>> print(r)
4
>>> r.get_incoming().all()
[Neighbor(link_type='', link_label='result',
node=<WorkFunctionNode: uuid: ce0c63b3-1c84-4bb8-ba64-7b70a36adf34 (pk: 3567)>)]
>>> r.get_incoming().get_node_by_label('result').get_incoming().all_nodes()
[4, 5]

Parameters

function (callable) – The function to decorate.

Returns

The decorated function.

Return type

callable

class aiida.engine.FunctionProcess(*args, **kwargs)[source]

Bases: aiida.engine.processes.process.Process

Function process class used for turning functions into a Process

__abstractmethods__ = frozenset({})
__init__(*args, **kwargs)[source]

Process constructor.

Parameters
  • inputs (dict) – process inputs

  • logger (logging.Logger) – aiida logger

  • runner (aiida.engine.runners.Runner) – process runner

  • parent_pid (int) – id of parent process

  • enable_persistence (bool) – whether to persist this process

__module__ = 'aiida.engine.processes.functions'
_abc_impl = <_abc_data object>
static _func(*_args, **_kwargs)[source]

This is used internally to store the actual function that is being wrapped and will be replaced by the build method.

_func_args = None
_setup_db_record()[source]

Set up the database record for the process.

classmethod args_to_dict(*args)[source]

Create an input dictionary (of form label -> value) from supplied args.

Parameters

args (list) – The values to use for the dictionary

Returns

A label -> value dictionary

Return type

dict

static build(func, node_class)[source]

Build a Process from the given function.

All function arguments will be assigned as process inputs. If keyword arguments are specified then these will also become inputs.

Parameters
  • func – the function to be wrapped

  • node_class – the ProcessNode subclass to use for representing the function execution

Returns

A Process class that represents the function

Return type

FunctionProcess

classmethod create_inputs(*args, **kwargs)[source]

Create the input args for the FunctionProcess.

Return type

dict

execute()[source]

Execute the process.

classmethod get_or_create_db_record()[source]

Create a process node that represents what happened in this process.

Returns

A process node

Return type

aiida.orm.ProcessNode

property process_class

Return the class that represents this Process, for the FunctionProcess this is the function itself.

For a standard Process or sub class of Process, this is the class itself. However, for legacy reasons, the Process class is a wrapper around another class. This function returns that original class, i.e. the class that really represents what was being executed.

Returns

A Process class that represents the function

Return type

FunctionProcess

run()[source]

Run the process.

Return type

aiida.engine.ExitCode

classmethod validate_inputs(*args, **kwargs)[source]

Validate the positional and keyword arguments passed in the function call.

Raises

TypeError – if more positional arguments are passed than the function defines

class aiida.engine.PortNamespace(*args, **kwargs)[source]

Bases: aiida.engine.processes.ports.WithNonDb, plumpy.ports.PortNamespace

Sub class of plumpy.PortNamespace which implements the serialize method to support automatic recursive serialization of a given mapping onto the ports of the PortNamespace.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.engine.processes.ports'
__setitem__(key, port)[source]

Ensure that a Port being added inherits the non_db attribute if not explicitly defined at construction.

The reasoning is that if a PortNamespace has non_db=True, which is different from the default value, very often all leaves should also be non_db=True. To prevent a user from having to specify it manually every time, we overload the value here, unless it was specifically set during construction.

Note that the non_db attribute is not present for all Port sub classes so we have to check for it first.

_abc_impl = <_abc_data object>
serialize(mapping, breadcrumbs=())[source]

Serialize the given mapping onto this PortNamespace.

It will recursively call this function on any nested PortNamespace or the serialize function on any Ports.

Parameters
  • mapping – a mapping of values to be serialized

  • breadcrumbs – a tuple with the namespaces of parent namespaces

Returns

the serialized mapping
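The recursive dispatch of serialize can be sketched with toy classes (an illustrative model; real ports carry validators and the serializer comes from the port definition):

```python
class ToyPort:
    """Leaf port with an optional serializer callable."""
    def __init__(self, serializer=None):
        self.serializer = serializer

    def serialize(self, value):
        # apply the port serializer if one is defined
        return self.serializer(value) if self.serializer else value

class ToyPortNamespace(dict):
    """Maps port names to ToyPort or nested ToyPortNamespace instances."""
    def serialize(self, mapping):
        serialized = {}
        for key, value in mapping.items():
            if key in self:
                # recurse into namespaces, delegate to leaf ports
                serialized[key] = self[key].serialize(value)
            else:
                serialized[key] = value
        return serialized

ports = ToyPortNamespace({
    'x': ToyPort(serializer=int),
    'sub': ToyPortNamespace({'y': ToyPort(serializer=str)}),
})
result = ports.serialize({'x': '4', 'sub': {'y': 5}})
# {'x': 4, 'sub': {'y': '5'}}
```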

static validate_port_name(port_name)[source]

Validate the given port name.

Valid port names adhere to the following restrictions:

  • Is a valid link label (see below)

  • Does not contain two or more consecutive underscores

Valid link labels adhere to the following restrictions:

  • Has to be a valid python identifier

  • Can only contain alphanumeric characters and underscores

  • Can not start or end with an underscore

Parameters

port_name – the proposed name of the port to be added

Raises
  • TypeError – if the port name is not a string type

  • ValueError – if the port name is invalid
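The listed restrictions translate to a handful of checks (a sketch of the documented rules; the real implementation may differ in detail):

```python
import re

def validate_port_name(port_name):
    """Raise TypeError/ValueError if the proposed port name violates the
    restrictions listed above."""
    if not isinstance(port_name, str):
        raise TypeError('port name must be a string')
    if not port_name.isidentifier():
        raise ValueError('port name must be a valid python identifier')
    if not re.fullmatch(r'[A-Za-z0-9_]+', port_name):
        raise ValueError('port name may contain only alphanumerics and underscores')
    if port_name.startswith('_') or port_name.endswith('_'):
        raise ValueError('port name may not start or end with an underscore')
    if '__' in port_name:
        raise ValueError('port name may not contain consecutive underscores')

validate_port_name('total_energy')   # passes silently
```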

class aiida.engine.InputPort(*args, **kwargs)[source]

Bases: aiida.engine.processes.ports.WithSerialize, aiida.engine.processes.ports.WithNonDb, plumpy.ports.InputPort

Sub class of plumpy.InputPort which mixes in the WithSerialize and WithNonDb mixins to support automatic value serialization to database storable types and support non database storable input types as well.

__abstractmethods__ = frozenset({})
__init__(*args, **kwargs)[source]

Override the constructor to check the type of the default if set and warn if not immutable.

__module__ = 'aiida.engine.processes.ports'
_abc_impl = <_abc_data object>
get_description()[source]

Return a description of the InputPort, which will be a dictionary of its attributes

Returns

a dictionary of the stringified InputPort attributes

class aiida.engine.OutputPort(name, valid_type=None, help=None, required=True, validator=None)[source]

Bases: plumpy.ports.Port

__abstractmethods__ = frozenset({})
__module__ = 'plumpy.ports'
_abc_impl = <_abc_data object>
class aiida.engine.CalcJobOutputPort(*args, **kwargs)[source]

Bases: plumpy.ports.OutputPort

Sub class of plumpy.OutputPort which adds the _pass_to_parser attribute.

__abstractmethods__ = frozenset({})
__init__(*args, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'aiida.engine.processes.ports'
_abc_impl = <_abc_data object>
property pass_to_parser
class aiida.engine.WithNonDb(*args, **kwargs)[source]

Bases: object

A mixin that adds support to a port to flag that its value should not be stored in the database, using the non_db=True flag.

The mixins have to go before the main port class in the superclass order to make sure the mixin has the chance to strip out the non_db keyword.
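The MRO requirement can be demonstrated with a toy mixin that pops its keyword before the base class sees it (illustrative names; the real mixin also tracks whether non_db was explicitly set):

```python
class ToyWithNonDb:
    """Mixin: strip the 'non_db' keyword before it reaches the base port."""
    def __init__(self, *args, **kwargs):
        self._non_db = kwargs.pop('non_db', False)
        super().__init__(*args, **kwargs)

    @property
    def non_db(self):
        return self._non_db

class ToyBasePort:
    """Stand-in for the plumpy port base class; knows nothing of non_db."""
    def __init__(self, name):
        self.name = name

class ToyInputPort(ToyWithNonDb, ToyBasePort):
    # mixin first in the MRO, so ToyBasePort never sees `non_db`
    pass

port = ToyInputPort('settings', non_db=True)
```

With the base class first instead, `ToyBasePort.__init__` would receive the unexpected `non_db` keyword and raise a TypeError.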

__init__(*args, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'aiida.engine.processes.ports'
__weakref__

list of weak references to the object (if defined)

property non_db

Return whether the value of this Port should not be stored as a Node in the database.

Returns

boolean, True if the value should not be stored in the database, False otherwise

property non_db_explicitly_set

Return whether a value for non_db was explicitly passed in the construction of the Port.

Returns

boolean, True if non_db was explicitly defined during construction, False otherwise

class aiida.engine.WithSerialize(*args, **kwargs)[source]

Bases: object

A mixin that adds support for a serialization function which is automatically applied on inputs that are not AiiDA data types.

__init__(*args, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'aiida.engine.processes.ports'
__weakref__

list of weak references to the object (if defined)

serialize(value)[source]

Serialize the given value if it is not already a Data type and a serializer function is defined

Parameters

value – the value to be serialized

Returns

a serialized version of the value or the unchanged value

class aiida.engine.Process(inputs=None, logger=None, runner=None, parent_pid=None, enable_persistence=True)[source]

Bases: plumpy.processes.Process

This class represents an AiiDA process which can be executed and will have full provenance saved in the database.

SINGLE_OUTPUT_LINKNAME = 'result'
class SaveKeys[source]

Bases: enum.Enum

Keys used to identify things in the saved instance state bundle.

CALC_ID = 'calc_id'
__module__ = 'aiida.engine.processes.process'
__abstractmethods__ = frozenset({})
__init__(inputs=None, logger=None, runner=None, parent_pid=None, enable_persistence=True)[source]

Process constructor.

Parameters
  • inputs (dict) – process inputs

  • logger (logging.Logger) – aiida logger

  • runner (aiida.engine.runners.Runner) – process runner

  • parent_pid (int) – id of parent process

  • enable_persistence (bool) – whether to persist this process

__module__ = 'aiida.engine.processes.process'
_abc_impl = <_abc_data object>
_auto_persist = {'_CREATION_TIME', '_enable_persistence', '_future', '_parent_pid', '_paused', '_pid', '_pre_paused_status', '_status'}
_create_and_setup_db_record()[source]

Create and setup the database record for this process

Returns

the uuid of the process

Return type

uuid.UUID

_flat_inputs()[source]

Return a flattened version of the parsed inputs dictionary.

The eventual keys will be a concatenation of the nested keys. Note that the metadata dictionary, if present, is not passed, as those are dealt with separately in _setup_metadata.

Returns

flat dictionary of parsed inputs

Return type

dict

_flat_outputs()[source]

Return a flattened version of the registered outputs dictionary.

The eventual keys will be a concatenation of the nested keys.

Returns

flat dictionary of parsed outputs

_flatten_inputs(port, port_value, parent_name='', separator='__')[source]

Function that will recursively flatten the inputs dictionary, omitting inputs for ports that are marked as being non database storable

Parameters
  • port (plumpy.ports.Port) – port against which to map the port value, can be InputPort or PortNamespace

  • port_value – value for the current port, can be a Mapping

  • parent_name (str) – the parent key with which to prefix the keys

  • separator (str) – character to use for the concatenation of keys

Returns

flat list of inputs

Return type

list
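The key concatenation with the default '__' separator can be sketched as follows (simplified: no port objects and no filtering of non database storable inputs):

```python
from collections.abc import Mapping

def flatten(mapping, parent_name='', separator='__'):
    """Concatenate nested keys with `separator`, returning (key, value) pairs."""
    items = []
    for key, value in mapping.items():
        name = parent_name + separator + key if parent_name else key
        if isinstance(value, Mapping):
            # recurse into nested namespaces
            items.extend(flatten(value, name, separator))
        else:
            items.append((name, value))
    return items

flat = flatten({'code': 'pw.x', 'parameters': {'scf': {'kpoints': 4}}})
# [('code', 'pw.x'), ('parameters__scf__kpoints', 4)]
```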

_flatten_outputs(port, port_value, parent_name='', separator='__')[source]

Function that will recursively flatten the outputs dictionary.

Parameters
  • port (plumpy.ports.Port) – port against which to map the port value, can be OutputPort or PortNamespace

  • port_value – value for the current port, can be a Mapping

  • parent_name (str) – the parent key with which to prefix the keys

  • separator (str) – character to use for the concatenation of keys

Returns

flat list of outputs

Return type

list

static _get_namespace_list(namespace=None, agglomerate=True)[source]

Get the list of namespaces in a given namespace.

Parameters
  • namespace (str) – name space

  • agglomerate (bool) – If set to true, all parent namespaces of the given namespace will also be searched.

Returns

namespace list

Return type

list

_node_class

alias of aiida.orm.nodes.process.process.ProcessNode

_save_checkpoint()[source]

Save the current state in a checkpoint if persistence is enabled and the process state is not terminal

If the persistence call excepts with a PersistenceError, it will be caught and a warning will be logged.

_setup_db_record()[source]

Create the database record for this process and the links with respect to its inputs

This function will set various attributes on the node that serve as a proxy for attributes of the Process. This is essential as otherwise this information could only be introspected through the Process itself, which is only available to the interpreter that has it in memory. To make this data introspectable from any interpreter, for example for the command line interface, certain Process attributes are proxied through the calculation node.

In addition, the parent calculation will be setup with a CALL link if applicable and all inputs will be linked up as well.

_setup_inputs()[source]

Create the links between the input nodes and the ProcessNode that represents this process.

_setup_metadata()[source]

Store the metadata on the ProcessNode.

_spec_class

alias of aiida.engine.processes.process_spec.ProcessSpec

classmethod build_process_type()[source]

The process type.

Returns

string of the process type

Return type

str

Note: This could be made into a property ‘process_type’ but in order to have it be a property of the class it would need to be defined in the metaclass, see https://bugs.python.org/issue20659

decode_input_args(encoded)[source]

Decode saved input arguments as they came from the saved instance state Bundle

Parameters

encoded – encoded (serialized) inputs

Returns

The decoded input args

classmethod define(spec)[source]
encode_input_args(inputs)[source]

Encode input arguments such that they may be saved in a Bundle

Parameters

inputs – A mapping of the inputs as passed to the process

Returns

The encoded (serialized) inputs

exit_codes = {'ERROR_INVALID_OUTPUT': ExitCode(status=10, message='The process returned an invalid output.', invalidates_cache=False), 'ERROR_LEGACY_FAILURE': ExitCode(status=2, message='The process failed with legacy failure mode.', invalidates_cache=False), 'ERROR_MISSING_OUTPUT': ExitCode(status=11, message='The process did not register a required output.', invalidates_cache=False), 'ERROR_UNSPECIFIED': ExitCode(status=1, message='The process has failed with an unspecified error.', invalidates_cache=False)}
exposed_inputs(process_class, namespace=None, agglomerate=True)[source]

Gather a dictionary of the inputs that were exposed for a given Process class under an optional namespace.

Parameters
  • process_class (aiida.engine.Process) – Process class whose inputs to try and retrieve

  • namespace (str) – PortNamespace in which to look for the inputs

  • agglomerate (bool) – If set to true, all parent namespaces of the given namespace will also be searched for inputs. Inputs in lower-lying namespaces take precedence.

Returns

exposed inputs

Return type

dict

exposed_outputs(node, process_class, namespace=None, agglomerate=True)[source]

Return the outputs which were exposed from the process_class and emitted by the specific node

Parameters
  • node (aiida.orm.nodes.process.ProcessNode) – process node whose outputs to try and retrieve

  • namespace (str) – Namespace in which to search for exposed outputs.

  • agglomerate (bool) – If set to true, all parent namespaces of the given namespace will also be searched for outputs. Outputs in lower-lying namespaces take precedence.

Returns

exposed outputs

Return type

dict

classmethod get_builder()[source]
classmethod get_exit_statuses(exit_code_labels)[source]

Return the exit statuses (integers) for the given exit code labels.

Parameters

exit_code_labels – a list of strings that reference exit code labels of this process class

Returns

list of exit status integers that correspond to the given exit code labels

Raises

AttributeError – if at least one of the labels does not correspond to an existing exit code

classmethod get_or_create_db_record()[source]

Create a process node that represents what happened in this process.

Returns

A process node

Return type

aiida.orm.ProcessNode

get_parent_calc()[source]

Get the parent process node

Returns

the parent process node if there is one

Return type

aiida.orm.ProcessNode

get_provenance_inputs_iterator()[source]

Get provenance input iterator.

Return type

filter

init()[source]
classmethod is_valid_cache(node)[source]

Check if the given node can be cached from.

Warning

When overriding this method, make sure to call super().is_valid_cache(node) and respect its output. Otherwise, the ‘invalidates_cache’ keyword on exit codes will not work.

This method allows extending the behavior of ProcessNode.is_valid_cache from Process sub-classes, for example in plug-ins.

kill(msg=None)[source]

Kill the process and all the children calculations it called

Parameters

msg (str) – message

Return type

bool

load_instance_state(saved_state, load_context)[source]

Load instance state.

Parameters
  • saved_state – saved instance state

  • load_context (plumpy.persistence.LoadSaveContext) –

property metadata

Return the metadata that were specified when this process instance was launched.

Returns

metadata dictionary

Return type

dict

property node

Return the ProcessNode used by this process to represent itself in the database.

Returns

instance of sub class of ProcessNode

Return type

aiida.orm.ProcessNode

on_create()[source]

Called when a Process is created.

on_entered(from_state)[source]
on_entering(state)[source]
on_except(exc_info)[source]

Log the exception by calling the report method with formatted stack trace from exception info object and store the exception string as a node attribute

Parameters

exc_info – the sys.exc_info() object (type, value, traceback)

on_finish(result, successful)[source]

Set the finish status on the process node.

Parameters
  • result – the result of the process

  • successful (bool) – whether the execution was successful

on_output_emitting(output_port, value)[source]

The process has emitted a value on the given output port.

Parameters
  • output_port (str) – The output port name the value was emitted on

  • value – The value emitted

on_paused(msg=None)[source]

The Process was paused so set the paused attribute on the process node

Parameters

msg (str) – message

on_playing()[source]

The Process was unpaused so remove the paused attribute on the process node

on_terminated()[source]

Called when a Process enters a terminal state.

out(output_port, value=None)[source]

Attach output to output port.

The name of the port will be used as the link label.

Parameters
  • output_port (str) – name of output port

  • value – value to put inside output port

out_many(out_dict)[source]

Attach outputs to multiple output ports.

Keys of the dictionary will be used as output port names, values as outputs.

Parameters

out_dict (dict) – output dictionary

report(msg, *args, **kwargs)[source]

Log a message to the logger, which should get saved to the database through the attached DbLogHandler.

The pk, class name and function name of the caller are prepended to the given message

Parameters
  • msg (str) – message to log

  • args (list) – args to pass to the log call

  • kwargs (dict) – kwargs to pass to the log call

property runner

Get process runner.

Return type

aiida.engine.runners.Runner

save_instance_state(out_state, save_context)[source]

Save instance state.

See documentation of plumpy.processes.Process.save_instance_state().

set_status(status)[source]

The status of the Process is about to be changed, so we reflect this in the node’s attribute proxy.

Parameters

status (str) – the status message

spec_metadata = <aiida.engine.processes.ports.PortNamespace object>
submit(process, *args, **kwargs)[source]

Submit process for execution.

Parameters

process (aiida.engine.Process) – process

update_node_state(state)[source]
update_outputs()[source]

Attach new outputs to the node since the last call.

Does nothing, if self.metadata.store_provenance is False.

property uuid

Return the UUID of the process which corresponds to the UUID of its associated ProcessNode.

Returns

the UUID associated to this process instance

class aiida.engine.ProcessState[source]

Bases: enum.Enum

The possible states that a Process can be in.

CREATED = 'created'
EXCEPTED = 'excepted'
FINISHED = 'finished'
KILLED = 'killed'
RUNNING = 'running'
WAITING = 'waiting'
__module__ = 'plumpy.process_states'
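Since the members carry plain lowercase string values, they can be reconstructed from a stored string. The following self-contained sketch re-declares the enum (rather than importing aiida, so it runs anywhere) and illustrates the common pattern of distinguishing terminal states; which states count as terminal is an assumption of this sketch, not stated above:

```python
import enum

# Stand-in mirroring aiida.engine.ProcessState above; in real code you
# would import ProcessState from aiida.engine instead.
class ProcessState(enum.Enum):
    CREATED = 'created'
    RUNNING = 'running'
    WAITING = 'waiting'
    FINISHED = 'finished'
    EXCEPTED = 'excepted'
    KILLED = 'killed'

# Assumed terminal states: a process in one of these never changes state again.
TERMINAL_STATES = {ProcessState.FINISHED, ProcessState.EXCEPTED, ProcessState.KILLED}

def is_terminal(state):
    return state in TERMINAL_STATES

print(is_terminal(ProcessState.RUNNING))   # False
print(is_terminal(ProcessState.FINISHED))  # True
```

Because the values are strings, `ProcessState('finished')` recovers the member from a value stored in the database.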
aiida.engine.ToContext

alias of builtins.dict

aiida.engine.assign_(target)[source]

Convenience function that will construct an Awaitable for a given class instance with the context action set to ASSIGN. When the awaitable target is completed, it will be assigned to the context under a key that is to be defined later.

Parameters

target – an instance of a Process or Awaitable

Returns

the awaitable

Return type

Awaitable

aiida.engine.append_(target)[source]

Convenience function that will construct an Awaitable for a given class instance with the context action set to APPEND. When the awaitable target is completed, it will be appended to a list in the context under a key that is to be defined later.

Parameters

target – an instance of a Process or Awaitable

Returns

the awaitable

Return type

Awaitable
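Both helpers are typically used from within a work chain step, combined with ToContext, to choose how the terminated node lands in the context. A minimal sketch (SomeProcess is a placeholder for any Process class; requires a configured AiiDA profile, so it is not runnable stand-alone):

```python
from aiida.engine import ToContext, append_, assign_

def submit_processes(self):  # an outline step of some WorkChain subclass
    node_a = self.submit(SomeProcess)  # SomeProcess is a placeholder
    node_b = self.submit(SomeProcess)
    # When the sub processes terminate, node_a will be assigned to
    # self.ctx.single, while node_b will be appended to the list at
    # self.ctx.children.
    return ToContext(single=assign_(node_a), children=append_(node_b))
```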

class aiida.engine.BaseRestartWorkChain(*args, **kwargs)[source]

Bases: aiida.engine.processes.workchains.workchain.WorkChain

Base restart work chain.

This work chain serves as the starting point for more complex work chains that will be designed to run a sub process that might need multiple restarts to come to a successful end. These restarts may be necessary because a single process run is not sufficient to achieve a fully converged result, or because certain recoverable errors may be encountered.

This work chain implements the most basic functionality to achieve this goal. It will launch the sub process, restarting until it is completed successfully or the maximum number of iterations is reached. After completion of the sub process it will be inspected, and a list of process handlers are called successively. These process handlers are defined as class methods that are decorated with process_handler().

The idea is to subclass this work chain and leverage the generic error handling that is implemented in the few outline methods. The minimally required outline would look something like the following:

cls.setup
while_(cls.should_run_process)(
    cls.run_process,
    cls.inspect_process,
)

Each of these methods can of course be overridden, but they should be general enough to fit most process cycles. The run_process method will take the inputs for the process from the context under the key inputs. The user should therefore make sure that, before the run_process method is called, the inputs to be used are stored under self.ctx.inputs. One can update the inputs based on the results from a prior process by calling an outline method just before the run_process step, for example:

cls.setup
while_(cls.should_run_process)(
    cls.prepare_inputs,
    cls.run_process,
    cls.inspect_process,
)

where the prepare_inputs method updates the inputs dictionary at self.ctx.inputs before the next process is run with those inputs.

The _process_class attribute should be set to the Process class that should be run in the loop. Finally, to define handlers that will be called during the inspect_process, simply define an instance method with the signature (self, node) and decorate it with the process_handler decorator, for example:

@process_handler
def handle_problem(self, node):
    if some_problem:
        self.ctx.inputs = improved_inputs
        return ProcessHandlerReport()

The process_handler and ProcessHandlerReport support various arguments to control the flow of the logic of the inspect_process. Refer to their respective documentation for details.
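Putting this together, a minimal subclass could look like the following sketch. SomeCalculation, the calc namespace and the handler’s condition are hypothetical placeholders, not part of the API documented here, and running it requires a configured AiiDA profile:

```python
from aiida.engine import BaseRestartWorkChain, ProcessHandlerReport, process_handler, while_

class SomeRestartWorkChain(BaseRestartWorkChain):

    _process_class = SomeCalculation  # placeholder: the Process class to run in the loop

    @classmethod
    def define(cls, spec):
        super().define(spec)
        spec.expose_inputs(SomeCalculation, namespace='calc')
        spec.outline(
            cls.setup,
            while_(cls.should_run_process)(
                cls.prepare_inputs,
                cls.run_process,
                cls.inspect_process,
            ),
            cls.results,
        )

    def prepare_inputs(self):
        # run_process takes its inputs from self.ctx.inputs
        self.ctx.inputs = self.exposed_inputs(SomeCalculation, 'calc')

    @process_handler
    def handle_failure(self, node):
        if node.is_failed:  # placeholder condition for a recoverable problem
            # ... adjust self.ctx.inputs here (hypothetical corrective action) ...
            return ProcessHandlerReport()
```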

__abstractmethods__ = frozenset({})
__init__(*args, **kwargs)[source]

Construct the instance.

__module__ = 'aiida.engine.processes.workchains.restart'
_abc_impl = <_abc_data object>
_considered_handlers_extra = 'considered_handlers'
_process_class = None
_wrap_bare_dict_inputs(port_namespace, inputs)[source]

Wrap bare dictionaries in inputs in a Dict node if dictated by the corresponding inputs portnamespace.

Parameters
  • port_namespace – a PortNamespace

  • inputs – a dictionary of inputs intended for submission of the process

Returns

an attribute dictionary with all bare dictionaries wrapped in Dict if dictated by the port namespace

classmethod define(spec)[source]

Define the process specification.

classmethod get_process_handlers()[source]
inspect_process()[source]

Analyse the results of the previous process and call the handlers when necessary.

If the process is excepted or killed, the work chain will abort. Otherwise any attached handlers will be called in order of their specified priority. If the process failed and no handler returns a report indicating that the error was handled, it is considered an unhandled process failure and the process is relaunched. If this happens twice in a row, the work chain is aborted. In the case that at least one handler returned a report, the following matrix determines the logic that is followed:

Process result   Handler report?   Handler exit code   Action
--------------   ---------------   -----------------   -------
Success          yes               == 0                Restart
Success          yes               != 0                Abort
Failed           yes               == 0                Restart
Failed           yes               != 0                Abort

If no handler returned a report and the process finished successfully, the work chain’s work is considered done and it will move on to the next step that directly follows the while conditional, if there is one defined in the outline.

classmethod is_process_handler(process_handler_name)[source]

Return whether the given method name corresponds to a process handler of this class.

Parameters

process_handler_name – string name of the instance method

Returns

boolean, True if corresponds to process handler, False otherwise

on_terminated()[source]

Clean the working directories of all child calculation jobs if clean_workdir=True in the inputs.

results()[source]

Attach the outputs specified in the output specification from the last completed process.

run_process()[source]

Run the next process, taking the input dictionary from the context at self.ctx.inputs.

setup()[source]

Initialize context variables that are used during the logical flow of the BaseRestartWorkChain.

should_run_process()[source]

Return whether a new process should be run.

This is the case as long as the last process has not finished successfully and the maximum number of restarts has not yet been exceeded.

class aiida.engine.ProcessHandlerReport(do_break, exit_code)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__module__ = 'aiida.engine.processes.workchains.utils'
static __new__(_cls, do_break=False, exit_code=ExitCode(status=0, message=None, invalidates_cache=False))

Create new instance of ProcessHandlerReport(do_break, exit_code)

__repr__()

Return a nicely formatted representation string

__slots__ = ()
_asdict()

Return a new OrderedDict which maps field names to their values.

_fields = ('do_break', 'exit_code')
_fields_defaults = {}
classmethod _make(iterable)

Make a new ProcessHandlerReport object from a sequence or iterable

_replace(**kwds)

Return a new ProcessHandlerReport object replacing specified fields with new values

property do_break

Alias for field number 0

property exit_code

Alias for field number 1
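Since ProcessHandlerReport is a plain named tuple with the defaults shown in __new__ above, its behaviour can be sketched stand-alone; the tuples are re-declared here instead of importing aiida so the snippet runs anywhere:

```python
from collections import namedtuple

# Stand-ins mirroring the fields and defaults documented above.
ExitCode = namedtuple('ExitCode', ['status', 'message', 'invalidates_cache'])
ExitCode.__new__.__defaults__ = (0, None, False)

ProcessHandlerReport = namedtuple('ProcessHandlerReport', ['do_break', 'exit_code'])
ProcessHandlerReport.__new__.__defaults__ = (False, ExitCode())

report = ProcessHandlerReport(do_break=True)
print(report.do_break)          # True
print(report.exit_code.status)  # 0; a zero status means "restart" in inspect_process
```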

aiida.engine.process_handler(wrapped=None, *, priority=0, exit_codes=None, enabled=True)[source]

Decorator to register a BaseRestartWorkChain instance method as a process handler.

The decorator will validate the priority and exit_codes optional keyword arguments and then add itself as an attribute to the wrapped instance method. This is used in the inspect_process to return all instance methods of the class that have been decorated by this function and therefore are considered to be process handlers.

Requirements on the function signature of process handling functions: the function to which the decorator is applied needs to take two arguments:

  • self: This is the instance of the work chain itself

  • node: This is the process node that finished and is to be investigated

The function body typically consists of a conditional that will check for a particular problem that might have occurred for the sub process. If a particular problem is handled, the process handler should return an instance of the aiida.engine.ProcessHandlerReport tuple. If no other process handlers should be considered, the do_break attribute of the report should be set to True. If the work chain is to be aborted entirely, the exit_code of the report can be set to an ExitCode instance with a non-zero status.

Parameters
  • wrapped – the work chain instance method to register as a process handler

  • priority – optional integer that defines the order in which registered handlers will be called during the handling of a finished process. Higher priorities will be handled first. Default value is 0. Multiple handlers with the same priority is allowed, but the order of those is not well defined.

  • exit_codes – single or list of ExitCode instances. If defined, the handler will return None if the exit code set on the node does not appear in the exit_codes. This is useful to have a handler called only when the process failed with a specific exit code.

  • enabled – boolean, by default True, which will cause the handler to be called during inspect_process. When set to False, the handler will be skipped. This static value can be overridden on a per work chain instance basis through the input handler_overrides.
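For instance, within a BaseRestartWorkChain subclass, handlers using these keyword arguments might look as follows. SomeCalculation, its ERROR_OUT_OF_WALLTIME exit code and the corrective actions are hypothetical names used only for illustration:

```python
from aiida.engine import BaseRestartWorkChain, ExitCode, ProcessHandlerReport, process_handler

class SomeRestartWorkChain(BaseRestartWorkChain):
    # _process_class, define(), etc. omitted for brevity

    # Called first (higher priority), and only when the node failed with
    # the given exit code; otherwise this handler is skipped.
    @process_handler(priority=410, exit_codes=SomeCalculation.exit_codes.ERROR_OUT_OF_WALLTIME)
    def handle_out_of_walltime(self, node):
        self.ctx.inputs.metadata.options.max_wallclock_seconds *= 2  # hypothetical fix
        return ProcessHandlerReport(do_break=True)  # skip lower-priority handlers, restart

    # Disabled by default; can be switched on per instance through the
    # handler_overrides input.
    @process_handler(priority=400, enabled=False)
    def handle_anything(self, node):
        return ProcessHandlerReport(True, ExitCode(450, 'unrecoverable'))  # non-zero status: abort
```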

class aiida.engine.WorkChain(inputs=None, logger=None, runner=None, enable_persistence=True)[source]

Bases: aiida.engine.processes.process.Process

The WorkChain class is the principal component to implement workflows in AiiDA.

_CONTEXT = 'CONTEXT'
_STEPPER_STATE = 'stepper_state'
__abstractmethods__ = frozenset({})
__init__(inputs=None, logger=None, runner=None, enable_persistence=True)[source]

Construct a WorkChain instance.

Construct the instance only if it is a sub class of WorkChain, otherwise raise InvalidOperation.

Parameters
  • inputs (dict) – work chain inputs

  • logger (logging.Logger) – aiida logger

  • runner – work chain runner

  • enable_persistence (bool) – whether to persist this work chain

Type

aiida.engine.runners.Runner

__module__ = 'aiida.engine.processes.workchains.workchain'
_abc_impl = <_abc_data object>
_auto_persist = {'_CREATION_TIME', '_awaitables', '_enable_persistence', '_future', '_parent_pid', '_paused', '_pid', '_pre_paused_status', '_status'}
_do_step()[source]

Execute the next step in the outline and return the result.

If the stepper returns a non-finished status and the return value is of type ToContext, the contents of the ToContext container will be turned into awaitables if necessary. If any awaitables were created, the process will enter the Wait state, otherwise it will go to Continue. When the stepper returns that it is done, the stepper result will be converted to None and returned, unless it is an integer or an instance of ExitCode.

_node_class

alias of aiida.orm.nodes.process.workflow.workchain.WorkChainNode

_spec_class

alias of WorkChainSpec

_store_nodes(data)[source]

Recurse through a data structure and store any unstored nodes that are found along the way

Parameters

data – a data structure potentially containing unstored nodes

_update_process_status()[source]

Set the process status with a message accounting the current sub processes that we are waiting for.

action_awaitables()[source]

Handle the awaitables that are currently registered with the work chain

Depending on the class type of the awaitable’s target, a different callback function will be bound to the awaitable, and the runner will be asked to call it when the target is completed.

property ctx

Get context.

Return type

aiida.common.extendeddicts.AttributeDict

insert_awaitable(awaitable)[source]

Insert an awaitable that should be terminated before continuing to the next step.

Parameters

awaitable (aiida.engine.processes.workchains.awaitable.Awaitable) – the thing to await

load_instance_state(saved_state, load_context)[source]

Load instance state.

Parameters
  • saved_state – saved instance state

  • load_context (plumpy.persistence.LoadSaveContext) –

on_exiting()[source]

Ensure that any unstored nodes in the context are stored, before the state is exited

After the state is exited the next state will be entered and if persistence is enabled, a checkpoint will be saved. If the context contains unstored nodes, the serialization necessary for checkpointing will fail.

on_process_finished(awaitable, pk)[source]

Callback function called by the runner when the process instance identified by pk is completed.

The awaitable will be effectuated on the context of the work chain and removed from the internal list. If all awaitables have been dealt with, the work chain process is resumed.

Parameters
  • awaitable – an Awaitable instance

  • pk (int) – the pk of the awaitable’s target

on_run()[source]
on_wait(awaitables)[source]
remove_awaitable(awaitable)[source]

Remove an awaitable.

Precondition: must be an awaitable that was previously inserted.

Parameters

awaitable – the awaitable to remove

run()[source]
save_instance_state(out_state, save_context)[source]

Save instance state.

Parameters
  • out_state – state to save in

  • save_context (plumpy.persistence.LoadSaveContext) –

to_context(**kwargs)[source]

Add a dictionary of awaitables to the context.

This is a convenience method that provides syntactic sugar, for a user to add multiple intersteps that will assign a certain value to the corresponding key in the context of the work chain.
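A minimal sketch of to_context in use; SomeProcess is a placeholder for any Process class and the example assumes a configured AiiDA profile:

```python
from aiida.engine import WorkChain, append_

class SomeWorkChain(WorkChain):
    # spec definition omitted for brevity

    def submit_children(self):  # an outline step
        node = self.submit(SomeProcess)  # SomeProcess is a placeholder
        # Assign the terminated node to self.ctx.child; equivalently one
        # could write: return ToContext(child=node)
        self.to_context(child=node)
        # Append another one to the list at self.ctx.children.
        self.to_context(children=append_(self.submit(SomeProcess)))

    def inspect_children(self):
        # Both context entries are available once the awaitables completed.
        return self.ctx.child, self.ctx.children
```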

aiida.engine.if_(condition)[source]

A conditional that can be used in a workchain outline.

Use as:

if_(cls.conditional)(
  cls.step1,
  cls.step2
)

Each step can, of course, also be any valid workchain step e.g. conditional.

Parameters

condition – The workchain method that will return True or False

aiida.engine.while_(condition)[source]

A while loop that can be used in a workchain outline.

Use as:

while_(cls.conditional)(
  cls.step1,
  cls.step2
)

Each step can, of course, also be any valid workchain step e.g. conditional.

Parameters

condition – The workchain method that will return True or False

aiida.engine.interruptable_task(coro, loop=None)[source]

Turn the given coroutine into an interruptable task by turning it into an InterruptableFuture and returning it.

Parameters
  • coro – the coroutine that should be made interruptable

  • loop – the event loop in which to run the coroutine, by default uses tornado.ioloop.IOLoop.current()

Returns

an InterruptableFuture

class aiida.engine.InterruptableFuture[source]

Bases: tornado.concurrent.Future

A future that can be interrupted by calling interrupt.

__module__ = 'aiida.engine.utils'
interrupt(reason)[source]

This method should be called to interrupt the coroutine represented by this InterruptableFuture.

with_interrupt(yieldable)[source]

Yield a yieldable which will be interrupted if this future is interrupted:

from tornado import ioloop, gen
loop = ioloop.IOLoop.current()

interruptable = InterruptableFuture()
loop.add_callback(interruptable.interrupt, RuntimeError("STOP"))
loop.run_sync(lambda: interruptable.with_interrupt(gen.sleep(2)))
# raises RuntimeError: STOP
Parameters

yieldable – The yieldable

Returns

The result of the yieldable

aiida.engine.is_process_function(function)[source]

Return whether the given function is a process function

Parameters

function – a function

Returns

True if the function is a wrapped process function, False otherwise

Submodules

Exceptions that can be thrown by parts of the workflow engine.

exception aiida.engine.exceptions.PastException[source]

Bases: aiida.common.exceptions.AiidaException

Raised when an attempt is made to continue a Process that has already excepted before.

__module__ = 'aiida.engine.exceptions'

Top level functions that can be used to launch a Process.

aiida.engine.launch.run(process, *args, **inputs)[source]

Run the process with the supplied inputs in a local runner that will block until the process is completed.

Parameters
  • process (aiida.engine.Process) – the process class or process function to run

  • inputs (dict) – the inputs to be passed to the process

Returns

the outputs of the process

Return type

dict

aiida.engine.launch.run_get_pk(process, *args, **inputs)[source]

Run the process with the supplied inputs in a local runner that will block until the process is completed.

Parameters
  • process (aiida.engine.Process) – the process class or process function to run

  • inputs (dict) – the inputs to be passed to the process

Returns

tuple of the outputs of the process and process node pk

Return type

(dict, int)

aiida.engine.launch.run_get_node(process, *args, **inputs)[source]

Run the process with the supplied inputs in a local runner that will block until the process is completed.

Parameters
  • process (aiida.engine.Process) – the process class or process function to run

  • inputs (dict) – the inputs to be passed to the process

Returns

tuple of the outputs of the process and the process node

Return type

(dict, aiida.orm.ProcessNode)

aiida.engine.launch.submit(process, **inputs)[source]

Submit the process with the supplied inputs to the daemon immediately returning control to the interpreter.

Parameters
  • process (aiida.engine.Process) – the process class to submit

  • inputs (dict) – the inputs to be passed to the process

Returns

the calculation node of the process

Return type

aiida.orm.ProcessNode
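Typical usage of the launchers above; SomeWorkChain and its inputs are placeholders, and a configured AiiDA profile (plus, for submit, a running daemon) is assumed:

```python
from aiida import orm
from aiida.engine import run, run_get_node, run_get_pk, submit

inputs = {'x': orm.Int(1), 'y': orm.Int(2)}  # hypothetical inputs

results = run(SomeWorkChain, **inputs)                 # blocks; returns dict of outputs
results, node = run_get_node(SomeWorkChain, **inputs)  # outputs plus the ProcessNode
results, pk = run_get_pk(SomeWorkChain, **inputs)      # outputs plus the node's pk

node = submit(SomeWorkChain, **inputs)                 # returns immediately; daemon runs it
```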

Definition of AiiDA’s process persister and the necessary object loaders.

class aiida.engine.persistence.AiiDAPersister[source]

Bases: plumpy.persistence.Persister

Persister to take saved process instance states and persist them to the database.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.engine.persistence'
_abc_impl = <_abc_data object>
delete_checkpoint(pid, tag=None)[source]

Delete a persisted process checkpoint, where no error will be raised if the checkpoint does not exist.

Parameters
  • pid – the process id of the plumpy.Process

  • tag – optional checkpoint identifier to allow retrieving a specific sub checkpoint

delete_process_checkpoints(pid)[source]

Delete all persisted checkpoints related to the given process id.

Parameters

pid – the process id of the aiida.engine.processes.process.Process

get_checkpoints()[source]

Return a list of all the current persisted process checkpoints

Returns

list of PersistedCheckpoint tuples, with each element containing the process id and optional checkpoint tag.

get_process_checkpoints(pid)[source]

Return a list of all the current persisted process checkpoints for the specified process.

Parameters

pid – the process pid

Returns

list of PersistedCheckpoint tuples, with each element containing the process id and optional checkpoint tag.

load_checkpoint(pid, tag=None)[source]

Load a process from a persisted checkpoint by its process id.

Parameters
  • pid – the process id of the plumpy.Process

  • tag – optional checkpoint identifier to allow retrieving a specific sub checkpoint

Returns

a bundle with the process state

Return type

plumpy.Bundle

Raises

plumpy.PersistenceError – if there was a problem loading the checkpoint

save_checkpoint(process, tag=None)[source]

Persist a Process instance.

Parameters
  • process (aiida.engine.Process) – the process instance to persist

  • tag – optional checkpoint identifier to allow distinguishing multiple checkpoints for the same process

Raises

plumpy.PersistenceError – if there was a problem saving the checkpoint

class aiida.engine.persistence.ObjectLoader[source]

Bases: plumpy.loaders.DefaultObjectLoader

Custom object loader for aiida-core.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.engine.persistence'
_abc_impl = <_abc_data object>
load_object(identifier)[source]

Attempt to load the object identified by the given identifier.

Note

We override the plumpy.DefaultObjectLoader to be able to throw an ImportError instead of a ValueError which in the context of aiida-core is not as apt, since we are loading classes.

Parameters

identifier – concatenation of module and resource name

Returns

loaded object

Raises

ImportError – if the object cannot be loaded

aiida.engine.persistence.get_object_loader()[source]

Return the global AiiDA object loader.

Returns

The global object loader

Return type

plumpy.ObjectLoader

Runners that can run and submit processes.

class aiida.engine.runners.Runner(poll_interval=0, loop=None, communicator=None, rmq_submit=False, persister=None)[source]

Bases: object

Class that can launch processes by running in the current interpreter or by submitting them to the daemon.

__dict__ = mappingproxy({'__module__': 'aiida.engine.runners', '__doc__': 'Class that can launch processes by running in the current interpreter or by submitting them to the daemon.', '_persister': None, '_communicator': None, '_controller': None, '_closed': False, '__init__': <function Runner.__init__>, '__enter__': <function Runner.__enter__>, '__exit__': <function Runner.__exit__>, 'loop': <property object>, 'transport': <property object>, 'persister': <property object>, 'communicator': <property object>, 'plugin_version_provider': <property object>, 'job_manager': <property object>, 'controller': <property object>, 'is_daemon_runner': <property object>, 'is_closed': <function Runner.is_closed>, 'start': <function Runner.start>, 'stop': <function Runner.stop>, 'run_until_complete': <function Runner.run_until_complete>, 'close': <function Runner.close>, 'instantiate_process': <function Runner.instantiate_process>, 'submit': <function Runner.submit>, 'schedule': <function Runner.schedule>, '_run': <function Runner._run>, 'run': <function Runner.run>, 'run_get_node': <function Runner.run_get_node>, 'run_get_pk': <function Runner.run_get_pk>, 'call_on_calculation_finish': <function Runner.call_on_calculation_finish>, 'get_calculation_future': <function Runner.get_calculation_future>, '_poll_calculation': <function Runner._poll_calculation>, '__dict__': <attribute '__dict__' of 'Runner' objects>, '__weakref__': <attribute '__weakref__' of 'Runner' objects>})
__enter__()[source]
__exit__(exc_type, exc_val, exc_tb)[source]
__init__(poll_interval=0, loop=None, communicator=None, rmq_submit=False, persister=None)[source]

Construct a new runner

Parameters
  • poll_interval – interval in seconds between polling for status of active calculations

  • loop (tornado.ioloop.IOLoop) – an event loop to use; if none is supplied a new one will be created

  • communicator (kiwipy.Communicator) – the communicator to use

  • rmq_submit – if True, processes will be submitted to RabbitMQ, otherwise they will be scheduled here

  • persister (plumpy.Persister) – the persister to use to persist processes

__module__ = 'aiida.engine.runners'
__weakref__

list of weak references to the object (if defined)

_closed = False
_communicator = None
_controller = None
_persister = None
_poll_calculation(calc_node, callback)[source]
_run(process, *args, **inputs)[source]

Run the process with the supplied inputs in this runner, blocking until the process is completed. The return value will be the results of the completed process.

Parameters
  • process – the process class or process function to run

  • inputs – the inputs to be passed to the process

Returns

tuple of the outputs of the process and the calculation node

call_on_calculation_finish(pk, callback)[source]

Callback to be called when the calculation of the given pk is terminated

Parameters
  • pk – the pk of the calculation

  • callback – the function to be called upon calculation termination

close()[source]

Close the runner by stopping the loop.

property communicator

Get the communicator used by this runner

Returns

the communicator

Return type

kiwipy.Communicator

property controller
get_calculation_future(pk)[source]

Get a future for an orm Calculation. The future will have the calculation node as the result when finished.

Returns

A future representing the completion of the calculation node

instantiate_process(process, *args, **inputs)[source]
is_closed()[source]
property is_daemon_runner

Return whether the runner is a daemon runner, which means it submits processes over RabbitMQ.

Returns

True if the runner is a daemon runner

Return type

bool

property job_manager
property loop

Get the event loop of this runner

Returns

the event loop

Return type

tornado.ioloop.IOLoop

property persister
property plugin_version_provider
run(process, *args, **inputs)[source]

Run the process with the supplied inputs in this runner, blocking until the process is completed. The return value will be the results of the completed process.

Parameters
  • process – the process class or process function to run

  • inputs – the inputs to be passed to the process

Returns

the outputs of the process

run_get_node(process, *args, **inputs)[source]

Run the process with the supplied inputs in this runner, blocking until the process is completed. The return value will be the results of the completed process.

Parameters
  • process – the process class or process function to run

  • inputs – the inputs to be passed to the process

Returns

tuple of the outputs of the process and the calculation node

run_get_pk(process, *args, **inputs)[source]

Run the process with the supplied inputs in this runner, blocking until the process is completed. The return value will be the results of the completed process.

Parameters
  • process – the process class or process function to run

  • inputs – the inputs to be passed to the process

Returns

tuple of the outputs of the process and process node pk

run_until_complete(future)[source]

Run the loop until the future has finished and return the result.

schedule(process, *args, **inputs)[source]

Schedule a process to be executed by this runner

Parameters
  • process – the process class to submit

  • inputs – the inputs to be passed to the process

Returns

the calculation node of the process

start()[source]

Start the internal event loop.

stop()[source]

Stop the internal event loop.

submit(process, *args, **inputs)[source]

Submit the process with the supplied inputs to this runner, immediately returning control to the interpreter. The return value will be the calculation node of the submitted process.

Parameters
  • process – the process class to submit

  • inputs – the inputs to be passed to the process

Returns

the calculation node of the process

property transport

A transport queue to batch process multiple tasks that require a Transport.

class aiida.engine.transports.TransportQueue(loop=None)[source]

Bases: object

A queue to get transport objects from authinfo. This class allows clients to register their interest in a transport object which will be provided at some point in the future.

Internally the class will wait for a specific interval at the end of which it will open the transport and give it to all the clients that asked for it up to that point. This way opening of transports (a costly operation) can be minimised.

class AuthInfoEntry(authinfo, transport, callbacks, callback_handle)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

__module__ = 'aiida.engine.transports'
static __new__(_cls, authinfo, transport, callbacks, callback_handle)

Create new instance of AuthInfoEntry(authinfo, transport, callbacks, callback_handle)

__repr__()

Return a nicely formatted representation string

__slots__ = ()
_asdict()

Return a new OrderedDict which maps field names to their values.

_fields = ('authinfo', 'transport', 'callbacks', 'callback_handle')
_fields_defaults = {}
classmethod _make(iterable)

Make a new AuthInfoEntry object from a sequence or iterable

_replace(**kwds)

Return a new AuthInfoEntry object replacing specified fields with new values

property authinfo

Alias for field number 0

property callback_handle

Alias for field number 3

property callbacks

Alias for field number 2

property transport

Alias for field number 1

__dict__ = mappingproxy({'__module__': 'aiida.engine.transports', '__doc__': '\n A queue to get transport objects from authinfo. This class allows clients\n to register their interest in a transport object which will be provided at\n some point in the future.\n\n Internally the class will wait for a specific interval at the end of which\n it will open the transport and give it to all the clients that asked for it\n up to that point. This way opening of transports (a costly operation) can\n be minimised.\n ', 'AuthInfoEntry': <class 'aiida.engine.transports.AuthInfoEntry'>, '__init__': <function TransportQueue.__init__>, 'loop': <function TransportQueue.loop>, 'request_transport': <function TransportQueue.request_transport>, '__dict__': <attribute '__dict__' of 'TransportQueue' objects>, '__weakref__': <attribute '__weakref__' of 'TransportQueue' objects>})
__init__(loop=None)[source]
Parameters

loop (tornado.ioloop.IOLoop) – The event loop to use, will use tornado.ioloop.IOLoop.current() if not supplied

__module__ = 'aiida.engine.transports'
__weakref__

list of weak references to the object (if defined)

loop()[source]

Get the loop being used by this transport queue

request_transport(authinfo)[source]

Request a transport from an authinfo. Because the client is not allowed to request a transport immediately they will instead be given back a future that can be yielded to get the transport:

@tornado.gen.coroutine
def transport_task(transport_queue, authinfo):
    with transport_queue.request_transport(authinfo) as request:
        transport = yield request
        # Do some work with the transport
Parameters

authinfo – The authinfo to be used to get transport

Returns

A future that can be yielded to give the transport

class aiida.engine.transports.TransportRequest[source]

Bases: object

Information kept about a request for a transport object.

__dict__ = mappingproxy({'__module__': 'aiida.engine.transports', '__doc__': ' Information kept about request for a transport object ', '__init__': <function TransportRequest.__init__>, '__dict__': <attribute '__dict__' of 'TransportRequest' objects>, '__weakref__': <attribute '__weakref__' of 'TransportRequest' objects>})
__init__()[source]

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'aiida.engine.transports'
__weakref__

list of weak references to the object (if defined)

Utilities for the workflow engine.

aiida.engine.utils.interruptable_task(coro, loop=None)[source]

Turn the given coroutine into an interruptable task by turning it into an InterruptableFuture and returning it.

Parameters
  • coro – the coroutine that should be made interruptable

  • loop – the event loop in which to run the coroutine, by default uses tornado.ioloop.IOLoop.current()

Returns

an InterruptableFuture

class aiida.engine.utils.InterruptableFuture[source]

Bases: tornado.concurrent.Future

A future that can be interrupted by calling interrupt.

__module__ = 'aiida.engine.utils'
interrupt(reason)[source]

This method should be called to interrupt the coroutine represented by this InterruptableFuture.

with_interrupt(yieldable)[source]

Yield a yieldable which will be interrupted if this future is interrupted:

from tornado import ioloop, gen
loop = ioloop.IOLoop.current()

interruptable = InterruptableFuture()
loop.add_callback(interruptable.interrupt, RuntimeError("STOP"))
loop.run_sync(lambda: interruptable.with_interrupt(gen.sleep(2)))
# raises RuntimeError: STOP
Parameters

yieldable – The yieldable

Returns

The result of the yieldable

aiida.engine.utils.is_process_function(function)[source]

Return whether the given function is a process function

Parameters

function – a function

Returns

True if the function is a wrapped process function, False otherwise