aiida.restapi package#

In this module, AiiDA provides a REST API to access the different AiiDA nodes stored in the database. The REST API is implemented using the Flask-RESTful framework.
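
As a quick orientation, the following sketch queries a locally running instance of this API with the requests library. The base URL, the /api/v4 prefix and the query parameter names are assumptions based on a default setup (for example a server started with verdi restapi); adjust them to your installation.

import requests

BASE_URL = 'http://127.0.0.1:5000/api/v4'  # assumed default address and prefix

# Fetch up to 10 nodes, newest first; 'limit' and 'orderby' are assumed parameter names.
response = requests.get(f'{BASE_URL}/nodes', params={'limit': 10, 'orderby': '-ctime'})
response.raise_for_status()
print(response.json())  # JSON envelope with the matching nodes under the 'data' key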

Subpackages#

Submodules#

Implementation of RESTful API for AiiDA based on flask and flask_restful.

Author: Snehal P. Waychal and Fernando Gargiulo @ Theos, EPFL

class aiida.restapi.api.AiidaApi(app=None, **kwargs)[source]#

Bases: Api

AiiDA customized version of the flask_restful Api class

__init__(app=None, **kwargs)[source]#

This custom constructor is needed to directly add the resources, together with the parameters required to initialize the resource classes. A usage sketch follows the parameter description below.

Parameters:

kwargs – parameters to be passed to the resources for configuration and PREFIX
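
A minimal, heavily hedged sketch of how App and AiidaApi fit together; in practice configure_api() (documented further down) builds and configures both, and the resources typically need more configuration kwargs than shown here. The PREFIX value is an assumption, not a guaranteed default.

from aiida.restapi.api import AiidaApi, App

app = App(__name__)
api = AiidaApi(app, PREFIX='/api/v4')  # kwargs are forwarded to the resources
# 'app' can then be served by Flask's built-in server or any WSGI container.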

__module__ = 'aiida.restapi.api'#
handle_error(e)[source]#

This method handles the 404 “URL not found” exception and returns a custom message.

Parameters:

e – the raised exception

Returns:

list of available endpoints

class aiida.restapi.api.App(*args, **kwargs)[source]#

Bases: Flask

Basic Flask App customized for this REST Api purposes

__annotations__ = {}#
__init__(*args, **kwargs)[source]#
__module__ = 'aiida.restapi.api'#

Resources for the REST API.

class aiida.restapi.resources.BaseResource(profile, **kwargs)[source]#

Bases: Resource

Each derived class will instantiate a different type of translator. This is the only difference between the classes.

class BaseTranslator(**kwargs)#

Bases: object

Generic class for translators. It contains the methods required to build a related QueryBuilder object.

__dict__ = mappingproxy({'__module__': 'aiida.restapi.translator.base', '__doc__': 'Generic class for translator. It contains the methods\n    required to build a related QueryBuilder object\n    ', '__label__': None, '_aiida_class': None, '_aiida_type': None, '_has_uuid': None, '_result_type': None, '_default': ['**'], '_default_projections': ['**'], '_is_qb_initialized': False, '_is_id_query': None, '_total_count': None, '__init__': <function BaseTranslator.__init__>, '__repr__': <function BaseTranslator.__repr__>, 'get_projectable_properties': <staticmethod(<function BaseTranslator.get_projectable_properties>)>, 'init_qb': <function BaseTranslator.init_qb>, 'count': <function BaseTranslator.count>, 'get_total_count': <function BaseTranslator.get_total_count>, 'set_filters': <function BaseTranslator.set_filters>, 'get_default_projections': <function BaseTranslator.get_default_projections>, 'set_default_projections': <function BaseTranslator.set_default_projections>, 'set_projections': <function BaseTranslator.set_projections>, 'set_order': <function BaseTranslator.set_order>, 'set_query': <function BaseTranslator.set_query>, 'get_query_help': <function BaseTranslator.get_query_help>, 'set_limit_offset': <function BaseTranslator.set_limit_offset>, 'get_formatted_result': <function BaseTranslator.get_formatted_result>, 'get_results': <function BaseTranslator.get_results>, '_check_id_validity': <function BaseTranslator._check_id_validity>, '__dict__': <attribute '__dict__' of 'BaseTranslator' objects>, '__weakref__': <attribute '__weakref__' of 'BaseTranslator' objects>, '__annotations__': {}})#
__init__(**kwargs)#

Initialise the parameters. Create the basic query_help.

keyword Class (default None, which means this class): the class from which the initial values of the attributes are taken. By default it is this class, so that class attributes are translated into object attributes. In case of inheritance one can use the same constructor but pass the inheriting class to pass on its attributes.

__label__ = None#
__module__ = 'aiida.restapi.translator.base'#
__repr__()#

This function is required for the caching system to be able to compare two NodeTranslator objects. Comparison is done on the value returned by __repr__

Returns:

representation of NodeTranslator objects. Returns nothing because the inputs of self.get_nodes are sufficient to determine the identity of two queries.

__weakref__#

list of weak references to the object (if defined)

_aiida_class = None#
_aiida_type = None#
_check_id_validity(node_id)#

Checks whether the id corresponds to an object of the expected type, whenever the type is a valid column of the database (e.g. for nodes, but not for users).

Parameters:

node_id – id (or id starting pattern)

Returns:

True if node_id is valid, False if invalid. If True, the id filter attribute is set accordingly.

Raises:

RestValidationError – if no node is found or id pattern does not identify a unique node

_default = ['**']#
_default_projections = ['**']#
_has_uuid = None#
_is_id_query = None#
_is_qb_initialized = False#
_result_type = None#
_total_count = None#
count()#

Count the number of rows returned by the query and set total_count

get_default_projections()#

Method to get the default projections of the node.

Returns:

self._default_projections

get_formatted_result(label)#

Runs the query and retrieves results tagged as “label”.

Parameters:

label (str) – the tag of the results to be extracted out of the query rows.

Returns:

a list of the query results

static get_projectable_properties()#

This method is extended in specific translators classes. It returns a dict as follows: dict(fields=projectable_properties, ordering=ordering) where projectable_properties is a dict and ordering is a list

get_query_help()#

Returns:

the query_help JSON dictionary

get_results()#

Returns either a list of nodes or the details of a single node from the database.

Returns:

either a list of nodes or the details of a single node from the database

get_total_count()#

Returns the number of rows of the query.

Returns:

total_count

init_qb()#

Initialize query builder object by means of _query_help

set_default_projections()#

It calls the set_projections() method internally to add the default projections to query_help.

Returns:

None

set_filters(filters=None)#

Add filters in query_help.

Parameters:

filters

it is a dictionary where keys are the tag names given in the path in query_help and values are the dictionaries of filters to add for that tag name. Format of the filters dictionary:

filters = {
    "tag1" : {k1:v1, k2:v2},
    "tag2" : {k1:v1, k2:v2},
}

Returns:

query_help dict including filters if any.

set_limit_offset(limit=None, offset=None)#

Sets the limit and offset directly on the query_builder object.

Parameters:
  • limit

  • offset

Returns:

set_order(orders)#

Add an order_by clause to query_help.

Parameters:

orders – dictionary of the orders to apply to the final results

Returns:

None; an exception is raised if the orders are invalid

set_projections(projections)#

Add the projections in query_help

Parameters:

projections – a dictionary where keys are the tag names given in the path in query_help and values are the lists of names to project in the final output

Returns:

updated query_help with projections

set_query(filters=None, orders=None, projections=None, query_type=None, node_id=None, attributes=None, attributes_filter=None, extras=None, extras_filter=None)#

Adds filters, default projections and order specifications to query_help, and initializes the QueryBuilder object (see the argument sketch after the parameter list).

Parameters:
  • filters – dictionary with the filters

  • orders – dictionary with the order for each tag

  • projections – dictionary with the projection. It is discarded if query_type==’attributes’/’extras’

  • query_type – (string) specify the result or the content (“attr”)

  • node_id – (integer) id of a specific node

  • filename – name of the file to return its content

  • attributes – flag to show attributes in nodes endpoint

  • attributes_filter – list of node attributes to query

  • extras – flag to show extras in nodes endpoint

  • extras_filter – list of node extras to query
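
As a purely illustrative sketch, the dictionaries below show the shapes these arguments take; the tag name calcjobs and the projected fields are example values, and the translator calls are left commented out because translators are normally constructed by the corresponding resource.

filters = {'calcjobs': {'id': {'==': 1234}}}     # filters keyed by query_help tag
orders = {'calcjobs': ['-ctime']}                # order specs per tag (newest first)
projections = {'calcjobs': ['id', 'uuid', 'attributes.process_state']}

# translator.set_query(filters=filters, orders=orders, projections=projections)
# translator.set_limit_offset(limit=20, offset=0)
# results = translator.get_results()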

__annotations__ = {}#
__init__(profile, **kwargs)[source]#

Construct the resource.

__module__ = 'aiida.restapi.resources'#
_load_and_verify(node_id=None)[source]#

Load node and verify it is of the required type

_parse_pk_uuid = None#
_translator_class#

alias of BaseTranslator

get(id=None, page=None)[source]#

Get method for the resource.

Parameters:
  • id – node identifier

  • page – page number, used for pagination

Returns:

http response

load_profile(profile=None)[source]#

Load the required profile.

This will load the profile specified by the profile keyword in the query parameters, and if not specified it will default to the profile defined in the constructor.
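
For illustration, a request that selects a different profile through the query parameter mentioned above; the base URL and the profile name are assumptions.

import requests

response = requests.get('http://127.0.0.1:5000/api/v4/nodes',
                        params={'profile': 'my_other_profile', 'limit': 5})
print(response.status_code)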

method_decorators = [<AdapterWrapper at 0x7f1d0e4db640 for function>]#
methods: t.ClassVar[t.Collection[str] | None] = {'GET'}#

The methods this view is registered for. Uses the same default (["GET", "HEAD", "OPTIONS"]) as route and add_url_rule by default.

parse_path(path)[source]#

Parse the request path.

parse_pk_uuid = None#
parse_query_string(query_string)[source]#

Parse the request query string.

static unquote_request()[source]#

Unquote and return various parts of the request.

Returns:

Tuple of the request path, url, url root and query string.

class aiida.restapi.resources.CalcJobNode(profile, **kwargs)[source]#

Bases: ProcessNode

Resource for CalcJobNode

class CalcJobTranslator(**kwargs)#

Bases: ProcessTranslator

Translator relative to resource ‘calculations’ and aiida class Calculation

class CalcJobNode(backend: 'StorageBackend' | None = None, user: User | None = None, computer: Computer | None = None, **kwargs: Any)#

Bases: CalculationNode

ORM class for all nodes representing the execution of a CalcJob.

CALC_JOB_STATE_KEY = 'state'#
IMMIGRATED_KEY = 'imported'#
REMOTE_WORKDIR_KEY = 'remote_workdir'#
RETRIEVE_LIST_KEY = 'retrieve_list'#
RETRIEVE_TEMPORARY_LIST_KEY = 'retrieve_temporary_list'#
SCHEDULER_DETAILED_JOB_INFO_KEY = 'detailed_job_info'#
SCHEDULER_JOB_ID_KEY = 'job_id'#
SCHEDULER_LAST_CHECK_TIME_KEY = 'scheduler_lastchecktime'#
SCHEDULER_LAST_JOB_INFO_KEY = 'last_job_info'#
SCHEDULER_STATE_KEY = 'scheduler_state'#
_CLS_NODE_CACHING#

alias of CalcJobNodeCaching

__abstractmethods__ = frozenset({})#
__annotations__ = {'METADATA_INPUTS_KEY': <class 'str'>, '_CLS_COLLECTION': 'Type[CollectionType]', '__plugin_type_string': 'ClassVar[str]', '__qb_fields__': 'Sequence[QbField]', '__query_type_string': 'ClassVar[str]', '_hash_ignored_attributes': 'Tuple[str, ...]', '_logger': 'Optional[Logger]', '_updatable_attributes': 'Tuple[str, ...]', 'fields': 'QbFields'}#
__module__ = 'aiida.orm.nodes.process.calculation.calcjob'#
__parameters__ = ()#
__plugin_type_string: ClassVar[str]#
__qb_fields__: Sequence[QbField] = [QbStrField('scheduler_state', dtype=Optional[str], is_attribute=True), QbStrField('state', dtype=Optional[str], is_attribute=True), QbStrField('remote_workdir', dtype=Optional[str], is_attribute=True), QbStrField('job_id', dtype=Optional[str], is_attribute=True), QbStrField('scheduler_lastchecktime', dtype=Optional[str], is_attribute=True), QbStrField('last_job_info', dtype=Optional[str], is_attribute=True), QbDictField('detailed_job_info', dtype=Optional[dict], is_attribute=True), QbArrayField('retrieve_list', dtype=Optional[List[str]], is_attribute=True), QbArrayField('retrieve_temporary_list', dtype=Optional[List[str]], is_attribute=True), QbField('imported', dtype=Optional[bool], is_attribute=True)]#
__query_type_string: ClassVar[str]#
_abc_impl = <_abc._abc_data object>#
_hash_ignored_attributes: Tuple[str, ...] = ('metadata_inputs', 'queue_name', 'account', 'qos', 'priority', 'max_wallclock_seconds', 'max_memory_kb', 'version')#
_logger: Logger | None = <Logger aiida.orm.nodes.process.calculation.calcjob.CalcJobNode (WARNING)>#
_tools = None#
_updatable_attributes: Tuple[str, ...] = ('sealed', 'paused', 'checkpoints', 'exception', 'exit_message', 'exit_status', 'process_label', 'process_state', 'process_status', 'state', 'imported', 'remote_workdir', 'retrieve_list', 'retrieve_temporary_list', 'job_id', 'scheduler_state', 'scheduler_lastchecktime', 'last_job_info', 'detailed_job_info')#
static _validate_retrieval_directive(directives: Sequence[str | Tuple[str, str, str]]) None#

Validate a list or tuple of file retrieval directives.

Parameters:

directives – a list or tuple of file retrieval directives

Raises:

ValueError – if the format of the directives is invalid

delete_state() None#

Delete the calculation job state attribute if it exists.

fields: QbFields = {'attributes': 'QbDictField(attributes.*) -> Dict[str, Any]',  'computer_pk': 'QbNumericField(attributes.computer_pk) -> Optional[int]',  'ctime': 'QbNumericField(ctime) -> datetime',  'description': 'QbStrField(description) -> str',  'detailed_job_info': 'QbDictField(attributes.detailed_job_info) -> '                       'Optional[dict]',  'exception': 'QbStrField(attributes.exception) -> Optional[str]',  'exit_message': 'QbStrField(attributes.exit_message) -> Optional[str]',  'exit_status': 'QbNumericField(attributes.exit_status) -> Optional[int]',  'extras': 'QbDictField(extras.*) -> Dict[str, Any]',  'imported': 'QbField(attributes.imported) -> Optional[bool]',  'job_id': 'QbStrField(attributes.job_id) -> Optional[str]',  'label': 'QbStrField(label) -> str',  'last_job_info': 'QbStrField(attributes.last_job_info) -> Optional[str]',  'mtime': 'QbNumericField(mtime) -> datetime',  'node_type': 'QbStrField(node_type) -> str',  'paused': 'QbField(attributes.paused) -> bool',  'pk': 'QbNumericField(pk) -> int',  'process_label': 'QbStrField(attributes.process_label) -> Optional[str]',  'process_state': 'QbStrField(attributes.process_state) -> Optional[str]',  'process_status': 'QbStrField(attributes.process_status) -> Optional[str]',  'process_type': 'QbStrField(attributes.process_type) -> Optional[str]',  'remote_workdir': 'QbStrField(attributes.remote_workdir) -> Optional[str]',  'repository_metadata': 'QbDictField(repository_metadata) -> Dict[str, Any]',  'retrieve_list': 'QbArrayField(attributes.retrieve_list) -> '                   'Optional[List[str]]',  'retrieve_temporary_list': 'QbArrayField(attributes.retrieve_temporary_list) '                             '-> Optional[List[str]]',  'scheduler_lastchecktime': 'QbStrField(attributes.scheduler_lastchecktime) -> '                             'Optional[str]',  'scheduler_state': 'QbStrField(attributes.scheduler_state) -> Optional[str]',  'sealed': 'QbField(attributes.sealed) -> bool',  'state': 'QbStrField(attributes.state) -> Optional[str]',  'user_pk': 'QbNumericField(user_pk) -> int',  'uuid': 'QbStrField(uuid) -> str'}#
get_authinfo() AuthInfo#

Return the AuthInfo that is configured for the Computer set for this node.

Returns:

AuthInfo

get_description() str#

Return a description of the node based on its properties.

get_detailed_job_info() dict | None#

Return the detailed job info dictionary.

The scheduler is polled for the detailed job info after the job is completed and ready to be retrieved.

Returns:

the dictionary with detailed job info if defined or None

get_job_id() str | None#

Return job id that was assigned to the calculation by the scheduler.

Returns:

the string representation of the scheduler job id

get_last_job_info() JobInfo | None#

Return the last information asked to the scheduler about the status of the job.

The last job info is updated on every poll of the scheduler, except for the final poll when the job drops from the scheduler’s job queue. For completed jobs, the last job info therefore contains the “second-to-last” job info that still shows the job as running. Please use get_detailed_job_info() instead.

Returns:

a JobInfo object (that closely resembles a dictionary) or None.

get_option(name: str) Any | None#

Return the value of an option that was set for this CalcJobNode.

Parameters:

name – the option name

Returns:

the option value or None

Raises:

ValueError for unknown option

get_options() Dict[str, Any]#

Return the dictionary of options set for this CalcJobNode

Returns:

dictionary of the options and their values

get_parser_class() Type[Parser] | None#

Return the output parser object for this calculation or None if no parser is set.

Returns:

a Parser class.

Raises:

aiida.common.exceptions.EntryPointError – if the parser entry point can not be resolved.

get_remote_workdir() str | None#

Return the path to the remote (on cluster) scratch folder of the calculation.

Returns:

a string with the remote path

get_retrieve_list() Sequence[str | Tuple[str, str, str]] | None#

Return the list of files/directories to be retrieved on the cluster after the calculation has completed.

Returns:

a list of file directives

get_retrieve_temporary_list() Sequence[str | Tuple[str, str, str]] | None#

Return list of files to be retrieved from the cluster which will be available during parsing.

Returns:

a list of file directives

get_retrieved_node() FolderData | None#

Return the retrieved data folder.

Returns:

the retrieved FolderData node or None if not found

get_scheduler_lastchecktime() datetime | None#

Return the time of the last update of the scheduler state by the daemon or None if it was never set.

Returns:

a datetime object or None

get_scheduler_state() JobState | None#

Return the status of the calculation according to the cluster scheduler.

Returns:

a JobState enum instance.

get_scheduler_stderr() AnyStr | None#

Return the scheduler stderr output if the calculation has finished and been retrieved, None otherwise.

Returns:

scheduler stderr output or None

get_scheduler_stdout() AnyStr | None#

Return the scheduler stdout output if the calculation has finished and been retrieved, None otherwise.

Returns:

scheduler stdout output or None

get_state() CalcJobState | None#

Return the calculation job active sub state.

The calculation job state serves to give more granular state information to CalcJobs, in addition to the generic process state, while the calculation job is active. The state can take values from the enumeration defined in aiida.common.datastructures.CalcJobState and can be used to query for calculation jobs in specific active states.

Returns:

instance of aiida.common.datastructures.CalcJobState or None if invalid value, or not set

get_transport() Transport#

Return the transport for this calculation.

Returns:

Transport configured with the AuthInfo associated to the computer of this node

property is_imported: bool#

Return whether the calculation job was imported instead of being an actual run.

link_label_retrieved#

Return the link label used for the retrieved FolderData node.

property res: CalcJobResultManager#

To be used to get direct access to the parsed parameters.

Returns:

an instance of the CalcJobResultManager.

Note:

a practical example on how it is meant to be used: let’s say that there is a key ‘energy’ in the dictionary of the parsed results which contains a list of floats. The command calc.res.energy will return such a list.
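
A sketch of the usage described in the note; the pk and the energy key are purely illustrative.

from aiida import orm

calc = orm.load_node(1234)    # a finished CalcJobNode (hypothetical pk)
energies = calc.res.energy    # the list stored under the 'energy' key of the parsed results
print(energies)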

set_detailed_job_info(detailed_job_info: dict | None) None#

Set the detailed job info dictionary.

Parameters:

detailed_job_info – a dictionary with metadata with the accounting of a completed job

set_job_id(job_id: int | str) None#

Set the job id that was assigned to the calculation by the scheduler.

Note

the id will always be stored as a string

Parameters:

job_id – the id assigned by the scheduler after submission

set_last_job_info(last_job_info: JobInfo) None#

Set the last job info.

Parameters:

last_job_info – a JobInfo object

set_option(name: str, value: Any) None#

Set an option to the given value

Parameters:
  • name – the option name

  • value – the value to set

Raises:

ValueError for unknown option

Raises:

TypeError for values with invalid type

set_options(options: Dict[str, Any]) None#

Set the options for this CalcJobNode

Parameters:

options – dictionary of option and their values to set

set_remote_workdir(remote_workdir: str) None#

Set the absolute path to the working directory on the remote computer where the calculation is run.

Parameters:

remote_workdir – absolute filepath to the remote working directory

set_retrieve_list(retrieve_list: Sequence[str | Tuple[str, str, str]]) None#

Set the retrieve list.

This list of directives will instruct the daemon which files or paths to retrieve after the calculation has completed (a short illustrative sketch follows the parameter description).

Parameters:

retrieve_list – list or tuple of filepath directives
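
A hedged illustration of typical directives: the filenames and the wildcard pattern are examples only, and the call is commented out because it must be made on the CalcJobNode being prepared.

retrieve_list = [
    'aiida.out',                 # a single file, relative to the remote work directory
    '_scheduler-stdout.txt',
    '_scheduler-stderr.txt',
    'out_files/*.dat',           # wildcards are allowed
]
# node.set_retrieve_list(retrieve_list)   # 'node' being the CalcJobNode under preparation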

set_retrieve_temporary_list(retrieve_temporary_list: Sequence[str | Tuple[str, str, str]]) None#

Set the retrieve temporary list.

The retrieve temporary list stores files that are retrieved after completion and made available during parsing and are deleted as soon as the parsing has been completed.

Parameters:

retrieve_temporary_list – list or tuple of filepath directives

set_scheduler_state(state: JobState) None#

Set the scheduler state.

Parameters:

state – an instance of JobState

set_state(state: CalcJobState) None#

Set the calculation active job state.

Raises:

ValueError – if the state is invalid

property tools: CalculationTools#

Return the calculation tools that are registered for the process type associated with this calculation.

If the entry point name stored in the process_type of the CalcJobNode has an accompanying entry point in the aiida.tools.calculations entry point category, it will attempt to load the entry point and instantiate it passing the node to the constructor. If the entry point does not exist, cannot be resolved or loaded, a warning will be logged and the base CalculationTools class will be instantiated and returned.

Returns:

CalculationTools instance

__annotations__ = {}#
__label__ = 'calcjobs'#
__module__ = 'aiida.restapi.translator.nodes.process.calculation.calcjob'#
_aiida_class#

alias of CalcJobNode

_aiida_type = 'process.calculation.calcjob.CalcJobNode'#
_result_type = 'calcjobs'#
static get_derived_properties(node)#

Generic function extended for calcjob. Currently it is not implemented.

Parameters:

node – node object

Returns:

empty dict

static get_input_files(node, filename)#

Get the submitted input files for a job calculation.

Parameters:

node – aiida node

Returns:

the submitted input files for the job calculation

static get_output_files(node, filename)#

Get the retrieved output files for a job calculation.

Parameters:

node – aiida node

Returns:

the retrieved output files for the job calculation

__annotations__ = {}#
__module__ = 'aiida.restapi.resources'#
_translator_class#

alias of CalcJobTranslator

get(id=None, page=None)[source]#

Get method for the Process resource.

Parameters:

id – node identifier

Returns:

http response

methods: t.ClassVar[t.Collection[str] | None] = {'GET'}#

The methods this view is registered for. Uses the same default (["GET", "HEAD", "OPTIONS"]) as route and add_url_rule by default.

class aiida.restapi.resources.Computer(profile, **kwargs)[source]#

Bases: BaseResource

Resource for Computer

class ComputerTranslator(**kwargs)#

Bases: BaseTranslator

Translator relative to resource ‘computers’ and aiida class Computer

__annotations__ = {}#
__label__ = 'computers'#
__module__ = 'aiida.restapi.translator.computer'#
_aiida_class#

alias of Computer

_aiida_type = 'Computer'#
_has_uuid = True#
_result_type = 'computers'#
get_projectable_properties()#

Get projectable properties specific for Computer.

Returns:

dict of projectable properties and column_order list

__annotations__ = {}#
__module__ = 'aiida.restapi.resources'#
_parse_pk_uuid = 'uuid'#
_translator_class#

alias of ComputerTranslator

methods: t.ClassVar[t.Collection[str] | None] = {'GET'}#

The methods this view is registered for. Uses the same default (["GET", "HEAD", "OPTIONS"]) as route and add_url_rule by default.

class aiida.restapi.resources.Group(profile, **kwargs)[source]#

Bases: BaseResource

Resource for Group

class GroupTranslator(**kwargs)#

Bases: BaseTranslator

Translator relative to resource ‘groups’ and aiida class Group

__annotations__ = {}#
__label__ = 'groups'#
__module__ = 'aiida.restapi.translator.group'#
_aiida_class#

alias of Group

_aiida_type = 'groups.Group'#
_has_uuid = True#
_result_type = 'groups'#
get_projectable_properties()#

Get projectable properties specific for Group.

Returns:

dict of projectable properties and column_order list

__annotations__ = {}#
__module__ = 'aiida.restapi.resources'#
_parse_pk_uuid = 'uuid'#
_translator_class#

alias of GroupTranslator

methods: t.ClassVar[t.Collection[str] | None] = {'GET'}#

The methods this view is registered for. Uses the same default (["GET", "HEAD", "OPTIONS"]) as route and add_url_rule by default.

class aiida.restapi.resources.Node(profile, **kwargs)[source]#

Bases: BaseResource

Differs from BaseResource mostly in trans.set_query(), because it takes query_type as an input, and in the presence of additional result types like “tree”.

__annotations__ = {}#
__module__ = 'aiida.restapi.resources'#
_parse_pk_uuid = 'uuid'#
_translator_class#

alias of NodeTranslator

get(id=None, page=None)[source]#

Get method for the Node resource.

Parameters:
  • id – node identifier

  • page – page number, used for pagination

Returns:

http response

methods: t.ClassVar[t.Collection[str] | None] = {'GET'}#

The methods this view is registered for. Uses the same default (["GET", "HEAD", "OPTIONS"]) as route and add_url_rule by default.

class aiida.restapi.resources.ProcessNode(profile, **kwargs)[source]#

Bases: Node

Resource for ProcessNode

class ProcessTranslator(**kwargs)#

Bases: NodeTranslator

Translator relative to resource ‘processes’ and aiida class ProcessNode

class ProcessNode(backend: 'StorageBackend' | None = None, user: User | None = None, computer: Computer | None = None, **kwargs: Any)#

Bases: Sealable, Node

Base class for all nodes representing the execution of a process

This class and its subclasses serve as proxies in the database for actual Process instances being run. The Process instance in memory will leverage an instance of this class (the exact subclass depends on the subclass of Process) to persist important information of its state to the database. This serves as a way for the user to inspect the state of the Process during its execution, as well as a permanent record of its execution in the provenance graph after the execution has terminated.

CHECKPOINT_KEY = 'checkpoints'#
EXCEPTION_KEY = 'exception'#
EXIT_MESSAGE_KEY = 'exit_message'#
EXIT_STATUS_KEY = 'exit_status'#
METADATA_INPUTS_KEY: str = 'metadata_inputs'#
PROCESS_LABEL_KEY = 'process_label'#
PROCESS_PAUSED_KEY = 'paused'#
PROCESS_STATE_KEY = 'process_state'#
PROCESS_STATUS_KEY = 'process_status'#
_CLS_NODE_CACHING#

alias of ProcessNodeCaching

_CLS_NODE_LINKS#

alias of ProcessNodeLinks

__abstractmethods__ = frozenset({})#
__annotations__ = {'METADATA_INPUTS_KEY': <class 'str'>, '_CLS_COLLECTION': 'Type[CollectionType]', '__plugin_type_string': 'ClassVar[str]', '__qb_fields__': 'Sequence[QbField]', '__query_type_string': 'ClassVar[str]', '_hash_ignored_attributes': 'Tuple[str, ...]', '_logger': 'Optional[Logger]', '_updatable_attributes': 'Tuple[str, ...]', 'fields': 'QbFields'}#
__module__ = 'aiida.orm.nodes.process.process'#
__parameters__ = ()#
__plugin_type_string: ClassVar[str]#
__qb_fields__: Sequence[QbField] = [QbStrField('process_type', dtype=Optional[str], is_attribute=True), QbNumericField('computer_pk', dtype=Optional[int], is_attribute=True), QbStrField('process_label', dtype=Optional[str], is_attribute=True), QbStrField('process_state', dtype=Optional[str], is_attribute=True), QbStrField('process_status', dtype=Optional[str], is_attribute=True), QbNumericField('exit_status', dtype=Optional[int], is_attribute=True), QbStrField('exit_message', dtype=Optional[str], is_attribute=True), QbStrField('exception', dtype=Optional[str], is_attribute=True), QbField('paused', dtype=bool, is_attribute=True)]#
__query_type_string: ClassVar[str]#
__str__() str#

Return str(self).

_abc_impl = <_abc._abc_data object>#
_hash_ignored_attributes: Tuple[str, ...] = ('metadata_inputs',)#
_logger: Logger | None = <Logger aiida.orm.nodes.process.process.ProcessNode (WARNING)>#
_unstorable_message = 'only Data, WorkflowNode, CalculationNode or their subclasses can be stored'#
_updatable_attributes: Tuple[str, ...] = ('sealed', 'paused', 'checkpoints', 'exception', 'exit_message', 'exit_status', 'process_label', 'process_state', 'process_status')#
property called: List[ProcessNode]#

Return a list of nodes that the process called

Returns:

list of process nodes called by this process

property called_descendants: List[ProcessNode]#

Return a list of all nodes that have been called downstream of this process

This will recursively find all the called processes for this process and its children.

property caller: ProcessNode | None#

Return the process node that called this process node, or None if it does not have a caller

Returns:

process node that called this process node instance or None

property checkpoint: str | None#

Return the checkpoint bundle set for the process

Returns:

checkpoint bundle if it exists, None otherwise

delete_checkpoint() None#

Delete the checkpoint bundle set for the process

property exception: str | None#

Return the exception of the process or None if the process is not excepted.

If the process is marked as excepted yet there is no exception attribute, an empty string will be returned.

Returns:

the exception message or None

property exit_code: ExitCode | None#

Return the exit code of the process.

It is reconstituted from the exit_status and exit_message attributes if both of those are defined.

Returns:

The exit code if defined, or None.

property exit_message: str | None#

Return the exit message of the process

Returns:

the exit message

property exit_status: int | None#

Return the exit status of the process

Returns:

the exit status, an integer exit code or None

fields: QbFields = {'attributes': 'QbDictField(attributes.*) -> Dict[str, Any]',  'computer_pk': 'QbNumericField(attributes.computer_pk) -> Optional[int]',  'ctime': 'QbNumericField(ctime) -> datetime',  'description': 'QbStrField(description) -> str',  'exception': 'QbStrField(attributes.exception) -> Optional[str]',  'exit_message': 'QbStrField(attributes.exit_message) -> Optional[str]',  'exit_status': 'QbNumericField(attributes.exit_status) -> Optional[int]',  'extras': 'QbDictField(extras.*) -> Dict[str, Any]',  'label': 'QbStrField(label) -> str',  'mtime': 'QbNumericField(mtime) -> datetime',  'node_type': 'QbStrField(node_type) -> str',  'paused': 'QbField(attributes.paused) -> bool',  'pk': 'QbNumericField(pk) -> int',  'process_label': 'QbStrField(attributes.process_label) -> Optional[str]',  'process_state': 'QbStrField(attributes.process_state) -> Optional[str]',  'process_status': 'QbStrField(attributes.process_status) -> Optional[str]',  'process_type': 'QbStrField(attributes.process_type) -> Optional[str]',  'repository_metadata': 'QbDictField(repository_metadata) -> Dict[str, Any]',  'sealed': 'QbField(attributes.sealed) -> bool',  'user_pk': 'QbNumericField(user_pk) -> int',  'uuid': 'QbStrField(uuid) -> str'}#
get_builder_restart() ProcessBuilder#

Return a ProcessBuilder that is ready to relaunch the process that created this node.

The process class will be set based on the process_type of this node and the inputs of the builder will be prepopulated with the inputs registered for this node. This functionality is very useful if a process has completed and you want to relaunch it with slightly different inputs.

Returns:

~aiida.engine.processes.builder.ProcessBuilder instance
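
A sketch of the relaunch workflow described above; the pk is hypothetical and the modified option is just one example of “slightly different inputs” (shown for a CalcJob, adapt to the process type at hand).

from aiida import orm
from aiida.engine import submit

node = orm.load_node(1234)              # a terminated process node (hypothetical pk)
builder = node.get_builder_restart()    # inputs pre-populated from the node
builder.metadata.options.max_wallclock_seconds = 3600
submit(builder)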

get_metadata_inputs() Dict[str, Any] | None#

Return the mapping of inputs corresponding to metadata ports that were passed to the process.

property is_excepted: bool#

Return whether the process has excepted

Excepted means that during execution of the process, an exception was raised that was not caught.

Returns:

True if during execution of the process an exception occurred, False otherwise

Return type:

bool

property is_failed: bool#

Return whether the process has failed

Failed means that the process terminated nominally but it had a non-zero exit status.

Returns:

True if the process has failed, False otherwise

Return type:

bool

property is_finished: bool#

Return whether the process has finished

Finished means that the process reached a terminal state nominally. Note that this does not necessarily mean successfully, but there were no exceptions and it was not killed.

Returns:

True if the process has finished, False otherwise

Return type:

bool

property is_finished_ok: bool#

Return whether the process has finished successfully

Finished successfully means that it terminated nominally and had a zero exit status.

Returns:

True if the process has finished successfully, False otherwise

Return type:

bool

property is_killed: bool#

Return whether the process was killed

Killed means the process was killed directly by the user or by the calling process being killed.

Returns:

True if the process was killed, False otherwise

Return type:

bool

property is_terminated: bool#

Return whether the process has terminated

Terminated means that the process has reached any terminal state.

Returns:

True if the process has terminated, False otherwise

Return type:

bool

property logger#

Get the logger of the Calculation object, so that it also logs to the DB.

Returns:

LoggerAdapter object, that works like a logger, but also has the ‘extra’ embedded

pause() None#

Mark the process as paused by setting the corresponding attribute.

This serves only to reflect that the corresponding Process is paused and so this method should not be called by anyone but the Process instance itself.

property paused: bool#

Return whether the process is paused

Returns:

True if the Calculation is marked as paused, False otherwise

property process_class: Type[Process]#

Return the process class that was used to create this node.

Returns:

Process class

Raises:

ValueError – if no process type is defined, it is an invalid process type string or cannot be resolved to load the corresponding class

property process_label: str | None#

Return the process label

Returns:

the process label

property process_state: ProcessState | None#

Return the process state

Returns:

the process state instance of ProcessState enum

property process_status: str | None#

Return the process status

The process status is a generic status message e.g. the reason it might be paused or when it is being killed

Returns:

the process status

classmethod recursive_merge(left: dict[Any, Any], right: dict[Any, Any]) None#

Recursively merge the right dictionary into the left dictionary (a small example follows the parameter list).

Parameters:
  • left – Base dictionary.

  • right – Dictionary to recursively merge on top of the left dictionary.
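
A small illustration of the merge; the dictionary contents are arbitrary examples.

from aiida.orm import ProcessNode

left = {'metadata': {'options': {'resources': {'num_machines': 1}}}}
right = {'metadata': {'options': {'withmpi': True}}}
ProcessNode.recursive_merge(left, right)
print(left)   # now holds both 'resources' and 'withmpi' under metadata -> options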

set_checkpoint(checkpoint: str) None#

Set the checkpoint bundle set for the process

Parameters:

checkpoint – string representation of the stepper state info

set_exception(exception: str) None#

Set the exception of the process

Parameters:

exception – the exception message

set_exit_message(message: str | None) None#

Set the exit message of the process; if None, nothing will be done.

Parameters:

message – a string message

set_exit_status(status: None | Enum | int) None#

Set the exit status of the process

Parameters:

status – an integer exit code or None, which will be interpreted as zero

set_metadata_inputs(value: Dict[str, Any]) None#

Set the mapping of inputs corresponding to metadata ports that were passed to the process.

set_process_label(label: str) None#

Set the process label

Parameters:

label – process label string

set_process_state(state: str | ProcessState | None)#

Set the process state

Parameters:

state – value or instance of ProcessState enum

set_process_status(status: str | None) None#

Set the process status

The process status is a generic status message e.g. the reason it might be paused or when it is being killed. If status is None, the corresponding attribute will be deleted.

Parameters:

status – string process status

set_process_type(process_type_string: str) None#

Set the process type string.

Parameters:

process_type_string – the process type string identifying the class using this process node as storage.

unpause() None#

Mark the process as unpaused by removing the corresponding attribute.

This serves only to reflect that the corresponding Process is unpaused and so this method should not be called by anyone but the Process instance itself.

__annotations__ = {}#
__label__ = 'process'#
__module__ = 'aiida.restapi.translator.nodes.process.process'#
_aiida_class#

alias of ProcessNode

_aiida_type = 'process.ProcessNode'#
_result_type = 'process'#
get_projectable_properties()#

Get projectable properties specific for Process nodes.

Returns:

dict of projectable properties and column_order list

static get_report(process)#

Show the log report for one or multiple processes.

__annotations__ = {}#
__module__ = 'aiida.restapi.resources'#
_translator_class#

alias of ProcessTranslator

get(id=None, page=None)[source]#

Get method for the Process resource.

Parameters:

id – node identifier

Returns:

http response

methods: t.ClassVar[t.Collection[str] | None] = {'GET'}#

The methods this view is registered for. Uses the same default (["GET", "HEAD", "OPTIONS"]) as route and add_url_rule by default.

class aiida.restapi.resources.QueryBuilder(**kwargs)[source]#

Bases: BaseResource

Representation of a QueryBuilder REST API resource (instantiated with a serialised QueryBuilder instance).

It supports POST requests that take a JSON-serialised as_dict() object and return the corresponding QueryBuilder result.

GET_MESSAGE = 'Method Not Allowed. Use HTTP POST requests to use the AiiDA QueryBuilder. POST JSON data, which MUST be a valid QueryBuilder.as_dict() dictionary as a JSON object. See the documentation at https://aiida.readthedocs.io/projects/aiida-core/en/latest/topics/database.html#converting-the-querybuilder-to-from-a-dictionary for more information.'#
__annotations__ = {'decorators': 't.ClassVar[list[t.Callable]]', 'init_every_request': 't.ClassVar[bool]', 'methods': 't.ClassVar[t.Collection[str] | None]', 'provide_automatic_options': 't.ClassVar[bool | None]'}#
__init__(**kwargs)[source]#

Construct the resource.

__module__ = 'aiida.restapi.resources'#
_translator_class#

alias of NodeTranslator

get()[source]#

Return a static message with information about this endpoint.

methods: t.ClassVar[t.Collection[str] | None] = {'GET', 'POST'}#

The methods this view is registered for. Uses the same default (["GET", "HEAD", "OPTIONS"]) as route and add_url_rule by default.

post()[source]#

POST method to pass query help JSON.

If the posted JSON is not a valid QueryBuilder serialisation, the request will fail with an internal server error.

This uses the NodeTranslator in order to best return Nodes according to the general AiiDA REST API data format, while still allowing the return of other AiiDA entities.

Returns:

QueryBuilder result of AiiDA entities in “standard” REST API format.
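
A hedged sketch of driving this endpoint from Python: the base URL is an assumption, a loaded profile is required to build the QueryBuilder locally, and the query itself is only an example.

import requests
from aiida import load_profile, orm

load_profile()

qb = orm.QueryBuilder()
qb.append(orm.Node, project=['id', 'uuid'], filters={'node_type': {'like': 'data.%'}})

response = requests.post('http://127.0.0.1:5000/api/v4/querybuilder', json=qb.as_dict())
response.raise_for_status()
print(response.json())   # results in the standard REST API envelope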

class aiida.restapi.resources.ServerInfo(**kwargs)[source]#

Bases: Resource

Endpoint to return general server info

__annotations__ = {}#
__init__(**kwargs)[source]#
__module__ = 'aiida.restapi.resources'#
get()[source]#

It returns the general info about the REST API.

Returns:

the current AiiDA version defined in aiida/__init__.py

methods: t.ClassVar[t.Collection[str] | None] = {'GET'}#

The methods this view is registered for. Uses the same default (["GET", "HEAD", "OPTIONS"]) as route and add_url_rule by default.

class aiida.restapi.resources.User(profile, **kwargs)[source]#

Bases: BaseResource

Resource for User

class UserTranslator(**kwargs)#

Bases: BaseTranslator

Translator relative to resource ‘users’ and aiida class User

__annotations__ = {}#
__label__ = 'users'#
__module__ = 'aiida.restapi.translator.user'#
_aiida_class#

alias of User

_aiida_type = 'User'#
_default_projections = ['id', 'first_name', 'last_name', 'institution']#
_has_uuid = False#
_result_type = 'users'#
get_projectable_properties()#

Get projectable properties specific for User :return: dict of projectable properties and column_order list

__annotations__ = {}#
__module__ = 'aiida.restapi.resources'#
_parse_pk_uuid = 'pk'#
_translator_class#

alias of UserTranslator

methods: t.ClassVar[t.Collection[str] | None] = {'GET'}#

The methods this view is registered for. Uses the same default (["GET", "HEAD", "OPTIONS"]) as route and add_url_rule by default.

It defines the methods, with all required parameters, to run the REST API locally.

aiida.restapi.run_api.configure_api(flask_app=<class 'aiida.restapi.api.App'>, flask_api=<class 'aiida.restapi.api.AiidaApi'>, **kwargs)[source]#

Configures a flask.Flask instance, wraps it in a flask_restful Api and returns the latter.

Parameters:
  • flask_app (flask.Flask) – Class inheriting from flask app class

  • flask_api (flask_restful.Api) – flask_restful API class to be used to wrap the app

  • config – directory containing the config.py configuration file

  • catch_internal_server – If true, catch and print internal server errors with full python traceback. Useful during app development.

  • wsgi_profile – use WSGI profiler middleware for finding bottlenecks in the web application

  • posting – Whether or not to include POST-enabled endpoints (currently only /querybuilder).

Returns:

Flask RESTful API

Return type:

flask_restful.Api
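
A hedged usage sketch: configure the API and serve it with Flask's built-in development server. That the returned Api object exposes the underlying Flask app as .app follows the usual flask_restful behaviour and is an assumption here.

from aiida.restapi.run_api import configure_api

api = configure_api(catch_internal_server=True)
api.app.run(host='127.0.0.1', port=5000)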

aiida.restapi.run_api.run_api(flask_app=<class 'aiida.restapi.api.App'>, flask_api=<class 'aiida.restapi.api.AiidaApi'>, **kwargs)[source]#

Takes a flask.Flask instance and runs it.

Parameters:
  • flask_app (flask.Flask) – Class inheriting from flask app class

  • flask_api (flask_restful.Api) – flask_restful API class to be used to wrap the app

  • hostname – hostname to run app on (only when using built-in server)

  • port – port to run app on (only when using built-in server)

  • config – directory containing the config.py file used to configure the RESTapi

  • catch_internal_server – If true, catch and print all internal server errors

  • debug – enable debugging

  • wsgi_profile – use WSGI profiler middleware for finding bottlenecks in web application

  • posting – Whether or not to include POST-enabled endpoints (currently only /querybuilder).

Returns:

tuple (app, api) if hookup==False or runs app if hookup==True
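
A hedged sketch of running the API with the built-in server: hostname, port and debug are taken from the parameter list above, while hookup is inferred from the return description and may not exist under that name in every version.

from aiida.restapi.run_api import run_api

run_api(hostname='127.0.0.1', port=5000, debug=False, hookup=True)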