# aiida.orm.nodes package¶

Module with Node sub classes for data and processes.

class aiida.orm.nodes.ArrayData(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

Store a set of arrays on disk (rather than in the database) in an efficient way using numpy.save() (therefore, this class requires numpy to be installed).

Each array is stored within the Node folder as a different .npy file.

Note

Before storing, no caching is done: each get_array() call re-reads the array from disk. Once the ArrayData node has been stored, the array is cached in memory after the first read, and the cached copy is used thereafter. If too much RAM is used, you can clear the cache with the clear_internal_cache() method.
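
The caching policy described in the note can be sketched in plain Python (a simplified stand-in for illustration only, not the actual AiiDA implementation; the `files` dict simulates the .npy files in the node's repository):

```python
class CachedArrayStore:
    """Simplified sketch of ArrayData's post-store read cache."""

    def __init__(self, files, stored=False):
        self._files = files          # simulates the .npy files on disk
        self._stored = stored        # True once the node has been stored
        self._cached_arrays = {}
        self.disk_reads = 0          # instrumentation for the example

    def get_array(self, name):
        # Before storing: always re-read from "disk", no caching.
        if not self._stored:
            self.disk_reads += 1
            return self._files[name]
        # After storing: read once from disk, then serve from the cache.
        if name not in self._cached_arrays:
            self.disk_reads += 1
            self._cached_arrays[name] = self._files[name]
        return self._cached_arrays[name]

    def clear_internal_cache(self):
        """Drop cached arrays to free RAM; the next access re-reads from disk."""
        self._cached_arrays = {}
```
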

__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.data.array.array'
_abc_impl = <_abc_data object>
_arraynames_from_files()[source]

Return a list of all arrays stored in the node, listing the files (and not relying on the properties).

_arraynames_from_properties()[source]

Return a list of all arrays stored in the node, listing the attributes starting with the correct prefix.

_cached_arrays = None
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.array.array.ArrayData (REPORT)>
_plugin_type_string = 'data.array.ArrayData.'
_query_type_string = 'data.array.'
_validate()[source]

Check that the list of .npy files stored inside the node and the list of properties match. This is only a name check; sizes are not checked, since that would require reloading all arrays, which may take time and memory.

array_prefix = 'array|'
clear_internal_cache()[source]

Clear the internal memory cache where arrays are stored after being read from disk (used to minimize disk reads). This is useful if you want to keep the node in memory but do not want to waste RAM caching the arrays.

delete_array(name)[source]

Delete an array from the node. Can only be called before storing.

Parameters

name – The name of the array to delete from the node.

get_array(name)[source]

Return an array stored in the node

Parameters

name – The name of the array to return.

get_arraynames()[source]

Return a list of all arrays stored in the node, listing the files (and not relying on the properties).

New in version 0.7: Renamed from arraynames

get_iterarrays()[source]

Iterator that returns tuples (name, array) for each array stored in the node.

New in version 1.0: Renamed from iterarrays

get_shape(name)[source]

Return the shape of an array (read from the value cached in the properties for efficiency reasons).

Parameters

name – The name of the array.

initialize()[source]

Initialize internal variables for the backend node

This needs to be called explicitly in each specific subclass implementation of the init.

set_array(name, array)[source]

Store a new numpy array inside the node. Possibly overwrite the array if it already existed.

Internally, it stores a name.npy file in numpy format.

Parameters
• name – The name of the array.

• array – The numpy array to store.

class aiida.orm.nodes.BandsData(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

Class to handle bands data

__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.data.array.bands'
_abc_impl = <_abc_data object>
_get_band_segments(cartesian)[source]

Return the band segments.

_get_bandplot_data(cartesian, prettify_format=None, join_symbol=None, get_segments=False, y_origin=0.0)[source]

Get data to plot a band structure

Parameters
• cartesian – if True, distances (for the x-axis) are computed in cartesian coordinates, otherwise they are computed in reciprocal coordinates. cartesian=True will fail if no cell has been set.

• prettify_format – by default, strings are not prettified. If you want to prettify them, pass a valid prettify_format string (see valid options in the docstring of :py:func:prettify_labels).

• join_symbol – by default, strings are not joined. If you pass a string, it is used to join labels that are closer than a given threshold. The most typical string is the pipe symbol: |.

• get_segments – if True, also computes the band split into segments

• y_origin – if present, shift the bands so that the specified value sits at y=0

Returns

a plot_info dictionary, whose keys are: x (array of distances for the x axis of the plot); y (array of bands); labels (list of tuples in the format (float x value of the label, label string)); band_type_idx (array containing an index for each band: if there is only one spin, it is an array of zeros, of length equal to the number of bands at each point; if there are two spins, it is an array of zeros or ones depending on the type of spin; the length is always equal to the total number of bands per kpoint).
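
The band_type_idx entry described above can be illustrated with a small helper (a hypothetical sketch built from the documented shapes, not AiiDA code):

```python
def band_type_idx(num_bands, num_spins):
    """Build the per-band spin index described by _get_bandplot_data.

    One spin: all zeros, length equal to the number of bands.
    Two spins: zeros for the first spin channel, ones for the second,
    so the length equals the total number of bands per kpoint.
    """
    if num_spins == 1:
        return [0] * num_bands
    if num_spins == 2:
        return [0] * num_bands + [1] * num_bands
    raise ValueError('only 1 or 2 spin channels are supported')
```
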

static _get_mpl_body_template(paths)[source]
Parameters

paths – paths of k-points

_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.array.bands.BandsData (REPORT)>
_matplotlib_get_dict(main_file_name='', comments=True, title='', legend=None, legend2=None, y_max_lim=None, y_min_lim=None, y_origin=0.0, prettify_format=None, **kwargs)[source]

Prepare the data to send to the python-matplotlib plotting script.

Parameters
• comments – if True, print comments (if it makes sense for the given format)

• plot_info – a dictionary

• setnumber_offset – an offset to be applied to all set numbers (i.e. s0 is replaced by s[offset], s1 by s[offset+1], etc.)

• color_number – the color number for lines, symbols, error bars and filling (should be less than the parameter MAX_NUM_AGR_COLORS defined below)

• title – the title

• legend – the legend (applied only to the first of the set)

• legend2 – the legend for second-type spins (applied only to the first of the set)

• y_max_lim – the maximum on the y axis (if None, put the maximum of the bands)

• y_min_lim – the minimum on the y axis (if None, put the minimum of the bands)

• y_origin – the new origin of the y axis -> all bands are replaced by bands-y_origin

• prettify_format – if None, use the default prettify format. Otherwise specify a string with the prettifier to use.

• kwargs – additional customization variables; only a subset is accepted, see the internal variable valid_additional_keywords

_plugin_type_string = 'data.array.bands.BandsData.'
_prepare_agr(main_file_name='', comments=True, setnumber_offset=0, color_number=1, color_number2=2, legend='', title='', y_max_lim=None, y_min_lim=None, y_origin=0.0, prettify_format=None)[source]

Prepare an xmgrace agr file.

Parameters
• comments – if True, print comments (if it makes sense for the given format)

• plot_info – a dictionary

• setnumber_offset – an offset to be applied to all set numbers (i.e. s0 is replaced by s[offset], s1 by s[offset+1], etc.)

• color_number – the color number for lines, symbols, error bars and filling (should be less than the parameter MAX_NUM_AGR_COLORS defined below)

• color_number2 – the color number for lines, symbols, error bars and filling for the second-type spins (should be less than the parameter MAX_NUM_AGR_COLORS defined below)

• legend – the legend (applied only to the first set)

• title – the title

• y_max_lim – the maximum on the y axis (if None, put the maximum of the bands); applied after shifting the origin by y_origin

• y_min_lim – the minimum on the y axis (if None, put the minimum of the bands); applied after shifting the origin by y_origin

• y_origin – the new origin of the y axis -> all bands are replaced by bands-y_origin

• prettify_format – if None, use the default prettify format. Otherwise specify a string with the prettifier to use.

_prepare_agr_batch(main_file_name='', comments=True, prettify_format=None)[source]

Prepare two files, data and batch, to be plotted with xmgrace as: xmgrace -batch file.dat

Parameters
• main_file_name – if the user asks to write the main content on a file, this contains the filename. This should be used to infer a good filename for the additional files. In this case, we remove the extension, and add ‘_data.dat’

• comments – if True, print comments (if it makes sense for the given format)

• prettify_format – if None, use the default prettify format. Otherwise specify a string with the prettifier to use.

_prepare_dat_blocks(main_file_name='', comments=True)[source]

Format suitable for gnuplot using blocks. Columns with x and y (path and band energy). Several blocks, separated by two empty lines, one per energy band.

Parameters

comments – if True, print comments (if it makes sense for the given format)
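
The block layout described for _prepare_dat_blocks can be sketched as follows (a simplified stand-in using plain lists; the real method reads the stored band arrays):

```python
def dat_blocks(distances, bands):
    """Format bands in the gnuplot 'blocks' layout: one (x, y) column pair
    per energy band, with blocks separated by two empty lines.

    `distances` is the list of x values; `bands` is a list of per-kpoint
    lists of band energies.
    """
    num_bands = len(bands[0])
    blocks = []
    for band in range(num_bands):
        lines = [f'{x:.6f}\t{energies[band]:.6f}'
                 for x, energies in zip(distances, bands)]
        blocks.append('\n'.join(lines))
    # Two empty lines between blocks means three consecutive newlines.
    return '\n\n\n'.join(blocks) + '\n'
```
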

_prepare_dat_multicolumn(main_file_name='', comments=True)[source]

Write an N x M matrix. The first column is the distance between kpoints; the other columns are the bands. The header contains the number of kpoints and the number of bands (commented).

Parameters

comments – if True, print comments (if it makes sense for the given format)

_prepare_gnuplot(main_file_name=None, title='', comments=True, prettify_format=None, y_max_lim=None, y_min_lim=None, y_origin=0.0)[source]

Prepare a gnuplot script to plot the bands, with the .dat file returned as an independent file.

Parameters
• main_file_name – if the user asks to write the main content on a file, this contains the filename. This should be used to infer a good filename for the additional files. In this case, we remove the extension, and add ‘_data.dat’

• title – if specified, add a title to the plot

• comments – if True, print comments (if it makes sense for the given format)

• prettify_format – if None, use the default prettify format. Otherwise specify a string with the prettifier to use.

_prepare_json(main_file_name='', comments=True)[source]

Prepare a JSON file in a format compatible with the AiiDA band visualizer

Parameters

comments – if True, print comments (if it makes sense for the given format)

_prepare_mpl_pdf(main_file_name='', *args, **kwargs)[source]

Prepare a python script using matplotlib to plot the bands, with the JSON returned as an independent file.

For the possible parameters, see documentation of _matplotlib_get_dict()

_prepare_mpl_png(main_file_name='', *args, **kwargs)[source]

Prepare a python script using matplotlib to plot the bands, with the JSON returned as an independent file.

For the possible parameters, see documentation of _matplotlib_get_dict()

_prepare_mpl_singlefile(*args, **kwargs)[source]

Prepare a python script using matplotlib to plot the bands

For the possible parameters, see documentation of _matplotlib_get_dict()

_prepare_mpl_withjson(main_file_name='', *args, **kwargs)[source]

Prepare a python script using matplotlib to plot the bands, with the JSON returned as an independent file.

For the possible parameters, see documentation of _matplotlib_get_dict()

_query_type_string = 'data.array.bands.'
_set_pbc(value)[source]

Validate the pbc, then store them.

_validate_bands_occupations(bands, occupations=None, labels=None)[source]

Validate the list of bands and of occupations before storage. Kpoints must be set in advance. Bands and occupations must be convertible into arrays of Nkpoints x Nbands floats or Nspins x Nkpoints x Nbands; Nkpoints must correspond to the number of kpoints.
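
The shape rule described by _validate_bands_occupations can be sketched with plain nested lists (a simplified illustration, not the actual implementation, which operates on numpy arrays):

```python
def validate_bands_shape(bands, num_kpoints):
    """Classify `bands` as (nkpoints x nbands) or (nspins x nkpoints x nbands),
    requiring nkpoints to match the number of stored kpoints."""

    def is_2d(data):
        # A valid 2D block: num_kpoints rows, all the same length, scalar entries.
        return (len(data) == num_kpoints
                and all(isinstance(row, (list, tuple)) for row in data)
                and len({len(row) for row in data}) == 1
                and not any(isinstance(value, (list, tuple))
                            for row in data for value in row))

    if is_2d(bands):
        return 'nkpoints x nbands'
    if all(isinstance(spin, (list, tuple)) and is_2d(spin) for spin in bands):
        return 'nspins x nkpoints x nbands'
    raise ValueError('bands cannot be interpreted as a valid array of energies')
```
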

property array_labels

Get the labels associated with the band arrays

get_bands(also_occupations=False, also_labels=False)[source]

Return an array (nkpoints x num_bands or nspins x nkpoints x num_bands) of energies.

Parameters

also_occupations – if True, also return the occupations array. Default = False

set_bands(bands, units=None, occupations=None, labels=None)[source]

Set an array of band energies of dimension (nkpoints x nbands). Kpoints must be set in advance. Can contain floats or None.

Parameters
• bands – a list of nkpoints lists of nbands bands, or a 2D array of shape (nkpoints x nbands), with band energies for each kpoint

• units – optional, energy units

• occupations – optional, a 2D list or array of floats of the same shape as bands, with the occupation associated to each band

set_kpointsdata(kpointsdata)[source]

Load the kpoints from a kpoint object.

Parameters

kpointsdata – an instance of the KpointsData class

show_mpl(**kwargs)[source]

Call a show() command for the band structure using matplotlib. This uses internally the ‘mpl_singlefile’ format, with empty main_file_name.

Other kwargs are passed to self._exportcontent.

property units

Units in which the band energies are stored. A string.

class aiida.orm.nodes.BaseType(*args, **kwargs)[source]

Data sub class to be used as a base for data containers that represent base python data types.

__abstractmethods__ = frozenset({})
__eq__(other)[source]

Fallback equality comparison by uuid (can be overridden by specific types)

__hash__ = None
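
A minimal sketch of a uuid-based fallback comparison like the one described for __eq__ (NodeLike is a hypothetical stand-in, not an AiiDA class):

```python
import uuid

class NodeLike:
    """Hypothetical stand-in: equality falls back to comparing uuids."""

    def __init__(self):
        self.uuid = str(uuid.uuid4())

    def __eq__(self, other):
        if not isinstance(other, NodeLike):
            return NotImplemented
        return self.uuid == other.uuid

    # Defining __eq__ without __hash__ makes the class unhashable
    # (__hash__ is set to None), as in the attribute listing above.
```
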
__init__(*args, **kwargs)[source]
Parameters

backend_entity (aiida.orm.implementation.entities.BackendEntity) – the backend model supporting this entity

__module__ = 'aiida.orm.nodes.data.base'
__ne__(other)[source]

Return self!=value.

__str__()[source]

Return str(self).

_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.base.BaseType (REPORT)>
_plugin_type_string = 'data.base.BaseType.'
_query_type_string = 'data.base.'
new(value=None)[source]
property value
class aiida.orm.nodes.Bool(*args, **kwargs)[source]

Data sub class to represent a boolean value.

__abstractmethods__ = frozenset({})
__bool__()[source]
__int__()[source]
__module__ = 'aiida.orm.nodes.data.bool'
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.bool.Bool (REPORT)>
_plugin_type_string = 'data.bool.Bool.'
_query_type_string = 'data.bool.'
_type

alias of builtins.bool

class aiida.orm.nodes.CalcFunctionNode(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

ORM class for all nodes representing the execution of a calcfunction.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.process.calculation.calcfunction'
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.process.calculation.calcfunction.CalcFunctionNode (REPORT)>
_plugin_type_string = 'process.calculation.calcfunction.CalcFunctionNode.'
_query_type_string = 'process.calculation.calcfunction.'
validate_outgoing(target: Node, link_type: aiida.common.links.LinkType, link_label: str) → None[source]

Validate adding a link of the given type from ourself to a given node.

A calcfunction cannot return Data, so if we receive an outgoing link to a stored Data node, that means the user created a Data node within our function body and stored it themselves or they are returning an input node. The latter use case is reserved for @workfunctions, as they can have RETURN links.

Parameters
• target – the node to which the link is going

Raises
• TypeError – if target is not a Node instance or link_type is not a LinkType enum

• ValueError – if the proposed link is invalid

class aiida.orm.nodes.CalcJobNode(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

ORM class for all nodes representing the execution of a CalcJob.

CALC_JOB_STATE_KEY = 'state'
REMOTE_WORKDIR_KEY = 'remote_workdir'
RETRIEVE_LIST_KEY = 'retrieve_list'
RETRIEVE_SINGLE_FILE_LIST_KEY = 'retrieve_singlefile_list'
RETRIEVE_TEMPORARY_LIST_KEY = 'retrieve_temporary_list'
SCHEDULER_DETAILED_JOB_INFO_KEY = 'detailed_job_info'
SCHEDULER_JOB_ID_KEY = 'job_id'
SCHEDULER_LAST_CHECK_TIME_KEY = 'scheduler_lastchecktime'
SCHEDULER_LAST_JOB_INFO_KEY = 'last_job_info'
SCHEDULER_STATE_KEY = 'scheduler_state'
__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.process.calculation.calcjob'
_abc_impl = <_abc_data object>
_get_objects_to_hash() → List[Any][source]

Return a list of objects which should be included in the hash.

This method is purposefully overridden from the base Node class, because we do not want to include the repository folder in the hash. The reason is that the hash of this node is computed in the store method, at which point the input files that will be stored in the repository have not yet been generated. Including these anyway in the computation of the hash would mean that the hash of the node would change as soon as the process has started and the input files have been written to the repository.

_hash_ignored_attributes: Tuple[str, ...] = ('queue_name', 'account', 'qos', 'priority', 'max_wallclock_seconds', 'max_memory_kb')
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.process.calculation.calcjob.CalcJobNode (REPORT)>
_plugin_type_string = 'process.calculation.calcjob.CalcJobNode.'
_query_type_string = 'process.calculation.calcjob.'
property _raw_input_folder

Get the input folder object.

Returns

the input folder object.

Raise

NotExistent: if the raw folder hasn’t been created yet

_repository_base_path = 'raw_input'
_tools = None
_updatable_attributes: Tuple[str, ...] = ('sealed', 'paused', 'checkpoints', 'exception', 'exit_message', 'exit_status', 'process_label', 'process_state', 'process_status', 'state', 'remote_workdir', 'retrieve_list', 'retrieve_temporary_list', 'retrieve_singlefile_list', 'job_id', 'scheduler_state', 'scheduler_lastchecktime', 'last_job_info', 'detailed_job_info')
static _validate_retrieval_directive(directives: Sequence[Union[str, Tuple[str, str, str]]]) → None[source]

Validate a list or tuple of file retrieval directives.

Parameters

directives – a list or tuple of file retrieval directives

Raises

ValueError – if the format of the directives is invalid
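
The directive format implied by the signature (a plain string, or a tuple/list of length three) can be checked with a sketch like this (a simplified illustration; the real validator performs stricter per-field checks):

```python
def validate_retrieval_directives(directives):
    """Sketch of the documented rule: each retrieval directive is either a
    string, or a tuple/list of exactly three entries (per the type hint
    Sequence[Union[str, Tuple[str, str, str]]])."""
    for directive in directives:
        if isinstance(directive, str):
            continue
        if isinstance(directive, (tuple, list)) and len(directive) == 3:
            continue
        raise ValueError(f'invalid retrieval directive: {directive!r}')
```
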

delete_state() → None[source]

Delete the calculation job state attribute if it exists.

get_authinfo() → AuthInfo[source]

Return the AuthInfo that is configured for the Computer set for this node.

Returns

AuthInfo

get_builder_restart() → ProcessBuilder[source]

Return a ProcessBuilder that is ready to relaunch the same CalcJob that created this node.

The process class will be set based on the process_type of this node and the inputs of the builder will be prepopulated with the inputs registered for this node. This functionality is very useful if a process has completed and you want to relaunch it with slightly different inputs.

In addition to prepopulating the input nodes, which is implemented by the base ProcessNode class, here we also add the options that were passed in the metadata input of the CalcJob process.

get_description() → str[source]

Return a description of the node based on its properties.

get_detailed_job_info() → Optional[dict][source]

Return the detailed job info dictionary.

The scheduler is polled for the detailed job info after the job is completed and ready to be retrieved.

Returns

the dictionary with detailed job info if defined or None

get_job_id() → Optional[str][source]

Return job id that was assigned to the calculation by the scheduler.

Returns

the string representation of the scheduler job id

get_last_job_info() → Optional[JobInfo][source]

Return the last information asked to the scheduler about the status of the job.

The last job info is updated on every poll of the scheduler, except for the final poll when the job drops from the scheduler’s job queue. For completed jobs, the last job info therefore contains the “second-to-last” job info that still shows the job as running. Please use get_detailed_job_info() instead.

Returns

a JobInfo object (that closely resembles a dictionary) or None.

get_option(name: str) → Optional[Any][source]

Return the value of an option that was set for this CalcJobNode

Parameters

name – the option name

Returns

the option value or None

Raises

ValueError – for an unknown option

get_options() → Dict[str, Any][source]

Return the dictionary of options set for this CalcJobNode

Returns

dictionary of the options and their values

get_parser_class() → Optional[Type[Parser]][source]

Return the output parser object for this calculation or None if no parser is set.

Returns

a Parser class.

Raises

aiida.common.exceptions.EntryPointError – if the parser entry point can not be resolved.

get_remote_workdir() → Optional[str][source]

Return the path to the remote (on cluster) scratch folder of the calculation.

Returns

a string with the remote path

get_retrieve_list() → Optional[Sequence[Union[str, Tuple[str, str, str]]]][source]

Return the list of files/directories to be retrieved on the cluster after the calculation has completed.

Returns

a list of file directives

get_retrieve_singlefile_list()[source]

Return the list of files to be retrieved on the cluster after the calculation has completed.

Returns

list of single file retrieval directives

Deprecated since version 1.0.0: Will be removed in v2.0.0, use aiida.orm.nodes.process.calculation.calcjob.CalcJobNode.get_retrieve_temporary_list() instead.

get_retrieve_temporary_list() → Optional[Sequence[Union[str, Tuple[str, str, str]]]][source]

Return list of files to be retrieved from the cluster which will be available during parsing.

Returns

a list of file directives

get_retrieved_node() → Optional[FolderData][source]

Return the retrieved data folder.

Returns

the retrieved FolderData node, or None if it does not exist yet

get_scheduler_lastchecktime() → Optional[datetime.datetime][source]

Return the time of the last update of the scheduler state by the daemon or None if it was never set.

Returns

a datetime object or None

get_scheduler_state() → Optional[JobState][source]

Return the status of the calculation according to the cluster scheduler.

Returns

a JobState enum instance.

get_scheduler_stderr() → Optional[AnyStr][source]

Return the scheduler stderr output if the calculation has finished and been retrieved, None otherwise.

Returns

scheduler stderr output or None

get_scheduler_stdout() → Optional[AnyStr][source]

Return the scheduler stdout output if the calculation has finished and been retrieved, None otherwise.

Returns

scheduler stdout output or None

get_state() → Optional[aiida.common.datastructures.CalcJobState][source]

Return the calculation job active sub state.

The calculation job state serves to give more granular state information to CalcJobs, in addition to the generic process state, while the calculation job is active. The state can take values from the enumeration defined in aiida.common.datastructures.CalcJobState and can be used to query for calculation jobs in specific active states.

Returns

instance of aiida.common.datastructures.CalcJobState or None if invalid value, or not set
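
The "enum member or None if invalid/unset" behaviour of get_state() can be sketched with a stand-in stdlib Enum (the member names here are illustrative assumptions, not the authoritative CalcJobState definition):

```python
from enum import Enum

class CalcJobStateSketch(Enum):
    """Hypothetical stand-in for aiida.common.datastructures.CalcJobState."""
    UPLOADING = 'uploading'
    SUBMITTING = 'submitting'
    WITHSCHEDULER = 'withscheduler'
    RETRIEVING = 'retrieving'
    PARSING = 'parsing'

def get_state(raw_attribute):
    """Return the enum member for the stored attribute value, or None if
    the value is unset or invalid (mirroring the documented behaviour)."""
    try:
        return CalcJobStateSketch(raw_attribute)
    except ValueError:
        return None
```
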

get_transport() → Transport[source]

Return the transport for this calculation.

Returns

Transport configured with the AuthInfo associated to the computer of this node

link_label_retrieved = 'retrieved'

Return the link label used for the retrieved FolderData node.

property res

Returns

an instance of the CalcJobResultManager.

Note

a practical example of how it is meant to be used: let’s say that there is a key ‘energy’ in the dictionary of the parsed results which contains a list of floats. The command calc.res.energy will return that list.

set_detailed_job_info(detailed_job_info: Optional[dict]) → None[source]

Set the detailed job info dictionary.

Parameters

detailed_job_info – a dictionary with metadata with the accounting of a completed job

set_job_id(job_id: Union[int, str]) → None[source]

Set the job id that was assigned to the calculation by the scheduler.

Note

the id will always be stored as a string

Parameters

job_id – the id assigned by the scheduler after submission

set_last_job_info(last_job_info: JobInfo) → None[source]

Set the last job info.

Parameters

last_job_info – a JobInfo object

set_option(name: str, value: Any) → None[source]

Set an option to the given value

Parameters
• name – the option name

• value – the value to set

Raises
• ValueError – for an unknown option

• TypeError – for values with invalid type

set_options(options: Dict[str, Any]) → None[source]

Set the options for this CalcJobNode

Parameters

options – dictionary of option and their values to set

set_remote_workdir(remote_workdir: str) → None[source]

Set the absolute path to the working directory on the remote computer where the calculation is run.

Parameters

remote_workdir – absolute filepath to the remote working directory

set_retrieve_list(retrieve_list: Sequence[Union[str, Tuple[str, str, str]]]) → None[source]

Set the retrieve list.

This list of directives will instruct the daemon which files to retrieve after the calculation has completed.

Parameters

retrieve_list – list or tuple of with filepath directives

set_retrieve_singlefile_list(retrieve_singlefile_list)[source]

Set the retrieve singlefile list.

The files will be stored as SinglefileData instances and added as output nodes to this calculation node. The format of a single file directive is a tuple or list of length 3 with the following entries:

1. the link label under which the file will be added

2. the SinglefileData class or sub class to use to store

3. the filepath relative to the remote working directory of the calculation

Parameters

retrieve_singlefile_list – list or tuple of single file directives

Deprecated since version 1.0.0: Will be removed in v2.0.0. Use set_retrieve_temporary_list() instead.

set_retrieve_temporary_list(retrieve_temporary_list: Sequence[Union[str, Tuple[str, str, str]]]) → None[source]

Set the retrieve temporary list.

The retrieve temporary list stores files that are retrieved after completion and made available during parsing; they are deleted as soon as parsing has completed.

Parameters

retrieve_temporary_list – list or tuple of with filepath directives

set_scheduler_state(state: JobState) → None[source]

Set the scheduler state.

Parameters

state – an instance of JobState

set_state(state: aiida.common.datastructures.CalcJobState) → None[source]

Set the calculation active job state.

Raise

ValueError if state is invalid

property tools

Return the calculation tools that are registered for the process type associated with this calculation.

If the entry point name stored in the process_type of the CalcJobNode has an accompanying entry point in the aiida.tools.calculations entry point category, it will attempt to load the entry point and instantiate it passing the node to the constructor. If the entry point does not exist, cannot be resolved or loaded, a warning will be logged and the base CalculationTools class will be instantiated and returned.

Returns

CalculationTools instance

class aiida.orm.nodes.CalculationNode(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

Base class for all nodes representing the execution of a calculation process.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.process.calculation.calculation'
_abc_impl = <_abc_data object>
_cachable = True
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.process.calculation.calculation.CalculationNode (REPORT)>
_plugin_type_string = 'process.calculation.CalculationNode.'
_query_type_string = 'process.calculation.'
_storable = True
_unstorable_message = 'storing for this node has been disabled'
property inputs

The returned Manager allows you to easily explore the nodes connected to this node via an incoming INPUT_CALC link. The incoming nodes are reachable by their link labels which are attributes of the manager.

property outputs

The returned Manager allows you to easily explore the nodes connected to this node via an outgoing CREATE link. The outgoing nodes are reachable by their link labels which are attributes of the manager.

class aiida.orm.nodes.CifData(ase=None, file=None, filename=None, values=None, source=None, scan_type=None, parse_policy=None, **kwargs)[source]

Wrapper for Crystallographic Interchange File (CIF)

Note

the file (physical) is held as the authoritative source of information, so all conversions are done through the physical file: when setting ase or values, a physical CIF file is generated first, then the values are updated from that file.

_PARSE_POLICIES = ('eager', 'lazy')
_PARSE_POLICY_DEFAULT = 'eager'
_SCAN_TYPES = ('standard', 'flex')
_SCAN_TYPE_DEFAULT = 'standard'
_SET_INCOMPATIBILITIES = [('ase', 'file'), ('ase', 'values'), ('file', 'values')]
__abstractmethods__ = frozenset({})
__init__(ase=None, file=None, filename=None, values=None, source=None, scan_type=None, parse_policy=None, **kwargs)[source]

Construct a new instance and set the contents to that of the file.

Parameters
• file – an absolute filepath or filelike object for the CIF file. Hint: pass io.BytesIO(b"my string") to construct the SinglefileData directly from a string.

• filename – specify filename to use (defaults to name of provided file).

• ase – ASE Atoms object to construct the CifData instance from.

• values – PyCifRW CifFile object to construct the CifData instance from.

• source

• scan_type – scan type string for parsing with PyCIFRW (‘standard’ or ‘flex’). See CifFile.ReadCif

• parse_policy – ‘eager’ (parse CIF file on set_file) or ‘lazy’ (defer parsing until needed)

__module__ = 'aiida.orm.nodes.data.cif'
_abc_impl = <_abc_data object>
_ase = None
_get_object_ase()[source]

Converts CifData to ase.Atoms

Returns

an ase.Atoms object

_get_object_pycifrw()[source]

Converts CifData to PyCIFRW.CifFile

Returns

a PyCIFRW.CifFile object

_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.cif.CifData (REPORT)>
_plugin_type_string = 'data.cif.CifData.'
_prepare_cif(**kwargs)[source]

Return CIF string of CifData object.

If parsed values are present, a CIF string is created and written to file. If no parsed values are present, the CIF string is read from file.

_query_type_string = 'data.cif.'
_validate()[source]

Validates MD5 hash of CIF file.

_values = None
property ase

ASE object, representing the CIF.

Note

requires ASE module.

classmethod from_md5(md5)[source]

Return a list of all CIF files that match a given MD5 hash.

Note

the hash has to be stored in a _md5 attribute, otherwise the CIF file will not be found.

generate_md5()[source]

Computes and returns MD5 hash of the CIF file.
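
MD5 generation of this kind can be sketched with hashlib (a generic illustration of computing a file's MD5 hex digest in chunks, not the exact CifData code):

```python
import hashlib

def generate_md5(filepath):
    """Compute the MD5 hex digest of a file's contents, reading in chunks
    so that large CIF files do not have to fit in memory at once."""
    digest = hashlib.md5()
    with open(filepath, 'rb') as handle:
        for chunk in iter(lambda: handle.read(8192), b''):
            digest.update(chunk)
    return digest.hexdigest()
```
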

get_ase(**kwargs)[source]

Returns ASE object, representing the CIF. This function differs from the property ase by the possibility to pass keyword arguments (kwargs) to ase.io.cif.read_cif().

Note

requires ASE module.

get_formulae(mode='sum', custom_tags=None)[source]

Return chemical formulae specified in CIF file.

Note: This does not compute the formula, it only reads it from the appropriate tag. Use refine_inline to compute formulae.

classmethod get_or_create(filename, use_first=False, store_cif=True)[source]

Pass the same parameters as the __init__; if a file with the same md5 is found, that CifData is returned.

Parameters
• filename – an absolute filename on disk

• use_first – if False (default), raise an exception if more than one CIF file is found. If it is True, instead, use the first available CIF file.

• store_cif (bool) – If false, the CifData objects are not stored in the database. default=True.

Return (cif, created)

where cif is the CifData object, and created is True if the object was created, or False if the object was retrieved from the DB.
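
The get-or-create pattern described above can be sketched generically (`registry` and `factory` are hypothetical stand-ins for the database query by md5 and the node constructor):

```python
def get_or_create(md5, registry, factory):
    """Return (node, created): an existing entry with the same md5 if one
    exists in `registry`, otherwise a freshly created node from `factory`."""
    if md5 in registry:
        return registry[md5], False
    node = factory()
    registry[md5] = node
    return node, True
```
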

get_spacegroup_numbers()[source]

Get the spacegroup international number.

get_structure(converter='pymatgen', store=False, **kwargs)[source]

New in version 1.0: Renamed from _get_aiida_structure

Parameters
• converter – specify the converter. Default ‘pymatgen’.

• store – if True, intermediate calculation gets stored in the AiiDA database for record. Default False.

• primitive_cell – if True, primitive cell is returned, conventional cell if False. Default False.

• occupancy_tolerance – If total occupancy of a site is between 1 and occupancy_tolerance, the occupancies will be scaled down to 1. (pymatgen only)

• site_tolerance – This tolerance is used to determine if two sites are sitting in the same position, in which case they will be combined to a single disordered site. Defaults to 1e-4. (pymatgen only)

Returns
property has_atomic_sites

Returns whether there are any atomic sites defined in the cif data. That is, it checks the values of all the _atom_site_fract_* tags: if they are all equal to ?, no relevant atomic sites are defined and the property returns False; in all other cases it returns True.

Returns

True when at least one atomic site fractional coordinate is not equal to ? and False otherwise

property has_attached_hydrogens

Check if there are hydrogens without coordinates, specified as attached to the atoms of the structure.

Returns

True if there are attached hydrogens, False otherwise.

property has_partial_occupancies

Return whether the cif data contains partial occupancies

A partial occupancy is defined as a site with an occupancy that differs from unity, within a precision of 1E-6

Returns

True if there are partial occupancies, False otherwise

property has_undefined_atomic_sites

Return whether the cif data contains any undefined atomic sites.

An undefined atomic site is defined as a site where at least one of the fractional coordinates specified in the _atom_site_fract_* tags cannot be successfully interpreted as a float. If the cif data contains any site that matches this description, or if it does not contain any atomic site tags at all, the cif data is said to have undefined atomic sites.

Returns

boolean, True if no atomic sites are defined or if any of the defined sites contain undefined positions and False otherwise

property has_unknown_species

Returns whether the cif contains atomic species that are not recognized by AiiDA.

The known species are taken from the elements dictionary in aiida.common.constants, with the exception of the “unknown” placeholder element with symbol ‘X’, as this could not be used to construct a real structure. If any formula of the cif data contains species that are not in that elements dictionary, the function returns True; in all other cases it returns False. If no formulae are found, it returns None.

Returns

True when there are unknown species in any of the formulae, False if not, None if no formula found
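The check can be sketched as follows; the regex and the element set are simplified stand-ins (the real check uses the full elements dictionary from aiida.common.constants and the formulae parsed from the CIF):

```python
import re

KNOWN_SYMBOLS = {'H', 'C', 'N', 'O', 'Si', 'Fe'}  # stand-in for aiida.common.constants

def has_unknown_species(formulae):
    """None if there are no formulae, True if any species symbol in any
    formula is not a known element, False otherwise (simplified sketch)."""
    if not formulae:
        return None
    for formula in formulae:
        # one capital letter optionally followed by one lowercase letter
        symbols = re.findall(r'[A-Z][a-z]?', formula)
        if any(symbol not in KNOWN_SYMBOLS for symbol in symbols):
            return True
    return False
```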

parse(scan_type=None)[source]

Parses CIF file and sets attributes.

Parameters

scan_type – See set_scan_type

static read_cif(fileobj, index=-1, **kwargs)[source]

A wrapper method that simulates the behavior of the old function ase.io.cif.read_cif by using the new generic ase.io.read function.

Somewhere between ASE 3.12 and 3.17 the tag concept was bundled with each Atom object. When reading a CIF file, this is incremented and signifies the atomic species, even though the CIF file does not have specific tags embedded. On reading CIF files we thus force the ASE tag to zero for all Atom elements.

set_ase(aseatoms)[source]

Set the contents of the CifData starting from an ASE atoms object

Parameters

aseatoms – the ASE atoms object

set_file(file, filename=None)[source]

Set the file.

If the source is set and the MD5 checksum of new file is different from the source, the source has to be deleted.

Parameters
• file – filepath or filelike object of the CIF file to store. Hint: Pass io.BytesIO(b"my string") to construct the file directly from a string.

• filename – specify filename to use (defaults to name of provided file).
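The io.BytesIO hint works like this; a minimal, self-contained sketch showing only the construction of the file-like object (the actual set_file call needs a CifData node):

```python
import io

cif_text = b"data_example\n_cell_length_a 5.43\n"

# Build a file-like object directly from a byte string; an object like
# this can be passed wherever set_file expects a filelike argument.
fileobj = io.BytesIO(cif_text)
assert fileobj.read() == cif_text
```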

set_parse_policy(parse_policy)[source]

Set the parse policy.

Parameters

parse_policy – Either ‘eager’ (parse CIF file on set_file) or ‘lazy’ (defer parsing until needed)

set_scan_type(scan_type)[source]

Set the scan_type for PyCifRW.

The ‘flex’ scan_type of PyCifRW is faster for large CIF files but does not yet support the CIF2 format as of 02/2018. See the CifFile.ReadCif function

Parameters

scan_type – Either ‘standard’ or ‘flex’ (see _scan_types)

set_values(values)[source]

Set internal representation to values.

Warning: This also writes a new CIF file.

Parameters

values – PyCifRW CifFile object

Note

requires PyCifRW module.

store(*args, **kwargs)[source]

Store the node.

property values

PyCifRW structure, representing the CIF datablocks.

Note

requires PyCifRW module.

class aiida.orm.nodes.Code(remote_computer_exec=None, local_executable=None, input_plugin_name=None, files=None, **kwargs)[source]

A code entity. It can either be ‘local’, or ‘remote’.

• Local code: it is a collection of files/dirs (added using the add_path() method), where one file is flagged as executable (using the set_local_executable() method).

• Remote code: it is a pair (remotecomputer, remotepath_of_executable) set using the set_remote_computer_exec() method.

For both codes, one can set some code to be executed right before or right after the execution of the code, using the set_prepend_text() and set_append_text() methods (e.g., set_prepend_text() can be used to load specific modules required for the code to be run).

HIDDEN_KEY = 'hidden'
__abstractmethods__ = frozenset({})
__init__(remote_computer_exec=None, local_executable=None, input_plugin_name=None, files=None, **kwargs)[source]
Parameters

backend_entity (aiida.orm.implementation.entities.BackendEntity) – the backend model supporting this entity

__module__ = 'aiida.orm.nodes.data.code'
__str__()[source]

Return str(self).

_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.code.Code (REPORT)>
_plugin_type_string = 'data.code.Code.'
_query_type_string = 'data.code.'
_set_local()[source]

Set the code as a ‘local’ code, meaning that all the files belonging to the code will be copied to the cluster, and the file set with set_local_executable will be run.

It also deletes the flags related to the remote case (if any)

_set_remote()[source]

Set the code as a ‘remote’ code, meaning that the code itself has no files attached, but only a location on a remote computer (with an absolute path of the executable on the remote computer).

It also deletes the flags related to the local case (if any)

_validate()[source]

Perform validation of the Data object.

Note

validation of data source checks license and requires attribution to be provided in field ‘description’ of source in the case of any CC-BY* license. If such requirement is too strict, one can remove/comment it out.

can_run_on(computer)[source]

Return True if this code can run on the given computer, False otherwise.

Local codes can run on any machine; remote codes can run only on the machine on which they reside.

TODO: add filters to mask the remote machines on which a local code can run.
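The rule can be sketched with a minimal stand-in class (not the real Code API; the real method compares aiida.orm.Computer instances, here plain strings stand in for computer names):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodeSketch:
    """Stand-in illustrating the can_run_on rule."""
    is_local: bool
    computer: Optional[str] = None  # remote computer name, if any

    def can_run_on(self, computer: str) -> bool:
        if self.is_local:
            return True  # local codes are uploaded, so any machine works
        return self.computer == computer  # remote codes are tied to one machine
```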

property full_label

Get full label of this code.

Returns label of the form <code-label>@<computer-name>.

classmethod get(pk=None, label=None, machinename=None)[source]

Get a Code object with given identifier string, that can either be the numeric ID (pk), or the label (and machine name), if unique.

Parameters
• pk – the numeric ID (pk) for code

• label – the code label identifying the code to load

• machinename – the machine name where code is setup

Raises
get_append_text()[source]

Return the postexec_code, or an empty string if no post-exec code was defined.

get_builder()[source]

Create and return a new ProcessBuilder for the CalcJob class of the plugin configured for this code.

The configured calculation plugin class is defined by the get_input_plugin_name method.

Note

it also sets the builder.code value.

Returns

a ProcessBuilder instance with the code input already populated with ourselves

Raises
classmethod get_code_helper(label, machinename=None)[source]
Parameters
• label – the code label identifying the code to load

• machinename – the machine name where code is setup

Raises
get_computer_label()[source]

Get label of this code’s computer.

get_computer_name()[source]

Get label of this code’s computer.

Deprecated since version 1.4.0: Will be removed in v2.0.0, use the self.get_computer_label() method instead.

get_description()[source]

Return a string description of this Code instance.

Returns

string description of this Code instance

get_execname()[source]

Return the executable string to be put in the script. For local codes, it is ./LOCAL_EXECUTABLE_NAME. For remote codes, it is the absolute path to the executable.

classmethod get_from_string(code_string)[source]

Get a Code object with given identifier string in the format label@machinename. See the note below for details on the string detection algorithm.

Note

the (leftmost) ‘@’ symbol is always used to split code and computer name. Therefore do not use ‘@’ in the code name if you want to use this function (‘@’ symbols in the computer name are instead valid).

Parameters

code_string – the code string identifying the code to load

Raises
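The leftmost-'@' rule from the note above can be sketched in one line with str.partition (an illustrative helper, not the real implementation):

```python
def split_code_string(code_string: str):
    """Split 'label@machinename' on the leftmost '@'; any further '@'
    symbols stay inside the machine name. Returns (label, machinename),
    with machinename None if no '@' is present."""
    label, _, machinename = code_string.partition('@')
    return label, machinename or None
```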
get_full_text_info(verbose=False)[source]

Return a list of lists with human-readable, detailed information on this code.

Deprecated since version 1.4.0: Will be removed in v2.0.0.

Returns

list of lists where each entry consists of two elements: a key and a value

get_input_plugin_name()[source]

Return the name of the default input plugin (or None if no input plugin was set).

get_local_executable()[source]
get_prepend_text()[source]

Return the code that will be put in the scheduler script before the execution, or an empty string if no pre-exec code was defined.

get_remote_computer()[source]
get_remote_exec_path()[source]
property hidden

Determines whether the Code is hidden or not

hide()[source]

Hide the code (prevents it from showing in the verdi code list)

is_local()[source]

Return True if the code is ‘local’, False if it is ‘remote’ (see also documentation of the set_local and set_remote functions).

property label

Return the node label.

Returns

the label

classmethod list_for_plugin(plugin, labels=True)[source]

Return a list of valid code strings for a given plugin.

Parameters
• plugin – The string of the plugin.

• labels – if True, return a list of code names, otherwise return the code PKs (integers).

Returns

a list of string, with the code names if labels is True, otherwise a list of integers with the code PKs.

relabel(new_label, raise_error=True)[source]

Relabel this code.

Parameters
• new_label – new code label

• raise_error – Set to False in order to return a list of errors instead of raising them.

Deprecated since version 1.2.0: Will remove raise_error in v2.0.0. Use try/except instead.

reveal()[source]

Reveal the code (allows it to be shown in the verdi code list). By default, a code is revealed.

set_append_text(code)[source]

Pass a string of code that will be put in the scheduler script after the execution of the code.

set_files(files)[source]

Given a list of filenames (or a single filename string), add them to the path (all at level zero, i.e. without folders). Therefore, be careful with files that have the same name!

Todo

decide whether to check if the Code must be a local executable to be able to call this function.

set_input_plugin_name(input_plugin)[source]

Set the name of the default input plugin, to be used for the automatic generation of a new calculation.

set_local_executable(exec_name)[source]

Set the filename of the local executable. Implicitly set the code as local.

set_prepend_text(code)[source]

Pass a string of code that will be put in the scheduler script before the execution of the code.

set_remote_computer_exec(remote_computer_exec)[source]

Set the code as remote, and pass the computer on which it resides and the absolute path on that computer.

Parameters

remote_computer_exec – a tuple (computer, remote_exec_path), where computer is a aiida.orm.Computer and remote_exec_path is the absolute path of the main executable on remote computer.

class aiida.orm.nodes.Data(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

The base class for all Data nodes.

AiiDA Data classes are subclasses of Node and must support multiple inheritance.

Architecture note: Calculation plugins are responsible for converting raw output data from simulation codes to Data nodes. Data nodes are responsible for validating their content (see _validate method).

__abstractmethods__ = frozenset({})
__copy__()[source]

Copying a Data node is not supported; use copy.deepcopy or call Data.clone() instead.

__deepcopy__(memo)[source]

Create a clone of the Data node by piping through to the clone method and return the result.

Returns

an unstored clone of this Data node

__module__ = 'aiida.orm.nodes.data.data'
_abc_impl = <_abc_data object>
_export_format_replacements = {}
_exportcontent(fileformat, main_file_name='', **kwargs)[source]

Converts a Data node to one (or multiple) files.

Note: Export plugins should return utf8-encoded bytes, which can be directly dumped to file.

Parameters
• fileformat (str) – the extension, uniquely specifying the file format.

• main_file_name (str) – (empty by default) Can be used by plugin to infer sensible names for additional files, if necessary. E.g. if the main file is ‘../myplot.gnu’, the plugin may decide to store the dat file under ‘../myplot_data.dat’.

• kwargs – other parameters are passed down to the plugin

Returns

a tuple of length 2. The first element is the content of the output file. The second is a dictionary (possibly empty) in the format {filename: filecontent} for any additional files that should be produced.

Return type

(bytes, dict)

_get_converters()[source]

Get all implemented converter formats. The convention is to find all _get_object_… methods. Returns a list of strings.

_get_exporters()[source]

Get all implemented export formats. The convention is to find all _prepare_… methods. Returns a dictionary of method_name: method_function

_get_importers()[source]

Get all implemented import formats. The convention is to find all _parse_… methods. Returns a list of strings.

_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.data.Data (REPORT)>
_plugin_type_string = 'data.Data.'
_query_type_string = 'data.'
_source_attributes = ['db_name', 'db_uri', 'uri', 'id', 'version', 'extras', 'source_md5', 'description', 'license']
_storable = True
_unstorable_message = 'storing for this node has been disabled'
_validate()[source]

Perform validation of the Data object.

Note

validation of data source checks license and requires attribution to be provided in field ‘description’ of source in the case of any CC-BY* license. If such requirement is too strict, one can remove/comment it out.

clone()[source]

Create a clone of the Data node.

Returns

an unstored clone of this Data node

convert(object_format=None, *args)[source]

Convert the AiiDA Data node into another Python object

Parameters

object_format – Specify the output format

property creator

Return the creator of this node or None if it does not exist.

Returns

the creating node or None

export(path, fileformat=None, overwrite=False, **kwargs)[source]

Save a Data object to a file.

Parameters
• path – string with file name. Can be an absolute or relative path.

• fileformat – kind of format to use for the export. If not present, it will try to use the extension of the file name.

• overwrite – if set to True, overwrites file found at path. Default=False

• kwargs – additional parameters to be passed to the _exportcontent method

Returns

the list of files created
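The extension fallback described for fileformat can be sketched like this (an illustrative helper; the real export method additionally checks overwrite and dispatches to the _prepare_… plugins):

```python
import os

def infer_fileformat(path, fileformat=None):
    """If no explicit fileformat is given, derive it from the extension
    of the target file name (sketch of the export fallback)."""
    if fileformat is not None:
        return fileformat
    extension = os.path.splitext(path)[1].lstrip('.')
    if not extension:
        raise ValueError('cannot infer a file format without an extension')
    return extension
```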

classmethod get_export_formats()[source]

Get the list of valid export format strings

Returns

a list of valid formats

importfile(fname, fileformat=None)[source]

Populate a Data object from a file.

Parameters
• fname – string with file name. Can be an absolute or relative path.

• fileformat – kind of format to use for the import. If not present, it will try to use the extension of the file name.

importstring(inputstring, fileformat, **kwargs)[source]

Populate a Data object from a string.

Parameters
• inputstring – the string with the content to import.

• fileformat – a string (the extension) to describe the file format.

set_source(source)[source]

Sets the dictionary describing the source of Data object.

property source

Gets the dictionary describing the source of Data object. Possible fields:

• db_name: name of the source database.

• db_uri: URI of the source database.

• uri: URI of the object’s source. Should be a permanent link.

• id: object’s source identifier in the source database.

• version: version of the object’s source.

• extras: a dictionary with other fields for source description.

• source_md5: MD5 checksum of object’s source.

• description: human-readable free form description of the object’s source.

Note

some limitations for setting the data source exist, see _validate method.

Returns

dictionary describing the source of Data object.

class aiida.orm.nodes.Dict(**kwargs)[source]

Data sub class to represent a dictionary.

The dictionary contents of a Dict node are stored in the database as attributes. The dictionary can be initialized through the dict argument in the constructor. After construction, values can be retrieved and updated through the item getters and setters, respectively:

node['key'] = 'value'

Alternatively, the dict property returns an instance of the AttributeManager that can be used to get and set values through attribute notation:

node.dict.key = 'value'

Note that trying to set dictionary values directly on the node, e.g. node.key = value, will not work as intended. It will merely set the key attribute on the node instance, but will not be stored in the database. As soon as the node goes out of scope, the value will be lost.

It is also relevant to note here the difference in something being an “attribute of a node” (in the sense that it is stored in the “attribute” column of the database when the node is stored) and something being an “attribute of a python object” (in the sense of being able to modify and access it as if it was a property of the variable, e.g. node.key = value). This is true of all types of nodes, but it becomes more relevant for Dict nodes where one is constantly manipulating these attributes.

Finally, all dictionary mutations will be forbidden once the node is stored.
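The difference between item access and the dict attribute manager can be sketched with a minimal stand-in (not the real AttributeManager; it only shows why node.dict.key reaches the backing dictionary while an attribute set directly on a plain object would not):

```python
class AttributeManagerSketch:
    """Minimal stand-in: expose dictionary keys through attribute notation."""

    def __init__(self, dictionary):
        # bypass our own __setattr__ so the backing dict is stored normally
        object.__setattr__(self, '_dictionary', dictionary)

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        return self._dictionary[name]

    def __setattr__(self, name, value):
        # writes go into the backing dictionary, not the instance
        self._dictionary[name] = value

backing = {}                          # stand-in for the node attributes
manager = AttributeManagerSketch(backing)
manager.key = 'value'                 # analogous to node.dict.key = 'value'
```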

__abstractmethods__ = frozenset({})
__getitem__(key)[source]
__init__(**kwargs)[source]

Store a dictionary as a Node instance.

Usual rules for attribute names apply, in particular, keys cannot start with an underscore, or a ValueError will be raised.

Initial attributes can be changed, deleted or added as long as the node is not stored.

Parameters

dict – the dictionary to set

__module__ = 'aiida.orm.nodes.data.dict'
__setitem__(key, value)[source]
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.dict.Dict (REPORT)>
_plugin_type_string = 'data.dict.Dict.'
_query_type_string = 'data.dict.'
property dict

Return an instance of AttributeManager that transforms the dictionary into an attribute dict.

Note

this will allow one to do node.dict.key as well as node.dict[key].

Returns

an instance of the AttributeResultManager.

get_dict()[source]

Return a dictionary with the parameters currently set.

Returns

dictionary

keys()[source]

Iterator of valid keys stored in the Dict object.

Returns

iterator over the keys of the current dictionary

set_dict(dictionary)[source]

Replace the current dictionary with another one.

Parameters

dictionary – dictionary to set

update_dict(dictionary)[source]

Update the current dictionary with the keys provided in the dictionary.

Note

works exactly as dict.update() where new keys are simply added and existing keys are overwritten.

Parameters

dictionary – a dictionary with the keys to substitute

class aiida.orm.nodes.Float(*args, **kwargs)[source]

Data sub class to represent a float value.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.data.float'
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.float.Float (REPORT)>
_plugin_type_string = 'data.float.Float.'
_query_type_string = 'data.float.'
_type

alias of builtins.float

class aiida.orm.nodes.FolderData(**kwargs)[source]

Data sub class to represent a folder on a file system.

__abstractmethods__ = frozenset({})
__init__(**kwargs)[source]

Construct a new FolderData to which any files and folders can be added.

Use the tree keyword to simply wrap a directory:

folder = FolderData(tree='/absolute/path/to/directory')

Alternatively, one can construct the node first and then use the various repository methods to add objects:

folder = FolderData()
folder.put_object_from_tree('/absolute/path/to/directory')
folder.put_object_from_filepath('/absolute/path/to/file.txt')
folder.put_object_from_filelike(filelike_object)

Parameters

tree (str) – absolute path to a folder to wrap

__module__ = 'aiida.orm.nodes.data.folder'
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.folder.FolderData (REPORT)>
_plugin_type_string = 'data.folder.FolderData.'
_query_type_string = 'data.folder.'
class aiida.orm.nodes.Int(*args, **kwargs)[source]

Data sub class to represent an integer value.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.data.int'
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.int.Int (REPORT)>
_plugin_type_string = 'data.int.Int.'
_query_type_string = 'data.int.'
_type

alias of builtins.int

class aiida.orm.nodes.KpointsData(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

Class to handle arrays of kpoints in the Brillouin zone. Provides methods to generate either user-defined k-points or paths of k-points along symmetry lines. Internally, all k-points are defined in terms of crystal (fractional) coordinates. Cell and lattice vector coordinates are in Angstroms, reciprocal lattice vectors in Angstrom^-1.

Note

The methods setting and using the Bravais lattice info assume the PRIMITIVE unit cell is provided in input to the set_cell or set_cell_from_structure methods.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.data.array.kpoints'
_abc_impl = <_abc_data object>
_change_reference(kpoints, to_cartesian=True)[source]

Change reference system, from cartesian to crystal coordinates (units of b1,b2,b3) or vice versa.

Parameters

kpoints – a list of (3) point coordinates

Returns

a list of (3) point coordinates in the new reference

property _dimension

Dimensionality of the structure, found from its pbc (i.e. 1 if it’s a 1D structure, 2 if it’s 2D, 3 if it’s 3D, …).

Returns

dimensionality: 0, 1, 2 or 3

Note

will return 3 if pbc has not been set beforehand

_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.array.kpoints.KpointsData (REPORT)>
_plugin_type_string = 'data.array.kpoints.KpointsData.'
_query_type_string = 'data.array.kpoints.'
_set_cell(value)[source]

Validate if ‘value’ is an allowed crystal unit cell.

Parameters

value – something compatible with a 3x3 tuple of floats

_set_labels(value)[source]

Set label names. Must pass as input a list like: [[0,'X'],[34,'L'],...]

_set_pbc(value)[source]

Validate the pbc, then store them.

_validate_kpoints_weights(kpoints, weights)[source]

Validate the list of kpoints and of weights before storage. Kpoints and weights must be convertible respectively to an array of N x dimension and N floats

property cell

The crystal unit cell. Rows are the crystal vectors in Angstroms.

Returns

a 3x3 numpy.array

get_description()[source]

Returns a string with information retrieved from the kpoints node’s properties.

get_kpoints(also_weights=False, cartesian=False)[source]

Return the list of kpoints

Parameters
• also_weights – if True, returns also the list of weights. Default = False

• cartesian – if True, returns points in cartesian coordinates, otherwise, returns in crystal coordinates. Default = False.

get_kpoints_mesh(print_list=False)[source]

Get the mesh of kpoints.

Parameters

print_list – default=False. If True, prints the mesh of kpoints as a list

Raises

AttributeError – if no mesh has been set

Return mesh,offset

(if print_list=False) a list of 3 integers and a list of three floats 0<x<1, representing the mesh and the offset of kpoints

Return kpoints

(if print_list = True) an explicit list of kpoints coordinates, similar to what returned by get_kpoints()

property labels

Labels associated with the list of kpoints. List of tuples with kpoint index and kpoint name: [(0,'G'),(13,'M'),...]

property pbc

The periodic boundary conditions along the vectors a1,a2,a3.

Returns

a tuple of three booleans, each one tells if there are periodic boundary conditions for the i-th real-space direction (i=1,2,3)

property reciprocal_cell

Compute reciprocal cell from the internally set cell.

Returns

reciprocal cell in units of 1/Angstrom with cell vectors stored as rows. Use e.g. reciprocal_cell[0] to access the first reciprocal cell vector.
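The reciprocal cell follows from the standard relation B = 2*pi * (A^-1)^T for a cell matrix A with lattice vectors as rows; a minimal numpy sketch (assuming numpy is available, as elsewhere in this package):

```python
import numpy as np

def reciprocal_cell(cell):
    """Reciprocal cell vectors as rows (1/Angstrom) from real-space cell
    vectors as rows (Angstrom), via B = 2*pi * inverse(A) transposed."""
    return 2 * np.pi * np.linalg.inv(np.asarray(cell, dtype=float)).T
```

For a cubic cell of edge 2 Angstrom this gives vectors of length pi along each axis.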

set_cell(cell, pbc=None)[source]

Set a cell to be used for symmetry analysis. To set a cell from an AiiDA structure, use “set_cell_from_structure”.

Parameters
• cell – 3x3 matrix of cell vectors. Orientation: each row represent a lattice vector. Units are Angstroms.

• pbc – list of 3 booleans, True if in the nth crystal direction the structure is periodic. Default = [True,True,True]

set_cell_from_structure(structuredata)[source]

Set a cell to be used for symmetry analysis from an AiiDA structure. Inherits both the cell and the pbc’s. To set a cell manually, use “set_cell”.

Parameters

structuredata – an instance of StructureData

set_kpoints(kpoints, cartesian=False, labels=None, weights=None, fill_values=0)[source]

Set the list of kpoints. If a mesh has already been stored, raise a ModificationNotAllowed

Parameters
• kpoints

a list of kpoints, each kpoint being a list of one, two or three coordinates, depending on self.pbc: if structure is 1D (only one True in self.pbc) one allows singletons or scalars for each k-point, if it’s 2D it can be a length-2 list, and in all cases it can be a length-3 list. Examples:

• [[0.,0.,0.],[0.1,0.1,0.1],…] for 1D, 2D or 3D

• [[0.,0.],[0.1,0.1,],…] for 1D or 2D

• [[0.],[0.1],…] for 1D

• [0., 0.1, …] for 1D (list of scalars)

For 0D (all pbc are False), the list can be any of the above or empty - then only Gamma point is set. The value of k for the non-periodic dimension(s) is set by fill_values

• cartesian – if True, the coordinates given in input are treated as in cartesian units. If False, the coordinates are crystal, i.e. in units of b1,b2,b3. Default = False

• labels – optional, the list of labels to be set for some of the kpoints. See labels for more info

• weights – optional, a list of floats with the weight associated to the kpoint list

• fill_values – scalar to be set to all non-periodic dimensions (indicated by False in self.pbc), or list of values for each of the non-periodic dimensions.

set_kpoints_mesh(mesh, offset=None)[source]

Set KpointsData to represent a uniformly spaced mesh of kpoints in the Brillouin zone. This excludes the possibility of setting/getting an explicit list of kpoints.

Parameters
• mesh – a list of three integers, representing the size of the kpoint mesh along b1,b2,b3.

• offset – (optional) a list of three floats between 0 and 1. [0.,0.,0.] is Gamma centered mesh [0.5,0.5,0.5] is half shifted [1.,1.,1.] by periodicity should be equivalent to [0.,0.,0.] Default = [0.,0.,0.].

set_kpoints_mesh_from_density(distance, offset=None, force_parity=False)[source]

Set a kpoints mesh using a kpoints density, expressed as the maximum distance between adjacent points along a reciprocal axis

Parameters
• distance – distance (in 1/Angstrom) between adjacent kpoints, i.e. the number of kpoints along each reciprocal axis i is |b_i|/distance, where |b_i| is the norm of the i-th reciprocal cell vector.

• offset – (optional) a list of three floats between 0 and 1. [0.,0.,0.] is Gamma centered mesh [0.5,0.5,0.5] is half shifted Default = [0.,0.,0.].

• force_parity – (optional) if True, force each integer in the mesh to be even (except for the non-periodic directions).

Note

a cell should be defined first.

Note

the number of kpoints along non-periodic axes is always 1.
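The construction above can be sketched as follows; rounding details may differ from the real implementation, so treat this as an illustration of the rule, not a drop-in replacement:

```python
import numpy as np

def mesh_from_density(cell, distance, pbc=(True, True, True), force_parity=False):
    """Sketch: along each periodic reciprocal axis take ceil(|b_i|/distance)
    kpoints (at least 1, bumped to even if force_parity); non-periodic axes
    always get a single kpoint."""
    reciprocal = 2 * np.pi * np.linalg.inv(np.asarray(cell, dtype=float)).T
    mesh = []
    for vector, periodic in zip(reciprocal, pbc):
        if not periodic:
            mesh.append(1)  # non-periodic axes always get one kpoint
            continue
        npoints = max(1, int(np.ceil(np.linalg.norm(vector) / distance)))
        if force_parity and npoints % 2:
            npoints += 1
        mesh.append(npoints)
    return mesh
```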

class aiida.orm.nodes.List(**kwargs)[source]

Data sub class to represent a list.

_LIST_KEY = 'list'
__abstractmethods__ = frozenset({})
__delitem__(key)[source]
__eq__(other)[source]

Fallback equality comparison by uuid (can be overwritten by specific types)

__getitem__(item)[source]
__hash__ = None
__init__(**kwargs)[source]
Parameters

backend_entity (aiida.orm.implementation.entities.BackendEntity) – the backend model supporting this entity

__len__()[source]
__module__ = 'aiida.orm.nodes.data.list'
__ne__(other)[source]

Return self!=value.

__setitem__(key, value)[source]
__str__()[source]

Return str(self).

_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.list.List (REPORT)>
_plugin_type_string = 'data.list.List.'
_query_type_string = 'data.list.'
_using_list_reference()[source]

This function tells the class if we are using a list reference. This means that calls to self.get_list return a reference rather than a copy of the underlying list and therefore self.set_list need not be called. This knowledge is essential to make sure this class is performant.

Currently the implementation assumes that if the node needs to be stored then it is using the attributes cache which is a reference.

Returns

True if using self.get_list returns a reference to the underlying sequence. False otherwise.

Return type

bool

append(value)[source]

S.append(value) – append value to the end of the sequence

count(value)[source]

Return number of occurrences of value.

extend(value)[source]

S.extend(iterable) – extend sequence by appending elements from the iterable

get_list()[source]

Return the contents of this node.

Returns

a list

index(value)[source]

Return first index of value.

insert(i, value)[source]

S.insert(index, value) – insert value before index

pop(**kwargs)[source]

Remove and return item at index (default last).

remove(value)[source]

S.remove(value) – remove first occurrence of value. Raise ValueError if the value is not present.

reverse()[source]

S.reverse() – reverse IN PLACE

set_list(data)[source]

Set the contents of this node.

Parameters

data – the list to set

sort(key=None, reverse=False)[source]
class aiida.orm.nodes.Node(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

Base class for all nodes in AiiDA.

Stores attributes starting with an underscore.

Caches files and attributes before the first save, and saves everything only on store(). After the call to store(), attributes cannot be changed.

Only after storing (or upon loading from uuid) extras can be modified and in this case they are directly set on the db.

In the plugin, also set the _plugin_type_string, to be set in the DB in the ‘type’ field.

class Collection(*args, **kwds)[source]

The collection of nodes.

__module__ = 'aiida.orm.nodes.node'
__parameters__ = ()
delete(node_id: int) → None[source]

Delete a Node from the collection with the given id

Parameters

node_id – the node id

__abstractmethods__ = frozenset({})
__annotations__ = {'_hash_ignored_attributes': typing.Tuple[str, ...], '_incoming_cache': typing.Union[typing.List[aiida.orm.utils.links.LinkTriple], NoneType], '_logger': typing.Union[logging.Logger, NoneType], '_repository': typing.Union[aiida.orm.utils._repository.Repository, NoneType], '_updatable_attributes': typing.Tuple[str, ...]}
__copy__()[source]

Copying a Node is not supported in general; it is supported only for the Data sub class.

__deepcopy__(memo)[source]

Deep copying a Node is not supported in general; it is supported only for the Data sub class.

__eq__(other: Any)bool[source]

Fallback equality comparison by uuid (can be overwritten by specific types)

__hash__()int[source]

Hash implementation that is compatible with __eq__.
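The __eq__/__hash__ pair above keeps hashing consistent with UUID-based equality; a minimal plain-Python sketch of the pattern (the Entity class here is hypothetical, not AiiDA code):

```python
import uuid

class Entity:
    """Sketch: equality and hashing both derive from the same UUID field."""

    def __init__(self):
        self.uuid = str(uuid.uuid4())

    def __eq__(self, other):
        # Fallback equality comparison by UUID (can be overridden by subclasses).
        return isinstance(other, Entity) and self.uuid == other.uuid

    def __hash__(self):
        # Must agree with __eq__: objects that compare equal hash equally,
        # so instances can be used in sets and as dict keys.
        return hash(self.uuid)
```

Because __hash__ is derived from the same field as __eq__, two instances loaded for the same UUID collapse to one element in a set.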

__init__(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)None[source]
Parameters

backend_entity (aiida.orm.implementation.entities.BackendEntity) – the backend model supporting this entity

__module__ = 'aiida.orm.nodes.node'
__repr__()str[source]

Return repr(self).

__str__()str[source]

Return str(self).

_abc_impl = <_abc_data object>
_add_incoming_cache(source: aiida.orm.nodes.node.Node, link_type: aiida.common.links.LinkType, link_label: str)None[source]

Parameters
• source – the node from which the link is coming

Raises

aiida.common.UniquenessError – if the given link triple already exists in the cache

_add_outputs_from_cache(cache_node: aiida.orm.nodes.node.Node)None[source]

Replicate the output links and nodes from the cached node onto this node.

_cachable = False
_get_hash(ignore_errors: bool = True, **kwargs: Any) → Optional[str][source]

Return the hash for this node based on its attributes.

This will always work, even before storing.

Parameters

ignore_errors – return None on aiida.common.exceptions.HashingError (logging the exception)

_get_objects_to_hash()List[Any][source]

Return a list of objects which should be included in the hash.

_get_same_node() → Optional[aiida.orm.nodes.node.Node][source]

Return a stored node from which the current Node can be cached, or None if no such node exists.

If a node is returned it is a valid cache, meaning its _aiida_hash extra matches self.get_hash(). If there are multiple valid matches, the first one is returned. If no matches are found, None is returned.

Returns

a stored Node instance with the same hash as this node, or None

Note: this should be only called on stored nodes, or internally from .store() since it first calls clean_value() on the attributes to normalise them.
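The cache lookup above matches stored nodes by a hash kept in the _aiida_hash extra; a toy sketch of the idea using hashlib over a cleaned attribute dictionary (not AiiDA's actual hashing, which covers more than the attributes):

```python
import hashlib
import json

def compute_hash(attributes):
    """Hash a cleaned attribute dict deterministically (sorted keys)."""
    serialized = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(serialized.encode('utf8')).hexdigest()

def get_same_node(attributes, stored_nodes):
    """Return the first stored node whose '_aiida_hash' extra matches, or None.

    `stored_nodes` is modelled here as a list of dicts; in AiiDA the lookup
    goes through the QueryBuilder instead.
    """
    target = compute_hash(attributes)
    for node in stored_nodes:
        if node.get('_aiida_hash') == target:
            return node
    return None
```

Serializing with sorted keys is what makes the hash idempotent under serialization/deserialization, which is why the attributes must be cleaned before hashing.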

_hash_ignored_attributes: Tuple[str, ...] = ()
_incoming_cache: Optional[List[aiida.orm.utils.links.LinkTriple]] = None
_iter_all_same_nodes(allow_before_store=False) → Iterator[aiida.orm.nodes.node.Node][source]

Returns an iterator of all same nodes.

Note: this should be only called on stored nodes, or internally from .store() since it first calls clean_value() on the attributes to normalise them.

_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.node.Node (REPORT)>
_plugin_type_string = ''
_query_type_string = ''
_repository: Optional[aiida.orm.utils._repository.Repository] = None
_repository_base_path = 'path'
_storable = False
_store(with_transaction: bool = True, clean: bool = True)aiida.orm.nodes.node.Node[source]

Store the node in the database while saving its attributes and repository directory.

Parameters
• with_transaction – if False, do not use a transaction because the caller will already have opened one.

• clean – boolean, if True, will clean the attributes and extras before attempting to store

_store_from_cache(cache_node: aiida.orm.nodes.node.Node, with_transaction: bool)None[source]

Store this node from an existing cache node.

_unstorable_message = 'only Data, WorkflowNode, CalculationNode or their subclasses can be stored'
_updatable_attributes: Tuple[str, ...] = ()
_validate()bool[source]

Check if the attributes and files retrieved from the database are valid.

Must be able to work even before storing: therefore, use the get_attr and similar methods that automatically read either from the DB or from the internal attribute cache.

For the base class, this is always valid. Subclasses will reimplement this. In the subclass, always call the super()._validate() method first!

add_comment(content: str, user: Optional[aiida.orm.users.User] = None)aiida.orm.comments.Comment[source]

Parameters
• content – string with comment

• user – the user to associate with the comment, will use default if not supplied

Returns

the newly created comment

add_incoming(source: aiida.orm.nodes.node.Node, link_type: aiida.common.links.LinkType, link_label: str)None[source]

Add a link of the given type from a given node to ourself.

Parameters
• source – the node from which the link is coming

Raises
• TypeError – if source is not a Node instance or link_type is not a LinkType enum

• ValueError – if the proposed link is invalid

property backend_entity

Get the implementing class for this object

Returns

the class model

class_node_type = ''
clear_hash()None[source]

Sets the stored hash of the Node to None.

property computer

Return the computer of this node.

Returns

the computer or None

Return type

Computer or None

property ctime

Return the node ctime.

Returns

the ctime

delete_object(path: Optional[str] = None, force: bool = False, key: Optional[str] = None)None[source]

Delete the object from the repository.

Warning

If the repository belongs to a stored node, a ModificationNotAllowed exception will be raised. This check can be avoided by using the force flag, but this should be used with extreme caution!

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Deprecated since version 1.4.0: Keyword force is deprecated and will be removed in v2.0.0.

Parameters
• key – fully qualified identifier for the object within the repository

• force – boolean, if True, will skip the mutability check

Raises

aiida.common.ModificationNotAllowed – if repository is immutable and force=False
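The deprecated key keyword above is the old name for path; a sketch of the aliasing pattern (the resolve_path helper is hypothetical, not AiiDA code) that accepts the old keyword while emitting a deprecation warning:

```python
import warnings

def resolve_path(path=None, key=None):
    """Resolve the new `path` argument, accepting the deprecated `key` alias."""
    if key is not None:
        if path is not None:
            raise ValueError('cannot specify both `path` and `key`')
        warnings.warn('keyword `key` is deprecated, use `path` instead', DeprecationWarning)
        return key
    return path
```

The same pattern covers all the repository methods below that list both a path and a deprecated key parameter.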

property description

Return the node description.

Returns

the description

classmethod from_backend_entity(backend_entity: BackendNode)Node[source]

Construct an entity from a backend entity instance

Parameters

backend_entity – the backend entity

Returns

an AiiDA entity instance

get_all_same_nodes()List[aiida.orm.nodes.node.Node][source]

Return a list of stored nodes which match the type and hash of the current node.

All returned nodes are valid caches, meaning their _aiida_hash extra matches self.get_hash().

Note: this can be called only after storing a Node (since at store time attributes will be cleaned with clean_value and the hash should become idempotent to the action of serialization/deserialization)

get_cache_source() → Optional[str][source]

Return the UUID of the node that was used in creating this node from the cache, or None if it was not cached.

Returns

source node UUID or None

get_comment(identifier: int)aiida.orm.comments.Comment[source]

Return a comment corresponding to the given identifier.

Parameters

identifier – the comment pk

Raises
Returns

the comment

get_comments()List[aiida.orm.comments.Comment][source]

Return a sorted list of comments for this node.

Returns

the list of comments, sorted by pk

get_description()str[source]

Return a string with a description of the node.

Returns

a description string

get_hash(ignore_errors: bool = True, **kwargs: Any) → Optional[str][source]

Return the hash for this node based on its attributes.

Parameters

ignore_errors – return None on aiida.common.exceptions.HashingError (logging the exception)

get_incoming(node_class: Type[Node] = None, link_type: Union[aiida.common.links.LinkType, Sequence[aiida.common.links.LinkType]] = (), link_label_filter: Optional[str] = None, only_uuid: bool = False)aiida.orm.utils.links.LinkManager[source]

Return a list of link triples that are (directly) incoming into this node.

Parameters
• node_class – If specified, should be a class or tuple of classes, and it filters only elements of that specific type (or a subclass of ‘type’)

• link_type – If specified should be a string or tuple to get the inputs of this link type, if None then returns all inputs of all link types.

• link_label_filter – filters the incoming nodes by their link label. Here wildcards (% and _) can be passed in the link label filter as we are using “like” in QB.

• only_uuid – project only the node UUID instead of the instance onto the NodeTriple.node entries
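The link_label_filter above uses SQL “like” wildcards (% and _); a plain-Python sketch of how such a pattern can be applied by translating it to a regular expression (illustrative only, not the QueryBuilder implementation):

```python
import re

def like_to_regex(pattern):
    """Translate SQL LIKE wildcards: % -> .*, _ -> . (all other chars escaped)."""
    parts = []
    for char in pattern:
        if char == '%':
            parts.append('.*')
        elif char == '_':
            parts.append('.')
        else:
            parts.append(re.escape(char))
    return '^' + ''.join(parts) + '$'

def filter_links(link_triples, link_label_filter):
    """Keep only (node, link_type, link_label) triples whose label matches."""
    regex = re.compile(like_to_regex(link_label_filter))
    return [triple for triple in link_triples if regex.match(triple[2])]
```

For example, the filter 'output_%' keeps every link whose label starts with output_.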

get_object(path: Optional[str] = None, key: Optional[str] = None) → File[source]

Return the object with the given path.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Parameters
• path – the relative path of the object within the repository.

• key – fully qualified identifier for the object within the repository

Returns

a File named tuple

get_object_content(path: Optional[str] = None, mode: str = 'r', key: Optional[str] = None) → Union[str, bytes][source]

Return the content of a object with the given path.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Parameters
• path – the relative path of the object within the repository.

• key – fully qualified identifier for the object within the repository

get_outgoing(node_class: Type[Node] = None, link_type: Union[aiida.common.links.LinkType, Sequence[aiida.common.links.LinkType]] = (), link_label_filter: Optional[str] = None, only_uuid: bool = False)aiida.orm.utils.links.LinkManager[source]

Return a list of link triples that are (directly) outgoing of this node.

Parameters
• node_class – If specified, should be a class or tuple of classes, and it filters only elements of that specific type (or a subclass of ‘type’)

• link_type – If specified should be a string or tuple to get the outputs of this link type, if None then returns all outputs of all link types.

• link_label_filter – filters the outgoing nodes by their link label. Here wildcards (% and _) can be passed in the link label filter as we are using “like” in QB.

• only_uuid – project only the node UUID instead of the instance onto the NodeTriple.node entries

static get_schema()Dict[str, Any][source]
Every node property contains:
• display_name: display name of the property

• help text: short help text of the property

• is_foreign_key: whether the property is a foreign key to another node type

• type: type of the property. e.g. str, dict, int

Returns

get schema of the node

Deprecated since version 1.0.0: Will be removed in v2.0.0. Use get_projectable_properties() instead.

get_stored_link_triples(node_class: Type[Node] = None, link_type: Union[aiida.common.links.LinkType, Sequence[aiida.common.links.LinkType]] = (), link_label_filter: Optional[str] = None, link_direction: str = 'incoming', only_uuid: bool = False)List[aiida.orm.utils.links.LinkTriple][source]

Return the list of stored link triples directly incoming to or outgoing of this node.

Note this will only return link triples that are stored in the database. Anything in the cache is ignored.

Parameters
• node_class – If specified, should be a class, and it filters only elements of that (subclass of) type

• link_type – Only get inputs of this link type, if empty tuple then returns all inputs of all link types.

• link_label_filter – filters the incoming nodes by its link label. This should be a regex statement as one would pass directly to a QueryBuilder filter statement with the ‘like’ operation.

• link_direction – incoming or outgoing to get the incoming or outgoing links, respectively.

• only_uuid – project only the node UUID instead of the instance onto the NodeTriple.node entries

has_cached_links()bool[source]

Return whether there are unstored incoming links in the cache.

Returns

boolean, True when there are links in the incoming cache, False otherwise

initialize()None[source]

Initialize internal variables for the backend node

This needs to be called explicitly in each specific subclass implementation of the init.

property is_created_from_cache

Return whether this node was created from a cached node.

Returns

boolean, True if the node was created by cloning a cached node, False otherwise

property is_valid_cache

Hook to exclude certain Node instances from being considered a valid cache.

property label

Return the node label.

Returns

the label

list_object_names(path: Optional[str] = None, key: Optional[str] = None)List[str][source]

Return a list of the object names contained in this repository, optionally in the given sub directory.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Parameters
• path – the relative path of the object within the repository.

• key – fully qualified identifier for the object within the repository

list_objects(path: Optional[str] = None, key: Optional[str] = None)List[File][source]

Return a list of the objects contained in this repository, optionally in the given sub directory.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Parameters
• path – the relative path of the object within the repository.

• key – fully qualified identifier for the object within the repository

Returns

a list of File named tuples representing the objects present in directory with the given path

Raises

FileNotFoundError – if the path does not exist in the repository of this node

property logger

Return the logger configured for this Node.

Returns

Logger object

property mtime

Return the node mtime.

Returns

the mtime

property node_type

Return the node type.

Returns

the node type

open(path: Optional[str] = None, mode: str = 'r', key: Optional[str] = None) → aiida.orm.nodes.node.WarnWhenNotEntered[source]

Open a file handle to the object with the given path.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Deprecated since version 1.4.0: Starting from v2.0.0 this will raise if not used in a context manager.

Parameters
• path – the relative path of the object within the repository.

• key – fully qualified identifier for the object within the repository

• mode – the mode under which to open the handle
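The returned WarnWhenNotEntered wrapper exists because, from v2.0.0, using the handle outside a context manager will raise; a sketch of how such a wrapper can detect missing with usage (illustrative; only the class name is taken from the signature above):

```python
import warnings

class WarnWhenNotEntered:
    """Wrap a file handle and warn if it is read without entering a context."""

    def __init__(self, handle):
        self._handle = handle
        self._entered = False

    def __enter__(self):
        self._entered = True
        return self._handle.__enter__()

    def __exit__(self, *args):
        return self._handle.__exit__(*args)

    def read(self, *args, **kwargs):
        if not self._entered:
            warnings.warn('this handle should be opened within a context manager', UserWarning)
        return self._handle.read(*args, **kwargs)
```

Calling read() directly still works (emitting a warning), while the with form behaves exactly like the underlying handle.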

property process_type

Return the node process type.

Returns

the process type

put_object_from_file(filepath: str, path: Optional[str] = None, mode: Optional[str] = None, encoding: Optional[str] = None, force: bool = False, key: Optional[str] = None)None[source]

Store a new object under path with contents of the file located at filepath on this file system.

Warning

If the repository belongs to a stored node, a ModificationNotAllowed exception will be raised. This check can be avoided by using the force flag, but this should be used with extreme caution!

Deprecated since version 1.4.0: First positional argument path has been deprecated and renamed to filepath.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Deprecated since version 1.4.0: Keyword force is deprecated and will be removed in v2.0.0.

Parameters
• filepath – absolute path of file whose contents to copy to the repository

• path – the relative path where to store the object in the repository.

• key – fully qualified identifier for the object within the repository

• mode – the file mode with which the object will be written Deprecated: will be removed in v2.0.0

• encoding – the file encoding with which the object will be written Deprecated: will be removed in v2.0.0

• force – boolean, if True, will skip the mutability check

Raises

aiida.common.ModificationNotAllowed – if repository is immutable and force=False

put_object_from_filelike(handle: IO[Any], path: Optional[str] = None, mode: str = 'w', encoding: str = 'utf8', force: bool = False, key: Optional[str] = None)None[source]

Store a new object under path with contents of filelike object handle.

Warning

If the repository belongs to a stored node, a ModificationNotAllowed exception will be raised. This check can be avoided by using the force flag, but this should be used with extreme caution!

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Deprecated since version 1.4.0: Keyword force is deprecated and will be removed in v2.0.0.

Parameters
• handle – filelike object with the content to be stored

• path – the relative path where to store the object in the repository.

• key – fully qualified identifier for the object within the repository

• mode – the file mode with which the object will be written

• encoding – the file encoding with which the object will be written

• force – boolean, if True, will skip the mutability check

Raises

aiida.common.ModificationNotAllowed – if repository is immutable and force=False

put_object_from_tree(filepath: str, path: Optional[str] = None, contents_only: bool = True, force: bool = False, key: Optional[str] = None)None[source]

Store a new object under path with the contents of the directory located at filepath on this file system.

Warning

If the repository belongs to a stored node, a ModificationNotAllowed exception will be raised. This check can be avoided by using the force flag, but this should be used with extreme caution!

Deprecated since version 1.4.0: First positional argument path has been deprecated and renamed to filepath.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Deprecated since version 1.4.0: Keyword force is deprecated and will be removed in v2.0.0.

Deprecated since version 1.4.0: Keyword contents_only is deprecated and will be removed in v2.0.0.

Parameters
• filepath – absolute path of directory whose contents to copy to the repository

• path – the relative path of the object within the repository.

• key – fully qualified identifier for the object within the repository

• contents_only – boolean, if True, omit the top level directory of the path and only copy its contents.

• force – boolean, if True, will skip the mutability check

Raises

aiida.common.ModificationNotAllowed – if repository is immutable and force=False

rehash()None[source]

Regenerate the stored hash of the Node.

remove_comment(identifier: int)None[source]

Delete an existing comment.

Parameters

identifier – the comment pk

store(with_transaction: bool = True, use_cache=None)aiida.orm.nodes.node.Node[source]

Store the node in the database while saving its attributes and repository directory.

After being called attributes cannot be changed anymore! Instead, extras can be changed only AFTER calling this store() function.

Note

After successful storage, those links that are in the cache, and for which also the parent node is already stored, will be automatically stored. The others will remain unstored.

Parameters

with_transaction – if False, do not use a transaction because the caller will already have opened one.

store_all(with_transaction: bool = True, use_cache=None)aiida.orm.nodes.node.Node[source]

Store the node, together with all input links.

Unstored nodes from cached incoming links will also be stored.

Parameters

with_transaction – if False, do not use a transaction because the caller will already have opened one.

update_comment(identifier: int, content: str)None[source]

Update the content of an existing comment.

Parameters
• identifier – the comment pk

• content – the new comment content

Raises
property user

Return the user of this node.

Returns

the user

Return type

User

property uuid

Return the node UUID.

Returns

the string representation of the UUID

validate_incoming(source: aiida.orm.nodes.node.Node, link_type: aiida.common.links.LinkType, link_label: str)None[source]

Validate adding a link of the given type from a given node to ourself.

This function will first validate the types of the inputs, followed by the node and link types and validate whether in principle a link of that type between the nodes of these types is allowed.

Subsequently, the validity of the “degree” of the proposed link is validated, which means validating that the number of links of the given type from the given node type is allowed.

Parameters
• source – the node from which the link is coming

Raises
• TypeError – if source is not a Node instance or link_type is not a LinkType enum

• ValueError – if the proposed link is invalid
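The “degree” validation above bounds how many links of a given type a node may receive; a toy sketch of such a check (plain Python, heavily simplified compared to the actual link rules):

```python
def validate_link_degree(existing_links, link_type, link_label, max_degree):
    """Raise ValueError if adding (link_type, link_label) would exceed limits.

    existing_links: list of (link_type, link_label) tuples already incoming.
    max_degree: maximum number of incoming links of this type, or None for unlimited.
    """
    # A link label must be unique among incoming links of the same type.
    if (link_type, link_label) in existing_links:
        raise ValueError(f'node already has a {link_type} link with label {link_label!r}')
    # The total number of links of this type must stay within the allowed degree.
    if max_degree is not None:
        count = sum(1 for ltype, _ in existing_links if ltype == link_type)
        if count >= max_degree:
            raise ValueError(f'node already has {count} links of type {link_type}')
```

The real validation additionally depends on the node classes at both ends of the link, which this sketch omits.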

validate_outgoing(target: aiida.orm.nodes.node.Node, link_type: aiida.common.links.LinkType, link_label: str)None[source]

Validate adding a link of the given type from ourself to a given node.

The validity of the triple (source, link, target) should be validated in the validate_incoming call. This method will be called afterwards and can be overriden by subclasses to add additional checks that are specific to that subclass.

Parameters
• target – the node to which the link is going

Raises
• TypeError – if target is not a Node instance or link_type is not a LinkType enum

• ValueError – if the proposed link is invalid

validate_storability()None[source]

Verify that the current node is allowed to be stored.

Raises

aiida.common.exceptions.StoringNotAllowed – if the node does not match all requirements for storing

verify_are_parents_stored()None[source]

Verify that all parent nodes are already stored.

Raises

aiida.common.ModificationNotAllowed – if one of the source nodes of incoming links is not stored.

class aiida.orm.nodes.NumericType(*args, **kwargs)[source]

Sub class of Data to store numbers, overloading common operators (+, *, …).

__abstractmethods__ = frozenset({})
__add__(other)[source]

Decorator wrapper.

__div__(other)[source]

Decorator wrapper.

__float__()[source]
__floordiv__(other)[source]

Decorator wrapper.

__ge__(other)[source]

Decorator wrapper.

__gt__(other)[source]

Decorator wrapper.

__int__()[source]
__le__(other)[source]

Decorator wrapper.

__lt__(other)[source]

Decorator wrapper.

__mod__(other)[source]

Decorator wrapper.

__module__ = 'aiida.orm.nodes.data.numeric'
__mul__(other)[source]

Decorator wrapper.

__pow__(other)[source]

Decorator wrapper.

__radd__(other)[source]

Decorator wrapper.

__rdiv__(other)[source]

Decorator wrapper.

__rfloordiv__(other)[source]

Decorator wrapper.

__rmod__(other)[source]

Decorator wrapper.

__rmul__(other)[source]

Decorator wrapper.

__rsub__(other)[source]

Decorator wrapper.

__rtruediv__(other)[source]

Decorator wrapper.

__sub__(other)[source]

Decorator wrapper.

__truediv__(other)[source]

Decorator wrapper.
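The “Decorator wrapper.” entries above indicate that each operator is generated by a decorator which unwraps the node values, applies the plain Python operator, and wraps the result back up; a minimal sketch of that pattern (the WrappedNumber class is hypothetical, not AiiDA's NumericType):

```python
import functools
import operator

def _to_python(obj):
    """Unwrap a WrappedNumber into its plain value; pass through anything else."""
    return obj.value if isinstance(obj, WrappedNumber) else obj

def _wrap_operator(func):
    """Decorator: apply a binary operator on unwrapped values, rewrap the result."""
    @functools.wraps(func)
    def wrapped(self, other):
        result = func(_to_python(self), _to_python(other))
        return WrappedNumber(result)
    return wrapped

class WrappedNumber:
    """Sketch of a NumericType-like class overloading common operators."""

    def __init__(self, value):
        self.value = value

    __add__ = _wrap_operator(operator.add)
    __radd__ = _wrap_operator(lambda a, b: b + a)  # reflected: other + self
    __sub__ = _wrap_operator(operator.sub)
    __mul__ = _wrap_operator(operator.mul)

    def __float__(self):
        return float(self.value)
```

Writing the decorator once keeps all the overloads consistent: every arithmetic result comes back wrapped in the same type, so expressions compose.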

_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.numeric.NumericType (REPORT)>
_plugin_type_string = 'data.numeric.NumericType.'
_query_type_string = 'data.numeric.'
class aiida.orm.nodes.OrbitalData(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

Used for storing collections of orbitals, as well as providing methods for accessing them internally.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.data.orbital'
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.orbital.OrbitalData (REPORT)>
_plugin_type_string = 'data.orbital.OrbitalData.'
_query_type_string = 'data.orbital.'
clear_orbitals()[source]

Remove all orbitals that were added to the class. Cannot work if the OrbitalData node has already been stored.

get_orbitals(**kwargs)[source]

Return all orbitals by default. If a site is provided, return all orbitals corresponding to the location of that site; additional arguments may be provided, which act as filters on the retrieved orbitals.

Parameters

site – if provided, returns all orbitals with position of site

Kwargs

attributes that can filter the set of returned orbitals

Return list_of_outputs

a list of orbitals

set_orbitals(orbitals)[source]

Set the orbitals into the database. Uses the orbital’s inherent set_orbital_dict method to generate an orbital dict string.

Parameters

orbitals – an orbital or list of orbitals to be set

class aiida.orm.nodes.ProcessNode(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

Base class for all nodes representing the execution of a process

This class and its subclasses serve as proxies in the database, for actual Process instances being run. The Process instance in memory will leverage an instance of this class (the exact sub class depends on the sub class of Process) to persist important information of its state to the database. This serves as a way for the user to inspect the state of the Process during its execution as well as a permanent record of its execution in the provenance graph, after the execution has terminated.

CHECKPOINT_KEY = 'checkpoints'
EXCEPTION_KEY = 'exception'
EXIT_MESSAGE_KEY = 'exit_message'
EXIT_STATUS_KEY = 'exit_status'
PROCESS_LABEL_KEY = 'process_label'
PROCESS_PAUSED_KEY = 'paused'
PROCESS_STATE_KEY = 'process_state'
PROCESS_STATUS_KEY = 'process_status'
__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.process.process'
__str__()str[source]

Return str(self).

_abc_impl = <_abc_data object>
_get_objects_to_hash()List[Any][source]

Return a list of objects which should be included in the hash.

_hash_ignored_inputs = ['CALL_CALC', 'CALL_WORK']
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.process.process.ProcessNode (REPORT)>
_plugin_type_string = 'process.ProcessNode.'
_query_type_string = 'process.'
_unstorable_message = 'only Data, WorkflowNode, CalculationNode or their subclasses can be stored'
_updatable_attributes: Tuple[str, ...] = ('sealed', 'paused', 'checkpoints', 'exception', 'exit_message', 'exit_status', 'process_label', 'process_state', 'process_status')
property called

Return a list of nodes that the process called

Returns

list of process nodes called by this process

property called_descendants

Return a list of all nodes that have been called downstream of this process

This will recursively find all the called processes for this process and its children.

property caller

Return the process node that called this process node, or None if it does not have a caller

Returns

process node that called this process node instance or None

property checkpoint

Return the checkpoint bundle set for the process

Returns

checkpoint bundle if it exists, None otherwise

delete_checkpoint()None[source]

Delete the checkpoint bundle set for the process

property exception

Return the exception of the process or None if the process is not excepted.

If the process is marked as excepted yet there is no exception attribute, an empty string will be returned.

Returns

the exception message or None

property exit_message

Return the exit message of the process

Returns

the exit message

property exit_status

Return the exit status of the process

Returns

the exit status, an integer exit code or None

get_builder_restart() → ProcessBuilder[source]

Return a ProcessBuilder that is ready to relaunch the process that created this node.

The process class will be set based on the process_type of this node and the inputs of the builder will be prepopulated with the inputs registered for this node. This functionality is very useful if a process has completed and you want to relaunch it with slightly different inputs.

Returns

~aiida.engine.processes.builder.ProcessBuilder instance

property is_excepted

Return whether the process has excepted

Excepted means that during execution of the process, an exception was raised that was not caught.

Returns

True if during execution of the process an exception occurred, False otherwise

Return type

bool

property is_failed

Return whether the process has failed

Failed means that the process terminated nominally but it had a non-zero exit status.

Returns

True if the process has failed, False otherwise

Return type

bool

property is_finished

Return whether the process has finished

Finished means that the process reached a terminal state nominally. Note that this does not necessarily mean successfully, but there were no exceptions and it was not killed.

Returns

True if the process has finished, False otherwise

Return type

bool

property is_finished_ok

Return whether the process has finished successfully

Finished successfully means that it terminated nominally and had a zero exit status.

Returns

True if the process has finished successfully, False otherwise

Return type

bool

property is_killed

Return whether the process was killed

Killed means the process was killed directly by the user or by the calling process being killed.

Returns

True if the process was killed, False otherwise

Return type

bool

property is_terminated

Return whether the process has terminated

Terminated means that the process has reached any terminal state.

Returns

True if the process has terminated, False otherwise

Return type

bool
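The is_* properties above all derive from the stored process state and exit status; a minimal sketch of the derivation (the ProcessRecord class and state values are assumptions, not the actual ProcessNode code):

```python
from enum import Enum
from typing import Optional

class ProcessState(Enum):
    CREATED = 'created'
    RUNNING = 'running'
    FINISHED = 'finished'
    EXCEPTED = 'excepted'
    KILLED = 'killed'

class ProcessRecord:
    """Sketch of state-derived flags, mirroring the properties documented above."""

    def __init__(self, state: ProcessState, exit_status: Optional[int] = None):
        self.process_state = state
        self.exit_status = exit_status

    @property
    def is_terminated(self) -> bool:
        # Any terminal state counts: finished, excepted or killed.
        return self.process_state in (
            ProcessState.FINISHED, ProcessState.EXCEPTED, ProcessState.KILLED)

    @property
    def is_finished(self) -> bool:
        # Terminated nominally: no exception, not killed.
        return self.process_state is ProcessState.FINISHED

    @property
    def is_finished_ok(self) -> bool:
        # Finished nominally with a zero exit status.
        return self.is_finished and self.exit_status == 0

    @property
    def is_failed(self) -> bool:
        # Terminated nominally but with a non-zero exit status.
        return self.is_finished and self.exit_status != 0
```

Note that is_finished does not imply success: a finished process with a non-zero exit status is failed, not finished_ok.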

property is_valid_cache

Return whether the node is valid for caching

Returns

True if this process node is valid to be used for caching, False otherwise

property logger

Get the logger of the Calculation object, so that it also logs to the DB.

Returns

LoggerAdapter object, that works like a logger, but also has the ‘extra’ embedded

pause()None[source]

Mark the process as paused by setting the corresponding attribute.

This serves only to reflect that the corresponding Process is paused and so this method should not be called by anyone but the Process instance itself.

property paused

Return whether the process is paused

Returns

True if the Calculation is marked as paused, False otherwise

property process_class

Return the process class that was used to create this node.

Returns

Process class

Raises

ValueError – if no process type is defined, it is an invalid process type string or cannot be resolved to load the corresponding class

property process_label

Return the process label

Returns

the process label

property process_state

Return the process state

Returns

the process state instance of ProcessState enum

property process_status

Return the process status

The process status is a generic status message e.g. the reason it might be paused or when it is being killed

Returns

the process status

set_checkpoint(checkpoint: Dict[str, Any])None[source]

Set the checkpoint bundle set for the process

Parameters

checkpoint – the checkpoint bundle representing the stepper state info

set_exception(exception: str)None[source]

Set the exception of the process

Parameters

exception – the exception message

set_exit_message(message: Optional[str])None[source]

Set the exit message of the process, if None nothing will be done

Parameters

message – a string message

set_exit_status(status: Union[None, enum.Enum, int])None[source]

Set the exit status of the process

Parameters

status – an integer exit code or None, which will be interpreted as zero

set_process_label(label: str)None[source]

Set the process label

Parameters

label – process label string

set_process_state(state)[source]

Set the process state

Parameters

state – value or instance of ProcessState enum

set_process_status(status: Optional[str])None[source]

Set the process status

The process status is a generic status message e.g. the reason it might be paused or when it is being killed. If status is None, the corresponding attribute will be deleted.

Parameters

status – string process status

set_process_type(process_type_string: str)None[source]

Set the process type string.

Parameters

process_type_string – the process type string identifying the class using this process node as storage.

unpause()None[source]

Mark the process as unpaused by removing the corresponding attribute.

This serves only to reflect that the corresponding Process is unpaused and so this method should not be called by anyone but the Process instance itself.

validate_incoming(source: aiida.orm.nodes.node.Node, link_type: aiida.common.links.LinkType, link_label: str)None[source]

Validate adding a link of the given type from a given node to ourself.

Adding an input link to a ProcessNode once it is stored is illegal because this should be taken care of by the engine in one go. If a link is being added after the node is stored, it is most likely not by the engine and it should not be allowed.

Parameters
• source – the node from which the link is coming

• link_type – the type of the proposed link

• link_label – the label of the proposed link

Raises
• TypeError – if source is not a Node instance or link_type is not a LinkType enum

• ValueError – if the proposed link is invalid

class aiida.orm.nodes.ProjectionData(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

A class to handle arrays of projected wavefunction data. That is, projections of orbitals, usually atomic-hydrogen orbitals, onto a given Bloch wavefunction, the Bloch wavefunction being indexed by s, n, and k. E.g. the elements are the projections described as < orbital | Bloch wavefunction (s,n,k) >

__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.data.array.projection'
_abc_impl = <_abc_data object>
_check_projections_bands(projection_array)[source]

Checks that a reference bandsdata has already been set, and that projection_array has the same shape as the bands data

Parameters

projection_array – nk x nb x nwfc array, to be checked against bands

Raise

AttributeError if the reference bandsdata is not already set

Raise

AttributeError if projection_array is not of the same shape as the bands data

_find_orbitals_and_indices(**kwargs)[source]

Finds all the orbitals and their indices associated with the kwargs, which is essential for retrieving the other indexed array parameters

Parameters

kwargs – kwargs that can call orbitals as in get_orbitals()

Returns

retrieve_indexes, list of indices of orbitals corresponding to the kwargs

Returns

all_orbitals, list of orbitals to which the indexes correspond

static _from_index_to_arrayname(index)[source]

Used internally to determine the array names.

_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.array.projection.ProjectionData (REPORT)>
_plugin_type_string = 'data.array.projection.ProjectionData.'
_query_type_string = 'data.array.projection.'
get_pdos(**kwargs)[source]

Retrieves all the pdos arrays corresponding to the input kwargs

Parameters

kwargs – inputs describing the orbitals associated with the pdos arrays

Returns

a list of tuples containing the orbital, energy array and pdos array associated with all orbitals that correspond to kwargs

get_projections(**kwargs)[source]

Retrieves all the projection arrays corresponding to the input kwargs

Parameters

kwargs – inputs describing the orbitals associated with the projection arrays

Returns

a list of tuples containing the orbital, and projection arrays associated with all orbitals that correspond to kwargs

get_reference_bandsdata()[source]

Returns the reference BandsData, using the uuid set via set_reference_bandsdata()

Returns

a BandsData instance

Raises

set_orbitals(**kwargs)[source]

This method is inherited from OrbitalData, but is blocked here. If used, it will raise a NotImplementedError

set_projectiondata(list_of_orbitals, list_of_projections=None, list_of_energy=None, list_of_pdos=None, tags=None, bands_check=True)[source]

Stores the projection data (orbitals, projections, energies, pdos), after validating the inputs.

Parameters
• list_of_orbitals – list of orbitals, of class orbital data. They should be the orbitals to which the projection arrays correspond.

• list_of_projections – list of arrays of projections of atomic wavefunctions onto Bloch wavefunctions. Since there is a projection for every Bloch wavefunction, which can be specified by its spin (if used), band, and kpoint, the dimensions must be nspin x nbands x nkpoints for the projection array, or nbands x nkpoints if spin is not used.

• list_of_energy – list of energy axes for the list_of_pdos

• list_of_pdos – a list of projected density of states for the atomic wavefunctions, units in states/eV

• tags – A list of tags, not supported currently.

• bands_check – if False, skips checks of whether the bands have already been set and whether the sizes match. For use in parsers, where the BandsData has not yet been stored and therefore get_reference_bandsdata cannot be called
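The dimension requirements above can be sketched with a small shape check. This is an illustrative helper written for this documentation, not part of the AiiDA API; the names nspin, nbands and nkpoints are assumptions taken from the parameter description.

```python
import numpy as np

def check_projection_shape(projection, nbands, nkpoints, nspin=None):
    """Illustrative sketch: verify a projection array matches the documented
    nspin x nbands x nkpoints (or nbands x nkpoints without spin) layout."""
    expected = (nbands, nkpoints) if nspin is None else (nspin, nbands, nkpoints)
    if projection.shape != expected:
        raise ValueError(f'expected shape {expected}, got {projection.shape}')

# no spin: nbands x nkpoints
check_projection_shape(np.zeros((4, 10)), nbands=4, nkpoints=10)
# spin-polarised: nspin x nbands x nkpoints
check_projection_shape(np.zeros((2, 4, 10)), nbands=4, nkpoints=10, nspin=2)
```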

set_reference_bandsdata(value)[source]

Sets a reference bandsdata: creates a uuid link between this data object and a BandsData object. Must be set before any projection arrays.

Parameters

value – a BandsData instance, a uuid or a pk

Raise

exceptions.NotExistent if there was no BandsData associated with uuid or pk

class aiida.orm.nodes.RemoteData(remote_path=None, **kwargs)[source]

Store a link to a file or folder on a remote machine.

Remember to pass a computer!

__abstractmethods__ = frozenset({})
__init__(remote_path=None, **kwargs)[source]
Parameters

backend_entity (aiida.orm.implementation.entities.BackendEntity) – the backend model supporting this entity

__module__ = 'aiida.orm.nodes.data.remote.base'
_abc_impl = <_abc_data object>
_clean()[source]

Remove all content of the remote folder on the remote computer

_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.remote.base.RemoteData (REPORT)>
_plugin_type_string = 'data.remote.RemoteData.'
_query_type_string = 'data.remote.'
_validate()[source]

Perform validation of the Data object.

Note

validation of data source checks license and requires attribution to be provided in field ‘description’ of source in the case of any CC-BY* license. If such requirement is too strict, one can remove/comment it out.

get_authinfo()[source]
get_computer_name()[source]

Get label of this node’s computer.

Deprecated since version 1.4.0: Will be removed in v2.0.0, use the self.computer.label property instead.

get_remote_path()[source]
getfile(relpath, destpath)[source]

Connects to the remote folder and retrieves the content of a file.

Parameters
• relpath – The relative path of the file on the remote to retrieve.

• destpath – The absolute path of where to store the file on the local machine.

property is_empty

Check if remote folder is empty

listdir(relpath='.')[source]

Connects to the remote folder and lists the directory content.

Parameters

relpath – If ‘relpath’ is specified, lists the content of the given subfolder.

Returns

a flat list of file/directory names (as strings).

listdir_withattributes(path='.')[source]

Connects to the remote folder and lists the directory content.

Parameters

path – if specified, lists the content of the given subfolder.

Returns

a list of dictionaries; for the format see the documentation of Transport.listdir_withattributes.

set_remote_path(val)[source]
class aiida.orm.nodes.RemoteStashData(stash_mode: aiida.common.datastructures.StashMode, **kwargs)[source]

Data plugin that models an archived folder on a remote computer.

A stashed folder is essentially an instance of RemoteData that has been archived. Archiving in this context can simply mean copying the content of the folder to another location on the same or another filesystem as long as it is on the same machine. In addition, the folder may have been compressed into a single file for efficiency or even written to tape. The stash_mode attribute will distinguish how the folder was stashed which will allow the implementation to also unstash it and transform it back into a RemoteData such that it can be used as an input for new CalcJobs.

This class is a non-storable base class that merely registers the stash_mode attribute. Only its subclasses, which actually implement a certain stash mode, can be instantiated and therefore stored. The reason for this design is that the behavior of the class can change significantly based on the mode employed to stash the files, and implementing all these variants in the same class would lead to an unintuitive interface where certain properties or methods of the class would only be available or function properly depending on the stash_mode.

__abstractmethods__ = frozenset({})
__init__(stash_mode: aiida.common.datastructures.StashMode, **kwargs)[source]

Construct a new instance

Parameters

stash_mode – the stashing mode with which the data was stashed on the remote.

__module__ = 'aiida.orm.nodes.data.remote.stash.base'
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.remote.stash.base.RemoteStashData (REPORT)>
_plugin_type_string = 'data.remote.stash.RemoteStashData.'
_query_type_string = 'data.remote.stash.'
_storable = False
property stash_mode

Return the mode with which the data was stashed on the remote.

Returns

the stash mode.

class aiida.orm.nodes.RemoteStashFolderData(stash_mode: aiida.common.datastructures.StashMode, target_basepath: str, source_list: List, **kwargs)[source]

Data plugin that models a folder with files of a completed calculation job that has been stashed through a copy.

This data plugin can and should be used to stash files if and only if the stash mode is StashMode.COPY.

__abstractmethods__ = frozenset({})
__init__(stash_mode: aiida.common.datastructures.StashMode, target_basepath: str, source_list: List, **kwargs)[source]

Construct a new instance

Parameters
• stash_mode – the stashing mode with which the data was stashed on the remote.

• target_basepath – the target basepath.

• source_list – the list of source files.

__module__ = 'aiida.orm.nodes.data.remote.stash.folder'
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.remote.stash.folder.RemoteStashFolderData (REPORT)>
_plugin_type_string = 'data.remote.stash.folder.RemoteStashFolderData.'
_query_type_string = 'data.remote.stash.folder.'
_storable = True
property source_list

Return the list of source files that were stashed.

Returns

the list of source files.

property target_basepath

Return the target basepath.

Returns

the target basepath.

class aiida.orm.nodes.SinglefileData(file, filename=None, **kwargs)[source]

Data class that can be used to store a single file in its repository.

DEFAULT_FILENAME = 'file.txt'
__abstractmethods__ = frozenset({})
__init__(file, filename=None, **kwargs)[source]

Construct a new instance and set the contents to that of the file.

Parameters
• file – an absolute filepath or filelike object whose contents to copy. Hint: Pass io.BytesIO(b"my string") to construct the SinglefileData directly from a string.

• filename – specify filename to use (defaults to name of provided file).

__module__ = 'aiida.orm.nodes.data.singlefile'
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.singlefile.SinglefileData (REPORT)>
_plugin_type_string = 'data.singlefile.SinglefileData.'
_query_type_string = 'data.singlefile.'
_validate()[source]

Ensure that there is one object stored in the repository, whose key matches value set for filename attr.

property filename

Return the name of the file stored.

Returns

the filename under which the file is stored in the repository

get_content()[source]

Return the content of the single file stored for this data node.

Returns

the content of the file as a string

open(path=None, mode='r', key=None)[source]

Return an open file handle to the content of this data node.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Deprecated since version 1.4.0: Starting from v2.0.0 this will raise if not used in a context manager.

Parameters
• path – the relative path of the object within the repository.

• key – optional key within the repository, by default is the filename set in the attributes

• mode – the mode with which to open the file handle (default: read mode)

Returns

a file handle

set_file(file, filename=None)[source]

Store the content of the file in the node’s repository, deleting any other existing objects.

Parameters
• file – an absolute filepath or filelike object whose contents to copy. Hint: Pass io.BytesIO(b"my string") to construct the file directly from a string.

• filename – specify filename to use (defaults to name of provided file).
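The io.BytesIO hint above can be illustrated without AiiDA: any file-like object holding the bytes works as the file argument. The SinglefileData call itself is shown as a comment, since constructing a node requires a configured AiiDA profile.

```python
import io

content = b'my string'
handle = io.BytesIO(content)  # file-like object holding the bytes
# node = SinglefileData(file=handle, filename='data.txt')  # requires AiiDA
# node.get_content() would then return the content decoded as a string
assert handle.read() == b'my string'
```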

class aiida.orm.nodes.Str(*args, **kwargs)[source]

Data sub class to represent a string value.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.data.str'
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.str.Str (REPORT)>
_plugin_type_string = 'data.str.Str.'
_query_type_string = 'data.str.'
_type

alias of builtins.str

class aiida.orm.nodes.StructureData(cell=None, pbc=None, ase=None, pymatgen=None, pymatgen_structure=None, pymatgen_molecule=None, **kwargs)[source]

This class contains the information about a given structure, i.e. a collection of sites together with a cell, the boundary conditions (whether they are periodic or not) and other related useful information.

__abstractmethods__ = frozenset({})
__init__(cell=None, pbc=None, ase=None, pymatgen=None, pymatgen_structure=None, pymatgen_molecule=None, **kwargs)[source]
Parameters

backend_entity (aiida.orm.implementation.entities.BackendEntity) – the backend model supporting this entity

__module__ = 'aiida.orm.nodes.data.structure'
_abc_impl = <_abc_data object>
_adjust_default_cell(vacuum_factor=1.0, vacuum_addition=10.0, pbc=(False, False, False))[source]

If the structure was imported from an xyz file, it lacks a defined cell, and the default cell is taken ([[1,0,0], [0,1,0], [0,0,1]]), leading to an unphysical definition of the structure. This method will adjust the cell accordingly.

_dimensionality_label = {0: '', 1: 'length', 2: 'surface', 3: 'volume'}
_get_object_ase()[source]

Converts StructureData to ase.Atoms

Returns

an ase.Atoms object

_get_object_phonopyatoms()[source]

Converts StructureData to PhonopyAtoms

Returns

a PhonopyAtoms object

_get_object_pymatgen(**kwargs)[source]

Converts StructureData to pymatgen object

Returns

a pymatgen Structure for structures with periodic boundary conditions (in three dimensions) and Molecule otherwise

Note

Requires the pymatgen module (version >= 3.0.13, usage of earlier versions may cause errors).

_get_object_pymatgen_molecule(**kwargs)[source]

Converts StructureData to pymatgen Molecule object

Returns

a pymatgen Molecule object corresponding to this StructureData object.

Note

Requires the pymatgen module (version >= 3.0.13, usage of earlier versions may cause errors)

_get_object_pymatgen_structure(**kwargs)[source]

Converts StructureData to a pymatgen Structure object.

Parameters

add_spin – True to add the spins to the pymatgen structure. Default is False (no spin added).

Note

The spins are set according to the following rule:

• if the kind name ends with 1 -> spin=+1

• if the kind name ends with 2 -> spin=-1

Returns

a pymatgen Structure object corresponding to this StructureData object

Raises

ValueError – if periodic boundary conditions do not hold in at least one dimension of real space, or if there are partial occupancies together with spins (defined by kind names ending with ‘1’ or ‘2’).

Note

Requires the pymatgen module (version >= 3.0.13, usage of earlier versions may cause errors)

_internal_kind_tags = None
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.structure.StructureData (REPORT)>
_parse_xyz(inputstring)[source]

Read the structure from a string of format XYZ.

_plugin_type_string = 'data.structure.StructureData.'
_prepare_chemdoodle(main_file_name='')[source]

Write the given structure to a string of format required by ChemDoodle.

_prepare_cif(main_file_name='')[source]

Write the given structure to a string of format CIF.

_prepare_xsf(main_file_name='')[source]

Write the given structure to a string of format XSF (for XCrySDen).

_prepare_xyz(main_file_name='')[source]

Write the given structure to a string of format XYZ.

_query_type_string = 'data.structure.'
_set_incompatibilities = [('ase', 'cell'), ('ase', 'pbc'), ('ase', 'pymatgen'), ('ase', 'pymatgen_molecule'), ('ase', 'pymatgen_structure'), ('cell', 'pymatgen'), ('cell', 'pymatgen_molecule'), ('cell', 'pymatgen_structure'), ('pbc', 'pymatgen'), ('pbc', 'pymatgen_molecule'), ('pbc', 'pymatgen_structure'), ('pymatgen', 'pymatgen_molecule'), ('pymatgen', 'pymatgen_structure'), ('pymatgen_molecule', 'pymatgen_structure')]
_validate()[source]

Performs some standard validation tests.

append_atom(**kwargs)[source]

Append an atom to the Structure, taking care of creating the corresponding kind.

Parameters
• ase – the ase Atom object from which we want to create a new atom (if present, this must be the only parameter)

• position – the position of the atom (three numbers in angstrom)

• symbols – passed to the constructor of the Kind object.

• weights – passed to the constructor of the Kind object.

• name – passed to the constructor of the Kind object. See also the note below.

Note

Note on the ‘name’ parameter (that is, the name of the kind):

• if specified, no checks are done on existing species. Simply, a new kind with that name is created. If there is a name clash, a check is done: if the kinds are identical, no error is issued; otherwise, an error is issued because you are trying to store two different kinds with the same name.

• if not specified, the name is automatically generated. Before adding the kind, a check is done. If other species with the same properties already exist, no new kinds are created, but the site is added to the existing (identical) kind. (Actually, the first kind that is encountered.) Otherwise, the name is made unique first, by appending to the string containing the list of chemical symbols a number starting from 1, until a unique name is found

Note

checks of equality of species are done using the compare_with() method.
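The automatic kind-name generation described above (append a number, starting from 1, until the name is unique) can be sketched as follows. This is an illustrative reimplementation of the documented rule, not AiiDA's actual code.

```python
def generate_kind_name(symbols_string, existing_names):
    """Return symbols_string itself if unused, otherwise append 1, 2, ... until unique."""
    if symbols_string not in existing_names:
        return symbols_string
    counter = 1
    while f'{symbols_string}{counter}' in existing_names:
        counter += 1
    return f'{symbols_string}{counter}'

print(generate_kind_name('Fe', {'Fe'}))         # Fe1
print(generate_kind_name('Fe', {'Fe', 'Fe1'}))  # Fe2
```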

append_kind(kind)[source]

Append a kind to the StructureData. It makes a copy of the kind.

Parameters

kind – the site to append, must be a Kind object.

append_site(site)[source]

Append a site to the StructureData. It makes a copy of the site.

Parameters

site – the site to append. It must be a Site object.

property cell

Returns the cell shape.

Returns

a 3x3 list of lists.

property cell_angles

Get the angles between the cell lattice vectors in degrees.

property cell_lengths

Get the lengths of cell lattice vectors in angstroms.

clear_kinds()[source]

Removes all kinds for the StructureData object.

Note

Also clear all sites!

clear_sites()[source]

Removes all sites for the StructureData object.

get_ase()[source]

Get the ASE object. Requires to be able to import ase.

Returns

an ASE object corresponding to this StructureData object.

Note

If any site is an alloy or has vacancies, a ValueError is raised (from the site.get_ase() routine).

get_cell_volume()[source]

Returns the cell volume in Angstrom^3.

Returns

a float.
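For a cell given as a 3x3 list of lattice vectors in angstrom, the volume is the absolute value of the determinant. A minimal numpy sketch of the computation (not AiiDA's internal code):

```python
import numpy as np

def cell_volume(cell):
    """Volume in Angstrom^3 of a cell given as a 3x3 list of lattice vectors."""
    return abs(np.linalg.det(cell))

cubic = [[4.0, 0.0, 0.0], [0.0, 4.0, 0.0], [0.0, 0.0, 4.0]]
print(cell_volume(cubic))  # 64.0
```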

get_cif(converter='ase', store=False, **kwargs)[source]

New in version 1.0: Renamed from _get_cif

Parameters
• converter – specify the converter. Default ‘ase’.

• store – If True, intermediate calculation gets stored in the AiiDA database for record. Default False.

Returns

get_composition()[source]

Returns the chemical composition of this structure as a dictionary, where each key is the kind symbol (e.g. H, Li, Ba), and each value is the number of occurrences of that element in this structure. For BaZrO3 it would return {‘Ba’:1, ‘Zr’:1, ‘O’:3}. No reduction with smallest common divisor!

Returns

a dictionary with the composition
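The BaZrO3 example above amounts to counting site symbols without any reduction; an illustrative sketch (the site_symbols list is dummy data, not an AiiDA call):

```python
from collections import Counter

site_symbols = ['Ba', 'Zr', 'O', 'O', 'O']  # one entry per site
composition = dict(Counter(site_symbols))   # no gcd reduction is applied
print(composition)  # {'Ba': 1, 'Zr': 1, 'O': 3}
```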

get_description()[source]

Parameters

self – the StructureData node

Returns

the description string

get_dimensionality()[source]

This function checks the dimensionality of the structure and calculates its length/surface/volume.

Returns

the dimensionality and length/surface/volume

get_formula(mode='hill', separator='')[source]

Return a string with the chemical formula.

Parameters
• mode

a string to specify how to generate the formula, can assume one of the following values:

• ’hill’ (default): count the number of atoms of each species, then use Hill notation, i.e. alphabetical order with C and H first if one or several C atom(s) is (are) present, e.g. ['C','H','H','H','O','C','H','H','H'] will return 'C2H6O'; ['S','O','O','H','O','H','O'] will return 'H2O4S'. From E. A. Hill, J. Am. Chem. Soc., 22 (8), pp 478–494 (1900)

• ’hill_compact’: same as hill but the number of atoms for each species is divided by the greatest common divisor of all of them, e.g. ['C','H','H','H','O','C','H','H','H','O','O','O'] will return 'CH3O2'

• ’reduce’: group repeated symbols e.g. ['Ba', 'Ti', 'O', 'O', 'O', 'Ba', 'Ti', 'O', 'O', 'O', 'Ba', 'Ti', 'Ti', 'O', 'O', 'O'] will return 'BaTiO3BaTiO3BaTi2O3'

• ’group’: will try to group as much as possible parts of the formula e.g. ['Ba', 'Ti', 'O', 'O', 'O', 'Ba', 'Ti', 'O', 'O', 'O', 'Ba', 'Ti', 'Ti', 'O', 'O', 'O'] will return '(BaTiO3)2BaTi2O3'

• ’count’: same as hill (i.e. one just counts the number of atoms of each species) without the re-ordering (take the order of the atomic sites), e.g. ['Ba', 'Ti', 'O', 'O', 'O','Ba', 'Ti', 'O', 'O', 'O'] will return 'Ba2Ti2O6'

• ’count_compact’: same as count but the number of atoms for each species is divided by the greatest common divisor of all of them, e.g. ['Ba', 'Ti', 'O', 'O', 'O','Ba', 'Ti', 'O', 'O', 'O'] will return 'BaTiO3'

• separator – a string used to concatenate symbols. Default empty.

Returns

a string with the formula

Note

in modes reduce, group, count and count_compact, the initial order in which the atoms were appended by the user is used to group and/or order the symbols in the formula
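The ’hill’ and ’count_compact’ modes above can be sketched in a few lines. This is an illustrative reimplementation of the documented rules, not AiiDA's implementation:

```python
from collections import Counter
from functools import reduce
from math import gcd

def formula_hill(symbols):
    """Hill notation: C first, then H, then the rest alphabetically (when C is present)."""
    counts = Counter(symbols)
    if 'C' in counts:
        order = ['C'] + (['H'] if 'H' in counts else []) + sorted(
            s for s in counts if s not in ('C', 'H'))
    else:
        order = sorted(counts)
    return ''.join(s + (str(counts[s]) if counts[s] > 1 else '') for s in order)

def formula_count_compact(symbols):
    """Site order preserved; counts divided by their greatest common divisor."""
    counts = Counter(symbols)  # Counter preserves first-appearance order
    divisor = reduce(gcd, counts.values())
    return ''.join(s + (str(c // divisor) if c // divisor > 1 else '')
                   for s, c in counts.items())

print(formula_hill(['C', 'H', 'H', 'H', 'O', 'C', 'H', 'H', 'H']))  # C2H6O
print(formula_count_compact(['Ba', 'Ti', 'O', 'O', 'O',
                             'Ba', 'Ti', 'O', 'O', 'O']))           # BaTiO3
```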

get_kind(kind_name)[source]

Return the kind object associated with the given kind name.

Parameters

kind_name – String, the name of the kind you want to get

Returns

The Kind object associated with the given kind_name, if a Kind with the given name is present in the structure.

Raise

ValueError if the kind_name is not present.

get_kind_names()[source]

Return a list of kind names (in the same order of the self.kinds property, but return the names rather than Kind objects)

Note

This is NOT necessarily a list of chemical symbols! Use get_symbols_set for chemical symbols

Returns

a list of strings.

get_pymatgen(**kwargs)[source]

Get pymatgen object. Returns Structure for structures with periodic boundary conditions (in three dimensions) and Molecule otherwise.

Parameters

add_spin – True to add the spins to the pymatgen structure. Default is False (no spin added).

Note

The spins are set according to the following rule:

• if the kind name ends with 1 -> spin=+1

• if the kind name ends with 2 -> spin=-1

Note

Requires the pymatgen module (version >= 3.0.13, usage of earlier versions may cause errors).
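The kind-name spin rule stated in the note can be sketched with a tiny helper (illustrative only; spin_from_kind_name is not part of the AiiDA or pymatgen API):

```python
def spin_from_kind_name(kind_name):
    """Documented rule: trailing '1' -> spin +1, trailing '2' -> spin -1, else no spin."""
    if kind_name.endswith('1'):
        return 1
    if kind_name.endswith('2'):
        return -1
    return 0

print([spin_from_kind_name(name) for name in ('Fe1', 'Fe2', 'O')])  # [1, -1, 0]
```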

get_pymatgen_molecule()[source]

Get the pymatgen Molecule object.

Note

Requires the pymatgen module (version >= 3.0.13, usage of earlier versions may cause errors).

Returns

a pymatgen Molecule object corresponding to this StructureData object.

get_pymatgen_structure(**kwargs)[source]

Get the pymatgen Structure object.

Parameters

add_spin – True to add the spins to the pymatgen structure. Default is False (no spin added).

Note

The spins are set according to the following rule:

• if the kind name ends with 1 -> spin=+1

• if the kind name ends with 2 -> spin=-1

Note

Requires the pymatgen module (version >= 3.0.13, usage of earlier versions may cause errors).

Returns

a pymatgen Structure object corresponding to this StructureData object.

Raises

ValueError – if periodic boundary conditions do not hold in at least one dimension of real space.

get_site_kindnames()[source]

Return a list with length equal to the number of sites of this structure, where each element of the list is the kind name of the corresponding site.

Note

This is NOT necessarily a list of chemical symbols! Use [ self.get_kind(s.kind_name).get_symbols_string() for s in self.sites] for chemical symbols

Returns

a list of strings

get_symbols_set()[source]

Return a set containing the names of all elements involved in this structure (i.e., it joins the lists of symbols of each kind in the structure).

Returns

a set of strings of element names.

property has_vacancies

Return whether the structure has vacancies.

Returns

a boolean, True if at least one kind has a vacancy

property is_alloy

Return whether the structure contains any alloy kinds.

Returns

a boolean, True if at least one kind is an alloy

property kinds

Returns a list of kinds.

property pbc

Get the periodic boundary conditions.

Returns

a tuple of three booleans, each one tells if there are periodic boundary conditions for the i-th real-space direction (i=1,2,3)

reset_cell(new_cell)[source]

Reset the cell of a structure not yet stored to a new value.

Parameters

new_cell – list specifying the cell vectors

Raises

ModificationNotAllowed: if object is already stored

reset_sites_positions(new_positions, conserve_particle=True)[source]

Replace all the Site positions attached to the Structure

Parameters
• new_positions – list of (3D) positions for every site.

• conserve_particle – if True, allows the possibility of removing a site. Currently not implemented.

Raises

Note

it is assumed that the new_positions are given in the same order as the positions they replace, i.e. the kind of the site will not be checked.

set_ase(aseatoms)[source]

Load the structure from an ASE object

set_cell(value)[source]

Set the cell.

set_cell_angles(value)[source]
set_cell_lengths(value)[source]
set_pbc(value)[source]

Set the periodic boundary conditions.

set_pymatgen(obj, **kwargs)[source]

Load the structure from a pymatgen object.

Note

Requires the pymatgen module (version >= 3.0.13, usage of earlier versions may cause errors).

set_pymatgen_molecule(mol, margin=5)[source]

Load the structure from a pymatgen Molecule object.

Parameters

margin – the margin to be added in all directions of the bounding box of the molecule.

Note

Requires the pymatgen module (version >= 3.0.13, usage of earlier versions may cause errors).

set_pymatgen_structure(struct)[source]

Load the structure from a pymatgen Structure object.

Note

periodic boundary conditions are set to True in all three directions.

Note

Requires the pymatgen module (version >= 3.3.5, usage of earlier versions may cause errors).

Raises

ValueError – if there are partial occupancies together with spins.

property sites

Returns a list of sites.

class aiida.orm.nodes.TrajectoryData(structurelist=None, **kwargs)[source]

Stores a trajectory (a sequence of crystal structures with timestamps, and possibly with velocities).

__abstractmethods__ = frozenset({})
__init__(structurelist=None, **kwargs)[source]
Parameters

backend_entity (aiida.orm.implementation.entities.BackendEntity) – the backend model supporting this entity

__module__ = 'aiida.orm.nodes.data.array.trajectory'
_abc_impl = <_abc_data object>
_internal_validate(stepids, cells, symbols, positions, times, velocities)[source]

Internal function to validate the type and shape of the arrays. See the documentation of set_trajectory() for a description of the valid shape and type of the parameters.

_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.array.trajectory.TrajectoryData (REPORT)>
_parse_xyz_pos(inputstring)[source]

Load positions from a XYZ file.

Note

The steps and symbols must be set manually before calling this import function as a consistency measure. Even though the symbols and steps could be extracted from the XYZ file, the data present in the XYZ file may or may not be correct and the same logic would have to be present in the XYZ-velocities function. It was therefore decided not to implement it at all but require it to be set explicitly.

Usage:

from numpy import arange, array

from aiida.orm.nodes.data.array.trajectory import TrajectoryData

t = TrajectoryData()
# set symbols and number of timesteps; here `s` is an existing StructureData
# and `ntimesteps` the number of steps in the file to import
t.set_array('steps', arange(ntimesteps))
t.set_array('symbols', array([site.kind_name for site in s.sites]))
t.importfile('some-calc/AIIDA-PROJECT-pos-1.xyz', 'xyz_pos')

_parse_xyz_vel(inputstring)[source]

Load velocities from a XYZ file.

Note

The steps and symbols must be set manually before calling this import function as a consistency measure. See also comment for _parse_xyz_pos()

_plugin_type_string = 'data.array.trajectory.TrajectoryData.'
_prepare_cif(trajectory_index=None, main_file_name='')[source]

Write the given trajectory to a string of format CIF.

_prepare_xsf(index=None, main_file_name='')[source]

Write the given trajectory to a string of format XSF (for XCrySDen).

_query_type_string = 'data.array.trajectory.'
_validate()[source]

Verify that the required arrays are present and that their type and dimension are correct.

get_cells()[source]

Return the array of cells, if it has already been set.

Raises

KeyError – if the trajectory has not been set yet.

get_cif(index=None, **kwargs)[source]

New in version 1.0: Renamed from _get_cif

get_index_from_stepid(stepid)[source]

Given a value for the stepid (i.e., a value among those of the steps array), return the array index of that stepid, that can be used in other methods such as get_step_data() or get_step_structure().

New in version 0.7: Renamed from get_step_index

Note

Note that this function returns the first index found (i.e. if multiple steps are present with the same value, only the index of the first one is returned).

Raises

ValueError – if no step with the given value is found.

get_positions()[source]

Return the array of positions, if it has already been set.

Raises

KeyError – if the trajectory has not been set yet.

get_step_data(index)[source]

Return a tuple with all information concerning the stepid with given index (0 is the first step, 1 the second step and so on). If you know only the step value, use the get_index_from_stepid() method to get the corresponding index.

If no velocities were specified, None is returned as the last element.

Returns

A tuple in the format (stepid, time, cell, symbols, positions, velocities), where stepid is an integer, time is a float, cell is a $$3 \times 3$$ matrix, symbols is an array of length n, positions is a $$n \times 3$$ array, and velocities is either None or a $$n \times 3$$ array

Parameters

index – The index of the step that you want to retrieve, from 0 to self.numsteps - 1.

Raises
• IndexError – if you require an index beyond the limits.

• KeyError – if you did not store the trajectory yet.
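The layout of the trajectory arrays and of the returned (stepid, time, cell, symbols, positions, velocities) tuple can be illustrated with plain numpy. The shapes follow the set_trajectory() documentation; the arrays here are dummy data, and the tuple is assembled by hand rather than by AiiDA.

```python
import numpy as np

s, n = 3, 2                                  # 3 steps, 2 atoms (sites)
symbols = ['Ba', 'Ti']                       # length n, one entry per site
stepids = np.arange(s)                       # shape (s,)
times = 0.01 * stepids                       # shape (s,), in ps
cells = np.tile(np.eye(3) * 4.0, (s, 1, 1))  # shape (s, 3, 3)
positions = np.zeros((s, n, 3))              # shape (s, n, 3), in angstrom
velocities = None                            # optional; None when not stored

index = 1                                    # what get_step_data(1) would cover
step_data = (stepids[index], times[index], cells[index], symbols,
             positions[index], velocities)
print(step_data[0], step_data[4].shape)      # 1 (2, 3)
```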

get_step_structure(index, custom_kinds=None)[source]

Return an AiiDA aiida.orm.nodes.data.structure.StructureData node (not stored yet!) with the coordinates of the given step, identified by its index. If you know only the step value, use the get_index_from_stepid() method to get the corresponding index.

Note

The periodic boundary conditions are always set to True.

New in version 0.7: Renamed from step_to_structure

Parameters
• index – The index of the step that you want to retrieve, from 0 to self.numsteps - 1.

• custom_kinds – (Optional) If passed must be a list of aiida.orm.nodes.data.structure.Kind objects. There must be one kind object for each different string in the symbols array, with kind.name set to this string. If this parameter is omitted, the automatic kind generation of AiiDA aiida.orm.nodes.data.structure.StructureData nodes is used, meaning that the strings in the symbols array must be valid chemical symbols.

get_stepids()[source]

Return the array of steps, if it has already been set.

New in version 0.7: Renamed from get_steps

Raises

KeyError – if the trajectory has not been set yet.

get_structure(store=False, **kwargs)[source]

New in version 1.0: Renamed from _get_aiida_structure

Parameters
• converter – specify the converter. Default ‘ase’.

• store – If True, intermediate calculation gets stored in the AiiDA database for record. Default False.

Returns

get_times()[source]

Return the array of times (in ps), if it has already been set.

Raises

KeyError – if the trajectory has not been set yet.

get_velocities()[source]

Return the array of velocities, if it has already been set.

Note

This function (differently from all other get_* functions) will not raise an exception if the velocities are not set, but rather returns None (both if no trajectory has been set yet, and if the trajectory was set but no velocities were specified).

property numsites

Return the number of stored sites, or zero if nothing has been stored yet.

property numsteps

Return the number of stored steps, or zero if nothing has been stored yet.

set_structurelist(structurelist)[source]

Create trajectory from the list of aiida.orm.nodes.data.structure.StructureData instances.

Parameters

structurelist – a list of aiida.orm.nodes.data.structure.StructureData instances.

Raises

ValueError – if symbol lists of supplied structures are different

set_trajectory(symbols, positions, stepids=None, cells=None, times=None, velocities=None)[source]

Store the whole trajectory, after checking that types and dimensions are correct.

The parameters stepids, cells and velocities are optional: if nothing is passed for cells or velocities, nothing will be stored. However, if no input is given for stepids, a consecutive sequence [0,1,2,…,len(positions)-1] will be assumed.

Parameters
• symbols – string list with dimension n, where n is the number of atoms (i.e., sites) in the structure. The same list is used for each step. Normally, the string should be a valid chemical symbol, but actually any unique string works and can be used as the name of the atomic kind (see also the get_step_structure() method).

• positions – float array with dimension $$s \times n \times 3$$, where s is the length of the stepids array and n is the length of the symbols array. Units are angstrom. In particular, positions[i,j,k] is the k-th component of the j-th atom (or site) in the structure at the time step with index i (identified by step number step[i] and with timestamp times[i]).

• stepids – integer array with dimension s, where s is the number of steps. Typically represents an internal counter within the code. For instance, if you want to store a trajectory with one step every 10, starting from step 65, the array will be [65,75,85,...]. No checks are done on duplicate elements or on the ordering, but the array should be sorted in ascending order, without duplicate elements. (If not specified, stepids will be set to numpy.arange(s) by default.) It is internally stored as an array named ‘steps’.

• cells – if specified, a float array with dimension $$s \times 3 \times 3$$, where s is the length of the stepids array. Units are angstrom. In particular, cells[i,j,k] is the k-th component of the j-th cell vector at the time step with index i (identified by step number stepid[i] and with timestamp times[i]).

• times – if specified, float array with dimension s, where s is the length of the stepids array. Contains the timestamp of each step in picoseconds (ps).

• velocities – if specified, must be a float array with the same dimensions as the positions array. The array contains the velocities of the atoms.
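
As a sketch of the dimensions described above, the following numpy arrays (all names and values invented for illustration; s = 3 steps, n = 2 atoms) satisfy the shape relationships that set_trajectory() checks before storing:

```python
import numpy as np

symbols = ["H", "O"]                           # n = 2 atoms (one entry per site)
stepids = np.array([65, 75, 85])               # s = 3 steps, internal step counter
positions = np.zeros((3, 2, 3))                # s x n x 3, in angstrom
cells = np.tile(np.eye(3) * 10.0, (3, 1, 1))   # s x 3 x 3, one cell per step
times = stepids * 0.001                        # one timestamp per step, in ps
velocities = np.zeros_like(positions)          # same shape as positions

# The consistency conditions implied by the parameter descriptions:
assert positions.shape == (len(stepids), len(symbols), 3)
assert cells.shape == (len(stepids), 3, 3)
assert velocities.shape == positions.shape

# If stepids is omitted, a consecutive sequence is assumed:
default_stepids = np.arange(positions.shape[0])
```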

show_mpl_heatmap(**kwargs)[source]

Show a heatmap of the trajectory with matplotlib.

show_mpl_pos(**kwargs)[source]

Show the positions as a function of time, separately for the X, Y and Z coordinates.

Parameters
• stepsize (int) – The stepsize for the trajectory; set higher than 1 to reduce the number of points.

• mintime (int) – Time to start from

• maxtime (int) – Maximum time

• elements (list) – A list of atomic symbols that should be displayed. If not specified, all atoms are displayed.

• indices (list) – A list of indices of the atoms to be displayed. If not specified, all atoms of the correct species are displayed.

• dont_block (bool) – If True, the interpreter is not blocked when the figure is displayed.

property symbols

Return the array of symbols, if it has already been set.

Raises

KeyError – if the trajectory has not been set yet.

class aiida.orm.nodes.UpfData(file=None, filename=None, source=None, **kwargs)[source]

Data sub class to represent a pseudopotential single file in UPF format.

__abstractmethods__ = frozenset({})
__init__(file=None, filename=None, source=None, **kwargs)[source]

Create UpfData instance from pseudopotential file.

Parameters
• file – filepath or filelike object of the UPF potential file to store. Hint: pass io.BytesIO(b"my string") to construct directly from a string.

• filename – specify filename to use (defaults to name of provided file).

• source – Dictionary with information on source of the potential (see “.source” property).

__module__ = 'aiida.orm.nodes.data.upf'
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.upf.UpfData (REPORT)>
_plugin_type_string = 'data.upf.UpfData.'
_prepare_json(main_file_name='')[source]

Returns UPF PP in json format.

_prepare_upf(main_file_name='')[source]

Return UPF content.

_query_type_string = 'data.upf.'
_validate()[source]

Validate the UPF potential file stored for this node.

property element

Return the element of the UPF pseudopotential.

Returns

the element

classmethod from_md5(md5)[source]

Return a list of all UpfData that match the given md5 hash.

Note

assumes the md5 hash of stored UpfData nodes is stored in the md5 attribute

Parameters

md5 – the file hash

Returns

list of existing UpfData nodes that have the same md5 hash
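
The hash that from_md5 matches against is the standard md5 hex digest of the raw file contents; it can be computed with plain hashlib (no AiiDA required, file content below is invented for illustration):

```python
import hashlib

def file_md5(content: bytes) -> str:
    """Return the md5 hex digest of raw file content, as kept in the md5 attribute."""
    return hashlib.md5(content).hexdigest()

# e.g. for a (fictional) pseudopotential file read as bytes:
digest = file_md5(b'<UPF version="2.0.1">...</UPF>')
assert len(digest) == 32  # md5 hex digests are always 32 characters
```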

classmethod get_or_create(filepath, use_first=False, store_upf=True)[source]

Get the UpfData with the same md5 of the given file, or create it if it does not yet exist.

Parameters
• filepath – an absolute filepath on disk

• use_first – if False (default), raise an exception if more than one potential is found. If True, use the first available pseudopotential instead.

• store_upf – boolean; if False, the UpfData node, if created, will not be stored.

Returns

tuple of UpfData and boolean indicating whether it was created.
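
The get-or-create semantics can be illustrated with a plain dictionary keyed on the md5 digest (this is an illustration of the logic only, not AiiDA code; the registry and node representation are invented):

```python
import hashlib

def get_or_create(registry: dict, content: bytes, use_first: bool = False):
    """Return (entry, created): reuse an entry with the same md5 or create one."""
    md5 = hashlib.md5(content).hexdigest()
    existing = registry.get(md5, [])
    if len(existing) > 1 and not use_first:
        raise ValueError(f"more than one entry found for md5 {md5}")
    if existing:
        return existing[0], False   # found: return it, created=False
    node = {"md5": md5, "content": content}
    registry.setdefault(md5, []).append(node)
    return node, True               # not found: create it, created=True

registry = {}
node, created = get_or_create(registry, b"pseudo data")
assert created is True
same, created = get_or_create(registry, b"pseudo data")
assert created is False and same is node
```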

get_upf_family_names()[source]

Get the list of all upf family names to which the pseudo belongs.

classmethod get_upf_group(group_label)[source]

Return the UPF family group with the given label.

Parameters

group_label – the family group label

Returns

the Group with the given label, if it exists

classmethod get_upf_groups(filter_elements=None, user=None)[source]

Return all names of groups of type UpfFamily, possibly with some filters.

Parameters
• filter_elements – A string or a list of strings. If present, returns only the groups that contain one UPF for every element present in the list. The default is None, meaning that all families are returned.

• user – if None (default), return the groups for all users. If defined, it should be either a User instance or the user email.

Returns

list of Group entities of type UPF.

property md5sum

Return the md5 checksum of the UPF pseudopotential file.

Returns

the md5 checksum

set_file(file, filename=None)[source]

Store the file in the repository and parse it to set the element and md5 attributes.

Parameters
• file – filepath or filelike object of the UPF potential file to store. Hint: pass io.BytesIO(b"my string") to construct the file directly from a string.

• filename – specify filename to use (defaults to name of provided file).

store(*args, **kwargs)[source]

Store the node, reparsing the file so that the md5 and the element are correctly reset.

class aiida.orm.nodes.WorkChainNode(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

ORM class for all nodes representing the execution of a WorkChain.

STEPPER_STATE_INFO_KEY = 'stepper_state_info'
__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.process.workflow.workchain'
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.process.workflow.workchain.WorkChainNode (REPORT)>
_plugin_type_string = 'process.workflow.workchain.WorkChainNode.'
_query_type_string = 'process.workflow.workchain.'
_updatable_attributes: Tuple[str, ...] = ('sealed', 'paused', 'checkpoints', 'exception', 'exit_message', 'exit_status', 'process_label', 'process_state', 'process_status', 'stepper_state_info')
set_stepper_state_info(stepper_state_info: str) → None[source]

Set the stepper state info

Parameters

stepper_state_info – string representation of the stepper state info

property stepper_state_info

Return the stepper state info

Returns

string representation of the stepper state info

class aiida.orm.nodes.WorkFunctionNode(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

ORM class for all nodes representing the execution of a workfunction.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.process.workflow.workfunction'
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.process.workflow.workfunction.WorkFunctionNode (REPORT)>
_plugin_type_string = 'process.workflow.workfunction.WorkFunctionNode.'
_query_type_string = 'process.workflow.workfunction.'
validate_outgoing(target: Node, link_type: aiida.common.links.LinkType, link_label: str) → None[source]

Validate adding a link of the given type from ourself to a given node.

A workfunction cannot create Data, so if we receive an outgoing RETURN link to an unstored Data node, that means the user created a Data node within our function body and is trying to return it. This use case should be reserved for @calcfunctions, as they can have CREATE links.

Parameters
• target – the node to which the link is going

Raises
• TypeError – if target is not a Node instance or link_type is not a LinkType enum

• ValueError – if the proposed link is invalid

class aiida.orm.nodes.WorkflowNode(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

Base class for all nodes representing the execution of a workflow process.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.process.workflow.workflow'
_abc_impl = <_abc_data object>
_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.process.workflow.workflow.WorkflowNode (REPORT)>
_plugin_type_string = 'process.workflow.WorkflowNode.'
_query_type_string = 'process.workflow.'
_storable = True
_unstorable_message = 'storing for this node has been disabled'
property inputs

The returned Manager allows you to easily explore the nodes connected to this node via an incoming INPUT_WORK link. The incoming nodes are reachable by their link labels, which are attributes of the manager.

Returns

property outputs

The returned Manager allows you to easily explore the nodes connected to this node via an outgoing RETURN link. The outgoing nodes are reachable by their link labels, which are attributes of the manager.

Returns

validate_outgoing(target: Node, link_type: aiida.common.links.LinkType, link_label: str) → None[source]

Validate adding a link of the given type from ourself to a given node.

A workflow cannot ‘create’ Data, so if we receive an outgoing link to an unstored Data node, that means the user created a Data node within our function body and tries to attach it as an output. This is strictly forbidden and can cause provenance to be lost.

Parameters
• target – the node to which the link is going

Raises
• TypeError – if target is not a Node instance or link_type is not a LinkType enum

• ValueError – if the proposed link is invalid

class aiida.orm.nodes.XyData(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

A subclass designed to handle arrays that have an “XY” relationship to each other. That is, there is one array, the X array, and there are several Y arrays, which can be considered functions of X.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.orm.nodes.data.array.xy'
_abc_impl = <_abc_data object>
static _arrayandname_validator(array, name, units)[source]

Validates that the array is a numpy.ndarray and that the name is of type str. Raises InputValidationError if this is not the case.

_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.data.array.xy.XyData (REPORT)>
_plugin_type_string = 'data.array.xy.XyData.'
_query_type_string = 'data.array.xy.'
get_x()[source]

Tries to retrieve the x array and x name; raises a NotExistent exception if no x array has been set yet.

Returns
• x_name: the name set for the x_array

• x_array: the x array set earlier

• x_units: the x units set earlier

get_y()[source]

Tries to retrieve the y arrays and the y names; raises a NotExistent exception if they have not been set yet or cannot be retrieved.

Returns
• y_names: list of strings naming the y_arrays

• y_arrays: list of y_arrays

• y_units: list of strings giving the units for the y_arrays

set_x(x_array, x_name, x_units)[source]

Sets the array and the name for the x values.

Parameters
• x_array – A numpy.ndarray, containing only floats

• x_name – a string for the x array name

• x_units – the units of x

set_y(y_arrays, y_names, y_units)[source]

Set array(s) for the y part of the dataset. Also checks that the x_array has already been set and that the shape of the y_arrays agrees with the x_array.

Parameters
• y_arrays – A list of y_arrays, numpy.ndarray

• y_names – A list of strings giving the names of the y_arrays

• y_units – A list of strings giving the units of the y_arrays
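
The consistency conditions described above (y arrays must be numpy.ndarray instances matching the shape of the x array, with one name and one unit per array) can be sketched with plain numpy; the data below is invented for illustration:

```python
import numpy as np

x_array = np.linspace(0.0, 1.0, 5)            # the shared X axis
y_arrays = [np.sin(x_array), np.cos(x_array)]  # several Y arrays, functions of X
y_names = ["sin", "cos"]
y_units = ["dimensionless", "dimensionless"]

# The checks XyData performs before accepting the data:
assert all(isinstance(y, np.ndarray) for y in y_arrays)
assert all(y.shape == x_array.shape for y in y_arrays)
assert len(y_arrays) == len(y_names) == len(y_units)
```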

aiida.orm.nodes.to_aiida_type(value)[source]
aiida.orm.nodes.to_aiida_type(value: numpy.bool_)
aiida.orm.nodes.to_aiida_type(value: dict)
aiida.orm.nodes.to_aiida_type(value: numbers.Real)
aiida.orm.nodes.to_aiida_type(value: numbers.Integral)
aiida.orm.nodes.to_aiida_type(value: str)

Turns basic Python types (str, int, float, bool) into the corresponding AiiDA types.
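
The overload resolution shown above is single-dispatch on the value's type; the mechanism can be illustrated with functools.singledispatch (the converter below is a stand-in returning type names, it does not produce real AiiDA nodes):

```python
from functools import singledispatch
import numbers

@singledispatch
def to_type_name(value):
    raise TypeError(f"no converter registered for {type(value)}")

@to_type_name.register(bool)          # exact class match beats the ABCs below
def _(value):
    return "Bool"

@to_type_name.register(numbers.Integral)
def _(value):
    return "Int"

@to_type_name.register(numbers.Real)  # Integral is a subclass of Real, so ints
def _(value):                         # still dispatch to the Integral overload
    return "Float"

@to_type_name.register(str)
def _(value):
    return "Str"

@to_type_name.register(dict)
def _(value):
    return "Dict"

assert to_type_name(True) == "Bool"   # bool is an int subclass, but more specific
assert to_type_name(3) == "Int"
assert to_type_name(3.5) == "Float"
```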

## Submodules¶

Package for node ORM classes.

class aiida.orm.nodes.node.Node(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any)[source]

Base class for all nodes in AiiDA.

Stores attributes starting with an underscore.

Caches files and attributes before the first save, and saves everything only on store(). After the call to store(), attributes cannot be changed.

Only after storing (or upon loading from uuid) extras can be modified and in this case they are directly set on the db.

In the plugin, also set the _plugin_type_string, to be set in the DB in the ‘type’ field.

class Collection(*args, **kwds)[source]

The collection of nodes.

__module__ = 'aiida.orm.nodes.node'
__parameters__ = ()
delete(node_id: int) → None[source]

Delete a Node from the collection with the given id

Parameters

node_id – the node id

__abstractmethods__ = frozenset({})
__annotations__ = {'_hash_ignored_attributes': typing.Tuple[str, ...], '_incoming_cache': typing.Union[typing.List[aiida.orm.utils.links.LinkTriple], NoneType], '_logger': typing.Union[logging.Logger, NoneType], '_repository': typing.Union[aiida.orm.utils._repository.Repository, NoneType], '_updatable_attributes': typing.Tuple[str, ...]}
__copy__()[source]

Copying a Node is not supported in general, but only for the Data sub class.

__deepcopy__(memo)[source]

Deep copying a Node is not supported in general, but only for the Data sub class.

__eq__(other: Any) → bool[source]

Fallback equality comparison by uuid (can be overridden by specific types)

__hash__() → int[source]

Python-Hash: Implementation that is compatible with __eq__
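
The uuid-based equality contract mentioned above (equal objects must hash equally, so nodes behave correctly in sets and dicts) looks like this in a minimal stand-alone class (illustration only, not AiiDA code):

```python
import uuid

class UuidEntity:
    """Minimal sketch of equality and hashing by uuid."""

    def __init__(self, uid=None):
        self.uuid = uid or str(uuid.uuid4())

    def __eq__(self, other):
        # fallback comparison: same type and same uuid
        return isinstance(other, UuidEntity) and self.uuid == other.uuid

    def __hash__(self):
        # must agree with __eq__: equal objects hash equally
        return hash(self.uuid)

a = UuidEntity("a4d9e9f2-0000-0000-0000-000000000000")
b = UuidEntity("a4d9e9f2-0000-0000-0000-000000000000")
assert a == b and hash(a) == hash(b)
assert len({a, b}) == 1  # behave as one element in sets/dicts
```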

__init__(backend: Optional[Backend] = None, user: Optional[aiida.orm.users.User] = None, computer: Optional[aiida.orm.computers.Computer] = None, **kwargs: Any) → None[source]
Parameters

backend_entity (aiida.orm.implementation.entities.BackendEntity) – the backend model supporting this entity

__module__ = 'aiida.orm.nodes.node'
__repr__() → str[source]

Return repr(self).

__str__() → str[source]

Return str(self).

_abc_impl = <_abc_data object>
_add_incoming_cache(source: aiida.orm.nodes.node.Node, link_type: aiida.common.links.LinkType, link_label: str) → None[source]

Parameters
• source – the node from which the link is coming

Raises

aiida.common.UniquenessError – if the given link triple already exists in the cache

_add_outputs_from_cache(cache_node: aiida.orm.nodes.node.Node) → None[source]

Replicate the output links and nodes from the cached node onto this node.

_cachable = False
_get_hash(ignore_errors: bool = True, **kwargs: Any) → Optional[str][source]

Return the hash for this node based on its attributes.

This will always work, even before storing.

Parameters

ignore_errors – return None on aiida.common.exceptions.HashingError (logging the exception)

_get_objects_to_hash() → List[Any][source]

Return a list of objects which should be included in the hash.

_get_same_node() → Optional[aiida.orm.nodes.node.Node][source]

Returns a stored node from which the current Node can be cached or None if it does not exist

If a node is returned it is a valid cache, meaning its _aiida_hash extra matches self.get_hash(). If there are multiple valid matches, the first one is returned. If no matches are found, None is returned.

Returns

a stored Node instance with the same hash as this node, or None

Note: this should be only called on stored nodes, or internally from .store() since it first calls clean_value() on the attributes to normalise them.

_hash_ignored_attributes: Tuple[str, ...] = ()
_incoming_cache: Optional[List[aiida.orm.utils.links.LinkTriple]] = None
_iter_all_same_nodes(allow_before_store=False) → Iterator[aiida.orm.nodes.node.Node][source]

Returns an iterator of all same nodes.

Note: this should be only called on stored nodes, or internally from .store() since it first calls clean_value() on the attributes to normalise them.

_logger: Optional[logging.Logger] = <Logger aiida.orm.nodes.node.Node (REPORT)>
_plugin_type_string = ''
_query_type_string = ''
_repository: Optional[aiida.orm.utils._repository.Repository] = None
_repository_base_path = 'path'
_storable = False
_store(with_transaction: bool = True, clean: bool = True) → aiida.orm.nodes.node.Node[source]

Store the node in the database while saving its attributes and repository directory.

Parameters
• with_transaction – if False, do not use a transaction because the caller will already have opened one.

• clean – boolean, if True, will clean the attributes and extras before attempting to store

_store_from_cache(cache_node: aiida.orm.nodes.node.Node, with_transaction: bool) → None[source]

Store this node from an existing cache node.

_unstorable_message = 'only Data, WorkflowNode, CalculationNode or their subclasses can be stored'
_updatable_attributes: Tuple[str, ...] = ()
_validate() → bool[source]

Check if the attributes and files retrieved from the database are valid.

Must be able to work even before storing: therefore, use the get_attr and similar methods that automatically read either from the DB or from the internal attribute cache.

For the base class, this is always valid. Subclasses will reimplement this. In the subclass, always call the super()._validate() method first!

add_comment(content: str, user: Optional[aiida.orm.users.User] = None) → aiida.orm.comments.Comment[source]

Parameters
• content – string with comment

• user – the user to associate with the comment, will use default if not supplied

Returns

the newly created comment

add_incoming(source: aiida.orm.nodes.node.Node, link_type: aiida.common.links.LinkType, link_label: str) → None[source]

Add a link of the given type from a given node to ourself.

Parameters
• source – the node from which the link is coming

Raises
• TypeError – if source is not a Node instance or link_type is not a LinkType enum

• ValueError – if the proposed link is invalid

property backend_entity

Get the implementing class for this object

Returns

the class model

class_node_type = ''
clear_hash() → None[source]

Sets the stored hash of the Node to None.

property computer

Return the computer of this node.

Returns

the computer or None

Return type

Computer or None

property ctime

Return the node ctime.

Returns

the ctime

delete_object(path: Optional[str] = None, force: bool = False, key: Optional[str] = None) → None[source]

Delete the object from the repository.

Warning

If the repository belongs to a stored node, a ModificationNotAllowed exception will be raised. This check can be avoided by using the force flag, but this should be used with extreme caution!

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Deprecated since version 1.4.0: Keyword force is deprecated and will be removed in v2.0.0.

Parameters
• key – fully qualified identifier for the object within the repository

• force – boolean, if True, will skip the mutability check

Raises

aiida.common.ModificationNotAllowed – if repository is immutable and force=False

property description

Return the node description.

Returns

the description

classmethod from_backend_entity(backend_entity: BackendNode) → Node[source]

Construct an entity from a backend entity instance

Parameters

backend_entity – the backend entity

Returns

an AiiDA entity instance

get_all_same_nodes() → List[aiida.orm.nodes.node.Node][source]

Return a list of stored nodes which match the type and hash of the current node.

All returned nodes are valid caches, meaning their _aiida_hash extra matches self.get_hash().

Note: this can be called only after storing a Node (since at store time attributes will be cleaned with clean_value and the hash should become idempotent to the action of serialization/deserialization)

get_cache_source() → Optional[str][source]

Return the UUID of the node that was used in creating this node from the cache, or None if it was not cached.

Returns

source node UUID or None

get_comment(identifier: int) → aiida.orm.comments.Comment[source]

Return a comment corresponding to the given identifier.

Parameters

identifier – the comment pk

Raises

Returns

the comment

get_comments() → List[aiida.orm.comments.Comment][source]

Return a sorted list of comments for this node.

Returns

the list of comments, sorted by pk

get_description() → str[source]

Return a string with a description of the node.

Returns

a description string

get_hash(ignore_errors: bool = True, **kwargs: Any) → Optional[str][source]

Return the hash for this node based on its attributes.

Parameters

ignore_errors – return None on aiida.common.exceptions.HashingError (logging the exception)

get_incoming(node_class: Type[Node] = None, link_type: Union[aiida.common.links.LinkType, Sequence[aiida.common.links.LinkType]] = (), link_label_filter: Optional[str] = None, only_uuid: bool = False) → aiida.orm.utils.links.LinkManager[source]

Return a list of link triples that are (directly) incoming into this node.

Parameters
• node_class – If specified, should be a class or tuple of classes, and it filters only elements of that specific type (or a subclass of ‘type’)

• link_type – If specified, should be a string or tuple to get the inputs of this link type; if None, returns all inputs of all link types.

• link_label_filter – filters the incoming nodes by their link label. Wildcards (% and _) can be passed in the link label filter, since “like” is used in the QueryBuilder.

• only_uuid – project only the node UUID instead of the instance onto the NodeTriple.node entries
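
The % and _ wildcards in link_label_filter follow SQL LIKE semantics (% matches any run of characters, _ matches exactly one). As an illustration of those semantics in pure Python (the labels below are invented), a LIKE pattern can be translated to an anchored regular expression:

```python
import re

def like_to_regex(pattern: str) -> str:
    """Translate SQL LIKE wildcards into an anchored regular expression."""
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")   # % matches any run of characters
        elif ch == "_":
            parts.append(".")    # _ matches exactly one character
        else:
            parts.append(re.escape(ch))
    return "^" + "".join(parts) + "$"

labels = ["output_band", "output_structure", "remote_folder"]
matched = [label for label in labels if re.match(like_to_regex("output_%"), label)]
assert matched == ["output_band", "output_structure"]
```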

get_object(path: Optional[str] = None, key: Optional[str] = None) → File[source]

Return the object with the given path.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Parameters
• path – the relative path of the object within the repository.

• key – fully qualified identifier for the object within the repository

Returns

a File named tuple

get_object_content(path: Optional[str] = None, mode: str = 'r', key: Optional[str] = None) → Union[str, bytes][source]

Return the content of a object with the given path.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Parameters
• path – the relative path of the object within the repository.

• key – fully qualified identifier for the object within the repository

get_outgoing(node_class: Type[Node] = None, link_type: Union[aiida.common.links.LinkType, Sequence[aiida.common.links.LinkType]] = (), link_label_filter: Optional[str] = None, only_uuid: bool = False) → aiida.orm.utils.links.LinkManager[source]

Return a list of link triples that are (directly) outgoing of this node.

Parameters
• node_class – If specified, should be a class or tuple of classes, and it filters only elements of that specific type (or a subclass of ‘type’)

• link_type – If specified, should be a string or tuple to get the outputs of this link type; if None, returns all outputs of all link types.

• link_label_filter – filters the outgoing nodes by their link label. Wildcards (% and _) can be passed in the link label filter, since “like” is used in the QueryBuilder.

• only_uuid – project only the node UUID instead of the instance onto the NodeTriple.node entries

static get_schema() → Dict[str, Any][source]
Every node property contains:
• display_name: display name of the property

• help text: short help text of the property

• is_foreign_key: whether the property is a foreign key to another type of node

• type: type of the property. e.g. str, dict, int

Returns

the schema of the node

Deprecated since version 1.0.0: Will be removed in v2.0.0. Use get_projectable_properties() instead.

Return the list of stored link triples directly incoming to or outgoing of this node.

Note this will only return link triples that are stored in the database. Anything in the cache is ignored.

Parameters
• node_class – If specified, should be a class, and it filters only elements of that (subclass of) type

• link_type – Only get inputs of this link type, if empty tuple then returns all inputs of all link types.

• link_label_filter – filters the incoming nodes by its link label. This should be a regex statement as one would pass directly to a QueryBuilder filter statement with the ‘like’ operation.

• link_direction – incoming or outgoing to get the incoming or outgoing links, respectively.

• only_uuid – project only the node UUID instead of the instance onto the NodeTriple.node entries

Return whether there are unstored incoming links in the cache.

Returns

boolean, True when there are links in the incoming cache, False otherwise

initialize() → None[source]

Initialize internal variables for the backend node

This needs to be called explicitly in each specific subclass implementation of the init.

property is_created_from_cache

Return whether this node was created from a cached node.

Returns

boolean, True if the node was created by cloning a cached node, False otherwise

property is_valid_cache

Hook to exclude certain Node instances from being considered a valid cache.

property label

Return the node label.

Returns

the label

list_object_names(path: Optional[str] = None, key: Optional[str] = None) → List[str][source]

Return a list of the object names contained in this repository, optionally in the given sub directory.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Parameters
• path – the relative path of the object within the repository.

• key – fully qualified identifier for the object within the repository

list_objects(path: Optional[str] = None, key: Optional[str] = None) → List[File][source]

Return a list of the objects contained in this repository, optionally in the given sub directory.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Parameters
• path – the relative path of the object within the repository.

• key – fully qualified identifier for the object within the repository

Returns

a list of File named tuples representing the objects present in directory with the given path

Raises

FileNotFoundError – if the path does not exist in the repository of this node

property logger

Return the logger configured for this Node.

Returns

Logger object

property mtime

Return the node mtime.

Returns

the mtime

property node_type

Return the node type.

Returns

the node type

open(path: Optional[str] = None, mode: str = 'r', key: Optional[str] = None) → aiida.orm.nodes.node.WarnWhenNotEntered[source]

Open a file handle to the object with the given path.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Deprecated since version 1.4.0: Starting from v2.0.0 this will raise if not used in a context manager.

Parameters
• path – the relative path of the object within the repository.

• key – fully qualified identifier for the object within the repository

• mode – the mode under which to open the handle

property process_type

Return the node process type.

Returns

the process type

put_object_from_file(filepath: str, path: Optional[str] = None, mode: Optional[str] = None, encoding: Optional[str] = None, force: bool = False, key: Optional[str] = None) → None[source]

Store a new object under path with contents of the file located at filepath on this file system.

Warning

If the repository belongs to a stored node, a ModificationNotAllowed exception will be raised. This check can be avoided by using the force flag, but this should be used with extreme caution!

Deprecated since version 1.4.0: First positional argument path has been deprecated and renamed to filepath.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Deprecated since version 1.4.0: Keyword force is deprecated and will be removed in v2.0.0.

Parameters
• filepath – absolute path of file whose contents to copy to the repository

• path – the relative path where to store the object in the repository.

• key – fully qualified identifier for the object within the repository

• mode – the file mode with which the object will be written Deprecated: will be removed in v2.0.0

• encoding – the file encoding with which the object will be written Deprecated: will be removed in v2.0.0

• force – boolean, if True, will skip the mutability check

Raises

aiida.common.ModificationNotAllowed – if repository is immutable and force=False

put_object_from_filelike(handle: IO[Any], path: Optional[str] = None, mode: str = 'w', encoding: str = 'utf8', force: bool = False, key: Optional[str] = None) → None[source]

Store a new object under path with contents of filelike object handle.

Warning

If the repository belongs to a stored node, a ModificationNotAllowed exception will be raised. This check can be avoided by using the force flag, but this should be used with extreme caution!

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Deprecated since version 1.4.0: Keyword force is deprecated and will be removed in v2.0.0.

Parameters
• handle – filelike object with the content to be stored

• path – the relative path where to store the object in the repository.

• key – fully qualified identifier for the object within the repository

• mode – the file mode with which the object will be written

• encoding – the file encoding with which the object will be written

• force – boolean, if True, will skip the mutability check

Raises

aiida.common.ModificationNotAllowed – if repository is immutable and force=False

put_object_from_tree(filepath: str, path: Optional[str] = None, contents_only: bool = True, force: bool = False, key: Optional[str] = None) → None[source]

Store a new object under path with the contents of the directory located at filepath on this file system.

Warning

If the repository belongs to a stored node, a ModificationNotAllowed exception will be raised. This check can be avoided by using the force flag, but this should be used with extreme caution!

Deprecated since version 1.4.0: First positional argument path has been deprecated and renamed to filepath.

Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.

Deprecated since version 1.4.0: Keyword force is deprecated and will be removed in v2.0.0.

Deprecated since version 1.4.0: Keyword contents_only is deprecated and will be removed in v2.0.0.

Parameters
• filepath – absolute path of directory whose contents to copy to the repository

• path – the relative path of the object within the repository.

• key – fully qualified identifier for the object within the repository

• contents_only – boolean, if True, omit the top level directory of the path and only copy its contents.

• force – boolean, if True, will skip the mutability check

Raises

aiida.common.ModificationNotAllowed – if repository is immutable and force=False
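The effect of the contents_only flag can be sketched with a stdlib-only stand-in (a hypothetical helper, not the real repository): it decides whether the top-level directory name becomes part of the stored object paths.

```python
import os
import tempfile


def collect_tree(filepath, path=None, contents_only=True):
    """Map repository-relative paths to file contents for a directory tree.

    A toy stand-in for put_object_from_tree: with contents_only=True the
    top-level directory name is omitted from the stored paths.
    """
    base = filepath if contents_only else os.path.dirname(filepath.rstrip(os.sep))
    objects = {}
    for root, _dirs, files in os.walk(filepath):
        for name in files:
            full = os.path.join(root, name)
            rel = os.path.relpath(full, base)
            key = os.path.join(path, rel) if path else rel
            with open(full, encoding='utf8') as handle:
                objects[key] = handle.read()
    return objects


with tempfile.TemporaryDirectory() as tmp:
    tree = os.path.join(tmp, 'inputs')
    os.makedirs(tree)
    with open(os.path.join(tree, 'params.txt'), 'w', encoding='utf8') as handle:
        handle.write('kpoints 4 4 4')
    print(sorted(collect_tree(tree, contents_only=True)))   # ['params.txt']
    print(sorted(collect_tree(tree, contents_only=False)))  # ['inputs/params.txt']
```

The function name and return shape are assumptions made for the sketch; the real method writes into the node repository rather than returning a dict.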

rehash() → None[source]

Regenerate the stored hash of the Node.

remove_comment(identifier: int) → None[source]

Delete an existing comment.

Parameters

identifier – the comment pk

store(with_transaction: bool = True, use_cache=None) → aiida.orm.nodes.node.Node[source]

Store the node in the database while saving its attributes and repository directory.

After this call the attributes cannot be changed anymore! Extras, on the other hand, can be changed only AFTER calling this store() function.

Note

After successful storage, those links that are in the cache, and for which also the parent node is already stored, will be automatically stored. The others will remain unstored.

Parameters

with_transaction – if False, do not use a transaction because the caller will already have opened one.
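The attribute-freezing contract described above can be modeled with a minimal sketch (a hypothetical toy class, not the actual Node implementation): attributes are writable before store() and rejected afterwards, while extras become writable once the node is stored.

```python
class ModificationNotAllowed(Exception):
    """Stand-in for aiida.common.ModificationNotAllowed."""


class ToyNode:
    """Hypothetical sketch of the store() contract for attributes vs extras."""

    def __init__(self):
        self.attributes = {}
        self.extras = {}
        self.is_stored = False

    def set_attribute(self, key, value):
        # Attributes are frozen once the node is stored.
        if self.is_stored:
            raise ModificationNotAllowed('attributes are immutable after store()')
        self.attributes[key] = value

    def set_extra(self, key, value):
        # Extras remain mutable after the node is stored.
        self.extras[key] = value

    def store(self):
        self.is_stored = True
        return self


node = ToyNode()
node.set_attribute('energy', -1.5)   # fine: node not yet stored
node.store()
node.set_extra('tag', 'converged')   # fine: extras are mutable after storing
```

Method names mirror the AiiDA API for readability, but the bodies are a sketch of the documented behaviour only.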

store_all(with_transaction: bool = True, use_cache=None) → aiida.orm.nodes.node.Node[source]

Store the node, together with all input links.

Unstored nodes from cached incoming links will also be stored.

Parameters

with_transaction – if False, do not use a transaction because the caller will already have opened one.

update_comment(identifier: int, content: str) → None[source]

Update the content of an existing comment.

Parameters
• identifier – the comment pk

• content – the new comment content

Raises

property user

Return the user of this node.

Returns

the user

Return type

User

property uuid

Return the node UUID.

Returns

the string representation of the UUID

validate_incoming(source: aiida.orm.nodes.node.Node, link_type: aiida.common.links.LinkType, link_label: str) → None[source]

Validate adding a link of the given type from a given node to ourself.

This function will first validate the types of the inputs, followed by the node and link types and validate whether in principle a link of that type between the nodes of these types is allowed.

Subsequently, the validity of the “degree” of the proposed link is validated, i.e. whether the number of links of the given type from the given node is still allowed.

Parameters
• source – the node from which the link is coming

• link_type – the link type

• link_label – the link label

Raises
• TypeError – if source is not a Node instance or link_type is not a LinkType enum

• ValueError – if the proposed link is invalid
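The “degree” validation mentioned above can be illustrated with a hedged sketch (the link types and limits below are illustrative, not AiiDA's actual rules): a helper that rejects an additional incoming link when a per-type limit would be exceeded.

```python
def validate_incoming_degree(existing_links, link_type, max_incoming):
    """Reject a new incoming link if the per-type limit would be exceeded.

    existing_links: list of link-type strings already attached to the node.
    max_incoming: mapping from link type to the maximum allowed incoming
    links of that type; types absent from the mapping are unlimited.
    """
    limit = max_incoming.get(link_type)
    if limit is not None and existing_links.count(link_type) >= limit:
        raise ValueError(f'node already has {limit} incoming {link_type!r} link(s)')


# Illustrative limits: at most one 'create' link, unlimited 'input_work' links.
limits = {'create': 1}
validate_incoming_degree([], 'create', limits)              # fine: no link yet
validate_incoming_degree(['create'], 'input_work', limits)  # fine: no limit for this type
```

The real method additionally type-checks the source node and LinkType enum before the degree check; the helper above isolates only the counting step.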

validate_outgoing(target: aiida.orm.nodes.node.Node, link_type: aiida.common.links.LinkType, link_label: str) → None[source]

Validate adding a link of the given type from ourself to a given node.

The validity of the triple (source, link, target) should be validated in the validate_incoming call. This method will be called afterwards and can be overridden by subclasses to add additional checks that are specific to that subclass.

Parameters
• target – the node to which the link is going

• link_type – the link type

• link_label – the link label

Raises
• TypeError – if target is not a Node instance or link_type is not a LinkType enum

• ValueError – if the proposed link is invalid

validate_storability() → None[source]

Verify that the current node is allowed to be stored.

Raises

aiida.common.exceptions.StoringNotAllowed – if the node does not match all requirements for storing

verify_are_parents_stored() → None[source]

Verify that all parent nodes are already stored.

Raises

aiida.common.ModificationNotAllowed – if one of the source nodes of incoming links is not stored.