aiida.orm
Main module exposing all AiiDA ORM classes and methods.
ArrayData
Bases: aiida.orm.nodes.data.data.Data
Store a set of arrays on disk (rather than on the database) in an efficient way using numpy.save() (therefore, this class requires numpy to be installed).
Each array is stored within the Node folder as a different .npy file.
Before the node is stored, no caching is done: every get_array() call re-reads the array from disk. Once the ArrayData node has been stored, the array is cached in memory after the first read, and the cached array is used thereafter. If too much RAM is used, you can clear the cache with the clear_internal_cache() method.
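The read-through caching behaviour described above can be sketched in plain Python. This is an illustrative stand-in, not AiiDA's implementation: pickle files replace the .npy files, a stored flag mimics the node's stored state, and all names are hypothetical.

```python
import os
import pickle
import tempfile

class ArrayStoreSketch:
    """Illustrative stand-in for ArrayData's read-through cache."""

    def __init__(self, folder):
        self.folder = folder
        self.stored = False          # mimics whether the node has been stored
        self._cached_arrays = {}     # in-memory cache, used only once stored

    def set_array(self, name, array):
        # One file per array, mirroring the one-.npy-file-per-array layout
        with open(os.path.join(self.folder, name + '.pkl'), 'wb') as handle:
            pickle.dump(array, handle)

    def get_array(self, name):
        if self.stored and name in self._cached_arrays:
            return self._cached_arrays[name]       # served from the cache
        with open(os.path.join(self.folder, name + '.pkl'), 'rb') as handle:
            array = pickle.load(handle)            # (re-)read from disk
        if self.stored:
            self._cached_arrays[name] = array      # cache after first read
        return array

    def clear_internal_cache(self):
        self._cached_arrays = {}                   # free the RAM used by the cache
```

Before `stored` is set, every `get_array` hits the disk; afterwards the first read populates the cache and `clear_internal_cache` empties it again.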
__abstractmethods__
__module__
_abc_impl
_arraynames_from_files
Return a list of all arrays stored in the node, listing the files (and not relying on the properties).
_arraynames_from_properties
Return a list of all arrays stored in the node, listing the attributes starting with the correct prefix.
_cached_arrays
_logger
_plugin_type_string
_query_type_string
_validate
Check if the list of .npy files stored inside the node and the list of properties match. Only the names are checked, not the sizes, since a size check would require reloading all arrays, which may take time and memory.
array_prefix
clear_internal_cache
Clear the internal memory cache where the arrays are stored after being read from disk (used to minimize reads from disk). This function is useful if you want to keep the node in memory but do not want to waste RAM caching the arrays.
delete_array
Delete an array from the node. Can only be called before storing.
name – The name of the array to delete from the node.
get_array
Return an array stored in the node
name – The name of the array to return.
get_arraynames
New in version 0.7: Renamed from arraynames
get_iterarrays
Iterator that returns tuples (name, array) for each array stored in the node.
New in version 1.0: Renamed from iterarrays
get_shape
Return the shape of an array (read from the value cached in the properties for efficiency reasons).
name – The name of the array.
initialize
Initialize internal variables for the backend node
This needs to be called explicitly in each specific subclass implementation of the init.
set_array
Store a new numpy array inside the node, overwriting any existing array with the same name.
Internally, it stores a name.npy file in numpy format.
array – The numpy array to store.
AuthInfo
Bases: aiida.orm.entities.Entity
ORM class that models the authorization information that allows a User to connect to a Computer.
Collection
Bases: aiida.orm.entities.Collection
The collection of AuthInfo entries.
__parameters__
delete
Delete an entry from the collection.
pk – the pk of the entry to delete
PROPERTY_WORKDIR
__init__
Create an AuthInfo instance for the given computer and user.
computer (aiida.orm.Computer) – a Computer instance
user (aiida.orm.User) – a User instance
aiida.orm.authinfos.AuthInfo
__str__
Return str(self).
computer
Return the computer associated with this instance.
aiida.orm.computers.Computer
enabled
Return whether this instance is enabled.
True if enabled, False otherwise
bool
get_auth_params
Return the dictionary of authentication parameters
a dictionary with authentication parameters
dict
get_metadata
Return the dictionary of metadata
a dictionary with metadata
get_transport
Return a fully configured transport that can be used to connect to the computer set for this instance.
aiida.transports.Transport
get_workdir
Return the working directory.
If no explicit work directory is set for this instance, the working directory of the computer will be returned.
the working directory
str
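The fallback described for get_workdir amounts to a simple pattern, sketched here with hypothetical names (AiiDA's actual implementation reads the value from the AuthInfo metadata):

```python
# Key under which an explicit work directory would be stored,
# mirroring the PROPERTY_WORKDIR class attribute above (assumed name).
PROPERTY_WORKDIR = 'workdir'

def get_workdir(metadata, computer_workdir):
    """Return the explicitly set workdir from the metadata dictionary,
    falling back to the computer's working directory otherwise."""
    workdir = metadata.get(PROPERTY_WORKDIR)
    if workdir is not None:
        return workdir
    return computer_workdir
```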
set_auth_params
Set the dictionary of authentication parameters
auth_params – a dictionary with authentication parameters
set_metadata
Set the dictionary of metadata
metadata (dict) – a dictionary with metadata
user
Return the user associated with this instance.
aiida.orm.users.User
AutoGroup
Bases: aiida.orm.groups.Group
Group to be used to contain selected nodes generated while aiida.orm.autogroup.CURRENT_AUTOGROUP is set.
_type_string
BandsData
Bases: aiida.orm.nodes.data.array.kpoints.KpointsData
Class to handle bands data
_get_band_segments
Return the band segments.
_get_bandplot_data
Get data to plot a band structure
cartesian – if True, distances (for the x-axis) are computed in cartesian coordinates, otherwise they are computed in reciprocal coordinates. cartesian=True will fail if no cell has been set.
prettify_format – by default, strings are not prettified. If you want to prettify them, pass a valid prettify_format string (see valid options in the docstring of prettify_labels).
join_symbols – by default, strings are not joined. If you pass a string, it is used to join labels that are closer than a given threshold. The most typical string is the pipe symbol: |.
get_segments – if True, also computes the band split into segments
y_origin – if present, shift bands so to set the value specified at y=0
a plot_info dictionary, whose keys are: x (array of distances for the x axis of the plot); y (array of bands); labels (list of tuples in the format (float x value of the label, label string)); band_type_idx (array containing an index for each band: if there is only one spin, it is an array of zeros, of length equal to the number of bands at each point; if there are two spins, it is an array of zeros or ones depending on the type of spin; the length is always equal to the total number of bands per kpoint).
_get_mpl_body_template
paths – paths of k-points
_matplotlib_get_dict
Prepare the data to send to the python-matplotlib plotting script.
comments – if True, print comments (if it makes sense for the given format)
plot_info – a dictionary
setnumber_offset – an offset to be applied to all set numbers (i.e. s0 is replaced by s[offset], s1 by s[offset+1], etc.)
color_number – the color number for lines, symbols, error bars and filling (should be less than the parameter MAX_NUM_AGR_COLORS defined below)
title – the title
legend – the legend (applied only to the first of the set)
legend2 – the legend for second-type spins (applied only to the first of the set)
y_max_lim – the maximum on the y axis (if None, put the maximum of the bands)
y_min_lim – the minimum on the y axis (if None, put the minimum of the bands)
y_origin – the new origin of the y axis -> all bands are replaced by bands-y_origin
prettify_format – if None, use the default prettify format. Otherwise specify a string with the prettifier to use.
kwargs – additional customization variables; only a subset is accepted, see internal variable ‘valid_additional_keywords
_prepare_agr
Prepare an xmgrace agr file.
color_number2 – the color number for lines, symbols, error bars and filling for the second-type spins (should be less than the parameter MAX_NUM_AGR_COLORS defined below)
legend – the legend (applied only to the first set)
y_max_lim – the maximum on the y axis (if None, put the maximum of the bands); applied after shifting the origin by y_origin
y_min_lim – the minimum on the y axis (if None, put the minimum of the bands); applied after shifting the origin by y_origin
_prepare_agr_batch
Prepare two files, data and batch, to be plotted with xmgrace as: xmgrace -batch file.dat
main_file_name – if the user asks to write the main content on a file, this contains the filename. This should be used to infer a good filename for the additional files. In this case, we remove the extension, and add ‘_data.dat’
_prepare_dat_blocks
Format suitable for gnuplot using blocks. Columns with x and y (path and band energy). Several blocks, separated by two empty lines, one per energy band.
_prepare_dat_multicolumn
Write an N x M matrix. First column is the distance between kpoints, The other columns are the bands. Header contains number of kpoints and the number of bands (commented).
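The block layout described for _prepare_dat_blocks can be sketched as follows. This is illustrative only (the real method operates on the node's stored arrays; the function name and fixed-width formatting are assumptions):

```python
def format_dat_blocks(distances, bands):
    """Write one (x, y) block per energy band, with blocks separated
    by two empty lines, in a format suitable for gnuplot.

    distances: list of floats, one per k-point (the x axis).
    bands: list of bands, each a list of energies (one per k-point).
    """
    blocks = []
    for band in bands:
        lines = ['{:18.10f} {:18.10f}'.format(x, y)
                 for x, y in zip(distances, band)]
        blocks.append('\n'.join(lines))
    # Two empty lines between blocks means three consecutive newlines.
    return '\n\n\n'.join(blocks) + '\n'
```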
_prepare_gnuplot
Prepare a gnuplot script to plot the bands, with the .dat file returned as an independent file.
title – if specified, add a title to the plot
_prepare_json
Prepare a json file in a format compatible with the AiiDA band visualizer
_prepare_mpl_pdf
Prepare a python script using matplotlib to plot the bands, with the JSON returned as an independent file.
For the possible parameters, see documentation of _matplotlib_get_dict()
_prepare_mpl_png
_prepare_mpl_singlefile
Prepare a python script using matplotlib to plot the bands
_prepare_mpl_withjson
_set_pbc
Validate the pbc, then store them.
_validate_bands_occupations
Validate the list of bands and of occupations before storage. Kpoints must be set in advance. Bands and occupations must be convertible into arrays of Nkpoints x Nbands floats or Nspins x Nkpoints x Nbands; Nkpoints must correspond to the number of kpoints.
array_labels
Get the labels associated with the band arrays
get_bands
Return an array (nkpoints x num_bands or nspins x nkpoints x num_bands) of energies.
also_occupations – if True, also return the occupations array. Default = False
set_bands
Set an array of band energies of dimension (nkpoints x nbands). Kpoints must be set in advance. Can contain floats or None.
bands – a list of nkpoints lists of nbands bands, or a 2D array of shape (nkpoints x nbands), with band energies for each kpoint
units – optional, energy units
occupations – optional, a 2D list or array of floats of the same shape as bands, with the occupation associated to each band
set_kpointsdata
Load the kpoints from a kpoint object.
kpointsdata – an instance of the KpointsData class
show_mpl
Call a show() command for the band structure using matplotlib. This uses internally the ‘mpl_singlefile’ format, with empty main_file_name.
Other kwargs are passed to self._exportcontent.
units
Units in which the band energies were stored, as a string.
BaseType
Data sub class to be used as a base for data containers that represent base python data types.
__eq__
Fallback equality comparison by uuid (can be overwritten by specific types)
__hash__
backend_entity (aiida.orm.implementation.entities.BackendEntity) – the backend model supporting this entity
__ne__
Return self!=value.
new
value
Bool
Bases: aiida.orm.nodes.data.base.BaseType
Data sub class to represent a boolean value.
__bool__
__int__
_type
alias of builtins.bool
CalcFunctionNode
Bases: aiida.orm.utils.mixins.FunctionCalculationMixin, aiida.orm.nodes.process.calculation.calculation.CalculationNode
ORM class for all nodes representing the execution of a calcfunction.
validate_outgoing
Validate adding a link of the given type from ourself to a given node.
A calcfunction cannot return Data, so if we receive an outgoing link to a stored Data node, that means the user created a Data node within our function body and stored it themselves or they are returning an input node. The latter use case is reserved for @workfunctions, as they can have RETURN links.
target – the node to which the link is going
link_type – the link type
link_label – the link label
TypeError – if target is not a Node instance or link_type is not a LinkType enum
ValueError – if the proposed link is invalid
CalcJobNode
Bases: aiida.orm.nodes.process.calculation.calculation.CalculationNode
ORM class for all nodes representing the execution of a CalcJob.
CALC_JOB_STATE_KEY
REMOTE_WORKDIR_KEY
RETRIEVE_LIST_KEY
RETRIEVE_SINGLE_FILE_LIST_KEY
RETRIEVE_TEMPORARY_LIST_KEY
SCHEDULER_DETAILED_JOB_INFO_KEY
SCHEDULER_JOB_ID_KEY
SCHEDULER_LAST_CHECK_TIME_KEY
SCHEDULER_LAST_JOB_INFO_KEY
SCHEDULER_STATE_KEY
_get_objects_to_hash
Return a list of objects which should be included in the hash.
This method is purposefully overridden from the base Node class, because we do not want to include the repository folder in the hash. The reason is that the hash of this node is computed in the store method, at which point the input files that will be stored in the repository have not yet been generated. Including these anyway in the computation of the hash would mean that the hash of the node would change as soon as the process has started and the input files have been written to the repository.
_hash_ignored_attributes
_raw_input_folder
Get the input folder object.
the input folder object.
NotExistent: if the raw folder hasn’t been created yet
_repository_base_path
_tools
_updatable_attributes
_validate_retrieval_directive
Validate a list or tuple of file retrieval directives.
directives – a list or tuple of file retrieval directives
ValueError – if the format of the directives is invalid
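The validation described above can be sketched in plain Python. A minimal sketch under the assumption, taken from the directive descriptions in this section, that each directive is either a string (a filepath) or a list/tuple of length three; the function name is hypothetical:

```python
def validate_retrieval_directives(directives):
    """Validate a list or tuple of file retrieval directives.

    Each directive must be a string, or a list/tuple of length three.
    Raises ValueError if the format is invalid.
    """
    if not isinstance(directives, (list, tuple)):
        raise ValueError('directives must be a list or tuple')
    for directive in directives:
        if isinstance(directive, str):
            continue  # a plain filepath directive
        if isinstance(directive, (list, tuple)) and len(directive) == 3:
            continue  # a three-entry directive
        raise ValueError('invalid directive: {}'.format(directive))
```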
delete_state
Delete the calculation job state attribute if it exists.
get_authinfo
Return the AuthInfo that is configured for the Computer set for this node.
get_builder_restart
Return a ProcessBuilder that is ready to relaunch the same CalcJob that created this node.
The process class will be set based on the process_type of this node and the inputs of the builder will be prepopulated with the inputs registered for this node. This functionality is very useful if a process has completed and you want to relaunch it with slightly different inputs.
In addition to prepopulating the input nodes, which is implemented by the base ProcessNode class, here we also add the options that were passed in the metadata input of the CalcJob process.
get_description
Return a description of the node based on its properties.
get_detailed_job_info
Return the detailed job info dictionary.
The scheduler is polled for the detailed job info after the job is completed and ready to be retrieved.
the dictionary with detailed job info if defined or None
get_job_id
Return job id that was assigned to the calculation by the scheduler.
the string representation of the scheduler job id
get_last_job_info
Return the last information asked to the scheduler about the status of the job.
The last job info is updated on every poll of the scheduler, except for the final poll when the job drops from the scheduler’s job queue. For completed jobs, the last job info therefore contains the “second-to-last” job info that still shows the job as running. Please use get_detailed_job_info() instead.
a JobInfo object (that closely resembles a dictionary) or None.
get_option
Return the value of an option that was set for this CalcJobNode
name – the option name
the option value or None
ValueError for unknown option
get_options
Return the dictionary of options set for this CalcJobNode
dictionary of the options and their values
get_parser_class
Return the output parser object for this calculation or None if no parser is set.
a Parser class.
aiida.common.exceptions.EntryPointError – if the parser entry point can not be resolved.
get_remote_workdir
Return the path to the remote (on cluster) scratch folder of the calculation.
a string with the remote path
get_retrieve_list
Return the list of files/directories to be retrieved on the cluster after the calculation has completed.
a list of file directives
get_retrieve_singlefile_list
Return the list of files to be retrieved on the cluster after the calculation has completed.
list of single file retrieval directives
Deprecated since version 1.0.0: Will be removed in v2.0.0, use aiida.orm.nodes.process.calculation.calcjob.CalcJobNode.get_retrieve_temporary_list() instead.
get_retrieve_temporary_list
Return list of files to be retrieved from the cluster which will be available during parsing.
get_retrieved_node
Return the retrieved data folder.
the retrieved FolderData node or None if not found
get_scheduler_lastchecktime
Return the time of the last update of the scheduler state by the daemon or None if it was never set.
a datetime object or None
get_scheduler_state
Return the status of the calculation according to the cluster scheduler.
a JobState enum instance.
get_scheduler_stderr
Return the scheduler stderr output if the calculation has finished and been retrieved, None otherwise.
scheduler stderr output or None
get_scheduler_stdout
Return the scheduler stdout output if the calculation has finished and been retrieved, None otherwise.
scheduler stdout output or None
get_state
Return the calculation job active sub state.
The calculation job state serves to give more granular state information to CalcJobs, in addition to the generic process state, while the calculation job is active. The state can take values from the enumeration defined in aiida.common.datastructures.CalcJobState and can be used to query for calculation jobs in specific active states.
instance of aiida.common.datastructures.CalcJobState or None if invalid value, or not set
Return the transport for this calculation.
Transport configured with the AuthInfo associated to the computer of this node
link_label_retrieved
Return the link label used for the retrieved FolderData node.
res
To be used to get direct access to the parsed parameters.
an instance of the CalcJobResultManager.
a practical example on how it is meant to be used: let’s say that there is a key ‘energy’ in the dictionary of the parsed results which contains a list of floats. The command calc.res.energy will return such a list.
set_detailed_job_info
Set the detailed job info dictionary.
detailed_job_info – a dictionary with metadata with the accounting of a completed job
set_job_id
Set the job id that was assigned to the calculation by the scheduler.
Note
the id will always be stored as a string
job_id – the id assigned by the scheduler after submission
set_last_job_info
Set the last job info.
last_job_info – a JobInfo object
set_option
Set an option to the given value
value – the value to set
TypeError for values with invalid type
set_options
Set the options for this CalcJobNode
options – dictionary of option and their values to set
set_remote_workdir
Set the absolute path to the working directory on the remote computer where the calculation is run.
remote_workdir – absolute filepath to the remote working directory
set_retrieve_list
Set the retrieve list.
This list of directives instructs the daemon which files or paths to retrieve after the calculation has completed.
retrieve_list – list or tuple of with filepath directives
set_retrieve_singlefile_list
Set the retrieve singlefile list.
The files will be stored as SinglefileData instances and added as output nodes to this calculation node. The format of a single file directive is a tuple or list of length 3 with the following entries:
the link label under which the file should be added
the SinglefileData class or sub class to use to store
the filepath relative to the remote working directory of the calculation
retrieve_singlefile_list – list or tuple of single file directives
Deprecated since version 1.0.0: Will be removed in v2.0.0. Use set_retrieve_temporary_list() instead.
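The three-entry single file directive described above can be checked with a small helper. This is an illustrative sketch, not AiiDA's implementation; the function name is hypothetical and the class is represented by its name string:

```python
def validate_singlefile_directive(directive):
    """Validate a single file directive: a list or tuple of length 3,
    (link_label, data_class_name, remote_relative_filepath)."""
    if not isinstance(directive, (list, tuple)) or len(directive) != 3:
        raise ValueError('directive must be a list or tuple of length 3')
    link_label, data_class_name, filepath = directive
    return (link_label, data_class_name, filepath)
```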
set_retrieve_temporary_list
Set the retrieve temporary list.
The retrieve temporary list stores files that are retrieved after completion and made available during parsing and are deleted as soon as the parsing has been completed.
retrieve_temporary_list – list or tuple of with filepath directives
set_scheduler_state
Set the scheduler state.
state – an instance of JobState
set_state
Set the calculation active job state.
ValueError if state is invalid
tools
Return the calculation tools that are registered for the process type associated with this calculation.
If the entry point name stored in the process_type of the CalcJobNode has an accompanying entry point in the aiida.tools.calculations entry point category, it will attempt to load the entry point and instantiate it passing the node to the constructor. If the entry point does not exist, cannot be resolved or loaded, a warning will be logged and the base CalculationTools class will be instantiated and returned.
CalculationTools instance
CalculationNode
Bases: aiida.orm.nodes.process.process.ProcessNode
Base class for all nodes representing the execution of a calculation process.
_cachable
_storable
_unstorable_message
inputs
Return an instance of NodeLinksManager to manage incoming INPUT_CALC links
The returned Manager allows you to easily explore the nodes connected to this node via an incoming INPUT_CALC link. The incoming nodes are reachable by their link labels which are attributes of the manager.
outputs
Return an instance of NodeLinksManager to manage outgoing CREATE links
The returned Manager allows you to easily explore the nodes connected to this node via an outgoing CREATE link. The outgoing nodes are reachable by their link labels which are attributes of the manager.
CifData
Bases: aiida.orm.nodes.data.singlefile.SinglefileData
Wrapper for Crystallographic Interchange File (CIF)
The physical file is held as the authoritative source of information, so all conversions are done through it: when setting ase or values, a physical CIF file is generated first, then the values are updated from that file.
_PARSE_POLICIES
_PARSE_POLICY_DEFAULT
_SCAN_TYPES
_SCAN_TYPE_DEFAULT
_SET_INCOMPATIBILITIES
Construct a new instance and set the contents to that of the file.
file – an absolute filepath or filelike object for CIF. Hint: Pass io.BytesIO(b”my string”) to construct the SinglefileData directly from a string.
filename – specify filename to use (defaults to name of provided file).
ase – ASE Atoms object to construct the CifData instance from.
values – PyCifRW CifFile object to construct the CifData instance from.
source –
scan_type – scan type string for parsing with PyCIFRW (‘standard’ or ‘flex’). See CifFile.ReadCif
parse_policy – ‘eager’ (parse CIF file on set_file) or ‘lazy’ (defer parsing until needed)
_ase
_get_object_ase
Converts CifData to ase.Atoms
an ase.Atoms object
_get_object_pycifrw
Converts CifData to PyCIFRW.CifFile
a PyCIFRW.CifFile object
_prepare_cif
Return CIF string of CifData object.
If parsed values are present, a CIF string is created and written to file. If no parsed values are present, the CIF string is read from file.
Validates MD5 hash of CIF file.
_values
ase
ASE object, representing the CIF.
requires ASE module.
from_md5
Return a list of all CIF files that match a given MD5 hash.
the hash has to be stored in a _md5 attribute, otherwise the CIF file will not be found.
generate_md5
Computes and returns MD5 hash of the CIF file.
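The MD5 hash underlying generate_md5, from_md5, and get_or_create deduplication can be sketched with the standard library (an illustrative sketch; AiiDA's own helper may differ in how it reads the file contents):

```python
import hashlib

def generate_md5(file_content: bytes) -> str:
    """Return the hex MD5 digest of the raw CIF file contents,
    as used to look up existing CifData nodes by hash."""
    return hashlib.md5(file_content).hexdigest()
```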
get_ase
Returns ASE object, representing the CIF. This function differs from the property ase in that it allows passing keyword arguments (kwargs) to ase.io.cif.read_cif().
get_formulae
Return chemical formulae specified in CIF file.
Note: This does not compute the formula, it only reads it from the appropriate tag. Use refine_inline to compute formulae.
get_or_create
Pass the same parameters as the __init__; if a file with the same md5 is found, that CifData is returned.
filename – an absolute filename on disk
use_first – if False (default), raise an exception if more than one CIF file is found. If it is True, instead, use the first available CIF file.
store_cif (bool) – If false, the CifData objects are not stored in the database. default=True.
where cif is the CifData object, and create is either True if the object was created, or False if the object was retrieved from the DB.
get_spacegroup_numbers
Get the spacegroup international number.
get_structure
Creates aiida.orm.nodes.data.structure.StructureData.
New in version 1.0: Renamed from _get_aiida_structure
converter – specify the converter. Default ‘pymatgen’.
store – if True, intermediate calculation gets stored in the AiiDA database for record. Default False.
primitive_cell – if True, primitive cell is returned, conventional cell if False. Default False.
occupancy_tolerance – If total occupancy of a site is between 1 and occupancy_tolerance, the occupancies will be scaled down to 1. (pymatgen only)
site_tolerance – This tolerance is used to determine if two sites are sitting in the same position, in which case they will be combined to a single disordered site. Defaults to 1e-4. (pymatgen only)
aiida.orm.nodes.data.structure.StructureData node.
has_atomic_sites
Return whether there are any atomic sites defined in the cif data. That is, check all the values for the _atom_site_fract_* tags: if they are all equal to ?, there are no relevant atomic sites defined and the function returns False; in all other cases it returns True.
True when at least one atomic site fractional coordinate is not equal to ? and False otherwise
has_attached_hydrogens
Check if there are hydrogens without coordinates, specified as attached to the atoms of the structure.
True if there are attached hydrogens, False otherwise.
has_partial_occupancies
Return whether the cif data contains partial occupancies.
A partial occupancy is defined as site with an occupancy that differs from unity, within a precision of 1E-6
True if there are partial occupancies, False otherwise
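The definition above (an occupancy differing from unity within a precision of 1E-6) reduces to a one-line check. A sketch with a hypothetical function signature operating on a flat list of occupancy values:

```python
def has_partial_occupancies(occupancies, epsilon=1e-6):
    """Return True if any site occupancy differs from unity by
    more than the stated precision of 1e-6."""
    return any(abs(occupancy - 1.0) > epsilon for occupancy in occupancies)
```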
has_undefined_atomic_sites
Return whether the cif data contains any undefined atomic sites.
An undefined atomic site is defined as a site where at least one of the fractional coordinates specified in the _atom_site_fract_* tags, cannot be successfully interpreted as a float. If the cif data contains any site that matches this description, or it does not contain any atomic site tags at all, the cif data is said to have undefined atomic sites.
boolean, True if no atomic sites are defined or if any of the defined sites contain undefined positions and False otherwise
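The float-interpretation rule above can be sketched directly. This is an illustrative stand-in, not AiiDA's implementation: the input is assumed to be a list of per-site coordinate triples as read from the _atom_site_fract_* tags:

```python
def has_undefined_atomic_sites(fract_coords):
    """Return True if any fractional coordinate cannot be interpreted
    as a float (e.g. the '?' placeholder), or if there are no atomic
    site tags at all."""
    if not fract_coords:
        return True  # no atomic site tags at all counts as undefined
    for coords in fract_coords:
        for value in coords:
            try:
                float(value)
            except (TypeError, ValueError):
                return True
    return False
```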
has_unknown_species
Returns whether the cif contains atomic species that are not recognized by AiiDA.
The known species are taken from the elements dictionary in aiida.common.constants, with the exception of the “unknown” placeholder element with symbol ‘X’, as this could not be used to construct a real structure. If any of the formulae of the cif data contain species that are not in that elements dictionary, the function returns True; in all other cases it returns False. If no formulae are found, it returns None.
True when there are unknown species in any of the formulae, False if not, None if no formula is found
parse
Parses CIF file and sets attributes.
scan_type – See set_scan_type
read_cif
A wrapper method that simulates the behavior of the old function ase.io.cif.read_cif by using the new generic ase.io.read function.
Somewhere between ASE versions 3.12 and 3.17, the tag concept was bundled with each Atom object. When reading a CIF file, this is incremented and signifies the atomic species, even though the CIF file does not have specific tags embedded. On reading CIF files we thus force the ASE tag to zero for all Atom elements.
set_ase
Set the contents of the CifData starting from an ASE atoms object
aseatoms – the ASE atoms object
set_file
Set the file.
If the source is set and the MD5 checksum of new file is different from the source, the source has to be deleted.
file – filepath or filelike object of the CIF file to store. Hint: Pass io.BytesIO(b”my string”) to construct the file directly from a string.
set_parse_policy
Set the parse policy.
parse_policy – Either ‘eager’ (parse CIF file on set_file) or ‘lazy’ (defer parsing until needed)
set_scan_type
Set the scan_type for PyCifRW.
The ‘flex’ scan_type of PyCifRW is faster for large CIF files but does not yet support the CIF2 format as of 02/2018. See the CifFile.ReadCif function
scan_type – Either ‘standard’ or ‘flex’ (see _scan_types)
set_values
Set internal representation to values.
Warning: This also writes a new CIF file.
values – PyCifRW CifFile object
requires PyCifRW module.
store
Store the node.
values
PyCifRW structure, representing the CIF datablocks.
Code
A code entity. It can be either ‘local’ or ‘remote’.
Local code: it is a collection of files/dirs (added using the add_path() method), where one file is flagged as executable (using the set_local_executable() method).
Remote code: it is a pair (remotecomputer, remotepath_of_executable) set using the set_remote_computer_exec() method.
For both codes, one can set some code to be executed right before or right after the execution of the code, using the set_preexec_code() and set_postexec_code() methods (e.g., the set_preexec_code() can be used to load specific modules required for the code to be run).
HIDDEN_KEY
_set_local
Set the code as a ‘local’ code, meaning that all the files belonging to the code will be copied to the cluster, and the file set with set_exec_filename will be run.
It also deletes the flags related to the local case (if any)
_set_remote
Set the code as a ‘remote’ code, meaning that the code itself has no files attached, but only a location on a remote computer (with an absolute path of the executable on the remote computer).
_validate
Perform validation of the Data object.
Validation of the data source checks the license and requires attribution to be provided in the ‘description’ field of the source in the case of any CC-BY* license. If this requirement is too strict, one can remove or comment it out.
can_run_on
Return True if this code can run on the given computer, False otherwise.
Local codes can run on any machine; remote codes can run only on the machine on which they reside.
TODO: add filters to mask the remote machines on which a local code can run.
full_label
Get full label of this code.
Returns label of the form <code-label>@<computer-name>.
get
Get a Code object with a given identifier string, which can be either the numeric ID (pk) or the label (and computer name, if needed for uniqueness).
pk – the numeric ID (pk) for code
label – the code label identifying the code to load
machinename – the machine name where code is setup
aiida.common.NotExistent – if no code identified by the given string is found
aiida.common.MultipleObjectsError – if the string cannot identify uniquely a code
aiida.common.InputValidationError – if neither a pk nor a label was passed in
get_append_text
Return the postexec_code, or an empty string if no post-exec code was defined.
get_builder
Create and return a new ProcessBuilder for the CalcJob class of the plugin configured for this code.
The configured calculation plugin class is defined by the get_input_plugin_name method.
it also sets the builder.code value.
a ProcessBuilder instance with the code input already populated with ourselves
aiida.common.EntryPointError – if the specified plugin does not exist.
ValueError – if no default plugin was specified.
get_code_helper
get_computer_label
Get label of this code’s computer.
get_computer_name
Deprecated since version 1.4.0: Will be removed in v2.0.0, use the self.get_computer_label() method instead.
get_description
Return a string description of this Code instance.
string description of this Code instance
get_execname
Return the executable string to be put in the script. For local codes, it is ./LOCAL_EXECUTABLE_NAME. For remote codes, it is the absolute path to the executable.
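The local/remote distinction above is a simple branch, sketched here with hypothetical parameter names (AiiDA's actual method reads these values from the node's attributes):

```python
def get_execname(is_local, local_executable=None, remote_exec_path=None):
    """Return the executable string for the submission script:
    './<name>' for local codes, the absolute remote path otherwise."""
    if is_local:
        return './{}'.format(local_executable)
    return remote_exec_path
```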
get_from_string
Get a Code object with a given identifier string in the format label@machinename. See the note below for details on the string detection algorithm.
the (leftmost) ‘@’ symbol is always used to split code and computername. Therefore do not use ‘@’ in the code name if you want to use this function (‘@’ in the computer name is instead valid).
code_string – the code string identifying the code to load
aiida.common.InputValidationError – if code_string is not of string type
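The leftmost-‘@’ rule can be illustrated with str.partition, which splits on the first occurrence (a sketch, not the actual AiiDA implementation):

```python
def split_code_string(code_string):
    """Split 'label@machinename' on the leftmost '@'.

    '@' in the code label is therefore not supported, while '@' symbols
    in the computer name remain valid.
    """
    if not isinstance(code_string, str):
        raise TypeError('code_string must be a string')
    label, sep, machinename = code_string.partition('@')
    return label, (machinename if sep else None)
```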
get_full_text_info
Return a list of lists with a human-readable detailed information on this code.
Deprecated since version 1.4.0: Will be removed in v2.0.0.
list of lists where each entry consists of two elements: a key and a value
get_input_plugin_name
Return the name of the default input plugin (or None if no input plugin was set).
get_local_executable
get_prepend_text
Return the code that will be put in the scheduler script before the execution, or an empty string if no pre-exec code was defined.
get_remote_computer
get_remote_exec_path
hidden
Determines whether the Code is hidden or not
hide
Hide the code (prevents it from showing in the verdi code list)
is_local
Return True if the code is ‘local’, False if it is ‘remote’ (see also documentation of the set_local and set_remote functions).
label
Return the node label.
the label
list_for_plugin
Return a list of valid code strings for a given plugin.
plugin – The string of the plugin.
labels – if True, return a list of code names, otherwise return the code PKs (integers).
a list of string, with the code names if labels is True, otherwise a list of integers with the code PKs.
relabel
Relabel this code.
new_label – new code label
raise_error – Set to False in order to return a list of errors instead of raising them.
Deprecated since version 1.2.0: Will remove raise_error in v2.0.0. Use try/except instead.
reveal
Reveal the code (allows it to be shown in the verdi code list). By default, a code is revealed.
set_append_text
Pass a string of code that will be put in the scheduler script after the execution of the code.
set_files
Given a list of filenames (or a single filename string), add them to the path (all at level zero, i.e. without folders). Therefore, be careful with files having the same name!
decide whether to check if the Code must be a local executable to be able to call this function.
set_input_plugin_name
Set the name of the default input plugin, to be used for the automatic generation of a new calculation.
set_local_executable
Set the filename of the local executable. Implicitly set the code as local.
set_prepend_text
Pass a string of code that will be put in the scheduler script before the execution of the code.
set_remote_computer_exec
Set the code as remote, and pass the computer on which it resides and the absolute path on that computer.
remote_computer_exec – a tuple (computer, remote_exec_path), where computer is a aiida.orm.Computer and remote_exec_path is the absolute path of the main executable on remote computer.
Bases: typing.Generic
typing.Generic
Container class that represents the collection of objects of a particular type.
_COLLECTIONS
__call__
Create a new objects collection using a new backend.
backend (aiida.orm.implementation.Backend) – the backend instance to get the collection for
aiida.orm.implementation.Backend
a new collection with the new backend
aiida.orm.Collection
__dict__
Construct a new entity collection.
entity_class (aiida.orm.Entity) – the entity type e.g. User, Computer, etc
aiida.orm.Entity
__orig_bases__
__weakref__
list of weak references to the object (if defined)
all
Get all entities in this collection
A list of all entities
list
backend
Return the backend.
the backend instance of this collection
count
Count entities in this collection according to criteria
filters (dict) – the keyword value pair filters to match
The number of entities found using the supplied criteria
int
entity_type
The entity type.
find
Find collection entries matching the filter criteria
order_by (list) – a list of (key, direction) pairs specifying the sort order
limit (int) – the maximum number of results to return
a list of resulting matches
get
Get a single collection entry that matches the filter criteria
filters (dict) – the filters identifying the object to get
the entry
get_collection
Get the collection for a given entity type and backend instance
entity_type (aiida.orm.Entity) – the entity type e.g. User, Computer, etc
query
Get a query builder for the objects of this collection
offset (int) – number of initial results to be skipped
a new query builder instance
aiida.orm.QueryBuilder
Comment
Base class to map a DbComment that represents a comment attached to a certain Node.
The collection of Comment entries.
Remove a Comment from the collection with the given id
comment_id (int) – the id of the comment to delete
TypeError – if comment_id is not an int
comment_id
NotExistent – if Comment with ID comment_id is not found
delete_all
Delete all Comments from the Collection
IntegrityError – if all Comments could not be deleted
delete_many
Delete Comments from the Collection based on filters
filters
filters (dict) – similar to QueryBuilder filter
(former) PKs of deleted Comments
PK
TypeError – if filters is not a dict
ValidationError – if filters is empty
Create a Comment for a given node and user
node (aiida.orm.Node) – a Node instance
aiida.orm.Node
content (str) – the comment content
a Comment object associated to the given node and user
aiida.orm.Comment
content
ctime
mtime
node
set_content
set_mtime
set_user
Computer
Computer entity.
The collection of Computer entries.
Delete the computer with the given id
Get a single collection entry that matches the filter criteria.
Try to retrieve a Computer from the DB with the given arguments; create (and store) a new Computer if such a Computer was not present yet.
label (str) – computer label
(computer, created) where computer is the computer (new or existing, in any case already stored) and created is a boolean saying whether the computer was created.
(aiida.orm.Computer, bool)
list_labels
Return a list with all the labels of the computers in the DB.
list_names
Return a list with all the names of the computers in the DB.
Deprecated since version 1.4.0: Will be removed in v2.0.0, use list_labels instead.
PROPERTY_MINIMUM_SCHEDULER_POLL_INTERVAL
PROPERTY_MINIMUM_SCHEDULER_POLL_INTERVAL__DEFAULT
PROPERTY_SHEBANG
Construct a new computer
Deprecated since version 1.4.0: The name keyword will be removed in v2.0.0, use label instead.
__repr__
Return repr(self).
_append_text_validator
Validates the append text string.
_default_mpiprocs_per_machine_validator
Validates the default number of CPUs per machine (node)
_description_validator
Validates the description.
_hostname_validator
Validates the hostname.
_mpirun_command_validator
Validates the mpirun_command variable. MUST be called after properly checking for a valid scheduler.
_name_validator
Validates the name.
_prepend_text_validator
Validates the prepend text string.
_scheduler_type_validator
Validates the scheduler type string.
_transport_type_validator
Validates the transport type string.
_workdir_validator
configure
Configure a computer for a user with valid auth params passed via kwargs
user – the user to configure the computer for
the configuration keywords with corresponding values
the authinfo object for the configured user
aiida.orm.AuthInfo
copy
Return a copy of the current object to work with, not stored yet.
delete_property
Delete a property from this computer
name (str) – the name of the property
raise_exception (bool) – if True raise if the property does not exist, otherwise return None
description
Return the computer description.
the description.
full_text_info
Return a (multiline) string with a human-readable detailed information on this computer.
Return the aiida.orm.authinfo.AuthInfo instance for the given user on this computer, if the computer is configured for the given user.
user – a User instance.
a AuthInfo instance
aiida.common.NotExistent – if the computer is not configured for the given user.
get_configuration
Get the configuration of computer for the given user as a dictionary
user (aiida.orm.User) – the user to get the configuration for. Uses default user if None
get_default_mpiprocs_per_machine
Return the default number of CPUs per machine (node) for this computer, or None if it was not set.
Get the description for this computer
Deprecated since version 1.4.0: Will be removed in v2.0.0, use the description property instead.
the description
get_hostname
Get this computer hostname
Deprecated since version 1.4.0: Will be removed in v2.0.0, use the hostname property instead.
Deprecated since version 1.4.0: Will be removed in v2.0.0, use the metadata property instead.
get_minimum_job_poll_interval
Get the minimum interval between subsequent requests to update the list of jobs currently running on this computer.
The minimum interval (in seconds)
float
get_mpirun_command
Return the mpirun command. Must be a list of strings, that will be then joined with spaces when submitting.
A sensible default is also provided, which may be appropriate in many cases.
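How such a command list could be turned into the final script line, with a template field substituted at submission time (a sketch; the field name is illustrative):

```python
def render_mpirun(command, tot_num_mpiprocs):
    """Join the stored mpirun command list with spaces and substitute
    the number of MPI processes into the template field."""
    return ' '.join(command).format(tot_num_mpiprocs=tot_num_mpiprocs)


# Example of a stored mpirun command list with a template placeholder:
mpirun_command = ['mpirun', '-np', '{tot_num_mpiprocs}']
```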
get_name
Return the computer name.
Deprecated since version 1.4.0: Will be removed in v2.0.0, use the label property instead.
get_property
Get a property of this computer
name (str) – the property name
args – additional arguments
the property value
get_scheduler
Get a scheduler instance for this computer
the scheduler instance
aiida.schedulers.Scheduler
get_scheduler_type
Get the scheduler type for this computer
Deprecated since version 1.4.0: Will be removed in v2.0.0, use the scheduler_type property instead.
the scheduler type
get_schema
display_name: display name of the property
help text: short help text of the property
is_foreign_key: is the property foreign key to other type of the node
type: type of the property. e.g. str, dict, int
get schema of the computer
Deprecated since version 1.0.0: Will be removed in v2.0.0. Use get_projectable_properties() instead.
get_projectable_properties()
get_shebang
get_transport
Return a Transport class, configured with all correct parameters. The Transport is closed, meaning that before running any operation with it you have to open it first (e.g. for an SSH transport, you have to open the connection). To do this you can call transport.open(), or simply run within a with statement:
transport.open()
with
transport = computer.get_transport()
with transport:
    print(transport.whoami())
user – if None, try to obtain a transport for the default user. Otherwise, pass a valid User.
a (closed) Transport, already configured with the connection parameters to the supercomputer, as configured with verdi computer configure for the user specified as a parameter user.
verdi computer configure
get_transport_class
Get the transport class for this computer. Can be used to instantiate a transport instance.
the transport class
get_transport_type
Get the current transport type for this computer
Deprecated since version 1.4.0: Will be removed in v2.0.0, use the transport_type property instead.
the transport type
get_workdir
Get the working directory for this computer.
The currently configured working directory
str
hostname
Return the computer hostname.
the hostname.
is_user_configured
Is the user configured on this computer?
user – the user to check
True if configured, False otherwise
is_user_enabled
Is the given user enabled to run on this computer?
Return the computer label.
the label.
logger
metadata
Return the computer metadata.
the metadata.
name
scheduler_type
Return the computer scheduler type.
the scheduler type.
set_default_mpiprocs_per_machine
Set the default number of CPUs per machine (node) for this computer. Accepts None if you do not want to set this value.
set_description
Set the description for this computer
val (str) – the new description
set_hostname
Set the hostname of this computer
val (str) – The new hostname
Set the metadata.
set_minimum_job_poll_interval
Set the minimum interval between subsequent requests to update the list of jobs currently running on this computer.
interval (float) – The minimum interval in seconds
set_mpirun_command
Set the mpirun command. It must be a list of strings (you can use string.split() if you have a single, space-separated string).
set_name
Set the computer name.
set_property
Set a property on this computer
name – the property name
value – the new value
set_scheduler_type
scheduler_type – the new scheduler type
set_shebang
val (str) – A valid shebang line
set_transport_type
Set the transport type for this computer
transport_type (str) – the new transport type
set_workdir
Store the computer in the DB.
Unlike Nodes, a computer can be re-stored if its properties are to be changed (e.g. a new mpirun command, etc.)
transport_type
Return the computer transport type.
the transport_type.
validate
Check if the attributes and files retrieved from the DB are valid. Raise a ValidationError if something is wrong.
Must be able to work even before storing: therefore, use the get_attr and similar methods that automatically read either from the DB or from the internal attribute cache.
For the base class, this is always valid. Subclasses will reimplement this. In the subclass, always call the super().validate() method first!
Data
Bases: aiida.orm.nodes.node.Node
aiida.orm.nodes.node.Node
The base class for all Data nodes.
AiiDA Data classes are subclasses of Node and must support multiple inheritance.
Architecture note: Calculation plugins are responsible for converting raw output data from simulation codes to Data nodes. Data nodes are responsible for validating their content (see _validate method).
__copy__
Copying a Data node is not supported, use copy.deepcopy or call Data.clone().
__deepcopy__
Create a clone of the Data node by piping through to the clone method and return the result.
an unstored clone of this Data node
_export_format_replacements
_exportcontent
Converts a Data node to one (or multiple) files.
Note: Export plugins should return utf8-encoded bytes, which can be directly dumped to file.
fileformat (str) – the extension, uniquely specifying the file format.
main_file_name (str) – (empty by default) Can be used by plugin to infer sensible names for additional files, if necessary. E.g. if the main file is ‘../myplot.gnu’, the plugin may decide to store the dat file under ‘../myplot_data.dat’.
kwargs – other parameters are passed down to the plugin
a tuple of length 2. The first element is the content of the output file. The second is a dictionary (possibly empty) in the format {filename: filecontent} for any additional file that should be produced.
(bytes, dict)
_get_converters
Get all implemented converter formats. The convention is to find all _get_object_… methods. Returns a list of strings.
_get_exporters
Get all implemented export formats. The convention is to find all _prepare_… methods. Returns a dictionary of method_name: method_function
_get_importers
Get all implemented import formats. The convention is to find all _parse_… methods. Returns a list of strings.
_source_attributes
clone
Create a clone of the Data node.
convert
Convert the AiiDA Data node into another python object
object_format – Specify the output format
creator
Return the creator of this node or None if it does not exist.
the creating node or None
export
Save a Data object to a file.
fname – string with file name. Can be an absolute or relative path.
fileformat – kind of format to use for the export. If not present, it will try to use the extension of the file name.
overwrite – if set to True, overwrites file found at path. Default=False
kwargs – additional parameters to be passed to the _exportcontent method
the list of files created
get_export_formats
Get the list of valid export format strings
a list of valid formats
importfile
Populate a Data object from a file.
importstring
Populate a Data object from a string.
fileformat – a string (the extension) to describe the file format.
set_source
Sets the dictionary describing the source of Data object.
source
Gets the dictionary describing the source of Data object. Possible fields:
db_name: name of the source database.
db_uri: URI of the source database.
uri: URI of the object’s source. Should be a permanent link.
id: object’s source identifier in the source database.
version: version of the object’s source.
extras: a dictionary with other fields for source description.
source_md5: MD5 checksum of object’s source.
description: human-readable free form description of the object’s source.
license: a string with a type of license.
some limitations for setting the data source exist, see _validate method.
dictionary describing the source of Data object.
Dict
Data sub class to represent a dictionary.
The dictionary contents of a Dict node are stored in the database as attributes. The dictionary can be initialized through the dict argument in the constructor. After construction, values can be retrieved and updated through the item getters and setters, respectively:
node[‘key’] = ‘value’
Alternatively, the dict property returns an instance of the AttributeManager that can be used to get and set values through attribute notation:
node.dict.key = ‘value’
Note that trying to set dictionary values directly on the node, e.g. node.key = value, will not work as intended. It will merely set the key attribute on the node instance, but will not be stored in the database. As soon as the node goes out of scope, the value will be lost.
It is also relevant to note here the difference in something being an “attribute of a node” (in the sense that it is stored in the “attribute” column of the database when the node is stored) and something being an “attribute of a python object” (in the sense of being able to modify and access it as if it was a property of the variable, e.g. node.key = value). This is true of all types of nodes, but it becomes more relevant for Dict nodes where one is constantly manipulating these attributes.
Finally, all dictionary mutations will be forbidden once the node is stored.
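The item-access and attribute-notation behaviour described above can be sketched with a minimal stand-in (hypothetical classes, not the AiiDA implementation; the real Dict persists its contents as database attributes):

```python
class AttrManager:
    """Minimal stand-in for the attribute-notation view over a dict."""

    def __init__(self, storage):
        # Bypass __setattr__ so the storage reference is a normal attribute.
        object.__setattr__(self, '_storage', storage)

    def __getattr__(self, name):
        try:
            return self._storage[name]
        except KeyError:
            raise AttributeError(name) from None

    def __setattr__(self, name, value):
        # Attribute assignment writes into the shared storage, not the instance.
        self._storage[name] = value


class MiniDict:
    """Toy Dict-like node: item access and the .dict view share one storage."""

    def __init__(self, value=None):
        self._data = dict(value or {})

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

    @property
    def dict(self):
        return AttrManager(self._data)
```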
__getitem__
Store a dictionary as a Node instance.
Usual rules for attribute names apply, in particular, keys cannot start with an underscore, or a ValueError will be raised.
Initial attributes can be changed, deleted or added as long as the node is not stored.
dict – the dictionary to set
__setitem__
dict
Return an instance of AttributeManager that transforms the dictionary into an attribute dict.
this will allow one to do node.dict.key as well as node.dict[key].
an instance of the AttributeManager.
get_dict
Return a dictionary with the parameters currently set.
dictionary
keys
Iterator of valid keys stored in the Dict object.
iterator over the keys of the current dictionary
set_dict
Replace the current dictionary with another one.
dictionary – dictionary to set
update_dict
Update the current dictionary with the keys provided in the dictionary.
works exactly as dict.update() where new keys are simply added and existing keys are overwritten.
dictionary – a dictionary with the keys to substitute
Entity
Bases: object
object
An AiiDA entity
_objects
backend
Get the backend for this entity.
the backend instance
backend_entity
Get the implementing class for this object
the class model
from_backend_entity
Construct an entity from a backend entity instance
backend_entity – the backend entity
an AiiDA entity instance
id
Return the id for this entity.
This identifier is guaranteed to be unique amongst entities of the same type for a single backend instance.
the entity’s id
init_from_backend
is_stored
Return whether the entity is stored.
boolean, True if stored, False otherwise
objects
pk
Return the primary key for this entity.
the entity’s principal key
Store the entity.
uuid
Return the UUID for this entity.
This identifier is unique across all entities types and backend instances.
the entity uuid
uuid.UUID
EntityAttributesMixin
Bases: abc.ABC
abc.ABC
Mixin class that adds all methods for the attributes column to an entity.
attributes
Return the complete attributes dictionary.
Warning
While the entity is unstored, this will return references of the attributes on the database model, meaning that changes on the returned values (if they are mutable themselves, e.g. a list or dictionary) will automatically be reflected on the database model as well. As soon as the entity is stored, the returned attributes will be a deep copy and mutations of the database attributes will have to go through the appropriate set methods. Therefore, once stored, retrieving a deep copy can be a heavy operation. If you only need the keys or some values, use the iterators attributes_keys and attributes_items, or the getters get_attribute and get_attribute_many instead.
the attributes as a dictionary
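The unstored-reference versus stored-deep-copy behaviour can be sketched as follows (a toy model; real entities delegate to the database backend):

```python
import copy


class MiniAttributes:
    """Sketch of the stored/unstored semantics described in the warning."""

    def __init__(self):
        self._attributes = {'tags': []}
        self._stored = False

    def store(self):
        self._stored = True

    @property
    def attributes(self):
        # Unstored: hand out the live dict; stored: hand out a deep copy.
        if not self._stored:
            return self._attributes
        return copy.deepcopy(self._attributes)


node = MiniAttributes()
node.attributes['tags'].append('live')     # mutates the underlying model
node.store()
node.attributes['tags'].append('dropped')  # mutates only a deep copy
```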
attributes_items
Return an iterator over the attributes.
an iterator with attribute key value pairs
attributes_keys
Return an iterator over the attribute keys.
an iterator with attribute keys
clear_attributes
Delete all attributes.
delete_attribute
Delete an attribute.
key – name of the attribute
AttributeError – if the attribute does not exist
aiida.common.ModificationNotAllowed – if the entity is stored
delete_attribute_many
Delete multiple attributes.
keys – names of the attributes to delete
AttributeError – if at least one of the attribute does not exist
get_attribute
Return the value of an attribute.
While the entity is unstored, this will return a reference of the attribute on the database model, meaning that changes on the returned value (if they are mutable themselves, e.g. a list or dictionary) will automatically be reflected on the database model as well. As soon as the entity is stored, the returned attribute will be a deep copy and mutations of the database attributes will have to go through the appropriate set methods.
default – return this value instead of raising if the attribute does not exist
the value of the attribute
AttributeError – if the attribute does not exist and no default is specified
get_attribute_many
Return the values of multiple attributes.
keys – a list of attribute names
a list of attribute values
AttributeError – if at least one attribute does not exist
reset_attributes
Reset the attributes.
This will completely clear any existing attributes and replace them with the new dictionary.
attributes – a dictionary with the attributes to set
aiida.common.ValidationError – if any of the keys are invalid, i.e. contain periods
set_attribute
Set an attribute to the given value.
value – value of the attribute
aiida.common.ValidationError – if the key is invalid, i.e. contains periods
set_attribute_many
Set multiple attributes.
This will override any existing attributes that are present in the new dictionary.
EntityExtrasMixin
Mixin class that adds all methods for the extras column to an entity.
clear_extras
Delete all extras.
delete_extra
Delete an extra.
key – name of the extra
AttributeError – if the extra does not exist
delete_extra_many
Delete multiple extras.
keys – names of the extras to delete
AttributeError – if at least one of the extra does not exist
extras
Return the complete extras dictionary.
While the entity is unstored, this will return references of the extras on the database model, meaning that changes on the returned values (if they are mutable themselves, e.g. a list or dictionary) will automatically be reflected on the database model as well. As soon as the entity is stored, the returned extras will be a deep copy and mutations of the database extras will have to go through the appropriate set methods. Therefore, once stored, retrieving a deep copy can be a heavy operation. If you only need the keys or some values, use the iterators extras_keys and extras_items, or the getters get_extra and get_extra_many instead.
the extras as a dictionary
extras_items
Return an iterator over the extras.
an iterator with extra key value pairs
extras_keys
Return an iterator over the extra keys.
an iterator with extra keys
get_extra
Return the value of an extra.
While the entity is unstored, this will return a reference of the extra on the database model, meaning that changes on the returned value (if they are mutable themselves, e.g. a list or dictionary) will automatically be reflected on the database model as well. As soon as the entity is stored, the returned extra will be a deep copy and mutations of the database extras will have to go through the appropriate set methods.
the value of the extra
AttributeError – if the extra does not exist and no default is specified
get_extra_many
Return the values of multiple extras.
keys – a list of extra names
a list of extra values
AttributeError – if at least one extra does not exist
reset_extras
Reset the extras.
This will completely clear any existing extras and replace them with the new dictionary.
extras – a dictionary with the extras to set
set_extra
Set an extra to the given value.
value – value of the extra
set_extra_many
Set multiple extras.
This will override any existing extras that are present in the new dictionary.
Float
Bases: aiida.orm.nodes.data.numeric.NumericType
aiida.orm.nodes.data.numeric.NumericType
Data sub class to represent a float value.
alias of builtins.float
builtins.float
FolderData
Data sub class to represent a folder on a file system.
Construct a new FolderData to which any files and folders can be added.
Use the tree keyword to simply wrap a directory:
folder = FolderData(tree='/absolute/path/to/directory')
Alternatively, one can construct the node first and then use the various repository methods to add objects:
folder = FolderData()
folder.put_object_from_tree('/absolute/path/to/directory')
folder.put_object_from_filepath('/absolute/path/to/file.txt')
folder.put_object_from_filelike(filelike_object)
tree (str) – absolute path to a folder to wrap
Group
Bases: aiida.orm.entities.Entity, aiida.orm.entities.EntityExtrasMixin
aiida.orm.entities.EntityExtrasMixin
An AiiDA ORM implementation of group of nodes.
Collection of Groups
Delete a group
id – the id of the group to delete
Try to retrieve a group from the DB with the given arguments; create (and store) a new group if such a group was not present yet.
label (str) – group label
(group, created) where group is the group (new or existing, in any case already stored) and created is a boolean saying whether the group was created.
(aiida.orm.Group, bool)
aiida.orm.Group
Create a new group. Either pass a dbgroup parameter, to reload a group from the DB (and then, no further parameters are allowed), or pass the parameters for the Group creation.
Deprecated since version 1.2.0: The parameter type_string will be removed in v2.0.0 and is now determined automatically.
label (str) – The group label, required on creation
description (str) – The group description (by default, an empty string)
user (aiida.orm.User) – The owner of the group (by default, the automatic user)
type_string (str) – a string identifying the type of group (by default, an empty string, indicating a user-defined group).
add_nodes
Add a node or a set of nodes to the group.
all the nodes and the group itself have to be stored.
nodes (aiida.orm.Node or list) – a single Node or a list of Nodes
clear
Remove all the nodes from this group.
Return the number of entities in this group.
integer number of entities contained within the group
the description of the group as a string
Custom get for group which can be used to get a group with the given attributes
kwargs – the attributes to match the group to
the group
schema of the group
is_empty
Return whether the group is empty, i.e. it does not contain any nodes.
True if it contains no nodes, False otherwise
is_user_defined
True if the group is user defined, False otherwise
the label of the group as a string
nodes
Return a generator/iterator that iterates over all nodes and returns the respective AiiDA subclasses of Node; it also allows one to ask for the number of nodes in the group using len().
aiida.orm.convert.ConvertIterator
remove_nodes
Remove a node or a set of nodes from the group.
Verify that the group is allowed to be stored, which is the case as long as type_string is set.
type_string
the string defining the type of the group
the user associated with this group
a string with the uuid
GroupTypeString
Bases: enum.Enum
enum.Enum
A simple enum of allowed group type strings.
Deprecated since version 1.2.0: This enum is deprecated and will be removed in v2.0.0.
IMPORTGROUP_TYPE
UPFGROUP_TYPE
USER
VERDIAUTOGROUP_TYPE
ImportGroup
Group to be used to contain all nodes from an export archive that has been imported.
Int
Data sub class to represent an integer value.
alias of builtins.int
builtins.int
KpointsData
Bases: aiida.orm.nodes.data.array.array.ArrayData
aiida.orm.nodes.data.array.array.ArrayData
Class to handle arrays of kpoints in the Brillouin zone. Provides methods to generate either user-defined k-points or paths of k-points along symmetry lines. Internally, all k-points are defined in terms of crystal (fractional) coordinates. Cell and lattice vector coordinates are in Angstroms, reciprocal lattice vectors in Angstrom^-1. Note: the methods setting and using the Bravais lattice info assume the PRIMITIVE unit cell is provided in input to the set_cell or set_cell_from_structure methods.
_change_reference
Change reference system, from cartesian to crystal coordinates (units of b1,b2,b3) or vice versa.
kpoints – a list of (3) point coordinates
a list of (3) point coordinates in the new reference
_dimension
Dimensionality of the structure, found from its pbc (i.e. 1 if it’s a 1D structure, 2 if it’s 2D, 3 if it’s 3D, …).
0, 1, 2 or 3
Note: will return 3 if pbc has not been set beforehand.
_set_cell
Validate if value is an allowed crystal unit cell.
value – something compatible with a 3x3 tuple of floats
_set_labels
Set label names. Must pass as input a list like: [[0,'X'],[34,'L'],... ]
[[0,'X'],[34,'L'],... ]
_validate_kpoints_weights
Validate the list of kpoints and of weights before storage. Kpoints and weights must be convertible respectively to an array of N x dimension and N floats
cell
The crystal unit cell. Rows are the crystal vectors in Angstroms.
a 3x3 numpy.array
get_description
Return a string with information retrieved from the kpoints node’s properties.
get_kpoints
Return the list of kpoints
also_weights – if True, returns also the list of weights. Default = False
cartesian – if True, returns points in cartesian coordinates, otherwise, returns in crystal coordinates. Default = False.
get_kpoints_mesh
Get the mesh of kpoints.
print_list – default=False. If True, prints the mesh of kpoints as a list
AttributeError – if no mesh has been set
(if print_list=False) a list of 3 integers and a list of three floats 0<x<1, representing the mesh and the offset of kpoints
(if print_list = True) an explicit list of kpoints coordinates, similar to what returned by get_kpoints()
Labels associated with the list of kpoints. List of tuples with kpoint index and kpoint name: [(0,'G'),(13,'M'),...]
[(0,'G'),(13,'M'),...]
pbc
The periodic boundary conditions along the vectors a1,a2,a3.
a tuple of three booleans, each one tells if there are periodic boundary conditions for the i-th real-space direction (i=1,2,3)
reciprocal_cell
Compute reciprocal cell from the internally set cell.
reciprocal cell in units of 1/Angstrom with cell vectors stored as rows. Use e.g. reciprocal_cell[0] to access the first reciprocal cell vector.
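The reciprocal cell can be computed with numpy from the 3x3 cell matrix (rows are lattice vectors in Angstroms), using the convention dot(a_i, b_j) = 2*pi*delta_ij; a sketch of the computation described above:

```python
import numpy as np


def reciprocal_cell(cell):
    """Reciprocal cell with vectors stored as rows, in 1/Angstrom.

    Uses b = 2*pi * (A^-1)^T so that dot(a_i, b_j) = 2*pi*delta_ij,
    where the rows of A are the real-space lattice vectors.
    """
    cell = np.asarray(cell, dtype=float)
    return 2 * np.pi * np.linalg.inv(cell).T
```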
set_cell
Set a cell to be used for symmetry analysis. To set a cell from an AiiDA structure, use “set_cell_from_structure”.
cell – 3x3 matrix of cell vectors. Orientation: each row represent a lattice vector. Units are Angstroms.
pbc – list of 3 booleans, True if in the nth crystal direction the structure is periodic. Default = [True,True,True]
set_cell_from_structure
Set a cell to be used for symmetry analysis from an AiiDA structure. Inherits both the cell and the pbc’s. To set manually a cell, use “set_cell”
structuredata – an instance of StructureData
set_kpoints
Set the list of kpoints. If a mesh has already been stored, raise a ModificationNotAllowed
kpoints –
a list of kpoints, each kpoint being a list of one, two or three coordinates depending on self.pbc: if the structure is 1D (only one True in self.pbc) singletons or scalars are allowed for each kpoint; if it is 2D a length-2 list is allowed; and in all cases a length-3 list is allowed. Examples:
[[0.,0.,0.],[0.1,0.1,0.1],…] for 1D, 2D or 3D
[[0.,0.],[0.1,0.1,],…] for 1D or 2D
[[0.],[0.1],…] for 1D
[0., 0.1, …] for 1D (list of scalars)
For 0D (all pbc are False), the list can be any of the above or empty - then only Gamma point is set. The value of k for the non-periodic dimension(s) is set by fill_values
cartesian – if True, the coordinates given in input are treated as cartesian. If False, the coordinates are crystal, i.e. in units of b1,b2,b3. Default = False
labels – optional, the list of labels to be set for some of the kpoints. See labels for more info
weights – optional, a list of floats with the weight associated to the kpoint list
fill_values – scalar to be set to all non-periodic dimensions (indicated by False in self.pbc), or list of values for each of the non-periodic dimensions.
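The fill_values behaviour described above can be sketched as follows; this is a hypothetical helper, not part of the API, that only illustrates how reduced-dimension kpoints map onto the pbc flags:

```python
def expand_kpoints(kpoints, pbc, fill_value=0.0):
    """Insert fill_value into the non-periodic components of each kpoint."""
    expanded = []
    for kpt in kpoints:
        # allow scalars for the 1D case (list of scalars)
        kpt = list(kpt) if hasattr(kpt, '__len__') else [kpt]
        it = iter(kpt)
        expanded.append([next(it) if periodic else fill_value for periodic in pbc])
    return expanded

# 2D structure, periodic along the first two directions:
print(expand_kpoints([[0.0, 0.0], [0.1, 0.1]], pbc=[True, True, False]))
# -> [[0.0, 0.0, 0.0], [0.1, 0.1, 0.0]]
```
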
set_kpoints_mesh
Set KpointsData to represent a uniformly spaced mesh of kpoints in the Brillouin zone. This excludes the possibility of using set_kpoints/get_kpoints
mesh – a list of three integers, representing the size of the kpoint mesh along b1,b2,b3.
offset – (optional) a list of three floats between 0 and 1. [0.,0.,0.] is a Gamma-centered mesh; [0.5,0.5,0.5] is half-shifted; [1.,1.,1.] is, by periodicity, equivalent to [0.,0.,0.]. Default = [0.,0.,0.].
set_kpoints_mesh_from_density
Set a kpoints mesh using a kpoints density, expressed as the maximum distance between adjacent points along a reciprocal axis
distance – distance (in 1/Angstrom) between adjacent kpoints, i.e. the number of kpoints along each reciprocal axis i is \(|b_i|/distance\) where \(|b_i|\) is the norm of the reciprocal cell vector.
offset – (optional) a list of three floats between 0 and 1. [0.,0.,0.] is a Gamma-centered mesh; [0.5,0.5,0.5] is half-shifted. Default = [0.,0.,0.].
force_parity – (optional) if True, force each integer in the mesh to be even (except for the non-periodic directions).
a cell should be defined first.
the number of kpoints along non-periodic axes is always 1.
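The density-to-mesh logic described above can be sketched as follows; this is an assumed reconstruction from the parameter descriptions, and the actual implementation may round differently:

```python
import math

def mesh_from_density(reciprocal_norms, pbc, distance, force_parity=False):
    """Pick mesh sizes so adjacent kpoints are at most `distance` apart along each periodic axis."""
    mesh = []
    for norm, periodic in zip(reciprocal_norms, pbc):
        if not periodic:
            mesh.append(1)  # non-periodic axes always get a single kpoint
            continue
        n = max(1, math.ceil(norm / distance))
        if force_parity and n % 2:
            n += 1  # round odd counts up to the next even integer
        mesh.append(n)
    return mesh

# reciprocal cell vector norms in 1/Angstrom, fully periodic structure:
print(mesh_from_density([1.57, 1.26, 1.05], [True, True, True], distance=0.5))
```
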
List
Bases: aiida.orm.nodes.data.data.Data, collections.abc.MutableSequence
collections.abc.MutableSequence
Data sub class to represent a list.
_LIST_KEY
__delitem__
__len__
_using_list_reference
This function tells the class whether we are using a list reference. This means that calls to self.get_list return a reference rather than a copy of the underlying list, and therefore self.set_list need not be called. This knowledge is essential to keep this class performant.
Currently the implementation assumes that if the node needs to be stored then it is using the attributes cache which is a reference.
True if using self.get_list returns a reference to the underlying sequence. False otherwise.
append
S.append(value) – append value to the end of the sequence
Return number of occurrences of value.
extend
S.extend(iterable) – extend sequence by appending elements from the iterable
get_list
Return the contents of this node.
a list
index
Return first index of value.
insert
S.insert(index, value) – insert value before index
pop
Remove and return item at index (default last).
remove
S.remove(value) – remove first occurrence of value. Raise ValueError if the value is not present.
reverse
S.reverse() – reverse IN PLACE
set_list
Set the contents of this node.
data – the list to set
sort
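Because List derives from collections.abc.MutableSequence, most of the methods listed above (append, extend, pop, remove, index, count, reverse) are mixin methods that come for free once a handful of abstract methods are implemented. A minimal, self-contained sketch of that pattern (not AiiDA's actual class, which stores the data as a node attribute):

```python
from collections.abc import MutableSequence

class MiniList(MutableSequence):
    """Wrap a plain list; implement only the five abstract methods."""
    def __init__(self, data=None):
        self._data = list(data or [])
    def __getitem__(self, index):
        return self._data[index]
    def __setitem__(self, index, value):
        self._data[index] = value
    def __delitem__(self, index):
        del self._data[index]
    def __len__(self):
        return len(self._data)
    def insert(self, index, value):
        self._data.insert(index, value)

lst = MiniList([3, 1])
lst.append(2)  # provided by the MutableSequence mixin
lst.remove(1)  # likewise
print(list(lst))
```
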
Log
An AiiDA Log entity. Corresponds to a logged message against a particular AiiDA node.
This class represents the collection of logs and can be used to create and retrieve logs.
create_entry_from_record
Helper function to create a log entry from a record created by the Python logging library
record (logging.LogRecord) – The record created by the logging module
logging.LogRecord
An object implementing the log entry interface
aiida.orm.logs.Log
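A hedged sketch of the record-to-entry mapping, using only the standard logging library; the field names follow the Log constructor parameters documented here, while the real create_entry_from_record also attaches the entry to a database node and backend:

```python
import logging
from datetime import datetime

def fields_from_record(record):
    """Extract Log-entry-like fields from a logging.LogRecord (illustrative helper)."""
    return {
        'time': datetime.fromtimestamp(record.created),
        'loggername': record.name,
        'levelname': record.levelname,
        'message': record.getMessage(),  # applies the % formatting args
    }

record = logging.LogRecord(
    name='aiida.orm', level=logging.WARNING, pathname='example.py',
    lineno=1, msg='disk quota at %d%%', args=(90,), exc_info=None)
print(fields_from_record(record))
```
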
Remove a Log entry from the collection with the given id
log_id (int) – id of the Log to delete
TypeError – if log_id is not an int
log_id
NotExistent – if Log with ID log_id is not found
Delete all Logs in the collection
IntegrityError – if all Logs could not be deleted
Delete Logs based on filters
the (former) PKs of the deleted Logs
get_logs_for
Get all the log messages for a given entity and optionally sort
entity (aiida.orm.Entity) – the entity to get logs for
the list of log entries
Construct a new log
time (datetime.datetime) – time
datetime.datetime
loggername (str) – name of logger
levelname (str) – name of log level
dbnode_id (int) – id of database node
message (str) – log message
metadata (dict) – metadata
backend (aiida.orm.implementation.Backend) – database backend
dbnode_id
Get the id of the object that created the log entry
The id of the object that created the log entry
levelname
The name of the log level
The entry log level name
loggername
The name of the logger that created this entry
The entry loggername
message
Get the message corresponding to the entry
The entry message
Get the metadata corresponding to the entry
The entry metadata
time
Get the time corresponding to the entry
The entry timestamp
Node
Bases: aiida.orm.entities.Entity, aiida.orm.entities.EntityAttributesMixin, aiida.orm.entities.EntityExtrasMixin
aiida.orm.entities.EntityAttributesMixin
Base class for all nodes in AiiDA.
Stores attributes starting with an underscore.
Caches files and attributes before the first save, and saves everything only on store(). After the call to store(), attributes cannot be changed.
Extras can be modified only after storing (or upon loading from UUID), in which case they are set directly on the database.
In the plugin, also set the _plugin_type_string, to be set in the DB in the ‘type’ field.
The collection of nodes.
Delete a Node from the collection with the given id
node_id – the node id
__annotations__
Copying a Node is not supported in general, but only for the Data sub class.
Deep copying a Node is not supported in general, but only for the Data sub class.
Python-Hash: Implementation that is compatible with __eq__
_add_incoming_cache
Add an incoming link to the cache.
source – the node from which the link is coming
aiida.common.UniquenessError – if the given link triple already exists in the cache
_add_outputs_from_cache
Replicate the output links and nodes from the cached node onto this node.
_get_hash
Return the hash for this node based on its attributes.
This will always work, even before storing.
ignore_errors – return None on aiida.common.exceptions.HashingError (logging the exception)
None
aiida.common.exceptions.HashingError
_get_same_node
Returns a stored node from which the current Node can be cached or None if it does not exist
If a node is returned it is a valid cache, meaning its _aiida_hash extra matches self.get_hash(). If there are multiple valid matches, the first one is returned. If no matches are found, None is returned.
a stored Node instance with the same hash as this node, or None
Note: this should be only called on stored nodes, or internally from .store() since it first calls clean_value() on the attributes to normalise them.
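The hash-based cache matching described above can be illustrated with a simple sketch; this is not AiiDA's actual hashing algorithm, only the core idea that a deterministic serialization of the cleaned attributes yields comparable hashes:

```python
import hashlib
import json

def attribute_hash(attributes):
    """Hash a deterministic serialization of the attributes dict."""
    payload = json.dumps(attributes, sort_keys=True, separators=(',', ':'))
    return hashlib.sha256(payload.encode('utf-8')).hexdigest()

a = attribute_hash({'energy': -1.5, 'units': 'eV'})
b = attribute_hash({'units': 'eV', 'energy': -1.5})  # key order is irrelevant
print(a == b)
```

Two nodes whose hashes agree are candidate cache matches; the first valid match found is used.
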
_incoming_cache
_iter_all_same_nodes
Returns an iterator of all same nodes.
_repository
_store
Store the node in the database while saving its attributes and repository directory.
with_transaction – if False, do not use a transaction because the caller will already have opened one.
clean – boolean, if True, will clean the attributes and extras before attempting to store
_store_from_cache
Store this node from an existing cache node.
Check if the attributes and files retrieved from the database are valid.
For the base class, this is always valid. Subclasses will reimplement this. In the subclass, always call the super()._validate() method first!
add_comment
Add a new comment.
content – string with comment
user – the user to associate with the comment, will use default if not supplied
the newly created comment
add_incoming
Add a link of the given type from a given node to ourself.
TypeError – if source is not a Node instance or link_type is not a LinkType enum
class_node_type
clear_hash
Sets the stored hash of the Node to None.
Return the computer of this node.
the computer or None
Computer or None
Return the node ctime.
the ctime
delete_object
Delete the object from the repository.
If the repository belongs to a stored node, a ModificationNotAllowed exception will be raised. This check can be avoided by using the force flag, but this should be used with extreme caution!
Deprecated since version 1.4.0: Keyword key is deprecated and will be removed in v2.0.0. Use path instead.
Deprecated since version 1.4.0: Keyword force is deprecated and will be removed in v2.0.0.
key – fully qualified identifier for the object within the repository
force – boolean, if True, will skip the mutability check
aiida.common.ModificationNotAllowed – if repository is immutable and force=False
Return the node description.
get_all_same_nodes
Return a list of stored nodes which match the type and hash of the current node.
All returned nodes are valid caches, meaning their _aiida_hash extra matches self.get_hash().
Note: this can be called only after storing a Node (since at store time attributes will be cleaned with clean_value and the hash should become idempotent to the action of serialization/deserialization)
get_cache_source
Return the UUID of the node that was used in creating this node from the cache, or None if it was not cached.
source node UUID or None
get_comment
Return a comment corresponding to the given identifier.
identifier – the comment pk
aiida.common.NotExistent – if the comment with the given id does not exist
aiida.common.MultipleObjectsError – if the id cannot be uniquely resolved to a comment
the comment
get_comments
Return a sorted list of comments for this node.
the list of comments, sorted by pk
Return a string with a description of the node.
a description string
get_hash
get_incoming
Return a list of link triples that are (directly) incoming into this node.
node_class – If specified, should be a class or tuple of classes, and it filters only elements of that specific type (or a subclass of ‘type’)
link_type – If specified should be a string or tuple to get the inputs of this link type, if None then returns all inputs of all link types.
link_label_filter – filters the incoming nodes by their link label. Wildcards (% and _) can be passed in the link label filter, as we are using “like” in the QueryBuilder.
only_uuid – project only the node UUID instead of the instance onto the NodeTriple.node entries
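The “like” semantics of link_label_filter can be illustrated in plain Python: % matches any run of characters and _ matches a single character. This is illustrative only; the real filtering happens in the database query:

```python
import re

def like_match(pattern, label):
    """Match `label` against an SQL-like pattern (% = any run, _ = one char)."""
    regex = re.escape(pattern).replace('%', '.*').replace('_', '.')
    return re.fullmatch(regex, label) is not None

labels = ['output_structure', 'output_parameters', 'remote_folder']
print([label for label in labels if like_match('output_%', label)])
# -> ['output_structure', 'output_parameters']
```
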
get_object
Return the object with the given path.
path – the relative path of the object within the repository.
a File named tuple
get_object_content
Return the content of a object with the given path.
get_outgoing
Return a list of link triples that are (directly) outgoing of this node.
link_type – If specified should be a string or tuple to get the inputs of this link type, if None then returns all outputs of all link types.
link_label_filter – filters the outgoing nodes by their link label. Wildcards (% and _) can be passed in the link label filter, as we are using “like” in the QueryBuilder.
get schema of the node
get_stored_link_triples
Return the list of stored link triples directly incoming to or outgoing of this node.
Note this will only return link triples that are stored in the database. Anything in the cache is ignored.
node_class – If specified, should be a class, and it filters only elements of that (subclass of) type
link_type – Only get inputs of this link type, if empty tuple then returns all inputs of all link types.
link_label_filter – filters the incoming nodes by its link label. This should be a regex statement as one would pass directly to a QueryBuilder filter statement with the ‘like’ operation.
link_direction – incoming or outgoing to get the incoming or outgoing links, respectively.
has_cached_links
Return whether there are unstored incoming links in the cache.
boolean, True when there are links in the incoming cache, False otherwise
is_created_from_cache
Return whether this node was created from a cached node.
boolean, True if the node was created by cloning a cached node, False otherwise
is_valid_cache
Hook to exclude certain Node instances from being considered a valid cache.
list_object_names
Return a list of the object names contained in this repository, optionally in the given sub directory.
list_objects
Return a list of the objects contained in this repository, optionally in the given sub directory.
a list of File named tuples representing the objects present in directory with the given path
FileNotFoundError – if the path does not exist in the repository of this node
Return the logger configured for this Node.
Logger object
Return the node mtime.
the mtime
node_type
Return the node type.
the node type
open
Open a file handle to the object with the given path.
Deprecated since version 1.4.0: Starting from v2.0.0 this will raise if not used in a context manager.
mode – the mode under which to open the handle
process_type
Return the node process type.
the process type
put_object_from_file
Store a new object under path with contents of the file located at filepath on this file system.
Deprecated since version 1.4.0: First positional argument path has been deprecated and renamed to filepath.
filepath – absolute path of file whose contents to copy to the repository
path – the relative path where to store the object in the repository.
mode – the file mode with which the object will be written Deprecated: will be removed in v2.0.0
encoding – the file encoding with which the object will be written Deprecated: will be removed in v2.0.0
put_object_from_filelike
Store a new object under path with contents of filelike object handle.
handle – filelike object with the content to be stored
mode – the file mode with which the object will be written
encoding – the file encoding with which the object will be written
put_object_from_tree
Store a new object under path with the contents of the directory located at filepath on this file system.
Deprecated since version 1.4.0: Keyword contents_only is deprecated and will be removed in v2.0.0.
filepath – absolute path of directory whose contents to copy to the repository
contents_only – boolean, if True, omit the top level directory of the path and only copy its contents.
rehash
Regenerate the stored hash of the Node.
remove_comment
Delete an existing comment.
After this method is called, attributes can no longer be changed. Conversely, extras can be changed only AFTER calling this store() function.
After successful storage, those links that are in the cache, and for which also the parent node is already stored, will be automatically stored. The others will remain unstored.
store_all
Store the node, together with all input links.
Unstored nodes from cached incoming links will also be stored.
update_comment
Update the content of an existing comment.
content – the new comment content
Return the user of this node.
the user
User
Return the node UUID.
the string representation of the UUID
validate_incoming
Validate adding a link of the given type from a given node to ourself.
This function will first validate the types of the inputs, followed by the node and link types and validate whether in principle a link of that type between the nodes of these types is allowed.
Subsequently, the validity of the “degree” of the proposed link is validated, which means validating the number of links of the given type from the given node type is allowed.
The validity of the triple (source, link, target) should be validated in the validate_incoming call. This method will be called afterwards and can be overridden by subclasses to add additional checks that are specific to that subclass.
validate_storability
Verify that the current node is allowed to be stored.
aiida.common.exceptions.StoringNotAllowed – if the node does not match all requirements for storing
verify_are_parents_stored
Verify that all parent nodes are already stored.
aiida.common.ModificationNotAllowed – if one of the source nodes of incoming links is not stored.
NumericType
Sub class of Data to store numbers, overloading common operators (+, *, …).
+
*
__add__
Decorator wrapper.
__div__
__float__
__floordiv__
__ge__
__gt__
__le__
__lt__
__mod__
__mul__
__pow__
__radd__
__rdiv__
__rfloordiv__
__rmod__
__rmul__
__rsub__
__rtruediv__
__sub__
__truediv__
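The operator overloading listed above follows a common pattern: each arithmetic dunder unwraps the operands' values, applies the Python operator, and wraps the result in a new node of the same kind. A minimal sketch (illustrative only; the real NumericType also handles type promotion between numeric types and returns proper AiiDA data nodes):

```python
class MiniNumeric:
    """Wrap a numeric value and overload arithmetic to return wrapped results."""
    def __init__(self, value):
        self.value = value

    def _apply(self, other, op):
        # accept both wrapped and plain operands
        other_value = other.value if isinstance(other, MiniNumeric) else other
        return type(self)(op(self.value, other_value))

    def __add__(self, other):
        return self._apply(other, lambda a, b: a + b)

    def __radd__(self, other):
        return self._apply(other, lambda a, b: b + a)

    def __mul__(self, other):
        return self._apply(other, lambda a, b: a * b)

result = MiniNumeric(2) + MiniNumeric(3)  # a new MiniNumeric with value 5
print((result * 4).value)
```
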
OrbitalData
Used for storing collections of orbitals, as well as providing methods for accessing them internally.
clear_orbitals
Remove all orbitals that were added to the class. Cannot be used once the OrbitalData node has been stored.
get_orbitals
Returns all orbitals by default. If a site is provided, returns all orbitals corresponding to the location of that site; additional arguments may be provided, which act as filters on the retrieved orbitals.
site – if provided, returns all orbitals with position of site
attributes that can filter the set of returned orbitals
a list of orbitals
set_orbitals
Sets the orbitals into the database. Uses the orbital’s inherent set_orbital_dict method to generate an orbital dict string.
orbital – an orbital or list of orbitals to be set
OrderSpecifier
ProcessNode
Bases: aiida.orm.utils.mixins.Sealable, aiida.orm.nodes.node.Node
aiida.orm.utils.mixins.Sealable
Base class for all nodes representing the execution of a process
This class and its subclasses serve as proxies in the database, for actual Process instances being run. The Process instance in memory will leverage an instance of this class (the exact sub class depends on the sub class of Process) to persist important information of its state to the database. This serves as a way for the user to inspect the state of the Process during its execution as well as a permanent record of its execution in the provenance graph, after the execution has terminated.
CHECKPOINT_KEY
EXCEPTION_KEY
EXIT_MESSAGE_KEY
EXIT_STATUS_KEY
PROCESS_LABEL_KEY
PROCESS_PAUSED_KEY
PROCESS_STATE_KEY
PROCESS_STATUS_KEY
_hash_ignored_inputs
called
Return a list of nodes that the process called
list of process nodes called by this process
called_descendants
Return a list of all nodes that have been called downstream of this process
This will recursively find all the called processes for this process and its children.
caller
Return the process node that called this process node, or None if it does not have a caller
process node that called this process node instance or None
checkpoint
Return the checkpoint bundle set for the process
checkpoint bundle if it exists, None otherwise
delete_checkpoint
Delete the checkpoint bundle set for the process
exception
Return the exception of the process or None if the process is not excepted.
If the process is marked as excepted yet there is no exception attribute, an empty string will be returned.
the exception message or None
exit_message
Return the exit message of the process
the exit message
exit_status
Return the exit status of the process
the exit status, an integer exit code or None
Return a ProcessBuilder that is ready to relaunch the process that created this node.
~aiida.engine.processes.builder.ProcessBuilder instance
is_excepted
Return whether the process has excepted
Excepted means that during execution of the process, an exception was raised that was not caught.
True if during execution of the process an exception occurred, False otherwise
is_failed
Return whether the process has failed
Failed means that the process terminated nominally but it had a non-zero exit status.
True if the process has failed, False otherwise
is_finished
Return whether the process has finished
Finished means that the process reached a terminal state nominally. Note that this does not necessarily mean successfully, but there were no exceptions and it was not killed.
True if the process has finished, False otherwise
is_finished_ok
Return whether the process has finished successfully
Finished successfully means that it terminated nominally and had a zero exit status.
True if the process has finished successfully, False otherwise
is_killed
Return whether the process was killed
Killed means the process was killed directly by the user or by the calling process being killed.
True if the process was killed, False otherwise
is_terminated
Return whether the process has terminated
Terminated means that the process has reached any terminal state.
True if the process has terminated, False otherwise
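How the predicates above relate to each other can be summarised in a small sketch; this is an assumed reconstruction from the descriptions, whereas the real properties read the process state and exit status from the node's attributes:

```python
from enum import Enum

class State(Enum):
    RUNNING = 'running'
    FINISHED = 'finished'   # terminated nominally
    EXCEPTED = 'excepted'   # an uncaught exception was raised
    KILLED = 'killed'       # killed by the user or a calling process

def is_terminated(state):
    """Any terminal state counts as terminated."""
    return state in (State.FINISHED, State.EXCEPTED, State.KILLED)

def is_finished_ok(state, exit_status):
    """Terminated nominally with a zero exit status."""
    return state is State.FINISHED and exit_status == 0

def is_failed(state, exit_status):
    """Terminated nominally but with a non-zero exit status."""
    return state is State.FINISHED and exit_status not in (None, 0)

print(is_failed(State.FINISHED, 2), is_finished_ok(State.FINISHED, 0))
```
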
Return whether the node is valid for caching
True if this process node is valid to be used for caching, False otherwise
Get the logger of the Calculation object, so that it also logs to the DB.
LoggerAdapter object, that works like a logger, but also has the ‘extra’ embedded
pause
Mark the process as paused by setting the corresponding attribute.
This serves only to reflect that the corresponding Process is paused and so this method should not be called by anyone but the Process instance itself.
paused
Return whether the process is paused
True if the Calculation is marked as paused, False otherwise
process_class
Return the process class that was used to create this node.
Process class
ValueError – if no process type is defined, it is an invalid process type string or cannot be resolved to load the corresponding class
process_label
Return the process label
the process label
process_state
Return the process state
the process state instance of ProcessState enum
process_status
Return the process status
The process status is a generic status message e.g. the reason it might be paused or when it is being killed
the process status
set_checkpoint
Set the checkpoint bundle set for the process
state – string representation of the stepper state info
set_exception
Set the exception of the process
exception – the exception message
set_exit_message
Set the exit message of the process, if None nothing will be done
message – a string message
set_exit_status
Set the exit status of the process
state – an integer exit code or None, which will be interpreted as zero
set_process_label
Set the process label
label – process label string
set_process_state
Set the process state
state – value or instance of ProcessState enum
set_process_status
Set the process status
The process status is a generic status message e.g. the reason it might be paused or when it is being killed. If status is None, the corresponding attribute will be deleted.
status – string process status
set_process_type
Set the process type string.
process_type – the process type string identifying the class using this process node as storage.
unpause
Mark the process as unpaused by removing the corresponding attribute.
This serves only to reflect that the corresponding Process is unpaused and so this method should not be called by anyone but the Process instance itself.
Adding an input link to a ProcessNode once it is stored is illegal because this should be taken care of by the engine in one go. If a link is being added after the node is stored, it is most likely not by the engine and it should not be allowed.
ProjectionData
Bases: aiida.orm.nodes.data.orbital.OrbitalData, aiida.orm.nodes.data.array.array.ArrayData
aiida.orm.nodes.data.orbital.OrbitalData
A class to handle arrays of projected wavefunction data, i.e. projections of orbitals, usually atomic-hydrogen orbitals, onto a given Bloch wavefunction, the Bloch wavefunction being indexed by s, n, and k. E.g. the elements are the projections described as < orbital | Bloch wavefunction (s,n,k) >
_check_projections_bands
Checks that a reference bandsdata has already been set, and that projection_array has the same shape as the bands data
projwfc_arrays – nk x nb x nwfc array, to be checked against bands
AttributeError if energy is not already set
AttributeError if input_array is not of same shape as dos_energy
_find_orbitals_and_indices
Finds all the orbitals and their indices associated with kwargs, essential for retrieving the other indexed array parameters
kwargs – kwargs that can call orbitals as in get_orbitals()
retrieve_indexes, list of indices of orbitals corresponding to the kwargs
all_orbitals, list of orbitals to which the indexes correspond
_from_index_to_arrayname
Used internally to determine the array names.
get_pdos
Retrieves all the pdos arrays corresponding to the input kwargs
kwargs – inputs describing the orbitals associated with the pdos arrays
a list of tuples containing the orbital, energy array and pdos array associated with all orbitals that correspond to kwargs
get_projections
a list of tuples containing the orbital, and projection arrays associated with all orbitals that correspond to kwargs
get_reference_bandsdata
Returns the reference BandsData, using the set uuid via set_reference_bandsdata
a BandsData instance
AttributeError – if the bandsdata has not been set yet
exceptions.NotExistent – if the bandsdata uuid did not retrieve bandsdata
This method is inherited from OrbitalData, but is blocked here. If used, it will raise a NotImplementedError
set_projectiondata
Stores the projwfc_array using the projwfc_label, after validating both.
list_of_orbitals – list of orbitals (orbital data class instances). They should be the orbitals to which the projection arrays correspond.
list_of_projections – list of arrays of projections of atomic wavefunctions onto Bloch wavefunctions. Since the projection is for every Bloch wavefunction, which can be specified by its spin (if used), band, and kpoint, the dimensions of the projwfc array must be nspin x nbands x nkpoints, or nbands x nkpoints if spin is not used.
energy_axis – list of energy axis for the list_of_pdos
list_of_pdos – a list of projected density of states for the atomic wavefunctions, units in states/eV
tags – A list of tags, not supported currently.
bands_check – if False, skips the checks of whether the bands have already been set and whether the sizes match. For use in parsers, where the BandsData has not yet been stored and therefore get_reference_bandsdata cannot be called
set_reference_bandsdata
Sets a reference bandsdata, creating a uuid link between this data object and a bandsdata object. Must be set before any projection arrays.
value – a BandsData instance, a uuid or a pk
exceptions.NotExistent if there was no BandsData associated with uuid or pk
QueryBuilder
The class to query the AiiDA database.
Usage:
from aiida.orm.querybuilder import QueryBuilder
qb = QueryBuilder()
qb.append(Node)  # Querying nodes
results = qb.all()  # retrieving the results
_EDGE_TAG_DELIM
_VALID_PROJECTION_KEYS
Create deep copy of QueryBuilder instance.
Instantiates a QueryBuilder instance.
Which backend is used is decided here, based on the backend settings (taken from the user profile). So far this cannot be overridden by the user.
debug (bool) – Turn on debug mode. This feature prints information on the screen about the stages of the QueryBuilder. Does not affect results.
path (list) – A list of the vertices to traverse. Leave empty if you plan on using the method QueryBuilder.append().
QueryBuilder.append()
filters – The filters to apply. You can specify the filters here, when appending to the query using QueryBuilder.append(), or even later using QueryBuilder.add_filter(). The latter gives API details.
QueryBuilder.add_filter()
project – The projections to apply. You can specify the projections here, when appending to the query using QueryBuilder.append(), or even later using QueryBuilder.add_projection(). The latter gives API details.
QueryBuilder.add_projection()
limit (int) – Limit the number of rows to this number. Check QueryBuilder.limit() for more information.
QueryBuilder.limit()
offset (int) – Set an offset for the results returned. Details in QueryBuilder.offset().
QueryBuilder.offset()
order_by – How to order the results. As with the two above, this can also be set at a later stage; check QueryBuilder.order_by() for more information.
QueryBuilder.order_by()
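The keyword arguments above can be collected in a single dictionary. The following sketch shows the shape of such a specification; the tag names, filter values, and the keys inside the path entries are hypothetical, for illustration only:

```python
# Illustrative queryhelp-style specification matching the parameters above.
queryhelp = {
    'path': [{'cls': 'Node', 'tag': 'node'}],  # vertices to traverse (keys are illustrative)
    'filters': {'node': {'id': {'>': 42}}},    # filters, per tag
    'project': {'node': ['id', 'uuid']},       # what each row should contain
    'limit': 10,                               # maximum number of rows
    'offset': 5,                               # skip the first rows
    'order_by': {'node': [{'id': 'desc'}]},    # ordering specification
}
print(sorted(queryhelp))
```

The same specification can also be built incrementally via QueryBuilder.append(), add_filter(), add_projection(), limit(), offset(), and order_by().
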
When somebody calls print(querybuilder) or str(querybuilder), the SQL query is printed.
_add_group_type_filter
Add a filter based on group type.
tagspec – The tag, which has to exist already as a key in self._filters
classifiers – a dictionary with classifiers
subclassing – if True, allow for subclasses of the ormclass
_add_node_type_filter
Add a filter based on node type.
_add_process_type_filter
Add a filter based on process type.
subclassing – if True, allow for subclasses of the process type
Note: This function handles the case when process_type_string is None.
_add_to_projections
alias (sqlalchemy.orm.util.AliasedClass) – An instance of sqlalchemy.orm.util.AliasedClass, an alias for an ormclass
sqlalchemy.orm.util.AliasedClass
projectable_entity_name – User specification of what to project. Appends to the query’s entities what the user wants to project (to have returned by the query)
_build
build the query and return a sqlalchemy.Query instance
_build_filters
Recurse through the filter specification and apply filter operations.
alias – The alias of the ORM class the filter will be applied on
filter_spec – the specification as given by the queryhelp
an instance of sqlalchemy.sql.elements.BinaryExpression.
_build_order
Build the order parameter of the query
_build_projections
Build the projections for a given tag.
_check_dbentities
entities_cls_joined – A tuple of the aliased class passed as joined_entity and the ormclass that was expected
entities_cls_joined – A tuple of the aliased class passed as entity_to_join and the ormclass that was expected
relationship (str) – The relationship between the two entities to make the Exception comprehensible
_get_connecting_node
querydict – A dictionary specifying how the current node is linked to other nodes.
index – Index of this node within the path specification
joining_keyword – the relation on which to join
joining_value – the tag of the nodes to be joined
_get_function_map
Map relationship-type keywords to functions. The new mapping (since 1.0.0a5) is a two-level dictionary: the first level defines the entity that was passed to the qb.append function, and the second defines the relationship with respect to a given tag.
_get_ormclass
Get ORM classifiers from either class(es) or ormclass_type_string(s).
cls – a class or tuple/set/list of classes that are either AiiDA ORM classes or backend ORM classes.
ormclass_type_string – type string for ORM class
the ORM class as well as a dictionary with additional classifier strings
Handles the case of lists as well.
_get_projectable_entity
Return projectable entity for a given alias and column name.
_get_tag_from_specification
specification – If that is a string, I assume the user has deliberately specified it with tag=specification. In that case, I simply check that it’s not a duplicate. If it is a class, I check if it’s in the _cls_to_tag_map!
_get_unique_tag
Using the function get_tag_from_type, I get a tag. I increment an index that is appended to that tag until I have an unused tag. This function is called in QueryBuilder.append() when autotag is set to True.
classifiers (dict) –
Classifiers, containing the string that defines the type of the AiiDA ORM class. For subclasses of Node, this is the Node._plugin_type_string, for other they are as defined as returned by QueryBuilder._get_ormclass().
QueryBuilder._get_ormclass()
Can also be a list of dictionaries, when multiple classes are passed to QueryBuilder.append
A tag as a string (it is a single string also when passing multiple classes).
_join_ancestors_recursive
Joining ancestors using the recursive functionality. TODO: move the filters inside the recursive query (for example on depth). TODO: pass an option to also show the path, if this is wanted.
_join_comment_node
joined_entity – An aliased comment
entity_to_join – aliased node
_join_comment_user
entity_to_join – aliased user
_join_computer
joined_entity – An entity that can use a computer (eg a node)
entity_to_join – aliased dbcomputer entity
_join_created_by
joined_entity – the aliased user you want to join to
entity_to_join – the (aliased) node or group in the DB to join with
_join_creator_of
joined_entity – the aliased node
entity_to_join – the aliased user to join to that node
_join_descendants_recursive
Joins descendants using the recursive functionality. TODO: move the filters (for example on depth) inside the recursive query. TODO: pass an option to also show the path, if this is wanted.
_join_group_members
joined_entity – The (aliased) ORMclass that is a group in the database
entity_to_join – The (aliased) ORMClass that is a node and member of the group
joined_entity and entity_to_join are joined via the table_groups_nodes table, from joined_entity as group to entity_to_join as node (entity_to_join is with_group joined_entity).
_join_group_user
joined_entity – An aliased dbgroup
entity_to_join – aliased dbuser
_join_groups
joined_entity – The (aliased) node in the database
entity_to_join – The (aliased) Group
joined_entity and entity_to_join are joined via the table_groups_nodes table, from joined_entity as node to entity_to_join as group (entity_to_join is a group with_node joined_entity).
_join_inputs
joined_entity – The (aliased) ORMclass that is an output
entity_to_join – The (aliased) ORMClass that is an input.
joined_entity and entity_to_join are joined with a link from joined_entity as output to entity_to_join as input (entity_to_join is with_outgoing joined_entity).
_join_log_node
joined_entity – An aliased log
_join_node_comment
joined_entity – An aliased node
entity_to_join – aliased comment
_join_node_log
entity_to_join – aliased log
_join_outputs
joined_entity – The (aliased) ORMclass that is an input
entity_to_join – The (aliased) ORMClass that is an output.
joined_entity and entity_to_join are joined with a link from joined_entity as input to entity_to_join as output (entity_to_join is with_incoming joined_entity).
_join_to_computer_used
joined_entity – the (aliased) computer entity
entity_to_join – the (aliased) node entity
_join_user_comment
joined_entity – An aliased user
_join_user_group
entity_to_join – aliased group
_process_filters
Process filters.
add_filter
Adding a filter to my filters.
filter_spec – The specifications for the filter, has to be a dictionary
qb = QueryBuilder()          # Instantiating the QueryBuilder instance
qb.append(Node, tag='node')  # Appending a Node
# Let's put some filters:
qb.add_filter('node', {'id': {'>': 12}})
# Two filters together:
qb.add_filter('node', {'label': 'foo', 'uuid': {'like': 'ab%'}})
# Now I am overriding the first filter I set:
qb.add_filter('node', {'id': 13})
add_projection
Adds a projection
tag_spec – A valid specification for a tag
projection_spec – The specification for the projection. A projection is a list of dictionaries, with each dictionary containing key-value pairs where the key is database entity (e.g. a column / an attribute) and the value is (optional) additional information on how to process this database entity.
If the given projection_spec is not a list, it will be expanded to a list. If the list items are not dictionaries but strings (no additional processing of the projected results desired), they will be expanded to dictionaries.
qb = QueryBuilder()
qb.append(StructureData, tag='struc')
# Will project the uuid and the kinds
qb.add_projection('struc', ['uuid', 'attributes.kinds'])
The above example will project the uuid and the kinds-attribute of all matching structures. There are 2 (so far) special keys.
The single star * will project the ORM-instance:
qb = QueryBuilder()
qb.append(StructureData, tag='struc')
# Will project the ORM instance
qb.add_projection('struc', '*')
print(type(qb.first()[0]))
# >>> aiida.orm.nodes.data.structure.StructureData
The double star ** projects all possible projections of this entity:
QueryBuilder().append(StructureData, tag='s', project='**').limit(1).dict()[0]['s'].keys()
# >>> 'user_id, description, ctime, label, extras, mtime, id, attributes, dbcomputer_id, type, uuid'
Be aware that the result of ** depends on the backend implementation.
Executes the full query with the order of the rows as returned by the backend.
The order inside each row is given by the order of the vertices in the path and the order of the projections for each vertex in the path.
batch_size (int) – the size of the batches to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter. Leave the default None if speed is not critical or if you don’t know what you’re doing.
flat (bool) – return the result as a flat list of projected entities without sub lists.
a list of lists of all projected entities.
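The effect of the flat option can be illustrated with plain list flattening (a sketch of the semantics, not the actual implementation):

```python
# Each row returned by all() is a list of projected entities;
# flat=True concatenates all rows into one flat list.
rows = [[101, 'uuid-a'], [102, 'uuid-b']]  # hypothetical projected rows

flat_result = [item for row in rows for item in row]
print(flat_result)  # [101, 'uuid-a', 102, 'uuid-b']
```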
Any iterative procedure to build the path for a graph query needs to invoke this method to append to the path.
cls –
The AiiDA class (or backend class) defining the appended vertex. Also supports a tuple/list of classes, which results in all instances of any of these classes being accepted in the query. However, the classes have to share the same ORM class for the joining to work, i.e. both have to be subclasses of Node. Valid is:
cls=(StructureData, Dict)
This is invalid:
cls=(Group, Node)
entity_type – The node type of the class, if cls is not given. Also here, a tuple or list is accepted.
autotag (bool) – Whether to automatically find a unique tag. If this is set to True (default False), a unique tag is generated automatically.
tag (str) – A unique tag. If none is given, I will create a unique tag myself.
filters – Filters to apply for this vertex. See add_filter(), the method invoked in the background, or usage examples for details.
add_filter()
project – Projections to apply. See usage examples for details. More information also in add_projection().
add_projection()
subclassing (bool) – Whether to include subclasses of the given class (default True), e.g. specifying ProcessNode as cls will include CalcJobNode, WorkChainNode, CalcFunctionNode, etc.
outerjoin (bool) – If True (default False), a left outer join is performed instead of an inner join.
edge_tag (str) – The tag that the edge will get. If nothing is specified (and there is a meaningful edge) the default is tag1--tag2, with tag1 being the entity joined from and tag2 the entity joined to (this entity).
edge_filters (str) – The filters to apply on the edge. Also here, details in add_filter().
edge_project (str) – The project from the edges. API-details in add_projection().
A small usage example how this can be invoked:
qb = QueryBuilder()           # Instantiating empty querybuilder instance
qb.append(cls=StructureData)  # First item is a StructureData node
# The next node in the path is a PwCalculation,
# with the structure joined as an input
qb.append(
    cls=PwCalculation,
    with_incoming=StructureData
)
self
children
Join to children/descendants of the previous vertex in the path.
Counts the number of rows returned by the backend.
the number of rows as an integer
Executes the full query with the order of the rows as returned by the backend. The order inside each row is given by the order of the vertices in the path and the order of the projections for each vertex in the path.
batch_size (int) – The size of the batches to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter. Leave the default (None) if speed is not critical or if you don’t know what you’re doing!
a list of dictionaries of all projected entities. Each dictionary consists of key-value pairs, where the key is the tag of the vertex and the value a dictionary of key-value pairs where the key is the entity description (a column name or attribute path) and the value the value in the DB.
qb = QueryBuilder()
qb.append(
    StructureData,
    tag='structure',
    filters={'uuid': {'==': myuuid}},
)
qb.append(
    Node,
    with_ancestors='structure',
    project=['entity_type', 'id'],  # returns entity_type (string) and id (string)
    tag='descendant'
)

# Return the dictionaries:
print("qb.iterdict()")
for d in qb.iterdict():
    print('>>>', d)
results in the following output:
qb.iterdict()
>>> {'descendant': {'entity_type': 'calculation.job.quantumespresso.pw.PwCalculation.', 'id': 7716}}
>>> {'descendant': {'entity_type': 'data.remote.RemoteData.', 'id': 8510}}
distinct
Asks for distinct rows, which is the same as asking the backend to remove duplicates. Does not execute the query!
If you want a distinct query:
qb = QueryBuilder()
# append stuff!
qb.append(...)
qb.append(...)
...
qb.distinct().all()
# or
qb.distinct().dict()
first
Executes query asking for one instance. Use as follows:
qb = QueryBuilder(**queryhelp)
qb.first()
One row of results as a list
get_aiida_entity_res
Convert a projected query result to front end class if it is an instance of a BackendEntity.
Values that are not a BackendEntity instance will be returned unaltered.
value – a projected query result to convert
the converted value
get_alias
To allow the user to continue a query, this utility function returns the aliased ORM classes.
tag – The tag for a vertex in the path
the alias given for that vertex
get_aliases
the list of aliases
get_json_compatible_queryhelp
Makes the queryhelp a json-compatible dictionary.
In this way, the queryhelp can be stored in the database or a JSON object, retrieved or shared, and used later. See this usage:
qb = QueryBuilder(limit=3).append(StructureData, project='id').order_by({StructureData: 'id'})
queryhelp = qb.get_json_compatible_queryhelp()

# Now I could save this dictionary somewhere and use it later:
qb2 = QueryBuilder(**queryhelp)

# This is True if no change has been made to the database.
# Note that such a comparison can only be True if the order of results is enforced
qb.all() == qb2.all()
the json-compatible queryhelp
Deprecated since version 1.0.0: Will be removed in v2.0.0, use the aiida.orm.querybuilder.QueryBuilder.queryhelp() property instead.
get_query
Instantiates and manipulates a sqlalchemy.orm.Query instance if this is needed. First, I check if the query instance is still valid by hashing the queryhelp. In this way, if a user asks for the same query twice, I am not recreating an instance.
an instance of sqlalchemy.orm.Query that is specific to the backend used.
get_used_tags
Returns a list of all the vertices that are being used. Some parameters allow selecting only subsets. :param bool vertices: Defaults to True. If True, adds the tags of vertices to the returned list. :param bool edges: Defaults to True. If True, adds the tags of edges to the returned list.
A list of all tags, including (if present) the tags given to the edges.
inject_query
Manipulate the query and inject it back. This can be done to add custom filters using SQLA. :param query: A sqlalchemy.orm.Query instance
Join to inputs of the previous vertex in the path.
iterall
Same as all(), but returns a generator. Be aware that this is only safe if no commit will take place during this transaction. You might also want to read the SQLAlchemy documentation on http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.yield_per
batch_size (int) – The size of the batches to ask the backend to batch results in subcollections. You can optimize the speed of the query by tuning this parameter.
a generator of lists
iterdict
Same as dict(), but returns a generator. Be aware that this is only safe if no commit will take place during this transaction. You might also want to read the SQLAlchemy documentation on http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.yield_per
a generator of dictionaries
limit
Set the limit (number of rows to return).
limit (int) – the number of rows to return
offset
Set the offset. If offset is set, that many rows are skipped before returning. offset = 0 is the same as omitting setting the offset. If both offset and limit appear, then offset rows are skipped before starting to count the limit rows that are returned.
offset (int) – the number of rows to skip
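The interaction of limit and offset follows the usual paging contract: offset rows are skipped first, then at most limit rows are returned. A sketch of that contract on an in-memory list (the real work happens in the backend):

```python
rows = list(range(10))  # ten hypothetical result rows
offset, limit = 3, 4

# Skip `offset` rows, then return at most `limit` rows:
page = rows[offset:offset + limit]
print(page)  # [3, 4, 5, 6]
```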
one
Executes the query asking for exactly one result. Will raise an exception if this is not the case. :raises: MultipleObjectsError if more than one row can be returned :raises: NotExistent if no result was found
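The contract of one() can be sketched in standalone Python; the classes below merely stand in for the AiiDA exceptions named in the docstring:

```python
class MultipleObjectsError(Exception):
    pass

class NotExistent(Exception):
    pass

def one(rows):
    """Return the single result row, raising if there is not exactly one."""
    if len(rows) > 1:
        raise MultipleObjectsError('more than one row can be returned')
    if not rows:
        raise NotExistent('no result was found')
    return rows[0]

print(one([['only-row']]))  # ['only-row']
```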
order_by
Set the entity to order by
order_by – A list of items, where each item is a dictionary that specifies what to sort for an entity.
In each dictionary in that list, keys represent valid tags of entities (tables), and values are lists of columns.
# Sorting by id (ascending):
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node': ['id']})

# or
# Sorting by id (ascending):
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node': [{'id': {'order': 'asc'}}]})

# for descending order:
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node': [{'id': {'order': 'desc'}}]})

# or (shorter)
qb = QueryBuilder()
qb.append(Node, tag='node')
qb.order_by({'node': [{'id': 'desc'}]})
Join to outputs of the previous vertex in the path.
parents
Join to parents/ancestors of the previous vertex in the path.
queryhelp
queryhelp dictionary corresponding to the QueryBuilder instance.
The queryhelp can be used to create a copy of the QueryBuilder instance like so:
qb = QueryBuilder(limit=3).append(StructureData, project='id').order_by({StructureData: 'id'})
qb2 = QueryBuilder(**qb.queryhelp)

# The following is True if no change has been made to the database.
# Note that such a comparison can only be True if the order of results is enforced
qb.all() == qb2.all()
a queryhelp dictionary
set_debug
Run in debug mode. This does not affect functionality, but prints intermediate stages when creating a query on screen.
debug (bool) – Turn debug on or off
RemoteData
Store a link to a file or folder on a remote machine.
Remember to pass a computer!
_clean
Remove all content of the remote folder on the remote computer
Get label of this node’s computer.
Deprecated since version 1.4.0: Will be removed in v2.0.0, use the self.computer.label property instead.
get_remote_path
getfile
Connects to the remote folder and retrieves the content of a file.
relpath – The relative path of the file on the remote to retrieve.
destpath – The absolute path of where to store the file on the local machine.
Check if remote folder is empty
listdir
Connects to the remote folder and lists the directory content.
relpath – If ‘relpath’ is specified, lists the content of the given subfolder.
a flat list of file/directory names (as strings).
listdir_withattributes
a list of dictionaries, as documented in Transport.listdir_withattributes.
set_remote_path
RemoteStashData
Data plugin that models an archived folder on a remote computer.
A stashed folder is essentially an instance of RemoteData that has been archived. Archiving in this context can simply mean copying the content of the folder to another location on the same or another filesystem as long as it is on the same machine. In addition, the folder may have been compressed into a single file for efficiency or even written to tape. The stash_mode attribute will distinguish how the folder was stashed which will allow the implementation to also unstash it and transform it back into a RemoteData such that it can be used as an input for new CalcJobs.
stash_mode
This class is a non-storable base class that merely registers the stash_mode attribute. Only its subclasses, which actually implement a certain stash mode, can be instantiated and therefore stored. The reason for this design is that the behavior of the class can change significantly based on the mode employed to stash the files, and implementing all these variants in the same class would lead to an unintuitive interface, where certain properties or methods of the class would only be available or function properly based on the stash_mode.
Construct a new instance
stash_mode – the stashing mode with which the data was stashed on the remote.
Return the mode with which the data was stashed on the remote.
the stash mode.
RemoteStashFolderData
Bases: aiida.orm.nodes.data.remote.stash.base.RemoteStashData
Data plugin that models a folder with files of a completed calculation job that has been stashed through a copy.
This data plugin can and should be used to stash files if and only if the stash mode is StashMode.COPY.
target_basepath – the target basepath.
source_list – the list of source files.
source_list
Return the list of source files that were stashed.
the list of source files.
target_basepath
Return the target basepath.
the target basepath.
SinglefileData
Data class that can be used to store a single file in its repository.
DEFAULT_FILENAME
file – an absolute filepath or filelike object whose contents to copy. Hint: pass io.BytesIO(b"my string") to construct the SinglefileData directly from a string.
Ensure that there is one object stored in the repository, whose key matches the value set for the filename attribute.
filename
Return the name of the file stored.
the filename under which the file is stored in the repository
get_content
Return the content of the single file stored for this data node.
the content of the file as a string
Return an open file handle to the content of this data node.
key – optional key within the repository, by default is the filename set in the attributes
mode – the mode with which to open the file handle (default: read mode)
a file handle
Store the content of the file in the node’s repository, deleting any other existing objects.
file – an absolute filepath or filelike object whose contents to copy. Hint: pass io.BytesIO(b"my string") to construct the file directly from a string.
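The hint above relies on io.BytesIO behaving like an opened binary file. The following standalone sketch (plain stdlib, not AiiDA code) shows the round trip that constructing from a string implies:

```python
import io

# io.BytesIO wraps an in-memory bytes buffer in the filelike interface,
# so its contents can be read (and copied into a repository) like a file.
handle = io.BytesIO(b"my string")   # "file" content built from a string
content = handle.read()             # what would be copied into the repository
print(content.decode('utf-8'))      # my string
```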
Str
Data sub class to represent a string value.
alias of builtins.str
StructureData
This class contains the information about a given structure, i.e. a collection of sites together with a cell, the boundary conditions (whether they are periodic or not) and other related useful information.
_adjust_default_cell
If the structure was imported from an xyz file, it lacks a defined cell, and the default cell is taken ([[1,0,0], [0,1,0], [0,0,1]]), leading to an unphysical definition of the structure. This method will adjust the cell.
_dimensionality_label
Converts StructureData to ase.Atoms
_get_object_phonopyatoms
Converts StructureData to PhonopyAtoms
a PhonopyAtoms object
_get_object_pymatgen
Converts StructureData to pymatgen object
a pymatgen Structure for structures with periodic boundary conditions (in three dimensions) and Molecule otherwise
Requires the pymatgen module (version >= 3.0.13, usage of earlier versions may cause errors).
_get_object_pymatgen_molecule
Converts StructureData to pymatgen Molecule object
a pymatgen Molecule object corresponding to this StructureData object.
Requires the pymatgen module (version >= 3.0.13, usage of earlier versions may cause errors)
_get_object_pymatgen_structure
Converts StructureData to pymatgen Structure object :param add_spin: True to add the spins to the pymatgen structure. Default is False (no spin added).
The spins are set according to the following rule:
if the kind name ends with 1 -> spin=+1
if the kind name ends with 2 -> spin=-1
a pymatgen Structure object corresponding to this StructureData object
ValueError – if periodic boundary conditions do not hold in at least one dimension of real space; if there are partial occupancies together with spins (defined by kind names ending with '1' or '2').
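The kind-name suffix rule above can be sketched as a tiny helper; spin_from_kind_name is a hypothetical name for illustration, not part of the AiiDA API:

```python
# Minimal sketch of the spin rule stated above: a kind name ending in '1'
# maps to spin +1, one ending in '2' maps to spin -1, anything else to 0.
def spin_from_kind_name(kind_name):
    if kind_name.endswith('1'):
        return +1
    if kind_name.endswith('2'):
        return -1
    return 0

print(spin_from_kind_name('Fe1'))  # 1
print(spin_from_kind_name('Fe2'))  # -1
print(spin_from_kind_name('O'))    # 0
```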
_internal_kind_tags
_parse_xyz
Read the structure from a string of format XYZ.
_prepare_chemdoodle
Write the given structure to a string of format required by ChemDoodle.
Write the given structure to a string of format CIF.
_prepare_xsf
Write the given structure to a string of format XSF (for XCrySDen).
_prepare_xyz
Write the given structure to a string of format XYZ.
_set_incompatibilities
Performs some standard validation tests.
append_atom
Append an atom to the Structure, taking care of creating the corresponding kind.
ase – the ase Atom object from which we want to create a new atom (if present, this must be the only parameter)
position – the position of the atom (three numbers in angstrom)
symbols – passed to the constructor of the Kind object.
weights – passed to the constructor of the Kind object.
name – passed to the constructor of the Kind object. See also the note below.
Note on the ‘name’ parameter (that is, the name of the kind):
if specified, no checks are done on existing species. Simply, a new kind with that name is created. If there is a name clash, a check is done: if the kinds are identical, no error is issued; otherwise, an error is issued because you are trying to store two different kinds with the same name.
if not specified, the name is automatically generated. Before adding the kind, a check is done. If other species with the same properties already exist, no new kinds are created, but the site is added to the existing (identical) kind (actually, the first kind that is encountered). Otherwise, the name is made unique first, by adding to the string containing the list of chemical symbols a number starting from 1, until a unique name is found
checks of equality of species are done using the compare_with() method.
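The automatic name generation described in the note above can be sketched as follows; unique_kind_name is a hypothetical helper for illustration, not the actual AiiDA implementation:

```python
# Sketch of the unique-name rule stated above: start from the string of
# chemical symbols and append an integer suffix, starting from 1, until
# the resulting name is not already taken.
def unique_kind_name(symbols_string, existing_names):
    if symbols_string not in existing_names:
        return symbols_string
    counter = 1
    while '{}{}'.format(symbols_string, counter) in existing_names:
        counter += 1
    return '{}{}'.format(symbols_string, counter)

print(unique_kind_name('Fe', set()))          # Fe
print(unique_kind_name('Fe', {'Fe'}))         # Fe1
print(unique_kind_name('Fe', {'Fe', 'Fe1'}))  # Fe2
```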
append_kind
Append a kind to the StructureData. It makes a copy of the kind.
kind – the site to append, must be a Kind object.
append_site
Append a site to the StructureData. It makes a copy of the site.
site – the site to append. It must be a Site object.
Returns the cell shape.
a 3x3 list of lists.
cell_angles
Get the angles between the cell lattice vectors in degrees.
cell_lengths
Get the lengths of cell lattice vectors in angstroms.
clear_kinds
Removes all kinds for the StructureData object.
Also clears all sites!
clear_sites
Removes all sites for the StructureData object.
Get the ASE object. Requires the ability to import ase.
an ASE object corresponding to this StructureData object.
If any site is an alloy or has vacancies, a ValueError is raised (from the site.get_ase() routine).
get_cell_volume
Returns the cell volume in Angstrom^3.
a float.
get_cif
Creates aiida.orm.nodes.data.cif.CifData.
New in version 1.0: Renamed from _get_cif
converter – specify the converter. Default ‘ase’.
store – If True, intermediate calculation gets stored in the AiiDA database for record. Default False.
aiida.orm.nodes.data.cif.CifData node.
get_composition
Returns the chemical composition of this structure as a dictionary, where each key is the kind symbol (e.g. H, Li, Ba), and each value is the number of occurrences of that element in this structure. For BaZrO3 it would return {'Ba':1, 'Zr':1, 'O':3}. No reduction with smallest common divisor!
a dictionary with the composition
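Since the composition is a plain count with no reduction, it can be reproduced with collections.Counter over the per-site kind symbols; a standalone sketch (not the AiiDA implementation):

```python
from collections import Counter

# Counting kind symbols for a hypothetical BaZrO3 structure, matching the
# example result given above.
site_kinds = ['Ba', 'Zr', 'O', 'O', 'O']
composition = dict(Counter(site_kinds))
print(composition)  # {'Ba': 1, 'Zr': 1, 'O': 3}
```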
Returns a string with information retrieved from the StructureData node's properties.
self – the StructureData node
:return: the description string
get_dimensionality
This function checks the dimensionality of the structure and calculates its length/surface/volume :return: returns the dimensionality and length/surface/volume
get_formula
Return a string with the chemical formula.
mode –
a string to specify how to generate the formula, can assume one of the following values:
'hill' (default): count the number of atoms of each species, then use Hill notation, i.e. alphabetical order with C and H first if one or several C atom(s) is (are) present, e.g. ['C','H','H','H','O','C','H','H','H'] will return 'C2H6O'; ['S','O','O','H','O','H','O'] will return 'H2O4S'. From E. A. Hill, J. Am. Chem. Soc., 22 (8), pp 478–494 (1900)
'hill_compact': same as hill but the number of atoms for each species is divided by the greatest common divisor of all of them, e.g. ['C','H','H','H','O','C','H','H','H','O','O','O'] will return 'CH3O2'
'reduce': group repeated symbols, e.g. ['Ba', 'Ti', 'O', 'O', 'O', 'Ba', 'Ti', 'O', 'O', 'O', 'Ba', 'Ti', 'Ti', 'O', 'O', 'O'] will return 'BaTiO3BaTiO3BaTi2O3'
'group': will try to group as much as possible parts of the formula, e.g. ['Ba', 'Ti', 'O', 'O', 'O', 'Ba', 'Ti', 'O', 'O', 'O', 'Ba', 'Ti', 'Ti', 'O', 'O', 'O'] will return '(BaTiO3)2BaTi2O3'
'count': same as hill (i.e. one just counts the number of atoms of each species) without the re-ordering (take the order of the atomic sites), e.g. ['Ba', 'Ti', 'O', 'O', 'O', 'Ba', 'Ti', 'O', 'O', 'O'] will return 'Ba2Ti2O6'
'count_compact': same as count but the number of atoms for each species is divided by the greatest common divisor of all of them, e.g. ['Ba', 'Ti', 'O', 'O', 'O', 'Ba', 'Ti', 'O', 'O', 'O'] will return 'BaTiO3'
separator – a string used to concatenate symbols. Default empty.
a string with the formula
in modes reduce, group, count and count_compact, the initial order in which the atoms were appended by the user is used to group and/or order the symbols in the formula
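The default 'hill' mode described above can be sketched in a few lines of standalone Python (an illustrative reimplementation, not the AiiDA code): count the atoms of each species, put C first and H second when carbon is present, sort the rest alphabetically, and omit counts equal to 1.

```python
from collections import Counter

def hill_formula(symbols):
    """Illustrative sketch of Hill-notation formula building."""
    counts = Counter(symbols)
    if 'C' in counts:
        ordered = ['C'] + (['H'] if 'H' in counts else [])
        ordered += sorted(s for s in counts if s not in ('C', 'H'))
    else:
        ordered = sorted(counts)
    return ''.join(s + (str(counts[s]) if counts[s] > 1 else '') for s in ordered)

print(hill_formula(['C', 'H', 'H', 'H', 'O', 'C', 'H', 'H', 'H']))  # C2H6O
print(hill_formula(['S', 'O', 'O', 'H', 'O', 'H', 'O']))            # H2O4S
```

Both outputs match the examples given for the 'hill' mode above.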
get_kind
Return the kind object associated with the given kind name.
kind_name – String, the name of the kind you want to get
The Kind object associated with the given kind_name, if a Kind with the given name is present in the structure.
ValueError if the kind_name is not present.
get_kind_names
Return a list of kind names (in the same order of the self.kinds property, but return the names rather than Kind objects)
This is NOT necessarily a list of chemical symbols! Use get_symbols_set for chemical symbols
a list of strings.
get_pymatgen
Get pymatgen object. Returns Structure for structures with periodic boundary conditions (in three dimensions) and Molecule otherwise. :param add_spin: True to add the spins to the pymatgen structure. Default is False (no spin added).
get_pymatgen_molecule
Get the pymatgen Molecule object.
get_pymatgen_structure
Get the pymatgen Structure object. :param add_spin: True to add the spins to the pymatgen structure. Default is False (no spin added).
a pymatgen Structure object corresponding to this StructureData object.
ValueError – if periodic boundary conditions do not hold in at least one dimension of real space.
get_site_kindnames
Return a list with length equal to the number of sites of this structure, where each element of the list is the kind name of the corresponding site.
This is NOT necessarily a list of chemical symbols! Use [ self.get_kind(s.kind_name).get_symbols_string() for s in self.sites] for chemical symbols
a list of strings
get_symbols_set
Return a set containing the names of all elements involved in this structure (i.e., it joins the lists of symbols for each kind k in the structure).
a set of strings of element names.
has_vacancies
Return whether the structure has vacancies.
a boolean, True if at least one kind has a vacancy
is_alloy
Return whether the structure contains any alloy kinds.
a boolean, True if at least one kind is an alloy
kinds
Returns a list of kinds.
Get the periodic boundary conditions.
reset_cell
Reset the cell of a structure not yet stored to a new value.
new_cell – list specifying the cell vectors
ModificationNotAllowed: if object is already stored
reset_sites_positions
Replace all the Site positions attached to the Structure
new_positions – list of (3D) positions for every site.
conserve_particle – if True, allows the possibility of removing a site; currently not implemented.
aiida.common.ModificationNotAllowed – if object is stored already
ValueError – if positions are invalid
it is assumed that new_positions is given in the same order as the positions being substituted, i.e. the kind of each site will not be checked.
Load the structure from an ASE object.
Set the cell.
set_cell_angles
set_cell_lengths
set_pbc
Set the periodic boundary conditions.
set_pymatgen
Load the structure from a pymatgen object.
set_pymatgen_molecule
Load the structure from a pymatgen Molecule object.
margin – the margin to be added in all directions of the bounding box of the molecule.
set_pymatgen_structure
Load the structure from a pymatgen Structure object.
periodic boundary conditions are set to True in all three directions.
Requires the pymatgen module (version >= 3.3.5, usage of earlier versions may cause errors).
ValueError – if there are partial occupancies together with spins.
sites
Returns a list of sites.
TrajectoryData
Stores a trajectory (a sequence of crystal structures with timestamps, and possibly with velocities).
_internal_validate
Internal function to validate the type and shape of the arrays. See the documentation of set_trajectory() for a description of the valid shape and type of the parameters.
_parse_xyz_pos
Load positions from a XYZ file.
The steps and symbols must be set manually before calling this import function as a consistency measure. Even though the symbols and steps could be extracted from the XYZ file, the data present in the XYZ file may or may not be correct and the same logic would have to be present in the XYZ-velocities function. It was therefore decided not to implement it at all but require it to be set explicitly.
from aiida.orm.nodes.data.array.trajectory import TrajectoryData

t = TrajectoryData()
# get sites and number of timesteps
t.set_array('steps', arange(ntimesteps))
t.set_array('symbols', array([site.kind for site in s.sites]))
t.importfile('some-calc/AIIDA-PROJECT-pos-1.xyz', 'xyz_pos')
_parse_xyz_vel
Load velocities from a XYZ file.
The steps and symbols must be set manually before calling this import function as a consistency measure. See also comment for _parse_xyz_pos()
Write the given trajectory to a string of format CIF.
Write the given trajectory to a string of format XSF (for XCrySDen).
Verify that the required arrays are present and that their type and dimension are correct.
get_cells
Return the array of cells, if it has already been set.
KeyError – if the trajectory has not been set yet.
Creates aiida.orm.nodes.data.cif.CifData
get_index_from_stepid
Given a value for the stepid (i.e., a value among those of the steps array), return the array index of that stepid, which can be used in other methods such as get_step_data() or get_step_structure().
New in version 0.7: Renamed from get_step_index
Note that this function returns the first index found (i.e. if multiple steps are present with the same value, only the index of the first one is returned).
ValueError – if no step with the given value is found.
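The first-index behavior described above can be sketched with plain Python (an illustrative analogue, not the AiiDA implementation):

```python
# Sketch of the lookup contract: return the index of the FIRST occurrence
# of stepid in the steps array, and raise ValueError if it is absent.
def index_from_stepid(steps, stepid):
    for index, value in enumerate(steps):
        if value == stepid:
            return index  # only the first matching index is returned
    raise ValueError('no step with stepid {} found'.format(stepid))

steps = [10, 20, 20, 30]  # hypothetical stepid array with a duplicate
print(index_from_stepid(steps, 20))  # 1
```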
get_positions
Return the array of positions, if it has already been set.
get_step_data
Return a tuple with all information concerning the stepid with given index (0 is the first step, 1 the second step and so on). If you know only the step value, use the get_index_from_stepid() method to get the corresponding index.
If no velocities were specified, None is returned as the last element.
A tuple in the format (stepid, time, cell, symbols, positions, velocities), where stepid is an integer, time is a float, cell is a 3x3 matrix, symbols is an array of length n, positions is an n x 3 array, and velocities is either None or an n x 3 array.
index – The index of the step that you want to retrieve, from 0 to self.numsteps - 1.
IndexError – if you require an index beyond the limits.
KeyError – if you did not store the trajectory yet.
get_step_structure
Return an AiiDA aiida.orm.nodes.data.structure.StructureData node (not stored yet!) with the coordinates of the given step, identified by its index. If you know only the step value, use the get_index_from_stepid() method to get the corresponding index.
The periodic boundary conditions are always set to True.
New in version 0.7: Renamed from step_to_structure
index – The index of the step that you want to retrieve, from 0 to self.numsteps- 1.
self.numsteps- 1
custom_kinds – (Optional) If passed must be a list of aiida.orm.nodes.data.structure.Kind objects. There must be one kind object for each different string in the symbols array, with kind.name set to this string. If this parameter is omitted, the automatic kind generation of AiiDA aiida.orm.nodes.data.structure.StructureData nodes is used, meaning that the strings in the symbols array must be valid chemical symbols.
aiida.orm.nodes.data.structure.Kind
kind.name
get_stepids
Return the array of steps, if it has already been set.
New in version 0.7: Renamed from get_steps
get_times
Return the array of times (in ps), if it has already been set.
get_velocities
Return the array of velocities, if it has already been set.
This function, differently from all other get_* functions, will not raise an exception if the velocities are not set, but rather return None (both if no trajectory has been set yet, and if the trajectory was set but no velocities were specified).
get_*
numsites
Return the number of stored sites, or zero if nothing has been stored yet.
numsteps
Return the number of stored steps, or zero if nothing has been stored yet.
set_structurelist
Create trajectory from the list of aiida.orm.nodes.data.structure.StructureData instances.
structurelist – a list of aiida.orm.nodes.data.structure.StructureData instances.
ValueError – if symbol lists of supplied structures are different
set_trajectory
Store the whole trajectory, after checking that types and dimensions are correct.
Parameters stepids, cells and velocities are optional. If nothing is passed for cells or velocities, nothing will be stored. If no input is given for stepids, a consecutive sequence [0,1,2,…,len(positions)-1] is assumed.
stepids
cells
velocities
symbols – string list with dimension n, where n is the number of atoms (i.e., sites) in the structure. The same list is used for each step. Normally, the string should be a valid chemical symbol, but actually any unique string works and can be used as the name of the atomic kind (see also the get_step_structure() method).
positions – float array with dimension \(s \times n \times 3\), where s is the length of the stepids array and n is the length of the symbols array. Units are angstrom. In particular, positions[i,j,k] is the k-th component of the j-th atom (or site) in the structure at the time step with index i (identified by step number step[i] and with timestamp times[i]).
s
positions[i,j,k]
k
j
i
step[i]
times[i]
stepids – integer array with dimension s, where s is the number of steps. Typically represents an internal counter within the code. For instance, if you want to store a trajectory with one step every 10, starting from step 65, the array will be [65,75,85,...]. No checks are performed for duplicate elements or ordering, but the array should nevertheless be sorted in ascending order and contain no duplicates. (If not specified, stepids will be set to numpy.arange(s) by default.) It is internally stored as an array named ‘steps’.
[65,75,85,...]
numpy.arange(s)
cells – if specified float array with dimension \(s \times 3 \times 3\), where s is the length of the stepids array. Units are angstrom. In particular, cells[i,j,k] is the k-th component of the j-th cell vector at the time step with index i (identified by step number stepid[i] and with timestamp times[i]).
cells[i,j,k]
stepid[i]
times – if specified, float array with dimension s, where s is the length of the stepids array. Contains the timestamp of each step in picoseconds (ps).
velocities – if specified, must be a float array with the same dimensions as the positions array. The array contains the velocities of the atoms.
positions
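The expected array shapes can be illustrated with plain numpy. The array names below (symbols, positions, stepids, cells, times, velocities) are exactly the parameters documented above; passing them to an actual TrajectoryData node is only sketched in the note afterwards:

```python
import numpy as np

s, n = 3, 2             # s steps, n atoms (sites)
symbols = ['H', 'O']    # length n, one string per site
positions = np.zeros((s, n, 3))        # angstrom
stepids = np.arange(s)                 # default if not passed
cells = np.tile(np.eye(3), (s, 1, 1))  # s x 3 x 3, angstrom
times = 0.01 * stepids                 # ps
velocities = np.zeros((s, n, 3))

# dimension relations that set_trajectory checks before storing
assert positions.shape == (s, n, 3)
assert cells.shape == (s, 3, 3)
assert velocities.shape == positions.shape
assert len(times) == len(stepids) == s
```

With an AiiDA profile loaded, these arrays would be passed as keyword arguments to `TrajectoryData().set_trajectory(...)`; check the actual signature of your AiiDA version before relying on this sketch.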
show_mpl_heatmap
Show a heatmap of the trajectory with matplotlib.
show_mpl_pos
Show the positions as a function of time, separately for the X, Y and Z coordinates.
stepsize (int) – The stepsize for the trajectory, set higher than 1 to reduce number of points
mintime (int) – Time to start from
maxtime (int) – Maximum time
elements (list) – A list of atomic symbols that should be displayed. If not specified, all atoms are displayed.
indices (list) – A list of indices of the atoms to be displayed. If not specified, all atoms of the correct species are displayed.
dont_block (bool) – If True, interpreter is not blocked when figure is displayed.
Return the array of symbols, if it has already been set.
UpfData
Data sub class to represent a pseudopotential single file in UPF format.
Create UpfData instance from pseudopotential file.
file – filepath or filelike object of the UPF potential file to store. Hint: Pass io.BytesIO(b"my string") to construct directly from a string.
source – Dictionary with information on source of the potential (see “.source” property).
Return the UPF pseudopotential in JSON format.
_prepare_upf
Return UPF content.
Validate the UPF potential file stored for this node.
element
Return the element of the UPF pseudopotential.
the element
Return a list of all UpfData that match the given md5 hash.
Assumes the hash of stored UpfData nodes is stored in the md5 attribute.
md5 – the file hash
list of existing UpfData nodes that have the same md5 hash
Get the UpfData with the same md5 as the given file, or create it if it does not yet exist.
filepath – an absolute filepath on disk
use_first – if False (default), raise an exception if more than one potential is found. If it is True, instead, use the first available pseudopotential.
store_upf – boolean; if False, the UpfData node, if created, will not be stored.
tuple of UpfData and boolean indicating whether it was created.
get_upf_family_names
Get the list of all upf family names to which the pseudo belongs.
get_upf_group
Return the UPF family group with the given label.
group_label – the family group label
the Group with the given label, if it exists
get_upf_groups
Return all names of groups of type UpfFamily, possibly with some filters.
filter_elements – A string or a list of strings. If present, return only the groups that contain one UPF for every element present in the list. The default is None, meaning that all families are returned.
user – if None (default), return the groups for all users. If defined, it should be either a User instance or the user email.
list of Group entities of type UPF.
md5sum
Return the md5 checksum of the UPF pseudopotential file.
the md5 checksum
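The md5 checksum stored in the md5 attribute is a standard digest of the raw file content; a minimal sketch of computing one (a hypothetical helper, not the AiiDA implementation) is:

```python
import hashlib

def md5_of_bytes(content: bytes) -> str:
    """Return the hex md5 digest of raw file content, as stored
    in the md5 attribute of UpfData nodes."""
    return hashlib.md5(content).hexdigest()

digest = md5_of_bytes(b'<UPF version="2.0.1">...</UPF>')
assert len(digest) == 32  # md5 hex digests are always 32 characters
```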
Store the file in the repository and parse it to set the element and md5 attributes.
file – filepath or filelike object of the UPF potential file to store. Hint: Pass io.BytesIO(b"my string") to construct the file directly from a string.
Store the node, reparsing the file so that the md5 and the element are correctly reset.
UpfFamily
Group that represents a pseudo potential family containing UpfData nodes.
AiiDA User
The collection of users stored in a backend.
UNDEFINED
_default_user
get_default
Get the current default user
The default user
Get the existing user with a given email address or create an unstored one
kwargs – The properties of the user to get or create
The corresponding user object
aiida.common.exceptions.MultipleObjectsError, aiida.common.exceptions.NotExistent
aiida.common.exceptions.MultipleObjectsError
aiida.common.exceptions.NotExistent
reset
Reset internal caches (default user).
REQUIRED_FIELDS
Create a new User.
email
first_name
get_full_name
Return the user full name
the user full name
Every node property contains:
display_name: display name of the property
help text: short help text of the property
is_foreign_key: whether the property is a foreign key to another node type
type: type of the property, e.g. str, dict, int
schema of the user
get_short_name
Return the user short name (typically, this returns the email)
The short name
institution
last_name
normalize_email
Normalize the address by lowercasing the domain part of the email address (taken from Django).
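That normalization (lowercasing only the domain part) can be sketched as follows; this is a hypothetical helper mirroring the behaviour described, not AiiDA's or Django's code:

```python
def normalize_email(email: str) -> str:
    """Lowercase only the domain part of an email address,
    leaving the local part untouched."""
    try:
        local, domain = email.rsplit('@', 1)
    except ValueError:
        return email  # no '@' present: leave the string unchanged
    return local + '@' + domain.lower()

print(normalize_email('John.Doe@EXAMPLE.ORG'))  # → John.Doe@example.org
```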
WorkChainNode
Bases: aiida.orm.nodes.process.workflow.workflow.WorkflowNode
aiida.orm.nodes.process.workflow.workflow.WorkflowNode
ORM class for all nodes representing the execution of a WorkChain.
STEPPER_STATE_INFO_KEY
set_stepper_state_info
Set the stepper state info
stepper_state_info
Return the stepper state info
string representation of the stepper state info
WorkFunctionNode
Bases: aiida.orm.utils.mixins.FunctionCalculationMixin, aiida.orm.nodes.process.workflow.workflow.WorkflowNode
ORM class for all nodes representing the execution of a workfunction.
A workfunction cannot create Data, so if we receive an outgoing RETURN link to an unstored Data node, that means the user created a Data node within our function body and is trying to return it. This use case should be reserved for @calcfunctions, as they can have CREATE links.
WorkflowNode
Base class for all nodes representing the execution of a workflow process.
Return an instance of NodeLinksManager to manage incoming INPUT_WORK links
The returned Manager allows you to easily explore the nodes connected to this node via an incoming INPUT_WORK link. The incoming nodes are reachable by their link labels which are attributes of the manager.
NodeLinksManager
Return an instance of NodeLinksManager to manage outgoing RETURN links
The returned Manager allows you to easily explore the nodes connected to this node via an outgoing RETURN link. The outgoing nodes are reachable by their link labels which are attributes of the manager.
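The attribute-style access that the manager provides (link labels becoming attributes) can be illustrated with a toy stand-in; this is not the real NodeLinksManager, and the labels used below are made up:

```python
class ToyLinksManager:
    """Toy illustration of NodeLinksManager-style access: link labels
    are exposed as attributes of the manager."""

    def __init__(self, links):
        self._links = dict(links)  # label -> linked node

    def __getattr__(self, label):
        # called only for attributes not found normally
        try:
            return self._links[label]
        except KeyError:
            raise AttributeError(label)

    def __dir__(self):
        # make link labels discoverable via tab completion
        return list(self._links)

outputs = ToyLinksManager({'result': 42, 'remote_folder': 'folder-node'})
print(outputs.result)  # → 42
```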
A workflow cannot ‘create’ Data, so if we receive an outgoing link to an unstored Data node, that means the user created a Data node within our function body and tries to attach it as an output. This is strictly forbidden and can cause provenance to be lost.
XyData
A subclass designed to handle arrays that have an “XY” relationship to each other. That is, there is one array, the X array, and there are several Y arrays, which can be considered functions of X.
_arrayandname_validator
Validates that the array is a numpy.ndarray and that the name is of type str. Raises InputValidationError if this is not the case.
get_x
Tries to retrieve the x array and x name; raises a NotExistent exception if no x array has been set yet. Returns x_name (the name set for the x array), x_array (the x array set earlier) and x_units (the x units set earlier).
get_y
Tries to retrieve the y arrays and the y names; raises a NotExistent exception if they have not been set yet or cannot be retrieved. Returns y_names (a list of strings naming the y arrays), y_arrays (the list of y arrays) and y_units (a list of strings giving the units of the y arrays).
set_x
Sets the array and the name for the x values.
x_array – A numpy.ndarray, containing only floats
x_name – a string for the x array name
x_units – the units of x
set_y
Set array(s) for the y part of the dataset. Also checks that the x array has already been set and that the shape of each y array agrees with that of the x array. Parameters: y_arrays – a list of numpy.ndarray; y_names – a list of strings giving the names of the y arrays; y_units – a list of strings giving the units of the y arrays.
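The shape agreement that set_y enforces can be checked with plain numpy; this is a sketch of the validation logic, not AiiDA's code, and check_xy is a hypothetical helper:

```python
import numpy as np

def check_xy(x_array, y_arrays):
    """Verify each y array has the same shape as the x array,
    as XyData.set_y requires before storing."""
    for y in y_arrays:
        if y.shape != x_array.shape:
            raise ValueError('y array shape does not match x array shape')

x = np.linspace(0.0, 1.0, 5)
ys = [np.sin(x), np.cos(x)]   # several Y arrays, functions of X
check_xy(x, ys)               # passes: all shapes agree
```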
load_code
Load a Code instance by one of its identifiers: pk, uuid or label
If the type of the identifier is unknown simply pass it without a keyword and the loader will attempt to automatically infer the type.
identifier – pk (integer), uuid (string) or label (string) of a Code
pk – pk of a Code
uuid – uuid of a Code, or the beginning of the uuid
label – label of a Code
sub_classes – an optional tuple of orm classes to narrow the queryset. Each class should be a strict sub class of the ORM class of the given entity loader.
query_with_dashes (bool) – allow to query for a uuid with dashes
the Code instance
ValueError – if none or more than one of the identifiers are supplied
TypeError – if the provided identifier has the wrong type
aiida.common.NotExistent – if no matching Code is found
aiida.common.MultipleObjectsError – if more than one Code was found
load_computer
Load a Computer instance by one of its identifiers: pk, uuid or label
identifier – pk (integer), uuid (string) or label (string) of a Computer
pk – pk of a Computer
uuid – uuid of a Computer, or the beginning of the uuid
label – label of a Computer
the Computer instance
aiida.common.NotExistent – if no matching Computer is found
aiida.common.MultipleObjectsError – if more than one Computer was found
load_group
Load a Group instance by one of its identifiers: pk, uuid or label
identifier – pk (integer), uuid (string) or label (string) of a Group
pk – pk of a Group
uuid – uuid of a Group, or the beginning of the uuid
label – label of a Group
the Group instance
aiida.common.NotExistent – if no matching Group is found
aiida.common.MultipleObjectsError – if more than one Group was found
load_node
Load a node by one of its identifiers: pk or uuid. If the type of the identifier is unknown simply pass it without a keyword and the loader will attempt to infer the type
identifier – pk (integer) or uuid (string)
pk – pk of a node
uuid – uuid of a node, or the beginning of the uuid
label – label of a Node
the node instance
aiida.common.NotExistent – if no matching Node is found
aiida.common.MultipleObjectsError – if more than one Node was found
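The identifier inference mentioned above (“pass it without a keyword and the loader will attempt to infer the type”) can be illustrated with a toy classifier. This is a rough hypothetical heuristic, not AiiDA's OrmEntityLoader: a real label made of only hex characters would be misclassified here.

```python
def infer_identifier_type(identifier):
    """Toy sketch of identifier inference: integers are treated as pk,
    uuid-looking strings as (partial) uuid, other strings as label."""
    if isinstance(identifier, int):
        return 'pk'
    if isinstance(identifier, str):
        # a uuid fragment contains only hex digits and dashes
        if identifier and all(c in '0123456789abcdef-' for c in identifier.lower()):
            return 'uuid'
        return 'label'
    raise TypeError('identifier must be an int or a str')

print(infer_identifier_type(123))                  # → pk
print(infer_identifier_type('ab12'))               # → uuid
print(infer_identifier_type('my-code@localhost'))  # → label
```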
to_aiida_type
Turns basic Python types (str, int, float, bool) into the corresponding AiiDA types.
Module for the AuthInfo ORM class.
aiida.orm.authinfos.
Module to manage the autogrouping functionality by verdi run.
verdi run
aiida.orm.autogroup.
Autogroup
Class to create a new AutoGroup instance that will, while active, automatically contain all nodes being stored.
Autogrouping is checked by the Node.store() method: if CURRENT_AUTOGROUP is not None, the method Autogroup.is_to_be_grouped is called to decide whether to put the node currently being stored in the current AutoGroup instance.
The exclude/include lists are lists of strings like: aiida.data:int, aiida.calculation:quantumespresso.pw, aiida.data:array.%, … i.e.: a string identifying the base class, followed by a colon and the path to the class as accepted by CalculationFactory/DataFactory. Each string can contain one or more wildcard characters %; in this case a like comparison is performed with the QueryBuilder. Note that in this case you have to remember that _ means “any character” in the QueryBuilder, and you need to escape it if you mean a literal underscore.
aiida.data:int
aiida.calculation:quantumespresso.pw
aiida.data:array.%
%
like
_
Only one of the two (between exclude and include) can be set. If none of the two is set, everything is included.
Initialize with defaults.
_matches
Check if ‘string’ matches the ‘filter_string’ (used for include and exclude filters).
If ‘filter_string’ does not contain any % sign, perform an exact match. Otherwise, match with a SQL-like query, where % means any character sequence and _ means a single character (these characters can be escaped with a backslash).
string – the string to match.
filter_string – the filter string.
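The SQL-like matching described here can be sketched by translating the filter into a regular expression. This is a hypothetical re-implementation of the semantics just described, not AiiDA's code:

```python
import re

def sql_like_match(string, filter_string):
    """SQL LIKE semantics: % matches any sequence, _ matches one character,
    both escapable with a backslash; exact match if no % is present."""
    if '%' not in filter_string:
        return string == filter_string
    pattern = ''
    i = 0
    while i < len(filter_string):
        char = filter_string[i]
        if char == '\\' and i + 1 < len(filter_string):
            pattern += re.escape(filter_string[i + 1])  # escaped literal
            i += 2
            continue
        if char == '%':
            pattern += '.*'
        elif char == '_':
            pattern += '.'
        else:
            pattern += re.escape(char)
        i += 1
    return re.fullmatch(pattern, string) is not None

print(sql_like_match('aiida.data:array.trajectory', 'aiida.data:array.%'))  # → True
```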
clear_group_cache
Clear the cache of the group name.
This is mostly used by tests when they reset the database.
get_exclude
Return the list of classes to exclude from autogrouping.
Returns None if no exclusion list has been set.
get_group_label_prefix
Get the prefix of the label of the group. If no group label prefix was set, it will set a default one by itself.
get_group_name
Get the label of the group. If no group label was set, it will set a default one by itself.
Deprecated since version 1.2.0: Will be removed in v2.0.0, use get_group_label_prefix() instead.
get_group_label_prefix()
get_include
Return the list of classes to include in the autogrouping.
Returns None if no inclusion list has been set.
get_or_create_group
Return the current AutoGroup, or create one if None has been set yet.
This function implements a somewhat complex logic that is however needed to make sure that, even if verdi run is called multiple times at the same time, e.g. in a bash for loop, there is never the risk that two verdi run Unix processes try to create the same group with the same label, ending up in a crash of the code (see PR #3650).
Here, instead, we make sure that if this concurrency issue happens, one of the two will get an IntegrityError from the DB, and then recover by trying to create a group with a different label (with a numeric suffix appended), until it manages to create it.
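The recovery strategy can be sketched generically. Everything here is a toy model: create_group is a stand-in that fails on duplicate labels (the role IntegrityError plays in the real code), and the label scheme is made up:

```python
existing_labels = {'verdi-run-1', 'verdi-run-2'}  # toy 'database' state

def create_group(label):
    """Stand-in for a DB insert that raises on a duplicate label."""
    if label in existing_labels:
        raise ValueError(f'duplicate label: {label}')
    existing_labels.add(label)
    return label

def get_or_create_with_suffix(prefix):
    """Append an increasing numeric suffix until creation succeeds,
    recovering from concurrent creation of the same label."""
    counter = 1
    while True:
        try:
            return create_group(f'{prefix}-{counter}')
        except ValueError:
            counter += 1  # another process took this label: retry

print(get_or_create_with_suffix('verdi-run'))  # → verdi-run-3
```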
is_to_be_grouped
Return whether the given node has to be included in the autogroup according to the include/exclude lists.
True if node is to be included in the autogroup
set_exclude
Set the list of classes to exclude in the autogrouping.
exclude – a list of valid entry point strings (may contain ‘%’ to be matched using SQL’s LIKE pattern-matching logic), or None to specify no exclude list.
LIKE
set_group_label_prefix
Set the label of the group to be created
set_group_name
Set the name of the group.
Deprecated since version 1.2.0: Will be removed in v2.0.0, use set_group_label_prefix() instead.
set_group_label_prefix()
set_include
Set the list of classes to include in the autogrouping.
include – a list of valid entry point strings (may contain ‘%’ to be matched using SQL’s LIKE pattern-matching logic), or None to specify no include list.
Validate the list of strings passed to set_include and set_exclude.
Comment objects and functions
aiida.orm.comments.
Module for Computer entities
aiida.orm.computers.