aiida.orm.implementation package#
Module containing the backend entity abstracts for storage backends.
Submodules#
Module for the backend implementation of the AuthInfo ORM class.
- class aiida.orm.implementation.authinfos.BackendAuthInfo(backend: StorageBackend, **kwargs: Any)[source]#
Bases:
BackendEntity
Backend implementation for the AuthInfo ORM class.
An authinfo is a set of credentials that can be used to authenticate to a remote computer.
- METADATA_WORKDIR = 'workdir'#
- __abstractmethods__ = frozenset({'computer', 'enabled', 'get_auth_params', 'get_metadata', 'id', 'is_stored', 'set_auth_params', 'set_metadata', 'store', 'user'})#
- __module__ = 'aiida.orm.implementation.authinfos'#
- _abc_impl = <_abc._abc_data object>#
- abstract property computer: BackendComputer#
Return the computer associated with this instance.
- abstract property enabled: bool#
Return whether this instance is enabled.
- Returns:
boolean, True if enabled, False otherwise
- abstract get_auth_params() Dict[str, Any] [source]#
Return the dictionary of authentication parameters
- Returns:
a dictionary with authentication parameters
- abstract get_metadata() Dict[str, Any] [source]#
Return the dictionary of metadata
- Returns:
a dictionary with metadata
- abstract set_auth_params(auth_params: Dict[str, Any]) None [source]#
Set the dictionary of authentication parameters
- Parameters:
auth_params – a dictionary with authentication parameters
- abstract set_metadata(metadata: Dict[str, Any]) None [source]#
Set the dictionary of metadata
- Parameters:
metadata – a dictionary with metadata
- abstract property user: BackendUser#
Return the user associated with this instance.
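To make the interface concrete, here is a hypothetical in-memory sketch of the `BackendAuthInfo` contract (the class name `InMemoryAuthInfo` and the sample credential keys are illustrative, not part of AiiDA; real implementations live in the storage backends):

```python
from typing import Any, Dict

# Hypothetical in-memory sketch of the BackendAuthInfo interface; a real
# backend would persist these dictionaries in its database.
class InMemoryAuthInfo:
    METADATA_WORKDIR = 'workdir'  # metadata key for the remote working directory

    def __init__(self, computer: Any, user: Any) -> None:
        self.computer = computer
        self.user = user
        self.enabled = True
        self._auth_params: Dict[str, Any] = {}
        self._metadata: Dict[str, Any] = {}

    def get_auth_params(self) -> Dict[str, Any]:
        return dict(self._auth_params)

    def set_auth_params(self, auth_params: Dict[str, Any]) -> None:
        self._auth_params = dict(auth_params)

    def get_metadata(self) -> Dict[str, Any]:
        return dict(self._metadata)

    def set_metadata(self, metadata: Dict[str, Any]) -> None:
        self._metadata = dict(metadata)

authinfo = InMemoryAuthInfo(computer='localhost', user='alice')
authinfo.set_auth_params({'username': 'alice', 'port': 22})
authinfo.set_metadata({InMemoryAuthInfo.METADATA_WORKDIR: '/scratch/alice'})
```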
- class aiida.orm.implementation.authinfos.BackendAuthInfoCollection(backend: StorageBackend)[source]#
Bases: BackendCollection[BackendAuthInfo]
The collection of backend AuthInfo entries.
- ENTITY_CLASS#
alias of
BackendAuthInfo
- __annotations__ = {}#
- __module__ = 'aiida.orm.implementation.authinfos'#
- __orig_bases__ = (aiida.orm.implementation.entities.BackendCollection[aiida.orm.implementation.authinfos.BackendAuthInfo],)#
- __parameters__ = ()#
Module for comment backend classes.
- class aiida.orm.implementation.comments.BackendComment(backend: StorageBackend, **kwargs: Any)[source]#
Bases:
BackendEntity
Backend implementation for the Comment ORM class.
A comment is a text that can be attached to a node.
- __abstractmethods__ = frozenset({'content', 'ctime', 'id', 'is_stored', 'mtime', 'node', 'set_content', 'set_mtime', 'set_user', 'store', 'user', 'uuid'})#
- __module__ = 'aiida.orm.implementation.comments'#
- _abc_impl = <_abc._abc_data object>#
- abstract property node: BackendNode#
Return the comment’s node.
- abstract set_user(value: BackendUser) None [source]#
Set the comment owner.
- abstract property user: BackendUser#
Return the comment owner.
- class aiida.orm.implementation.comments.BackendCommentCollection(backend: StorageBackend)[source]#
Bases: BackendCollection[BackendComment]
The collection of Comment entries.
- ENTITY_CLASS#
alias of
BackendComment
- __annotations__ = {}#
- __module__ = 'aiida.orm.implementation.comments'#
- __orig_bases__ = (aiida.orm.implementation.entities.BackendCollection[aiida.orm.implementation.comments.BackendComment],)#
- __parameters__ = ()#
- abstract create(node: BackendNode, user: BackendUser, content: str | None = None, **kwargs)[source]#
Create a Comment for a given node and user
- Parameters:
node – a Node instance
user – a User instance
content – the comment content
- Returns:
a Comment object associated to the given node and user
- abstract delete(comment_id: int) None [source]#
Remove a Comment from the collection with the given id
- Parameters:
comment_id – the id of the comment to delete
- Raises:
TypeError – if comment_id is not an int
NotExistent – if a Comment with ID comment_id is not found
- abstract delete_all() None [source]#
Delete all Comment entries.
- Raises:
IntegrityError – if all Comments could not be deleted
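The create/delete contract above can be sketched with a hypothetical in-memory collection (class and exception names here are illustrative stand-ins; the real `NotExistent` lives in `aiida.common`):

```python
from datetime import datetime, timezone
from typing import Dict, Optional

class NotExistent(Exception):
    """Stand-in for aiida.common.NotExistent."""

# Hypothetical in-memory sketch of the BackendCommentCollection contract.
class InMemoryCommentCollection:
    def __init__(self) -> None:
        self._comments: Dict[int, dict] = {}
        self._next_id = 1

    def create(self, node: str, user: str, content: Optional[str] = None) -> dict:
        comment = {
            'id': self._next_id,
            'node': node,
            'user': user,
            'content': content or '',
            'ctime': datetime.now(timezone.utc),
        }
        self._comments[self._next_id] = comment
        self._next_id += 1
        return comment

    def delete(self, comment_id: int) -> None:
        if not isinstance(comment_id, int):
            raise TypeError(f'comment_id must be an int, got {type(comment_id)}')
        if comment_id not in self._comments:
            raise NotExistent(f'Comment<{comment_id}> does not exist')
        del self._comments[comment_id]

    def delete_all(self) -> None:
        self._comments.clear()

collection = InMemoryCommentCollection()
comment = collection.create(node='node-1', user='alice', content='converged nicely')
collection.delete(comment['id'])
```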
Backend specific computer objects and methods
- class aiida.orm.implementation.computers.BackendComputer(backend: StorageBackend, **kwargs: Any)[source]#
Bases:
BackendEntity
Backend implementation for the Computer ORM class.
A computer is a resource that can be used to run calculations: It has an associated transport_type, which points to a plugin for connecting to the resource and passing data, and a scheduler_type, which points to a plugin for scheduling calculations.
- __abstractmethods__ = frozenset({'copy', 'description', 'get_metadata', 'get_scheduler_type', 'get_transport_type', 'hostname', 'id', 'is_stored', 'label', 'set_description', 'set_hostname', 'set_label', 'set_metadata', 'set_scheduler_type', 'set_transport_type', 'store', 'uuid'})#
- __module__ = 'aiida.orm.implementation.computers'#
- _abc_impl = <_abc._abc_data object>#
- _logger = <Logger aiida.orm.implementation.computers (WARNING)>#
- abstract copy() BackendComputer [source]#
Create an un-stored clone of an already stored Computer.
- Raises:
InvalidOperation – if the computer is not stored.
- abstract property hostname: str#
Return the hostname of the computer (used to associate the connected device).
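The `copy()` semantics above (clone only a stored computer; the clone starts out unstored) can be illustrated with a hypothetical minimal class; `InMemoryComputer` is illustrative and `InvalidOperation` stands in for `aiida.common.InvalidOperation`:

```python
import copy

class InvalidOperation(Exception):
    """Stand-in for aiida.common.InvalidOperation."""

# Hypothetical sketch of the BackendComputer.copy() contract.
class InMemoryComputer:
    def __init__(self, label: str, hostname: str) -> None:
        self.label = label
        self.hostname = hostname
        self.is_stored = False

    def store(self) -> 'InMemoryComputer':
        self.is_stored = True
        return self

    def copy(self) -> 'InMemoryComputer':
        if not self.is_stored:
            raise InvalidOperation('you can only copy a stored computer')
        clone = copy.deepcopy(self)
        clone.is_stored = False  # the clone must be stored explicitly
        return clone

original = InMemoryComputer(label='hpc', hostname='hpc.example.com').store()
clone = original.copy()
```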
- class aiida.orm.implementation.computers.BackendComputerCollection(backend: StorageBackend)[source]#
Bases: BackendCollection[BackendComputer]
The collection of Computer entries.
- ENTITY_CLASS#
alias of
BackendComputer
- __annotations__ = {}#
- __module__ = 'aiida.orm.implementation.computers'#
- __orig_bases__ = (aiida.orm.implementation.entities.BackendCollection[aiida.orm.implementation.computers.BackendComputer],)#
- __parameters__ = ()#
Classes and methods for backend non-specific entities
- class aiida.orm.implementation.entities.BackendCollection(backend: StorageBackend)[source]#
Bases: Generic[EntityType]
Container class that represents a collection of entries of a particular backend entity.
- __annotations__ = {'ENTITY_CLASS': 'ClassVar[Type[EntityType]]'}#
- __dict__ = mappingproxy({'__module__': 'aiida.orm.implementation.entities', '__annotations__': {'ENTITY_CLASS': 'ClassVar[Type[EntityType]]'}, '__doc__': 'Container class that represents a collection of entries of a particular backend entity.', '__init__': <function BackendCollection.__init__>, 'backend': <property object>, 'create': <function BackendCollection.create>, '__orig_bases__': (typing.Generic[~EntityType],), '__dict__': <attribute '__dict__' of 'BackendCollection' objects>, '__weakref__': <attribute '__weakref__' of 'BackendCollection' objects>, '__parameters__': (~EntityType,)})#
- __init__(backend: StorageBackend)[source]#
- Parameters:
backend – the backend this collection belongs to
- __module__ = 'aiida.orm.implementation.entities'#
- __orig_bases__ = (typing.Generic[~EntityType],)#
- __parameters__ = (~EntityType,)#
- __weakref__#
list of weak references to the object
- property backend: StorageBackend#
Return the backend.
- class aiida.orm.implementation.entities.BackendEntity(backend: StorageBackend, **kwargs: Any)[source]#
Bases:
ABC
A first-class entity in the backend
- __abstractmethods__ = frozenset({'id', 'is_stored', 'store'})#
- __dict__ = mappingproxy({'__module__': 'aiida.orm.implementation.entities', '__doc__': 'An first-class entity in the backend', '__init__': <function BackendEntity.__init__>, 'backend': <property object>, 'id': <property object>, 'pk': <property object>, 'store': <function BackendEntity.store>, 'is_stored': <property object>, '__dict__': <attribute '__dict__' of 'BackendEntity' objects>, '__weakref__': <attribute '__weakref__' of 'BackendEntity' objects>, '__abstractmethods__': frozenset({'is_stored', 'store', 'id'}), '_abc_impl': <_abc._abc_data object>, '__annotations__': {}})#
- __init__(backend: StorageBackend, **kwargs: Any)[source]#
- __module__ = 'aiida.orm.implementation.entities'#
- __weakref__#
list of weak references to the object
- _abc_impl = <_abc._abc_data object>#
- property backend: StorageBackend#
Return the backend this entity belongs to
- Returns:
the backend instance
- abstract property id: int#
Return the id for this entity.
This is unique only amongst entities of this type for a particular backend.
- Returns:
the entity id
- abstract property is_stored: bool#
Return whether the entity is stored.
- Returns:
True if stored, False otherwise
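A minimal hypothetical pairing of the two abstractions above: a concrete entity implements `id`, `is_stored` and `store`, and its collection declares it via `ENTITY_CLASS` (all class names here are illustrative, not AiiDA classes):

```python
from abc import ABC, abstractmethod
from typing import ClassVar, Generic, Optional, Type, TypeVar

# Hypothetical sketch of the BackendEntity / BackendCollection pairing.
class Entity(ABC):
    @property
    @abstractmethod
    def id(self) -> int: ...

    @property
    @abstractmethod
    def is_stored(self) -> bool: ...

    @abstractmethod
    def store(self) -> 'Entity': ...

class DictEntity(Entity):
    _counter = 0  # ids are unique only amongst entities of this type

    def __init__(self) -> None:
        self._id: Optional[int] = None

    @property
    def id(self) -> int:
        if self._id is None:
            raise RuntimeError('entity is not stored')
        return self._id

    @property
    def is_stored(self) -> bool:
        return self._id is not None

    def store(self) -> 'DictEntity':
        if not self.is_stored:
            DictEntity._counter += 1
            self._id = DictEntity._counter
        return self

EntityType = TypeVar('EntityType', bound=Entity)

class Collection(Generic[EntityType]):
    ENTITY_CLASS: ClassVar[Type[Entity]]

    def create(self, **kwargs) -> Entity:
        return self.ENTITY_CLASS(**kwargs)

class DictEntityCollection(Collection[DictEntity]):
    ENTITY_CLASS = DictEntity

entity = DictEntityCollection().create().store()
```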
- class aiida.orm.implementation.entities.BackendEntityExtrasMixin[source]#
Bases:
ABC
Mixin class that adds all abstract methods for the extras column to a backend entity
- __abstractmethods__ = frozenset({'clear_extras', 'delete_extra', 'extras', 'extras_items', 'extras_keys', 'get_extra', 'reset_extras', 'set_extra'})#
- __dict__ = mappingproxy({'__module__': 'aiida.orm.implementation.entities', '__doc__': 'Mixin class that adds all abstract methods for the extras column to a backend entity', 'extras': <property object>, 'get_extra': <function BackendEntityExtrasMixin.get_extra>, 'get_extra_many': <function BackendEntityExtrasMixin.get_extra_many>, 'set_extra': <function BackendEntityExtrasMixin.set_extra>, 'set_extra_many': <function BackendEntityExtrasMixin.set_extra_many>, 'reset_extras': <function BackendEntityExtrasMixin.reset_extras>, 'delete_extra': <function BackendEntityExtrasMixin.delete_extra>, 'delete_extra_many': <function BackendEntityExtrasMixin.delete_extra_many>, 'clear_extras': <function BackendEntityExtrasMixin.clear_extras>, 'extras_items': <function BackendEntityExtrasMixin.extras_items>, 'extras_keys': <function BackendEntityExtrasMixin.extras_keys>, '__dict__': <attribute '__dict__' of 'BackendEntityExtrasMixin' objects>, '__weakref__': <attribute '__weakref__' of 'BackendEntityExtrasMixin' objects>, '__abstractmethods__': frozenset({'extras_keys', 'extras', 'extras_items', 'get_extra', 'delete_extra', 'clear_extras', 'reset_extras', 'set_extra'}), '_abc_impl': <_abc._abc_data object>, '__annotations__': {}})#
- __module__ = 'aiida.orm.implementation.entities'#
- __weakref__#
list of weak references to the object
- _abc_impl = <_abc._abc_data object>#
- abstract delete_extra(key: str) None [source]#
Delete an extra.
- Parameters:
key – name of the extra
- Raises:
AttributeError – if the extra does not exist
- delete_extra_many(keys: Iterable[str]) None [source]#
Delete multiple extras.
- Parameters:
keys – names of the extras to delete
- Raises:
AttributeError – if at least one of the extras does not exist
- abstract property extras: Dict[str, Any]#
Return the complete extras dictionary.
Warning
While the entity is unstored, this will return references of the extras on the database model, meaning that changes on the returned values (if they are mutable themselves, e.g. a list or dictionary) will automatically be reflected on the database model as well. As soon as the entity is stored, the returned extras will be a deep copy and mutations of the database extras will have to go through the appropriate set methods. Therefore, once stored, retrieving a deep copy can be a heavy operation. If you only need the keys or some values, use the iterators extras_keys and extras_items, or the getters get_extra and get_extra_many instead.
- Returns:
the extras as a dictionary
- abstract extras_items() Iterable[Tuple[str, Any]] [source]#
Return an iterator over the extras key/value pairs.
- abstract get_extra(key: str) Any [source]#
Return the value of an extra.
Warning
While the entity is unstored, this will return a reference of the extra on the database model, meaning that changes on the returned value (if they are mutable themselves, e.g. a list or dictionary) will automatically be reflected on the database model as well. As soon as the entity is stored, the returned extra will be a deep copy and mutations of the database extras will have to go through the appropriate set methods.
- Parameters:
key – name of the extra
- Returns:
the value of the extra
- Raises:
AttributeError – if the extra does not exist
- get_extra_many(keys: Iterable[str]) List[Any] [source]#
Return the values of multiple extras.
Warning
While the entity is unstored, this will return references of the extras on the database model, meaning that changes on the returned values (if they are mutable themselves, e.g. a list or dictionary) will automatically be reflected on the database model as well. As soon as the entity is stored, the returned extras will be a deep copy and mutations of the database extras will have to go through the appropriate set methods. Therefore, once stored, retrieving a deep copy can be a heavy operation. If you only need the keys or some values, use the iterators extras_keys and extras_items, or the getters get_extra and get_extra_many instead.
- Parameters:
keys – a list of extra names
- Returns:
a list of extra values
- Raises:
AttributeError – if at least one extra does not exist
- abstract reset_extras(extras: Dict[str, Any]) None [source]#
Reset the extras.
Note
This will completely clear any existing extras and replace them with the new dictionary.
- Parameters:
extras – a dictionary with the extras to set
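The reference-versus-deep-copy warning above can be demonstrated with a hypothetical sketch (the `ExtrasHolder` class is illustrative; real backends implement this on their database models):

```python
import copy
from typing import Any, Dict

# Hypothetical sketch of the extras semantics: before storing, `extras` hands
# out the live dictionary; after storing, it hands out a deep copy, so
# mutations must go through set_extra.
class ExtrasHolder:
    def __init__(self) -> None:
        self._extras: Dict[str, Any] = {}
        self.is_stored = False

    @property
    def extras(self) -> Dict[str, Any]:
        return copy.deepcopy(self._extras) if self.is_stored else self._extras

    def get_extra(self, key: str) -> Any:
        try:
            value = self._extras[key]
        except KeyError:
            raise AttributeError(f'extra `{key}` does not exist') from None
        return copy.deepcopy(value) if self.is_stored else value

    def set_extra(self, key: str, value: Any) -> None:
        self._extras[key] = value

    def reset_extras(self, extras: Dict[str, Any]) -> None:
        # Completely clears any existing extras before setting the new ones.
        self._extras = dict(extras)

holder = ExtrasHolder()
holder.set_extra('tags', ['scf'])
holder.extras['tags'].append('relax')  # unstored: mutates the live model
holder.is_stored = True
holder.extras['tags'].append('bands')  # stored: mutates only a deep copy
```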
Backend group module
- class aiida.orm.implementation.groups.BackendGroup(backend: StorageBackend, **kwargs: Any)[source]#
Bases: BackendEntity, BackendEntityExtrasMixin
Backend implementation for the Group ORM class.
A group is a collection of nodes.
- __abstractmethods__ = frozenset({'clear', 'clear_extras', 'count', 'delete_extra', 'description', 'extras', 'extras_items', 'extras_keys', 'get_extra', 'id', 'is_stored', 'label', 'nodes', 'reset_extras', 'set_extra', 'store', 'type_string', 'user', 'uuid'})#
- __module__ = 'aiida.orm.implementation.groups'#
- _abc_impl = <_abc._abc_data object>#
- add_nodes(nodes: Sequence[BackendNode], **kwargs)[source]#
Add a set of nodes to the group.
- Note:
all the nodes and the group itself have to be stored.
- Parameters:
nodes – a list of BackendNode instances to be added to this group
- abstract count() int [source]#
Return the number of entities in this group.
- Returns:
integer number of entities contained within the group
- abstract property nodes: NodeIterator#
Return a generator/iterator that iterates over all nodes, returning the respective AiiDA subclasses of Node, and that also allows asking for the number of nodes in the group using len().
- remove_nodes(nodes: Sequence[BackendNode]) None [source]#
Remove a set of nodes from the group.
- Note:
all the nodes and the group itself have to be stored.
- Parameters:
nodes – a list of BackendNode instances to be removed from this group
- abstract property user: BackendUser#
Return a backend user object, representing the user associated to this group.
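The add/remove contract above (both the group and every node must be stored) can be sketched with hypothetical in-memory classes; `ModificationNotAllowed` stands in for `aiida.common.ModificationNotAllowed`:

```python
from typing import List, Sequence

class ModificationNotAllowed(Exception):
    """Stand-in for aiida.common.ModificationNotAllowed."""

class InMemoryNode:
    def __init__(self) -> None:
        self.is_stored = False

    def store(self) -> 'InMemoryNode':
        self.is_stored = True
        return self

# Hypothetical sketch of the BackendGroup add/remove contract.
class InMemoryGroup:
    def __init__(self, label: str) -> None:
        self.label = label
        self.is_stored = False
        self._nodes: List[InMemoryNode] = []

    def store(self) -> 'InMemoryGroup':
        self.is_stored = True
        return self

    def count(self) -> int:
        return len(self._nodes)

    def add_nodes(self, nodes: Sequence[InMemoryNode]) -> None:
        if not self.is_stored or any(not node.is_stored for node in nodes):
            raise ModificationNotAllowed('both the group and the nodes must be stored')
        self._nodes.extend(node for node in nodes if node not in self._nodes)

    def remove_nodes(self, nodes: Sequence[InMemoryNode]) -> None:
        if not self.is_stored or any(not node.is_stored for node in nodes):
            raise ModificationNotAllowed('both the group and the nodes must be stored')
        self._nodes = [node for node in self._nodes if node not in nodes]

group = InMemoryGroup(label='results').store()
nodes = [InMemoryNode().store() for _ in range(3)]
group.add_nodes(nodes)
group.remove_nodes(nodes[:1])
```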
- class aiida.orm.implementation.groups.BackendGroupCollection(backend: StorageBackend)[source]#
Bases: BackendCollection[BackendGroup]
The collection of Group entries.
- ENTITY_CLASS#
alias of
BackendGroup
- __annotations__ = {}#
- __module__ = 'aiida.orm.implementation.groups'#
- __orig_bases__ = (aiida.orm.implementation.entities.BackendCollection[aiida.orm.implementation.groups.BackendGroup],)#
- __parameters__ = ()#
- class aiida.orm.implementation.groups.NodeIterator(*args, **kwargs)[source]#
Bases:
Protocol
Protocol for iterating over nodes in a group
- __abstractmethods__ = frozenset({})#
- __dict__ = mappingproxy({'__module__': 'aiida.orm.implementation.groups', '__doc__': 'Protocol for iterating over nodes in a group', '__iter__': <function NodeIterator.__iter__>, '__next__': <function NodeIterator.__next__>, '__getitem__': <function NodeIterator.__getitem__>, '__len__': <function NodeIterator.__len__>, '__dict__': <attribute '__dict__' of 'NodeIterator' objects>, '__weakref__': <attribute '__weakref__' of 'NodeIterator' objects>, '__parameters__': (), '_is_protocol': True, '__subclasshook__': <function Protocol.__init_subclass__.<locals>._proto_hook>, '__init__': <function _no_init_or_replace_init>, '__abstractmethods__': frozenset(), '_abc_impl': <_abc._abc_data object>, '__annotations__': {}})#
- __getitem__(value: int | slice) BackendNode | List[BackendNode] [source]#
Index node(s) from the group.
- __init__(*args, **kwargs)#
- __iter__() NodeIterator [source]#
Return an iterator over the nodes in the group.
- __module__ = 'aiida.orm.implementation.groups'#
- __next__() BackendNode [source]#
Return the next node in the group.
- __parameters__ = ()#
- __subclasshook__()#
Abstract classes can override this to customize issubclass().
This is invoked early on by abc.ABCMeta.__subclasscheck__(). It should return True, False or NotImplemented. If it returns NotImplemented, the normal algorithm is used. Otherwise, it overrides the normal algorithm (and the outcome is cached).
- __weakref__#
list of weak references to the object
- _abc_impl = <_abc._abc_data object>#
- _is_protocol = True#
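A hypothetical list-backed class satisfying the NodeIterator protocol (plain strings stand in for BackendNode instances here; the class name is illustrative):

```python
from typing import List, Union

# Hypothetical implementation of the NodeIterator protocol: iteration,
# indexing/slicing and len() over the nodes of a group.
class ListNodeIterator:
    def __init__(self, nodes: List[str]) -> None:
        self._nodes = nodes
        self._position = 0

    def __iter__(self) -> 'ListNodeIterator':
        self._position = 0
        return self

    def __next__(self) -> str:
        if self._position >= len(self._nodes):
            raise StopIteration
        node = self._nodes[self._position]
        self._position += 1
        return node

    def __getitem__(self, value: Union[int, slice]) -> Union[str, List[str]]:
        # Supports both single indices and slices, as the protocol requires.
        return self._nodes[value]

    def __len__(self) -> int:
        return len(self._nodes)

iterator = ListNodeIterator(['node-a', 'node-b', 'node-c'])
```

Because NodeIterator is a runtime Protocol, any object with these four methods conforms; no explicit subclassing is needed.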
Module for log backend classes.
- class aiida.orm.implementation.logs.BackendLog(backend: StorageBackend, **kwargs: Any)[source]#
Bases:
BackendEntity
Backend implementation for the Log ORM class.
A log is a record of logging call for a particular node.
- __abstractmethods__ = frozenset({'dbnode_id', 'id', 'is_stored', 'levelname', 'loggername', 'message', 'metadata', 'store', 'time', 'uuid'})#
- __module__ = 'aiida.orm.implementation.logs'#
- _abc_impl = <_abc._abc_data object>#
- class aiida.orm.implementation.logs.BackendLogCollection(backend: StorageBackend)[source]#
Bases: BackendCollection[BackendLog]
The collection of Log entries.
- ENTITY_CLASS#
alias of
BackendLog
- __annotations__ = {}#
- __module__ = 'aiida.orm.implementation.logs'#
- __orig_bases__ = (aiida.orm.implementation.entities.BackendCollection[aiida.orm.implementation.logs.BackendLog],)#
- __parameters__ = ()#
- abstract delete(log_id: int) None [source]#
Remove a Log entry from the collection with the given id
- Parameters:
log_id – id of the Log to delete
- Raises:
TypeError – if log_id is not an int
NotExistent – if a Log with ID log_id is not found
- abstract delete_all() None [source]#
Delete all Log entries.
- Raises:
IntegrityError – if all Logs could not be deleted
Abstract BackendNode and BackendNodeCollection implementation.
- class aiida.orm.implementation.nodes.BackendNode(backend: StorageBackend, **kwargs: Any)[source]#
Bases: BackendEntity, BackendEntityExtrasMixin
Backend implementation for the Node ORM class.
A node stores data input or output from a computation.
- __abstractmethods__ = frozenset({'add_incoming', 'attributes', 'attributes_items', 'attributes_keys', 'clean_values', 'clear_attributes', 'clear_extras', 'clone', 'computer', 'ctime', 'delete_attribute', 'delete_extra', 'description', 'extras', 'extras_items', 'extras_keys', 'get_attribute', 'get_extra', 'id', 'is_stored', 'label', 'mtime', 'node_type', 'process_type', 'repository_metadata', 'reset_attributes', 'reset_extras', 'set_attribute', 'set_extra', 'store', 'user', 'uuid'})#
- __module__ = 'aiida.orm.implementation.nodes'#
- _abc_impl = <_abc._abc_data object>#
- abstract add_incoming(source: BackendNode, link_type, link_label)[source]#
Add a link of the given type from a given node to ourself.
- Parameters:
source – the node from which the link is coming
link_type – the link type
link_label – the link label
- Returns:
True if the proposed link is allowed, False otherwise
- Raises:
TypeError – if source is not a Node instance or link_type is not a LinkType enum
ValueError – if the proposed link is invalid
aiida.common.ModificationNotAllowed – if either source or target node is not stored
- abstract property attributes: Dict[str, Any]#
Return the complete attributes dictionary.
Warning
While the entity is unstored, this will return references of the attributes on the database model, meaning that changes on the returned values (if they are mutable themselves, e.g. a list or dictionary) will automatically be reflected on the database model as well. As soon as the entity is stored, the returned attributes will be a deep copy and mutations of the database attributes will have to go through the appropriate set methods. Therefore, once stored, retrieving a deep copy can be a heavy operation. If you only need the keys or some values, use the iterators attributes_keys and attributes_items, or the getters get_attribute and get_attribute_many instead.
- Returns:
the attributes as a dictionary
- abstract attributes_items() Iterable[Tuple[str, Any]] [source]#
Return an iterator over the attributes.
- Returns:
an iterator with attribute key value pairs
- abstract attributes_keys() Iterable[str] [source]#
Return an iterator over the attribute keys.
- Returns:
an iterator with attribute keys
- abstract clean_values()[source]#
Clean the values of the node fields.
This method is called before storing the node. The purpose of this method is to convert data to a type which can be serialized and deserialized for storage in the DB without its value changing.
- abstract clone() BackendNodeType [source]#
Return an unstored clone of ourselves.
- Returns:
an unstored BackendNode with the exact same attributes and extras as self
- abstract property computer: BackendComputer | None#
Return the computer of this node.
- Returns:
the computer or None
- abstract delete_attribute(key: str) None [source]#
Delete an attribute.
- Parameters:
key – name of the attribute
- Raises:
AttributeError – if the attribute does not exist
- delete_attribute_many(keys: Iterable[str]) None [source]#
Delete multiple attributes.
- Parameters:
keys – names of the attributes to delete
- Raises:
AttributeError – if at least one of the attributes does not exist
- abstract get_attribute(key: str) Any [source]#
Return the value of an attribute.
Warning
While the entity is unstored, this will return a reference of the attribute on the database model, meaning that changes on the returned value (if they are mutable themselves, e.g. a list or dictionary) will automatically be reflected on the database model as well. As soon as the entity is stored, the returned attribute will be a deep copy and mutations of the database attributes will have to go through the appropriate set methods.
- Parameters:
key – name of the attribute
- Returns:
the value of the attribute
- Raises:
AttributeError – if the attribute does not exist
- get_attribute_many(keys: Iterable[str]) List[Any] [source]#
Return the values of multiple attributes.
Warning
While the entity is unstored, this will return references of the attributes on the database model, meaning that changes on the returned values (if they are mutable themselves, e.g. a list or dictionary) will automatically be reflected on the database model as well. As soon as the entity is stored, the returned attributes will be a deep copy and mutations of the database attributes will have to go through the appropriate set methods. Therefore, once stored, retrieving a deep copy can be a heavy operation. If you only need the keys or some values, use the iterators attributes_keys and attributes_items, or the getters get_attribute and get_attribute_many instead.
- Parameters:
keys – a list of attribute names
- Returns:
a list of attribute values
- Raises:
AttributeError – if at least one attribute does not exist
- abstract property repository_metadata: Dict[str, Any]#
Return the node repository metadata.
- Returns:
the repository metadata
- abstract reset_attributes(attributes: Dict[str, Any]) None [source]#
Reset the attributes.
Note
This will completely clear any existing attributes and replace them with the new dictionary.
- Parameters:
attributes – a dictionary with the attributes to set
- abstract set_attribute(key: str, value: Any) None [source]#
Set an attribute to the given value.
- Parameters:
key – name of the attribute
value – value of the attribute
- set_attribute_many(attributes: Dict[str, Any]) None [source]#
Set multiple attributes.
Note
This will override any existing attributes that are present in the new dictionary.
- Parameters:
attributes – a dictionary with the attributes to set
- abstract store(links: Sequence[LinkTriple] | None = None, clean: bool = True) BackendNodeType [source]#
Store the node in the database.
- Parameters:
links – optional links to add before storing
clean – boolean, if True, will clean the attributes and extras before attempting to store
- abstract property user: BackendUser#
Return the user of this node.
- Returns:
the user
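The attribute bulk operations above can be sketched with a hypothetical dict-backed store (the `AttributeStore` class is illustrative; in AiiDA the `*_many` variants delegate to the abstract single-key methods):

```python
from typing import Any, Dict, Iterable, List

# Hypothetical sketch of the attribute operations on BackendNode.
class AttributeStore:
    def __init__(self) -> None:
        self._attributes: Dict[str, Any] = {}

    def set_attribute(self, key: str, value: Any) -> None:
        self._attributes[key] = value

    def set_attribute_many(self, attributes: Dict[str, Any]) -> None:
        # Overrides existing keys present in the new dictionary, keeps the rest.
        for key, value in attributes.items():
            self.set_attribute(key, value)

    def get_attribute(self, key: str) -> Any:
        try:
            return self._attributes[key]
        except KeyError:
            raise AttributeError(f'attribute `{key}` does not exist') from None

    def get_attribute_many(self, keys: Iterable[str]) -> List[Any]:
        return [self.get_attribute(key) for key in keys]

    def reset_attributes(self, attributes: Dict[str, Any]) -> None:
        # Completely clears existing attributes before setting the new ones.
        self._attributes = dict(attributes)

store = AttributeStore()
store.set_attribute_many({'energy': -42.1, 'units': 'eV'})
store.set_attribute_many({'units': 'Ry'})  # overrides only 'units'
store.reset_attributes({'energy': -42.1})  # clears everything else
```

Note the asymmetry: `set_attribute_many` merges into the existing attributes, while `reset_attributes` replaces them wholesale.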
- class aiida.orm.implementation.nodes.BackendNodeCollection(backend: StorageBackend)[source]#
Bases: BackendCollection[BackendNode]
The collection of BackendNode entries.
- ENTITY_CLASS#
alias of
BackendNode
- __annotations__ = {}#
- __module__ = 'aiida.orm.implementation.nodes'#
- __orig_bases__ = (aiida.orm.implementation.entities.BackendCollection[aiida.orm.implementation.nodes.BackendNode],)#
- __parameters__ = ()#
Abstract QueryBuilder definition.
- class aiida.orm.implementation.querybuilder.BackendQueryBuilder(backend: StorageBackend)[source]#
Bases:
ABC
Backend query builder interface
- __abstractmethods__ = frozenset({'count', 'first', 'get_creation_statistics', 'iterall', 'iterdict'})#
- __dict__ = mappingproxy({'__module__': 'aiida.orm.implementation.querybuilder', '__doc__': 'Backend query builder interface', '__init__': <function BackendQueryBuilder.__init__>, 'count': <function BackendQueryBuilder.count>, 'first': <function BackendQueryBuilder.first>, 'iterall': <function BackendQueryBuilder.iterall>, 'iterdict': <function BackendQueryBuilder.iterdict>, 'as_sql': <function BackendQueryBuilder.as_sql>, 'analyze_query': <function BackendQueryBuilder.analyze_query>, 'get_creation_statistics': <function BackendQueryBuilder.get_creation_statistics>, '__dict__': <attribute '__dict__' of 'BackendQueryBuilder' objects>, '__weakref__': <attribute '__weakref__' of 'BackendQueryBuilder' objects>, '__abstractmethods__': frozenset({'first', 'get_creation_statistics', 'iterall', 'count', 'iterdict'}), '_abc_impl': <_abc._abc_data object>, '__annotations__': {}})#
- __init__(backend: StorageBackend)[source]#
- Parameters:
backend – the backend
- __module__ = 'aiida.orm.implementation.querybuilder'#
- __weakref__#
list of weak references to the object
- _abc_impl = <_abc._abc_data object>#
- analyze_query(data: QueryDictType, execute: bool = True, verbose: bool = False) str [source]#
Return the query plan, i.e. a list of SQL statements that will be executed.
See: https://www.postgresql.org/docs/11/sql-explain.html
- Parameters:
execute – carry out the command and show actual run times and other statistics.
verbose – display additional information regarding the plan.
- as_sql(data: QueryDictType, inline: bool = False) str [source]#
Convert the query to an SQL string representation.
Warning
This method should be used for debugging purposes only, since normally sqlalchemy will handle this process internally.
- Parameters:
inline – inline bound parameters (this is normally handled by the Python DBAPI).
- abstract count(data: QueryDictType) int [source]#
Return the number of results of the query
- abstract first(data: QueryDictType) List[Any] | None [source]#
Execute the query, returning only the first result.
- Returns:
One row of aiida results
- abstract get_creation_statistics(user_pk: int | None = None) Dict[str, Any] [source]#
Return a dictionary with the statistics of node creation, summarized by day.
- Note:
Days when no nodes were created are not present in the returned ctime_by_day dictionary.
- Parameters:
user_pk – If None (default), return statistics for all users. If user pk is specified, return only the statistics for the given user.
- Returns:
a dictionary as follows:
{
    "total": TOTAL_NUM_OF_NODES,
    "types": {TYPESTRING1: count, TYPESTRING2: count, ...},
    "ctime_by_day": {'YYYY-MM-DD': count, ...}
}
where in ctime_by_day the key is a string in the format ‘YYYY-MM-DD’ and the value is an integer with the number of nodes created that day.
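The shape of that dictionary can be reproduced with a hypothetical helper that summarizes (ctime, node_type) records; the function name and sample node types are illustrative:

```python
from collections import Counter
from typing import Any, Dict, List, Tuple

# Hypothetical helper showing the shape of the get_creation_statistics result;
# days on which no nodes were created simply do not appear in ctime_by_day.
def summarize_creation(records: List[Tuple[str, str]]) -> Dict[str, Any]:
    types = Counter(node_type for _, node_type in records)
    ctime_by_day = Counter(ctime[:10] for ctime, _ in records)  # 'YYYY-MM-DD'
    return {
        'total': len(records),
        'types': dict(types),
        'ctime_by_day': dict(ctime_by_day),
    }

stats = summarize_creation([
    ('2024-05-01T09:30:00', 'data.core.int.Int.'),
    ('2024-05-01T10:15:00', 'data.core.int.Int.'),
    ('2024-05-03T08:00:00', 'process.calculation.calcjob.CalcJobNode.'),
])
```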
- class aiida.orm.implementation.querybuilder.PathItemType[source]#
Bases:
TypedDict
An item on the query path
- __annotations__ = {'edge_tag': <class 'str'>, 'entity_type': typing.Union[str, typing.List[str]], 'joining_keyword': <class 'str'>, 'joining_value': <class 'str'>, 'orm_base': typing.Literal['node', 'group', 'authinfo', 'comment', 'computer', 'log', 'user'], 'outerjoin': <class 'bool'>, 'tag': <class 'str'>}#
- __dict__ = mappingproxy({'__module__': 'aiida.orm.implementation.querybuilder', '__annotations__': {'entity_type': typing.Union[str, typing.List[str]], 'orm_base': typing.Literal['node', 'group', 'authinfo', 'comment', 'computer', 'log', 'user'], 'tag': <class 'str'>, 'joining_keyword': <class 'str'>, 'joining_value': <class 'str'>, 'outerjoin': <class 'bool'>, 'edge_tag': <class 'str'>}, '__doc__': 'An item on the query path', '__orig_bases__': (<function TypedDict>,), '__dict__': <attribute '__dict__' of 'PathItemType' objects>, '__weakref__': <attribute '__weakref__' of 'PathItemType' objects>, '__required_keys__': frozenset({'orm_base', 'joining_value', 'tag', 'edge_tag', 'outerjoin', 'entity_type', 'joining_keyword'}), '__optional_keys__': frozenset(), '__total__': True})#
- __module__ = 'aiida.orm.implementation.querybuilder'#
- __optional_keys__ = frozenset({})#
- __orig_bases__ = (<function TypedDict>,)#
- __required_keys__ = frozenset({'edge_tag', 'entity_type', 'joining_keyword', 'joining_value', 'orm_base', 'outerjoin', 'tag'})#
- __total__ = True#
- __weakref__#
list of weak references to the object
- class aiida.orm.implementation.querybuilder.QueryDictType[source]#
Bases:
TypedDict
A JSON serialisable representation of a QueryBuilder instance
- __annotations__ = {'distinct': <class 'bool'>, 'filters': typing.Dict[str, typing.Dict[str, typing.Union[typing.Dict[str, typing.List[typing.Dict[str, typing.Any]]], typing.Dict[str, typing.Any]]]], 'limit': typing.Optional[int], 'offset': typing.Optional[int], 'order_by': typing.List[typing.Dict[str, typing.List[typing.Dict[str, typing.Dict[str, str]]]]], 'path': typing.List[aiida.orm.implementation.querybuilder.PathItemType], 'project': typing.Dict[str, typing.List[typing.Dict[str, typing.Dict[str, typing.Any]]]], 'project_map': typing.Dict[str, typing.Dict[str, str]]}#
- __dict__ = mappingproxy({'__module__': 'aiida.orm.implementation.querybuilder', '__annotations__': {'path': typing.List[aiida.orm.implementation.querybuilder.PathItemType], 'filters': typing.Dict[str, typing.Dict[str, typing.Union[typing.Dict[str, typing.List[typing.Dict[str, typing.Any]]], typing.Dict[str, typing.Any]]]], 'project': typing.Dict[str, typing.List[typing.Dict[str, typing.Dict[str, typing.Any]]]], 'project_map': typing.Dict[str, typing.Dict[str, str]], 'order_by': typing.List[typing.Dict[str, typing.List[typing.Dict[str, typing.Dict[str, str]]]]], 'offset': typing.Optional[int], 'limit': typing.Optional[int], 'distinct': <class 'bool'>}, '__doc__': 'A JSON serialisable representation of a ``QueryBuilder`` instance', '__orig_bases__': (<function TypedDict>,), '__dict__': <attribute '__dict__' of 'QueryDictType' objects>, '__weakref__': <attribute '__weakref__' of 'QueryDictType' objects>, '__required_keys__': frozenset({'filters', 'order_by', 'path', 'project', 'distinct', 'limit', 'offset', 'project_map'}), '__optional_keys__': frozenset(), '__total__': True})#
- __module__ = 'aiida.orm.implementation.querybuilder'#
- __optional_keys__ = frozenset({})#
- __orig_bases__ = (<function TypedDict>,)#
- __required_keys__ = frozenset({'distinct', 'filters', 'limit', 'offset', 'order_by', 'path', 'project', 'project_map'})#
- __total__ = True#
- __weakref__#
list of weak references to the object
- path: List[PathItemType]#
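A hypothetical example of a dictionary with the QueryDictType shape, with every required key present; the concrete filter and projection values are illustrative only, not a guaranteed-valid AiiDA query:

```python
# Hypothetical QueryDictType-shaped dictionary: one path item describing a
# query over computers, projecting their labels. Field values are illustrative.
query_dict = {
    'path': [
        {
            'entity_type': 'computer',
            'orm_base': 'computer',
            'tag': 'computer',
            'joining_keyword': '',
            'joining_value': '',
            'outerjoin': False,
            'edge_tag': '',
        }
    ],
    'filters': {'computer': {}},
    'project': {'computer': [{'label': {}}]},
    'project_map': {},
    'order_by': [],
    'offset': None,
    'limit': None,
    'distinct': False,
}
```

Since both TypedDicts are total, serialisers can rely on every key being present (`__required_keys__` above lists them all, and `__optional_keys__` is empty).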
Generic backend related objects
- class aiida.orm.implementation.storage_backend.StorageBackend(profile: Profile)[source]#
Bases:
ABC
Abstraction for a backend to read/write persistent data for a profile’s provenance graph.
AiiDA splits data storage into two sources:
Searchable data, which is stored in the database and can be queried using the QueryBuilder
Non-searchable (binary) data, which is stored in the repository and can be loaded using the RepositoryBackend
The two sources are inter-linked by the
Node.base.repository.metadata
. Once stored, the leaf values of this dictionary must be valid pointers to object keys in the repository.
For a completely new storage, the
initialise
method should be called first. This will automatically initialise the repository and the database with the current schema. The class methods version_profile and migrate can be called on existing storage at any supported schema version, but an instance of this class should be created only for the latest schema version.
- __abstractmethods__ = frozenset({'__init__', '__str__', '_clear', 'authinfos', 'bulk_insert', 'bulk_update', 'close', 'comments', 'computers', 'delete_nodes_and_connections', 'get_global_variable', 'get_repository', 'groups', 'in_transaction', 'initialise', 'is_closed', 'logs', 'maintain', 'migrate', 'nodes', 'query', 'set_global_variable', 'transaction', 'users', 'version_head', 'version_profile'})#
- __dict__ = mappingproxy({'__module__': 'aiida.orm.implementation.storage_backend', '__doc__': "Abstraction for a backend to read/write persistent data for a profile's provenance graph.\n\n AiiDA splits data storage into two sources:\n\n - Searchable data, which is stored in the database and can be queried using the QueryBuilder\n - Non-searchable (binary) data, which is stored in the repository and can be loaded using the RepositoryBackend\n\n The two sources are inter-linked by the ``Node.base.repository.metadata``.\n Once stored, the leaf values of this dictionary must be valid pointers to object keys in the repository.\n\n For a completely new storage, the ``initialise`` method should be called first. This will automatically initialise\n the repository and the database with the current schema. The class methods,`version_profile` and `migrate` should be\n able to be called for existing storage, at any supported schema version. But an instance of this class should be\n created only for the latest schema version.\n ", 'read_only': False, 'version_head': <classmethod(<function StorageBackend.version_head>)>, 'version_profile': <classmethod(<function StorageBackend.version_profile>)>, 'initialise': <classmethod(<function StorageBackend.initialise>)>, 'migrate': <classmethod(<function StorageBackend.migrate>)>, '__init__': <function StorageBackend.__init__>, '__str__': <function StorageBackend.__str__>, 'profile': <property object>, 'autogroup': <property object>, 'version': <function StorageBackend.version>, 'close': <function StorageBackend.close>, 'is_closed': <property object>, '_clear': <function StorageBackend._clear>, 'reset_default_user': <function StorageBackend.reset_default_user>, 'authinfos': <property object>, 'comments': <property object>, 'computers': <property object>, 'groups': <property object>, 'logs': <property object>, 'nodes': <property object>, 'users': <property object>, 'default_user': <property object>, 'query': <function 
StorageBackend.query>, 'transaction': <function StorageBackend.transaction>, 'in_transaction': <property object>, 'bulk_insert': <function StorageBackend.bulk_insert>, 'bulk_update': <function StorageBackend.bulk_update>, 'delete': <function StorageBackend.delete>, 'delete_nodes_and_connections': <function StorageBackend.delete_nodes_and_connections>, 'get_repository': <function StorageBackend.get_repository>, 'set_global_variable': <function StorageBackend.set_global_variable>, 'get_global_variable': <function StorageBackend.get_global_variable>, 'maintain': <function StorageBackend.maintain>, '_backup': <function StorageBackend._backup>, '_write_backup_config': <function StorageBackend._write_backup_config>, '_validate_or_init_backup_folder': <function StorageBackend._validate_or_init_backup_folder>, 'backup': <function StorageBackend.backup>, 'get_info': <function StorageBackend.get_info>, 'get_orm_entities': <function StorageBackend.get_orm_entities>, '__dict__': <attribute '__dict__' of 'StorageBackend' objects>, '__weakref__': <attribute '__weakref__' of 'StorageBackend' objects>, '__abstractmethods__': frozenset({'bulk_insert', 'transaction', 'nodes', 'query', 'get_global_variable', 'version_profile', 'logs', 'authinfos', '_clear', '__str__', 'users', 'in_transaction', 'is_closed', 'set_global_variable', 'delete_nodes_and_connections', '__init__', 'comments', 'migrate', 'initialise', 'groups', 'computers', 'bulk_update', 'maintain', 'close', 'version_head', 'get_repository'}), '_abc_impl': <_abc._abc_data object>, '__annotations__': {'_default_user': "Optional['User']"}})#
- abstract __init__(profile: Profile) None [source]#
Initialize the backend, for this profile.
- Raises:
~aiida.common.exceptions.UnreachableStorage if the storage cannot be accessed
- Raises:
~aiida.common.exceptions.IncompatibleStorageSchema if the profile’s storage schema is not at the latest version (and thus should be migrated)
- Raises:
~aiida.common.exceptions.CorruptStorage if the storage is internally inconsistent
- __module__ = 'aiida.orm.implementation.storage_backend'#
- __weakref__#
list of weak references to the object
- _abc_impl = <_abc._abc_data object>#
- abstract _clear() None [source]#
Clear the storage, removing all data.
Warning
This is a destructive operation, and should only be used for testing purposes.
- abstract property authinfos: BackendAuthInfoCollection#
Return the collection of authorisation information objects
- property autogroup: AutogroupManager#
Return the autogroup manager for this backend.
- backup(dest: str, keep: int | None = None)[source]#
Create a backup of the storage contents.
- Parameters:
dest – The path to the destination folder.
keep – The number of backups to keep in the target destination, if the backend supports it.
- Raises:
ValueError – If the input parameters are invalid.
StorageBackupError – If an error occurred during the backup procedure.
NotImplementedError – If the storage backend doesn’t implement a backup procedure.
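The `keep` parameter can be sketched with a hypothetical pruning helper (this is not AiiDA's implementation; it only illustrates the documented semantics, assuming backup folder names sort chronologically):

```python
import os
import shutil
import tempfile
from typing import Optional


def prune_backups(dest: str, keep: Optional[int]) -> None:
    """Keep only the `keep` most recent backup folders in `dest` (sketch)."""
    if keep is None:
        return  # keep everything
    if keep < 0:
        raise ValueError('keep must be non-negative')
    # Backup folder names are assumed to sort chronologically (ISO timestamps).
    entries = sorted(
        e for e in os.listdir(dest) if os.path.isdir(os.path.join(dest, e))
    )
    doomed = entries[:-keep] if keep else entries
    for name in doomed:
        shutil.rmtree(os.path.join(dest, name))
```

A real backend would call something like this after each successful backup, so the destination never holds more than `keep` copies.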
- abstract bulk_insert(entity_type: EntityTypes, rows: List[dict], allow_defaults: bool = False) List[int] [source]#
Insert a list of entities into the database, directly into a backend transaction.
- Parameters:
entity_type – The type of the entity
rows – A list of dictionaries, containing all fields of the backend model except the id field (a.k.a. the primary key), which will be generated dynamically
allow_defaults – If
False
, assert that each row contains all fields (except primary key(s)), otherwise, allow default values for missing fields.
- Raises:
IntegrityError
if the keys in a row are not a subset of the columns in the table
- Returns:
The list of generated primary keys for the entities
- abstract bulk_update(entity_type: EntityTypes, rows: List[dict]) None [source]#
Update a list of entities in the database, directly with a backend transaction.
- Parameters:
entity_type – The type of the entity
rows – A list of dictionaries, containing fields of the backend model to update, and the id field (a.k.a. the primary key)
- Raises:
IntegrityError
if the keys in a row are not a subset of the columns in the table
- abstract property comments: BackendCommentCollection#
Return the collection of comments
- abstract property computers: BackendComputerCollection#
Return the collection of computers
- property default_user: 'User' | None#
Return the default user for the profile, if it has been created.
This is cached, since it is a frequently used operation when creating other entities.
- abstract delete_nodes_and_connections(pks_to_delete: Sequence[int])[source]#
Delete all nodes corresponding to pks in the input and any links to/from them.
This method is intended to be used within a transaction context.
- Parameters:
pks_to_delete – a sequence of node pks to delete
- Raises:
AssertionError
if a transaction is not active
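The effect on the graph can be sketched with plain Python containers (a toy stand-in for the database tables; links are `(source, target)` pk pairs):

```python
from typing import List, Sequence, Set, Tuple

# Toy provenance graph: node pks plus directed (source, target) links.
nodes: Set[int] = {1, 2, 3, 4}
links: List[Tuple[int, int]] = [(1, 2), (2, 3), (3, 4)]


def delete_nodes_and_connections(pks_to_delete: Sequence[int]) -> None:
    """Remove the given nodes and every link touching them (sketch)."""
    doomed = set(pks_to_delete)
    nodes.difference_update(doomed)
    links[:] = [(s, t) for s, t in links if s not in doomed and t not in doomed]
```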
- abstract get_global_variable(key: str) None | str | int | float [source]#
Return a global variable from the storage.
- Parameters:
key – the key of the setting
- Raises:
KeyError if the setting does not exist
- get_info(detailed: bool = False) dict [source]#
Return general information on the storage.
- Parameters:
detailed – flag to request more detailed information about the content of the storage.
- Returns:
a nested dict with the relevant information.
- get_orm_entities(detailed: bool = False) dict [source]#
Return a mapping with an overview of the storage contents regarding ORM entities.
- Parameters:
detailed – flag to request more detailed information about the content of the storage.
- Returns:
a nested dict with the relevant information.
- abstract get_repository() AbstractRepositoryBackend [source]#
Return the object repository configured for this backend.
- abstract property groups: BackendGroupCollection#
Return the collection of groups
- abstract classmethod initialise(profile: Profile, reset: bool = False) bool [source]#
Initialise the storage backend.
This is typically used once, when a new storage backend is created. If this method returns without exceptions, the storage backend is ready for use. If the backend already appears initialised, this method is a no-op.
- Parameters:
reset – If
true
, destroy the backend if it already exists, including all of its data, before recreating and initialising it. This is useful, for example, for test profiles that need to be reset before or after tests have run.
- Returns:
True
if the storage was initialised by the function call,
False
if it was already initialised.
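The idempotent contract above can be sketched with a module-level flag standing in for the real "is this storage initialised?" check:

```python
_initialised = False  # stand-in for inspecting the actual storage state


def initialise(reset: bool = False) -> bool:
    """Sketch of the documented contract: a no-op when already initialised,
    unless `reset` forces a wipe-and-recreate. Returns True only when this
    call actually (re)initialised the storage."""
    global _initialised
    if _initialised and not reset:
        return False
    # ... when reset, destroy existing data; then create schema + repository ...
    _initialised = True
    return True
```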
- abstract property logs: BackendLogCollection#
Return the collection of logs
- abstract maintain(full: bool = False, dry_run: bool = False, **kwargs) None [source]#
Perform maintenance tasks on the storage.
If full == True, this method may block the profile associated with the storage to guarantee the safety of its procedures. This not only prevents any other subsequent process from accessing the profile, but also first checks whether any process is already using it, and raises if that is the case. The user will have to manually stop any process currently accessing the profile, or wait for it to finish on its own.
- Parameters:
full – flag to perform operations that require to stop using the profile to be maintained.
dry_run – flag to only print the actions that would be taken without actually executing them.
- abstract classmethod migrate(profile: Profile) None [source]#
Migrate the storage of a profile to the latest schema version.
If the schema version is already the latest version, this method does nothing. If the storage is uninitialised, this method will raise an exception.
- Raises:
~aiida.common.exceptions.UnreachableStorage if the storage cannot be accessed.
- Raises:
StorageMigrationError
if the storage is not initialised.
- abstract property nodes: BackendNodeCollection#
Return the collection of nodes
- abstract query() BackendQueryBuilder [source]#
Return an instance of a query builder implementation for this backend
- read_only = False#
- reset_default_user() None [source]#
Reset the default user.
This should be done when the default user of the storage backend is changed on the corresponding profile because the old default user is cached on this instance.
- abstract set_global_variable(key: str, value: None | str | int | float, description: str | None = None, overwrite=True) None [source]#
Set a global variable in the storage.
- Parameters:
key – the key of the setting
value – the value of the setting
description – the description of the setting (optional)
overwrite – if True, overwrite the setting if it already exists
- Raises:
ValueError if the key already exists and overwrite is False
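The pair of global-variable methods amounts to a keyed settings store with an overwrite guard; a minimal dict-backed sketch of the documented semantics:

```python
from typing import Dict, Optional, Union

Value = Union[None, str, int, float]
_settings: Dict[str, Value] = {}  # stand-in for the backend settings table


def set_global_variable(
    key: str, value: Value, description: Optional[str] = None, overwrite: bool = True
) -> None:
    """Set a setting; refuse to clobber an existing key unless overwrite=True."""
    if key in _settings and not overwrite:
        raise ValueError(f'key {key!r} already exists and overwrite is False')
    _settings[key] = value


def get_global_variable(key: str) -> Value:
    """Return a setting; raises KeyError when it does not exist."""
    return _settings[key]
```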
- abstract transaction() ContextManager[Any] [source]#
Get a context manager that can be used as a transaction context for a series of backend operations. If there is an exception within the context then the changes will be rolled back and the state will be as before entering. Transactions can be nested.
- Returns:
a context manager to group database operations
- abstract property users: BackendUserCollection#
Return the collection of users
Backend user
- class aiida.orm.implementation.users.BackendUser(backend: StorageBackend, **kwargs: Any)[source]#
Bases:
BackendEntity
Backend implementation for the User ORM class.
A user can be assigned as the creator of a variety of other entities.
- __abstractmethods__ = frozenset({'email', 'first_name', 'id', 'institution', 'is_stored', 'last_name', 'store'})#
- __module__ = 'aiida.orm.implementation.users'#
- _abc_impl = <_abc._abc_data object>#
- class aiida.orm.implementation.users.BackendUserCollection(backend: StorageBackend)[source]#
Bases:
BackendCollection
[BackendUser
]
- ENTITY_CLASS#
alias of
BackendUser
- __annotations__ = {}#
- __module__ = 'aiida.orm.implementation.users'#
- __orig_bases__ = (aiida.orm.implementation.entities.BackendCollection[aiida.orm.implementation.users.BackendUser],)#
- __parameters__ = ()#
Utility methods for backend non-specific implementations.
- aiida.orm.implementation.utils.clean_value(value)[source]#
Get the value from the input and (recursively) replace, if needed, all occurrences of BaseType AiiDA data nodes with their value, and of List with a standard list. It also makes a deep copy of everything. The purpose of this function is to convert data to a type which can be serialized and deserialized for storage in the DB without its value changing.
Note, however, that there is no logic to avoid infinite loops when the user passes some perverse recursive dictionary or list; in any case, such a value would not be storable by AiiDA anyway…
- Parameters:
value – A value to be set as an attribute or an extra
- Returns:
a “cleaned” value, potentially identical to value, but with values replaced where needed.
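The behaviour described above can be sketched with a self-contained stand-in (`FakeBaseType` is a hypothetical substitute for AiiDA's BaseType nodes; the real function handles actual ORM classes):

```python
import copy


class FakeBaseType:
    """Hypothetical stand-in for an AiiDA BaseType node wrapping a plain value."""

    def __init__(self, value):
        self.value = value


def clean_value(value):
    """Recursively replace wrapper nodes with their plain value and deep-copy
    containers, mirroring the documented behaviour (sketch only)."""
    if isinstance(value, FakeBaseType):
        return clean_value(value.value)
    if isinstance(value, dict):
        return {k: clean_value(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [clean_value(v) for v in value]
    return copy.deepcopy(value)
```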
- aiida.orm.implementation.utils.validate_attribute_extra_key(key)[source]#
Validate the key for an entity attribute or extra.
- Raises:
aiida.common.ValidationError – if the key is not a string or contains the reserved separator character
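A sketch of such a check (assumption: `.` is the reserved separator for nested attribute keys; the actual constant lives in `aiida.common`, and the real function raises ValidationError rather than ValueError):

```python
SEPARATOR = '.'  # assumed reserved separator; see aiida.common for the real one


def validate_attribute_extra_key(key):
    """Reject non-string keys and keys containing the reserved separator (sketch)."""
    if not isinstance(key, str):
        raise ValueError('key for attributes or extras must be a string')
    if SEPARATOR in key:
        raise ValueError(f'key may not contain the separator {SEPARATOR!r}')
```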