aiida.tools.archive package#
The AiiDA archive allows subsets of the provenance graph to be exported to, and imported from, a single file.
Subpackages#
Submodules#
Abstraction for an archive file format.
- class aiida.tools.archive.abstract.ArchiveFormatAbstract[source]#
Bases:
ABC
Abstract class for an archive format.
- __abstractmethods__ = frozenset({'key_format', 'latest_version', 'migrate', 'open', 'read_version'})#
- __dict__ = mappingproxy({'__module__': 'aiida.tools.archive.abstract', '__doc__': 'Abstract class for an archive format.', 'latest_version': <property object>, 'key_format': <property object>, 'read_version': <function ArchiveFormatAbstract.read_version>, 'open': <function ArchiveFormatAbstract.open>, 'migrate': <function ArchiveFormatAbstract.migrate>, '__dict__': <attribute '__dict__' of 'ArchiveFormatAbstract' objects>, '__weakref__': <attribute '__weakref__' of 'ArchiveFormatAbstract' objects>, '__abstractmethods__': frozenset({'read_version', 'key_format', 'migrate', 'open', 'latest_version'}), '_abc_impl': <_abc._abc_data object>, '__annotations__': {}})#
- __module__ = 'aiida.tools.archive.abstract'#
- __weakref__#
list of weak references to the object
- _abc_impl = <_abc._abc_data object>#
- abstract migrate(inpath: str | Path, outpath: str | Path, version: str, *, force: bool = False, compression: int = 6) None [source]#
Migrate an archive to a specific version.
- Parameters:
inpath – input archive path
outpath – output archive path
version – version to migrate to
force – allow overwrite of existing output archive path
compression – default level of compression to use for writing (integer from 0 to 9)
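As a rough sketch (the input and output paths here are hypothetical), migrating an archive to the latest version of the default format could look like:
from aiida.tools.archive.abstract import get_format

fmt = get_format()  # default 'sqlite_zip' format
# 'old.aiida' and 'migrated.aiida' are hypothetical paths
fmt.migrate('old.aiida', 'migrated.aiida', fmt.latest_version, force=True)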
- abstract open(path: str | Path, mode: Literal['r'], *, compression: int = 6, **kwargs: Any) ArchiveReaderAbstract [source]#
- abstract open(path: str | Path, mode: Literal['x', 'w'], *, compression: int = 6, **kwargs: Any) ArchiveWriterAbstract
- abstract open(path: str | Path, mode: Literal['a'], *, compression: int = 6, **kwargs: Any) ArchiveWriterAbstract
Open an archive (latest version only).
- Parameters:
path – archive path
mode – open mode: ‘r’ (read), ‘x’ (exclusive write), ‘w’ (write) or ‘a’ (append)
compression – default level of compression to use for writing (integer from 0 to 9)
Note, in write mode, the writer is responsible for writing the format version.
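For instance, a minimal sketch of opening an archive for reading as a context manager (‘example.aiida’ is a hypothetical path):
from aiida.tools.archive.abstract import get_format

fmt = get_format()
# readers are context managers; the archive must already be at the latest version
with fmt.open('example.aiida', mode='r') as reader:
    metadata = reader.get_metadata()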
- abstract read_version(path: str | Path) str [source]#
Read the version of the archive from a file.
This method should account for reading all versions of the archive format.
- Parameters:
path – archive path
- Raises:
UnreachableStorage – if the file does not exist
CorruptStorage – if a version cannot be read from the archive
- class aiida.tools.archive.abstract.ArchiveReaderAbstract(path: str | Path, **kwargs: Any)[source]#
Bases:
ABC
Reader of an archive, to be used as a context manager.
- __abstractmethods__ = frozenset({'get_backend', 'get_metadata'})#
- __dict__ = mappingproxy({'__module__': 'aiida.tools.archive.abstract', '__doc__': 'Reader of an archive, that will be used as a context manager.', '__init__': <function ArchiveReaderAbstract.__init__>, 'path': <property object>, '__enter__': <function ArchiveReaderAbstract.__enter__>, '__exit__': <function ArchiveReaderAbstract.__exit__>, 'get_metadata': <function ArchiveReaderAbstract.get_metadata>, 'get_backend': <function ArchiveReaderAbstract.get_backend>, 'querybuilder': <function ArchiveReaderAbstract.querybuilder>, 'get': <function ArchiveReaderAbstract.get>, 'graph': <function ArchiveReaderAbstract.graph>, '__dict__': <attribute '__dict__' of 'ArchiveReaderAbstract' objects>, '__weakref__': <attribute '__weakref__' of 'ArchiveReaderAbstract' objects>, '__abstractmethods__': frozenset({'get_backend', 'get_metadata'}), '_abc_impl': <_abc._abc_data object>, '__annotations__': {}})#
- __init__(path: str | Path, **kwargs: Any)[source]#
Initialise the reader.
- Parameters:
path – archive path
- __module__ = 'aiida.tools.archive.abstract'#
- __weakref__#
list of weak references to the object
- _abc_impl = <_abc._abc_data object>#
- get(entity_cls: Type[EntityType], **filters: Any) EntityType [source]#
Return the entity for the given filters.
Example:
reader.get(orm.Node, pk=1)
- Parameters:
entity_cls – The type of the front-end entity
filters – the filters identifying the object to get
- abstract get_backend() StorageBackend [source]#
Return a ‘read-only’ backend for the archive.
- abstract get_metadata() Dict[str, Any] [source]#
Return the top-level metadata.
- Raises:
CorruptStorage
if the top-level metadata cannot be read from the archive
- property path#
Return the path to the archive.
- querybuilder(**kwargs: Any) QueryBuilder [source]#
Return a QueryBuilder instance, initialised with the archive backend.
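For instance, a minimal sketch of querying the contents of an archive without importing it (‘example.aiida’ is a hypothetical path, and a configured AiiDA profile is assumed):
from aiida import load_profile, orm
from aiida.tools.archive.abstract import get_format

load_profile()  # assumes a configured default profile
fmt = get_format()
with fmt.open('example.aiida', mode='r') as reader:
    qb = reader.querybuilder().append(orm.Node, project='uuid')
    print(qb.count())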
- class aiida.tools.archive.abstract.ArchiveWriterAbstract(path: str | Path, fmt: ArchiveFormatAbstract, *, mode: Literal['x', 'w', 'a'] = 'x', compression: int = 6, **kwargs: Any)[source]#
Bases:
ABC
Writer of an archive, to be used as a context manager.
- __abstractmethods__ = frozenset({'bulk_insert', 'delete_object', 'put_object', 'update_metadata'})#
- __dict__ = mappingproxy({'__module__': 'aiida.tools.archive.abstract', '__doc__': 'Writer of an archive, that will be used as a context manager.', '__init__': <function ArchiveWriterAbstract.__init__>, 'path': <property object>, 'mode': <property object>, 'compression': <property object>, '__enter__': <function ArchiveWriterAbstract.__enter__>, '__exit__': <function ArchiveWriterAbstract.__exit__>, 'update_metadata': <function ArchiveWriterAbstract.update_metadata>, 'bulk_insert': <function ArchiveWriterAbstract.bulk_insert>, 'put_object': <function ArchiveWriterAbstract.put_object>, 'delete_object': <function ArchiveWriterAbstract.delete_object>, '__dict__': <attribute '__dict__' of 'ArchiveWriterAbstract' objects>, '__weakref__': <attribute '__weakref__' of 'ArchiveWriterAbstract' objects>, '__abstractmethods__': frozenset({'update_metadata', 'bulk_insert', 'put_object', 'delete_object'}), '_abc_impl': <_abc._abc_data object>, '__annotations__': {}})#
- __init__(path: str | Path, fmt: ArchiveFormatAbstract, *, mode: Literal['x', 'w', 'a'] = 'x', compression: int = 6, **kwargs: Any)[source]#
Initialise the writer.
- Parameters:
path – archive path
mode – mode to open the archive in: ‘x’ (exclusive), ‘w’ (write) or ‘a’ (append)
compression – default level of compression to use (integer from 0 to 9)
- __module__ = 'aiida.tools.archive.abstract'#
- __weakref__#
list of weak references to the object
- _abc_impl = <_abc._abc_data object>#
- abstract bulk_insert(entity_type: EntityTypes, rows: List[Dict[str, Any]], allow_defaults: bool = False) None [source]#
Add multiple rows of entity data to the archive.
- Parameters:
entity_type – The type of the entity
rows – A list of dictionaries, containing all fields of the backend model, except the id field (a.k.a. primary key), which will be generated dynamically
allow_defaults – If False, assert that each row contains all fields; otherwise, allow default values for missing fields.
- Raises:
IntegrityError
if the keys in a row are not a subset of the columns in the table
- abstract delete_object(key: str) None [source]#
Delete the object from the archive.
- Parameters:
key – fully qualified identifier for the object within the repository.
- Raises:
OSError – if the file could not be deleted.
- abstract put_object(stream: BinaryIO, *, buffer_size: int | None = None, key: str | None = None) str [source]#
Add an object to the archive.
- Parameters:
stream – byte stream to read the object from
buffer_size – Number of bytes to buffer when read/writing
key – key to use for the object (if None will be auto-generated)
- Returns:
the key of the object
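For instance, a minimal sketch of writing a single repository object to a new archive (‘new.aiida’ is a hypothetical path):
import io

from aiida.tools.archive.abstract import get_format

fmt = get_format()
with fmt.open('new.aiida', mode='x') as writer:
    key = writer.put_object(io.BytesIO(b'some bytes'))  # key is auto-generated when not given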
- aiida.tools.archive.abstract.get_format(name: str = 'sqlite_zip') ArchiveFormatAbstract [source]#
Get the archive format instance.
- Parameters:
name – name of the archive format
- Returns:
archive format instance
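For instance, checking whether an existing archive needs migration (‘example.aiida’ is a hypothetical path):
from aiida.tools.archive.abstract import get_format

fmt = get_format('sqlite_zip')
current = fmt.read_version('example.aiida')
if current != fmt.latest_version:
    print(f'archive at version {current}, latest is {fmt.latest_version}')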
Shared resources for the archive.
- class aiida.tools.archive.common.HTMLGetLinksParser(filter_extension=None)[source]#
Bases:
HTMLParser
If a filter_extension is passed, only links with extension matching the given one will be returned.
- __init__(filter_extension=None)[source]#
Initialize and reset this instance.
If convert_charrefs is True (the default), all character references are automatically converted to the corresponding Unicode characters.
- __module__ = 'aiida.tools.archive.common'#
- aiida.tools.archive.common.batch_iter(iterable: Iterable[Any], size: int, transform: Callable[[Any], Any] | None = None) Iterable[Tuple[int, List[Any]]] [source]#
Yield an iterable in batches of a set number of items.
Note, the final yield may be less than this size.
- Parameters:
iterable – the iterable to batch
size – the maximum number of items per batch
transform – a transform to apply to each item
- Returns:
(number of items, list of items)
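A quick illustration, assuming the documented (count, items) yield:
from aiida.tools.archive.common import batch_iter

for count, items in batch_iter(range(10), size=4, transform=lambda x: x * 2):
    print(count, items)
# 4 [0, 2, 4, 6]
# 4 [8, 10, 12, 14]
# 2 [16, 18]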
- aiida.tools.archive.common.get_valid_import_links(url)[source]#
Open the given URL, parse the HTML and return a list of valid links where the link file has a .aiida extension.
Create an AiiDA archive.
The archive is a subset of the provenance graph, stored in a single file.
- aiida.tools.archive.create._check_node_licenses(querybuilder: Callable[[], QueryBuilder], node_ids: Set[int], allowed_licenses: None | Sequence[str] | Callable, forbidden_licenses: None | Sequence[str] | Callable, batch_size: int) None [source]#
Check the nodes to be archived for disallowed licenses.
- aiida.tools.archive.create._check_unsealed_nodes(querybuilder: Callable[[], QueryBuilder], node_ids: Set[int], batch_size: int) None [source]#
Check no process nodes are unsealed, i.e. all processes have completed.
- aiida.tools.archive.create._collect_all_entities(querybuilder: Callable[[], QueryBuilder], entity_ids: Dict[EntityTypes, Set[int]], include_authinfos: bool, include_comments: bool, include_logs: bool, batch_size: int) Tuple[List[Tuple[int, int]], Set[LinkQuadruple]] [source]#
Collect all entities.
- Returns:
(group_id_to_node_id, link_data) and updates entity_ids
- aiida.tools.archive.create._collect_required_entities(querybuilder: Callable[[], QueryBuilder], entity_ids: Dict[EntityTypes, Set[int]], traversal_rules: Dict[str, bool], include_authinfos: bool, include_comments: bool, include_logs: bool, backend: StorageBackend, batch_size: int) Tuple[List[Tuple[int, int]], Set[LinkQuadruple]] [source]#
Collect required entities, given a set of starting entities and provenance graph traversal rules.
- Returns:
(group_id_to_node_id, link_data) and updates entity_ids
- aiida.tools.archive.create._stream_repo_files(key_format: str, writer: ArchiveWriterAbstract, node_ids: Set[int], backend: StorageBackend, batch_size: int) None [source]#
Collect all repository object keys from the nodes, then stream the files to the archive.
- aiida.tools.archive.create.create_archive(entities: Iterable[Computer | Node | Group | User] | None, filename: None | str | Path = None, *, archive_format: ArchiveFormatAbstract | None = None, overwrite: bool = False, include_comments: bool = True, include_logs: bool = True, include_authinfos: bool = False, allowed_licenses: list | Callable | None = None, forbidden_licenses: list | Callable | None = None, strip_checkpoints: bool = True, batch_size: int = 1000, compression: int = 6, test_run: bool = False, backend: StorageBackend | None = None, **traversal_rules: bool) Path [source]#
Export AiiDA data to an archive file.
The export proceeds according to the following logic:
First, gather all entity primary keys (per type) that need to be exported. This needs to proceed in the “reverse” order of relationships:
groups: input groups
group_to_nodes: from nodes in groups
nodes & links: from graph_traversal(input nodes & group_to_nodes)
computers: from input computers & computers of nodes
authinfos: from authinfos of computers
comments: from comments of nodes
logs: from logs of nodes
users: from users of nodes, groups, comments & authinfos
Now stream the full entities (per type) to the archive writer, in the order of relationships:
users
computers
authinfos
groups
nodes
comments
logs
group_to_nodes
links
Finally stream the repository files, for the exported nodes, to the archive writer.
Note, the logging level and progress reporter should be set externally, for example:
from aiida.common.progress_reporter import set_progress_bar_tqdm

EXPORT_LOGGER.setLevel('DEBUG')
set_progress_bar_tqdm(leave=True)
create_archive(...)
- Parameters:
entities – If None, export all entities, or a list of entity instances that can include Computers, Groups, and Nodes.
filename – the filename (possibly including the absolute path) of the file on which to export.
overwrite – if True, overwrite the output file without asking, if it exists. If False, raise an ArchiveExportError if the output file already exists.
allowed_licenses – List or function. If a list, then checks whether all licenses of Data nodes are in the list. If a function, then calls the function for each license of a Data node, expecting True if the license is allowed, False otherwise.
forbidden_licenses – List or function. If a list, then checks that no license of a Data node is in the list. If a function, then calls the function for each license of a Data node, expecting True if the license is forbidden, False otherwise.
include_comments – In-/exclude export of comments for given node(s) in entities. Default: True, include comments in export (as well as relevant users).
include_logs – In-/exclude export of logs for given node(s) in entities. Default: True, include logs in export.
include_authinfos – In-/exclude export of authinfo entries for the exported computers. Default: False, do not include authinfos.
strip_checkpoints – Remove checkpoint keys from process node attributes. These contain serialized code and can cause security issues.
compression – level of compression to use (integer from 0 to 9)
batch_size – batch database query results in sub-collections to reduce memory usage
test_run – if True, do not write to file
backend – the backend to export from. If not specified, the default backend is used.
traversal_rules – graph traversal rules. See aiida.common.links.GraphTraversalRules for what rule names are toggleable and what the defaults are.
- Raises:
ArchiveExportError – if there are any internal errors when exporting.
LicensingException – if any node is licensed under forbidden license.
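For example, a minimal sketch of exporting a group and a single node (the group label and node pk are hypothetical, and a configured profile is assumed):
from aiida import load_profile, orm
from aiida.tools.archive.create import create_archive

load_profile()  # assumes a configured default profile
group = orm.load_group('my_group')  # hypothetical group label
node = orm.load_node(1234)          # hypothetical pk
create_archive([group, node], filename='export.aiida', overwrite=True)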
- aiida.tools.archive.create.get_init_summary(*, archive_version: str, outfile: Path, collect_all: bool, include_authinfos: bool, include_comments: bool, include_logs: bool, traversal_rules: dict, compression: int) str [source]#
Get a summary for archive initialisation.
Module that defines the exceptions thrown by AiiDA’s archive module.
- Note: In order to not override the built-in ImportError,
both ImportError and ExportError are prefixed with Archive.
- exception aiida.tools.archive.exceptions.ArchiveExportError[source]#
Bases:
ExportImportException
Base class for all AiiDA export exceptions.
- __annotations__ = {}#
- __module__ = 'aiida.tools.archive.exceptions'#
- exception aiida.tools.archive.exceptions.ArchiveImportError[source]#
Bases:
ExportImportException
Base class for all AiiDA import exceptions.
- __annotations__ = {}#
- __module__ = 'aiida.tools.archive.exceptions'#
- exception aiida.tools.archive.exceptions.ExportImportException[source]#
Bases:
AiidaException
Base class for all AiiDA export/import module exceptions.
- __annotations__ = {}#
- __module__ = 'aiida.tools.archive.exceptions'#
- exception aiida.tools.archive.exceptions.ExportValidationError[source]#
Bases:
ArchiveExportError
Raised when validation fails during export, e.g. for non-sealed ProcessNodes.
- __annotations__ = {}#
- __module__ = 'aiida.tools.archive.exceptions'#
- exception aiida.tools.archive.exceptions.ImportTestRun[source]#
Bases:
ArchiveImportError
Raised during an import, before the transaction is committed.
- __annotations__ = {}#
- __module__ = 'aiida.tools.archive.exceptions'#
- exception aiida.tools.archive.exceptions.ImportUniquenessError[source]#
Bases:
ArchiveImportError
Raised when the user tries to violate a uniqueness constraint.
Similar to UniquenessError.
- __annotations__ = {}#
- __module__ = 'aiida.tools.archive.exceptions'#
- exception aiida.tools.archive.exceptions.ImportValidationError[source]#
Bases:
ArchiveImportError
Raised when validation fails during import, e.g. for parameter types and values.
- __annotations__ = {}#
- __module__ = 'aiida.tools.archive.exceptions'#
Import an archive.
- class aiida.tools.archive.imports.CommentTransform(user_ids_archive_backend: Dict[int, int], node_ids_archive_backend: Dict[int, int])[source]#
Bases:
object
Callable to transform a Comment DB row, between the source archive and target backend.
- __dict__ = mappingproxy({'__module__': 'aiida.tools.archive.imports', '__doc__': 'Callable to transform a Comment DB row, between the source archive and target backend.', '__init__': <function CommentTransform.__init__>, '__call__': <function CommentTransform.__call__>, '__dict__': <attribute '__dict__' of 'CommentTransform' objects>, '__weakref__': <attribute '__weakref__' of 'CommentTransform' objects>, '__annotations__': {}})#
- __init__(user_ids_archive_backend: Dict[int, int], node_ids_archive_backend: Dict[int, int])[source]#
Construct a new instance.
- __module__ = 'aiida.tools.archive.imports'#
- __weakref__#
list of weak references to the object
- class aiida.tools.archive.imports.GroupTransform(user_ids_archive_backend: Dict[int, int], labels: Set[str])[source]#
Bases:
object
Callable to transform a Group DB row, between the source archive and target backend.
- __dict__ = mappingproxy({'__module__': 'aiida.tools.archive.imports', '__doc__': 'Callable to transform a Group DB row, between the source archive and target backend.', '__init__': <function GroupTransform.__init__>, '__call__': <function GroupTransform.__call__>, '__dict__': <attribute '__dict__' of 'GroupTransform' objects>, '__weakref__': <attribute '__weakref__' of 'GroupTransform' objects>, '__annotations__': {}})#
- __module__ = 'aiida.tools.archive.imports'#
- __weakref__#
list of weak references to the object
- class aiida.tools.archive.imports.NodeTransform(user_ids_archive_backend: Dict[int, int], computer_ids_archive_backend: Dict[int, int], import_new_extras: bool)[source]#
Bases:
object
Callable to transform a Node DB row, between the source archive and target backend.
- __dict__ = mappingproxy({'__module__': 'aiida.tools.archive.imports', '__doc__': 'Callable to transform a Node DB row, between the source archive and target backend.', '__init__': <function NodeTransform.__init__>, '__call__': <function NodeTransform.__call__>, '__dict__': <attribute '__dict__' of 'NodeTransform' objects>, '__weakref__': <attribute '__weakref__' of 'NodeTransform' objects>, '__annotations__': {}})#
- __init__(user_ids_archive_backend: Dict[int, int], computer_ids_archive_backend: Dict[int, int], import_new_extras: bool)[source]#
Construct a new instance.
- __module__ = 'aiida.tools.archive.imports'#
- __weakref__#
list of weak references to the object
- class aiida.tools.archive.imports.QueryParams(batch_size: int, filter_size: int)[source]#
Bases:
object
Parameters for executing backend queries.
- __annotations__ = {'batch_size': <class 'int'>, 'filter_size': <class 'int'>}#
- __dataclass_fields__ = {'batch_size': Field(name='batch_size',type=<class 'int'>,default=<dataclasses._MISSING_TYPE object>,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 'filter_size': Field(name='filter_size',type=<class 'int'>,default=<dataclasses._MISSING_TYPE object>,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD)}#
- __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)#
- __dict__ = mappingproxy({'__module__': 'aiida.tools.archive.imports', '__annotations__': {'batch_size': <class 'int'>, 'filter_size': <class 'int'>}, '__doc__': 'Parameters for executing backend queries.', '__dict__': <attribute '__dict__' of 'QueryParams' objects>, '__weakref__': <attribute '__weakref__' of 'QueryParams' objects>, '__dataclass_params__': _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False), '__dataclass_fields__': {'batch_size': Field(name='batch_size',type=<class 'int'>,default=<dataclasses._MISSING_TYPE object>,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 'filter_size': Field(name='filter_size',type=<class 'int'>,default=<dataclasses._MISSING_TYPE object>,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD)}, '__init__': <function QueryParams.__init__>, '__repr__': <function QueryParams.__repr__>, '__eq__': <function QueryParams.__eq__>, '__hash__': None, '__match_args__': ('batch_size', 'filter_size')})#
- __eq__(other)#
Return self==value.
- __hash__ = None#
- __match_args__ = ('batch_size', 'filter_size')#
- __module__ = 'aiida.tools.archive.imports'#
- __repr__()#
Return repr(self).
- __weakref__#
list of weak references to the object
- aiida.tools.archive.imports._add_files_to_repo(backend_from: StorageBackend, backend_to: StorageBackend, new_keys: Set[str]) None [source]#
Add the new files to the repository.
- aiida.tools.archive.imports._add_new_entities(etype: EntityTypes, total: int, unique_field: str, backend_unique_id: dict, backend_from: StorageBackend, backend_to: StorageBackend, query_params: QueryParams, transform: Callable[[dict], dict]) None [source]#
Add new entities to the output backend and update the mapping of unique field -> id.
- aiida.tools.archive.imports._get_new_object_keys(key_format: str, backend_from: StorageBackend, backend_to: StorageBackend, query_params: QueryParams) Set[str] [source]#
Return the object keys that need to be added to the backend.
- aiida.tools.archive.imports._import_authinfos(backend_from: StorageBackend, backend_to: StorageBackend, query_params: QueryParams, user_ids_archive_backend: Dict[int, int], computer_ids_archive_backend: Dict[int, int]) None [source]#
Import authinfos from one backend to another.
- aiida.tools.archive.imports._import_comments(backend_from: StorageBackend, backend: StorageBackend, query_params: QueryParams, user_ids_archive_backend: Dict[int, int], node_ids_archive_backend: Dict[int, int], merge_comments: Literal['leave', 'newest', 'overwrite']) Dict[int, int] [source]#
Import comments from one backend to another.
- Returns:
mapping of archive id to backend id
- aiida.tools.archive.imports._import_computers(backend_from: StorageBackend, backend_to: StorageBackend, query_params: QueryParams) Dict[int, int] [source]#
Import computers from one backend to another.
- Returns:
mapping of input backend id to output backend id
- aiida.tools.archive.imports._import_groups(backend_from: StorageBackend, backend_to: StorageBackend, query_params: QueryParams, user_ids_archive_backend: Dict[int, int], node_ids_archive_backend: Dict[int, int]) Set[str] [source]#
Import groups from the input backend, and add group -> node records.
- Returns:
Set of labels
- aiida.tools.archive.imports._import_links(backend_from: StorageBackend, backend_to: StorageBackend, query_params: QueryParams, node_ids_archive_backend: Dict[int, int]) None [source]#
Import links from one backend to another.
- aiida.tools.archive.imports._import_logs(backend_from: StorageBackend, backend_to: StorageBackend, query_params: QueryParams, node_ids_archive_backend: Dict[int, int]) Dict[int, int] [source]#
Import logs from one backend to another.
- Returns:
mapping of input backend id to output backend id
- aiida.tools.archive.imports._import_nodes(backend_from: StorageBackend, backend_to: StorageBackend, query_params: QueryParams, user_ids_archive_backend: Dict[int, int], computer_ids_archive_backend: Dict[int, int], import_new_extras: bool, merge_extras: Tuple[Literal['k', 'n'], Literal['c', 'n'], Literal['l', 'u', 'd']]) Dict[int, int] [source]#
Import nodes from one backend to another.
- Returns:
mapping of input backend id to output backend id
- aiida.tools.archive.imports._import_users(backend_from: StorageBackend, backend_to: StorageBackend, query_params: QueryParams) Dict[int, int] [source]#
Import users from one backend to another.
- Returns:
mapping of input backend id to output backend id
- aiida.tools.archive.imports._make_import_group(group: Group | None, labels: Set[str], node_ids_archive_backend: Dict[int, int], backend_to: StorageBackend, query_params: QueryParams) int | None [source]#
Make an import group containing all imported nodes.
- Parameters:
group – Use an existing group
labels – All existing group labels on the backend
node_ids_archive_backend – node pks to add to the group
- Returns:
The id of the group
- aiida.tools.archive.imports._merge_node_extras(backend_from: StorageBackend, backend_to: StorageBackend, query_params: QueryParams, backend_uuid_id: Dict[str, int], mode: Tuple[Literal['k', 'n'], Literal['c', 'n'], Literal['l', 'u', 'd']]) None [source]#
Merge extras from the input backend with the ones in the output backend.
- Parameters:
backend_uuid_id – mapping of uuid to output backend id
mode – tuple of merge modes for extras
- aiida.tools.archive.imports.import_archive(path: str | Path, *, archive_format: ArchiveFormatAbstract | None = None, filter_size: int = 999, batch_size: int = 1000, import_new_extras: bool = True, merge_extras: Tuple[Literal['k', 'n'], Literal['c', 'n'], Literal['l', 'u', 'd']] = ('k', 'n', 'l'), merge_comments: Literal['leave', 'newest', 'overwrite'] = 'leave', include_authinfos: bool = False, create_group: bool = True, group: Group | None = None, test_run: bool = False, backend: StorageBackend | None = None) int | None [source]#
Import an archive into the AiiDA backend.
- Parameters:
path – the path to the archive
archive_format – The class for interacting with the archive
filter_size – Maximum size of parameters allowed in a single query filter
batch_size – Batch size for streaming database rows
import_new_extras – Keep extras on new nodes (except private aiida keys), else strip
merge_extras – Rules for merging extras into existing nodes. The first letter acts on extras that are present in the original node and not present in the imported node. Can be either: ‘k’ (keep it) or ‘n’ (do not keep it). The second letter acts on the imported extras that are not present in the original node. Can be either: ‘c’ (create it) or ‘n’ (do not create it). The third letter defines what to do in case of a name collision. Can be either: ‘l’ (leave the old value), ‘u’ (update with a new value), ‘d’ (delete the extra)
create_group – Add all imported nodes to the specified group, or an automatically created one
group – Group wherein all imported Nodes will be placed. If None, one will be auto-generated.
test_run – if True, do not commit the import (the transaction is aborted before being committed)
backend – the backend to import to. If not specified, the default backend is used.
- Returns:
Primary Key of the import Group
- Raises:
CorruptStorage – if the provided archive cannot be read.
IncompatibleStorageSchema – if the archive version is not at head.
ImportValidationError – if invalid entities are found in the archive.
ImportUniquenessError – if a new unique entity can not be created.
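For example, a minimal sketch of importing an archive into the default profile (‘export.aiida’ is a hypothetical path):
from aiida import load_profile
from aiida.tools.archive.imports import import_archive

load_profile()  # assumes a configured default profile
group_pk = import_archive('export.aiida', merge_comments='leave')
print(group_pk)  # pk of the automatically created import group, or None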