aiida.tools.archive package

The AiiDA archive allows the export and import of subsets of the provenance graph to and from a single file.

Submodules

aiida.tools.archive.abstract module

Abstraction for an archive file format.

class aiida.tools.archive.abstract.ArchiveFormatAbstract[source]

Bases: abc.ABC

Abstract class for an archive format.

__abstractmethods__ = frozenset({'key_format', 'latest_version', 'migrate', 'open', 'read_version'})
__module__ = 'aiida.tools.archive.abstract'
__weakref__

list of weak references to the object (if defined)

abstract property key_format: str

Return the format of repository keys.

abstract property latest_version: str

Return the latest schema version of the archive format.

abstract migrate(inpath: Union[str, pathlib.Path], outpath: Union[str, pathlib.Path], version: str, *, force: bool = False, compression: int = 6) None[source]

Migrate an archive to a specific version.

Parameters
  • inpath – input archive path

  • outpath – output archive path

  • version – version to migrate to

  • force – allow overwrite of existing output archive path

  • compression – default level of compression to use for writing (integer from 0 to 9)
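
For example, a minimal sketch of migrating an archive file to the latest version (the file names here are purely illustrative):

from aiida.tools.archive.abstract import get_format

fmt = get_format()  # default 'sqlite_zip' archive format
if fmt.read_version('old.aiida') != fmt.latest_version:
    # force=True allows 'new.aiida' to be overwritten if it already exists
    fmt.migrate('old.aiida', 'new.aiida', fmt.latest_version, force=True)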

abstract open(path: Union[str, pathlib.Path], mode: Literal['r'], *, compression: int = 6, **kwargs: Any) aiida.tools.archive.abstract.ArchiveReaderAbstract[source]
abstract open(path: Union[str, pathlib.Path], mode: Literal['x', 'w'], *, compression: int = 6, **kwargs: Any) aiida.tools.archive.abstract.ArchiveWriterAbstract
abstract open(path: Union[str, pathlib.Path], mode: Literal['a'], *, compression: int = 6, **kwargs: Any) aiida.tools.archive.abstract.ArchiveWriterAbstract

Open an archive (latest version only).

Parameters
  • path – archive path

  • mode – open mode: ‘r’ (read), ‘x’ (exclusive write), ‘w’ (write) or ‘a’ (append)

  • compression – default level of compression to use for writing (integer from 0 to 9)

Note that in write mode, the writer is responsible for writing the format version.
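
A minimal reading sketch (assuming fmt is the ArchiveFormatAbstract instance returned by get_format(), and example.aiida is an existing archive file; both names are illustrative):

with fmt.open('example.aiida', mode='r') as reader:
    metadata = reader.get_metadata()
    backend = reader.get_backend()  # read-only storage backend for querying the archive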

abstract read_version(path: Union[str, pathlib.Path]) str[source]

Read the version of the archive from a file.

This method should account for reading all versions of the archive format.

Parameters

path – archive path

Raises
  • UnreachableStorage – if the file does not exist

  • CorruptStorage – if a version cannot be read from the archive

class aiida.tools.archive.abstract.ArchiveReaderAbstract(path: Union[str, pathlib.Path], **kwargs: Any)[source]

Bases: abc.ABC

Reader of an archive, to be used as a context manager.

__abstractmethods__ = frozenset({'get_backend', 'get_metadata'})
__enter__() aiida.tools.archive.abstract.SelfType[source]

Start reading from the archive.

__exit__(*args, **kwargs) None[source]

Finalise the archive.

__init__(path: Union[str, pathlib.Path], **kwargs: Any)[source]

Initialise the reader.

Parameters

path – archive path

__module__ = 'aiida.tools.archive.abstract'
__weakref__

list of weak references to the object (if defined)

get(entity_cls: Type[aiida.tools.archive.abstract.EntityType], **filters: Any) aiida.tools.archive.abstract.EntityType[source]

Return the entity for the given filters.

Example:

reader.get(orm.Node, pk=1)
Parameters
  • entity_cls – The type of the front-end entity

  • filters – the filters identifying the object to get

abstract get_backend() StorageBackend[source]

Return a ‘read-only’ backend for the archive.

abstract get_metadata() Dict[str, Any][source]

Return the top-level metadata.

Raises

CorruptStorage – if the top-level metadata cannot be read from the archive

graph(**kwargs: Any) Graph[source]

Return a provenance graph generator for the archive.

property path

Return the path to the archive.

querybuilder(**kwargs: Any) QueryBuilder[source]

Return a QueryBuilder instance, initialised with the archive backend.
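
For example, a sketch that collects the UUIDs of all nodes stored in an archive (the file name is illustrative, and fmt is assumed to come from get_format() as above):

from aiida import orm

with fmt.open('example.aiida', mode='r') as reader:
    qb = reader.querybuilder()
    qb.append(orm.Node, project=['uuid'])
    uuids = qb.all(flat=True)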

class aiida.tools.archive.abstract.ArchiveWriterAbstract(path: Union[str, pathlib.Path], fmt: aiida.tools.archive.abstract.ArchiveFormatAbstract, *, mode: Literal['x', 'w', 'a'] = 'x', compression: int = 6, **kwargs: Any)[source]

Bases: abc.ABC

Writer of an archive, to be used as a context manager.

__abstractmethods__ = frozenset({'bulk_insert', 'delete_object', 'put_object', 'update_metadata'})
__enter__() aiida.tools.archive.abstract.SelfType[source]

Start writing to the archive.

__exit__(*args, **kwargs) None[source]

Finalise the archive.

__init__(path: Union[str, pathlib.Path], fmt: aiida.tools.archive.abstract.ArchiveFormatAbstract, *, mode: Literal['x', 'w', 'a'] = 'x', compression: int = 6, **kwargs: Any)[source]

Initialise the writer.

Parameters
  • path – archive path

  • mode – mode to open the archive in: ‘x’ (exclusive), ‘w’ (write) or ‘a’ (append)

  • compression – default level of compression to use (integer from 0 to 9)

__module__ = 'aiida.tools.archive.abstract'
__weakref__

list of weak references to the object (if defined)

abstract bulk_insert(entity_type: EntityTypes, rows: List[Dict[str, Any]], allow_defaults: bool = False) None[source]

Add multiple rows of entity data to the archive.

Parameters
  • entity_type – The type of the entity

  • rows – A list of dictionaries, containing all fields of the backend model, except the id field (a.k.a. primary key), which will be generated dynamically

  • allow_defaults – If False, assert that each row contains all fields, otherwise, allow default values for missing fields.

Raises

IntegrityError – if the keys in a row are not a subset of the columns in the table
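
An illustrative sketch only: the row keys must match the columns of the backend model for the chosen entity type, and the user fields below are assumptions, not a guaranteed schema:

from aiida.orm.entities import EntityTypes

with fmt.open('new.aiida', mode='x') as writer:
    writer.bulk_insert(
        EntityTypes.USER,
        [{'email': 'user@example.com', 'first_name': '', 'last_name': '', 'institution': ''}],
    )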

property compression: int

Return the compression level.

abstract delete_object(key: str) None[source]

Delete the object from the archive.

Parameters

key – fully qualified identifier for the object within the repository.

Raises

IOError – if the file could not be deleted.

property mode: Literal['x', 'w', 'a']

Return the mode of the archive.

property path: pathlib.Path

Return the path to the archive.

abstract put_object(stream: BinaryIO, *, buffer_size: Optional[int] = None, key: Optional[str] = None) str[source]

Add an object to the archive.

Parameters
  • stream – byte stream to read the object from

  • buffer_size – Number of bytes to buffer when reading/writing

  • key – key to use for the object (if None will be auto-generated)

Returns

the key of the object

abstract update_metadata(data: Dict[str, Any], overwrite: bool = False) None[source]

Add key/value pairs to the top-level metadata.
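
A minimal writer sketch combining update_metadata and put_object (fmt is assumed to come from get_format(); the file name and content are illustrative):

import io

with fmt.open('new.aiida', mode='x') as writer:
    writer.update_metadata({'description': 'an example archive'})
    key = writer.put_object(io.BytesIO(b'some file content'))  # returns the generated object key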

aiida.tools.archive.abstract.get_format(name: str = 'sqlite_zip') aiida.tools.archive.abstract.ArchiveFormatAbstract[source]

Get the archive format instance.

Parameters

name – name of the archive format

Returns

archive format instance

aiida.tools.archive.common module

Shared resources for the archive.

class aiida.tools.archive.common.HTMLGetLinksParser(filter_extension=None)[source]

Bases: html.parser.HTMLParser

If a filter_extension is passed, only links with extension matching the given one will be returned.

__init__(filter_extension=None)[source]

Initialize and reset this instance.

If convert_charrefs is True (the default), all character references are automatically converted to the corresponding Unicode characters.

__module__ = 'aiida.tools.archive.common'

get_links()[source]

Return the links that were found during the parsing phase.

handle_starttag(tag, attrs)[source]

Store the urls encountered, if they match the request.

aiida.tools.archive.common.batch_iter(iterable: Iterable[Any], size: int, transform: Optional[Callable[[Any], Any]] = None) Iterable[Tuple[int, List[Any]]][source]

Yield an iterable in batches of a set number of items.

Note, the final yield may be less than this size.

Parameters

transform – a transform to apply to each item

Returns

(number of items, list of items)
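
For example, a small usage sketch:

from aiida.tools.archive.common import batch_iter

for count, batch in batch_iter(range(7), size=3, transform=str):
    print(count, batch)  # the final batch is smaller: 1 ['6']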

aiida.tools.archive.common.get_valid_import_links(url)[source]

Open the given URL, parse the HTML and return a list of valid links where the link file has a .aiida extension.

aiida.tools.archive.create module

Create an AiiDA archive.

The archive is a subset of the provenance graph, stored in a single file.

aiida.tools.archive.create._check_node_licenses(querybuilder: Callable[[], aiida.orm.querybuilder.QueryBuilder], node_ids: Set[int], allowed_licenses: Union[None, Sequence[str], Callable], forbidden_licenses: Union[None, Sequence[str], Callable], batch_size: int) None[source]

Check the nodes to be archived for disallowed licences.

aiida.tools.archive.create._check_unsealed_nodes(querybuilder: Callable[[], aiida.orm.querybuilder.QueryBuilder], node_ids: Set[int], batch_size: int) None[source]

Check no process nodes are unsealed, i.e. all processes have completed.

aiida.tools.archive.create._collect_all_entities(querybuilder: Callable[[], aiida.orm.querybuilder.QueryBuilder], entity_ids: Dict[aiida.orm.entities.EntityTypes, Set[int]], include_authinfos: bool, include_comments: bool, include_logs: bool, batch_size: int) Tuple[List[list], Set[aiida.orm.utils.links.LinkQuadruple]][source]

Collect all entities.

Returns

(group_id_to_node_id, link_data) and updates entity_ids

aiida.tools.archive.create._collect_required_entities(querybuilder: Callable[[], aiida.orm.querybuilder.QueryBuilder], entity_ids: Dict[aiida.orm.entities.EntityTypes, Set[int]], traversal_rules: Dict[str, bool], include_authinfos: bool, include_comments: bool, include_logs: bool, backend: aiida.orm.implementation.storage_backend.StorageBackend, batch_size: int) Tuple[List[list], Set[aiida.orm.utils.links.LinkQuadruple]][source]

Collect required entities, given a set of starting entities and provenance graph traversal rules.

Returns

(group_id_to_node_id, link_data) and updates entity_ids

aiida.tools.archive.create._stream_repo_files(key_format: str, writer: aiida.tools.archive.abstract.ArchiveWriterAbstract, node_ids: Set[int], backend: aiida.orm.implementation.storage_backend.StorageBackend, batch_size: int) None[source]

Collect all repository object keys from the nodes, then stream the files to the archive.

aiida.tools.archive.create.create_archive(entities: Optional[Iterable[Union[aiida.orm.computers.Computer, aiida.orm.nodes.node.Node, aiida.orm.groups.Group, aiida.orm.users.User]]], filename: Union[None, str, pathlib.Path] = None, *, archive_format: Optional[aiida.tools.archive.abstract.ArchiveFormatAbstract] = None, overwrite: bool = False, include_comments: bool = True, include_logs: bool = True, include_authinfos: bool = False, allowed_licenses: Optional[Union[list, Callable]] = None, forbidden_licenses: Optional[Union[list, Callable]] = None, strip_checkpoints: bool = True, batch_size: int = 1000, compression: int = 6, test_run: bool = False, backend: Optional[aiida.orm.implementation.storage_backend.StorageBackend] = None, **traversal_rules: bool) pathlib.Path[source]

Export AiiDA data to an archive file.

The export proceeds as follows:

First, gather all entity primary keys (per type) that need to be exported. This needs to proceed in the “reverse” order of relationships:

  • groups: input groups

  • group_to_nodes: from nodes in groups

  • nodes & links: from graph_traversal(input nodes & group_to_nodes)

  • computers: from input computers & computers of nodes

  • authinfos: from authinfos of computers

  • comments: from comments of nodes

  • logs: from logs of nodes

  • users: from users of nodes, groups, comments & authinfos

Now stream the full entities (per type) to the archive writer, in the order of relationships:

  • users

  • computers

  • authinfos

  • groups

  • nodes

  • comments

  • logs

  • group_to_nodes

  • links

Finally, stream the repository files for the exported nodes to the archive writer.

Note that the logging level and progress reporter should be set externally, for example:

from aiida.common.progress_reporter import set_progress_bar_tqdm
from aiida.tools.archive.create import EXPORT_LOGGER

EXPORT_LOGGER.setLevel('DEBUG')
set_progress_bar_tqdm(leave=True)
create_archive(...)
Parameters
  • entities – a list of entity instances (Computers, Groups and/or Nodes) to export; if None, all entities are exported.

  • filename – the filename (possibly including the absolute path) of the file on which to export.

  • overwrite – if True, overwrite the output file without asking, if it exists. If False, raise an ArchiveExportError if the output file already exists.

  • allowed_licenses – List or function. If a list, then checks whether all licenses of Data nodes are in the list. If a function, then calls function for licenses of Data nodes expecting True if license is allowed, False otherwise.

  • forbidden_licenses – List or function. If a list, then checks that none of the licenses of Data nodes are in the list. If a function, then calls the function for licenses of Data nodes, expecting True if the license is forbidden, False otherwise.

  • include_comments – In-/exclude export of comments for given node(s) in entities. Default: True, include comments in export (as well as relevant users).

  • include_logs – In-/exclude export of logs for given node(s) in entities. Default: True, include logs in export.

  • strip_checkpoints – Remove checkpoint keys from process node attributes. These contain serialized code and can cause security issues.

  • compression – level of compression to use (integer from 0 to 9)

  • batch_size – batch database query results in sub-collections to reduce memory usage

  • test_run – if True, do not write to file

  • backend – the backend to export from. If not specified, the default backend is used.

  • traversal_rules – graph traversal rules. See aiida.common.links.GraphTraversalRules for which rule names are toggleable and what the defaults are.

Raises

ArchiveExportError – if the output file already exists and overwrite is False
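
To export programmatically, a minimal sketch (the group label and node pk are illustrative):

from aiida import orm
from aiida.tools.archive.create import create_archive

group = orm.load_group(label='my_group')
node = orm.load_node(1234)
create_archive([group, node], filename='export.aiida', include_comments=False, overwrite=True)
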
aiida.tools.archive.create.get_init_summary(*, archive_version: str, outfile: pathlib.Path, collect_all: bool, include_authinfos: bool, include_comments: bool, include_logs: bool, traversal_rules: dict, compression: int) str[source]

Get summary for archive initialisation.

aiida.tools.archive.exceptions module

Module that defines the exceptions thrown by AiiDA’s archive module.

Note: in order not to override the built-in ImportError, both ImportError and ExportError are prefixed with Archive.

exception aiida.tools.archive.exceptions.ArchiveExportError[source]

Bases: aiida.tools.archive.exceptions.ExportImportException

Base class for all AiiDA export exceptions.

__module__ = 'aiida.tools.archive.exceptions'
exception aiida.tools.archive.exceptions.ArchiveImportError[source]

Bases: aiida.tools.archive.exceptions.ExportImportException

Base class for all AiiDA import exceptions.

__module__ = 'aiida.tools.archive.exceptions'
exception aiida.tools.archive.exceptions.ExportImportException[source]

Bases: aiida.common.exceptions.AiidaException

Base class for all AiiDA export/import module exceptions.

__module__ = 'aiida.tools.archive.exceptions'
exception aiida.tools.archive.exceptions.ExportValidationError[source]

Bases: aiida.tools.archive.exceptions.ArchiveExportError

Raised when validation fails during export, e.g. for non-sealed ProcessNodes.

__module__ = 'aiida.tools.archive.exceptions'
exception aiida.tools.archive.exceptions.ImportTestRun[source]

Bases: aiida.tools.archive.exceptions.ArchiveImportError

Raised during an import, before the transaction is committed.

__module__ = 'aiida.tools.archive.exceptions'
exception aiida.tools.archive.exceptions.ImportUniquenessError[source]

Bases: aiida.tools.archive.exceptions.ArchiveImportError

Raised when the user tries to violate a uniqueness constraint.

Similar to UniquenessError.

__module__ = 'aiida.tools.archive.exceptions'
exception aiida.tools.archive.exceptions.ImportValidationError[source]

Bases: aiida.tools.archive.exceptions.ArchiveImportError

Raised when validation fails during import, e.g. for parameter types and values.

__module__ = 'aiida.tools.archive.exceptions'

aiida.tools.archive.imports module

Import an archive.

class aiida.tools.archive.imports.CommentTransform(user_ids_archive_backend: Dict[int, int], node_ids_archive_backend: Dict[int, int])[source]

Bases: object

Callable to transform a Comment DB row between the source archive and the target backend.

__call__(row: dict) dict[source]

Perform the transform.

__init__(user_ids_archive_backend: Dict[int, int], node_ids_archive_backend: Dict[int, int])[source]
__module__ = 'aiida.tools.archive.imports'
__weakref__

list of weak references to the object (if defined)

class aiida.tools.archive.imports.GroupTransform(user_ids_archive_backend: Dict[int, int], labels: Set[str])[source]

Bases: object

Callable to transform a Group DB row between the source archive and the target backend.

__call__(row: dict) dict[source]

Perform the transform.

__init__(user_ids_archive_backend: Dict[int, int], labels: Set[str])[source]
__module__ = 'aiida.tools.archive.imports'
__weakref__

list of weak references to the object (if defined)

class aiida.tools.archive.imports.NodeTransform(user_ids_archive_backend: Dict[int, int], computer_ids_archive_backend: Dict[int, int], import_new_extras: bool)[source]

Bases: object

Callable to transform a Node DB row between the source archive and the target backend.

__call__(row: dict) dict[source]

Perform the transform.

__init__(user_ids_archive_backend: Dict[int, int], computer_ids_archive_backend: Dict[int, int], import_new_extras: bool)[source]
__module__ = 'aiida.tools.archive.imports'
__weakref__

list of weak references to the object (if defined)

aiida.tools.archive.imports._add_files_to_repo(backend_from: aiida.orm.implementation.storage_backend.StorageBackend, backend_to: aiida.orm.implementation.storage_backend.StorageBackend, new_keys: Set[str]) None[source]

Add the new files to the repository.

aiida.tools.archive.imports._add_new_entities(etype: aiida.orm.entities.EntityTypes, total: int, unique_field: str, backend_unique_id: dict, backend_from: aiida.orm.implementation.storage_backend.StorageBackend, backend_to: aiida.orm.implementation.storage_backend.StorageBackend, batch_size: int, transform: Callable[[dict], dict]) None[source]

Add new entities to the output backend and update the mapping of unique field -> id.

aiida.tools.archive.imports._get_new_object_keys(key_format: str, backend_from: aiida.orm.implementation.storage_backend.StorageBackend, backend_to: aiida.orm.implementation.storage_backend.StorageBackend, batch_size: int) Set[str][source]

Return the object keys that need to be added to the backend.

aiida.tools.archive.imports._import_authinfos(backend_from: aiida.orm.implementation.storage_backend.StorageBackend, backend_to: aiida.orm.implementation.storage_backend.StorageBackend, batch_size: int, user_ids_archive_backend: Dict[int, int], computer_ids_archive_backend: Dict[int, int]) None[source]

Import authinfos from one backend to another.

aiida.tools.archive.imports._import_comments(backend_from: aiida.orm.implementation.storage_backend.StorageBackend, backend: aiida.orm.implementation.storage_backend.StorageBackend, batch_size: int, user_ids_archive_backend: Dict[int, int], node_ids_archive_backend: Dict[int, int], merge_comments: Literal['leave', 'newest', 'overwrite']) Dict[int, int][source]

Import comments from one backend to another.

Returns

mapping of archive id to backend id

aiida.tools.archive.imports._import_computers(backend_from: aiida.orm.implementation.storage_backend.StorageBackend, backend_to: aiida.orm.implementation.storage_backend.StorageBackend, batch_size: int) Dict[int, int][source]

Import computers from one backend to another.

Returns

mapping of input backend id to output backend id

aiida.tools.archive.imports._import_groups(backend_from: aiida.orm.implementation.storage_backend.StorageBackend, backend_to: aiida.orm.implementation.storage_backend.StorageBackend, batch_size: int, user_ids_archive_backend: Dict[int, int], node_ids_archive_backend: Dict[int, int]) Set[str][source]

Import groups from the input backend, and add group -> node records.

Returns

Set of labels

Import links from one backend to another.

aiida.tools.archive.imports._import_logs(backend_from: aiida.orm.implementation.storage_backend.StorageBackend, backend_to: aiida.orm.implementation.storage_backend.StorageBackend, batch_size: int, node_ids_archive_backend: Dict[int, int]) Dict[int, int][source]

Import logs from one backend to another.

Returns

mapping of input backend id to output backend id

aiida.tools.archive.imports._import_nodes(backend_from: aiida.orm.implementation.storage_backend.StorageBackend, backend_to: aiida.orm.implementation.storage_backend.StorageBackend, batch_size: int, user_ids_archive_backend: Dict[int, int], computer_ids_archive_backend: Dict[int, int], import_new_extras: bool, merge_extras: Tuple[Literal['k', 'n'], Literal['c', 'n'], Literal['l', 'u', 'd']]) Dict[int, int][source]

Import nodes from one backend to another.

Returns

mapping of input backend id to output backend id

aiida.tools.archive.imports._import_users(backend_from: aiida.orm.implementation.storage_backend.StorageBackend, backend_to: aiida.orm.implementation.storage_backend.StorageBackend, batch_size: int) Dict[int, int][source]

Import users from one backend to another.

Returns

mapping of input backend id to output backend id

aiida.tools.archive.imports._make_import_group(group: Optional[aiida.orm.groups.Group], labels: Set[str], node_ids_archive_backend: Dict[int, int], backend_to: aiida.orm.implementation.storage_backend.StorageBackend, batch_size: int) Optional[int][source]

Make an import group containing all imported nodes.

Parameters
  • group – Use an existing group

  • labels – All existing group labels on the backend

  • node_ids_archive_backend – node pks to add to the group

Returns

The id of the group

aiida.tools.archive.imports._merge_node_extras(backend_from: aiida.orm.implementation.storage_backend.StorageBackend, backend_to: aiida.orm.implementation.storage_backend.StorageBackend, batch_size: int, backend_uuid_id: Dict[str, int], mode: Tuple[Literal['k', 'n'], Literal['c', 'n'], Literal['l', 'u', 'd']]) None[source]

Merge extras from the input backend with the ones in the output backend.

Parameters
  • backend_uuid_id – mapping of uuid to output backend id

  • mode – tuple of merge modes for extras

aiida.tools.archive.imports.import_archive(path: Union[str, pathlib.Path], *, archive_format: Optional[aiida.tools.archive.abstract.ArchiveFormatAbstract] = None, batch_size: int = 1000, import_new_extras: bool = True, merge_extras: Tuple[Literal['k', 'n'], Literal['c', 'n'], Literal['l', 'u', 'd']] = ('k', 'n', 'l'), merge_comments: Literal['leave', 'newest', 'overwrite'] = 'leave', include_authinfos: bool = False, create_group: bool = True, group: Optional[aiida.orm.groups.Group] = None, test_run: bool = False, backend: Optional[aiida.orm.implementation.storage_backend.StorageBackend] = None) Optional[int][source]

Import an archive into the AiiDA backend.

Parameters
  • path – the path to the archive

  • archive_format – The class for interacting with the archive

  • batch_size – Batch size for streaming database rows

  • import_new_extras – Keep extras on new nodes (except private aiida keys), else strip

  • merge_extras – Rules for merging extras into existing nodes. The first letter acts on extras that are present in the original node and not present in the imported node. Can be either: ‘k’ (keep it) or ‘n’ (do not keep it). The second letter acts on the imported extras that are not present in the original node. Can be either: ‘c’ (create it) or ‘n’ (do not create it). The third letter defines what to do in case of a name collision. Can be either: ‘l’ (leave the old value), ‘u’ (update with a new value), ‘d’ (delete the extra)

  • create_group – Add all imported nodes to the specified group, or an automatically created one

  • group – Group wherein all imported Nodes will be placed. If None, one will be auto-generated.

  • test_run – if True, do not commit changes to the backend (the import transaction is aborted)

  • backend – the backend to import to. If not specified, the default backend is used.

Returns

Primary Key of the import Group

Raises
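
To import programmatically, a minimal sketch (the file name is illustrative; the merge_extras shown is the default policy):

from aiida.tools.archive.imports import import_archive

# ('k', 'n', 'l'): keep existing-only extras, do not create new ones, leave old values on collision
group_pk = import_archive('export.aiida', merge_extras=('k', 'n', 'l'))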