aiida.tools.importexport package

Provides import/export functionalities.

To see history/git blame prior to the move to aiida.tools.importexport, explore the tree at https://github.com/aiidateam/aiida-core/tree/eebef392c81e8b130834a92e1d7abf5e2e30b3ce
Functionality: <tree>/aiida/orm/importexport.py
Tests: <tree>/aiida/backends/tests/test_export_and_import.py

exception aiida.tools.importexport.ArchiveExportError[source]

Bases: aiida.tools.importexport.common.exceptions.ExportImportException

Base class for all AiiDA export exceptions.

__module__ = 'aiida.tools.importexport.common.exceptions'
exception aiida.tools.importexport.ArchiveImportError[source]

Bases: aiida.tools.importexport.common.exceptions.ExportImportException

Base class for all AiiDA import exceptions.

__module__ = 'aiida.tools.importexport.common.exceptions'
class aiida.tools.importexport.ArchiveMetadata(export_version: str, aiida_version: str, unique_identifiers: Dict[str, str], all_fields_info: Dict[str, Dict[str, Dict[str, str]]], graph_traversal_rules: Optional[Dict[str, bool]] = None, entities_starting_set: Optional[Dict[str, List[str]]] = None, include_comments: Optional[bool] = None, include_logs: Optional[bool] = None, conversion_info: List[str] = <factory>)[source]

Bases: object

Class for storing metadata about this archive.

Required fields are necessary for importing the data back into AiiDA, whereas optional fields capture information about the export/migration process(es).

__annotations__ = {'aiida_version': <class 'str'>, 'all_fields_info': typing.Dict[str, typing.Dict[str, typing.Dict[str, str]]], 'conversion_info': typing.List[str], 'entities_starting_set': typing.Union[typing.Dict[str, typing.List[str]], NoneType], 'export_version': <class 'str'>, 'graph_traversal_rules': typing.Union[typing.Dict[str, bool], NoneType], 'include_comments': typing.Union[bool, NoneType], 'include_logs': typing.Union[bool, NoneType], 'unique_identifiers': typing.Dict[str, str]}
__dataclass_fields__ = {'aiida_version': Field(name='aiida_version',type=<class 'str'>,default=<dataclasses._MISSING_TYPE object>,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'all_fields_info': Field(name='all_fields_info',type=typing.Dict[str, typing.Dict[str, typing.Dict[str, str]]],default=<dataclasses._MISSING_TYPE object>,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=False,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'conversion_info': Field(name='conversion_info',type=typing.List[str],default=<dataclasses._MISSING_TYPE object>,default_factory=<class 'list'>,init=True,repr=False,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'entities_starting_set': Field(name='entities_starting_set',type=typing.Union[typing.Dict[str, typing.List[str]], NoneType],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'export_version': Field(name='export_version',type=<class 'str'>,default=<dataclasses._MISSING_TYPE object>,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'graph_traversal_rules': Field(name='graph_traversal_rules',type=typing.Union[typing.Dict[str, bool], NoneType],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'include_comments': Field(name='include_comments',type=typing.Union[bool, NoneType],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'include_logs': Field(name='include_logs',type=typing.Union[bool, NoneType],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'unique_identifiers': Field(name='unique_identifiers',type=typing.Dict[str, str],default=<dataclasses._MISSING_TYPE object>,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=False,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD)}
__dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)
__dict__ = mappingproxy({'__module__': 'aiida.tools.importexport.archive.common', '__annotations__': {'export_version': <class 'str'>, 'aiida_version': <class 'str'>, 'unique_identifiers': typing.Dict[str, str], 'all_fields_info': typing.Dict[str, typing.Dict[str, typing.Dict[str, str]]], 'graph_traversal_rules': typing.Union[typing.Dict[str, bool], NoneType], 'entities_starting_set': typing.Union[typing.Dict[str, typing.List[str]], NoneType], 'include_comments': typing.Union[bool, NoneType], 'include_logs': typing.Union[bool, NoneType], 'conversion_info': typing.List[str]}, '__doc__': 'Class for storing metadata about this archive.\n\n Required fields are necessary for importing the data back into AiiDA,\n whereas optional fields capture information about the export/migration process(es)\n ', 'graph_traversal_rules': None, 'entities_starting_set': None, 'include_comments': None, 'include_logs': None, '__dict__': <attribute '__dict__' of 'ArchiveMetadata' objects>, '__weakref__': <attribute '__weakref__' of 'ArchiveMetadata' objects>, '__dataclass_params__': _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False), '__dataclass_fields__': {'export_version': Field(name='export_version',type=<class 'str'>,default=<dataclasses._MISSING_TYPE object>,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'aiida_version': Field(name='aiida_version',type=<class 'str'>,default=<dataclasses._MISSING_TYPE object>,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'unique_identifiers': Field(name='unique_identifiers',type=typing.Dict[str, str],default=<dataclasses._MISSING_TYPE object>,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=False,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'all_fields_info': Field(name='all_fields_info',type=typing.Dict[str, typing.Dict[str, typing.Dict[str, str]]],default=<dataclasses._MISSING_TYPE object>,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=False,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'graph_traversal_rules': Field(name='graph_traversal_rules',type=typing.Union[typing.Dict[str, bool], NoneType],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'entities_starting_set': Field(name='entities_starting_set',type=typing.Union[typing.Dict[str, typing.List[str]], NoneType],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'include_comments': Field(name='include_comments',type=typing.Union[bool, NoneType],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'include_logs': Field(name='include_logs',type=typing.Union[bool, NoneType],default=None,default_factory=<dataclasses._MISSING_TYPE object>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD), 'conversion_info': Field(name='conversion_info',type=typing.List[str],default=<dataclasses._MISSING_TYPE object>,default_factory=<class 'list'>,init=True,repr=False,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD)}, '__init__': <function __create_fn__.<locals>.__init__>, 
'__repr__': <function __create_fn__.<locals>.__repr__>, '__eq__': <function __create_fn__.<locals>.__eq__>, '__hash__': None})
__eq__(other)

Return self==value.

__hash__ = None
__init__(export_version: str, aiida_version: str, unique_identifiers: Dict[str, str], all_fields_info: Dict[str, Dict[str, Dict[str, str]]], graph_traversal_rules: Optional[Dict[str, bool]] = None, entities_starting_set: Optional[Dict[str, List[str]]] = None, include_comments: Optional[bool] = None, include_logs: Optional[bool] = None, conversion_info: List[str] = <factory>) → None

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'aiida.tools.importexport.archive.common'
__repr__()

Return repr(self).

__weakref__

list of weak references to the object (if defined)

aiida_version: str
all_fields_info: Dict[str, Dict[str, Dict[str, str]]]
conversion_info: List[str]
entities_starting_set: Optional[Dict[str, List[str]]] = None
export_version: str
graph_traversal_rules: Optional[Dict[str, bool]] = None
include_comments: Optional[bool] = None
include_logs: Optional[bool] = None
unique_identifiers: Dict[str, str]
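
For illustration, a minimal sketch of constructing the dataclass directly; the field values below are hypothetical placeholders, not a real archive schema:

from aiida.tools.importexport import ArchiveMetadata

metadata = ArchiveMetadata(
    export_version='0.10',    # hypothetical archive schema version
    aiida_version='1.6.0',    # hypothetical AiiDA version
    unique_identifiers={'Node': 'uuid'},
    all_fields_info={'Node': {'uuid': {}}},
    include_comments=True,
    include_logs=True,
)
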
exception aiida.tools.importexport.ArchiveMigrationError[source]

Bases: aiida.tools.importexport.common.exceptions.ExportImportException

Base class for all AiiDA export archive migration exceptions.

__module__ = 'aiida.tools.importexport.common.exceptions'
class aiida.tools.importexport.ArchiveMigratorAbstract(filepath: str)[source]

Bases: abc.ABC

An abstract base class to define an archive migrator.

__abstractmethods__ = frozenset({'migrate'})
__dict__ = mappingproxy({'__module__': 'aiida.tools.importexport.archive.migrators', '__doc__': 'An abstract base class to define an archive migrator.', '__init__': <function ArchiveMigratorAbstract.__init__>, 'filepath': <property object>, 'migrate': <function ArchiveMigratorAbstract.migrate>, '__dict__': <attribute '__dict__' of 'ArchiveMigratorAbstract' objects>, '__weakref__': <attribute '__weakref__' of 'ArchiveMigratorAbstract' objects>, '__abstractmethods__': frozenset({'migrate'}), '_abc_impl': <_abc_data object>})
__init__(filepath: str)[source]

Initialise the migrator

Parameters
  • filepath – the path to the archive file

  • version – the version of the archive file or, if None, the version will be auto-retrieved.

__module__ = 'aiida.tools.importexport.archive.migrators'
__weakref__

list of weak references to the object (if defined)

_abc_impl = <_abc_data object>
property filepath

Return the input file path.

abstract migrate(version: str, filename: Optional[Union[str, pathlib.Path]], *, force: bool = False, work_dir: Optional[pathlib.Path] = None, **kwargs: Any) → Optional[pathlib.Path][source]

Migrate the archive to another version

Parameters
  • version – the version to migrate to

  • filename – the file path to migrate to. If None, the migrated archive will not be copied from the work_dir.

  • force – overwrite output file if it already exists

  • work_dir – The directory in which to perform the migration. If None, a temporary folder will be created and destroyed at the end of the process.

  • kwargs – key-word arguments specific to the concrete migrator implementation

Returns

path to the migrated archive or None if no migration performed (if filename is None, this will point to a path in the work_dir)

Raises

CorruptArchive: if the archive cannot be read

Raises

ArchiveMigrationError: if the archive cannot be migrated to the requested version

class aiida.tools.importexport.ArchiveMigratorJsonBase(filepath: str)[source]

Bases: aiida.tools.importexport.archive.migrators.ArchiveMigratorAbstract

A migrator base for the JSON compressed formats.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.tools.importexport.archive.migrators'
_abc_impl = <_abc_data object>
static _compress_archive_tar(in_path: pathlib.Path, out_path: pathlib.Path)[source]

Create a new compressed tarball (tar.gz) from a folder.

static _compress_archive_zip(in_path: pathlib.Path, out_path: pathlib.Path, compression: int)[source]

Create a new compressed zip file from a folder.

_extract_archive(filepath: pathlib.Path, callback: Callable[[str, Any], None])[source]

Extract the archive to a filepath.

static _move_file(in_path: pathlib.Path, out_path: pathlib.Path)[source]

Move a file to another path, deleting the target path first if it exists.

_perform_migration(work_dir: pathlib.Path, pathway: List[str], out_compression: str, out_path: Optional[Union[str, pathlib.Path]]) → pathlib.Path[source]

Perform the migration(s) in the work directory, compress (if necessary), then move to the out_path (if not None).

_retrieve_version() → str[source]

Retrieve the version of the input archive.

migrate(version: str, filename: Optional[Union[str, pathlib.Path]], *, force: bool = False, work_dir: Optional[pathlib.Path] = None, out_compression: str = 'zip', **kwargs) → Optional[pathlib.Path][source]

Migrate the archive to another version

Parameters
  • version – the version to migrate to

  • filename – the file path to migrate to. If None, the migrated archive will not be copied from the work_dir.

  • force – overwrite output file if it already exists

  • work_dir – The directory in which to perform the migration. If None, a temporary folder will be created and destroyed at the end of the process.

  • kwargs – key-word arguments specific to the concrete migrator implementation

Returns

path to the migrated archive or None if no migration performed (if filename is None, this will point to a path in the work_dir)

Raises

CorruptArchive: if the archive cannot be read

Raises

ArchiveMigrationError: if the archive cannot be migrated to the requested version

class aiida.tools.importexport.ArchiveMigratorJsonTar(filepath: str)[source]

Bases: aiida.tools.importexport.archive.migrators.ArchiveMigratorJsonBase

A migrator for a JSON tar compressed format.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.tools.importexport.archive.migrators'
_abc_impl = <_abc_data object>
_extract_archive(filepath: pathlib.Path, callback: Callable[[str, Any], None])[source]

Extract the archive to a filepath.

_retrieve_version() → str[source]

Retrieve the version of the input archive.

class aiida.tools.importexport.ArchiveMigratorJsonZip(filepath: str)[source]

Bases: aiida.tools.importexport.archive.migrators.ArchiveMigratorJsonBase

A migrator for a JSON zip compressed format.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.tools.importexport.archive.migrators'
_abc_impl = <_abc_data object>
_extract_archive(filepath: pathlib.Path, callback: Callable[[str, Any], None])[source]

Extract the archive to a filepath.

_retrieve_version() → str[source]

Retrieve the version of the input archive.

class aiida.tools.importexport.ArchiveReaderAbstract(filename: str, **kwargs: Any)[source]

Bases: abc.ABC

An abstract interface for AiiDA archive readers.

An ArchiveReader implementation is intended to be used with a context:

with ArchiveReader(filename) as reader:
    reader.entity_count('Node')
__abstractmethods__ = frozenset({'compatible_export_version', 'entity_count', 'export_version', 'file_format_verbose', 'iter_entity_fields', 'iter_group_uuids', 'iter_link_data', 'iter_node_repos', 'iter_node_uuids', 'link_count', 'metadata'})
__dict__ = mappingproxy({'__module__': 'aiida.tools.importexport.archive.readers', '__doc__': "An abstract interface for AiiDA archive readers.\n\n An ``ArchiveReader`` implementation is intended to be used with a context::\n\n with ArchiveReader(filename) as reader:\n reader.entity_count('Node')\n\n ", '__init__': <function ArchiveReaderAbstract.__init__>, 'filename': <property object>, 'file_format_verbose': <property object>, 'compatible_export_version': <property object>, '__enter__': <function ArchiveReaderAbstract.__enter__>, '__exit__': <function ArchiveReaderAbstract.__exit__>, 'assert_within_context': <function ArchiveReaderAbstract.assert_within_context>, 'export_version': <property object>, 'check_version': <function ArchiveReaderAbstract.check_version>, 'metadata': <property object>, 'entity_names': <property object>, 'entity_count': <function ArchiveReaderAbstract.entity_count>, 'link_count': <property object>, 'iter_entity_fields': <function ArchiveReaderAbstract.iter_entity_fields>, 'iter_node_uuids': <function ArchiveReaderAbstract.iter_node_uuids>, 'iter_group_uuids': <function ArchiveReaderAbstract.iter_group_uuids>, 'iter_link_data': <function ArchiveReaderAbstract.iter_link_data>, 'iter_node_repos': <function ArchiveReaderAbstract.iter_node_repos>, 'node_repository': <function ArchiveReaderAbstract.node_repository>, '__dict__': <attribute '__dict__' of 'ArchiveReaderAbstract' objects>, '__weakref__': <attribute '__weakref__' of 'ArchiveReaderAbstract' objects>, '__abstractmethods__': frozenset({'iter_node_repos', 'file_format_verbose', 'metadata', 'link_count', 'export_version', 'iter_entity_fields', 'iter_group_uuids', 'entity_count', 'iter_link_data', 'iter_node_uuids', 'compatible_export_version'}), '_abc_impl': <_abc_data object>})
__enter__() → aiida.tools.importexport.archive.readers.ArchiveReaderAbstract[source]
__exit__(exctype: Optional[Type[BaseException]], excinst: Optional[BaseException], exctb: Optional[traceback])[source]
__init__(filename: str, **kwargs: Any)[source]

An archive reader

Parameters

filename – the filename (possibly including the absolute path) of the file to import.

__module__ = 'aiida.tools.importexport.archive.readers'
__weakref__

list of weak references to the object (if defined)

_abc_impl = <_abc_data object>
assert_within_context()[source]

Assert that the method is called within a context.

Raises

~aiida.common.exceptions.InvalidOperation: if not called within a context

check_version()[source]

Check the version compatibility of the archive.

Raises

~aiida.tools.importexport.common.exceptions.IncompatibleArchiveVersionError: If the version is not compatible

abstract property compatible_export_version

Return the export version that this reader is compatible with.

abstract entity_count(name: str) → int[source]

Return the count of an entity or None if not contained in the archive.

property entity_names

Return list of all entity names.

abstract property export_version

Return the export version.

Raises

CorruptArchive – If the version cannot be retrieved.

abstract property file_format_verbose

The file format name.

property filename

Return the name of the file that is being read from.

abstract iter_entity_fields(name: str, fields: Optional[Tuple[str, ]] = None) → Iterator[Tuple[int, Dict[str, Any]]][source]

Iterate over entities and yield their pk and database fields.

abstract iter_group_uuids() → Iterator[Tuple[str, Set[str]]][source]

Iterate over group UUIDs and the set of node UUIDs they contain.

abstract iter_link_data()[source]

Iterate over links: {‘input’: <UUID>, ‘output’: <UUID>, ‘label’: <LABEL>, ‘type’: <TYPE>}

abstract iter_node_repos(uuids: Iterable[str], callback: Callable[[str, Any], None] = <function null_callback>) → Iterator[aiida.common.folders.Folder][source]

Yield temporary folders containing the contents of the repository for each node.

Parameters
  • uuids – UUIDs of the nodes over whose repository folders to iterate

  • callback

    a callback to report on the process, callback(action, value), with the following callback signatures:

    • callback('init', {'total': <int>, 'description': <str>}),

      to signal the start of a process, its total iterations and description

    • callback('update', <int>),

      to signal an update to the process and the number of iterations to progress

Raises

CorruptArchive – If the repository does not exist.

abstract iter_node_uuids() → Iterator[str][source]

Iterate over node UUIDs.

abstract property link_count

Return the count of links.

abstract property metadata

Return the full (validated) archive metadata.

node_repository(uuid: str) → aiida.common.folders.Folder[source]

Return a temporary folder with the contents of the repository for a single node.

Parameters

uuid – The UUID of the node

Raises

CorruptArchive – If the repository does not exist.
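
Putting the documented methods together, a usage sketch with a concrete reader such as ReaderJsonZip (the archive path is hypothetical):

from aiida.tools.importexport import ReaderJsonZip

with ReaderJsonZip('export.aiida') as reader:  # hypothetical archive path
    reader.check_version()                     # raises IncompatibleArchiveVersionError if incompatible
    print(reader.export_version, reader.entity_count('Node'))
    for pk, fields in reader.iter_entity_fields('Node'):
        pass  # process the database fields of each node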

class aiida.tools.importexport.ArchiveWriterAbstract(filepath: Union[str, pathlib.Path], **kwargs: Any)[source]

Bases: abc.ABC

An abstract interface for AiiDA archive writers.

__abstractmethods__ = frozenset({'close', 'export_version', 'file_format_verbose', 'open', 'write_entity_data', 'write_group_nodes', 'write_link', 'write_metadata', 'write_node_repo_folder'})
__dict__ = mappingproxy({'__module__': 'aiida.tools.importexport.archive.writers', '__doc__': 'An abstract interface for AiiDA archive writers.', '__init__': <function ArchiveWriterAbstract.__init__>, 'filepath': <property object>, 'export_info': <property object>, 'file_format_verbose': <property object>, 'export_version': <property object>, '__enter__': <function ArchiveWriterAbstract.__enter__>, '__exit__': <function ArchiveWriterAbstract.__exit__>, 'assert_within_context': <function ArchiveWriterAbstract.assert_within_context>, 'add_export_info': <function ArchiveWriterAbstract.add_export_info>, '_remove_filepath': <function ArchiveWriterAbstract._remove_filepath>, 'open': <function ArchiveWriterAbstract.open>, 'close': <function ArchiveWriterAbstract.close>, 'write_metadata': <function ArchiveWriterAbstract.write_metadata>, 'write_link': <function ArchiveWriterAbstract.write_link>, 'write_group_nodes': <function ArchiveWriterAbstract.write_group_nodes>, 'write_entity_data': <function ArchiveWriterAbstract.write_entity_data>, 'write_node_repo_folder': <function ArchiveWriterAbstract.write_node_repo_folder>, '__dict__': <attribute '__dict__' of 'ArchiveWriterAbstract' objects>, '__weakref__': <attribute '__weakref__' of 'ArchiveWriterAbstract' objects>, '__abstractmethods__': frozenset({'open', 'file_format_verbose', 'close', 'write_node_repo_folder', 'write_group_nodes', 'export_version', 'write_link', 'write_entity_data', 'write_metadata'}), '_abc_impl': <_abc_data object>})
__enter__() → aiida.tools.importexport.archive.writers.ArchiveWriterAbstract[source]

Open the contextmanager

__exit__(exctype: Optional[Type[BaseException]], excinst: Optional[BaseException], exctb: Optional[traceback])[source]

Close the contextmanager

__init__(filepath: Union[str, pathlib.Path], **kwargs: Any)[source]

Initiate the writer.

Parameters
  • filepath – the path to the file to export to.

  • kwargs – keyword arguments specific to the writer implementation.

__module__ = 'aiida.tools.importexport.archive.writers'
__weakref__

list of weak references to the object (if defined)

_abc_impl = <_abc_data object>
_remove_filepath()[source]

To run before moving the final export from a temporary folder.

add_export_info(key: str, value: Any)[source]

Record information about the export process.

This information can be accessed via writer.export_info; it is reset on entering the context manager.

assert_within_context()[source]

Assert that the method is called within a context.

Raises

~aiida.common.exceptions.InvalidOperation: if not called within a context

abstract close(excepted: bool)[source]

Finalise the export.

This method is called on exiting a context manager.

Parameters

excepted – Whether

property export_info

Return information stored during the export process.

abstract property export_version

Return the export version.

abstract property file_format_verbose

Return the file format name.

property filepath

Return the filepath to write to.

abstract open()[source]

Setup the export process.

This method is called on entering a context manager.

abstract write_entity_data(name: str, pk: int, id_key: str, fields: Dict[str, Any])[source]

Write the data for a single DB entity.

Parameters
  • name – the name of the entity (e.g. ‘NODE’)

  • pk – the primary key of the entity (unique for the current DB only)

  • id_key – the key within fields that denotes the unique identifier of the entity (unique for all DBs)

  • fields – mapping of DB fields to store -> values

abstract write_group_nodes(uuid: str, node_uuids: List[str])[source]

Write a mapping of a group to the nodes it contains.

Parameters
  • uuid – the UUID of the group

  • node_uuids – the list of node UUIDs the group contains

abstract write_link(data)[source]

Write a dictionary of information for a single provenance link.

Parameters

data – {'input': <UUID_STR>, 'output': <UUID_STR>, 'label': <LABEL_STR>, 'type': <TYPE_STR>}

abstract write_metadata(data: aiida.tools.importexport.archive.common.ArchiveMetadata)[source]

Write the metadata of the export process.

abstract write_node_repo_folder(uuid: str, path: Union[str, pathlib.Path], overwrite: bool = True)[source]

Write a node repository to the archive.

Parameters
  • uuid – The UUID of the node

  • path – The path to the repository folder on disk

  • overwrite – Allow to overwrite existing path in archive
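
Writers follow the same context-manager pattern as the readers (open is called on entering, close on exiting). A minimal sketch with a concrete writer such as WriterJsonZip; the output path, metadata values and UUIDs below are hypothetical placeholders:

from aiida.tools.importexport import ArchiveMetadata, WriterJsonZip

metadata = ArchiveMetadata(  # hypothetical placeholder values
    export_version='0.10', aiida_version='1.6.0',
    unique_identifiers={'Node': 'uuid'}, all_fields_info={'Node': {'uuid': {}}},
)

with WriterJsonZip('export.aiida') as writer:  # hypothetical output path
    writer.write_metadata(metadata)
    writer.write_group_nodes('0000-group-uuid', ['0000-node-uuid'])  # hypothetical UUIDs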

class aiida.tools.importexport.CacheFolder(path: Union[pathlib.Path, str], *, encoding: str = 'utf8')[source]

Bases: object

A class to encapsulate a folder path with cached read/writes.

The class can be used as a context manager, and will flush the cache on exit:

with CacheFolder(path) as folder:
    # these are stored in memory (no disk write)
    folder.write_text('path/to/file.txt', 'content')
    folder.write_json('path/to/data.json', {'a': 1})
    # these will be read from memory
    text = folder.read_text('path/to/file.txt')
    text = folder.load_json('path/to/data.json')

# all files will now have been written to disk
__dict__ = mappingproxy({'__module__': 'aiida.tools.importexport.archive.common', '__doc__': "A class to encapsulate a folder path with cached read/writes.\n\n The class can be used as a context manager, and will flush the cache on exit::\n\n with CacheFolder(path) as folder:\n # these are stored in memory (no disk write)\n folder.write_text('path/to/file.txt', 'content')\n folder.write_json('path/to/data.json', {'a': 1})\n # these will be read from memory\n text = folder.read_text('path/to/file.txt')\n text = folder.load_json('path/to/data.json')\n\n # all files will now have been written to disk\n\n ", '__init__': <function CacheFolder.__init__>, '_write_object': <function CacheFolder._write_object>, 'flush': <function CacheFolder.flush>, '_limit_cache': <function CacheFolder._limit_cache>, 'get_path': <function CacheFolder.get_path>, 'write_text': <function CacheFolder.write_text>, 'read_text': <function CacheFolder.read_text>, 'write_json': <function CacheFolder.write_json>, 'load_json': <function CacheFolder.load_json>, 'remove_file': <function CacheFolder.remove_file>, '__enter__': <function CacheFolder.__enter__>, '__exit__': <function CacheFolder.__exit__>, '__dict__': <attribute '__dict__' of 'CacheFolder' objects>, '__weakref__': <attribute '__weakref__' of 'CacheFolder' objects>})
__enter__()[source]

Enter the contextmanager.

__exit__(exctype: Optional[Type[BaseException]], excinst: Optional[BaseException], exctb: Optional[traceback])[source]

Exit the contextmanager.

__init__(path: Union[pathlib.Path, str], *, encoding: str = 'utf8')[source]

Initialise cached folder.

Parameters
  • path – folder path to cache

  • encoding – encoding of text to read/write

__module__ = 'aiida.tools.importexport.archive.common'
__weakref__

list of weak references to the object (if defined)

_limit_cache()[source]

Ensure the cache does not exceed a set limit.

Content is uncached on a First-In-First-Out basis.

_write_object(path: str, ctype: str, content: Any)[source]

Write an object from the cache to disk.

Parameters
  • path – relative path of file

  • ctype – the type of the content

  • content – the content to write

flush()[source]

Flush the cache.

get_path(flush=True) → pathlib.Path[source]

Return the path.

Parameters

flush – flush the cache before returning

load_json(path: str, ensure_copy: bool = False) → Tuple[bool, dict][source]

Load a json file from the cache folder.

Important: if the dict is returned directly from the cache, any mutations will affect the cached dict.

Parameters
  • path – path relative to base folder

  • ensure_copy – ensure the dict is a copy of that from the cache

Returns

a tuple (from cache, the content); if the content comes from the cache, mutations will directly affect the cached dict
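
For illustration, a sketch of the ensure_copy behaviour (the folder path and file name are hypothetical, and the folder is assumed to exist):

from aiida.tools.importexport import CacheFolder

with CacheFolder('/tmp/archive-cache') as folder:  # hypothetical folder path
    folder.write_json('metadata.json', {'export_version': '0.10'})
    _from_cache, data = folder.load_json('metadata.json', ensure_copy=True)
    data['export_version'] = '0.9'  # safe: a copy was returned, the cached dict is untouched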

read_text(path) → str[source]

Read text from the cache or base folder.

Parameters

path – path relative to base folder

remove_file(path)[source]

Remove a file from both the cache and base folder (if present).

Parameters

path – path relative to base folder

write_json(path: str, data: dict)[source]

Write dict to the folder, to be serialized as json.

The dictionary is stored in memory, until the cache is flushed, at which point the dictionary is serialized to json and written to disk.

Parameters

path – path relative to base folder

write_text(path: str, content: str)[source]

Write text to the cache.

Parameters

path – path relative to base folder

exception aiida.tools.importexport.CorruptArchive[source]

Bases: aiida.tools.importexport.common.exceptions.ExportImportException

Raised when an operation is applied to a corrupt export archive, e.g. missing files or invalid formats.

__module__ = 'aiida.tools.importexport.common.exceptions'
exception aiida.tools.importexport.DanglingLinkError[source]

Bases: aiida.tools.importexport.common.exceptions.MigrationValidationError

Raised when an export archive is detected to contain dangling links when importing.

__module__ = 'aiida.tools.importexport.common.exceptions'
class aiida.tools.importexport.ExportFileFormat(value)[source]

Bases: str, enum.Enum

Archive file formats

TAR_GZIPPED = 'tar.gz'
ZIP = 'zip'
__dict__ = mappingproxy({'_generate_next_value_': <function Enum._generate_next_value_>, '__module__': 'aiida.tools.importexport.common.config', '__doc__': 'Archive file formats', '__dict__': <attribute '__dict__' of 'ExportFileFormat' objects>, '__weakref__': <attribute '__weakref__' of 'ExportFileFormat' objects>, '_member_names_': ['ZIP', 'TAR_GZIPPED'], '_member_map_': {'ZIP': <ExportFileFormat.ZIP: 'zip'>, 'TAR_GZIPPED': <ExportFileFormat.TAR_GZIPPED: 'tar.gz'>}, '_member_type_': <class 'str'>, '_value2member_map_': {'zip': <ExportFileFormat.ZIP: 'zip'>, 'tar.gz': <ExportFileFormat.TAR_GZIPPED: 'tar.gz'>}, 'ZIP': <ExportFileFormat.ZIP: 'zip'>, 'TAR_GZIPPED': <ExportFileFormat.TAR_GZIPPED: 'tar.gz'>, '__repr__': <function Enum.__repr__>, '__str__': <function Enum.__str__>, '__format__': <function Enum.__format__>, '__new__': <function Enum.__new__>})
__module__ = 'aiida.tools.importexport.common.config'
__weakref__

list of weak references to the object (if defined)

_generate_next_value_(start, count, last_values)
_member_map_ = {'TAR_GZIPPED': <ExportFileFormat.TAR_GZIPPED: 'tar.gz'>, 'ZIP': <ExportFileFormat.ZIP: 'zip'>}
_member_names_ = ['ZIP', 'TAR_GZIPPED']
_member_type_

alias of builtins.str

_value2member_map_ = {'tar.gz': <ExportFileFormat.TAR_GZIPPED: 'tar.gz'>, 'zip': <ExportFileFormat.ZIP: 'zip'>}
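
Since the members are backed by plain strings, they can be compared with or looked up by their values, for example:

from aiida.tools.importexport import ExportFileFormat

assert ExportFileFormat.ZIP == 'zip'
assert ExportFileFormat('tar.gz') is ExportFileFormat.TAR_GZIPPED
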
exception aiida.tools.importexport.ExportImportException[source]

Bases: aiida.common.exceptions.AiidaException

Base class for all AiiDA export/import module exceptions.

__module__ = 'aiida.tools.importexport.common.exceptions'
exception aiida.tools.importexport.ExportValidationError[source]

Bases: aiida.tools.importexport.common.exceptions.ArchiveExportError

Raised when validation fails during export, e.g. for non-sealed ProcessNode s.

__module__ = 'aiida.tools.importexport.common.exceptions'
exception aiida.tools.importexport.ImportUniquenessError[source]

Bases: aiida.tools.importexport.common.exceptions.ArchiveImportError

Raised when the user tries to violate a uniqueness constraint.

Similar to UniquenessError.

__module__ = 'aiida.tools.importexport.common.exceptions'
exception aiida.tools.importexport.ImportValidationError[source]

Bases: aiida.tools.importexport.common.exceptions.ArchiveImportError

Raised when validation fails during import, e.g. for parameter types and values.

__module__ = 'aiida.tools.importexport.common.exceptions'
exception aiida.tools.importexport.IncompatibleArchiveVersionError[source]

Bases: aiida.tools.importexport.common.exceptions.ExportImportException

Raised when trying to import an export archive with an incompatible schema version.

__module__ = 'aiida.tools.importexport.common.exceptions'
exception aiida.tools.importexport.MigrationValidationError[source]

Bases: aiida.tools.importexport.common.exceptions.ArchiveMigrationError

Raised when validation fails during migration of export archives.

__module__ = 'aiida.tools.importexport.common.exceptions'
exception aiida.tools.importexport.ProgressBarError[source]

Bases: aiida.tools.importexport.common.exceptions.ExportImportException

Something is wrong with setting up the tqdm progress bar

__module__ = 'aiida.tools.importexport.common.exceptions'
class aiida.tools.importexport.ReaderJsonBase(filename: str, sandbox_in_repo: bool = False, **kwargs: Any)[source]

Bases: aiida.tools.importexport.archive.readers.ArchiveReaderAbstract

A reader base for the JSON compressed formats.

FILENAME_DATA = 'data.json'
FILENAME_METADATA = 'metadata.json'
REPO_FOLDER = 'nodes'
__abstractmethods__ = frozenset({})
__enter__()[source]
__exit__(exctype: Optional[Type[BaseException]], excinst: Optional[BaseException], exctb: Optional[traceback])[source]
__init__(filename: str, sandbox_in_repo: bool = False, **kwargs: Any)[source]

A reader for JSON compressed archives.

Parameters
  • filename – the filename (possibly including the absolute path) of the file to import.

  • sandbox_in_repo – Create the temporary uncompressed folder within the aiida repository

__module__ = 'aiida.tools.importexport.archive.readers'
_abc_impl = <_abc_data object>
_extract(*, path_prefix: str, callback: Callable[[str, Any], None])[source]

Extract repository data to a temporary folder.

Parameters
  • path_prefix – Only extract paths starting with this prefix.

  • callback

    a callback to report on the process, callback(action, value), with the following callback signatures:

    • callback('init', {'total': <int>, 'description': <str>}),

      to signal the start of a process, its total iterations and description

    • callback('update', <int>),

      to signal an update to the process and the number of iterations to progress

Raises

TypeError – if parameter types are not respected

_get_data()[source]

Retrieve the data JSON.

_get_metadata()[source]

Retrieve the metadata JSON.

property compatible_export_version

Return the export version that this reader is compatible with.

entity_count(name: str) → int[source]

Return the count of an entity or None if not contained in the archive.

property export_version

Return the export version.

Raises

CorruptArchive – If the version cannot be retrieved.

property file_format_verbose

The file format name.

iter_entity_fields(name: str, fields: Optional[Tuple[str, ]] = None) → Iterator[Tuple[int, Dict[str, Any]]][source]

Iterate over entities and yield their pk and database fields.

iter_group_uuids() → Iterator[Tuple[str, Set[str]]][source]

Iterate over group UUIDs and the set of node UUIDs they contain.

iter_link_data()[source]

Iterate over links: {‘input’: <UUID>, ‘output’: <UUID>, ‘label’: <LABEL>, ‘type’: <TYPE>}

iter_node_repos(uuids: Iterable[str], callback: Callable[[str, Any], None] = <function null_callback>) → Iterator[aiida.common.folders.Folder][source]

Yield temporary folders containing the contents of the repository for each node.

Parameters
  • uuids – UUIDs of the nodes over whose repository folders to iterate

  • callback

    a callback to report on the process, callback(action, value), with the following callback signatures:

    • callback('init', {'total': <int>, 'description': <str>}),

      to signal the start of a process, its total iterations and description

    • callback('update', <int>),

      to signal an update to the process and the number of iterations to progress

Raises

CorruptArchive – If the repository does not exist.

iter_node_uuids() → Iterator[str][source]

Iterate over node UUIDs.

property link_count

Return the count of links.

property metadata

Return the full (validated) archive metadata.

class aiida.tools.importexport.ReaderJsonFolder(filename: str, sandbox_in_repo: bool = False, **kwargs: Any)[source]

Bases: aiida.tools.importexport.archive.readers.ReaderJsonBase

A reader for a JSON plain folder format.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.tools.importexport.archive.readers'
_abc_impl = <_abc_data object>
_extract(*, path_prefix: str, callback: Callable[[str, Any], None] = <function null_callback>)[source]

Extract repository data to a temporary folder.

Parameters
  • path_prefix – Only extract paths starting with this prefix.

  • callback

    a callback to report on the process, callback(action, value), with the following callback signatures:

    • callback('init', {'total': <int>, 'description': <str>}),

      to signal the start of a process, its total iterations and description

    • callback('update', <int>),

      to signal an update to the process and the number of iterations to progress

Raises

TypeError – if parameter types are not respected

_get_data()[source]

Retrieve the data JSON.

_get_metadata()[source]

Retrieve the metadata JSON.

property file_format_verbose

The file format name.

class aiida.tools.importexport.ReaderJsonTar(filename: str, sandbox_in_repo: bool = False, **kwargs: Any)[source]

Bases: aiida.tools.importexport.archive.readers.ReaderJsonBase

A reader for a JSON tar compressed format.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.tools.importexport.archive.readers'
_abc_impl = <_abc_data object>
_extract(*, path_prefix: str, callback: Callable[[str, Any], None] = <function null_callback>)[source]

Extract repository data to a temporary folder.

Parameters
  • path_prefix – Only extract paths starting with this prefix.

  • callback

    a callback to report on the process, callback(action, value), with the following callback signatures:

    • callback('init', {'total': <int>, 'description': <str>}),

      to signal the start of a process, its total iterations and description

    • callback('update', <int>),

      to signal an update to the process and the number of iterations to progress

Raises

TypeError – if parameter types are not respected

_get_data()[source]

Retrieve the data JSON.

_get_metadata()[source]

Retrieve the metadata JSON.

property file_format_verbose

The file format name.

class aiida.tools.importexport.ReaderJsonZip(filename: str, sandbox_in_repo: bool = False, **kwargs: Any)[source]

Bases: aiida.tools.importexport.archive.readers.ReaderJsonBase

A reader for a JSON zip compressed format.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.tools.importexport.archive.readers'
_abc_impl = <_abc_data object>
_extract(*, path_prefix: str, callback: Callable[[str, Any], None] = <function null_callback>)[source]

Extract repository data to a temporary folder.

Parameters
  • path_prefix – Only extract paths starting with this prefix.

  • callback

    a callback to report on the process, callback(action, value), with the following callback signatures:

    • callback('init', {'total': <int>, 'description': <str>}),

      to signal the start of a process, its total iterations and description

    • callback('update', <int>),

      to signal an update to the process and the number of iterations to progress

Raises

TypeError – if parameter types are not respected

_get_data()[source]

Retrieve the data JSON.

_get_metadata()[source]

Retrieve the metadata JSON.

property file_format_verbose

The file format name.

class aiida.tools.importexport.WriterJsonFolder(filepath: str, folder: aiida.common.folders.Folder = None, **kwargs: Any)[source]

Bases: aiida.tools.importexport.archive.writers.ArchiveWriterAbstract

An archive writer, which writes database data as a single JSON and repository data in a folder system.

This writer is mainly intended for backward compatibility with export_tree.

__abstractmethods__ = frozenset({})
__init__(filepath: str, folder: aiida.common.folders.Folder = None, **kwargs: Any)[source]

Initiate the writer.

Parameters
  • folder – a folder to write the archive to.

  • filepath – dummy value not used

__module__ = 'aiida.tools.importexport.archive.writers'
_abc_impl = <_abc_data object>
close(excepted: bool)[source]

Finalise the export.

This method is called on exiting a context manager.

Parameters

excepted – Whether

property export_version

Return the export version.

property file_format_verbose

Return the file format name.

open()[source]

Setup the export process.

This method is called on entering a context manager.

write_entity_data(name: str, pk: int, id_key: str, fields: Dict[str, Any])[source]

Write the data for a single DB entity.

Parameters
  • name – the name of the entity (e.g. ‘NODE’)

  • pk – the primary key of the entity (unique for the current DB only)

  • id_key – the key within fields that denotes the unique identifier of the entity (unique for all DBs)

  • fields – mapping of DB fields to store -> values

write_group_nodes(uuid: str, node_uuids: List[str])[source]

Write a mapping of a group to the nodes it contains.

Parameters
  • uuid – the UUID of the group

  • node_uuids – the list of node UUIDs the group contains

write_link(data)[source]

Write a dictionary of information for a single provenance link.

Parameters

data – {'input': <UUID_STR>, 'output': <UUID_STR>, 'label': <LABEL_STR>, 'type': <TYPE_STR>}

write_metadata(data: aiida.tools.importexport.archive.common.ArchiveMetadata)[source]

Write the metadata of the export process.

write_node_repo_folder(uuid: str, path: Union[str, pathlib.Path], overwrite: bool = True)[source]

Write a node repository to the archive.

Parameters
  • uuid – The UUID of the node

  • path – The path to the repository folder on disk

  • overwrite – Allow to overwrite existing path in archive

class aiida.tools.importexport.WriterJsonTar(filepath: Union[str, pathlib.Path], **kwargs: Any)[source]

Bases: aiida.tools.importexport.archive.writers.ArchiveWriterAbstract

An archive writer, which writes database data as a single JSON and repository data in a folder system.

The entire containing folder is then compressed as a tar file.

__abstractmethods__ = frozenset({})
__module__ = 'aiida.tools.importexport.archive.writers'
_abc_impl = <_abc_data object>
close(excepted: bool)[source]

Finalise the export.

This method is called on exiting a context manager.

Parameters

excepted – Whether

property export_version

Return the export version.

property file_format_verbose

Return the file format name.

open()[source]

Setup the export process.

This method is called on entering a context manager.

write_entity_data(name: str, pk: int, id_key: str, fields: Dict[str, Any])[source]

Write the data for a single DB entity.

Parameters
  • name – the name of the entity (e.g. ‘NODE’)

  • pk – the primary key of the entity (unique for the current DB only)

  • id_key – the key within fields that denotes the unique identifier of the entity (unique for all DBs)

  • fields – mapping of DB fields to store -> values

write_group_nodes(uuid: str, node_uuids: List[str])[source]

Write a mapping of a group to the nodes it contains.

Parameters
  • uuid – the UUID of the group

  • node_uuids – the list of node UUIDs the group contains

write_link(data)[source]

Write a dictionary of information for a single provenance link.

Parameters

data – {'input': <UUID_STR>, 'output': <UUID_STR>, 'label': <LABEL_STR>, 'type': <TYPE_STR>}

write_metadata(data: aiida.tools.importexport.archive.common.ArchiveMetadata)[source]

Write the metadata of the export process.

write_node_repo_folder(uuid: str, path: Union[str, pathlib.Path], overwrite: bool = True)[source]

Write a node repository to the archive.

Parameters
  • uuid – The UUID of the node

  • path – The path to the repository folder on disk

  • overwrite – Allow to overwrite existing path in archive

class aiida.tools.importexport.WriterJsonZip(filepath: Union[str, pathlib.Path], *, use_compression: bool = True, cache_zipinfo: bool = False, **kwargs)[source]

Bases: aiida.tools.importexport.archive.writers.ArchiveWriterAbstract

An archive writer, which writes database data as a single JSON and repository data in a zipped folder system.

__abstractmethods__ = frozenset({})
__init__(filepath: Union[str, pathlib.Path], *, use_compression: bool = True, cache_zipinfo: bool = False, **kwargs)[source]

Initiate the writer.

Parameters
  • filepath – the path to the file to export to.

  • use_compression – Whether or not to deflate the objects inside the zip file.

  • cache_zipinfo – Cache the zip file index on disk during the write. This reduces the RAM usage of the process, but will make the process slower.

__module__ = 'aiida.tools.importexport.archive.writers'
_abc_impl = <_abc_data object>
close(excepted: bool)[source]

Finalise the export.

This method is called on exiting a context manager.

Parameters

excepted – Whether

property export_version

Return the export version.

property file_format_verbose

Return the file format name.

open()[source]

Setup the export process.

This method is called on entering a context manager.

write_entity_data(name: str, pk: int, id_key: str, fields: Dict[str, Any])[source]

Write the data for a single DB entity.

Parameters
  • name – the name of the entity (e.g. ‘NODE’)

  • pk – the primary key of the entity (unique for the current DB only)

  • id_key – the key within fields that denotes the unique identifier of the entity (unique for all DBs)

  • fields – mapping of DB fields to store -> values

write_group_nodes(uuid: str, node_uuids: List[str])[source]

Write a mapping of a group to the nodes it contains.

Parameters
  • uuid – the UUID of the group

  • node_uuids – the list of node UUIDs the group contains

write_link(data)[source]

Write a dictionary of information for a single provenance link.

Parameters

data – {'input': <UUID_STR>, 'output': <UUID_STR>, 'label': <LABEL_STR>, 'type': <TYPE_STR>}

write_metadata(data: aiida.tools.importexport.archive.common.ArchiveMetadata)[source]

Write the metadata of the export process.

write_node_repo_folder(uuid: str, path: Union[str, pathlib.Path], overwrite: bool = True)[source]

Write a node repository to the archive.

Parameters
  • uuid – The UUID of the node

  • path – The path to the repository folder on disk

  • overwrite – Allow to overwrite existing path in archive

aiida.tools.importexport.detect_archive_type(in_path: str) → str[source]

For back-compatibility, but should be replaced with direct comparison of classes.

Parameters

in_path – the path to the file

Returns

the archive type identifier (currently one of ‘zip’, ‘tar.gz’, ‘folder’)
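
For example (with a hypothetical file path):

from aiida.tools.importexport import detect_archive_type

archive_type = detect_archive_type('export.aiida')  # e.g. 'zip', 'tar.gz' or 'folder'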

aiida.tools.importexport.export(entities: Optional[Iterable[Any]] = None, filename: Optional[str] = None, file_format: Union[str, Type[aiida.tools.importexport.archive.writers.ArchiveWriterAbstract]] = <ExportFileFormat.ZIP: 'zip'>, overwrite: bool = False, silent: Optional[bool] = None, use_compression: Optional[bool] = None, include_comments: bool = True, include_logs: bool = True, allowed_licenses: Optional[Union[list, Callable]] = None, forbidden_licenses: Optional[Union[list, Callable]] = None, writer_init: Optional[Dict[str, Any]] = None, batch_size: int = 100, **traversal_rules: bool) → aiida.tools.importexport.archive.writers.ArchiveWriterAbstract[source]

Export AiiDA data to an archive file.

Note, the logging level and progress reporter should be set externally, for example:

from aiida.common.progress_reporter import set_progress_bar_tqdm

EXPORT_LOGGER.setLevel('DEBUG')
set_progress_bar_tqdm(leave=True)
export(...)

Deprecated since version 1.5.0: Support for the parameter silent will be removed in v2.0.0. Please set the log level and progress bar implementation independently.

Deprecated since version 1.5.0: Support for the parameter use_compression will be removed in v2.0.0. Please use writer_init={‘use_compression’: True}.

Deprecated since version 1.2.1: Support for the parameters what and outfile will be removed in v2.0.0. Please use entities and filename instead, respectively.

Parameters
  • entities – a list of entity instances; they can belong to different models/entities.

  • filename – the filename (possibly including the absolute path) of the file on which to export.

  • file_format – ‘zip’, ‘tar.gz’ or ‘folder’ or a specific writer class.

  • overwrite – if True, overwrite the output file without asking, if it exists. If False, raise an ArchiveExportError if the output file already exists.

  • allowed_licenses – List or function. If a list, then checks whether all licenses of Data nodes are in the list. If a function, then calls function for licenses of Data nodes expecting True if license is allowed, False otherwise.

  • forbidden_licenses – List or function. If a list, then checks whether all licenses of Data nodes are in the list. If a function, then calls function for licenses of Data nodes expecting True if license is allowed, False otherwise.

  • include_comments – In-/exclude export of comments for given node(s) in entities. Default: True, include comments in export (as well as relevant users).

  • include_logs – In-/exclude export of logs for given node(s) in entities. Default: True, include logs in export.

  • writer_init – Additional key-word arguments to pass to the writer class init

  • batch_size – batch database query results in sub-collections to reduce memory usage

  • traversal_rules – graph traversal rules. See aiida.common.links.GraphTraversalRules for what rule names are toggleable and what the defaults are.

Returns

a dictionary of data regarding the export process (timings, etc)

Raises

ArchiveExportError – if there are any internal errors when exporting.
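
A usage sketch combining the snippet above with the documented parameters; the node pk and file name are hypothetical:

from aiida.orm import load_node
from aiida.tools.importexport import export

node = load_node(1234)  # hypothetical pk
export(
    entities=[node],
    filename='export.aiida',  # hypothetical output file
    file_format='zip',
    overwrite=True,
    include_logs=False,
)
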
aiida.tools.importexport.extract_tar(infile, folder, nodes_export_subfolder=None, check_files=('data.json', 'metadata.json'), **kwargs)[source]

Extract the nodes to be imported from a (possibly zipped) tar file.

Parameters
  • infile (str) – file path

  • folder (SandboxFolder) – a temporary folder used to extract the file tree

  • nodes_export_subfolder (str) – name of the subfolder for AiiDA nodes

  • check_files – list of files to check are present

  • silent (bool) – suppress progress bar

Raises
  • TypeError – if parameter types are not respected

  • CorruptArchive – if the archive misses files or files have incorrect formats

aiida.tools.importexport.extract_tree(infile, folder)[source]

Prepare to import nodes from a plain file system tree by copying it into the given sandbox folder.

Note

The contents of the unpacked archive directory are copied into the sandbox folder, because the files will have to be copied to the repository at some point anyway. By copying the contents of the source directory now and continuing to operate only on the sandbox folder, we do not risk accidentally modifying the source files. During import, the node files can then be moved from the sandbox to the repository, so they do not have to be copied again.

Parameters
  • infile (str) – absolute filepath pointing to the unpacked archive directory

  • folder (SandboxFolder) – a temporary folder to which the archive contents are copied
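
A usage sketch with a sandbox folder (the source directory path is hypothetical):

from aiida.common.folders import SandboxFolder
from aiida.tools.importexport import extract_tree

with SandboxFolder() as folder:
    extract_tree('/path/to/unpacked_archive', folder)  # hypothetical unpacked archive directory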

aiida.tools.importexport.extract_zip(infile, folder, nodes_export_subfolder=None, check_files=('data.json', 'metadata.json'), **kwargs)[source]

Extract the nodes to be imported from a zip file.

Parameters
  • infile (str) – file path

  • folder (SandboxFolder) – a temporary folder used to extract the file tree

  • nodes_export_subfolder (str) – name of the subfolder for AiiDA nodes

  • check_files – list of files to check are present

  • silent (bool) – suppress progress bar

Raises
  • TypeError – if parameter types are not respected

  • CorruptArchive – if the archive misses files or files have incorrect formats

aiida.tools.importexport.get_migrator(file_format: str) → Type[aiida.tools.importexport.archive.migrators.ArchiveMigratorAbstract][source]

Return the archive migrator class for the given file format.
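
For illustration, a sketch that selects a migrator from the detected archive type and migrates to a target version; the file names and target version are hypothetical:

from aiida.tools.importexport import detect_archive_type, get_migrator

migrator_cls = get_migrator(detect_archive_type('export.aiida'))  # hypothetical archive
migrator = migrator_cls('export.aiida')
migrator.migrate('0.10', 'export-migrated.aiida', force=True)  # hypothetical target version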

aiida.tools.importexport.get_reader(file_format: str) → Type[aiida.tools.importexport.archive.readers.ArchiveReaderAbstract][source]

Return the archive reader class for the given file format.

aiida.tools.importexport.get_writer(file_format: str) → Type[aiida.tools.importexport.archive.writers.ArchiveWriterAbstract][source]

Return the archive writer class for the given file format.

aiida.tools.importexport.import_data(in_path, group=None, **kwargs)[source]

Import exported AiiDA archive to the AiiDA database and repository.

Proxy function for the backend-specific import functions. If in_path is a folder, calls extract_tree; otherwise, tries to detect the compression format (zip, tar.gz, tar.bz2, …) and calls the correct function.

Note, the logging level and progress reporter should be set externally, for example:

from aiida.common.progress_reporter import set_progress_bar_tqdm

IMPORT_LOGGER.setLevel('DEBUG')
set_progress_bar_tqdm(leave=True)
import_data(...)

Deprecated since version 1.5.0: Support for the parameter silent will be removed in v2.0.0. Please set the log level and progress bar implementation independently.

Parameters
  • in_path (str) – the path to a file or folder that can be imported in AiiDA.

  • group (Group) – Group wherein all imported Nodes will be placed.

  • extras_mode_existing (str) – 3 letter code that will identify what to do with the extras import. The first letter acts on extras that are present in the original node and not present in the imported node. Can be either: ‘k’ (keep it) or ‘n’ (do not keep it). The second letter acts on the imported extras that are not present in the original node. Can be either: ‘c’ (create it) or ‘n’ (do not create it). The third letter defines what to do in case of a name collision. Can be either: ‘l’ (leave the old value), ‘u’ (update with a new value), ‘d’ (delete the extra), or ‘a’ (ask what to do if the content is different).

  • extras_mode_new (str) – ‘import’ to import extras of new nodes or ‘none’ to ignore them

  • comment_mode (str) – Comment import modes (when same UUIDs are found). Can be either: ‘newest’ (will keep the Comment with the most recent modification time (mtime)) or ‘overwrite’ (will overwrite existing Comments with the ones from the import file).

Returns

New and existing Nodes and Links.

Return type

dict

Raises

ArchiveImportError – if there are any internal errors when importing.
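
A usage sketch based on the parameters above (the archive path is hypothetical):

from aiida.tools.importexport import import_data

result = import_data('export.aiida', extras_mode_new='import', comment_mode='newest')
print(result)  # dict of new and existing Nodes and Links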

aiida.tools.importexport.null_callback(action: str, value: Any)[source]

A null callback function.
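
For example, a custom callback matching the documented callback(action, value) protocol (the printing is purely illustrative):

def print_callback(action: str, value):
    if action == 'init':
        print(f"starting: {value['description']} ({value['total']} steps)")
    elif action == 'update':
        print(f"progress: +{value}")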