aiida.tools.archive.implementations.sqlite_zip package#

SQLite implementations of an archive file format.

Submodules#

The file format implementation

class aiida.tools.archive.implementations.sqlite_zip.main.ArchiveFormatSqlZip[source]#

Bases: ArchiveFormatAbstract

An archive format that stores its data in a zip file containing an SQLite database.

The content of the zip file is:

|- archive.zip
    |- metadata.json
    |- db.sqlite3
    |- repo/
        |- hashkey

Repository files are named by their SHA256 content hash.
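Because this layout is plain zip + JSON, it can be inspected with the standard library alone. A minimal sketch, independent of AiiDA itself (the helper name is ours, and the archive path passed to it is hypothetical):

```python
import json
import zipfile


def summarize_archive(path):
    """Summarise the top-level layout of an sqlite_zip archive.

    Relies only on the layout documented above (metadata.json,
    db.sqlite3, repo/<hashkey>), not on AiiDA itself.
    """
    with zipfile.ZipFile(path) as zf:
        names = zf.namelist()
        # entries under repo/ are keyed by their content hash
        repo_keys = [
            name.split("/", 1)[1]
            for name in names
            if name.startswith("repo/") and name != "repo/"
        ]
        metadata = json.loads(zf.read("metadata.json"))
    return {
        "has_db": "db.sqlite3" in names,
        "repo_keys": sorted(repo_keys),
        "metadata_keys": sorted(metadata),
    }
```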

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.tools.archive.implementations.sqlite_zip.main'#
_abc_impl = <_abc._abc_data object>#
property key_format: str#

Return the format of repository keys.

property latest_version: str#

Return the latest schema version of the archive format.

migrate(inpath: str | Path, outpath: str | Path, version: str, *, force: bool = False, compression: int = 6) None[source]#

Migrate an archive to a specific version.

Parameters:
  • inpath – existing archive path

  • outpath – output archive path

  • version – version to migrate to

  • force – if True, overwrite the output path if it already exists

  • compression – level of compression to use for writing (integer from 0 to 9)
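A typical use is migrating an old archive to the newest supported schema, taking the target from latest_version. A sketch assuming AiiDA is installed; both paths and the helper name are hypothetical, and the import is deferred so the helper can be defined without AiiDA:

```python
def migrate_to_latest(inpath, outpath):
    """Migrate an archive file to the latest schema version.

    Sketch assuming AiiDA is installed; both paths are hypothetical.
    The import is deferred so the helper can be defined without AiiDA.
    """
    from aiida.tools.archive.implementations.sqlite_zip.main import (
        ArchiveFormatSqlZip,
    )

    fmt = ArchiveFormatSqlZip()
    # latest_version is the newest schema version the format supports
    fmt.migrate(inpath, outpath, fmt.latest_version)
```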

open(path: str | Path, mode: Literal['r'], *, compression: int = 6, **kwargs: Any) ArchiveReaderSqlZip[source]#
open(path: str | Path, mode: Literal['x', 'w'], *, compression: int = 6, **kwargs: Any) ArchiveWriterSqlZip
open(path: str | Path, mode: Literal['a'], *, compression: int = 6, **kwargs: Any) ArchiveAppenderSqlZip

Open an archive (latest version only).

Parameters:
  • path – archive path

  • mode – open mode: ‘r’ (read), ‘x’ (exclusive write), ‘w’ (write) or ‘a’ (append)

  • compression – default level of compression to use for writing (integer from 0 to 9)

Note: in write mode, the writer is responsible for writing the format version.
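Combining read_version and open in read mode gives a minimal read-only session. A sketch assuming AiiDA is installed; the helper name and archive path are hypothetical, and the import is deferred so the helper can be defined without AiiDA:

```python
def inspect_archive(path):
    """Report the schema version and metadata keys of an archive.

    Sketch assuming AiiDA is installed; the path is hypothetical.
    """
    from aiida.tools.archive.implementations.sqlite_zip.main import (
        ArchiveFormatSqlZip,
    )

    fmt = ArchiveFormatSqlZip()
    version = fmt.read_version(path)
    # the context manager ensures the underlying zip file is closed again
    with fmt.open(path, mode="r") as reader:
        metadata = reader.get_metadata()
    return version, sorted(metadata)
```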

read_version(path: str | Path) str[source]#

Read the version of the archive from a file.

This method should account for reading all versions of the archive format.

Parameters:

path – archive path

Raises:
  • UnreachableStorage – if the file does not exist

  • CorruptStorage – if a version cannot be read from the archive
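For the current format, the version lives in metadata.json inside the zip. A simplified stdlib stand-in (assuming the version is stored under the 'export_version' key, as in AiiDA export archives; the real method also handles legacy archive layouts, which this sketch does not):

```python
import json
import zipfile


def read_version_current_format(path):
    """Read the schema version of a current-format sqlite_zip archive.

    Assumes the version is under the 'export_version' key of
    metadata.json; a sketch, not the real read_version implementation.
    """
    try:
        with zipfile.ZipFile(path) as zf:
            metadata = json.loads(zf.read("metadata.json"))
    except FileNotFoundError:
        # analogue of UnreachableStorage: the file does not exist
        raise
    except (zipfile.BadZipFile, KeyError, json.JSONDecodeError) as exc:
        # analogue of CorruptStorage: no readable version in the archive
        raise ValueError(f"cannot read a version from {path!r}") from exc
    try:
        return metadata["export_version"]
    except KeyError as exc:
        raise ValueError(f"metadata.json in {path!r} has no version") from exc
```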

AiiDA archive reader implementation.

class aiida.tools.archive.implementations.sqlite_zip.reader.ArchiveReaderSqlZip(path: str | Path, **kwargs: Any)[source]#

Bases: ArchiveReaderAbstract

An archive reader for the SQLite format.

__abstractmethods__ = frozenset({})#
__enter__() ArchiveReaderSqlZip[source]#

Start reading from the archive.

__exit__(*args, **kwargs) None[source]#

Close the archive backend.

__init__(path: str | Path, **kwargs: Any)[source]#

Initialise the reader.

Parameters:

path – archive path

__module__ = 'aiida.tools.archive.implementations.sqlite_zip.reader'#
_abc_impl = <_abc._abc_data object>#
get_backend() SqliteZipBackend[source]#

Return a ‘read-only’ backend for the archive.

get_metadata() Dict[str, Any][source]#

Return the top-level metadata.

Raises:

CorruptStorage – if the top-level metadata cannot be read from the archive
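The reader can also be used directly as a context manager. A sketch assuming AiiDA is installed; the helper name and path are hypothetical, and the import is deferred so the helper can be defined without AiiDA:

```python
def archive_metadata(path):
    """Return the top-level metadata of an archive via the reader.

    Sketch assuming AiiDA is installed; the path is hypothetical. The
    context manager guarantees the underlying zip file is closed again.
    """
    from aiida.tools.archive.implementations.sqlite_zip.reader import (
        ArchiveReaderSqlZip,
    )

    with ArchiveReaderSqlZip(path) as reader:
        return reader.get_metadata()
```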

AiiDA archive writer implementation.

class aiida.tools.archive.implementations.sqlite_zip.writer.ArchiveAppenderSqlZip(path: str | Path, fmt: ArchiveFormatAbstract, *, mode: Literal['x', 'w', 'a'] = 'x', compression: int = 6, work_dir: Path | None = None, _debug: bool = False, _enforce_foreign_keys: bool = True)[source]#

Bases: ArchiveWriterSqlZip

AiiDA archive appender implementation.

__abstractmethods__ = frozenset({})#
__enter__() ArchiveAppenderSqlZip[source]#

Start appending to the archive.

__exit__(*args, **kwargs)[source]#

Finalise the archive.

__module__ = 'aiida.tools.archive.implementations.sqlite_zip.writer'#
_abc_impl = <_abc._abc_data object>#
_copy_old_zip_files()[source]#

Copy the old archive content to the new one (omitting any amended or deleted files).

delete_object(key: str) None[source]#

Delete the object from the archive.

Parameters:

key – fully qualified identifier for the object within the repository.

Raises:

IOError – if the file could not be deleted.
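The appender is obtained by opening an existing archive in append mode. A sketch assuming AiiDA is installed; the helper name, path and key are hypothetical, and the import is deferred so the helper can be defined without AiiDA:

```python
def remove_repo_object(path, key):
    """Open an archive in append mode and delete one repository object.

    Sketch assuming AiiDA is installed; the path and key are
    hypothetical. On exit, the archive is rewritten without the
    deleted (or amended) files.
    """
    from aiida.tools.archive.implementations.sqlite_zip.main import (
        ArchiveFormatSqlZip,
    )

    with ArchiveFormatSqlZip().open(path, mode="a") as appender:
        appender.delete_object(key)
```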

class aiida.tools.archive.implementations.sqlite_zip.writer.ArchiveWriterSqlZip(path: str | Path, fmt: ArchiveFormatAbstract, *, mode: Literal['x', 'w', 'a'] = 'x', compression: int = 6, work_dir: Path | None = None, _debug: bool = False, _enforce_foreign_keys: bool = True)[source]#

Bases: ArchiveWriterAbstract

AiiDA archive writer implementation.

__abstractmethods__ = frozenset({})#
__enter__() ArchiveWriterSqlZip[source]#

Start writing to the archive.

__exit__(*args, **kwargs)[source]#

Finalise the archive.

__init__(path: str | Path, fmt: ArchiveFormatAbstract, *, mode: Literal['x', 'w', 'a'] = 'x', compression: int = 6, work_dir: Path | None = None, _debug: bool = False, _enforce_foreign_keys: bool = True)[source]#

Initialise the writer.

Parameters:
  • path – archive path

  • mode – mode to open the archive in: ‘x’ (exclusive), ‘w’ (write) or ‘a’ (append)

  • compression – default level of compression to use (integer from 0 to 9)

__module__ = 'aiida.tools.archive.implementations.sqlite_zip.writer'#
_abc_impl = <_abc._abc_data object>#
_assert_in_context()[source]#
_stream_binary(name: str, handle: BinaryIO, *, buffer_size: int | None = None, compression: int | None = None, comment: bytes | None = None) None[source]#

Add a binary stream to the archive.

Parameters:
  • buffer_size – Number of bytes to buffer

  • compression – Override global compression level

  • comment – A binary meta comment about the object

bulk_insert(entity_type: EntityTypes, rows: List[Dict[str, Any]], allow_defaults: bool = False) None[source]#

Add multiple rows of entity data to the archive.

Parameters:
  • entity_type – The type of the entity

  • rows – A list of dictionaries, containing all fields of the backend model, except the id field (a.k.a. the primary key), which is generated dynamically

  • allow_defaults – If False, assert that each row contains all fields; otherwise, allow default values for missing fields.

Raises:

IntegrityError – if the keys in a row are not a subset of the columns in the table

db_name = 'db.sqlite3'#
delete_object(key: str) None[source]#

Delete the object from the archive.

Parameters:

key – fully qualified identifier for the object within the repository.

Raises:

IOError – if the file could not be deleted.

meta_name = 'metadata.json'#
put_object(stream: BinaryIO, *, buffer_size: int | None = None, key: str | None = None) str[source]#

Add an object to the archive.

Parameters:
  • stream – byte stream to read the object from

  • buffer_size – Number of bytes to buffer when reading/writing

  • key – key to use for the object (if None will be auto-generated)

Returns:

the key of the object
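Since repository files are named by their SHA256 content hash, the auto-generated key for a stream can be predicted up front. A stdlib sketch, independent of AiiDA (it assumes the key is the hex digest of that hash, and is not the writer's actual implementation):

```python
import hashlib


def expected_repo_key(stream, buffer_size=64 * 1024):
    """Predict the key a byte stream would be stored under.

    Mirrors the 'files named by their SHA256 content hash' rule,
    assuming hex-digest keys; a sketch, not the archive writer's
    actual implementation.
    """
    digest = hashlib.sha256()
    # hash the stream incrementally so large objects need not fit in memory
    for chunk in iter(lambda: stream.read(buffer_size), b""):
        digest.update(chunk)
    return digest.hexdigest()
```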

update_metadata(data: Dict[str, Any], overwrite: bool = False) None[source]#

Add key/value pairs to the top-level metadata.