aiida.storage.sqlite_zip package#

Module with the implementation of a storage backend that uses an SQLite database and repository files within a zipfile.

The content of the zip file is:

|- storage.zip
    |- metadata.json
    |- db.sqlite3
    |- repo/
        |- hashkey1
        |- hashkey2
        ...

For quick access, the metadata (such as the version) is stored in a metadata.json file at the “top” of the zip file, with the SQLite database just below it, followed by the repository files. Repository files are named by the SHA-256 hash of their contents.
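Since repository files are keyed by the SHA-256 hash of their contents, a key can be reproduced with the standard library alone; a minimal sketch:

```python
import hashlib


def repository_key(content: bytes) -> str:
    """Return the SHA-256 content hash used as the repository file name."""
    return hashlib.sha256(content).hexdigest()


# Identical content always maps to the same key, which enables deduplication.
print(repository_key(b"hello world"))
```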

This storage backend is primarily intended as a read-only format for the AiiDA archive, since SQLite and zip files are not suited to concurrent write access.

The archive format originally stored the database in a JSON file; these legacy revisions are handled by the version_profile and migrate backend methods.

Submodules#

The table models are dynamically generated from the sqlalchemy backend models.

class aiida.storage.sqlite_zip.backend.FolderBackendRepository(path: str | Path)[source]#

Bases: aiida.storage.sqlite_zip.backend._RoBackendRepository

A read-only backend for a folder.

The folder should contain repository files, named by the sha256 hash of the file contents.

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.backend'#
_abc_impl = <_abc_data object>#
has_object(key: str) bool[source]#

Return whether the repository has an object with the given key.

Parameters

key – fully qualified identifier for the object within the repository.

Returns

True if the object exists, False otherwise.

list_objects() Iterable[str][source]#

Return iterable that yields all available objects by key.

Returns

An iterable for all the available object keys.

open(key: str) Iterator[BinaryIO][source]#

Open a file handle to an object stored under the given key.

Note

this should only be used to open a handle to read an existing file. To write a new file use the method put_object_from_filelike instead.

Parameters

key – fully qualified identifier for the object within the repository.

Returns

yield a byte stream object.

class aiida.storage.sqlite_zip.backend.SqliteZipBackend(profile: aiida.manage.configuration.profile.Profile)[source]#

Bases: aiida.orm.implementation.storage_backend.StorageBackend

A read-only backend for a sqlite/zip format.

The storage format uses an SQLite database and repository files, within a folder or zipfile.

The content of the folder/zipfile should be:

|- metadata.json
|- db.sqlite3
|- repo/
    |- hashkey1
    |- hashkey2
    ...
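The layout above can be reproduced with only the standard library; a sketch that writes a minimal (empty) archive in memory and reads the metadata back without extracting the whole file — the metadata content here is illustrative, not the real AiiDA schema:

```python
import io
import json
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    # metadata.json is written first, so it sits at the "top" for quick access
    zf.writestr("metadata.json", json.dumps({"export_version": "main_0001"}))
    zf.writestr("db.sqlite3", b"")        # placeholder for the SQLite database
    zf.writestr("repo/" + "0" * 64, b"")  # repository file named by content hash

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    metadata = json.loads(zf.read("metadata.json"))

print(names)
print(metadata["export_version"])
```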
__abstractmethods__ = frozenset({})#
__init__(profile: aiida.manage.configuration.profile.Profile)[source]#

Initialize the backend, for this profile.

Raises

~aiida.common.exceptions.UnreachableStorage if the storage cannot be accessed

Raises

~aiida.common.exceptions.IncompatibleStorageSchema if the profile’s storage schema is not at the latest version (and thus should be migrated)

Raises

~aiida.common.exceptions.CorruptStorage if the storage is internally inconsistent

__module__ = 'aiida.storage.sqlite_zip.backend'#
__str__() str[source]#

Return a string showing connection details for this instance.

_abc_impl = <_abc_data object>#
_clear(recreate_user: bool = True) None[source]#

Clear the storage, removing all data.

Warning

This is a destructive operation, and should only be used for testing purposes.

Parameters

recreate_user – Re-create the default User for the profile, after clearing the storage.

_default_user: Optional[User]#
_read_only = True#
property authinfos#

Return the collection of authorisation information objects

bulk_insert(entity_type: EntityTypes, rows: list[dict], allow_defaults: bool = False) list[int][source]#

Insert a list of entities into the database, directly into a backend transaction.

Parameters
  • entity_type – The type of the entity

  • rows – A list of dictionaries, containing all fields of the backend model, except the id field (a.k.a. the primary key), which will be generated dynamically

  • allow_defaults – If False, assert that each row contains all fields (except primary key(s)), otherwise, allow default values for missing fields.

Raises

IntegrityError if the keys in a row are not a subset of the columns in the table

Returns

The list of generated primary keys for the entities

bulk_update(entity_type: EntityTypes, rows: list[dict]) None[source]#

Update a list of entities in the database, directly with a backend transaction.

Parameters
  • entity_type – The type of the entity

  • rows – A list of dictionaries, containing fields of the backend model to update, and the id field (a.k.a. the primary key)

Raises

IntegrityError if the keys in a row are not a subset of the columns in the table

close()[source]#

Close the backend

property comments#

Return the collection of comments

property computers#

Return the collection of computers

static create_profile(path: str | Path, options: dict | None = None) Profile[source]#

Create a new profile instance for this backend, from the path to the zip file.
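Putting create_profile together with the constructor, a read-only backend can be opened directly from an archive path. A hedged sketch (the import is deferred so the function is only usable where aiida-core is installed; the file name in the usage comment is hypothetical):

```python
def open_archive(path):
    """Open an sqlite_zip archive read-only via its storage backend (sketch)."""
    # Deferred import: requires aiida-core to be installed.
    from aiida.storage.sqlite_zip.backend import SqliteZipBackend

    profile = SqliteZipBackend.create_profile(path)  # build a Profile from the path
    return SqliteZipBackend(profile)  # may raise IncompatibleStorageSchema


# backend = open_archive("export.aiida")  # hypothetical archive file
# nodes = backend.nodes                   # collection of nodes in the archive
```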

delete_nodes_and_connections(pks_to_delete: Sequence[int])[source]#

Delete all nodes corresponding to pks in the input and any links to/from them.

This method is intended to be used within a transaction context.

Parameters

pks_to_delete – a sequence of node pks to delete

Raises

AssertionError if a transaction is not active

get_backend_entity(model)[source]#

Return the backend entity that corresponds to the given Model instance.

get_global_variable(key: str)[source]#

Return a global variable from the storage.

Parameters

key – the key of the setting

Raises

KeyError if the setting does not exist

get_info(detailed: bool = False) dict[source]#

Return general information on the storage.

Parameters

detailed – flag to request more detailed information about the content of the storage.

Returns

a nested dict with the relevant information.

get_repository() _RoBackendRepository[source]#

Return the object repository configured for this backend.

get_session() sqlalchemy.orm.session.Session[source]#

Return an SQLAlchemy session.
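Because get_session exposes a plain SQLAlchemy session, the archive database can also be queried directly with SQL; a hedged sketch (assumes aiida-core and sqlalchemy are installed and an open backend is at hand; the table name db_dbnode matches the models documented below):

```python
def count_nodes(backend):
    """Count rows in db_dbnode using the backend's SQLAlchemy session (sketch)."""
    from sqlalchemy import text  # deferred import: requires sqlalchemy

    session = backend.get_session()
    return session.execute(text("SELECT COUNT(*) FROM db_dbnode")).scalar_one()
```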

property groups#

Return the collection of groups

property in_transaction: bool#

Return whether a transaction is currently active.

property is_closed: bool#

Return whether the storage is closed.

property logs#

Return the collection of logs

maintain(dry_run: bool = False, live: bool = True, **kwargs) None[source]#

Perform maintenance tasks on the storage.

If live is False, then this method may attempt to block the profile associated with the storage to guarantee the safety of its procedures. This will not only prevent any other subsequent process from accessing that profile, but will also first check whether any process is already using it and raise if that is the case. The user will have to manually stop any processes that are currently accessing the profile, or wait for them to finish on their own.

Parameters
  • dry_run – flag to only print the actions that would be taken without actually executing them.

  • live – flag to indicate whether the profile is currently in use; only operations that are safe in that state are performed.

classmethod migrate(profile: aiida.manage.configuration.profile.Profile)[source]#

Migrate the storage of a profile to the latest schema version.

If the schema version is already the latest version, this method does nothing. If the storage is empty/uninitialised, then it will be initialised at head.

Raises

~aiida.common.exceptions.UnreachableStorage if the storage cannot be accessed

property nodes#

Return the collection of nodes

query() aiida.storage.sqlite_zip.orm.SqliteQueryBuilder[source]#

Return an instance of a query builder implementation for this backend

set_global_variable(key: str, value, description: Optional[str] = None, overwrite=True) None[source]#

Set a global variable in the storage.

Parameters
  • key – the key of the setting

  • value – the value of the setting

  • description – the description of the setting (optional)

  • overwrite – if True, overwrite the setting if it already exists

Raises

ValueError if the key already exists and overwrite is False

transaction()[source]#

Get a context manager that can be used as a transaction context for a series of backend operations. If there is an exception within the context then the changes will be rolled back and the state will be as before entering. Transactions can be nested.

Returns

a context manager to group database operations

property users#

Return the collection of users

classmethod version_head() str[source]#

Return the head schema version of this storage backend type.

classmethod version_profile(profile: aiida.manage.configuration.profile.Profile) Optional[str][source]#

Return the schema version of the given profile’s storage, or None for empty/uninitialised storage.

Raises

~aiida.common.exceptions.UnreachableStorage if the storage cannot be accessed

class aiida.storage.sqlite_zip.backend.ZipfileBackendRepository(path: str | Path)[source]#

Bases: aiida.storage.sqlite_zip.backend._RoBackendRepository

A read-only backend for a zip file.

The zip file should contain repository files with the key format: repo/<sha256 hash>, i.e. files named by the sha256 hash of the file contents, inside a repo directory.

__abstractmethods__ = frozenset({})#
__init__(path: str | Path)[source]#

Initialise the repository backend.

Parameters

path – the path to the zip file

__module__ = 'aiida.storage.sqlite_zip.backend'#
_abc_impl = <_abc_data object>#
property _zipfile: zipfile.ZipFile#

Return the open zip file.

close() None[source]#

Close the repository.

has_object(key: str) bool[source]#

Return whether the repository has an object with the given key.

Parameters

key – fully qualified identifier for the object within the repository.

Returns

True if the object exists, False otherwise.

list_objects() Iterable[str][source]#

Return iterable that yields all available objects by key.

Returns

An iterable for all the available object keys.

open(key: str) Iterator[BinaryIO][source]#

Open a file handle to an object stored under the given key.

Note

this should only be used to open a handle to read an existing file. To write a new file use the method put_object_from_filelike instead.

Parameters

key – fully qualified identifier for the object within the repository.

Returns

yield a byte stream object.

class aiida.storage.sqlite_zip.backend._RoBackendRepository(path: str | Path)[source]#

Bases: aiida.repository.backend.abstract.AbstractRepositoryBackend

An abstract backend for a read-only folder or zip file.

__abstractmethods__ = frozenset({'list_objects'})#
__init__(path: str | Path)[source]#

Initialise the repository backend.

Parameters

path – the path to the zip file

__module__ = 'aiida.storage.sqlite_zip.backend'#
_abc_impl = <_abc_data object>#
_put_object_from_filelike(handle: BinaryIO) str[source]#

close() None[source]#

Close the repository.

delete_objects(keys: list[str]) None[source]#

Delete the objects from the repository.

Parameters

keys – list of fully qualified identifiers for the objects within the repository.

erase() None[source]#

Delete the repository itself and all its contents.

Note

This should not merely delete the contents of the repository but any resources it created. For example, if the repository is essentially a folder on disk, the folder itself should also be deleted, not just its contents.

get_info(detailed: bool = False, **kwargs) dict[source]#

Returns relevant information about the content of the repository.

Parameters

detailed – flag to enable extra information (detailed=False by default, only returns basic information).

Returns

a dictionary with the information.

get_object_hash(key: str) str[source]#

Return the SHA-256 hash of an object stored under the given key.

Important

A SHA-256 hash should always be returned, to ensure consistency across different repository implementations.

Parameters

key – fully qualified identifier for the object within the repository.

has_objects(keys: list[str]) list[bool][source]#

Return whether the repository has objects with the given keys.

Parameters

keys – list of fully qualified identifiers for objects within the repository.

Returns

list of booleans, in the same order as the keys provided, with value True if the respective object exists and False otherwise.

initialise(**kwargs) None[source]#

Initialise the repository if it hasn’t already been initialised.

Parameters

kwargs – parameters for the initialisation.

property is_initialised: bool#

Return whether the repository has been initialised.

iter_object_streams(keys: list[str]) Iterator[Tuple[str, BinaryIO]][source]#

Return an iterator over the (read-only) byte streams of objects identified by key.

Note

handles should only be read within the context of this iterator.

Parameters

keys – fully qualified identifiers for the objects within the repository.

Returns

an iterator over the object byte streams.

property key_format: Optional[str]#

Return the format for the keys of the repository.

This is important when migrating between backends (e.g. archive -> main): if the key formats are not equal, all Node.base.repository.metadata must be re-computed before importing, otherwise the keys will not match the repository.

maintain(dry_run: bool = False, live: bool = True, **kwargs) None[source]#

Performs maintenance operations.

Parameters
  • dry_run – flag to only print the actions that would be taken without actually executing them.

  • live – flag to indicate to the backend whether AiiDA is live or not (i.e. if the profile of the backend is currently being used/accessed). The backend is expected then to only allow (and thus set by default) the operations that are safe to perform in this state.

property uuid: Optional[str]#

Return the unique identifier of the repository.
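The read-only repository API above can be exercised like this; a hedged sketch with a deferred import (requires aiida-core) that iterates over all object keys in a zip archive:

```python
def checksum_all(path):
    """Map every repository key in a zip archive to its SHA-256 hash (sketch)."""
    from aiida.storage.sqlite_zip.backend import ZipfileBackendRepository

    repo = ZipfileBackendRepository(path)
    try:
        # Keys are already content hashes, so get_object_hash should agree with them.
        return {key: repo.get_object_hash(key) for key in repo.list_objects()}
    finally:
        repo.close()
```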

Versioning and migration implementation for the sqlite_zip format.

aiida.storage.sqlite_zip.migrator._alembic_config() alembic.config.Config[source]#

Return an instance of an Alembic Config.

aiida.storage.sqlite_zip.migrator._alembic_connect(db_path: pathlib.Path, enforce_foreign_keys=True) Iterator[alembic.config.Config][source]#

Context manager to return an instance of an Alembic configuration.

The profile’s database connection is added to the attributes property, through which it can then be retrieved, including in the env.py file that is run when the database is migrated.

aiida.storage.sqlite_zip.migrator._alembic_script() alembic.script.base.ScriptDirectory[source]#

Return an instance of an Alembic ScriptDirectory.

aiida.storage.sqlite_zip.migrator._migration_context(db_path: pathlib.Path) Iterator[alembic.runtime.migration.MigrationContext][source]#

Context manager to return an instance of an Alembic migration context.

This migration context will have been configured with the current database connection, which allows this context to be used to inspect the contents of the database, such as the current revision.

aiida.storage.sqlite_zip.migrator._perform_legacy_migrations(current_version: str, to_version: str, metadata: dict, data: dict) str[source]#

Perform legacy migrations from the current version to the desired version.

Legacy archives use the old data.json format for storing the database. These migrations simply manipulate the metadata and data in-place.

Parameters
  • current_version – current version of the archive

  • to_version – version to migrate to

  • metadata – the metadata to migrate

  • data – the data to migrate

Returns

the new version of the archive

aiida.storage.sqlite_zip.migrator._read_json(inpath: pathlib.Path, filename: str, is_tar: bool) Dict[str, Any][source]#

Read a JSON file from the archive.

aiida.storage.sqlite_zip.migrator.get_schema_version_head() str[source]#

Return the head schema version for this storage, i.e. the latest schema this storage can be migrated to.

aiida.storage.sqlite_zip.migrator.list_versions() List[str][source]#

Return all available schema versions (oldest to latest).

aiida.storage.sqlite_zip.migrator.migrate(inpath: Union[str, pathlib.Path], outpath: Union[str, pathlib.Path], version: str, *, force: bool = False, compression: int = 6) None[source]#

Migrate an sqlite_zip storage file to a specific version.

Historically, this format could be a zip or a tar file, containing the database in a bespoke JSON format and the repository files in the “legacy” per-node format. For these versions, we first migrate the JSON database to the final legacy schema, then convert this file to the SQLite database, whilst sequentially migrating the repository files.

Once any legacy migrations have been performed, we can then migrate the SQLite database to the final schema, using alembic.

Note that, to minimise disk space usage, we never fully extract/uncompress the input file (except when migrating from a legacy tar file, from which we cannot extract individual files):

  1. The sqlite database is extracted to a temporary location and migrated

  2. A new zip file is opened, within a temporary folder

  3. The repository files are “streamed” directly between the input file and the new zip file

  4. The sqlite database and metadata JSON are written to the new zip file

  5. The new zip file is closed (which writes its final central directory)

  6. The new zip file is moved to the output location, removing any existing file if force=True

Parameters
  • inpath – Path to the input file

  • outpath – Path to output the migrated file

  • version – Target version

  • force – If True, overwrite the output file if it exists

  • compression – Compression level for the output file
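A typical call migrates an old archive file to the head version in one step; a hedged sketch with deferred imports (requires aiida-core; file names in the usage comment are hypothetical):

```python
def migrate_to_head(inpath, outpath):
    """Migrate an archive file to the latest schema version (sketch)."""
    from aiida.storage.sqlite_zip.migrator import (
        get_schema_version_head,
        list_versions,
        migrate,
    )

    print("available versions:", list_versions())  # oldest to latest
    migrate(inpath, outpath, get_schema_version_head(), force=True)


# migrate_to_head("old_export.aiida", "migrated.aiida")  # hypothetical paths
```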

aiida.storage.sqlite_zip.migrator.validate_storage(inpath: pathlib.Path) None[source]#

Validate that the storage is at the head version.

Raises

aiida.common.exceptions.UnreachableStorage if the file does not exist

Raises

aiida.common.exceptions.CorruptStorage if the version cannot be read from the storage.

Raises

aiida.common.exceptions.IncompatibleStorageSchema if the storage is not compatible with the code API.
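validate_storage can serve as a pre-flight check before opening an archive; a hedged sketch with deferred imports (requires aiida-core), using the IncompatibleStorageSchema exception documented above to signal that a migration is needed:

```python
def is_at_head(inpath):
    """Return True if the archive is already at the head schema version (sketch)."""
    from pathlib import Path

    from aiida.common.exceptions import IncompatibleStorageSchema
    from aiida.storage.sqlite_zip.migrator import validate_storage

    try:
        validate_storage(Path(inpath))
    except IncompatibleStorageSchema:
        return False  # the archive must be migrated first
    return True
```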

This module contains the SQLAlchemy models for the SQLite backend.

These models are intended to be identical to those of the psql_dos backend, except for changes to the database specific types:

  • UUID -> CHAR(32)

  • DateTime -> TZDateTime

  • JSONB -> JSON

Also, varchar_pattern_ops indexes are not possible in sqlite.
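Since the archive database is a plain SQLite file, its tables can also be inspected without the ORM; a stdlib-only sketch that mimics a simplified fragment of the db_dbnode table (the columns are a subset of the model below, and the inserted node_type value is purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified subset of the db_dbnode columns (UUID stored as CHAR(32) in sqlite)
conn.execute(
    "CREATE TABLE db_dbnode ("
    "id INTEGER PRIMARY KEY, uuid CHAR(32) NOT NULL, node_type VARCHAR(255) NOT NULL)"
)
conn.execute(
    "INSERT INTO db_dbnode (uuid, node_type) VALUES (?, ?)",
    ("0" * 32, "data.core.int.Int."),
)
rows = conn.execute(
    "SELECT node_type, COUNT(*) FROM db_dbnode GROUP BY node_type"
).fetchall()
print(rows)
```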

class aiida.storage.sqlite_zip.models.DbAuthInfo(**kwargs)#

Bases: sqlalchemy.orm.decl_api.SqliteModel

Database model to store data for aiida.orm.AuthInfo, keeping per-user computer authentication data.

These specifications are user-specific and describe how to submit jobs to the computer. The model also has an enabled logical switch that indicates whether the computer is available for use. This switch can be set and unset by the user.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fb891df8b50; DbAuthInfo>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dbauthinfo', MetaData(), Column('id', Integer(), table=<db_dbauthinfo>, primary_key=True, nullable=False), Column('aiidauser_id', Integer(), ForeignKey('db_dbuser.id'), table=<db_dbauthinfo>, nullable=False), Column('dbcomputer_id', Integer(), ForeignKey('db_dbcomputer.id'), table=<db_dbauthinfo>, nullable=False), Column('metadata', JSON(), table=<db_dbauthinfo>, nullable=False, default=ColumnDefault(<function dict>)), Column('auth_params', JSON(), table=<db_dbauthinfo>, nullable=False, default=ColumnDefault(<function dict>)), Column('enabled', Boolean(), table=<db_dbauthinfo>, nullable=False, default=ColumnDefault(True)), schema=None)#
__tablename__ = 'db_dbauthinfo'#
_metadata#
_sa_class_manager = {'_metadata': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'aiidauser': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'aiidauser_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'auth_params': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbcomputer': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbcomputer_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'enabled': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
aiidauser#
aiidauser_id#
auth_params#
dbcomputer#
dbcomputer_id#
enabled#
id#
class aiida.storage.sqlite_zip.models.DbComment(**kwargs)#

Bases: sqlalchemy.orm.decl_api.SqliteModel

Database model to store data for aiida.orm.Comment.

Comments can be attached to nodes by users.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fb890612880; DbComment>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dbcomment', MetaData(), Column('id', Integer(), table=<db_dbcomment>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dbcomment>, nullable=False, default=ColumnDefault(<function get_new_uuid>)), Column('dbnode_id', Integer(), ForeignKey('db_dbnode.id'), table=<db_dbcomment>, nullable=False), Column('ctime', TZDateTime(), table=<db_dbcomment>, nullable=False, default=ColumnDefault(<function now>)), Column('mtime', TZDateTime(), table=<db_dbcomment>, nullable=False, onupdate=ColumnDefault(<function now>), default=ColumnDefault(<function now>)), Column('user_id', Integer(), ForeignKey('db_dbuser.id'), table=<db_dbcomment>, nullable=False), Column('content', Text(), table=<db_dbcomment>, nullable=False, default=ColumnDefault('')), schema=None)#
__tablename__ = 'db_dbcomment'#
_sa_class_manager = {'content': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'ctime': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbnode': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbnode_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'mtime': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
content#
ctime#
dbnode#
dbnode_id#
id#
mtime#
user#
user_id#
uuid#
class aiida.storage.sqlite_zip.models.DbComputer(**kwargs)#

Bases: sqlalchemy.orm.decl_api.SqliteModel

Database model to store data for aiida.orm.Computer.

Computers represent (and contain the information of) the physical hardware resources available. Nodes can be associated with computers if they are remote codes, remote folders, or processes that have run remotely.

Computers are identified within AiiDA by their label (and thus it must be unique for each one in the database), whereas the hostname is the label that identifies the computer within the network from which one can access it.

The scheduler_type column contains the information of the scheduler (and plugin) that the computer uses to manage jobs, whereas the transport_type column contains the information of the transport (and plugin) required to copy files and communicate to and from the computer. The metadata column contains some general settings for these communication and management protocols.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fb891dea310; DbComputer>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dbcomputer', MetaData(), Column('id', Integer(), table=<db_dbcomputer>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dbcomputer>, nullable=False, default=ColumnDefault(<function get_new_uuid>)), Column('label', String(length=255), table=<db_dbcomputer>, nullable=False), Column('hostname', String(length=255), table=<db_dbcomputer>, nullable=False, default=ColumnDefault('')), Column('description', Text(), table=<db_dbcomputer>, nullable=False, default=ColumnDefault('')), Column('scheduler_type', String(length=255), table=<db_dbcomputer>, nullable=False, default=ColumnDefault('')), Column('transport_type', String(length=255), table=<db_dbcomputer>, nullable=False, default=ColumnDefault('')), Column('metadata', JSON(), table=<db_dbcomputer>, nullable=False, default=ColumnDefault(<function dict>)), schema=None)#
__tablename__ = 'db_dbcomputer'#
_metadata#
_sa_class_manager = {'_metadata': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'description': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'hostname': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'label': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'scheduler_type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'transport_type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
description#
hostname#
id#
label#
scheduler_type#
transport_type#
uuid#
class aiida.storage.sqlite_zip.models.DbGroup(**kwargs)#

Bases: sqlalchemy.orm.decl_api.SqliteModel

Database model to store aiida.orm.Group data.

A group may contain many different nodes, but also each node can be included in different groups.

Users will typically identify and handle groups by using their label (which, unlike the labels in other models, must be unique). Groups also have a type, which serves to identify what plugin is being instantiated, and the extras property for users to set any relevant information.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fb891df8eb0; DbGroup>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dbgroup', MetaData(), Column('id', Integer(), table=<db_dbgroup>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dbgroup>, nullable=False, default=ColumnDefault(<function get_new_uuid>)), Column('label', String(length=255), table=<db_dbgroup>, nullable=False), Column('type_string', String(length=255), table=<db_dbgroup>, nullable=False, default=ColumnDefault('')), Column('time', TZDateTime(), table=<db_dbgroup>, nullable=False, default=ColumnDefault(<function now>)), Column('description', Text(), table=<db_dbgroup>, nullable=False, default=ColumnDefault('')), Column('extras', JSON(), table=<db_dbgroup>, nullable=False, default=ColumnDefault(<function dict>)), Column('user_id', Integer(), ForeignKey('db_dbuser.id'), table=<db_dbgroup>, nullable=False), schema=None)#
__tablename__ = 'db_dbgroup'#
_sa_class_manager = {'dbnodes': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'description': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'extras': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'label': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'time': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'type_string': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
dbnodes#
description#
extras#
id#
label#
time#
type_string#
user#
user_id#
uuid#
aiida.storage.sqlite_zip.models.DbGroupNodes#

alias of aiida.storage.sqlite_zip.models.DbGroupNode

class aiida.storage.sqlite_zip.models.DbLink(**kwargs)#

Bases: sqlalchemy.orm.decl_api.SqliteModel

Database model to store links between aiida.orm.Node instances.

Each entry in this table contains not only the ids of the two linked nodes, but also some extra properties of the link itself. This includes the type of the link (see the Concepts section for all possible types) as well as a label, which is more specific and typically determined by the procedure generating the process node that links the data nodes.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fb890612f10; DbLink>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dblink', MetaData(), Column('id', Integer(), table=<db_dblink>, primary_key=True, nullable=False), Column('input_id', Integer(), ForeignKey('db_dbnode.id'), table=<db_dblink>, nullable=False), Column('output_id', Integer(), ForeignKey('db_dbnode.id'), table=<db_dblink>, nullable=False), Column('label', String(length=255), table=<db_dblink>, nullable=False), Column('type', String(length=255), table=<db_dblink>, nullable=False), schema=None)#
__tablename__ = 'db_dblink'#
_sa_class_manager = {'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'input_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'label': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'output_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
id#
input_id#
label#
output_id#
type#
class aiida.storage.sqlite_zip.models.DbLog(**kwargs)#

Bases: sqlalchemy.orm.decl_api.SqliteModel

Database model to store data for aiida.orm.Log, corresponding to aiida.orm.ProcessNode.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fb890612be0; DbLog>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dblog', MetaData(), Column('id', Integer(), table=<db_dblog>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dblog>, nullable=False, default=ColumnDefault(<function get_new_uuid>)), Column('time', TZDateTime(), table=<db_dblog>, nullable=False, default=ColumnDefault(<function now>)), Column('loggername', String(length=255), table=<db_dblog>, nullable=False), Column('levelname', String(length=50), table=<db_dblog>, nullable=False), Column('dbnode_id', Integer(), ForeignKey('db_dbnode.id'), table=<db_dblog>, nullable=False), Column('message', Text(), table=<db_dblog>, nullable=False, default=ColumnDefault('')), Column('metadata', JSON(), table=<db_dblog>, nullable=False, default=ColumnDefault(<function dict>)), schema=None)#
__tablename__ = 'db_dblog'#
_metadata#
_sa_class_manager = {'_metadata': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbnode': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbnode_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'levelname': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'loggername': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'message': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'time': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
dbnode#
dbnode_id#
id#
levelname#

How critical the message is

loggername#

What process recorded the message

message#
time#
uuid#
class aiida.storage.sqlite_zip.models.DbNode(**kwargs)#

Bases: sqlalchemy.orm.decl_api.SqliteModel

Database model to store data for aiida.orm.Node.

Each node can be categorized according to its node_type, which indicates what kind of data or process node it is. Additionally, process nodes also have a process_type that further indicates the specific plugin they use.

Nodes can also store two kinds of properties:

  • attributes are determined by the node_type; they are set before the node is stored and can’t be modified afterwards.

  • extras, on the other hand, can be added and removed after the node has been stored and are usually set by the user.
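The attributes/extras distinction above can be illustrated with a toy class. This is a hypothetical sketch, not the AiiDA API: attributes are frozen once the node is stored, while extras stay mutable.

```python
# Toy illustration of the attributes/extras distinction; hypothetical
# class, not part of the AiiDA API.
class ToyNode:
    def __init__(self, node_type, attributes):
        self.node_type = node_type
        self.attributes = dict(attributes)  # fixed once the node is stored
        self.extras = {}                    # remain mutable after storing
        self._stored = False

    def store(self):
        self._stored = True
        return self

    def set_attribute(self, key, value):
        if self._stored:
            raise RuntimeError('attributes are immutable once stored')
        self.attributes[key] = value

    def set_extra(self, key, value):
        self.extras[key] = value  # allowed even after storing
```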

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fb890612190; DbNode>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dbnode', MetaData(), Column('id', Integer(), table=<db_dbnode>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dbnode>, nullable=False, default=ColumnDefault(<function get_new_uuid>)), Column('node_type', String(length=255), table=<db_dbnode>, nullable=False, default=ColumnDefault('')), Column('process_type', String(length=255), table=<db_dbnode>), Column('label', String(length=255), table=<db_dbnode>, nullable=False, default=ColumnDefault('')), Column('description', Text(), table=<db_dbnode>, nullable=False, default=ColumnDefault('')), Column('ctime', TZDateTime(), table=<db_dbnode>, nullable=False, default=ColumnDefault(<function now>)), Column('mtime', TZDateTime(), table=<db_dbnode>, nullable=False, onupdate=ColumnDefault(<function now>), default=ColumnDefault(<function now>)), Column('attributes', JSON(), table=<db_dbnode>, default=ColumnDefault(<function dict>)), Column('extras', JSON(), table=<db_dbnode>, default=ColumnDefault(<function dict>)), Column('repository_metadata', JSON(), table=<db_dbnode>, nullable=False, default=ColumnDefault(<function dict>)), Column('dbcomputer_id', Integer(), ForeignKey('db_dbcomputer.id'), table=<db_dbnode>), Column('user_id', Integer(), ForeignKey('db_dbuser.id'), table=<db_dbnode>, nullable=False), schema=None)#
__tablename__ = 'db_dbnode'#
_sa_class_manager = {'attributes': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'ctime': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbcomputer': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbcomputer_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'description': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'extras': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'label': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'mtime': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'node_type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'process_type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'repository_metadata': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
attributes#
ctime#
dbcomputer#
dbcomputer_id#
description#
extras#
id#
label#
mtime#
node_type#
process_type#
repository_metadata#
user#
user_id#
uuid#
class aiida.storage.sqlite_zip.models.DbUser(**kwargs)#

Bases: sqlalchemy.orm.decl_api.SqliteModel

Database model to store data for aiida.orm.User.

Every node that is created has a single user as its author.

The user information consists of the most basic personal contact details.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fb891df8910; DbUser>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dbuser', MetaData(), Column('id', Integer(), table=<db_dbuser>, primary_key=True, nullable=False), Column('email', String(length=254), table=<db_dbuser>, nullable=False), Column('first_name', String(length=254), table=<db_dbuser>, nullable=False, default=ColumnDefault('')), Column('last_name', String(length=254), table=<db_dbuser>, nullable=False, default=ColumnDefault('')), Column('institution', String(length=254), table=<db_dbuser>, nullable=False, default=ColumnDefault('')), schema=None)#
__tablename__ = 'db_dbuser'#
_sa_class_manager = {'email': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'first_name': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'institution': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'last_name': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
email#
first_name#
id#
institution#
last_name#
class aiida.storage.sqlite_zip.models.SqliteModel[source]#

Bases: object

Represent a row in an sqlite database table

__dict__ = mappingproxy({'__module__': 'aiida.storage.sqlite_zip.models', '__doc__': 'Represent a row in an sqlite database table', '__repr__': <function SqliteModel.__repr__>, '__dict__': <attribute '__dict__' of 'SqliteModel' objects>, '__weakref__': <attribute '__weakref__' of 'SqliteModel' objects>, '__annotations__': {}})#
__module__ = 'aiida.storage.sqlite_zip.models'#
__repr__() str[source]#

Return a representation of the row columns

__weakref__#

list of weak references to the object (if defined)

class aiida.storage.sqlite_zip.models.TZDateTime(*args, **kwargs)[source]#

Bases: sqlalchemy.sql.type_api.TypeDecorator

A timezone-naive UTC DateTime implementation for SQLite.

see: https://docs.sqlalchemy.org/en/14/core/custom_types.html#store-timezone-aware-timestamps-as-timezone-naive-utc

__module__ = 'aiida.storage.sqlite_zip.models'#
cache_ok = True#

Indicate if statements using this ExternalType are “safe to cache”.

The default value None will emit a warning and then not allow caching of a statement which includes this type. Set to False to disable statements using this type from being cached at all without a warning. When set to True, the object’s class and selected elements from its state will be used as part of the cache key. For example, using a TypeDecorator:

class MyType(TypeDecorator):
    impl = String

    cache_ok = True

    def __init__(self, choices):
        self.choices = tuple(choices)
        self.internal_only = True

The cache key for the above type would be equivalent to:

>>> MyType(["a", "b", "c"])._static_cache_key
(<class '__main__.MyType'>, ('choices', ('a', 'b', 'c')))

The caching scheme will extract attributes from the type that correspond to the names of parameters in the __init__() method. Above, the “choices” attribute becomes part of the cache key but “internal_only” does not, because there is no parameter named “internal_only”.

The requirements for cacheable elements are that they are hashable and also that they indicate the same SQL rendered for expressions using this type every time for a given cache value.

To accommodate datatypes that refer to unhashable structures such as dictionaries, sets and lists, these objects can be made “cacheable” by assigning hashable structures to the attributes whose names correspond with the names of the arguments. For example, a datatype which accepts a dictionary of lookup values may publish this as a sorted series of tuples. Given a previously un-cacheable type as:

class LookupType(UserDefinedType):
    '''a custom type that accepts a dictionary as a parameter.

    this is the non-cacheable version, as "self.lookup" is not
    hashable.

    '''

    def __init__(self, lookup):
        self.lookup = lookup

    def get_col_spec(self, **kw):
        return "VARCHAR(255)"

    def bind_processor(self, dialect):
        # ...  works with "self.lookup" ...

Where “lookup” is a dictionary. The type will not be able to generate a cache key:

>>> type_ = LookupType({"a": 10, "b": 20})
>>> type_._static_cache_key
<stdin>:1: SAWarning: UserDefinedType LookupType({'a': 10, 'b': 20}) will not
produce a cache key because the ``cache_ok`` flag is not set to True.
Set this flag to True if this type object's state is safe to use
in a cache key, or False to disable this warning.
symbol('no_cache')

If we did set up such a cache key, it wouldn’t be usable. We would get a tuple structure that contains a dictionary inside of it, which cannot itself be used as a key in a “cache dictionary” such as SQLAlchemy’s statement cache, since Python dictionaries aren’t hashable:

>>> # set cache_ok = True
>>> type_.cache_ok = True

>>> # this is the cache key it would generate
>>> key = type_._static_cache_key
>>> key
(<class '__main__.LookupType'>, ('lookup', {'a': 10, 'b': 20}))

>>> # however this key is not hashable, will fail when used with
>>> # SQLAlchemy statement cache
>>> some_cache = {key: "some sql value"}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'

The type may be made cacheable by assigning a sorted tuple of tuples to the “.lookup” attribute:

class LookupType(UserDefinedType):
    '''a custom type that accepts a dictionary as a parameter.

    The dictionary is stored both as itself in a private variable,
    and published in a public variable as a sorted tuple of tuples,
    which is hashable and will also return the same value for any
    two equivalent dictionaries.  Note it assumes the keys and
    values of the dictionary are themselves hashable.

    '''

    cache_ok = True

    def __init__(self, lookup):
        self._lookup = lookup

        # assume keys/values of "lookup" are hashable; otherwise
        # they would also need to be converted in some way here
        self.lookup = tuple(
            (key, lookup[key]) for key in sorted(lookup)
        )

    def get_col_spec(self, **kw):
        return "VARCHAR(255)"

    def bind_processor(self, dialect):
        # ...  works with "self._lookup" ...

Where above, the cache key for LookupType({"a": 10, "b": 20}) will be:

>>> LookupType({"a": 10, "b": 20})._static_cache_key
(<class '__main__.LookupType'>, ('lookup', (('a', 10), ('b', 20))))

New in version 1.4.14: - added the cache_ok flag to allow some configurability of caching for TypeDecorator classes.

New in version 1.4.28: - added the ExternalType mixin which generalizes the cache_ok flag to both the TypeDecorator and UserDefinedType classes.

impl#

alias of sqlalchemy.sql.sqltypes.DateTime

process_bind_param(value: Optional[datetime.datetime], dialect)[source]#

Process before writing to database.

process_result_value(value: Optional[datetime.datetime], dialect)[source]#

Process when returning from database.
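A minimal standard-library sketch of the conversion these two hooks perform — it mirrors the described behaviour rather than reusing the actual implementation: timezone-aware values are normalised to naive UTC before writing, and UTC tzinfo is re-attached on read.

```python
from datetime import datetime, timezone, timedelta

def to_naive_utc(value):
    """Sketch of process_bind_param: normalise to naive UTC for storage."""
    if value is None:
        return None
    return value.astimezone(timezone.utc).replace(tzinfo=None)

def from_naive_utc(value):
    """Sketch of process_result_value: re-attach UTC tzinfo on read."""
    if value is None:
        return None
    return value.replace(tzinfo=timezone.utc)

# A timestamp at UTC+2 round-trips to the same instant:
local = datetime(2024, 1, 1, 12, 0, tzinfo=timezone(timedelta(hours=2)))
stored = to_naive_utc(local)
assert stored == datetime(2024, 1, 1, 10, 0)  # naive, shifted to UTC
assert from_naive_utc(stored) == local
```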

aiida.storage.sqlite_zip.models.create_orm_cls(klass: sqlalchemy.orm.decl_api.Model) sqlalchemy.orm.decl_api.SqliteModel[source]#

Create an ORM class from an existing table in the declarative meta

aiida.storage.sqlite_zip.models.get_model_from_entity(entity_type: aiida.orm.entities.EntityTypes) Tuple[Any, Set[str]][source]#

Return the Sqlalchemy model and column names corresponding to the given entity.

aiida.storage.sqlite_zip.models.pg_to_sqlite(pg_table: sqlalchemy.sql.schema.Table)[source]#

Convert a model intended for PostgreSQL to one compatible with SQLite.

This module contains the AiiDA backend ORM classes for the SQLite backend.

It reuses the classes already defined in the psql_dos backend (for PostgreSQL), but redefines the SQLAlchemy models as SQLite-compatible ones.

class aiida.storage.sqlite_zip.orm.SqliteAuthInfo(backend, computer, user)[source]#

Bases: aiida.storage.sqlite_zip.orm.SqliteEntityOverride, aiida.storage.psql_dos.orm.authinfos.SqlaAuthInfo

COMPUTER_CLASS#

alias of aiida.storage.sqlite_zip.orm.SqliteComputer

MODEL_CLASS#

alias of aiida.storage.sqlite_zip.models.DbAuthInfo

USER_CLASS#

alias of aiida.storage.sqlite_zip.orm.SqliteUser

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
_abc_impl = <_abc_data object>#
_model: aiida.storage.psql_dos.orm.utils.ModelWrapper#
class aiida.storage.sqlite_zip.orm.SqliteAuthInfoCollection(backend: StorageBackend)[source]#

Bases: aiida.storage.psql_dos.orm.authinfos.SqlaAuthInfoCollection

ENTITY_CLASS#

alias of aiida.storage.sqlite_zip.orm.SqliteAuthInfo

__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
class aiida.storage.sqlite_zip.orm.SqliteComment(backend, node, user, content=None, ctime=None, mtime=None)[source]#

Bases: aiida.storage.sqlite_zip.orm.SqliteEntityOverride, aiida.storage.psql_dos.orm.comments.SqlaComment

MODEL_CLASS#

alias of aiida.storage.sqlite_zip.models.DbComment

USER_CLASS#

alias of aiida.storage.sqlite_zip.orm.SqliteUser

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
_abc_impl = <_abc_data object>#
_model: aiida.storage.psql_dos.orm.utils.ModelWrapper#
class aiida.storage.sqlite_zip.orm.SqliteCommentCollection(backend: StorageBackend)[source]#

Bases: aiida.storage.psql_dos.orm.comments.SqlaCommentCollection

ENTITY_CLASS#

alias of aiida.storage.sqlite_zip.orm.SqliteComment

__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
class aiida.storage.sqlite_zip.orm.SqliteComputer(backend, **kwargs)[source]#

Bases: aiida.storage.sqlite_zip.orm.SqliteEntityOverride, aiida.storage.psql_dos.orm.computers.SqlaComputer

MODEL_CLASS#

alias of aiida.storage.sqlite_zip.models.DbComputer

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
_abc_impl = <_abc_data object>#
_model: aiida.storage.psql_dos.orm.utils.ModelWrapper#
class aiida.storage.sqlite_zip.orm.SqliteComputerCollection(backend: StorageBackend)[source]#

Bases: aiida.storage.psql_dos.orm.computers.SqlaComputerCollection

ENTITY_CLASS#

alias of aiida.storage.sqlite_zip.orm.SqliteComputer

__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
class aiida.storage.sqlite_zip.orm.SqliteEntityOverride[source]#

Bases: object

Overrides type-checking of psql_dos Entity.

MODEL_CLASS: Any#
__annotations__ = {'MODEL_CLASS': typing.Any, '_model': <class 'aiida.storage.psql_dos.orm.utils.ModelWrapper'>}#
__dict__ = mappingproxy({'__module__': 'aiida.storage.sqlite_zip.orm', '__annotations__': {'MODEL_CLASS': typing.Any, '_model': <class 'aiida.storage.psql_dos.orm.utils.ModelWrapper'>}, '__doc__': 'Overrides type-checking of psql_dos ``Entity``.', '_class_check': <classmethod object>, 'from_dbmodel': <classmethod object>, 'store': <function SqliteEntityOverride.store>, '__dict__': <attribute '__dict__' of 'SqliteEntityOverride' objects>, '__weakref__': <attribute '__weakref__' of 'SqliteEntityOverride' objects>})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__weakref__#

list of weak references to the object (if defined)

classmethod _class_check()[source]#

Assert that the class is correctly configured

_model: aiida.storage.psql_dos.orm.utils.ModelWrapper#
classmethod from_dbmodel(dbmodel, backend)[source]#

Create an AiiDA Entity from the corresponding SQLA ORM model and storage backend

Parameters
  • dbmodel – the SQLAlchemy model to create the entity from

  • backend – the corresponding storage backend

Returns

the AiiDA entity

store(*args, **kwargs)[source]#
class aiida.storage.sqlite_zip.orm.SqliteGroup(backend, label, user, description='', type_string='')[source]#

Bases: aiida.storage.sqlite_zip.orm.SqliteEntityOverride, aiida.storage.psql_dos.orm.groups.SqlaGroup

MODEL_CLASS#

alias of aiida.storage.sqlite_zip.models.DbGroup

USER_CLASS#

alias of aiida.storage.sqlite_zip.orm.SqliteUser

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
_abc_impl = <_abc_data object>#
_model: aiida.storage.psql_dos.orm.utils.ModelWrapper#
class aiida.storage.sqlite_zip.orm.SqliteGroupCollection(backend: StorageBackend)[source]#

Bases: aiida.storage.psql_dos.orm.groups.SqlaGroupCollection

ENTITY_CLASS#

alias of aiida.storage.sqlite_zip.orm.SqliteGroup

__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
class aiida.storage.sqlite_zip.orm.SqliteLog(backend, time, loggername, levelname, dbnode_id, message='', metadata=None)[source]#

Bases: aiida.storage.sqlite_zip.orm.SqliteEntityOverride, aiida.storage.psql_dos.orm.logs.SqlaLog

MODEL_CLASS#

alias of aiida.storage.sqlite_zip.models.DbLog

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
_abc_impl = <_abc_data object>#
_model: aiida.storage.psql_dos.orm.utils.ModelWrapper#
class aiida.storage.sqlite_zip.orm.SqliteLogCollection(backend: StorageBackend)[source]#

Bases: aiida.storage.psql_dos.orm.logs.SqlaLogCollection

ENTITY_CLASS#

alias of aiida.storage.sqlite_zip.orm.SqliteLog

__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
class aiida.storage.sqlite_zip.orm.SqliteNode(backend, node_type, user, computer=None, process_type=None, label='', description='', ctime=None, mtime=None)[source]#

Bases: aiida.storage.sqlite_zip.orm.SqliteEntityOverride, aiida.storage.psql_dos.orm.nodes.SqlaNode

SQLA Node backend entity

COMPUTER_CLASS#

alias of aiida.storage.sqlite_zip.orm.SqliteComputer

LINK_CLASS#

alias of aiida.storage.sqlite_zip.models.DbLink

MODEL_CLASS#

alias of aiida.storage.sqlite_zip.models.DbNode

USER_CLASS#

alias of aiida.storage.sqlite_zip.orm.SqliteUser

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
_abc_impl = <_abc_data object>#
_model: aiida.storage.psql_dos.orm.utils.ModelWrapper#
class aiida.storage.sqlite_zip.orm.SqliteNodeCollection(backend: StorageBackend)[source]#

Bases: aiida.storage.psql_dos.orm.nodes.SqlaNodeCollection

ENTITY_CLASS#

alias of aiida.storage.sqlite_zip.orm.SqliteNode

__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
class aiida.storage.sqlite_zip.orm.SqliteQueryBuilder(backend)[source]#

Bases: aiida.storage.psql_dos.orm.querybuilder.main.SqlaQueryBuilder

QueryBuilder to use with the SQLAlchemy backend, adapted for SQLite.

property AuthInfo#
property Comment#
property Computer#
property Group#
property Log#
property Node#
property User#
__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
_abc_impl = <_abc_data object>#
_data: QueryDictType#
_hash: Optional[str]#
_query: Query#
_requested_projections: int#
_tag_to_alias: Dict[str, Optional[AliasedClass]]#
_tag_to_projected_fields: Dict[str, Dict[str, int]]#
get_filter_expr_from_jsonb(operator: str, value, attr_key: List[str], column=None, column_name=None, alias=None)[source]#

Return a filter expression.

See: https://www.sqlite.org/json1.html
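The JSON1 functions linked above can be exercised directly with the standard-library sqlite3 module. The table and data below are illustrative only, and JSON1 must be compiled into the SQLite build (it is in recent CPython releases); json_extract is the building block such filter expressions compile down to.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE db_dbnode (id INTEGER PRIMARY KEY, attributes JSON)')
con.execute(
    'INSERT INTO db_dbnode (attributes) VALUES (?)',
    ('{"energy": -1.5, "units": "eV"}',),
)
# json_extract pulls a value out of the JSON column via a path expression,
# which is essentially what a filter on a nested attribute key becomes:
row = con.execute(
    "SELECT id FROM db_dbnode WHERE json_extract(attributes, '$.energy') < 0"
).fetchone()
```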

get_projectable_attribute(alias, column_name: str, attrpath: List[str], cast: Optional[str] = None) sqlalchemy.sql.elements.ColumnElement[source]#

Return an attribute stored in a JSON field of the given column

inner_to_outer_schema: Dict[str, Dict[str, str]]#
outer_to_inner_schema: Dict[str, Dict[str, str]]#
property table_groups_nodes#
class aiida.storage.sqlite_zip.orm.SqliteUser(backend, email, first_name, last_name, institution)[source]#

Bases: aiida.storage.sqlite_zip.orm.SqliteEntityOverride, aiida.storage.psql_dos.orm.users.SqlaUser

MODEL_CLASS#

alias of aiida.storage.sqlite_zip.models.DbUser

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
_abc_impl = <_abc_data object>#
_model: aiida.storage.psql_dos.orm.utils.ModelWrapper#
class aiida.storage.sqlite_zip.orm.SqliteUserCollection(backend: StorageBackend)[source]#

Bases: aiida.storage.psql_dos.orm.users.SqlaUserCollection

ENTITY_CLASS#

alias of aiida.storage.sqlite_zip.orm.SqliteUser

__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
aiida.storage.sqlite_zip.orm._(dbmodel, backend)[source]#
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel, backend)[source]#
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: aiida.storage.sqlite_zip.models.DbUser, backend)
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: aiida.storage.sqlite_zip.models.DbGroup, backend)
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: aiida.storage.sqlite_zip.models.DbComputer, backend)
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: aiida.storage.sqlite_zip.models.DbNode, backend)
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: aiida.storage.sqlite_zip.models.DbAuthInfo, backend)
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: aiida.storage.sqlite_zip.models.DbComment, backend)
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: aiida.storage.sqlite_zip.models.DbLog, backend)
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: aiida.storage.sqlite_zip.models.DbLink, backend)
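The overloads above follow Python's single-dispatch pattern: one generic function plus a registered implementation per model class. A minimal sketch with stand-in classes (not the real models):

```python
from functools import singledispatch

class DbUser:  # stand-in for the real model class
    pass

class DbNode:  # stand-in for the real model class
    pass

@singledispatch
def get_backend_entity(dbmodel, backend):
    # fallback for unregistered model types
    raise TypeError(f'no backend entity for {type(dbmodel).__name__}')

@get_backend_entity.register
def _(dbmodel: DbUser, backend):
    return 'SqliteUser'  # the real function returns an entity instance

@get_backend_entity.register
def _(dbmodel: DbNode, backend):
    return 'SqliteNode'
```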

Utilities for this backend.

aiida.storage.sqlite_zip.utils.DB_FILENAME = 'db.sqlite3'#

The filename of the SQLite database.

aiida.storage.sqlite_zip.utils.META_FILENAME = 'metadata.json'#

The filename containing meta information about the storage instance.

aiida.storage.sqlite_zip.utils.REPO_FOLDER = 'repo'#

The name of the folder containing the repository files.

exception aiida.storage.sqlite_zip.utils.ReadOnlyError(msg='sqlite_zip storage is read-only')[source]#

Bases: aiida.common.exceptions.AiidaException

Raised when a write operation is called on a read-only archive.

__init__(msg='sqlite_zip storage is read-only')[source]#
__module__ = 'aiida.storage.sqlite_zip.utils'#
aiida.storage.sqlite_zip.utils.create_sqla_engine(path: Union[str, pathlib.Path], *, enforce_foreign_keys: bool = True, **kwargs) sqlalchemy.future.engine.Engine[source]#

Create a new engine instance.

aiida.storage.sqlite_zip.utils.extract_metadata(path: Union[str, pathlib.Path], *, search_limit: Optional[int] = 10) Dict[str, Any][source]#

Extract the metadata dictionary from the archive.

Parameters

search_limit – the maximum number of records to search for the metadata file in a zip file.

aiida.storage.sqlite_zip.utils.read_version(path: Union[str, pathlib.Path], *, search_limit: Optional[int] = None) str[source]#

Read the version of the storage instance from the path.

This is intended to work for all versions of the storage format.

Parameters
  • path – path to storage instance, either a folder, zip file or tar file.

  • search_limit – the maximum number of records to search for the metadata file in a zip file.

Raises

UnreachableStorage if a version cannot be read from the file
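The constants and functions above describe the zip layout, which can be reproduced with the standard library to show roughly how the metadata is located. The metadata content and version value here are illustrative only:

```python
import io
import json
import zipfile

# Build an in-memory zip with metadata.json at the top, as in the archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('metadata.json', json.dumps({'export_version': '1.0'}))
    zf.writestr('db.sqlite3', b'')  # placeholder for the database

# Mirror extract_metadata's approach: scan only the first few entries
# (a search_limit-style cap) for the metadata file.
meta = None
with zipfile.ZipFile(buf) as zf:
    for name in zf.namelist()[:10]:
        if name == 'metadata.json':
            meta = json.loads(zf.read(name))
            break
```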

aiida.storage.sqlite_zip.utils.sqlite_case_sensitive_like(dbapi_connection, _)[source]#

Enforce case-sensitive LIKE operations (off by default).

See: https://www.sqlite.org/pragma.html#pragma_case_sensitive_like

aiida.storage.sqlite_zip.utils.sqlite_enforce_foreign_keys(dbapi_connection, _)[source]#

Enforce foreign key constraints when using the SQLite backend (off by default).

See: https://www.sqlite.org/pragma.html#pragma_foreign_keys
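Both listeners turn on SQLite pragmas that are off by default. Their effect can be demonstrated with the standard-library sqlite3 module; the listeners themselves are registered on the engine, but the raw pragmas below are equivalent in effect:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('PRAGMA foreign_keys=ON')         # as sqlite_enforce_foreign_keys
con.execute('PRAGMA case_sensitive_like=ON')  # as sqlite_case_sensitive_like

con.execute('CREATE TABLE parent (id INTEGER PRIMARY KEY)')
con.execute('CREATE TABLE child (pid INTEGER REFERENCES parent(id))')
try:
    con.execute('INSERT INTO child (pid) VALUES (99)')  # no such parent row
except sqlite3.IntegrityError:
    pass  # rejected only because foreign_keys is ON

# LIKE is now case sensitive: 'A' no longer matches 'a'.
matches = con.execute("SELECT 'A' LIKE 'a'").fetchone()[0]
```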