aiida.storage.sqlite_zip package#

Module with implementation of the storage backend, using an SQLite database and repository files, within a zipfile.

The content of the zip file is:

|- storage.zip
    |- metadata.json
    |- db.sqlite3
    |- repo/
        |- hashkey1
        |- hashkey2
        ...

For quick access, the metadata (such as the version) is stored in a metadata.json file at the “top” of the zip file, with the sqlite database just below it, followed by the repository files. Repository files are named by their SHA-256 content hash.

This storage method is primarily intended for the AiiDA archive, as a read-only storage method, because sqlite and zip are not suitable for concurrent write access.
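The metadata can be inspected without extracting the archive, since metadata.json sits at the top of the zip file. A minimal sketch using only the standard library (the helper name read_archive_metadata is illustrative, not part of the AiiDA API):

```python
import json
import zipfile

def read_archive_metadata(filepath: str) -> dict:
    """Read metadata.json from the top of an sqlite_zip archive (sketch)."""
    with zipfile.ZipFile(filepath, "r") as zf:
        with zf.open("metadata.json") as handle:
            return json.load(handle)
```

Because the zip central directory is read first, this touches only the one member, which is what makes the "metadata at the top" layout cheap to query.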

The archive format originally used a JSON file to store the database, and these revisions are handled by the version_profile and migrate backend methods.

Subpackages#

Submodules#

The table models are dynamically generated from the sqlalchemy backend models.

class aiida.storage.sqlite_zip.backend.FolderBackendRepository(path: str | Path)[source]#

Bases: _RoBackendRepository

A read-only backend for a folder.

The folder should contain repository files, named by the sha256 hash of the file contents.
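The content-addressed layout can be pictured with a small sketch; object_key and add_object are hypothetical helpers for illustrating how such a folder is populated (the backend itself is read-only):

```python
import hashlib
from pathlib import Path

def object_key(content: bytes) -> str:
    """Return the sha256 hex digest used as the object's key (sketch)."""
    return hashlib.sha256(content).hexdigest()

def add_object(folder: Path, content: bytes) -> str:
    """Write content into a folder-style repository under its hash key."""
    key = object_key(content)
    (folder / key).write_bytes(content)
    return key
```

Naming files by content hash means identical contents deduplicate automatically and a key can always be verified against the bytes it points to.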

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.backend'#
_abc_impl = <_abc._abc_data object>#
has_object(key: str) → bool[source]#

Return whether the repository has an object with the given key.

Parameters:

key – fully qualified identifier for the object within the repository.

Returns:

True if the object exists, False otherwise.

list_objects() → Iterable[str][source]#

Return an iterable that yields all available objects by key.

Returns:

An iterable over all the available object keys.

open(key: str) → Iterator[BinaryIO][source]#

Open a file handle to an object stored under the given key.

Note

This should only be used to open a handle to read an existing file. To write a new file, use the method put_object_from_filelike instead.

Parameters:

key – fully qualified identifier for the object within the repository.

Returns:

yield a byte stream object.

Raises:
class aiida.storage.sqlite_zip.backend.SqliteZipBackend(profile: Profile)[source]#

Bases: StorageBackend

A read-only backend for a sqlite/zip format.

The storage format uses an SQLite database and repository files, within a folder or zipfile.

The content of the folder/zipfile should be:

|- metadata.json
|- db.sqlite3
|- repo/
    |- hashkey1
    |- hashkey2
    ...
class Model(*, filepath: str)[source]#

Bases: BaseModel

Model describing required information to configure an instance of the storage.

__abstractmethods__ = frozenset({})#
__annotations__ = {'__class_vars__': 'ClassVar[set[str]]', '__private_attributes__': 'ClassVar[dict[str, ModelPrivateAttr]]', '__pydantic_complete__': 'ClassVar[bool]', '__pydantic_core_schema__': 'ClassVar[CoreSchema]', '__pydantic_custom_init__': 'ClassVar[bool]', '__pydantic_decorators__': 'ClassVar[_decorators.DecoratorInfos]', '__pydantic_extra__': 'dict[str, Any] | None', '__pydantic_fields_set__': 'set[str]', '__pydantic_generic_metadata__': 'ClassVar[_generics.PydanticGenericMetadata]', '__pydantic_parent_namespace__': 'ClassVar[dict[str, Any] | None]', '__pydantic_post_init__': "ClassVar[None | typing_extensions.Literal['model_post_init']]", '__pydantic_private__': 'dict[str, Any] | None', '__pydantic_root_model__': 'ClassVar[bool]', '__pydantic_serializer__': 'ClassVar[SchemaSerializer]', '__pydantic_validator__': 'ClassVar[SchemaValidator]', '__signature__': 'ClassVar[Signature]', 'filepath': 'str', 'model_computed_fields': 'ClassVar[dict[str, ComputedFieldInfo]]', 'model_config': 'ClassVar[ConfigDict]', 'model_fields': 'ClassVar[dict[str, FieldInfo]]'}#
__class_vars__: ClassVar[set[str]] = {}#
__dict__#
__module__ = 'aiida.storage.sqlite_zip.backend'#
__private_attributes__: ClassVar[dict[str, ModelPrivateAttr]] = {}#
__pydantic_complete__: ClassVar[bool] = True#
__pydantic_core_schema__: ClassVar[CoreSchema] = {'cls': <class 'aiida.storage.sqlite_zip.backend.SqliteZipBackend.Model'>, 'config': {'title': 'Model'}, 'custom_init': False, 'metadata': {'pydantic_js_annotation_functions': [], 'pydantic_js_functions': [functools.partial(<function modify_model_json_schema>, cls=<class 'aiida.storage.sqlite_zip.backend.SqliteZipBackend.Model'>), <bound method BaseModel.__get_pydantic_json_schema__ of <class 'aiida.storage.sqlite_zip.backend.SqliteZipBackend.Model'>>]}, 'ref': 'aiida.storage.sqlite_zip.backend.SqliteZipBackend.Model:94461380647072', 'root_model': False, 'schema': {'computed_fields': [], 'fields': {'filepath': {'metadata': {'pydantic_js_annotation_functions': [<function get_json_schema_update_func.<locals>.json_schema_update_func>], 'pydantic_js_functions': []}, 'schema': {'function': {'function': <bound method SqliteZipBackend.Model.filepath_exists_and_is_absolute of <class 'aiida.storage.sqlite_zip.backend.SqliteZipBackend.Model'>>, 'type': 'no-info'}, 'schema': {'type': 'str'}, 'type': 'function-after'}, 'type': 'model-field'}}, 'model_name': 'Model', 'type': 'model-fields'}, 'type': 'model'}#
__pydantic_custom_init__: ClassVar[bool] = False#
__pydantic_decorators__: ClassVar[_decorators.DecoratorInfos] = DecoratorInfos(validators={}, field_validators={'filepath_exists_and_is_absolute': Decorator(cls_ref='aiida.storage.sqlite_zip.backend.SqliteZipBackend.Model:94461380647072', cls_var_name='filepath_exists_and_is_absolute', func=<bound method SqliteZipBackend.Model.filepath_exists_and_is_absolute of <class 'aiida.storage.sqlite_zip.backend.SqliteZipBackend.Model'>>, shim=None, info=FieldValidatorDecoratorInfo(fields=('filepath',), mode='after', check_fields=None))}, root_validators={}, field_serializers={}, model_serializers={}, model_validators={}, computed_fields={})#
__pydantic_extra__: dict[str, Any] | None#
__pydantic_fields_set__: set[str]#
__pydantic_generic_metadata__: ClassVar[_generics.PydanticGenericMetadata] = {'args': (), 'origin': None, 'parameters': ()}#
__pydantic_parent_namespace__: ClassVar[dict[str, Any] | None] = {'__doc__': 'A read-only backend for a sqlite/zip format.\n\n    The storage format uses an SQLite database and repository files, within a folder or zipfile.\n\n    The content of the folder/zipfile should be::\n\n        |- metadata.json\n        |- db.sqlite3\n        |- repo/\n            |- hashkey1\n            |- hashkey2\n            ...\n\n    ', '__module__': 'aiida.storage.sqlite_zip.backend', '__qualname__': 'SqliteZipBackend', 'read_only': True}#
__pydantic_post_init__: ClassVar[None | typing_extensions.Literal['model_post_init']] = None#
__pydantic_private__: dict[str, Any] | None#
__pydantic_serializer__: ClassVar[SchemaSerializer]#
__pydantic_validator__: ClassVar[SchemaValidator]#
__signature__: ClassVar[Signature] = <Signature (*, filepath: str) -> None>#
__weakref__#

list of weak references to the object (if defined)

_abc_impl = <_abc._abc_data object>#
filepath: str#
classmethod filepath_exists_and_is_absolute(value: str) → str[source]#

Validate that the filepath exists and return the resolved and absolute filepath.

model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'filepath': FieldInfo(annotation=str, required=True, title='Filepath of the archive', description='Filepath of the archive.')}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

__abstractmethods__ = frozenset({})#
__init__(profile: Profile)[source]#

Initialize the backend for this profile.

Raises:
  • aiida.common.exceptions.UnreachableStorage – if the storage cannot be accessed

  • aiida.common.exceptions.IncompatibleStorageSchema – if the profile’s storage schema is not at the latest version (and thus should be migrated)

  • aiida.common.exceptions.CorruptStorage – if the storage is internally inconsistent

__module__ = 'aiida.storage.sqlite_zip.backend'#
__str__() → str[source]#

Return a string showing connection details for this instance.

_abc_impl = <_abc._abc_data object>#
_clear() → None[source]#

Clear the storage, removing all data.

Warning

This is a destructive operation and should only be used for testing purposes.

_default_user: 'User' | None#
property authinfos#

Return the collection of authorisation information objects

bulk_insert(entity_type: EntityTypes, rows: list[dict], allow_defaults: bool = False) → list[int][source]#

Insert a list of entities into the database, directly into a backend transaction.

Parameters:
  • entity_type – The type of the entity

  • rows – A list of dictionaries, containing all fields of the backend model, except the id field (a.k.a. primary key), which will be generated dynamically

  • allow_defaults – If False, assert that each row contains all fields (except primary keys); otherwise, allow default values for missing fields.

Raises:

IntegrityError – if the keys in a row are not a subset of the columns in the table

Returns:

The list of generated primary keys for the entities

bulk_update(entity_type: EntityTypes, rows: list[dict]) → None[source]#

Update a list of entities in the database, directly with a backend transaction.

Parameters:
  • entity_type – The type of the entity

  • rows – A list of dictionaries, containing fields of the backend model to update, and the id field (a.k.a. primary key)

Raises:

IntegrityError – if the keys in a row are not a subset of the columns in the table

close()[source]#

Close the backend.

property comments#

Return the collection of comments

property computers#

Return the collection of computers

static create_profile(filepath: str | Path, options: dict | None = None) → Profile[source]#

Create a new profile instance for this backend, from the path to the zip file.

delete() → None[source]#

Delete the storage and all its data.

delete_nodes_and_connections(pks_to_delete: Sequence[int])[source]#

Delete all nodes corresponding to pks in the input and any links to/from them.

This method is intended to be used within a transaction context.

Parameters:

pks_to_delete – a sequence of node pks to delete

Raises:

AssertionError – if a transaction is not active

get_backend_entity(model)[source]#

Return the backend entity that corresponds to the given Model instance.

get_global_variable(key: str)[source]#

Return a global variable from the storage.

Parameters:

key – the key of the setting

Raises:

KeyError – if the setting does not exist

get_info(detailed: bool = False) → dict[source]#

Return general information on the storage.

Parameters:

detailed – flag to request more detailed information about the content of the storage.

Returns:

a nested dict with the relevant information.

get_repository() → _RoBackendRepository[source]#

Return the object repository configured for this backend.

get_session() → Session[source]#

Return an SQLAlchemy session.

property groups#

Return the collection of groups

property in_transaction: bool#

Return whether a transaction is currently active.

classmethod initialise(profile: Profile, reset: bool = False) → bool[source]#

Initialise an instance of the SqliteZipBackend storage backend.

Parameters:

reset – If true, destroy the backend if it already exists, including all of its data, before recreating and initialising it. This is useful, for example, for test profiles that need to be reset before or after tests have run.

Returns:

True if the storage was initialised by the function call, False if it was already initialised.

property is_closed: bool#

Return whether the storage is closed.

property logs#

Return the collection of logs

maintain(dry_run: bool = False, live: bool = True, **kwargs) → None[source]#

Perform maintenance tasks on the storage.

If full == True, then this method may attempt to block the profile associated with the storage to guarantee the safety of its procedures. This will not only prevent any other subsequent process from accessing that profile, but will also first check if there is already any process using it and raise if that is the case. The user will have to manually stop any processes currently accessing the profile or wait for them to finish on their own.

Parameters:
  • full – flag to perform operations that require stopping use of the profile to be maintained.

  • dry_run – flag to only print the actions that would be taken without actually executing them.

classmethod migrate(profile: Profile)[source]#

Migrate the storage of a profile to the latest schema version.

If the schema version is already the latest version, this method does nothing. If the storage is uninitialised, this method will raise an exception.

Raises:
  • aiida.common.exceptions.UnreachableStorage – if the storage cannot be accessed.

  • StorageMigrationError – if the storage is not initialised.

property nodes#

Return the collection of nodes

query() → SqliteQueryBuilder[source]#

Return an instance of a query builder implementation for this backend.

read_only = True#

This plugin is read only and data cannot be created or mutated.

set_global_variable(key: str, value, description: str | None = None, overwrite=True) → None[source]#

Set a global variable in the storage.

Parameters:
  • key – the key of the setting

  • value – the value of the setting

  • description – the description of the setting (optional)

  • overwrite – if True, overwrite the setting if it already exists

Raises:

ValueError – if the key already exists and overwrite is False

transaction()[source]#

Get a context manager that can be used as a transaction context for a series of backend operations. If an exception is raised within the context, the changes are rolled back and the state is as it was before entering. Transactions can be nested.

Returns:

a context manager to group database operations

property users#

Return the collection of users

classmethod version_head() → str[source]#

Return the head schema version of this storage backend type.

classmethod version_profile(profile: Profile) → str | None[source]#

Return the schema version of the given profile’s storage, or None for empty/uninitialised storage.

Raises:

aiida.common.exceptions.UnreachableStorage – if the storage cannot be accessed

class aiida.storage.sqlite_zip.backend.ZipfileBackendRepository(path: str | Path)[source]#

Bases: _RoBackendRepository

A read-only backend for a zip file.

The zip file should contain repository files with the key format: repo/<sha256 hash>, i.e. files named by the sha256 hash of the file contents, inside a repo directory.
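As a sketch of this layout, the keys can be listed directly with Python’s standard zipfile module (list_repo_keys is an illustrative helper, not part of this API):

```python
import zipfile

def list_repo_keys(filepath: str) -> list[str]:
    """List object keys stored under 'repo/' in an archive zip (sketch)."""
    with zipfile.ZipFile(filepath, "r") as zf:
        return [
            name[len("repo/"):]
            for name in zf.namelist()
            if name.startswith("repo/") and not name.endswith("/")
        ]
```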

__abstractmethods__ = frozenset({})#
__init__(path: str | Path)[source]#

Initialise the repository backend.

Parameters:

path – the path to the zip file

__module__ = 'aiida.storage.sqlite_zip.backend'#
_abc_impl = <_abc._abc_data object>#
property _zipfile: ZipFile#

Return the open zip file.

close() → None[source]#

Close the repository.

has_object(key: str) → bool[source]#

Return whether the repository has an object with the given key.

Parameters:

key – fully qualified identifier for the object within the repository.

Returns:

True if the object exists, False otherwise.

list_objects() → Iterable[str][source]#

Return an iterable that yields all available objects by key.

Returns:

An iterable over all the available object keys.

open(key: str) → Iterator[BinaryIO][source]#

Open a file handle to an object stored under the given key.

Note

This should only be used to open a handle to read an existing file. To write a new file, use the method put_object_from_filelike instead.

Parameters:

key – fully qualified identifier for the object within the repository.

Returns:

yield a byte stream object.

Raises:
class aiida.storage.sqlite_zip.backend._RoBackendRepository(path: str | Path)[source]#

Bases: AbstractRepositoryBackend

An abstract backend for a read-only folder or zip file.

__abstractmethods__ = frozenset({'list_objects'})#
__init__(path: str | Path)[source]#

Initialise the repository backend.

Parameters:

path – the path to the zip file

__module__ = 'aiida.storage.sqlite_zip.backend'#
_abc_impl = <_abc._abc_data object>#
_put_object_from_filelike(handle: BinaryIO) → str[source]#

close() → None[source]#

Close the repository.

delete_objects(keys: list[str]) → None[source]#

Delete the objects from the repository.

Parameters:

keys – list of fully qualified identifiers for the objects within the repository.

Raises:

erase() → None[source]#

Delete the repository itself and all its contents.

Note

This should not merely delete the contents of the repository but any resources it created. For example, if the repository is essentially a folder on disk, the folder itself should also be deleted, not just its contents.

get_info(detailed: bool = False, **kwargs) → dict[source]#

Return relevant information about the content of the repository.

Parameters:

detailed – flag to enable extra information (detailed=False by default, only returns basic information).

Returns:

a dictionary with the information.

get_object_hash(key: str) → str[source]#

Return the SHA-256 hash of an object stored under the given key.

Important

A SHA-256 hash should always be returned, to ensure consistency across different repository implementations.

Parameters:

key – fully qualified identifier for the object within the repository.

Raises:
has_objects(keys: list[str]) → list[bool][source]#

Return whether the repository has objects with the given keys.

Parameters:

keys – list of fully qualified identifiers for objects within the repository.

Returns:

list of booleans, in the same order as the keys provided, with value True if the respective object exists and False otherwise.

initialise(**kwargs) → None[source]#

Initialise the repository if it hasn’t already been initialised.

Parameters:

kwargs – parameters for the initialisation.

property is_initialised: bool#

Return whether the repository has been initialised.

iter_object_streams(keys: list[str]) → Iterator[Tuple[str, BinaryIO]][source]#

Return an iterator over the (read-only) byte streams of objects identified by key.

Note

Handles should only be read within the context of this iterator.

Parameters:

keys – fully qualified identifiers for the objects within the repository.

Returns:

an iterator over the object byte streams.

Raises:
property key_format: str | None#

Return the format for the keys of the repository.

This is important when migrating between backends (e.g. archive -> main): if the key formats are not equal, it is necessary to recompute all the Node.base.repository.metadata before importing (otherwise they will not match the repository).

maintain(dry_run: bool = False, live: bool = True, **kwargs) → None[source]#

Perform maintenance operations.

Parameters:
  • dry_run – flag to only print the actions that would be taken without actually executing them.

  • live – flag to indicate to the backend whether AiiDA is live or not (i.e. if the profile of the backend is currently being used/accessed). The backend is then expected to only allow (and thus set by default) the operations that are safe to perform in this state.

property uuid: str | None#

Return the unique identifier of the repository.

Versioning and migration implementation for the sqlite_zip format.

aiida.storage.sqlite_zip.migrator._alembic_config() → Config[source]#

Return an instance of an Alembic Config.

aiida.storage.sqlite_zip.migrator._alembic_connect(db_path: Path, enforce_foreign_keys=True) → Iterator[Config][source]#

Context manager to return an instance of an Alembic configuration.

The profile’s database connection is added in the attributes property, through which it can then be retrieved, including in the env.py file, which is run when the database is migrated.

aiida.storage.sqlite_zip.migrator._alembic_script() → ScriptDirectory[source]#

Return an instance of an Alembic ScriptDirectory.

aiida.storage.sqlite_zip.migrator._migration_context(db_path: Path) → Iterator[MigrationContext][source]#

Context manager to return an instance of an Alembic migration context.

This migration context will have been configured with the current database connection, which allows this context to be used to inspect the contents of the database, such as the current revision.

aiida.storage.sqlite_zip.migrator._perform_legacy_migrations(current_version: str, to_version: str, metadata: dict, data: dict) → str[source]#

Perform legacy migrations from the current version to the desired version.

Legacy archives use the old data.json format for storing the database. These migrations simply manipulate the metadata and data in-place.

Parameters:
  • current_version – current version of the archive

  • to_version – version to migrate to

  • metadata – the metadata to migrate

  • data – the data to migrate

Returns:

the new version of the archive
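A legacy migration step can be pictured as a function that mutates the two dictionaries in place and records the version bump. This is an illustrative sketch, not the actual AiiDA migration code; the function name and the version strings are hypothetical:

```python
def migrate_example_step(metadata: dict, data: dict) -> None:
    """Mutate the metadata/data dictionaries in place for one version bump.

    Hypothetical sketch: the keys touched ('export_version',
    'conversion_info') and the target version are illustrative.
    """
    old = metadata.get("export_version")
    metadata["export_version"] = "0.5"
    # Legacy steps would also reshape entries inside `data` here.
    metadata.setdefault("conversion_info", []).append(
        f"Migrated from {old} to 0.5"
    )
```

Because the steps only mutate the dictionaries, they can be chained one after another until to_version is reached, which is what _perform_legacy_migrations does.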

aiida.storage.sqlite_zip.migrator._read_json(inpath: Path, filename: str, is_tar: bool) → Dict[str, Any][source]#

Read a JSON file from the archive.

aiida.storage.sqlite_zip.migrator.get_schema_version_head() → str[source]#

Return the head schema version for this storage, i.e. the latest schema this storage can be migrated to.

aiida.storage.sqlite_zip.migrator.list_versions() → List[str][source]#

Return all available schema versions (oldest to latest).

aiida.storage.sqlite_zip.migrator.migrate(inpath: str | Path, outpath: str | Path, version: str, *, force: bool = False, compression: int = 6) → None[source]#

Migrate an sqlite_zip storage file to a specific version.

Historically, this format could be a zip or a tar file, containing the database in a bespoke JSON format, and the repository files in the “legacy” per-node format. For these versions, we first migrate the JSON database to the final legacy schema, then we convert this file to the SQLite database, whilst sequentially migrating the repository files.

Once any legacy migrations have been performed, we can then migrate the SQLite database to the final schema, using alembic.

Note that, to minimise disk space usage, we never fully extract/uncompress the input file (except when migrating from a legacy tar file, where we cannot extract individual files):

  1. The sqlite database is extracted to a temporary location and migrated

  2. A new zip file is opened, within a temporary folder

  3. The repository files are “streamed” directly between the input file and the new zip file

  4. The sqlite database and metadata JSON are written to the new zip file

  5. The new zip file is closed (which writes its final central directory)

  6. The new zip file is moved to the output location, removing any existing file if force=True

Parameters:
  • inpath – Path to the input file

  • outpath – Path to output the migrated file

  • version – Target version

  • force – If True, overwrite the output file if it exists

  • compression – Compression level for the output file
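The streaming in step 3 can be sketched with the standard-library zipfile module, which can copy members between archives through open file handles without extracting the whole input to disk (stream_members is an illustrative helper, not the actual migrator code):

```python
import shutil
import zipfile

def stream_members(inpath: str, outpath: str, prefix: str = "repo/") -> None:
    """Copy members matching prefix from one zip to another, streaming
    each member through a pair of file handles (sketch)."""
    with zipfile.ZipFile(inpath, "r") as zin, \
         zipfile.ZipFile(outpath, "w", compression=zipfile.ZIP_DEFLATED) as zout:
        for info in zin.infolist():
            if info.filename.startswith(prefix):
                # Open a read handle on the source member and a write handle
                # on the destination member; copy in chunks.
                with zin.open(info) as src, zout.open(info.filename, "w") as dst:
                    shutil.copyfileobj(src, dst)
```

ZipFile.open(name, "w") (Python 3.6+) is what makes this possible: the member bytes never need to exist as a separate file on disk.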

aiida.storage.sqlite_zip.migrator.validate_storage(inpath: Path) → None[source]#

Validate that the storage is at the head version.

Raises:
  • aiida.common.exceptions.UnreachableStorage – if the file does not exist

  • aiida.common.exceptions.CorruptStorage – if the version cannot be read from the storage.

  • aiida.common.exceptions.IncompatibleStorageSchema – if the storage is not compatible with the code API.

This module contains the SQLAlchemy models for the SQLite backend.

These models are intended to be identical to those of the psql_dos backend, except for changes to the database specific types:

  • UUID -> CHAR(32)

  • DateTime -> TZDateTime

  • JSONB -> JSON

Also, varchar_pattern_ops indexes are not possible in sqlite.

class aiida.storage.sqlite_zip.models.DbAuthInfo(**kwargs)#

Bases: SqliteModel

Database model to store data for aiida.orm.AuthInfo, storing computer authentication data per user.

Specifications are user-specific on how to submit jobs to the computer. The model also has an enabled logical switch that indicates whether the computer is available for use or not. This can be set and unset by the user.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fc9f7c98c90; DbAuthInfo>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dbauthinfo', MetaData(), Column('id', Integer(), table=<db_dbauthinfo>, primary_key=True, nullable=False), Column('aiidauser_id', Integer(), ForeignKey('db_dbuser.id'), table=<db_dbauthinfo>, nullable=False), Column('dbcomputer_id', Integer(), ForeignKey('db_dbcomputer.id'), table=<db_dbauthinfo>, nullable=False), Column('metadata', JSON(), table=<db_dbauthinfo>, nullable=False, default=CallableColumnDefault(<function dict>)), Column('auth_params', JSON(), table=<db_dbauthinfo>, nullable=False, default=CallableColumnDefault(<function dict>)), Column('enabled', Boolean(), table=<db_dbauthinfo>, nullable=False, default=ScalarElementColumnDefault(True)), schema=None)#
__tablename__ = 'db_dbauthinfo'#
_metadata#
_sa_class_manager = {'_metadata': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'aiidauser': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'aiidauser_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'auth_params': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbcomputer': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbcomputer_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'enabled': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
aiidauser#
aiidauser_id#
auth_params#
dbcomputer#
dbcomputer_id#
enabled#
id#
class aiida.storage.sqlite_zip.models.DbComment(**kwargs)#

Bases: SqliteModel

Database model to store data for aiida.orm.Comment.

Comments can be attached to nodes by users.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fc9f5d8f990; DbComment>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dbcomment', MetaData(), Column('id', Integer(), table=<db_dbcomment>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dbcomment>, nullable=False, default=CallableColumnDefault(<function get_new_uuid>)), Column('dbnode_id', Integer(), ForeignKey('db_dbnode.id'), table=<db_dbcomment>, nullable=False), Column('ctime', TZDateTime(), table=<db_dbcomment>, nullable=False, default=CallableColumnDefault(<function now>)), Column('mtime', TZDateTime(), table=<db_dbcomment>, nullable=False, onupdate=CallableColumnDefault(<function now>), default=CallableColumnDefault(<function now>)), Column('user_id', Integer(), ForeignKey('db_dbuser.id'), table=<db_dbcomment>, nullable=False), Column('content', Text(), table=<db_dbcomment>, nullable=False, default=ScalarElementColumnDefault('')), schema=None)#
__tablename__ = 'db_dbcomment'#
_sa_class_manager = {'content': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'ctime': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbnode': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbnode_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'mtime': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
content#
ctime#
dbnode#
dbnode_id#
id#
mtime#
user#
user_id#
uuid#
class aiida.storage.sqlite_zip.models.DbComputer(**kwargs)#

Bases: SqliteModel

Database model to store data for aiida.orm.Computer.

Computers represent (and contain the information of) the physical hardware resources available. Nodes can be associated with computers if they are remote codes, remote folders, or processes that ran remotely.

Computers are identified within AiiDA by their label (which must therefore be unique for each one in the database), whereas the hostname is the label that identifies the computer within the network from which one can access it.

The scheduler_type column contains the information of the scheduler (and plugin) that the computer uses to manage jobs, whereas the transport_type column contains the information of the transport (and plugin) required to copy files and communicate to and from the computer. The metadata contains some general settings for these communication and management protocols.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fc9f5d9d010; DbComputer>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dbcomputer', MetaData(), Column('id', Integer(), table=<db_dbcomputer>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dbcomputer>, nullable=False, default=CallableColumnDefault(<function get_new_uuid>)), Column('label', String(length=255), table=<db_dbcomputer>, nullable=False), Column('hostname', String(length=255), table=<db_dbcomputer>, nullable=False, default=ScalarElementColumnDefault('')), Column('description', Text(), table=<db_dbcomputer>, nullable=False, default=ScalarElementColumnDefault('')), Column('scheduler_type', String(length=255), table=<db_dbcomputer>, nullable=False, default=ScalarElementColumnDefault('')), Column('transport_type', String(length=255), table=<db_dbcomputer>, nullable=False, default=ScalarElementColumnDefault('')), Column('metadata', JSON(), table=<db_dbcomputer>, nullable=False, default=CallableColumnDefault(<function dict>)), schema=None)#
__tablename__ = 'db_dbcomputer'#
_metadata#
_sa_class_manager = {'_metadata': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'description': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'hostname': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'label': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'scheduler_type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'transport_type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
description#
hostname#
id#
label#
scheduler_type#
transport_type#
uuid#
class aiida.storage.sqlite_zip.models.DbGroup(**kwargs)#

Bases: SqliteModel

Database model to store aiida.orm.Group data.

A group may contain many different nodes, and each node can also be included in many different groups.

Users will typically identify and handle groups by using their label (which, unlike the labels in other models, must be unique). Groups also have a type, which serves to identify what plugin is being instantiated, and the extras property for users to set any relevant information.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fc9f5d9e990; DbGroup>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dbgroup', MetaData(), Column('id', Integer(), table=<db_dbgroup>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dbgroup>, nullable=False, default=CallableColumnDefault(<function get_new_uuid>)), Column('label', String(length=255), table=<db_dbgroup>, nullable=False), Column('type_string', String(length=255), table=<db_dbgroup>, nullable=False, default=ScalarElementColumnDefault('')), Column('time', TZDateTime(), table=<db_dbgroup>, nullable=False, default=CallableColumnDefault(<function now>)), Column('description', Text(), table=<db_dbgroup>, nullable=False, default=ScalarElementColumnDefault('')), Column('extras', JSON(), table=<db_dbgroup>, nullable=False, default=CallableColumnDefault(<function dict>)), Column('user_id', Integer(), ForeignKey('db_dbuser.id'), table=<db_dbgroup>, nullable=False), schema=None)#
__tablename__ = 'db_dbgroup'#
_sa_class_manager = {'dbnodes': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'description': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'extras': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'label': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'time': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'type_string': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
dbnodes#
description#
extras#
id#
label#
time#
type_string#
user#
user_id#
uuid#
aiida.storage.sqlite_zip.models.DbGroupNodes#

alias of DbGroupNode

class aiida.storage.sqlite_zip.models.DbLink(**kwargs)#

Bases: SqliteModel

Database model to store links between aiida.orm.Node.

Each entry in this table contains not only the id information of the two nodes that are linked, but also some extra properties of the link itself. This includes the type of the link (see the concepts section for all possible types) as well as a label which is more specific and typically determined by the procedure generating the process node that links the data nodes.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fc9f5d7f250; DbLink>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dblink', MetaData(), Column('id', Integer(), table=<db_dblink>, primary_key=True, nullable=False), Column('input_id', Integer(), ForeignKey('db_dbnode.id'), table=<db_dblink>, nullable=False), Column('output_id', Integer(), ForeignKey('db_dbnode.id'), table=<db_dblink>, nullable=False), Column('label', String(length=255), table=<db_dblink>, nullable=False), Column('type', String(length=255), table=<db_dblink>, nullable=False), schema=None)#
__tablename__ = 'db_dblink'#
_sa_class_manager = {'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'input_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'label': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'output_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
id#
input_id#
label#
output_id#
type#
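Once an archive's db.sqlite3 has been extracted, the link table above can be queried directly. Below is a minimal sketch using the stdlib sqlite3 module against an in-memory stand-in for the db_dblink schema (the table and column names come from __table__ above; the sample link data is invented for illustration):

```python
import sqlite3

# In-memory stand-in mirroring the db_dblink columns shown above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE db_dblink ("
    "id INTEGER PRIMARY KEY, input_id INTEGER NOT NULL, "
    "output_id INTEGER NOT NULL, label VARCHAR(255) NOT NULL, "
    "type VARCHAR(255) NOT NULL)"
)
# Invented sample links: nodes 1 and 2 are inputs to process node 3.
conn.executemany(
    "INSERT INTO db_dblink (input_id, output_id, label, type) VALUES (?, ?, ?, ?)",
    [(1, 3, "structure", "input_calc"), (2, 3, "parameters", "input_calc")],
)

# All inputs of node 3, with their link labels.
inputs = conn.execute(
    "SELECT input_id, label FROM db_dblink WHERE output_id = ? ORDER BY input_id",
    (3,),
).fetchall()
print(inputs)  # [(1, 'structure'), (2, 'parameters')]
```

In practice one would use the QueryBuilder rather than raw SQL, but the schema is plain SQLite and remains inspectable with any sqlite client.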
class aiida.storage.sqlite_zip.models.DbLog(**kwargs)#

Bases: SqliteModel

Database model to store data for aiida.orm.Log, corresponding to aiida.orm.ProcessNode.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fc9f5d7d450; DbLog>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dblog', MetaData(), Column('id', Integer(), table=<db_dblog>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dblog>, nullable=False, default=CallableColumnDefault(<function get_new_uuid>)), Column('time', TZDateTime(), table=<db_dblog>, nullable=False, default=CallableColumnDefault(<function now>)), Column('loggername', String(length=255), table=<db_dblog>, nullable=False), Column('levelname', String(length=50), table=<db_dblog>, nullable=False), Column('dbnode_id', Integer(), ForeignKey('db_dbnode.id'), table=<db_dblog>, nullable=False), Column('message', Text(), table=<db_dblog>, nullable=False, default=ScalarElementColumnDefault('')), Column('metadata', JSON(), table=<db_dblog>, nullable=False, default=CallableColumnDefault(<function dict>)), schema=None)#
__tablename__ = 'db_dblog'#
_metadata#
_sa_class_manager = {'_metadata': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbnode': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbnode_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'levelname': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'loggername': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'message': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'time': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
dbnode#
dbnode_id#
id#
levelname#

How critical the message is

loggername#

What process recorded the message

message#
time#
uuid#
class aiida.storage.sqlite_zip.models.DbNode(**kwargs)#

Bases: SqliteModel

Database model to store data for aiida.orm.Node.

Each node can be categorized according to its node_type, which indicates what kind of data or process node it is. Additionally, process nodes also have a process_type that further indicates what is the specific plugin it uses.

Nodes can also store two kinds of properties:

  • attributes are determined by the node_type, and are set before storing the node and can’t be modified afterwards.

  • extras, on the other hand, can be added and removed after the node has been stored and are usually set by the user.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fc9f5d8c450; DbNode>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dbnode', MetaData(), Column('id', Integer(), table=<db_dbnode>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dbnode>, nullable=False, default=CallableColumnDefault(<function get_new_uuid>)), Column('node_type', String(length=255), table=<db_dbnode>, nullable=False, default=ScalarElementColumnDefault('')), Column('process_type', String(length=255), table=<db_dbnode>), Column('label', String(length=255), table=<db_dbnode>, nullable=False, default=ScalarElementColumnDefault('')), Column('description', Text(), table=<db_dbnode>, nullable=False, default=ScalarElementColumnDefault('')), Column('ctime', TZDateTime(), table=<db_dbnode>, nullable=False, default=CallableColumnDefault(<function now>)), Column('mtime', TZDateTime(), table=<db_dbnode>, nullable=False, onupdate=CallableColumnDefault(<function now>), default=CallableColumnDefault(<function now>)), Column('attributes', JSON(), table=<db_dbnode>, default=CallableColumnDefault(<function dict>)), Column('extras', JSON(), table=<db_dbnode>, default=CallableColumnDefault(<function dict>)), Column('repository_metadata', JSON(), table=<db_dbnode>, nullable=False, default=CallableColumnDefault(<function dict>)), Column('dbcomputer_id', Integer(), ForeignKey('db_dbcomputer.id'), table=<db_dbnode>), Column('user_id', Integer(), ForeignKey('db_dbuser.id'), table=<db_dbnode>, nullable=False), schema=None)#
__tablename__ = 'db_dbnode'#
_sa_class_manager = {'attributes': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'ctime': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbcomputer': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbcomputer_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'description': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'extras': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'label': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'mtime': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'node_type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'process_type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'repository_metadata': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
attributes#
ctime#
dbcomputer#
dbcomputer_id#
description#
extras#
id#
label#
mtime#
node_type#
process_type#
repository_metadata#
user#
user_id#
uuid#
class aiida.storage.sqlite_zip.models.DbUser(**kwargs)#

Bases: SqliteModel

Database model to store data for aiida.orm.User.

Every node that is created has a single user as its author.

The user information consists of the most basic personal contact details.

__init__(**kwargs)#

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

__mapper__ = <Mapper at 0x7fc9f5dabf10; DbUser>#
__module__ = 'aiida.storage.sqlite_zip.models'#
__table__ = Table('db_dbuser', MetaData(), Column('id', Integer(), table=<db_dbuser>, primary_key=True, nullable=False), Column('email', String(length=254), table=<db_dbuser>, nullable=False), Column('first_name', String(length=254), table=<db_dbuser>, nullable=False, default=ScalarElementColumnDefault('')), Column('last_name', String(length=254), table=<db_dbuser>, nullable=False, default=ScalarElementColumnDefault('')), Column('institution', String(length=254), table=<db_dbuser>, nullable=False, default=ScalarElementColumnDefault('')), schema=None)#
__tablename__ = 'db_dbuser'#
_sa_class_manager = {'email': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'first_name': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'institution': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'last_name': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
email#
first_name#
id#
institution#
last_name#
class aiida.storage.sqlite_zip.models.SqliteModel[source]#

Bases: object

Represent a row in an SQLite database table

__annotations__ = {}#
__dict__ = mappingproxy({'__module__': 'aiida.storage.sqlite_zip.models', '__doc__': 'Represent a row in an sqlite database table', '__repr__': <function SqliteModel.__repr__>, '__dict__': <attribute '__dict__' of 'SqliteModel' objects>, '__weakref__': <attribute '__weakref__' of 'SqliteModel' objects>, '__annotations__': {}})#
__module__ = 'aiida.storage.sqlite_zip.models'#
__repr__() str[source]#

Return a representation of the row columns

__weakref__#

list of weak references to the object (if defined)

class aiida.storage.sqlite_zip.models.TZDateTime(*args: Any, **kwargs: Any)[source]#

Bases: TypeDecorator

A timezone-naive UTC DateTime implementation for SQLite.

see: https://docs.sqlalchemy.org/en/14/core/custom_types.html#store-timezone-aware-timestamps-as-timezone-naive-utc

__annotations__ = {}#
__module__ = 'aiida.storage.sqlite_zip.models'#
__parameters__ = ()#
cache_ok: bool | None = True#

Indicate if statements using this ExternalType are “safe to cache”.

The default value None will emit a warning and then not allow caching of a statement which includes this type. Set to False to disable statements using this type from being cached at all without a warning. When set to True, the object’s class and selected elements from its state will be used as part of the cache key. For example, using a TypeDecorator:

class MyType(TypeDecorator):
    impl = String

    cache_ok = True

    def __init__(self, choices):
        self.choices = tuple(choices)
        self.internal_only = True

The cache key for the above type would be equivalent to:

>>> MyType(["a", "b", "c"])._static_cache_key
(<class '__main__.MyType'>, ('choices', ('a', 'b', 'c')))

The caching scheme will extract attributes from the type that correspond to the names of parameters in the __init__() method. Above, the “choices” attribute becomes part of the cache key but “internal_only” does not, because there is no parameter named “internal_only”.

The requirements for cacheable elements is that they are hashable and also that they indicate the same SQL rendered for expressions using this type every time for a given cache value.

To accommodate for datatypes that refer to unhashable structures such as dictionaries, sets and lists, these objects can be made “cacheable” by assigning hashable structures to the attributes whose names correspond with the names of the arguments. For example, a datatype which accepts a dictionary of lookup values may publish this as a sorted series of tuples. Given a previously un-cacheable type as:

class LookupType(UserDefinedType):
    '''a custom type that accepts a dictionary as a parameter.

    this is the non-cacheable version, as "self.lookup" is not
    hashable.

    '''

    def __init__(self, lookup):
        self.lookup = lookup

    def get_col_spec(self, **kw):
        return "VARCHAR(255)"

    def bind_processor(self, dialect):
        # ...  works with "self.lookup" ...

Where “lookup” is a dictionary. The type will not be able to generate a cache key:

>>> type_ = LookupType({"a": 10, "b": 20})
>>> type_._static_cache_key
<stdin>:1: SAWarning: UserDefinedType LookupType({'a': 10, 'b': 20}) will not
produce a cache key because the ``cache_ok`` flag is not set to True.
Set this flag to True if this type object's state is safe to use
in a cache key, or False to disable this warning.
symbol('no_cache')

If we did set up such a cache key, it wouldn’t be usable. We would get a tuple structure that contains a dictionary inside of it, which cannot itself be used as a key in a “cache dictionary” such as SQLAlchemy’s statement cache, since Python dictionaries aren’t hashable:

>>> # set cache_ok = True
>>> type_.cache_ok = True

>>> # this is the cache key it would generate
>>> key = type_._static_cache_key
>>> key
(<class '__main__.LookupType'>, ('lookup', {'a': 10, 'b': 20}))

>>> # however this key is not hashable, will fail when used with
>>> # SQLAlchemy statement cache
>>> some_cache = {key: "some sql value"}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'

The type may be made cacheable by assigning a sorted tuple of tuples to the “.lookup” attribute:

class LookupType(UserDefinedType):
    '''a custom type that accepts a dictionary as a parameter.

    The dictionary is stored both as itself in a private variable,
    and published in a public variable as a sorted tuple of tuples,
    which is hashable and will also return the same value for any
    two equivalent dictionaries.  Note it assumes the keys and
    values of the dictionary are themselves hashable.

    '''

    cache_ok = True

    def __init__(self, lookup):
        self._lookup = lookup

        # assume keys/values of "lookup" are hashable; otherwise
        # they would also need to be converted in some way here
        self.lookup = tuple(
            (key, lookup[key]) for key in sorted(lookup)
        )

    def get_col_spec(self, **kw):
        return "VARCHAR(255)"

    def bind_processor(self, dialect):
        # ...  works with "self._lookup" ...

Where above, the cache key for LookupType({"a": 10, "b": 20}) will be:

>>> LookupType({"a": 10, "b": 20})._static_cache_key
(<class '__main__.LookupType'>, ('lookup', (('a', 10), ('b', 20))))

Added in version 1.4.14: - added the cache_ok flag to allow some configurability of caching for TypeDecorator classes.

Added in version 1.4.28: - added the ExternalType mixin which generalizes the cache_ok flag to both the TypeDecorator and UserDefinedType classes.

impl#

alias of DateTime

process_bind_param(value: datetime | None, dialect)[source]#

Process before writing to the database.

process_result_value(value: datetime | None, dialect)[source]#

Process when returning from the database.
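The two hooks above implement the naive-UTC convention linked in the class docstring: aware datetimes are normalized to naive UTC before writing, and UTC tzinfo is re-attached when reading back. A stdlib-only sketch of the conversions (the standalone function names here are illustrative, not the actual implementation):

```python
from datetime import datetime, timezone

def bind_param(value):
    """Before writing: convert an aware datetime to naive UTC (sketch of process_bind_param)."""
    if value is not None and value.tzinfo is not None:
        value = value.astimezone(timezone.utc).replace(tzinfo=None)
    return value

def result_value(value):
    """After reading: re-attach UTC tzinfo so callers get aware datetimes (sketch of process_result_value)."""
    if value is not None:
        value = value.replace(tzinfo=timezone.utc)
    return value

# An aware timestamp round-trips to the same instant.
aware = datetime(2024, 1, 1, 13, 30, tzinfo=timezone.utc)
stored = bind_param(aware)    # naive: no tzinfo, value expressed in UTC
loaded = result_value(stored) # aware UTC again
assert stored.tzinfo is None and loaded == aware
```

This avoids SQLite's lack of native timezone support while still comparing and sorting timestamps consistently.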

aiida.storage.sqlite_zip.models.create_orm_cls(klass: Model) SqliteModel[source]#

Create an ORM class from an existing table in the declarative meta

aiida.storage.sqlite_zip.models.get_model_from_entity(entity_type: EntityTypes) Tuple[Any, Set[str]][source]#

Return the SQLAlchemy model and column names corresponding to the given entity.

aiida.storage.sqlite_zip.models.pg_to_sqlite(pg_table: Table)[source]#

Convert a model intended for PostgreSQL to one compatible with SQLite

This module contains the AiiDA backend ORM classes for the SQLite backend.

It reuses the classes already defined in the psql_dos backend (for PostgreSQL), but redefines the SQLAlchemy models as SQLite-compatible ones.

class aiida.storage.sqlite_zip.orm.SqliteAuthInfo(backend, computer, user)[source]#

Bases: SqliteEntityOverride, SqlaAuthInfo

COMPUTER_CLASS#

alias of SqliteComputer

MODEL_CLASS#

alias of DbAuthInfo

USER_CLASS#

alias of SqliteUser

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
_abc_impl = <_abc._abc_data object>#
_model: ModelWrapper#
class aiida.storage.sqlite_zip.orm.SqliteAuthInfoCollection(backend: StorageBackend)[source]#

Bases: SqlaAuthInfoCollection

ENTITY_CLASS#

alias of SqliteAuthInfo

__annotations__ = {}#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
class aiida.storage.sqlite_zip.orm.SqliteComment(backend, node, user, content=None, ctime=None, mtime=None)[source]#

Bases: SqliteEntityOverride, SqlaComment

MODEL_CLASS#

alias of DbComment

USER_CLASS#

alias of SqliteUser

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
_abc_impl = <_abc._abc_data object>#
_model: ModelWrapper#
class aiida.storage.sqlite_zip.orm.SqliteCommentCollection(backend: StorageBackend)[source]#

Bases: SqlaCommentCollection

ENTITY_CLASS#

alias of SqliteComment

__annotations__ = {}#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
class aiida.storage.sqlite_zip.orm.SqliteComputer(backend, **kwargs)[source]#

Bases: SqliteEntityOverride, SqlaComputer

MODEL_CLASS#

alias of DbComputer

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
_abc_impl = <_abc._abc_data object>#
_model: ModelWrapper#
class aiida.storage.sqlite_zip.orm.SqliteComputerCollection(backend: StorageBackend)[source]#

Bases: SqlaComputerCollection

ENTITY_CLASS#

alias of SqliteComputer

__annotations__ = {}#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
class aiida.storage.sqlite_zip.orm.SqliteEntityOverride[source]#

Bases: object

Overrides type-checking of psql_dos Entity.

MODEL_CLASS: Any#
__annotations__ = {'MODEL_CLASS': typing.Any, '_model': <class 'aiida.storage.psql_dos.orm.utils.ModelWrapper'>}#
__dict__ = mappingproxy({'__module__': 'aiida.storage.sqlite_zip.orm', '__annotations__': {'MODEL_CLASS': typing.Any, '_model': <class 'aiida.storage.psql_dos.orm.utils.ModelWrapper'>}, '__doc__': 'Overrides type-checking of psql_dos ``Entity``.', '_class_check': <classmethod(<function SqliteEntityOverride._class_check>)>, 'from_dbmodel': <classmethod(<function SqliteEntityOverride.from_dbmodel>)>, 'store': <function SqliteEntityOverride.store>, '__dict__': <attribute '__dict__' of 'SqliteEntityOverride' objects>, '__weakref__': <attribute '__weakref__' of 'SqliteEntityOverride' objects>})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__weakref__#

list of weak references to the object (if defined)

classmethod _class_check()[source]#

Assert that the class is correctly configured

_model: ModelWrapper#
classmethod from_dbmodel(dbmodel, backend)[source]#

Create an AiiDA Entity from the corresponding SQLA ORM model and storage backend

Parameters:
  • dbmodel – the SQLAlchemy model to create the entity from

  • backend – the corresponding storage backend

Returns:

the AiiDA entity

store(*args, **kwargs)[source]#
class aiida.storage.sqlite_zip.orm.SqliteGroup(backend, label, user, description='', type_string='')[source]#

Bases: SqliteEntityOverride, SqlaGroup

MODEL_CLASS#

alias of DbGroup

USER_CLASS#

alias of SqliteUser

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
_abc_impl = <_abc._abc_data object>#
_model: ModelWrapper#
class aiida.storage.sqlite_zip.orm.SqliteGroupCollection(backend: StorageBackend)[source]#

Bases: SqlaGroupCollection

ENTITY_CLASS#

alias of SqliteGroup

__annotations__ = {}#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
class aiida.storage.sqlite_zip.orm.SqliteLog(backend, time, loggername, levelname, dbnode_id, message='', metadata=None)[source]#

Bases: SqliteEntityOverride, SqlaLog

MODEL_CLASS#

alias of DbLog

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
_abc_impl = <_abc._abc_data object>#
_model: ModelWrapper#
class aiida.storage.sqlite_zip.orm.SqliteLogCollection(backend: StorageBackend)[source]#

Bases: SqlaLogCollection

ENTITY_CLASS#

alias of SqliteLog

__annotations__ = {}#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
class aiida.storage.sqlite_zip.orm.SqliteNode(backend, node_type, user, computer=None, process_type=None, label='', description='', ctime=None, mtime=None)[source]#

Bases: SqliteEntityOverride, SqlaNode

SQLA Node backend entity

COMPUTER_CLASS#

alias of SqliteComputer

LINK_CLASS#

alias of DbLink

MODEL_CLASS#

alias of DbNode

USER_CLASS#

alias of SqliteUser

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
_abc_impl = <_abc._abc_data object>#
_model: ModelWrapper#
class aiida.storage.sqlite_zip.orm.SqliteNodeCollection(backend: StorageBackend)[source]#

Bases: SqlaNodeCollection

ENTITY_CLASS#

alias of SqliteNode

__annotations__ = {}#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
class aiida.storage.sqlite_zip.orm.SqliteQueryBuilder(backend)[source]#

Bases: SqlaQueryBuilder

QueryBuilder to use with the SQLAlchemy backend, adapted for SQLite.

property AuthInfo#
property Comment#
property Computer#
property Group#
property Log#
property Node#
property User#
__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
_abc_impl = <_abc._abc_data object>#
_data: QueryDictType#
static _get_projectable_entity(alias: AliasedClass, column_name: str, attrpath: List[str], cast: str | None = None) ColumnElement | InstrumentedAttribute[source]#

Return projectable entity for a given alias and column name.

_query_cache: BuiltQuery | None#
_query_hash: str | None#
static get_filter_expr_from_column(operator: str, value: Any, column) BinaryExpression[source]#

A method that returns a valid SQLAlchemy expression.

Parameters:
  • operator – The operator provided by the user ('==', '>', …)

  • value – The value to compare with, e.g. (5.0, 'foo', ['a', 'b'])

  • column – an instance of sqlalchemy.orm.attributes.InstrumentedAttribute or

static get_filter_expr_from_jsonb(operator: str, value, attr_key: List[str], column=None, column_name=None, alias=None)[source]#

Return a filter expression.

See: https://www.sqlite.org/json1.html
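On SQLite, attribute filters are compiled to JSON1 expressions rather than PostgreSQL's JSONB operators. A sketch of the kind of SQL this produces, using the stdlib sqlite3 module (requires SQLite built with the JSON1 extension, the default in modern builds; the table contents here are an invented stand-in for db_dbnode):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE db_dbnode (id INTEGER PRIMARY KEY, attributes JSON)")
conn.executemany(
    "INSERT INTO db_dbnode (attributes) VALUES (?)",
    [('{"energy": -1.5}',), ('{"energy": 2.0}',)],
)

# A filter like {'attributes.energy': {'<': 0}} compiles to roughly:
rows = conn.execute(
    "SELECT id FROM db_dbnode WHERE json_extract(attributes, '$.energy') < 0"
).fetchall()
print(rows)  # [(1,)]
```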

inner_to_outer_schema: Dict[str, Dict[str, str]]#
outer_to_inner_schema: Dict[str, Dict[str, str]]#
property table_groups_nodes#
class aiida.storage.sqlite_zip.orm.SqliteUser(backend, email, first_name, last_name, institution)[source]#

Bases: SqliteEntityOverride, SqlaUser

MODEL_CLASS#

alias of DbUser

__abstractmethods__ = frozenset({})#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
_abc_impl = <_abc._abc_data object>#
_model: ModelWrapper#
class aiida.storage.sqlite_zip.orm.SqliteUserCollection(backend: StorageBackend)[source]#

Bases: SqlaUserCollection

ENTITY_CLASS#

alias of SqliteUser

__annotations__ = {}#
__module__ = 'aiida.storage.sqlite_zip.orm'#
__parameters__ = ()#
aiida.storage.sqlite_zip.orm._(dbmodel, backend)[source]#
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel, backend)[source]#
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbUser, backend)
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbGroup, backend)
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbComputer, backend)
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbNode, backend)
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbAuthInfo, backend)
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbComment, backend)
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbLog, backend)
aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbLink, backend)

Utilities for this backend.

aiida.storage.sqlite_zip.utils.DB_FILENAME = 'db.sqlite3'#

The filename of the SQLite database.

aiida.storage.sqlite_zip.utils.META_FILENAME = 'metadata.json'#

The filename containing meta information about the storage instance.

aiida.storage.sqlite_zip.utils.REPO_FOLDER = 'repo'#

The name of the folder containing the repository files.

exception aiida.storage.sqlite_zip.utils.ReadOnlyError(msg='sqlite_zip storage is read-only')[source]#

Bases: AiidaException

Raised when a write operation is called on a read-only archive.

__annotations__ = {}#
__init__(msg='sqlite_zip storage is read-only')[source]#
__module__ = 'aiida.storage.sqlite_zip.utils'#
aiida.storage.sqlite_zip.utils.create_sqla_engine(path: str | Path, *, enforce_foreign_keys: bool = True, **kwargs) Engine[source]#

Create a new engine instance.

aiida.storage.sqlite_zip.utils.extract_metadata(path: str | Path, *, search_limit: int | None = 10) Dict[str, Any][source]#

Extract the metadata dictionary from the archive.

Parameters:

search_limit – the maximum number of records to search for the metadata file in a zip file.
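For illustration, the metadata lookup can be approximated with the stdlib alone. This sketch builds a toy archive matching the layout described at the top of this page and reads metadata.json back; the metadata content is invented, and the real extract_metadata additionally handles folders, tar files, legacy layouts, and the search_limit:

```python
import io
import json
import zipfile

# Build a toy archive: metadata.json at the top, then the database.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as zf:
    zf.writestr("metadata.json", json.dumps({"export_version": "main_0001"}))
    zf.writestr("db.sqlite3", b"")  # placeholder for the real database

def read_metadata(fileobj):
    """Load metadata.json from an archive file object (simplified sketch)."""
    with zipfile.ZipFile(fileobj) as zf:
        with zf.open("metadata.json") as handle:
            return json.load(handle)

metadata = read_metadata(buffer)
print(metadata["export_version"])  # main_0001
```

Placing metadata.json first in the zip is what allows the real implementation to find it quickly by scanning only the first few records.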

aiida.storage.sqlite_zip.utils.read_version(path: str | Path, *, search_limit: int | None = None) str[source]#

Read the version of the storage instance from the path.

This is intended to work for all versions of the storage format.

Parameters:
  • path – path to the storage instance, either a folder, zip file or tar file.

  • search_limit – the maximum number of records to search for the metadata file in a zip file.

Raises:

UnreachableStorage if a version cannot be read from the file

aiida.storage.sqlite_zip.utils.sqlite_case_sensitive_like(dbapi_connection, _)[source]#

Enforce case-sensitive LIKE operations (off by default).

See: https://www.sqlite.org/pragma.html#pragma_case_sensitive_like

aiida.storage.sqlite_zip.utils.sqlite_enforce_foreign_keys(dbapi_connection, _)[source]#

Enforce foreign key constraints when using the sqlite backend (off by default).

See: https://www.sqlite.org/pragma.html#pragma_foreign_keys
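Both pragmas above must be applied per connection. The effect of foreign-key enforcement can be demonstrated with the stdlib sqlite3 module alone (SQLAlchemy event wiring omitted; the table names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE child (id INTEGER PRIMARY KEY, "
    "parent_id INTEGER NOT NULL REFERENCES parent(id))"
)

# With the pragma on, inserting a row that references a missing parent fails.
try:
    conn.execute("INSERT INTO child (parent_id) VALUES (99)")
    enforced = False
except sqlite3.IntegrityError:
    enforced = True
print(enforced)  # True
```

Similarly, `PRAGMA case_sensitive_like = ON` makes LIKE comparisons case-sensitive, matching PostgreSQL's behaviour for the query builder.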