aiida.storage.sqlite_zip package#
Module with implementation of the storage backend, using an SQLite database and repository files, within a zipfile.
The content of the zip file is:
|- storage.zip
    |- metadata.json
    |- db.sqlite3
    |- repo/
        |- hashkey1
        |- hashkey2
        ...
For quick access, the metadata (such as the version) is stored in a metadata.json file at the “top” of the zip file, with the sqlite database just below it, followed by the repository files. Repository files are named by the SHA256 hash of their content.
This storage method is primarily intended for the AiiDA archive, as a read-only storage method. This is because sqlite and zip are not suitable for concurrent write access.
The archive format originally used a JSON file to store the database, and these revisions are handled by the version_profile and migrate backend methods.
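The layout described above can be sketched with the standard library alone. The following builds a toy file in this shape and reads the metadata back; the metadata content shown (the export_version key and its value) is purely illustrative, not the actual archive schema:

```python
import hashlib
import json
import sqlite3
import tempfile
import zipfile
from pathlib import Path

# Build a minimal file in the documented layout (illustration only;
# real archives are produced by AiiDA's export machinery).
workdir = Path(tempfile.mkdtemp())
archive = workdir / "storage.zip"

# An empty (but valid) SQLite database, created on disk then added to the zip.
db_path = workdir / "db.sqlite3"
sqlite3.connect(db_path).close()

content = b"some repository file"
key = hashlib.sha256(content).hexdigest()  # repository files are named by content hash

with zipfile.ZipFile(archive, "w") as zf:
    # metadata.json is written first, so it sits at the "top" of the zip
    zf.writestr("metadata.json", json.dumps({"export_version": "main_0001"}))
    zf.write(db_path, "db.sqlite3")
    zf.writestr(f"repo/{key}", content)

with zipfile.ZipFile(archive) as zf:
    metadata = json.loads(zf.read("metadata.json"))
    names = zf.namelist()
```

Writing the metadata first means a reader can pull it out quickly without scanning the rest of the file, which is what the "quick access" note above refers to.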
Subpackages#
- aiida.storage.sqlite_zip.migrations package
- Subpackages
- Submodules
run_migrations_online()
_convert_datetime()
_create_directory()
_create_repo_metadata()
_iter_entity_fields()
_json_to_sqlite()
perform_v1_migration()
copy_tar_to_zip()
copy_zip_to_zip()
update_metadata()
verify_metadata_version()
DbAuthInfo
DbComment
DbComputer
DbComputer.__init__()
DbComputer.__mapper__
DbComputer.__module__
DbComputer.__table__
DbComputer.__tablename__
DbComputer._metadata
DbComputer._sa_class_manager
DbComputer.description
DbComputer.hostname
DbComputer.id
DbComputer.label
DbComputer.scheduler_type
DbComputer.transport_type
DbComputer.uuid
DbGroup
DbGroupNodes
DbLink
DbLog
DbNode
DbNode.__init__()
DbNode.__mapper__
DbNode.__module__
DbNode.__table__
DbNode.__tablename__
DbNode._sa_class_manager
DbNode.attributes
DbNode.ctime
DbNode.dbcomputer_id
DbNode.description
DbNode.extras
DbNode.id
DbNode.label
DbNode.mtime
DbNode.node_type
DbNode.process_type
DbNode.repository_metadata
DbNode.user_id
DbNode.uuid
DbUser
Submodules#
The table models are dynamically generated from the sqlalchemy backend models.
- class aiida.storage.sqlite_zip.backend.FolderBackendRepository(path: str | Path)[source]#
Bases:
_RoBackendRepository
A read-only backend for a folder.
The folder should contain repository files, named by the sha256 hash of the file contents.
- __abstractmethods__ = frozenset({})#
- __module__ = 'aiida.storage.sqlite_zip.backend'#
- _abc_impl = <_abc._abc_data object>#
- has_object(key: str) bool [source]#
Return whether the repository has an object with the given key.
- Parameters:
key – fully qualified identifier for the object within the repository.
- Returns:
True if the object exists, False otherwise.
- list_objects() Iterable[str] [source]#
Return iterable that yields all available objects by key.
- Returns:
An iterable for all the available object keys.
- open(key: str) Iterator[BinaryIO] [source]#
Open a file handle to an object stored under the given key.
Note
this should only be used to open a handle to read an existing file. To write a new file use the method put_object_from_filelike instead.
- Parameters:
key – fully qualified identifier for the object within the repository.
- Returns:
yield a byte stream object.
- Raises:
FileNotFoundError – if the file does not exist.
OSError – if the file could not be opened.
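The interface above can be mirrored with a few lines of plain Python. The class name FolderRepoSketch below is hypothetical; it is a stand-in illustrating the documented behaviour (objects in a folder, keyed by the sha256 hash of their contents), not the actual implementation:

```python
import hashlib
import tempfile
from pathlib import Path
from typing import BinaryIO, Iterable


class FolderRepoSketch:
    """Hypothetical minimal stand-in for a read-only folder repository."""

    def __init__(self, path: Path) -> None:
        self._path = path

    def has_object(self, key: str) -> bool:
        # the key is simply the filename inside the folder
        return (self._path / key).is_file()

    def list_objects(self) -> Iterable[str]:
        return sorted(p.name for p in self._path.iterdir() if p.is_file())

    def open(self, key: str) -> BinaryIO:
        if not self.has_object(key):
            raise FileNotFoundError(key)
        return (self._path / key).open("rb")


folder = Path(tempfile.mkdtemp())
data = b"hello repository"
key = hashlib.sha256(data).hexdigest()  # content-addressed filename
(folder / key).write_bytes(data)

repo = FolderRepoSketch(folder)
assert repo.has_object(key)
with repo.open(key) as handle:
    assert handle.read() == data
```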
- class aiida.storage.sqlite_zip.backend.SqliteZipBackend(profile: Profile)[source]#
Bases:
StorageBackend
A read-only backend for a sqlite/zip format.
The storage format uses an SQLite database and repository files, within a folder or zipfile.
The content of the folder/zipfile should be:
|- metadata.json
|- db.sqlite3
|- repo/
    |- hashkey1
    |- hashkey2
    ...
- class Model(**data: Any)[source]#
Bases:
BaseModel
Model describing required information to configure an instance of the storage.
- __abstractmethods__ = frozenset({})#
- __annotations__ = {'__class_vars__': 'ClassVar[set[str]]', '__private_attributes__': 'ClassVar[Dict[str, ModelPrivateAttr]]', '__pydantic_complete__': 'ClassVar[bool]', '__pydantic_core_schema__': 'ClassVar[CoreSchema]', '__pydantic_custom_init__': 'ClassVar[bool]', '__pydantic_decorators__': 'ClassVar[_decorators.DecoratorInfos]', '__pydantic_extra__': 'dict[str, Any] | None', '__pydantic_fields_set__': 'set[str]', '__pydantic_generic_metadata__': 'ClassVar[_generics.PydanticGenericMetadata]', '__pydantic_parent_namespace__': 'ClassVar[Dict[str, Any] | None]', '__pydantic_post_init__': "ClassVar[None | Literal['model_post_init']]", '__pydantic_private__': 'dict[str, Any] | None', '__pydantic_root_model__': 'ClassVar[bool]', '__pydantic_serializer__': 'ClassVar[SchemaSerializer]', '__pydantic_validator__': 'ClassVar[SchemaValidator | PluggableSchemaValidator]', '__signature__': 'ClassVar[Signature]', 'filepath': 'str', 'model_computed_fields': 'ClassVar[Dict[str, ComputedFieldInfo]]', 'model_config': 'ClassVar[ConfigDict]', 'model_fields': 'ClassVar[Dict[str, FieldInfo]]'}#
- __dict__#
- __module__ = 'aiida.storage.sqlite_zip.backend'#
- __private_attributes__: ClassVar[Dict[str, ModelPrivateAttr]] = {}#
Metadata about the private attributes of the model.
- __pydantic_complete__: ClassVar[bool] = True#
Whether model building is completed, or if there are still undefined fields.
- __pydantic_core_schema__: ClassVar[CoreSchema] = {'cls': <class 'aiida.storage.sqlite_zip.backend.SqliteZipBackend.Model'>, 'config': {'title': 'Model'}, 'custom_init': False, 'metadata': {'pydantic_js_annotation_functions': [], 'pydantic_js_functions': [functools.partial(<function modify_model_json_schema>, cls=<class 'aiida.storage.sqlite_zip.backend.SqliteZipBackend.Model'>, title=None), <bound method BaseModel.__get_pydantic_json_schema__ of <class 'aiida.storage.sqlite_zip.backend.SqliteZipBackend.Model'>>]}, 'ref': 'aiida.storage.sqlite_zip.backend.SqliteZipBackend.Model:94480886650624', 'root_model': False, 'schema': {'computed_fields': [], 'fields': {'filepath': {'metadata': {'pydantic_js_annotation_functions': [<function get_json_schema_update_func.<locals>.json_schema_update_func>], 'pydantic_js_functions': []}, 'schema': {'function': {'function': <bound method SqliteZipBackend.Model.filepath_exists_and_is_absolute of <class 'aiida.storage.sqlite_zip.backend.SqliteZipBackend.Model'>>, 'type': 'no-info'}, 'schema': {'type': 'str'}, 'type': 'function-after'}, 'type': 'model-field'}}, 'model_name': 'Model', 'type': 'model-fields'}, 'type': 'model'}#
The core schema of the model.
- __pydantic_decorators__: ClassVar[_decorators.DecoratorInfos] = DecoratorInfos(validators={}, field_validators={'filepath_exists_and_is_absolute': Decorator(cls_ref='aiida.storage.sqlite_zip.backend.SqliteZipBackend.Model:94480886650624', cls_var_name='filepath_exists_and_is_absolute', func=<bound method SqliteZipBackend.Model.filepath_exists_and_is_absolute of <class 'aiida.storage.sqlite_zip.backend.SqliteZipBackend.Model'>>, shim=None, info=FieldValidatorDecoratorInfo(fields=('filepath',), mode='after', check_fields=None, json_schema_input_type=PydanticUndefined))}, root_validators={}, field_serializers={}, model_serializers={}, model_validators={}, computed_fields={})#
Metadata containing the decorators defined on the model. This replaces Model.__validators__ and Model.__root_validators__ from Pydantic V1.
- __pydantic_extra__: dict[str, Any] | None#
A dictionary containing extra values, if [extra][pydantic.config.ConfigDict.extra] is set to ‘allow’.
- __pydantic_generic_metadata__: ClassVar[_generics.PydanticGenericMetadata] = {'args': (), 'origin': None, 'parameters': ()}#
Metadata for generic models; contains data used for a similar purpose to __args__, __origin__, __parameters__ in typing-module generics. May eventually be replaced by these.
- __pydantic_parent_namespace__: ClassVar[Dict[str, Any] | None] = {'__doc__': 'A read-only backend for a sqlite/zip format.\n\n The storage format uses an SQLite database and repository files, within a folder or zipfile.\n\n The content of the folder/zipfile should be::\n\n |- metadata.json\n |- db.sqlite3\n |- repo/\n |- hashkey1\n |- hashkey2\n ...\n\n ', '__module__': 'aiida.storage.sqlite_zip.backend', '__qualname__': 'SqliteZipBackend', 'read_only': True, 'subject': <pydantic._internal._mock_val_ser.MockValSer object>}#
Parent namespace of the model, used for automatic rebuilding of models.
- __pydantic_post_init__: ClassVar[None | Literal['model_post_init']] = None#
The name of the post-init method for the model, if defined.
- __pydantic_private__: dict[str, Any] | None#
Values of private attributes set on the model instance.
- __pydantic_serializer__: ClassVar[SchemaSerializer] = SchemaSerializer(serializer=Model( ModelSerializer { class: Py( 0x000055ee0bafd700, ), serializer: Fields( GeneralFieldsSerializer { fields: { "filepath": SerField { key_py: Py( 0x00007fc247e0d0a0, ), alias: None, alias_py: None, serializer: Some( Str( StrSerializer, ), ), required: true, }, }, computed_fields: Some( ComputedFields( [], ), ), mode: SimpleDict, extra_serializer: None, filter: SchemaFilter { include: None, exclude: None, }, required_fields: 1, }, ), has_extra: false, root_model: false, name: "Model", }, ), definitions=[])#
The pydantic-core SchemaSerializer used to dump instances of the model.
- __pydantic_validator__: ClassVar[SchemaValidator | PluggableSchemaValidator] = SchemaValidator(title="Model", validator=Model( ModelValidator { revalidate: Never, validator: ModelFields( ModelFieldsValidator { fields: [ Field { name: "filepath", lookup_key: Simple { key: "filepath", py_key: Py( 0x00007fc20b867f70, ), path: LookupPath( [ S( "filepath", Py( 0x00007fc20b867eb0, ), ), ], ), }, name_py: Py( 0x00007fc247e0d0a0, ), validator: FunctionAfter( FunctionAfterValidator { validator: Str( StrValidator { strict: false, coerce_numbers_to_str: false, }, ), func: Py( 0x00007fc23cb70900, ), config: Py( 0x00007fc20b4f1d00, ), name: "function-after[filepath_exists_and_is_absolute(), str]", field_name: None, info_arg: false, }, ), frozen: false, }, ], model_name: "Model", extra_behavior: Ignore, extras_validator: None, strict: false, from_attributes: false, loc_by_alias: true, }, ), class: Py( 0x000055ee0bafd700, ), post_init: None, frozen: false, custom_init: false, root_model: false, undefined: Py( 0x00007fc24570ee20, ), name: "Model", }, ), definitions=[], cache_strings=True)#
The pydantic-core SchemaValidator used to validate instances of the model.
- __weakref__#
list of weak references to the object
- _abc_impl = <_abc._abc_data object>#
- classmethod filepath_exists_and_is_absolute(value: str) str [source]#
Validate the filepath exists and return the resolved and absolute filepath.
- model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {'defer_build': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[Dict[str, FieldInfo]] = {'filepath': FieldInfo(annotation=str, required=True, title='Filepath of the archive', description='Filepath of the archive.')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.
This replaces Model.__fields__ from Pydantic V1.
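The filepath_exists_and_is_absolute validator listed above can be sketched in plain Python. This is the logic implied by its name and docstring (resolve to an absolute path, reject a non-existent one), assumed rather than copied from the implementation:

```python
import tempfile
from pathlib import Path

# Assumed validator logic: resolve the value to an absolute path and
# reject it if nothing exists there.
def filepath_exists_and_is_absolute(value: str) -> str:
    filepath = Path(value).resolve()
    if not filepath.exists():
        raise ValueError(f"the filepath `{value}` does not exist")
    return str(filepath)


with tempfile.NamedTemporaryFile(suffix=".zip") as handle:
    resolved = filepath_exists_and_is_absolute(handle.name)
    assert Path(resolved).is_absolute()
```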
- __abstractmethods__ = frozenset({})#
- __init__(profile: Profile)[source]#
Initialize the backend, for this profile.
- Raises:
~aiida.common.exceptions.UnreachableStorage if the storage cannot be accessed
- Raises:
~aiida.common.exceptions.IncompatibleStorageSchema if the profile’s storage schema is not at the latest version (and thus should be migrated)
- Raises:
~aiida.common.exceptions.CorruptStorage if the storage is internally inconsistent
- __module__ = 'aiida.storage.sqlite_zip.backend'#
- _abc_impl = <_abc._abc_data object>#
- _clear() None [source]#
Clear the storage, removing all data.
Warning
This is a destructive operation, and should only be used for testing purposes.
- property authinfos#
Return the collection of authorisation information objects
- bulk_insert(entity_type: EntityTypes, rows: list[dict], allow_defaults: bool = False) list[int] [source]#
Insert a list of entities into the database, directly into a backend transaction.
- Parameters:
entity_type – The type of the entity
rows – A list of dictionaries, containing all fields of the backend model, except the id field (a.k.a. primary key), which will be generated dynamically
allow_defaults – If False, assert that each row contains all fields (except primary key(s)); otherwise, allow default values for missing fields.
- Raises:
IntegrityError
if the keys in a row are not a subset of the columns in the table
- Returns:
The list of generated primary keys for the entities
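The contract above (insert many rows in one transaction, get back the generated primary keys) can be sketched with plain sqlite3 rather than the backend's SQLAlchemy machinery; the table and rows are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE db_dbuser (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")

rows = [{"email": "alice@example.com"}, {"email": "bob@example.com"}]

pks = []
with conn:  # one transaction for the whole batch
    for row in rows:
        # the primary key is omitted from the row and generated by the database
        cursor = conn.execute("INSERT INTO db_dbuser (email) VALUES (:email)", row)
        pks.append(cursor.lastrowid)

print(pks)  # generated primary keys, in insertion order
```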
- bulk_update(entity_type: EntityTypes, rows: list[dict]) None [source]#
Update a list of entities in the database, directly with a backend transaction.
- Parameters:
entity_type – The type of the entity
rows – A list of dictionaries, containing fields of the backend model to update, and the id field (a.k.a. primary key)
- Raises:
IntegrityError
if the keys in a row are not a subset of the columns in the table
- property comments#
Return the collection of comments
- property computers#
Return the collection of computers
- static create_profile(filepath: str | Path, options: dict | None = None) Profile [source]#
Create a new profile instance for this backend, from the path to the zip file.
- delete_nodes_and_connections(pks_to_delete: Sequence[int])[source]#
Delete all nodes corresponding to pks in the input and any links to/from them.
This method is intended to be used within a transaction context.
- Parameters:
pks_to_delete – a sequence of node pks to delete
- Raises:
AssertionError
if a transaction is not active
- get_backend_entity(model)[source]#
Return the backend entity that corresponds to the given Model instance.
- get_global_variable(key: str)[source]#
Return a global variable from the storage.
- Parameters:
key – the key of the setting
- Raises:
KeyError if the setting does not exist
- get_info(detailed: bool = False) dict [source]#
Return general information on the storage.
- Parameters:
detailed – flag to request more detailed information about the content of the storage.
- Returns:
a nested dict with the relevant information.
- get_repository() _RoBackendRepository [source]#
Return the object repository configured for this backend.
- property groups#
Return the collection of groups
- classmethod initialise(profile: Profile, reset: bool = False) bool [source]#
Initialise an instance of the SqliteZipBackend storage backend.
- Parameters:
reset – If true, destroy the backend if it already exists, including all of its data, before recreating and initialising it. This is useful, for example, for test profiles that need to be reset before or after tests have run.
- Returns:
True if the storage was initialised by the function call, False if it was already initialised.
- property logs#
Return the collection of logs
- maintain(dry_run: bool = False, live: bool = True, **kwargs) None [source]#
Perform maintenance tasks on the storage.
If full == True, then this method may attempt to block the profile associated with the storage to guarantee the safety of its procedures. This will not only prevent any other subsequent process from accessing that profile, but will also first check if there is already any process using it and raise if that is the case. The user will have to manually stop any process that is currently accessing the profile, or wait for it to finish on its own.
- Parameters:
full – flag to perform operations that require to stop using the profile to be maintained.
dry_run – flag to only print the actions that would be taken without actually executing them.
- classmethod migrate(profile: Profile)[source]#
Migrate the storage of a profile to the latest schema version.
If the schema version is already the latest version, this method does nothing. If the storage is uninitialised, this method will raise an exception.
- Raises:
~aiida.common.exceptions.UnreachableStorage if the storage cannot be accessed.
- Raises:
StorageMigrationError
if the storage is not initialised.
- property nodes#
Return the collection of nodes
- query() SqliteQueryBuilder [source]#
Return an instance of a query builder implementation for this backend
- read_only = True#
This plugin is read only and data cannot be created or mutated.
- set_global_variable(key: str, value, description: str | None = None, overwrite=True) None [source]#
Set a global variable in the storage.
- Parameters:
key – the key of the setting
value – the value of the setting
description – the description of the setting (optional)
overwrite – if True, overwrite the setting if it already exists
- Raises:
ValueError if the key already exists and overwrite is False
- transaction()[source]#
Get a context manager that can be used as a transaction context for a series of backend operations. If there is an exception within the context then the changes will be rolled back and the state will be as before entering. Transactions can be nested.
- Returns:
a context manager to group database operations
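The rollback-on-exception semantics described above can be sketched with sqlite3 and a small context manager; this illustrates the behaviour, not the backend's actual transaction implementation (which also supports nesting):

```python
import sqlite3
from contextlib import contextmanager

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)")


@contextmanager
def transaction(connection):
    """Commit on clean exit, roll back if an exception escapes the context."""
    try:
        yield connection
        connection.commit()
    except Exception:
        connection.rollback()
        raise


with transaction(conn) as txn:
    txn.execute("INSERT INTO settings VALUES ('a', '1')")

try:
    with transaction(conn) as txn:
        txn.execute("INSERT INTO settings VALUES ('b', '2')")
        raise RuntimeError("something went wrong")  # triggers the rollback
except RuntimeError:
    pass

keys = [row[0] for row in conn.execute("SELECT key FROM settings")]
print(keys)  # only the committed row survives: ['a']
```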
- property users#
Return the collection of users
- class aiida.storage.sqlite_zip.backend.ZipfileBackendRepository(path: str | Path)[source]#
Bases:
_RoBackendRepository
A read-only backend for a zip file.
The zip file should contain repository files with the key format: repo/<sha256 hash>, i.e. files named by the sha256 hash of the file contents, inside a repo directory.
- __abstractmethods__ = frozenset({})#
- __init__(path: str | Path)[source]#
Initialise the repository backend.
- Parameters:
path – the path to the zip file
- __module__ = 'aiida.storage.sqlite_zip.backend'#
- _abc_impl = <_abc._abc_data object>#
- has_object(key: str) bool [source]#
Return whether the repository has an object with the given key.
- Parameters:
key – fully qualified identifier for the object within the repository.
- Returns:
True if the object exists, False otherwise.
- list_objects() Iterable[str] [source]#
Return iterable that yields all available objects by key.
- Returns:
An iterable for all the available object keys.
- open(key: str) Iterator[BinaryIO] [source]#
Open a file handle to an object stored under the given key.
Note
this should only be used to open a handle to read an existing file. To write a new file use the method put_object_from_filelike instead.
- Parameters:
key – fully qualified identifier for the object within the repository.
- Returns:
yield a byte stream object.
- Raises:
FileNotFoundError – if the file does not exist.
OSError – if the file could not be opened.
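The repo/<sha256 hash> key scheme described above amounts to stripping a prefix from the zip's member names. A small self-contained sketch:

```python
import hashlib
import tempfile
import zipfile
from pathlib import Path

archive = Path(tempfile.mkdtemp()) / "archive.zip"
payload = b"zip repository content"
key = hashlib.sha256(payload).hexdigest()

with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("metadata.json", "{}")
    zf.writestr(f"repo/{key}", payload)  # object stored under repo/<sha256>

with zipfile.ZipFile(archive) as zf:
    # listing the available keys = member names under "repo/", prefix removed
    keys = [name[len("repo/"):] for name in zf.namelist() if name.startswith("repo/")]
    data = zf.read(f"repo/{keys[0]}")
```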
- class aiida.storage.sqlite_zip.backend._RoBackendRepository(path: str | Path)[source]#
Bases:
AbstractRepositoryBackend
A backend abstract for a read-only folder or zip file.
- __abstractmethods__ = frozenset({'list_objects'})#
- __init__(path: str | Path)[source]#
Initialise the repository backend.
- Parameters:
path – the path to the zip file
- __module__ = 'aiida.storage.sqlite_zip.backend'#
- _abc_impl = <_abc._abc_data object>#
- delete_objects(keys: list[str]) None [source]#
Delete the objects from the repository.
- Parameters:
keys – list of fully qualified identifiers for the objects within the repository.
- Raises:
FileNotFoundError – if any of the files does not exist.
OSError – if any of the files could not be deleted.
- erase() None [source]#
Delete the repository itself and all its contents.
Note
This should not merely delete the contents of the repository but any resources it created. For example, if the repository is essentially a folder on disk, the folder itself should also be deleted, not just its contents.
- get_info(detailed: bool = False, **kwargs) dict [source]#
Returns relevant information about the content of the repository.
- Parameters:
detailed – flag to enable extra information (detailed=False by default, only returns basic information).
- Returns:
a dictionary with the information.
- get_object_hash(key: str) str [source]#
Return the SHA-256 hash of an object stored under the given key.
Important
A SHA-256 hash should always be returned, to ensure consistency across different repository implementations.
- Parameters:
key – fully qualified identifier for the object within the repository.
- Raises:
FileNotFoundError – if the file does not exist.
OSError – if the file could not be opened.
- has_objects(keys: list[str]) list[bool] [source]#
Return whether the repository has an object with the given key.
- Parameters:
keys – list of fully qualified identifiers for objects within the repository.
- Returns:
list of booleans, in the same order as the keys provided, with value True if the respective object exists and False otherwise.
- initialise(**kwargs) None [source]#
Initialise the repository if it hasn’t already been initialised.
- Parameters:
kwargs – parameters for the initialisation.
- iter_object_streams(keys: list[str]) Iterator[Tuple[str, BinaryIO]] [source]#
Return an iterator over the (read-only) byte streams of objects identified by key.
Note
handles should only be read within the context of this iterator.
- Parameters:
keys – fully qualified identifiers for the objects within the repository.
- Returns:
an iterator over the object byte streams.
- Raises:
FileNotFoundError – if the file does not exist.
OSError – if a file could not be opened.
- property key_format: str | None#
Return the format for the keys of the repository.
Important for when migrating between backends (e.g. archive -> main), as if they are not equal then it is necessary to re-compute all the Node.base.repository.metadata before importing (otherwise they will not match with the repository).
- maintain(dry_run: bool = False, live: bool = True, **kwargs) None [source]#
Performs maintenance operations.
- Parameters:
dry_run – flag to only print the actions that would be taken without actually executing them.
live – flag to indicate to the backend whether AiiDA is live or not (i.e. if the profile of the backend is currently being used/accessed). The backend is expected then to only allow (and thus set by default) the operations that are safe to perform in this state.
Versioning and migration implementation for the sqlite_zip format.
- aiida.storage.sqlite_zip.migrator._alembic_config() Config [source]#
Return an instance of an Alembic Config.
- aiida.storage.sqlite_zip.migrator._alembic_connect(db_path: Path, enforce_foreign_keys=True) Iterator[Config] [source]#
Context manager to return an instance of an Alembic configuration.
The profile’s database connection is added to the attributes property, through which it can then be retrieved, including in the env.py file, which is run when the database is migrated.
- aiida.storage.sqlite_zip.migrator._alembic_script() ScriptDirectory [source]#
Return an instance of an Alembic ScriptDirectory.
- aiida.storage.sqlite_zip.migrator._migration_context(db_path: Path) Iterator[MigrationContext] [source]#
Context manager to return an instance of an Alembic migration context.
This migration context will have been configured with the current database connection, which allows this context to be used to inspect the contents of the database, such as the current revision.
- aiida.storage.sqlite_zip.migrator._perform_legacy_migrations(current_version: str, to_version: str, metadata: dict, data: dict) str [source]#
Perform legacy migrations from the current version to the desired version.
Legacy archives use the old data.json format for storing the database. These migrations simply manipulate the metadata and data in-place.
- Parameters:
current_version – current version of the archive
to_version – version to migrate to
metadata – the metadata to migrate
data – the data to migrate
- Returns:
the new version of the archive
- aiida.storage.sqlite_zip.migrator._read_json(inpath: Path, filename: str, is_tar: bool) Dict[str, Any] [source]#
Read a JSON file from the archive.
- aiida.storage.sqlite_zip.migrator.get_schema_version_head() str [source]#
Return the head schema version for this storage, i.e. the latest schema this storage can be migrated to.
- aiida.storage.sqlite_zip.migrator.list_versions() List[str] [source]#
Return all available schema versions (oldest to latest).
- aiida.storage.sqlite_zip.migrator.migrate(inpath: str | Path, outpath: str | Path, version: str, *, force: bool = False, compression: int = 6) None [source]#
Migrate an sqlite_zip storage file to a specific version.
Historically, this format could be a zip or a tar file, which contained the database in a bespoke JSON format and the repository files in the “legacy” per-node format. For these versions, we first migrate the JSON database to the final legacy schema, then convert this file to the SQLite database, whilst sequentially migrating the repository files.
Once any legacy migrations have been performed, we can then migrate the SQLite database to the final schema, using alembic.
Note that, to minimise disk space usage, we never fully extract/uncompress the input file (except when migrating from a legacy tar file, in which case we cannot extract individual files):
- The sqlite database is extracted to a temporary location and migrated
- A new zip file is opened, within a temporary folder
- The repository files are “streamed” directly between the input file and the new zip file
- The sqlite database and metadata JSON are written to the new zip file
- The new zip file is closed (which writes its final central directory)
- The new zip file is moved to the output location, removing any existing file if force=True
- Parameters:
inpath – Path to the input file
outpath – Path to output the migrated file
version – Target version
force – If True, overwrite the output file if it exists
compression – Compression level for the output file
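The “streaming” step listed above (copying repository files between the input and output zips without extracting everything) can be sketched with zipfile; the real migrator also handles metadata placement, tar inputs, and compression options:

```python
import shutil
import tempfile
import zipfile
from pathlib import Path

workdir = Path(tempfile.mkdtemp())
inpath = workdir / "old.zip"
outpath = workdir / "new.zip"

# fabricate a small input file in the expected shape (contents illustrative)
with zipfile.ZipFile(inpath, "w") as zf:
    zf.writestr("metadata.json", '{"version": "old"}')
    zf.writestr("repo/abc123", b"repository file")

with zipfile.ZipFile(inpath) as zin, zipfile.ZipFile(outpath, "w") as zout:
    for name in zin.namelist():
        if name.startswith("repo/"):
            # stream member-to-member, handle to handle, no extraction to disk
            with zin.open(name) as src, zout.open(name, "w") as dst:
                shutil.copyfileobj(src, dst)
    # the (migrated) metadata is written separately; last, in this sketch
    zout.writestr("metadata.json", '{"version": "new"}')

with zipfile.ZipFile(outpath) as zf:
    names = sorted(zf.namelist())
    data = zf.read("repo/abc123")
```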
- aiida.storage.sqlite_zip.migrator.validate_storage(inpath: Path) None [source]#
Validate that the storage is at the head version.
- Raises:
aiida.common.exceptions.UnreachableStorage if the file does not exist
- Raises:
aiida.common.exceptions.CorruptStorage if the version cannot be read from the storage.
- Raises:
aiida.common.exceptions.IncompatibleStorageSchema if the storage is not compatible with the code API.
This module contains the SQLAlchemy models for the SQLite backend.
These models are intended to be identical to those of the psql_dos backend, except for changes to the database specific types:
UUID -> CHAR(32)
DateTime -> TZDateTime
JSONB -> JSON
Also, varchar_pattern_ops indexes are not possible in sqlite.
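The DateTime -> TZDateTime substitution above can be illustrated with plain sqlite3; the behaviour assumed here (normalise timezone-aware datetimes to UTC before storing as text, re-attach UTC on read) is a sketch of the idea, not the actual type implementation:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# SQLite has no native timezone-aware datetime type, so values are
# normalised to UTC and stored as ISO-8601 text (assumed behaviour).
def to_db(value: datetime) -> str:
    return value.astimezone(timezone.utc).isoformat()

def from_db(value: str) -> datetime:
    return datetime.fromisoformat(value)


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE db_dbnode (id INTEGER PRIMARY KEY, ctime TEXT)")

local = datetime(2024, 1, 1, 12, 0, tzinfo=timezone(timedelta(hours=2)))
conn.execute("INSERT INTO db_dbnode (ctime) VALUES (?)", (to_db(local),))

stored = conn.execute("SELECT ctime FROM db_dbnode").fetchone()[0]
restored = from_db(stored)
assert restored == local                      # same instant in time
assert restored.utcoffset() == timedelta(0)   # but normalised to UTC
```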
- class aiida.storage.sqlite_zip.models.DbAuthInfo(**kwargs)#
Bases:
SqliteModel
Database model to store data for aiida.orm.AuthInfo, and keep computer authentication data, per user.
These are user-specific specifications of how to submit jobs to the computer. The model also has an enabled logical switch that indicates whether the device is available for use or not. This last one can be set and unset by the user.
- __init__(**kwargs)#
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs.
Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- __mapper__ = <Mapper at 0x7fc23cadd0d0; DbAuthInfo>#
- __module__ = 'aiida.storage.sqlite_zip.models'#
- __table__ = Table('db_dbauthinfo', MetaData(), Column('id', Integer(), table=<db_dbauthinfo>, primary_key=True, nullable=False), Column('aiidauser_id', Integer(), ForeignKey('db_dbuser.id'), table=<db_dbauthinfo>, nullable=False), Column('dbcomputer_id', Integer(), ForeignKey('db_dbcomputer.id'), table=<db_dbauthinfo>, nullable=False), Column('metadata', JSON(), table=<db_dbauthinfo>, nullable=False, default=CallableColumnDefault(<function dict>)), Column('auth_params', JSON(), table=<db_dbauthinfo>, nullable=False, default=CallableColumnDefault(<function dict>)), Column('enabled', Boolean(), table=<db_dbauthinfo>, nullable=False, default=ScalarElementColumnDefault(True)), schema=None)#
- __tablename__ = 'db_dbauthinfo'#
- _metadata#
- _sa_class_manager = {'_metadata': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'aiidauser': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'aiidauser_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'auth_params': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbcomputer': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbcomputer_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'enabled': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
- aiidauser#
- aiidauser_id#
- auth_params#
- dbcomputer#
- dbcomputer_id#
- enabled#
- id#
- class aiida.storage.sqlite_zip.models.DbComment(**kwargs)#
Bases:
SqliteModel
Database model to store data for aiida.orm.Comment.
Comments can be attached to nodes by users.
- __init__(**kwargs)#
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs.
Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- __mapper__ = <Mapper at 0x7fc23cb4b3d0; DbComment>#
- __module__ = 'aiida.storage.sqlite_zip.models'#
- __table__ = Table('db_dbcomment', MetaData(), Column('id', Integer(), table=<db_dbcomment>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dbcomment>, nullable=False, default=CallableColumnDefault(<function get_new_uuid>)), Column('dbnode_id', Integer(), ForeignKey('db_dbnode.id'), table=<db_dbcomment>, nullable=False), Column('ctime', TZDateTime(), table=<db_dbcomment>, nullable=False, default=CallableColumnDefault(<function now>)), Column('mtime', TZDateTime(), table=<db_dbcomment>, nullable=False, onupdate=CallableColumnDefault(<function now>), default=CallableColumnDefault(<function now>)), Column('user_id', Integer(), ForeignKey('db_dbuser.id'), table=<db_dbcomment>, nullable=False), Column('content', Text(), table=<db_dbcomment>, nullable=False, default=ScalarElementColumnDefault('')), schema=None)#
- __tablename__ = 'db_dbcomment'#
- _sa_class_manager = {'content': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'ctime': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbnode': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbnode_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'mtime': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
- content#
- ctime#
- dbnode#
- dbnode_id#
- id#
- mtime#
- user#
- user_id#
- uuid#
- class aiida.storage.sqlite_zip.models.DbComputer(**kwargs)#
Bases:
SqliteModel
Database model to store data for aiida.orm.Computer.
Computers represent (and contain the information of) the physical hardware resources available. Nodes can be associated with computers if they are remote codes, remote folders, or processes that had run remotely.
Computers are identified within AiiDA by their label (which must therefore be unique for each one in the database), whereas the hostname is the label that identifies the computer within the network from which one can access it.
The scheduler_type column contains the information of the scheduler (and plugin) that the computer uses to manage jobs, whereas the transport_type contains the information of the transport (and plugin) required to copy files and communicate to and from the computer. The metadata contains some general settings for these communication and management protocols.
- __init__(**kwargs)#
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in
kwargs.
Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- __mapper__ = <Mapper at 0x7fc23cadf010; DbComputer>#
- __module__ = 'aiida.storage.sqlite_zip.models'#
- __table__ = Table('db_dbcomputer', MetaData(), Column('id', Integer(), table=<db_dbcomputer>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dbcomputer>, nullable=False, default=CallableColumnDefault(<function get_new_uuid>)), Column('label', String(length=255), table=<db_dbcomputer>, nullable=False), Column('hostname', String(length=255), table=<db_dbcomputer>, nullable=False, default=ScalarElementColumnDefault('')), Column('description', Text(), table=<db_dbcomputer>, nullable=False, default=ScalarElementColumnDefault('')), Column('scheduler_type', String(length=255), table=<db_dbcomputer>, nullable=False, default=ScalarElementColumnDefault('')), Column('transport_type', String(length=255), table=<db_dbcomputer>, nullable=False, default=ScalarElementColumnDefault('')), Column('metadata', JSON(), table=<db_dbcomputer>, nullable=False, default=CallableColumnDefault(<function dict>)), schema=None)#
- __tablename__ = 'db_dbcomputer'#
- _metadata#
- _sa_class_manager = {'_metadata': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'description': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'hostname': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'label': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'scheduler_type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'transport_type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
- description#
- hostname#
- id#
- label#
- scheduler_type#
- transport_type#
- uuid#
- class aiida.storage.sqlite_zip.models.DbGroup(**kwargs)#
Bases:
SqliteModel
Database model to store
aiida.orm.Group
data. A group may contain many different nodes, and each node can in turn be included in many different groups.
Users will typically identify and handle groups by using their
label
(which, unlike the
labels
in other models, must be unique). Groups also have a
type
, which serves to identify which plugin is being instantiated, and an
extras
property for users to set any relevant information.
- __init__(**kwargs)#
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in
kwargs.
Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- __mapper__ = <Mapper at 0x7fc23cb48510; DbGroup>#
- __module__ = 'aiida.storage.sqlite_zip.models'#
- __table__ = Table('db_dbgroup', MetaData(), Column('id', Integer(), table=<db_dbgroup>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dbgroup>, nullable=False, default=CallableColumnDefault(<function get_new_uuid>)), Column('label', String(length=255), table=<db_dbgroup>, nullable=False), Column('type_string', String(length=255), table=<db_dbgroup>, nullable=False, default=ScalarElementColumnDefault('')), Column('time', TZDateTime(), table=<db_dbgroup>, nullable=False, default=CallableColumnDefault(<function now>)), Column('description', Text(), table=<db_dbgroup>, nullable=False, default=ScalarElementColumnDefault('')), Column('extras', JSON(), table=<db_dbgroup>, nullable=False, default=CallableColumnDefault(<function dict>)), Column('user_id', Integer(), ForeignKey('db_dbuser.id'), table=<db_dbgroup>, nullable=False), schema=None)#
- __tablename__ = 'db_dbgroup'#
- _sa_class_manager = {'dbnodes': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'description': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'extras': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'label': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'time': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'type_string': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
- dbnodes#
- description#
- extras#
- id#
- label#
- time#
- type_string#
- user#
- user_id#
- uuid#
- aiida.storage.sqlite_zip.models.DbGroupNodes#
alias of
DbGroupNode
- class aiida.storage.sqlite_zip.models.DbLink(**kwargs)#
Bases:
SqliteModel
Database model to store links between
aiida.orm.Node.
Each entry in this table contains not only the
id
information of the two nodes that are linked, but also some extra properties of the link itself. These include the
type
of the link (see the Concepts section for all possible types) as well as a
label
, which is more specific and typically determined by the procedure generating the process node that links the data nodes.
- __init__(**kwargs)#
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in
kwargs.
Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- __mapper__ = <Mapper at 0x7fc23cb5d690; DbLink>#
- __module__ = 'aiida.storage.sqlite_zip.models'#
- __table__ = Table('db_dblink', MetaData(), Column('id', Integer(), table=<db_dblink>, primary_key=True, nullable=False), Column('input_id', Integer(), ForeignKey('db_dbnode.id'), table=<db_dblink>, nullable=False), Column('output_id', Integer(), ForeignKey('db_dbnode.id'), table=<db_dblink>, nullable=False), Column('label', String(length=255), table=<db_dblink>, nullable=False), Column('type', String(length=255), table=<db_dblink>, nullable=False), schema=None)#
- __tablename__ = 'db_dblink'#
- _sa_class_manager = {'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'input_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'label': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'output_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
- id#
- input_id#
- label#
- output_id#
- type#
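Since the archive database is plain SQLite, the link structure can be explored with the standard library alone. A minimal sketch using the schema above (the node ids and the 'input_calc'/'create' link type strings are illustrative values, not data from a real archive):

```python
import sqlite3

# Stand-in for the archive's db_dblink table, using the schema above.
conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE db_dblink ('
    'id INTEGER PRIMARY KEY, '
    'input_id INTEGER NOT NULL, '
    'output_id INTEGER NOT NULL, '
    'label TEXT NOT NULL, '
    'type TEXT NOT NULL)'
)
# A calculation node (id 2) consuming one input node (id 1)
# and creating one output node (id 3).
conn.executemany(
    'INSERT INTO db_dblink (input_id, output_id, label, type) '
    'VALUES (?, ?, ?, ?)',
    [(1, 2, 'x', 'input_calc'), (2, 3, 'result', 'create')],
)
# All outgoing links of node 2:
outgoing = conn.execute(
    'SELECT output_id, label, type FROM db_dblink WHERE input_id = ?',
    (2,),
).fetchall()
# outgoing == [(3, 'result', 'create')]
```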
- class aiida.storage.sqlite_zip.models.DbLog(**kwargs)#
Bases:
SqliteModel
Database model to store data for
aiida.orm.Log
, corresponding to
aiida.orm.ProcessNode.
- __init__(**kwargs)#
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in
kwargs.
Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- __mapper__ = <Mapper at 0x7fc23cb5c490; DbLog>#
- __module__ = 'aiida.storage.sqlite_zip.models'#
- __table__ = Table('db_dblog', MetaData(), Column('id', Integer(), table=<db_dblog>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dblog>, nullable=False, default=CallableColumnDefault(<function get_new_uuid>)), Column('time', TZDateTime(), table=<db_dblog>, nullable=False, default=CallableColumnDefault(<function now>)), Column('loggername', String(length=255), table=<db_dblog>, nullable=False), Column('levelname', String(length=50), table=<db_dblog>, nullable=False), Column('dbnode_id', Integer(), ForeignKey('db_dbnode.id'), table=<db_dblog>, nullable=False), Column('message', Text(), table=<db_dblog>, nullable=False, default=ScalarElementColumnDefault('')), Column('metadata', JSON(), table=<db_dblog>, nullable=False, default=CallableColumnDefault(<function dict>)), schema=None)#
- __tablename__ = 'db_dblog'#
- _metadata#
- _sa_class_manager = {'_metadata': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbnode': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbnode_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'levelname': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'loggername': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'message': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'time': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
- dbnode#
- dbnode_id#
- id#
- levelname#
How critical the message is
- loggername#
What process recorded the message
- message#
- time#
- uuid#
- class aiida.storage.sqlite_zip.models.DbNode(**kwargs)#
Bases:
SqliteModel
Database model to store data for
aiida.orm.Node.
Each node can be categorized according to its
node_type
, which indicates what kind of data or process node it is. Additionally, process nodes also have a
process_type
that further indicates which specific plugin it uses. Nodes can also store two kinds of properties:
attributes
are determined by the
node_type
, are set before storing the node, and cannot be modified afterwards;
extras
, on the other hand, can be added and removed after the node has been stored and are usually set by the user.
- __init__(**kwargs)#
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in
kwargs.
Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- __mapper__ = <Mapper at 0x7fc23cb49290; DbNode>#
- __module__ = 'aiida.storage.sqlite_zip.models'#
- __table__ = Table('db_dbnode', MetaData(), Column('id', Integer(), table=<db_dbnode>, primary_key=True, nullable=False), Column('uuid', String(length=32), table=<db_dbnode>, nullable=False, default=CallableColumnDefault(<function get_new_uuid>)), Column('node_type', String(length=255), table=<db_dbnode>, nullable=False, default=ScalarElementColumnDefault('')), Column('process_type', String(length=255), table=<db_dbnode>), Column('label', String(length=255), table=<db_dbnode>, nullable=False, default=ScalarElementColumnDefault('')), Column('description', Text(), table=<db_dbnode>, nullable=False, default=ScalarElementColumnDefault('')), Column('ctime', TZDateTime(), table=<db_dbnode>, nullable=False, default=CallableColumnDefault(<function now>)), Column('mtime', TZDateTime(), table=<db_dbnode>, nullable=False, onupdate=CallableColumnDefault(<function now>), default=CallableColumnDefault(<function now>)), Column('attributes', JSON(), table=<db_dbnode>, default=CallableColumnDefault(<function dict>)), Column('extras', JSON(), table=<db_dbnode>, default=CallableColumnDefault(<function dict>)), Column('repository_metadata', JSON(), table=<db_dbnode>, nullable=False, default=CallableColumnDefault(<function dict>)), Column('dbcomputer_id', Integer(), ForeignKey('db_dbcomputer.id'), table=<db_dbnode>), Column('user_id', Integer(), ForeignKey('db_dbuser.id'), table=<db_dbnode>, nullable=False), schema=None)#
- __tablename__ = 'db_dbnode'#
- _sa_class_manager = {'attributes': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'ctime': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbcomputer': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'dbcomputer_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'description': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'extras': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'label': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'mtime': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'node_type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'process_type': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'repository_metadata': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'user_id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'uuid': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
- attributes#
- ctime#
- dbcomputer#
- dbcomputer_id#
- description#
- extras#
- id#
- label#
- mtime#
- node_type#
- process_type#
- repository_metadata#
- user#
- user_id#
- uuid#
- class aiida.storage.sqlite_zip.models.DbUser(**kwargs)#
Bases:
SqliteModel
Database model to store data for
aiida.orm.User.
Every node that is created has a single user as its author.
The user information consists of the most basic personal contact details.
- __init__(**kwargs)#
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in
kwargs.
Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- __mapper__ = <Mapper at 0x7fc23caddf90; DbUser>#
- __module__ = 'aiida.storage.sqlite_zip.models'#
- __table__ = Table('db_dbuser', MetaData(), Column('id', Integer(), table=<db_dbuser>, primary_key=True, nullable=False), Column('email', String(length=254), table=<db_dbuser>, nullable=False), Column('first_name', String(length=254), table=<db_dbuser>, nullable=False, default=ScalarElementColumnDefault('')), Column('last_name', String(length=254), table=<db_dbuser>, nullable=False, default=ScalarElementColumnDefault('')), Column('institution', String(length=254), table=<db_dbuser>, nullable=False, default=ScalarElementColumnDefault('')), schema=None)#
- __tablename__ = 'db_dbuser'#
- _sa_class_manager = {'email': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'first_name': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'id': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'institution': <sqlalchemy.orm.attributes.InstrumentedAttribute object>, 'last_name': <sqlalchemy.orm.attributes.InstrumentedAttribute object>}#
- email#
- first_name#
- id#
- institution#
- last_name#
- class aiida.storage.sqlite_zip.models.SqliteModel[source]#
Bases:
object
Represent a row in an SQLite database table
- __annotations__ = {}#
- __dict__ = mappingproxy({'__module__': 'aiida.storage.sqlite_zip.models', '__doc__': 'Represent a row in an sqlite database table', '__repr__': <function SqliteModel.__repr__>, '__dict__': <attribute '__dict__' of 'SqliteModel' objects>, '__weakref__': <attribute '__weakref__' of 'SqliteModel' objects>, '__annotations__': {}})#
- __module__ = 'aiida.storage.sqlite_zip.models'#
- __weakref__#
list of weak references to the object
- class aiida.storage.sqlite_zip.models.TZDateTime(*args: Any, **kwargs: Any)[source]#
Bases:
TypeDecorator
A timezone-naive UTC
DateTime
implementation for SQLite.
- __annotations__ = {}#
- __module__ = 'aiida.storage.sqlite_zip.models'#
- __parameters__ = ()#
- cache_ok: bool | None = True#
Indicate if statements using this
ExternalType
are “safe to cache”. The default value
None
will emit a warning and then not allow caching of a statement which includes this type. Set to
False
to disable statements using this type from being cached at all without a warning. When set to
True
, the object’s class and selected elements from its state will be used as part of the cache key. For example, using a
TypeDecorator
:

    class MyType(TypeDecorator):
        impl = String
        cache_ok = True

        def __init__(self, choices):
            self.choices = tuple(choices)
            self.internal_only = True
The cache key for the above type would be equivalent to:
    >>> MyType(["a", "b", "c"])._static_cache_key
    (<class '__main__.MyType'>, ('choices', ('a', 'b', 'c')))
The caching scheme will extract attributes from the type that correspond to the names of parameters in the
__init__()
method. Above, the “choices” attribute becomes part of the cache key but “internal_only” does not, because there is no parameter named “internal_only”. The requirements for cacheable elements are that they are hashable and also that they indicate the same SQL rendered for expressions using this type every time for a given cache value.
To accommodate for datatypes that refer to unhashable structures such as dictionaries, sets and lists, these objects can be made “cacheable” by assigning hashable structures to the attributes whose names correspond with the names of the arguments. For example, a datatype which accepts a dictionary of lookup values may publish this as a sorted series of tuples. Given a previously un-cacheable type as:
    class LookupType(UserDefinedType):
        '''a custom type that accepts a dictionary as a parameter.

        this is the non-cacheable version, as "self.lookup" is not hashable.
        '''

        def __init__(self, lookup):
            self.lookup = lookup

        def get_col_spec(self, **kw):
            return "VARCHAR(255)"

        def bind_processor(self, dialect):
            # ... works with "self.lookup" ...
Where “lookup” is a dictionary. The type will not be able to generate a cache key:
    >>> type_ = LookupType({"a": 10, "b": 20})
    >>> type_._static_cache_key
    <stdin>:1: SAWarning: UserDefinedType LookupType({'a': 10, 'b': 20}) will not
    produce a cache key because the ``cache_ok`` flag is not set to True.  Set
    this flag to True if this type object's state is safe to use in a cache key,
    or False to disable this warning.
    symbol('no_cache')
If we did set up such a cache key, it wouldn’t be usable. We would get a tuple structure that contains a dictionary inside of it, which cannot itself be used as a key in a “cache dictionary” such as SQLAlchemy’s statement cache, since Python dictionaries aren’t hashable:
    >>> # set cache_ok = True
    >>> type_.cache_ok = True
    >>> # this is the cache key it would generate
    >>> key = type_._static_cache_key
    >>> key
    (<class '__main__.LookupType'>, ('lookup', {'a': 10, 'b': 20}))
    >>> # however this key is not hashable, will fail when used with
    >>> # SQLAlchemy statement cache
    >>> some_cache = {key: "some sql value"}
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unhashable type: 'dict'
The type may be made cacheable by assigning a sorted tuple of tuples to the “.lookup” attribute:
    class LookupType(UserDefinedType):
        '''a custom type that accepts a dictionary as a parameter.

        The dictionary is stored both as itself in a private variable,
        and published in a public variable as a sorted tuple of tuples,
        which is hashable and will also return the same value for any
        two equivalent dictionaries.  Note it assumes the keys and
        values of the dictionary are themselves hashable.
        '''

        cache_ok = True

        def __init__(self, lookup):
            self._lookup = lookup

            # assume keys/values of "lookup" are hashable; otherwise
            # they would also need to be converted in some way here
            self.lookup = tuple((key, lookup[key]) for key in sorted(lookup))

        def get_col_spec(self, **kw):
            return "VARCHAR(255)"

        def bind_processor(self, dialect):
            # ... works with "self._lookup" ...
Where above, the cache key for
LookupType({"a": 10, "b": 20})
will be:

    >>> LookupType({"a": 10, "b": 20})._static_cache_key
    (<class '__main__.LookupType'>, ('lookup', (('a', 10), ('b', 20))))
New in version 1.4.14: added the
cache_ok
flag to allow some configurability of caching for
TypeDecorator
classes.
New in version 1.4.28: added the
ExternalType
mixin, which generalizes the
cache_ok
flag to both the
TypeDecorator
and
UserDefinedType
classes.
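The conversion that TZDateTime performs can be sketched with the standard library: timezone-aware values are normalized to naive UTC before being bound to SQLite, and UTC is re-attached when reading back. This is a sketch of the idea, not the class's actual implementation:

```python
from datetime import datetime, timedelta, timezone

def to_naive_utc(value: datetime) -> datetime:
    """Normalize an aware datetime to naive UTC before binding it to SQLite."""
    if value.tzinfo is not None:
        value = value.astimezone(timezone.utc).replace(tzinfo=None)
    return value

def from_naive_utc(value: datetime) -> datetime:
    """Re-attach UTC when reading a stored value back out."""
    return value.replace(tzinfo=timezone.utc)

cet = timezone(timedelta(hours=1))
stored = to_naive_utc(datetime(2024, 1, 1, 13, 0, tzinfo=cet))
# stored == datetime(2024, 1, 1, 12, 0), with tzinfo stripped
restored = from_naive_utc(stored)
# restored carries timezone.utc again
```

Storing naive UTC keeps comparisons and ordering inside SQLite consistent, since SQLite itself has no timezone-aware datetime type.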
- aiida.storage.sqlite_zip.models.create_orm_cls(klass: Model) SqliteModel [source]#
Create an ORM class from an existing table in the declarative meta
- aiida.storage.sqlite_zip.models.get_model_from_entity(entity_type: EntityTypes) Tuple[Any, Set[str]] [source]#
Return the Sqlalchemy model and column names corresponding to the given entity.
- aiida.storage.sqlite_zip.models.pg_to_sqlite(pg_table: Table)[source]#
Convert a model intended for PostgreSQL to one compatible with SQLite
This module contains the AiiDA backend ORM classes for the SQLite backend.
It re-uses the classes already defined in the psql_dos
backend (for PostgreSQL),
but redefines the SQLAlchemy models as SQLite-compatible ones.
- class aiida.storage.sqlite_zip.orm.SqliteAuthInfo(backend, computer, user)[source]#
Bases:
SqliteEntityOverride
,SqlaAuthInfo
- COMPUTER_CLASS#
alias of
SqliteComputer
- MODEL_CLASS#
alias of
DbAuthInfo
- USER_CLASS#
alias of
SqliteUser
- __abstractmethods__ = frozenset({})#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __parameters__ = ()#
- _abc_impl = <_abc._abc_data object>#
- _model: ModelWrapper#
- class aiida.storage.sqlite_zip.orm.SqliteAuthInfoCollection(backend: StorageBackend)[source]#
Bases:
SqlaAuthInfoCollection
- ENTITY_CLASS#
alias of
SqliteAuthInfo
- __annotations__ = {}#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __parameters__ = ()#
- class aiida.storage.sqlite_zip.orm.SqliteComment(backend, node, user, content=None, ctime=None, mtime=None)[source]#
Bases:
SqliteEntityOverride
,SqlaComment
- USER_CLASS#
alias of
SqliteUser
- __abstractmethods__ = frozenset({})#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __parameters__ = ()#
- _abc_impl = <_abc._abc_data object>#
- _model: ModelWrapper#
- class aiida.storage.sqlite_zip.orm.SqliteCommentCollection(backend: StorageBackend)[source]#
Bases:
SqlaCommentCollection
- ENTITY_CLASS#
alias of
SqliteComment
- __annotations__ = {}#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __parameters__ = ()#
- class aiida.storage.sqlite_zip.orm.SqliteComputer(backend, **kwargs)[source]#
Bases:
SqliteEntityOverride
,SqlaComputer
- MODEL_CLASS#
alias of
DbComputer
- __abstractmethods__ = frozenset({})#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __parameters__ = ()#
- _abc_impl = <_abc._abc_data object>#
- _model: ModelWrapper#
- class aiida.storage.sqlite_zip.orm.SqliteComputerCollection(backend: StorageBackend)[source]#
Bases:
SqlaComputerCollection
- ENTITY_CLASS#
alias of
SqliteComputer
- __annotations__ = {}#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __parameters__ = ()#
- class aiida.storage.sqlite_zip.orm.SqliteEntityOverride[source]#
Bases:
object
Overrides type-checking of psql_dos
Entity
.- __annotations__ = {'MODEL_CLASS': typing.Any, '_model': <class 'aiida.storage.psql_dos.orm.utils.ModelWrapper'>}#
- __dict__ = mappingproxy({'__module__': 'aiida.storage.sqlite_zip.orm', '__annotations__': {'MODEL_CLASS': typing.Any, '_model': <class 'aiida.storage.psql_dos.orm.utils.ModelWrapper'>}, '__doc__': 'Overrides type-checking of psql_dos ``Entity``.', '_class_check': <classmethod(<function SqliteEntityOverride._class_check>)>, 'from_dbmodel': <classmethod(<function SqliteEntityOverride.from_dbmodel>)>, 'store': <function SqliteEntityOverride.store>, '__dict__': <attribute '__dict__' of 'SqliteEntityOverride' objects>, '__weakref__': <attribute '__weakref__' of 'SqliteEntityOverride' objects>})#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __weakref__#
list of weak references to the object
- _model: ModelWrapper#
- class aiida.storage.sqlite_zip.orm.SqliteGroup(backend, label, user, description='', type_string='')[source]#
Bases:
SqliteEntityOverride
,SqlaGroup
- USER_CLASS#
alias of
SqliteUser
- __abstractmethods__ = frozenset({})#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __parameters__ = ()#
- _abc_impl = <_abc._abc_data object>#
- _model: ModelWrapper#
- class aiida.storage.sqlite_zip.orm.SqliteGroupCollection(backend: StorageBackend)[source]#
Bases:
SqlaGroupCollection
- ENTITY_CLASS#
alias of
SqliteGroup
- __annotations__ = {}#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __parameters__ = ()#
- class aiida.storage.sqlite_zip.orm.SqliteLog(backend, time, loggername, levelname, dbnode_id, message='', metadata=None)[source]#
Bases:
SqliteEntityOverride
,SqlaLog
- __abstractmethods__ = frozenset({})#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __parameters__ = ()#
- _abc_impl = <_abc._abc_data object>#
- _model: ModelWrapper#
- class aiida.storage.sqlite_zip.orm.SqliteLogCollection(backend: StorageBackend)[source]#
Bases:
SqlaLogCollection
- __annotations__ = {}#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __parameters__ = ()#
- class aiida.storage.sqlite_zip.orm.SqliteNode(backend, node_type, user, computer=None, process_type=None, label='', description='', ctime=None, mtime=None)[source]#
Bases:
SqliteEntityOverride
,SqlaNode
SQLA Node backend entity
- COMPUTER_CLASS#
alias of
SqliteComputer
- USER_CLASS#
alias of
SqliteUser
- __abstractmethods__ = frozenset({})#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __parameters__ = ()#
- _abc_impl = <_abc._abc_data object>#
- _model: ModelWrapper#
- class aiida.storage.sqlite_zip.orm.SqliteNodeCollection(backend: StorageBackend)[source]#
Bases:
SqlaNodeCollection
- ENTITY_CLASS#
alias of
SqliteNode
- __annotations__ = {}#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __parameters__ = ()#
- class aiida.storage.sqlite_zip.orm.SqliteQueryBuilder(backend)[source]#
Bases:
SqlaQueryBuilder
QueryBuilder to use with SQLAlchemy-backend, adapted for SQLite.
- property AuthInfo#
- property Comment#
- property Computer#
- property Group#
- property Link#
- property Log#
- property Node#
- property User#
- __abstractmethods__ = frozenset({})#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- _abc_impl = <_abc._abc_data object>#
- _data: QueryDictType#
- static _get_projectable_entity(alias: AliasedClass, column_name: str, attrpath: List[str], cast: str | None = None) ColumnElement | InstrumentedAttribute [source]#
Return projectable entity for a given alias and column name.
- _query_cache: BuiltQuery | None#
- static get_filter_expr_from_column(operator: str, value: Any, column) BinaryExpression [source]#
A method that returns a valid SQLAlchemy expression.
- Parameters:
operator – The operator provided by the user (‘==’, ‘>’, …)
value – The value to compare with, e.g. (5.0, ‘foo’, [‘a’,’b’])
column – an instance of sqlalchemy.orm.attributes.InstrumentedAttribute or
- static get_filter_expr_from_jsonb(operator: str, value, attr_key: List[str], column=None, column_name=None, alias=None)[source]#
Return a filter expression.
- property table_groups_nodes#
- class aiida.storage.sqlite_zip.orm.SqliteUser(backend, email, first_name, last_name, institution)[source]#
Bases:
SqliteEntityOverride
,SqlaUser
- __abstractmethods__ = frozenset({})#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __parameters__ = ()#
- _abc_impl = <_abc._abc_data object>#
- _model: ModelWrapper#
- class aiida.storage.sqlite_zip.orm.SqliteUserCollection(backend: StorageBackend)[source]#
Bases:
SqlaUserCollection
- ENTITY_CLASS#
alias of
SqliteUser
- __annotations__ = {}#
- __module__ = 'aiida.storage.sqlite_zip.orm'#
- __parameters__ = ()#
- aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel, backend)[source]#
- aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbUser, backend)
- aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbGroup, backend)
- aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbComputer, backend)
- aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbNode, backend)
- aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbAuthInfo, backend)
- aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbComment, backend)
- aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbLog, backend)
- aiida.storage.sqlite_zip.orm.get_backend_entity(dbmodel: DbLink, backend)
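The overloads listed above are registrations on a single-dispatch function: one generic entry point that routes on the concrete model class of dbmodel. A minimal sketch of the pattern with stand-in classes (the real models and return values live in this package; the strings here are placeholders):

```python
from functools import singledispatch

# Stand-in model classes, for illustration only.
class DbUser: ...
class DbNode: ...

@singledispatch
def get_backend_entity(dbmodel, backend):
    # Fallback when no overload is registered for the model type.
    raise TypeError(f'no backend entity mapped for {type(dbmodel).__name__}')

@get_backend_entity.register
def _(dbmodel: DbUser, backend):
    # The real overload wraps dbmodel in the corresponding entity class.
    return 'SqliteUser'

@get_backend_entity.register
def _(dbmodel: DbNode, backend):
    return 'SqliteNode'

# get_backend_entity(DbNode(), None) == 'SqliteNode'
```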
Utilities for this backend.
- aiida.storage.sqlite_zip.utils.DB_FILENAME = 'db.sqlite3'#
The filename of the SQLite database.
- aiida.storage.sqlite_zip.utils.META_FILENAME = 'metadata.json'#
The filename containing meta information about the storage instance.
- aiida.storage.sqlite_zip.utils.REPO_FOLDER = 'repo'#
The name of the folder containing the repository files.
- exception aiida.storage.sqlite_zip.utils.ReadOnlyError(msg='sqlite_zip storage is read-only')[source]#
Bases:
AiidaException
Raised when a write operation is called on a read-only archive.
- __annotations__ = {}#
- __module__ = 'aiida.storage.sqlite_zip.utils'#
- aiida.storage.sqlite_zip.utils.create_sqla_engine(path: str | Path, *, enforce_foreign_keys: bool = True, **kwargs) Engine [source]#
Create a new engine instance.
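The enforce_foreign_keys flag matters because SQLite leaves foreign-key enforcement off unless a pragma is issued on each new connection. The behaviour can be demonstrated with the stdlib sqlite3 module (table names borrowed from the models above; a sketch of the behaviour, not the engine code itself):

```python
import sqlite3

def connect(enforce_foreign_keys: bool = True) -> sqlite3.Connection:
    """Open an in-memory SQLite connection, optionally enforcing FKs."""
    conn = sqlite3.connect(':memory:')
    if enforce_foreign_keys:
        # Off by default in SQLite; must be set per connection.
        conn.execute('PRAGMA foreign_keys = ON')
    return conn

conn = connect()
conn.execute('CREATE TABLE db_dbuser (id INTEGER PRIMARY KEY)')
conn.execute(
    'CREATE TABLE db_dbnode (id INTEGER PRIMARY KEY, '
    'user_id INTEGER NOT NULL REFERENCES db_dbuser (id))'
)
try:
    # No user with id 99 exists, so this violates the foreign key.
    conn.execute('INSERT INTO db_dbnode (id, user_id) VALUES (1, 99)')
    orphan_allowed = True
except sqlite3.IntegrityError:
    orphan_allowed = False
# orphan_allowed is False when enforcement is on
```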
- aiida.storage.sqlite_zip.utils.extract_metadata(path: str | Path, *, search_limit: int | None = 10) Dict[str, Any] [source]#
Extract the metadata dictionary from the archive.
- Parameters:
search_limit – the maximum number of records to search for the metadata file in a zip file.
- aiida.storage.sqlite_zip.utils.read_version(path: str | Path, *, search_limit: int | None = None) str [source]#
Read the version of the storage instance from the path.
This is intended to work for all versions of the storage format.
- Parameters:
path – path to storage instance, either a folder, zip file or tar file.
search_limit – the maximum number of records to search for the metadata file in a zip file.
- Raises:
UnreachableStorage
if a version cannot be read from the file
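Because metadata.json sits at the top of the zip, it can be pulled out without reading the whole archive. A minimal sketch with the standard library (the 'export_version' key is illustrative; the real functions also handle folders and tar files, and limit how many records they scan via search_limit):

```python
import io
import json
import zipfile

def extract_metadata_sketch(source) -> dict:
    """Read the top-level metadata.json from a sqlite_zip style archive."""
    with zipfile.ZipFile(source) as zf:
        with zf.open('metadata.json') as handle:
            return json.load(handle)

# Build a toy archive in memory with the layout described for this package.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('metadata.json', json.dumps({'export_version': '1.0'}))
    zf.writestr('db.sqlite3', b'')

meta = extract_metadata_sketch(buf)
# meta == {'export_version': '1.0'}
```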
- aiida.storage.sqlite_zip.utils.sqlite_case_sensitive_like(dbapi_connection, _)[source]#
Enforce case-sensitive LIKE operations (off by default).
See: https://www.sqlite.org/pragma.html#pragma_case_sensitive_like
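The effect of this pragma is easy to check with the stdlib sqlite3 module: SQLite's LIKE ignores ASCII case by default, and the pragma the listener issues switches that behaviour off (a sketch of what happens on each connection):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# By default, SQLite's LIKE is case-insensitive for ASCII characters:
relaxed = conn.execute("SELECT 'ABC' LIKE 'abc'").fetchone()[0]
# relaxed == 1
# The listener issues this pragma on every new DBAPI connection:
conn.execute('PRAGMA case_sensitive_like = ON')
strict = conn.execute("SELECT 'ABC' LIKE 'abc'").fetchone()[0]
# strict == 0
```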