Plugins#
What a plugin can do#
Add a new class to AiiDA’s entry point groups, including: calculations, parsers, workflows, data types, verdi commands, schedulers, transports and importers/exporters from external databases. This typically involves subclassing the respective base class AiiDA provides for that purpose.
Install new commandline and/or GUI executables
Depend on, and build on top of any number of other plugins (as long as their requirements do not clash)
What a plugin should not do#
An AiiDA plugin should not:
Change the database schema AiiDA uses
Use protected functions, methods or classes of AiiDA (those starting with an underscore _)
Monkey patch anything within the aiida namespace (or the namespace itself)
Failure to comply will likely prevent your plugin from being listed on the official AiiDA plugin registry.
If you find yourself in a situation where you feel like you need to do any of the above, please open an issue on the AiiDA repository and we can try to advise on how to proceed.
Guidelines for plugin design#
CalcJob & Parser plugins#
The following guidelines are useful to keep in mind when wrapping external codes:
Start simple. Make use of existing classes like Dict, SinglefileData, … Write only what is necessary to pass information from and to AiiDA.
Don’t break data provenance. Store at least what is needed for full reproducibility.
Expose the full functionality. Standardization is good but don’t artificially limit the power of a code you are wrapping - or your users will get frustrated. If the code can do it, there should be some way to do it with your plugin.
Don’t rely on AiiDA internals. Functionality at deeper nesting levels is not considered part of the public API and may change between minor AiiDA releases, breaking your plugin.
Parse what you want to query for. Make a list of which information to (see the sketch after this list):
parse into the database for querying (Dict, …)
store in the file repository for safe-keeping (SinglefileData, …)
leave on the computer where the calculation ran (RemoteData, …)
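As a rough illustration of this split, the hypothetical parser below (all names and values are made up) puts a small, queryable summary into a Dict node; the raw output files already live in the retrieved folder for safe-keeping, and whatever stays in the scratch directory remains reachable through the RemoteData node:
from aiida import orm
from aiida.parsers import Parser

class MycodeParser(Parser):
    """Hypothetical parser illustrating the three destinations listed above."""

    def parse(self, **kwargs):
        # Queryable results go into the database as a Dict node.
        self.out('output_parameters', orm.Dict(dict={'energy': -1.0}))
        # The full raw output is kept for safe-keeping in the `retrieved`
        # FolderData attached by the engine, and anything left in the scratch
        # directory stays accessible through the calculation's RemoteData node.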
What is an entry point?#
The setuptools package (used by pip) has a feature called entry points, which allows associating a string (the entry point identifier) with any python object defined inside a python package.
Entry points are defined in the pyproject.toml file, for example:
...
[project.entry-points."aiida.data"]
# entry point = path.to.python.object
"mycode.mydata = aiida_mycode.data.mydata:MyData",
...
Here, we add a new entry point mycode.mydata to the entry point group aiida.data.
The entry point identifier points to the MyData class inside the file mydata.py, which is part of the aiida_mycode package.
When installing a python package that defines entry points, the entry point specifications are written to a file inside the distribution’s .egg-info folder.
setuptools provides the pkg_resources package for querying these entry point specifications by distribution, by entry point group and/or by name of the entry point, and for loading the data structure to which they point.
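For example, a minimal sketch of querying one of the AiiDA entry point groups listed below with pkg_resources:
import pkg_resources

# Iterate over all entry points registered in the 'aiida.data' group.
for entry_point in pkg_resources.iter_entry_points('aiida.data'):
    print(entry_point.name)
    data_class = entry_point.load()  # imports and returns the object the entry point points to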
Why entry points?#
AiiDA defines a set of entry point groups (see AiiDA entry point groups below). By inspecting the entry points added to these groups by AiiDA plugins, AiiDA can offer uniform interfaces to interact with them. For example:
verdi plugin list aiida.workflows provides an overview of all workflows installed by AiiDA plugins. Users can inspect the inputs/outputs of each workflow using the same command without having to study the documentation of the plugin.
The DataFactory, CalculationFactory and WorkflowFactory methods allow instantiating new classes through a simple short string (e.g. quantumespresso.pw). Users don’t need to remember exactly where in the plugin package the class resides, and plugins can be refactored without users having to re-learn the plugin’s API.
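For instance, the same factory call pattern resolves entry point strings for classes shipped with aiida-core and for classes added by plugins (the quantumespresso.pw identifier below is the example from above and requires the corresponding plugin package to be installed):
from aiida.plugins import CalculationFactory, DataFactory

StructureData = DataFactory('core.structure')              # shipped with aiida-core
PwCalculation = CalculationFactory('quantumespresso.pw')   # provided by a plugin package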
AiiDA entry point groups#
Below, we list the entry point groups defined and searched by AiiDA.
You can get the same list as the output of verdi plugin list.
aiida.calculations#
Entry points in this group are expected to be subclasses of aiida.engine.CalcJob. This replaces the previous method of placing a python module with the class in question inside the aiida/orm/calculation/job subpackage.
Example entry point specification:
[project.entry-points."aiida.calculations"]
"mycode.mycode" = "aiida_mycode.calcs.mycode:MycodeCalculation"
aiida_mycode/calcs/mycode.py:
from aiida.engine import CalcJob
class MycodeCalculation(CalcJob):
    ...
Will lead to usage:
from aiida.plugins import CalculationFactory
calc = CalculationFactory('mycode.mycode')
aiida.parsers#
AiiDA expects a subclass of Parser. This replaces the previous approach of placing a parser module under aiida/parsers/plugins.
Example spec:
[project.entry-points."aiida.parsers"]
"mycode.myparser" = "aiida_mycode.parsers.mycode:MycodeParser"
aiida_mycode/parsers/mycode.py:
from aiida.parsers import Parser
class MycodeParser(Parser):
    ...
Usage:
from aiida.plugins import ParserFactory
parser = ParserFactory('mycode.myparser')
aiida.data#
Group for Data subclasses. Previously located in a subpackage of aiida/orm/data.
Spec:
[project.entry-points."aiida.data"]
"mycode.mydata" = "aiida_mycode.data.mydata:MyData"
aiida_mycode/data/mydata.py:
from aiida.orm import Data
class MyData(Data):
    ...
Usage:
from aiida.plugins import DataFactory
params = DataFactory('mycode.mydata')
aiida.workflows#
Package AiiDA workflows as follows:
Spec:
[project.entry-points."aiida.workflows"]
"mycode.mywf" = "aiida_mycode.workflows.mywf:MyWorkflow"
aiida_mycode/workflows/mywf.py:
from aiida.engine import WorkChain
class MyWorkflow(WorkChain):
    ...
Usage:
from aiida.plugins import WorkflowFactory
wf = WorkflowFactory('mycode.mywf')
Note
For old-style workflows the entry point mechanism of the plugin system is not supported.
Therefore one cannot load these workflows with the WorkflowFactory.
The only way to run these is to store their source code in the aiida/workflows/user directory and use normal python imports to load the classes.
aiida.cmdline#
verdi uses the click framework, which makes it possible to add new subcommands to existing verdi commands, such as verdi data mydata.
AiiDA expects each entry point to be either a click.Command or a click.Group. At present, extra commands can be injected at the following levels:
Spec for verdi data:
[project.entry-points."aiida.cmdline.data"]
"mydata" = "aiida_mycode.commands.mydata:mydata"
aiida_mycode/commands/mydata.py:
import click

@click.group()
def mydata():
    """commandline help for mydata command"""

@mydata.command('animate')
@click.option('--format')
@click.argument('pk')
def create_fancy_animation(format, pk):
    """help"""
    ...
Usage:
verdi data mydata animate --format=Format PK
Spec for verdi data core.structure import:
[project.entry-points."aiida.cmdline.data.structure.import"]
"myformat" = "aiida_mycode.commands.myformat:myformat"
aiida_mycode/commands/myformat.py:
import click

@click.command()
@click.argument('filename', type=click.File('r'))
def myformat(filename):
    """commandline help for myformat import command"""
    ...
Usage:
verdi data core.structure import myformat a_file.myfmt
aiida.tools.dbexporters#
If your plugin package adds support for exporting to an external database, use this entry point to have AiiDA find the module where you define the necessary functions.
aiida.tools.dbimporters#
If your plugin package adds support for importing from an external database, use this entry point to have AiiDA find the module where you define the necessary functions.
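Neither of these two groups has a dedicated example above; as a purely illustrative sketch (the package, module and class names are made up), an importer spec in pyproject.toml follows the same pattern as the other groups, and an exporter spec differs only in the group name:
[project.entry-points."aiida.tools.dbimporters"]
"mydatabase" = "aiida_mycode.dbimporters.mydatabase:MyDatabaseImporter"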
aiida.schedulers#
We recommend naming the plugin package after the scheduler (e.g. aiida-myscheduler), so that the entry point name can simply equal the name of the scheduler:
Spec:
[project.entry-points."aiida.schedulers"]
"myscheduler" = "aiida_myscheduler.myscheduler:MyScheduler"
aiida_myscheduler/myscheduler.py:
from aiida.schedulers import Scheduler
class MyScheduler(Scheduler):
    ...
Usage: The scheduler is used in the familiar way by entering ‘myscheduler’ as the scheduler option when setting up a computer.
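Assuming the plugin package is installed, the new scheduler then shows up as a regular choice when configuring a computer; a sketch with placeholder values (only the --scheduler value matters here):
verdi computer setup --label mycluster --hostname cluster.example.com --transport core.ssh --scheduler myscheduler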
aiida.transports
#
aiida-core
ships with two modes of transporting files and folders to remote computers: core.ssh
and core.local
(stub for when the remote computer is actually the same).
We recommend naming the plugin package after the mode of transport (e.g. aiida-mytransport
), so that the entry point name can simply equal the name of the transport:
Spec:
[project.entry-points."aiida.transports"]
"mytransport" = "aiida_mytransport.mytransport:MyTransport"
aiida_mytransport/mytransport.py:
from aiida.transports import Transport
class MyTransport(Transport):
    ...
Usage:
from aiida.plugins import TransportFactory
transport = TransportFactory('mytransport')
When setting up a new computer, specify mytransport as the transport mode.
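Analogously to the scheduler example above, a sketch of selecting the custom transport during computer setup (placeholder values, assuming the plugin is installed):
verdi computer setup --label myremote --hostname remote.example.com --transport mytransport --scheduler core.direct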
Plugin test fixtures#
When developing AiiDA plugin packages, it is recommended to use pytest as the unit test library, which is the de-facto standard in the Python ecosystem.
It provides a number of fixtures that make it easy to set up and write tests.
aiida-core also provides a number of fixtures that are specific to AiiDA and make it easy to test various sorts of plugins.
To make use of these fixtures, create a conftest.py file in your tests folder and add the following code:
pytest_plugins = 'aiida.tools.pytest_fixtures'
Just by adding this line, the fixtures that are provided by the pytest_fixtures module are automatically imported.
The module provides the following fixtures:
aiida_manager: Return the global instance of the Manager
aiida_profile: Provide a loaded AiiDA test profile with a loaded storage backend
aiida_profile_clean: Same as aiida_profile but the storage backend is cleaned
aiida_profile_clean_class: Same as aiida_profile_clean but should be used at the class scope
aiida_profile_factory: Create a temporary profile ready to be used for testing
aiida_config: Return the Config instance that is used for the test session
config_psql_dos: Return a profile configuration for the PsqlDosBackend
postgres_cluster: Create a temporary and isolated PostgreSQL cluster using pgtest and clean up after the yield
aiida_computer: Set up a Computer instance
aiida_computer_local: Set up the localhost as a Computer using local transport
aiida_computer_ssh: Set up the localhost as a Computer using SSH transport
aiida_localhost: Shortcut for aiida_computer_local that immediately returns a Computer instance for the localhost computer instead of a factory
aiida_code: Set up an AbstractCode instance
aiida_code_installed: Set up an InstalledCode instance on a given computer
submit_and_await: Submit a process or process builder to the daemon and wait for it to reach a certain process state
started_daemon_client: Same as daemon_client but the daemon is guaranteed to be running
stopped_daemon_client: Same as daemon_client but the daemon is guaranteed to not be running
daemon_client: Return a DaemonClient instance to control the daemon
entry_points: Return an EntryPointManager instance to add and remove entry points
Note
Before v2.6, test fixtures were located in aiida.manage.tests.pytest_fixtures.
This module is now deprecated and will be removed in the future.
Some fixtures have analogs in aiida.tools.pytest_fixtures that are drop-in replacements, but in general, there are differences in the interface and functionality.
aiida_manager#
Return the global instance of the Manager.
Can be used, for example, to retrieve the current Config instance:
def test(aiida_manager):
    aiida_manager.get_config().get_option('logging.aiida_loglevel')
aiida_profile#
This fixture ensures that an AiiDA profile is loaded with an initialized storage backend, such that data can be stored.
The fixture is session-scoped and has autouse=True set, so it is automatically enabled for the test session.
By default, the fixture will generate a completely temporary independent AiiDA instance and test profile. This includes:
A temporary .aiida configuration folder with configuration files
A temporary test profile configured with the core.sqlite_dos storage backend
Note
The profile uses core.sqlite_dos instead of the standard core.psql_dos storage plugin as it doesn’t require PostgreSQL to be installed.
Since the functionality of PostgreSQL is not needed for most common test cases, this choice makes it easier to start writing and running tests.
The temporary test instance and profile are automatically destroyed at the end of the test session. The fixture guarantees that no changes are made to the actual instance of AiiDA with its configuration and profiles.
Note
The profile does not configure RabbitMQ as a broker since it is not required for most test cases useful for plugins. This means, however, that any functionality that requires a broker is not available, such as running the daemon and submitting processes to the daemon. If that functionality is required, a profile should be created and loaded that configures a broker.
Although the fixture is automatically used, and so there is no need to explicitly pass it into a test function, it may still be useful, as it can be used to clean the storage backend from all data:
def test(aiida_profile):
    from aiida.orm import Data, QueryBuilder
    Data().store()
    assert QueryBuilder().append(Data).count() != 0
    # The following call clears the storage backend, deleting all data, except for the default user.
    aiida_profile.reset_storage()
    assert QueryBuilder().append(Data).count() == 0
aiida_profile_clean#
Provides a loaded test profile through aiida_profile but empties the storage before calling the test function.
Note that a default user will be inserted into the database after cleaning it.
def test(aiida_profile_clean):
    """The profile storage is guaranteed to be emptied at the start of this test."""
This functionality can be useful if it is easier to set up and write the test when there is no pre-existing data. However, cleaning the storage may take a non-negligible amount of time, so only use it when really needed in order to keep tests running as fast as possible.
aiida_profile_clean_class#
Provides the same as aiida_profile_clean but with scope=class.
Should be used for a test class:
@pytest.mark.usefixtures('aiida_profile_clean_class')
class TestClass:
    def test(self):
        ...
The storage is cleaned once when the class is initialized.
aiida_profile_factory#
Create a temporary profile, add it to the config of the loaded AiiDA instance and load the profile. Can be useful to create a test profile for a custom storage backend:
@pytest.fixture(scope='session')
def custom_storage_profile(aiida_profile_factory) -> Profile:
    """Return a test profile for a custom :class:`~aiida.orm.implementation.storage_backend.StorageBackend`."""
    from some_module import CustomStorage
    configuration = {
        'storage': {
            'backend': 'plugin_package.custom_storage',
            'config': {
                'username': 'joe',
                'api_key': 'super-secret-key'
            }
        }
    }
    yield aiida_profile_factory(configuration)
Note that the configuration above is not functional as-is; the actual configuration depends on the storage implementation that is used.
aiida_config#
Return the Config instance that is used for the test session.
def test(aiida_config):
    aiida_config.get_option('logging.aiida_loglevel')
config_psql_dos#
Return a profile configuration for the PsqlDosBackend.
This can be used in combination with the aiida_profile_factory fixture to create a test profile with customised database parameters:
@pytest.fixture(scope='session')
def psql_dos_profile(aiida_profile_factory, config_psql_dos) -> Profile:
    """Return a test profile configured for the :class:`~aiida.storage.psql_dos.PsqlDosStorage`."""
    configuration = config_psql_dos()
    configuration['repository_uri'] = '/some/custom/path'
    with aiida_profile_factory(storage_backend='core.psql_dos', storage_config=configuration) as profile:
        yield profile
Note that this is only useful if the storage configuration needs to be customized.
If any configuration works, simply use the aiida_profile fixture straight away.
postgres_cluster#
Create a temporary and isolated PostgreSQL cluster using pgtest and clean up after the yield.
@pytest.fixture()
def custom_postgres_cluster(postgres_cluster):
    yield postgres_cluster(
        database_name='some-database-name',
        database_username='guest',
        database_password='guest',
    )
aiida_localhost#
This fixture is useful if a test requires a Computer instance.
It returns a Computer that represents the localhost.
def test(aiida_localhost):
    aiida_localhost.get_minimum_job_poll_interval()
aiida_computer#
This fixture should be used to create and configure a Computer instance.
The fixture provides a factory that can be called without any arguments:
def test(aiida_computer):
    from aiida.orm import Computer
    computer = aiida_computer()
    assert isinstance(computer, Computer)
By default, the localhost is used for the hostname and a random label is generated; a custom label can also be specified explicitly:
def test(aiida_computer):
    custom_label = 'custom-label'
    computer = aiida_computer(label=custom_label)
    assert computer.label == custom_label
First the database is queried to see if a computer with the given label already exists. If found, the existing computer is returned, otherwise a new instance is created.
The returned computer is also configured for the current default user.
The configuration can be customized through the configuration_kwargs dictionary:
def test(aiida_computer):
    configuration_kwargs = {'safe_interval': 0}
    computer = aiida_computer(configuration_kwargs=configuration_kwargs)
    assert computer.get_minimum_job_poll_interval() == 0
aiida_computer_local#
This fixture is a shortcut for aiida_computer to set up the localhost with local transport:
def test(aiida_computer_local):
    localhost = aiida_computer_local()
    assert localhost.hostname == 'localhost'
    assert localhost.transport_type == 'core.local'
To leave a newly created computer unconfigured, pass configure=False:
def test(aiida_computer_local):
    localhost = aiida_computer_local(configure=False)
    assert not localhost.is_configured
Note that if the computer already exists and was configured before, it won’t be unconfigured. If you need a guarantee that the computer is not configured, make sure to clean the database before the test or use a unique label:
def test(aiida_computer_local):
    import uuid
    localhost = aiida_computer_local(label=str(uuid.uuid4()), configure=False)
    assert not localhost.is_configured
aiida_computer_ssh#
This fixture is a shortcut for aiida_computer to set up the localhost with SSH transport:
def test(aiida_computer_ssh):
    localhost = aiida_computer_ssh()
    assert localhost.hostname == 'localhost'
    assert localhost.transport_type == 'core.ssh'
This can be useful if the functionality that needs to be tested involves testing the SSH transport, but these use-cases should be rare outside of aiida-core.
To leave a newly created computer unconfigured, pass configure=False:
def test(aiida_computer_ssh):
    localhost = aiida_computer_ssh(configure=False)
    assert not localhost.is_configured
Note that if the computer already exists and was configured before, it won’t be unconfigured. If you need a guarantee that the computer is not configured, make sure to clean the database before the test or use a unique label:
def test(aiida_computer_ssh):
    import uuid
    localhost = aiida_computer_ssh(label=str(uuid.uuid4()), configure=False)
    assert not localhost.is_configured
aiida_code#
This fixture is useful if a test requires an AbstractCode instance.
For example:
def test(aiida_localhost, aiida_code):
    from aiida.orm import InstalledCode
    code = aiida_code(
        'core.code.installed',
        label='test-code',
        computer=aiida_localhost,
        filepath_executable='/bin/bash'
    )
    assert isinstance(code, InstalledCode)
aiida_code_installed#
This fixture is useful if a test requires an InstalledCode instance.
For example:
def test(aiida_code_installed):
    from aiida.orm import InstalledCode
    code = aiida_code_installed()
    assert isinstance(code, InstalledCode)
By default, it will use the localhost computer returned by the aiida_localhost fixture.
submit_and_await#
This fixture is useful for testing the submission of a process to the daemon.
It submits the process to the daemon and will wait until it has reached a certain state.
By default it will wait for the process to reach ProcessState.FINISHED:
def test(aiida_code_installed, submit_and_await):
    from aiida import orm
    code = aiida_code_installed(default_calc_job_plugin='core.arithmetic.add', filepath_executable='/usr/bin/bash')
    builder = code.get_builder()
    builder.x = orm.Int(1)
    builder.y = orm.Int(1)
    node = submit_and_await(builder)
    assert node.is_finished_ok
Note that the fixture automatically depends on the started_daemon_client fixture to guarantee the daemon is running.
started_daemon_client#
This fixture ensures that the daemon for the test profile is running and returns an instance of the DaemonClient which can be used to control the daemon.
def test(started_daemon_client):
    assert started_daemon_client.is_daemon_running
stopped_daemon_client#
This fixture ensures that the daemon for the test profile is stopped and returns an instance of the DaemonClient which can be used to control the daemon.
def test(stopped_daemon_client):
    assert not stopped_daemon_client.is_daemon_running
daemon_client#
Return a DaemonClient instance that can be used to control the daemon:
def test(daemon_client):
    daemon_client.start_daemon()
    assert daemon_client.is_daemon_running
    daemon_client.stop_daemon(wait=True)
The fixture is session scoped. At the end of the test session, this fixture automatically shuts down the daemon if it is still running.
entry_points#
Return an EntryPointManager instance to add and remove entry points.
from aiida.parsers import Parser

class CustomParser(Parser):
    """Parser implementation."""

def test_parser(entry_points):
    """Test a custom ``Parser`` implementation."""
    from aiida.plugins import ParserFactory
    entry_points.add(CustomParser, 'aiida.parsers:custom.parser')
    assert ParserFactory('custom.parser') is CustomParser
Any entry points additions and removals are automatically undone at the end of the test.