tmt.steps.provision package
Submodules
tmt.steps.provision.artemis module
- class tmt.steps.provision.artemis.ArtemisAPI(guest: GuestArtemis)
Bases: object
- create(path: str, data: dict[str, Any], request_kwargs: dict[str, Any] | None = None) Response
Create - or request creation of - a resource.
- Parameters:
path – API path to contact.
data – optional key/value data to send with the request.
request_kwargs – optional request options, as supported by the requests library.
- delete(path: str, request_kwargs: dict[str, Any] | None = None) Response
Delete - or request removal of - a resource.
- Parameters:
path – API path to contact.
request_kwargs – optional request options, as supported by the requests library.
- inspect(path: str, params: dict[str, Any] | None = None, request_kwargs: dict[str, Any] | None = None) Response
Inspect a resource.
- Parameters:
path – API path to contact.
params – optional key/value query parameters.
request_kwargs – optional request options, as supported by the requests library.
- query(path: str, method: str = 'get', request_kwargs: dict[str, Any] | None = None) Response
Base helper for Artemis API queries.
Trivial dispatcher per method, returning retrieved response.
- Parameters:
path – API path to contact.
method – HTTP method to use.
request_kwargs – optional request options, as supported by the requests library.
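For orientation, a minimal sketch of how these helpers might be combined to drive a guest request through its lifecycle. The API paths, payload layout and state names are illustrative assumptions rather than the exact Artemis routes, and guest stands for an already constructed GuestArtemis instance:

    import time

    from tmt.steps.provision.artemis import ArtemisAPI

    api = ArtemisAPI(guest)  # `guest` is an assumed GuestArtemis instance

    # Request creation of a new guest; the payload layout is illustrative only.
    response = api.create('/guests/', data={'environment': {'os': {'compose': 'Fedora'}}})
    guestname = response.json()['guestname']

    # Poll the resource until Artemis reports it as ready (assumed state name).
    while api.inspect(f'/guests/{guestname}').json()['state'] != 'ready':
        time.sleep(60)

    # Release the guest once it is no longer needed.
    api.delete(f'/guests/{guestname}')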
- class tmt.steps.provision.artemis.ArtemisGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: Optional[str] = None, topology_address: Optional[str] = None, role: Optional[str] = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, hardware: Optional[tmt.hardware.Hardware] = None, port: Optional[int] = None, user: str = 'root', key: list[str] = <factory>, password: Optional[str] = None, ssh_option: list[str] = <factory>, api_url: str = 'http://127.0.0.1:8001', api_version: str = '0.0.72', arch: str = 'x86_64', image: Optional[str] = None, pool: Optional[str] = None, priority_group: str = 'default-priority', keyname: str = 'default', user_data: dict[str, str] = <factory>, kickstart: dict[str, str] = <factory>, log_type: list[str] = <factory>, guestname: Optional[str] = None, provision_timeout: int = 600, provision_tick: int = 60, api_timeout: int = 10, api_retries: int = 10, api_retry_backoff_factor: int = 1, watchdog_dispatch_delay: Optional[int] = None, watchdog_period_delay: Optional[int] = None, skip_prepare_verify_ssh: bool = False, post_install_script: Optional[str] = None)
Bases: GuestSshData
- api_retries: int = 10
- api_retry_backoff_factor: int = 1
- api_timeout: int = 10
- api_url: str = 'http://127.0.0.1:8001'
- api_version: str = '0.0.72'
- arch: str = 'x86_64'
- guestname: str | None = None
- image: str | None = None
- keyname: str = 'default'
- kickstart: dict[str, str]
- log_type: list[str]
- pool: str | None = None
- post_install_script: str | None = None
- priority_group: str = 'default-priority'
- provision_tick: int = 60
- provision_timeout: int = 600
- skip_prepare_verify_ssh: bool = False
- user: str = 'root'
- user_data: dict[str, str]
- watchdog_dispatch_delay: int | None = None
- watchdog_period_delay: int | None = None
- exception tmt.steps.provision.artemis.ArtemisProvisionError(message: str, response: Response | None = None, request_data: dict[str, Any] | None = None, *args: Any, **kwargs: Any)
Bases: ProvisionError
Artemis provisioning error.
For some provisioning errors, we can provide more context.
General error.
- Parameters:
message – error message.
causes – optional list of exceptions that caused this one. Since raise ... from ... allows only for a single cause, and some of our workflows may raise exceptions triggered by more than one exception, we need a mechanism for storing them. Our reporting will honor this field, and report causes the same way as __cause__.
- class tmt.steps.provision.artemis.GuestArtemis(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)
Bases: GuestSsh
Artemis guest instance
The following keys are expected in the ‘data’ dictionary:
Initialize guest data
- property api: ArtemisAPI
- api_retries: int
- api_retry_backoff_factor: int
- api_timeout: int
- api_url: str
- api_version: str
- arch: str
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- guestname: str | None
- image: str
- property is_ready: bool
Detect whether the guest is ready or not
- keyname: str
- kickstart: dict[str, str]
- log_type: list[str]
- pool: str | None
- post_install_script: str | None
- priority_group: str
- provision_tick: int
- provision_timeout: int
- remove() None
Remove the guest
- skip_prepare_verify_ssh: bool
- start() None
Start the guest
Get a new guest instance running. This should include preparing any configuration necessary to get it started. Called after load() is completed so all guest data should be available.
- user_data: dict[str, str]
- watchdog_dispatch_delay: int | None
- watchdog_period_delay: int | None
- class tmt.steps.provision.artemis.GuestInspectType
Bases: TypedDict
- address: str | None
- state: str
- class tmt.steps.provision.artemis.ProvisionArtemis(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ProvisionPlugin[ProvisionArtemisData]
Provision guest using Artemis backend.
Minimal configuration could look like this:
provision:
    how: artemis
    image: Fedora
    api-url: https://your-artemis.com/
Note
The actual value of image depends on which images - or "composes" as Artemis calls them - the Artemis instance supports and can deliver.
Note
The api-url can be also given via the TMT_PLUGIN_PROVISION_ARTEMIS_API_URL environment variable.
Full configuration example:
provision:
    how: artemis

    # Artemis API
    api-url: https://your-artemis.com/
    api-version: 0.0.32

    # Mandatory environment properties
    image: Fedora

    # Optional environment properties
    arch: aarch64
    pool: optional-pool-name

    # Provisioning process control (optional)
    priority-group: custom-priority-group
    keyname: custom-SSH-key-name

    # Labels to be attached to guest request (optional)
    user-data:
        foo: bar

    # Timeouts and deadlines (optional)
    provision-timeout: 3600
    provision-tick: 10
    api-timeout: 600
    api-retries: 5
    api-retry-backoff-factor: 1
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- guest() GuestArtemis | None
Return the provisioned guest
- class tmt.steps.provision.artemis.ProvisionArtemisData(name: str, how: str, order: int = 50, summary: Optional[str] = None, role: Optional[str] = None, hardware: Optional[tmt.hardware.Hardware] = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: Optional[str] = None, topology_address: Optional[str] = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, port: Optional[int] = None, user: str = 'root', key: list[str] = <factory>, password: Optional[str] = None, ssh_option: list[str] = <factory>, api_url: str = 'http://127.0.0.1:8001', api_version: str = '0.0.72', arch: str = 'x86_64', image: Optional[str] = None, pool: Optional[str] = None, priority_group: str = 'default-priority', keyname: str = 'default', user_data: dict[str, str] = <factory>, kickstart: dict[str, str] = <factory>, log_type: list[str] = <factory>, guestname: Optional[str] = None, provision_timeout: int = 600, provision_tick: int = 60, api_timeout: int = 10, api_retries: int = 10, api_retry_backoff_factor: int = 1, watchdog_dispatch_delay: Optional[int] = None, watchdog_period_delay: Optional[int] = None, skip_prepare_verify_ssh: bool = False, post_install_script: Optional[str] = None)
Bases: ArtemisGuestData, ProvisionStepData
tmt.steps.provision.bootc module
- class tmt.steps.provision.bootc.BootcData(name: str, how: str, order: int = 50, summary: Optional[str] = None, role: Optional[str] = None, hardware: Optional[tmt.hardware.Hardware] = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: Optional[str] = None, topology_address: Optional[str] = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, port: Optional[int] = None, user: str = 'root', key: list[str] = <factory>, password: Optional[str] = None, ssh_option: list[str] = <factory>, image: str = 'fedora', memory: Optional[ForwardRef('Size')] = None, disk: Optional[ForwardRef('Size')] = None, connection: str = 'session', arch: str = 'x86_64', list_local_images: bool = False, image_url: Optional[str] = None, instance_name: Optional[str] = None, container_file: Optional[str] = None, container_file_workdir: str = '.', container_image: Optional[str] = None, add_tmt_dependencies: bool = True, image_builder: str = 'quay.io/centos-bootc/bootc-image-builder:latest')
Bases: ProvisionTestcloudData
- add_tmt_dependencies: bool = True
- container_file: str | None = None
- container_file_workdir: str = '.'
- container_image: str | None = None
- image_builder: str = 'quay.io/centos-bootc/bootc-image-builder:latest'
- class tmt.steps.provision.bootc.GuestBootc(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger, containerimage: str, rootless: bool)
Bases: GuestTestcloud
Initialize guest data
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- containerimage: str
- remove() None
Remove the guest (disk cleanup)
- class tmt.steps.provision.bootc.ProvisionBootc(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ProvisionPlugin[BootcData]
Provision a local virtual machine using a bootc container image
Minimal config which uses the CentOS Stream 9 bootc image:
provision:
    how: bootc
    container-image: quay.io/centos-bootc/centos-bootc:stream9
Here’s a config example using a Containerfile:
provision:
    how: bootc
    container-file: "./my-custom-image.containerfile"
    container-file-workdir: .
    image-builder: quay.io/centos-bootc/bootc-image-builder:stream9
    disk: 100
Another config example using an image that already includes tmt dependencies:
provision:
    how: bootc
    add-tmt-dependencies: false
    container-image: localhost/my-image-with-deps
This plugin is an extension of the virtual.testcloud plugin. Essentially, it takes a container image as input, builds a bootc disk image from the container image, then uses the virtual.testcloud plugin to create a virtual machine using the bootc disk image.
The bootc disk creation requires running podman as root. The plugin will automatically check if the current podman connection is rootless. If it is, a podman machine will be spun up and used to build the bootc disk.
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
tmt.steps.provision.connect module
- class tmt.steps.provision.connect.ConnectGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: Optional[str] = None, topology_address: Optional[str] = None, role: Optional[str] = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, hardware: Optional[tmt.hardware.Hardware] = None, port: Optional[int] = None, user: str = 'root', key: list[str] = <factory>, password: Optional[str] = None, ssh_option: list[str] = <factory>, guest: Optional[str] = None, soft_reboot: Optional[tmt.utils.ShellScript] = None, hard_reboot: Optional[tmt.utils.ShellScript] = None)
Bases: GuestSshData
- classmethod from_plugin(container: ProvisionConnect) ConnectGuestData
Create guest data from plugin and its current configuration
- guest: str | None = None
- hard_reboot: ShellScript | None = None
- soft_reboot: ShellScript | None = None
- user: str = 'root'
- class tmt.steps.provision.connect.GuestConnect(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)
Bases: GuestSsh
Initialize guest data
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- hard_reboot: ShellScript | None
- reboot(hard: bool = False, command: Command | ShellScript | None = None, timeout: int | None = None, tick: float = 30.0, tick_increase: float = 1.0) bool
Reboot the guest, and wait for the guest to recover.
- Parameters:
hard – if set, force the reboot. This may result in a loss of data. The default of False will attempt a graceful reboot.
command – a command to run on the guest to trigger the reboot. If not set, plugin would try to use ConnectGuestData.soft_reboot or ConnectGuestData.hard_reboot (--soft-reboot and --hard-reboot, respectively), if specified. Unlike command, these would be executed on the runner, not on the guest.
timeout – amount of time in which the guest must become available again.
tick – how many seconds to wait between two consecutive attempts of contacting the guest.
tick_increase – a multiplier applied to tick after every attempt.
- Returns:
True if the reboot succeeded, False otherwise.
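For illustration, a hedged sketch of triggering a reboot over an existing connection; guest stands for an already woken GuestConnect instance and the command is only an example:

    from tmt.utils import ShellScript

    # Run a custom reboot command on the guest and wait up to 15 minutes
    # for it to come back; returns False if the guest does not recover in time.
    ok = guest.reboot(command=ShellScript('systemctl reboot'), timeout=900)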
- soft_reboot: ShellScript | None
- start() None
Start the guest
- class tmt.steps.provision.connect.ProvisionConnect(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ProvisionPlugin[ProvisionConnectData]
Connect to a provisioned guest using SSH.
Private key authentication:
provision:
    how: connect
    guest: host.example.org
    user: root
    key: /home/psss/.ssh/example_rsa
Password authentication:
provision:
    how: connect
    guest: host.example.org
    user: root
    password: secret
User defaults to root, so if you have your private key set up correctly the minimal configuration can look like this:
provision:
    how: connect
    guest: host.example.org
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- class tmt.steps.provision.connect.ProvisionConnectData(name: str, how: str, order: int = 50, summary: Optional[str] = None, role: Optional[str] = None, hardware: Optional[tmt.hardware.Hardware] = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: Optional[str] = None, topology_address: Optional[str] = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, port: Optional[int] = None, user: str = 'root', key: list[str] = <factory>, password: Optional[str] = None, ssh_option: list[str] = <factory>, guest: Optional[str] = None, soft_reboot: Optional[tmt.utils.ShellScript] = None, hard_reboot: Optional[tmt.utils.ShellScript] = None)
Bases: ConnectGuestData, ProvisionStepData
tmt.steps.provision.local module
- class tmt.steps.provision.local.GuestLocal(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)
Bases: Guest
Local Host
Initialize guest data
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- execute(command: Command | ShellScript, cwd: Path | None = None, env: Environment | None = None, friendly_command: str | None = None, test_session: bool = False, tty: bool = False, silent: bool = False, log: LoggingFunction | None = None, interactive: bool = False, on_process_start: Callable[[Command, Popen[bytes], Logger], None] | None = None, **kwargs: Any) CommandOutput
Execute command on localhost
- property is_ready: bool
Local is always ready
- localhost = True
- pull(source: Path | None = None, destination: Path | None = None, options: list[str] | None = None, extend_options: list[str] | None = None) None
Nothing to be done to pull workdir
- push(source: Path | None = None, destination: Path | None = None, options: list[str] | None = None, superuser: bool = False) None
Nothing to be done to push workdir
- reboot(hard: bool = False, command: Command | ShellScript | None = None, timeout: int | None = None) bool
Reboot the guest, return True if successful
- start() None
Start the guest
- stop() None
Stop the guest
- class tmt.steps.provision.local.ProvisionLocal(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ProvisionPlugin[ProvisionLocalData]
Use local host for test execution.
Warning
In general, it is not recommended to run tests on your local machine as there might be security risks. Run only those tests which you know are safe so that you don’t destroy your laptop ;-)
From tmt version 1.38, the --feeling-safe option or the TMT_FEELING_SAFE=1 environment variable is required in order to use the local provision plugin.
Example config:
provision:
    how: local
Note that tmt run is expected to be executed under a regular user. If there are admin rights required (for example in the prepare step) you might be asked for a sudo password.
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- guest() GuestLocal | None
Return the provisioned guest
- class tmt.steps.provision.local.ProvisionLocalData(name: str, how: str, order: int = 50, summary: Optional[str] = None, role: Optional[str] = None, hardware: Optional[tmt.hardware.Hardware] = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: Optional[str] = None, topology_address: Optional[str] = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>)
Bases: GuestData, ProvisionStepData
tmt.steps.provision.mrack module
- tmt.steps.provision.mrack.BEAKER: Any
- class tmt.steps.provision.mrack.BeakerAPI(guest: GuestBeaker)
Bases: object
Initialize the API class with defaults and load the config
- create(data: CreateJobParameters) Any
Create - or request creation of - a resource using mrack up.
- Parameters:
data – describes the provisioning request.
- delete() Any
Delete - or request removal of - a resource
- dsp_name: str = 'Beaker'
- inspect() Any
Inspect a resource (effectively, wait until it is provisioned)
- mrack_requirement: dict[str, Any] = {}
- class tmt.steps.provision.mrack.BeakerGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: Optional[str] = None, topology_address: Optional[str] = None, role: Optional[str] = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, hardware: Optional[tmt.hardware.Hardware] = None, port: Optional[int] = None, user: str = 'root', key: list[str] = <factory>, password: Optional[str] = None, ssh_option: list[str] = <factory>, whiteboard: Optional[str] = None, arch: str = 'x86_64', image: Optional[str] = 'fedora', job_id: Optional[str] = None, provision_timeout: int = 3600, provision_tick: int = 60, api_session_refresh_tick: int = 3600, kickstart: dict[str, str] = <factory>, beaker_job_owner: Optional[str] = None)
Bases: GuestSshData
- api_session_refresh_tick: int = 3600
- arch: str = 'x86_64'
- beaker_job_owner: str | None = None
- image: str | None = 'fedora'
- job_id: str | None = None
- kickstart: dict[str, str]
- provision_tick: int = 60
- provision_timeout: int = 3600
- user: str = 'root'
- whiteboard: str | None = None
- tmt.steps.provision.mrack.BeakerProvider: Any
- tmt.steps.provision.mrack.BeakerTransformer: Any
- class tmt.steps.provision.mrack.CreateJobParameters(tmt_name: str, name: str, os: str, arch: str, hardware: Hardware | None, kickstart: dict[str, str], whiteboard: str | None, beaker_job_owner: str | None, group: str = 'linux')
Bases: object
Collect all parameters for a future Beaker job
- arch: str
- beaker_job_owner: str | None
- group: str = 'linux'
- kickstart: dict[str, str]
- name: str
- os: str
- tmt_name: str
- to_mrack() dict[str, Any]
- whiteboard: str | None
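A brief sketch of assembling such a job request; the field values below are illustrative only, and hardware=None skips any host filtering:

    from tmt.steps.provision.mrack import CreateJobParameters

    params = CreateJobParameters(
        tmt_name='default-0',              # illustrative tmt guest name
        name='example-beaker-job',
        os='Fedora-39',                    # illustrative distro/compose name
        arch='x86_64',
        hardware=None,
        kickstart={},
        whiteboard='tmt provisioning example',
        beaker_job_owner=None,
    )

    # The resulting dictionary tree is what gets handed over to mrack.
    job_request = params.to_mrack()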
- tmt.steps.provision.mrack.DEFAULT_API_SESSION_REFRESH = 3600
How often Beaker session should be refreshed to pick up up-to-date Kerberos ticket.
- class tmt.steps.provision.mrack.GuestBeaker(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)
Bases: GuestSsh
Beaker guest instance
Initialize guest data
- api_session_refresh_tick: int
- arch: str
- beaker_job_owner: str | None = None
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- image: str = 'fedora-latest'
- property is_ready: bool
Check if provisioning of machine is done
- job_id: str | None
- kickstart: dict[str, str]
- provision_tick: int
- provision_timeout: int
- reboot(hard: bool = False, command: Command | ShellScript | None = None, timeout: int | None = None, tick: float = 30.0, tick_increase: float = 1.0) bool
Reboot the guest, and wait for the guest to recover.
- Parameters:
hard – if set, force the reboot. This may result in a loss of data. The default of False will attempt a graceful reboot.
command – a command to run on the guest to trigger the reboot. If not set, plugin would try to use bkr system-power for hard reboot. Unlike command, this would be executed on the runner, not on the guest.
timeout – amount of time in which the guest must become available again.
tick – how many seconds to wait between two consecutive attempts of contacting the guest.
tick_increase – a multiplier applied to tick after every attempt.
- Returns:
True if the reboot succeeded, False otherwise.
- remove() None
Remove the guest
- start() None
Start the guest
Get a new guest instance running. This should include preparing any configuration necessary to get it started. Called after load() is completed so all guest data should be available.
- stop() None
Stop the guest
- whiteboard: str | None
- class tmt.steps.provision.mrack.GuestInspectType
Bases: TypedDict
- address: str | None
- status: str
- system: str
- class tmt.steps.provision.mrack.MrackBaseHWElement(name: str)
Bases: object
Base for Mrack hardware requirement elements
- name: str
- to_mrack() dict[str, Any]
Convert the element to Mrack-compatible dictionary tree
- class tmt.steps.provision.mrack.MrackHWAndGroup(name: str = 'and', children: list[~tmt.steps.provision.mrack.MrackBaseHWElement] = <factory>)
Bases: MrackHWGroup
Represents <and/> element
- name: str = 'and'
- class tmt.steps.provision.mrack.MrackHWBinOp(name: str, operator: str, value: str)
Bases: MrackHWElement
An element describing a binary operation, a "check"
- class tmt.steps.provision.mrack.MrackHWElement(name: str, attributes: dict[str, str] = <factory>)
Bases: MrackBaseHWElement
An element with name and attributes.
This type of element is not allowed to have any child elements.
- attributes: dict[str, str]
- to_mrack() dict[str, Any]
Convert the element to Mrack-compatible dictionary tree
- class tmt.steps.provision.mrack.MrackHWGroup(name: str, children: list[~tmt.steps.provision.mrack.MrackBaseHWElement] = <factory>)
Bases: MrackBaseHWElement
An element with child elements.
This type of element is not allowed to have any attributes.
- children: list[MrackBaseHWElement]
- to_mrack() dict[str, Any]
Convert the element to Mrack-compatible dictionary tree
- class tmt.steps.provision.mrack.MrackHWKeyValue(name: str, operator: str, value: str)
Bases: MrackHWElement
A key-value element
- class tmt.steps.provision.mrack.MrackHWNotGroup(name: str = 'not', children: list[~tmt.steps.provision.mrack.MrackBaseHWElement] = <factory>)
Bases: MrackHWGroup
Represents <not/> element
- name: str = 'not'
- class tmt.steps.provision.mrack.MrackHWOrGroup(name: str = 'or', children: list[~tmt.steps.provision.mrack.MrackBaseHWElement] = <factory>)
Bases: MrackHWGroup
Represents <or/> element
- name: str = 'or'
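A minimal sketch of how these element classes compose into a Beaker host filter; the attribute names and operators below are illustrative, not an authoritative mapping of tmt hardware requirements:

    from tmt.steps.provision.mrack import MrackHWAndGroup, MrackHWBinOp, MrackHWNotGroup

    requirement = MrackHWAndGroup(
        children=[
            # A simple binary check on an attribute.
            MrackHWBinOp('arch', '==', 'x86_64'),
            # Negation is expressed by wrapping an element in a <not/> group.
            MrackHWNotGroup(children=[MrackHWBinOp('hostname', 'like', '%.example.com')]),
        ],
    )

    # Render the element tree into the dictionary structure mrack consumes.
    print(requirement.to_mrack())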
- tmt.steps.provision.mrack.NotAuthenticatedError: Any
- class tmt.steps.provision.mrack.ProvisionBeaker(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ProvisionPlugin[ProvisionBeakerData]
Provision guest on Beaker system using mrack.
Minimal configuration could look like this:
provision:
    how: beaker
    image: fedora
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- guest() GuestBeaker | None
Return the provisioned guest
- wake(data: BeakerGuestData | None = None) None
Wake up the plugin, process data, apply options
- class tmt.steps.provision.mrack.ProvisionBeakerData(name: str, how: str, order: int = 50, summary: Optional[str] = None, role: Optional[str] = None, hardware: Optional[tmt.hardware.Hardware] = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: Optional[str] = None, topology_address: Optional[str] = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, port: Optional[int] = None, user: str = 'root', key: list[str] = <factory>, password: Optional[str] = None, ssh_option: list[str] = <factory>, whiteboard: Optional[str] = None, arch: str = 'x86_64', image: Optional[str] = 'fedora', job_id: Optional[str] = None, provision_timeout: int = 3600, provision_tick: int = 60, api_session_refresh_tick: int = 3600, kickstart: dict[str, str] = <factory>, beaker_job_owner: Optional[str] = None)
Bases: BeakerGuestData, ProvisionStepData
- tmt.steps.provision.mrack.ProvisioningError: Any
- tmt.steps.provision.mrack.TmtBeakerTransformer: Any
- tmt.steps.provision.mrack.async_run(func: Any) Any
Decorate click actions to run as async
- tmt.steps.provision.mrack.constraint_to_beaker_filter(constraint: BaseConstraint, logger: Logger) MrackBaseHWElement
Convert a hardware constraint into a Mrack-compatible filter
- tmt.steps.provision.mrack.import_and_load_mrack_deps(workdir: Any, name: str, logger: Logger) None
Import mrack module only when needed
- tmt.steps.provision.mrack.mrack: Any
- tmt.steps.provision.mrack.mrack_constructs_ks_pre() bool
Kickstart construction has been improved in 1.21.0
- tmt.steps.provision.mrack.operator_to_beaker_op(operator: Operator, value: str) tuple[str, str, bool]
Convert constraint operator to Beaker “op”.
- Parameters:
operator – operator to convert.
value – value the operator works with. It shall be a string representation of the constraint value, as converted for the Beaker job XML.
- Returns:
a tuple of three items: the Beaker operator, fit for the op attribute of XML filters, a value to go with it instead of the input one, and a boolean signaling whether the filter, constructed by the caller, should be negated.
- tmt.steps.provision.mrack.providers: Any
tmt.steps.provision.podman module
- class tmt.steps.provision.podman.GuestContainer(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)
Bases: Guest
Container Instance
Initialize guest data
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- container: str | None
- execute(command: Command | ShellScript, cwd: Path | None = None, env: Environment | None = None, friendly_command: str | None = None, test_session: bool = False, tty: bool = False, silent: bool = False, log: LoggingFunction | None = None, interactive: bool = False, on_process_start: Callable[[Command, Popen[bytes], Logger], None] | None = None, **kwargs: Any) CommandOutput
Execute given commands in podman via shell
- force_pull: bool
- image: str | None
- property is_ready: bool
Detect whether the guest is ready or not
- podman(command: Command, silent: bool = True, **kwargs: Any) CommandOutput
Run given command via podman
- pull(source: Path | None = None, destination: Path | None = None, options: list[str] | None = None, extend_options: list[str] | None = None) None
Nothing to be done to pull workdir
- pull_attempts: int
- pull_image() None
Pull image if not available or pull forced
- pull_interval: int
- push(source: Path | None = None, destination: Path | None = None, options: list[str] | None = None, superuser: bool = False) None
Make sure that the workdir has a correct selinux context
- reboot(hard: bool = False, command: Command | ShellScript | None = None, timeout: int | None = None) bool
Restart the container, return True if successful
- remove() None
Remove the container
- start() None
Start provisioned guest
- stop() None
Stop provisioned guest
- stop_time: int
- user: str
- wake() None
Wake up the guest
- class tmt.steps.provision.podman.PodmanGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: Optional[str] = None, topology_address: Optional[str] = None, role: Optional[str] = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, hardware: Optional[tmt.hardware.Hardware] = None, image: str = 'fedora', user: str = 'root', force_pull: bool = False, container: Optional[str] = None, network: Optional[str] = None, pull_attempts: int = 5, pull_interval: int = 5, stop_time: int = 1)
Bases: GuestData
- container: str | None = None
- force_pull: bool = False
- image: str = 'fedora'
- network: str | None = None
- pull_attempts: int = 5
- pull_interval: int = 5
- stop_time: int = 1
- user: str = 'root'
- class tmt.steps.provision.podman.ProvisionPodman(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ProvisionPlugin[ProvisionPodmanData]
Create a new container using podman.
Example config:
provision:
    how: container
    image: fedora:latest
In order to always pull a fresh container image use pull: true.
In order to run the container with a different user than the default root, use user: USER.
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- default(option: str, default: Any = None) Any
Return default data for given option
- guest() GuestContainer | None
Return the provisioned guest
- class tmt.steps.provision.podman.ProvisionPodmanData(name: str, how: str, order: int = 50, summary: Optional[str] = None, role: Optional[str] = None, hardware: Optional[tmt.hardware.Hardware] = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: Optional[str] = None, topology_address: Optional[str] = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, image: str = 'fedora', user: str = 'root', force_pull: bool = False, container: Optional[str] = None, network: Optional[str] = None, pull_attempts: int = 5, pull_interval: int = 5, stop_time: int = 1)
Bases: PodmanGuestData, ProvisionStepData
tmt.steps.provision.testcloud module
- tmt.steps.provision.testcloud.AArch64ArchitectureConfiguration: Any
- tmt.steps.provision.testcloud.BOOT_TIMEOUT: int = 120
How many seconds to wait for a VM to start. This is the effective value, combining the default and optional envvar,
TMT_BOOT_TIMEOUT.
- tmt.steps.provision.testcloud.CONNECT_TIMEOUT: int = 120
How many seconds to wait for a connection to succeed after guest boot. This is the effective value, combining the default and optional envvar,
TMT_CONNECT_TIMEOUT.
- tmt.steps.provision.testcloud.DEFAULT_BOOT_TIMEOUT: int = 120
How many seconds to wait for a VM to start. This is the default value tmt would use unless told otherwise.
- tmt.steps.provision.testcloud.DEFAULT_CONNECT_TIMEOUT = 120
How many seconds to wait for a connection to succeed after guest boot. This is the default value tmt would use unless told otherwise.
- tmt.steps.provision.testcloud.DomainConfiguration: Any
- class tmt.steps.provision.testcloud.GuestTestcloud(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)
Bases: GuestSsh
Testcloud Instance
The following keys are expected in the ‘data’ dictionary:
image ...... qcow2 image name or url
user ....... user name to log in
memory ..... memory size for vm
disk ....... disk size for vm
connection . either session (default) or system, to be passed to qemu
arch ....... architecture for the VM, host arch is the default
Initialize guest data
- arch: str
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- connection: str
- disk: Size | None
- image: str
- image_url: str | None
- instance_name: str | None
- property is_coreos: bool
- property is_kvm: bool
- property is_legacy_os: bool
- property is_ready: bool
Detect whether the guest is ready or not
- memory: Size | None
- prepare_config() None
Prepare common configuration
- prepare_ssh_key(key_type: str | None = None) str
Prepare ssh key for authentication
- reboot(hard: bool = False, command: Command | ShellScript | None = None, timeout: int | None = None, tick: float = 30.0, tick_increase: float = 1.0) bool
Reboot the guest, return True if successful
- remove() None
Remove the guest (disk cleanup)
- start() None
Start provisioned guest
- stop() None
Stop provisioned guest
- wake() None
Wake up the guest
- tmt.steps.provision.testcloud.NON_KVM_TIMEOUT_COEF = 10
How many times should the timeouts be multiplied in kvm-less cases. These include emulating a different architecture than the host, some nested virtualization cases, and hosts with degraded virt caps.
- tmt.steps.provision.testcloud.Ppc64leArchitectureConfiguration: Any
- class tmt.steps.provision.testcloud.ProvisionTestcloud(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ProvisionPlugin[ProvisionTestcloudData]
Local virtual machine using the testcloud library.
Minimal config which uses the latest Fedora image:
provision:
    how: virtual
Here’s a full config example:
provision:
    how: virtual
    image: fedora
    user: root
    memory: 2048
As the image use fedora for the latest released Fedora compose, fedora-rawhide for the latest Rawhide compose, short aliases such as fedora-32, f-32 or f32 for a specific release, or a full url to the qcow2 image, for example from https://kojipkgs.fedoraproject.org/compose/.
Short names are also provided for centos, centos-stream, alma, rocky, oracle, debian and ubuntu (e.g. centos-8 or c8).
Note
The non-rpm distros are not fully supported yet in tmt as the package installation is performed solely using dnf/yum and rpm. But you should be able to log in to the provisioned guest and start experimenting. Full support is coming in the future :)
Supported Fedora CoreOS images are:
- fedora-coreos
- fedora-coreos-stable
- fedora-coreos-testing
- fedora-coreos-next
Use the full path for images stored on local disk, for example:
/var/tmp/images/Fedora-Cloud-Base-31-1.9.x86_64.qcow2
In addition to the qcow2 format, Vagrant boxes can be used as well, testcloud will take care of unpacking the image for you.
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- class tmt.steps.provision.testcloud.ProvisionTestcloudData(name: str, how: str, order: int = 50, summary: Optional[str] = None, role: Optional[str] = None, hardware: Optional[tmt.hardware.Hardware] = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: Optional[str] = None, topology_address: Optional[str] = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, port: Optional[int] = None, user: str = 'root', key: list[str] = <factory>, password: Optional[str] = None, ssh_option: list[str] = <factory>, image: str = 'fedora', memory: Optional[ForwardRef('Size')] = None, disk: Optional[ForwardRef('Size')] = None, connection: str = 'session', arch: str = 'x86_64', list_local_images: bool = False, image_url: Optional[str] = None, instance_name: Optional[str] = None)
Bases: TestcloudGuestData, ProvisionStepData
- tmt.steps.provision.testcloud.QCow2StorageDevice: Any
- tmt.steps.provision.testcloud.RawStorageDevice: Any
- tmt.steps.provision.testcloud.S390xArchitectureConfiguration: Any
- tmt.steps.provision.testcloud.SystemNetworkConfiguration: Any
- tmt.steps.provision.testcloud.TPMConfiguration: Any
- tmt.steps.provision.testcloud.TPM_CONFIG_ALLOWS_VERSIONS: bool = False
If set, testcloud TPM configuration accepts TPM version as a parameter.
- tmt.steps.provision.testcloud.TPM_VERSION_ALLOWED_OPERATORS: tuple[Operator, ...] = (Operator.EQ, Operator.GTE, Operator.LTE)
List of operators supported for the tpm.version HW requirement.
- tmt.steps.provision.testcloud.TPM_VERSION_SUPPORTED_VERSIONS = {False: ['2.0', '2'], True: ['2.0', '2', '1.2']}
TPM versions supported by the plugin. The key is
TPM_CONFIG_ALLOWS_VERSIONS.
- class tmt.steps.provision.testcloud.TestcloudGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: Optional[str] = None, topology_address: Optional[str] = None, role: Optional[str] = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, hardware: Optional[tmt.hardware.Hardware] = None, port: Optional[int] = None, user: str = 'root', key: list[str] = <factory>, password: Optional[str] = None, ssh_option: list[str] = <factory>, image: str = 'fedora', memory: Optional[ForwardRef('Size')] = None, disk: Optional[ForwardRef('Size')] = None, connection: str = 'session', arch: str = 'x86_64', list_local_images: bool = False, image_url: Optional[str] = None, instance_name: Optional[str] = None)
Bases: GuestSshData
- arch: str = 'x86_64'
- connection: str = 'session'
- disk: Size | None = None
- image: str = 'fedora'
- image_url: str | None = None
- instance_name: str | None = None
- list_local_images: bool = False
- memory: Size | None = None
- show(*, keys: list[str] | None = None, verbose: int = 0, logger: Logger) None
Display guest data in a nice way.
- Parameters:
keys – if set, only these keys would be shown.
verbose – desired verbosity. Some fields may be omitted in low verbosity modes.
logger – logger to use for logging.
- user: str = 'root'
- tmt.steps.provision.testcloud.UserNetworkConfiguration: Any
- tmt.steps.provision.testcloud.Workarounds: Any
- tmt.steps.provision.testcloud.X86_64ArchitectureConfiguration: Any
- tmt.steps.provision.testcloud.import_testcloud() None
Import testcloud module only when needed
Module contents
- tmt.steps.provision.BASE_SSH_OPTIONS: list[str | Path] = ['-oForwardX11=no', '-oStrictHostKeyChecking=no', '-oUserKnownHostsFile=/dev/null', '-oConnectionAttempts=5', '-oConnectTimeout=60', '-oServerAliveInterval=5', '-oServerAliveCountMax=60']
Base SSH options. This is the base set of SSH options tmt would use for all SSH connections. It is a combination of the default SSH options and those provided by environment variables.
- class tmt.steps.provision.CheckRsyncOutcome(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)
Bases: Enum
- ALREADY_INSTALLED = 'already-installed'
- INSTALLED = 'installed'
- tmt.steps.provision.DEFAULT_REBOOT_COMMAND = <tmt.utils.Command object>
A default command to trigger a guest reboot when executed remotely.
- tmt.steps.provision.DEFAULT_REBOOT_TIMEOUT: int = 600
How many seconds to wait for a connection to succeed after guest reboot. This is the default value tmt would use unless told otherwise.
- tmt.steps.provision.DEFAULT_SSH_OPTIONS: list[str | Path] = ['-oForwardX11=no', '-oStrictHostKeyChecking=no', '-oUserKnownHostsFile=/dev/null', '-oConnectionAttempts=5', '-oConnectTimeout=60', '-oServerAliveInterval=5', '-oServerAliveCountMax=60']
Default SSH options. This is the default set of SSH options tmt would use for all SSH connections.
- class tmt.steps.provision.Guest(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)
Bases: Common
Guest provisioned for test execution
A base class for guest-like classes. Provides some of the basic methods and functionality, but note some of the methods are left intentionally empty. These do not have valid implementation on this level, and it’s up to Guest subclasses to provide one working in their respective infrastructure.
The following keys are expected in the ‘data’ container:
role ....... guest role in the multihost scenario
guest ...... name, hostname or ip address
become ..... boolean, whether to run shell scripts in tests, prepare, and finish with sudo
These are by default imported into instance attributes.
Initialize guest data
- ansible(playbook: Path, playbook_root: Path | None = None, extra_args: str | None = None, friendly_command: str | None = None, log: LoggingFunction | None = None, silent: bool = False) None
Run an Ansible playbook on the guest.
A wrapper for _run_ansible() which is responsible for running the playbook while this method makes sure our logging is consistent.
- Parameters:
playbook – path to the playbook to run.
playbook_root – if set, the playbook path must be located under the given root path.
extra_args – additional arguments to be passed to ansible-playbook via --extra-args.
friendly_command – if set, it would be logged instead of the command itself, to improve visibility of the command in logging output.
log – a logging function to use for logging of command output. By default, logger.debug is used.
silent – if set, logging of steps taken by this function would be reduced.
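A hedged usage sketch, assuming guest is a provisioned Guest instance; the playbook path and extra arguments are purely illustrative:

    from tmt.utils import Path  # tmt's Path class, a pathlib.Path subclass

    guest.ansible(
        Path('playbooks/setup.yml'),
        extra_args='-e package=vim',                   # forwarded to ansible-playbook
        friendly_command='ansible-playbook setup.yml',
    )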
- become: bool
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- classmethod essential_requires() list[DependencySimple | DependencyFmfId | DependencyFile]
Collect all essential requirements of the guest.
Essential requirements of a guest are necessary for the guest to be usable for testing.
- Returns:
a list of requirements.
- execute(command: tmt.utils.ShellScript, cwd: Path | None = None, env: tmt.utils.Environment | None = None, friendly_command: str | None = None, test_session: bool = False, tty: bool = False, silent: bool = False, log: tmt.log.LoggingFunction | None = None, interactive: bool = False, on_process_start: OnProcessStartCallback | None = None, **kwargs: Any) tmt.utils.CommandOutput
- execute(command: tmt.utils.Command, cwd: Path | None = None, env: tmt.utils.Environment | None = None, friendly_command: str | None = None, test_session: bool = False, tty: bool = False, silent: bool = False, log: tmt.log.LoggingFunction | None = None, interactive: bool = False, on_process_start: OnProcessStartCallback | None = None, **kwargs: Any) tmt.utils.CommandOutput
Execute a command on the guest.
- Parameters:
command – either a command or a shell script to execute.
cwd – if set, execute command in this directory on the guest.
env – if set, set these environment variables before running the command.
friendly_command – nice, human-friendly representation of the command.
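A short sketch of the two flavours execute() accepts, assuming guest is a provisioned Guest instance; the commands themselves are just examples:

    from tmt.utils import Command, ShellScript

    # An argv-like command, no shell involved.
    output = guest.execute(Command('uname', '-r'))
    print(output.stdout)

    # A shell script, handy when shell features such as pipes are needed.
    output = guest.execute(ShellScript('rpm -qa | grep ^kernel'), silent=True)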
- property facts: GuestFacts
- property is_ready: bool
Detect whether the guest is ready or not
- load(data: GuestData) None
Load guest data into object attributes for easy access
Called during guest object initialization. Takes care of storing all supported keys (see class attribute _keys for the list) from provided data to the guest object attributes. Child classes can extend it to make additional guest attributes easily available.
Data dictionary can contain guest information from both command line options / L2 metadata / user configuration and wake up data stored by the save() method below.
- localhost = False
- property multihost_name: str
Return guest’s multihost name, i.e. name and its role
- classmethod options(how: str | None = None) list[Callable[[Any], Any]]
Prepare command line options related to guests
- property package_manager: PackageManager
- primary_address: str | None = None
Primary hostname or IP address for tmt/guest communication.
- pull(source: Path | None = None, destination: Path | None = None, options: list[str] | None = None, extend_options: list[str] | None = None) None
Pull files from the guest
- push(source: Path | None = None, destination: Path | None = None, options: list[str] | None = None, superuser: bool = False) None
Push files to the guest
- reboot(hard: bool = False, command: Command | ShellScript | None = None, timeout: int | None = None) bool
Reboot the guest, return True if successful
Parameter ‘hard’ set to True means that the guest should be rebooted in a way which is not clean, in the sense that data can be lost. When set to False the reboot should be done gracefully.
Use the ‘command’ parameter to specify a custom reboot command instead of the default ‘reboot’.
Parameter ‘timeout’ can be used to specify time (in seconds) to wait for the guest to come back up after rebooting.
- reconnect(timeout: int | None = None, tick: float = 5, tick_increase: float = 1.0) bool
Ensure the connection to the guest is working
The default timeout is 5 minutes. Custom number of seconds can be provided in the timeout parameter. This may be useful when long operations (such as system upgrade) are performed.
- remove() None
Remove the guest
Completely remove all guest instance data so that it does not consume any disk resources.
- role: str | None
- save() GuestData
Save guest data for future wake up
Export all essential guest data into a dictionary which will be stored in the guests.yaml file for possible future wake up of the guest. Everything needed to attach to a running instance should be added into the data dictionary by child classes.
- property scripts_path: Path
Absolute path to tmt scripts directory
- setup() None
Setup the guest
Setup the guest after it has been started. It is called after
Guest.start().
- show(show_multihost_name: bool = True) None
Show guest details such as distro and kernel
- start() None
Start the guest
Get a new guest instance running. This should include preparing any configuration necessary to get it started. Called after load() is completed so all guest data should be available.
- stop() None
Stop the guest
Shut down a running guest instance so that it does not consume any memory or cpu resources. If needed, perform any actions necessary to store the instance status to disk.
- topology_address: str | None = None
Guest topology hostname or IP address for guest/guest communication.
- wake() None
Wake up the guest
Perform any actions necessary after step wake up to be able to attach to a running guest instance and execute commands. Called after load() is completed so all guest data should be prepared.
- class tmt.steps.provision.GuestCapability(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)
Bases: Enum
Various Linux capabilities
- SYSLOG_ACTION_READ_ALL = 'syslog-action-read-all'
Read all messages remaining in the ring buffer.
- SYSLOG_ACTION_READ_CLEAR = 'syslog-action-read-clear'
Read and clear all messages remaining in the ring buffer.
- class tmt.steps.provision.GuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: ~tmt.steps.provision.GuestFacts = <factory>, hardware: ~tmt.hardware.Hardware | None = None)
Bases: SerializableContainer
Keys necessary to describe, create, save and restore a guest.
Very basic set of keys shared across all known guest classes.
- become: bool = False
- facts: GuestFacts
- classmethod from_plugin(container: ProvisionPlugin[ProvisionStepDataT]) GuestDataT
Create guest data from plugin and its current configuration
- classmethod options() Iterator[tuple[str, str]]
Iterate over option names.
Based on keys(), but skips fields that cannot be set by options.
- Yields:
two-item tuples, a key and corresponding option name.
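For example, a small sketch listing the option names a container exposes; GuestSshData is used here merely as an illustration:

    from tmt.steps.provision import GuestSshData

    for key, option in GuestSshData.options():
        print(f'{key} -> {option}')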
- primary_address: str | None = None
Primary hostname or IP address for tmt/guest communication.
- role: str | None = None
- show(*, keys: list[str] | None = None, verbose: int = 0, logger: Logger) None
Display guest data in a nice way.
- Parameters:
keys – if set, only these keys would be shown.
verbose – desired verbosity. Some fields may be omitted in low verbosity modes.
logger – logger to use for logging.
- topology_address: str | None = None
Guest topology hostname or IP address for guest/guest communication.
- class tmt.steps.provision.GuestFacts(in_sync: bool = False, arch: str | None = None, distro: str | None = None, kernel_release: str | None = None, package_manager: tmt.package_managers.GuestPackageManager | None = None, has_selinux: bool | None = None, is_superuser: bool | None = None, is_ostree: bool | None = None, capabilities: dict[~tmt.steps.provision.GuestCapability, bool] = <factory>, os_release_content: dict[str, str] = <factory>, lsb_release_content: dict[str, str] = <factory>)
Bases: SerializableContainer
Contains interesting facts about the guest.
Inspired by Ansible or Puppet facts, this container stores the interesting guest facts tmt discovers while managing the guest, plus the code performing the discovery of these facts.
- arch: str | None = None
- capabilities: dict[GuestCapability, bool]
Various Linux capabilities and whether they are permitted to commands executed on this guest.
- distro: str | None = None
- format() Iterator[tuple[str, str, str]]
Format facts for pretty printing.
- Yields:
three-item tuples: the field name, its pretty label, and formatted representation of its value.
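A tiny sketch of pretty-printing the facts, assuming guest is a provisioned Guest instance whose facts have already been synced:

    for _, label, value in guest.facts.format():
        print(f'{label}: {value}')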
- has_capability(cap: GuestCapability) bool
- has_selinux: bool | None = None
- is_ostree: bool | None = None
- is_superuser: bool | None = None
- kernel_release: str | None = None
- lsb_release_content: dict[str, str]
- os_release_content: dict[str, str]
- package_manager: tmt.package_managers.GuestPackageManager | None = None
- class tmt.steps.provision.GuestSsh(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)
Bases: Guest
Guest provisioned for test execution, capable of accepting SSH connections
The following keys are expected in the ‘data’ dictionary:
role ....... guest role in the multihost scenario (inherited)
guest ...... hostname or ip address (inherited)
become ..... run shell scripts in tests, prepare, and finish with sudo (inherited)
port ....... port to connect to
user ....... user name to log in
key ........ path to the private key (str or list)
password ... password
These are by default imported into instance attributes.
Initialize guest data
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- execute(command: Command | ShellScript, cwd: Path | None = None, env: Environment | None = None, friendly_command: str | None = None, test_session: bool = False, tty: bool = False, silent: bool = False, log: LoggingFunction | None = None, interactive: bool = False, on_process_start: Callable[[Command, Popen[bytes], Logger], None] | None = None, **kwargs: Any) CommandOutput
Execute a command on the guest.
- Parameters:
command – either a command or a shell script to execute.
cwd – execute command in this directory on the guest.
env – if set, set these environment variables before running the command.
friendly_command – nice, human-friendly representation of the command.
- property is_ready: bool
Detect whether the guest is ready or not
- property is_ssh_multiplexing_enabled: bool
Whether SSH multiplexing should be used
- key: list[Path]
- password: str | None
- perform_reboot(command: Callable[[], CommandOutput], timeout: int | None = None, tick: float = 30.0, tick_increase: float = 1.0, hard: bool = False) bool
Perform the actual reboot and wait for the guest to recover.
- Parameters:
command – a callable running the actual command triggering the reboot.
timeout – amount of time in which the guest must become available again.
tick – how many seconds to wait between two consecutive attempts of contacting the guest.
tick_increase – a multiplier applied to tick after every attempt.
- Returns:
True if the reboot succeeded, False otherwise.
- port: int | None
- pull(source: Path | None = None, destination: Path | None = None, options: list[str] | None = None, extend_options: list[str] | None = None) None
Pull files from the guest
By default the whole plan workdir is synced from the same location on the guest. Use the ‘source’ and ‘destination’ to sync a custom location, the ‘options’ parameter to modify the default options DEFAULT_RSYNC_PULL_OPTIONS, and ‘extend_options’ to extend them (e.g. by exclude).
- push(source: Path | None = None, destination: Path | None = None, options: list[str] | None = None, superuser: bool = False) None
Push files to the guest
By default the whole plan workdir is synced to the same location on the guest. Use the ‘source’ and ‘destination’ to sync a custom location and the ‘options’ parameter to modify the default options which are ‘-Rrz --links --safe-links --delete’.
Set ‘superuser’ if the rsync command has to run as root or with passwordless sudo on the Guest (e.g. pushing to a r/o destination)
- reboot(hard: bool = False, command: Command | ShellScript | None = None, timeout: int | None = None, tick: float = 30.0, tick_increase: float = 1.0) bool
Reboot the guest, and wait for the guest to recover.
- Parameters:
hard – if set, force the reboot. This may result in a loss of data. The default of False will attempt a graceful reboot.
command – a command to run on the guest to trigger the reboot.
timeout – amount of time in which the guest must become available again.
tick – how many seconds to wait between two consecutive attempts of contacting the guest.
tick_increase – a multiplier applied to tick after every attempt.
- Returns:
True if the reboot succeeded, False otherwise.
- remove() None
Remove the guest
Completely remove all guest instance data so that it does not consume any disk resources.
- setup() None
Setup the guest
Setup the guest after it has been started. It is called after
Guest.start().
- ssh_option: list[str]
- stop() None
Stop the guest
Shut down a running guest instance so that it does not consume any memory or cpu resources. If needed, perform any actions necessary to store the instance status to disk.
- user: str | None
- class tmt.steps.provision.GuestSshData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: ~tmt.steps.provision.GuestFacts = <factory>, hardware: ~tmt.hardware.Hardware | None = None, port: int | None = None, user: str | None = None, key: list[str] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>)
Bases: GuestData
Keys necessary to describe, create, save and restore a guest with SSH capability.
Derived from GuestData, this class adds keys relevant for guests that can be reached over SSH.
- key: list[str]
- password: str | None = None
- port: int | None = None
- ssh_option: list[str]
- user: str | None = None
- class tmt.steps.provision.Provision(*, plan: Plan, data: _RawStepData | list[_RawStepData], logger: Logger)
Bases: Step
Provision an environment for testing or use localhost.
Initialize provision step data
- DEFAULT_HOW: str = 'virtual'
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- cli_invocations: list['tmt.cli.CliInvocation'] = []
- go(force: bool = False) None
Provision all guests
- property is_multihost: bool
- load() None
Load guest data from the workdir
- save() None
Save guest data to the workdir
- summary() None
Give a concise summary of the provisioning
- wake() None
Wake up the step (process workdir and command line)
- class tmt.steps.provision.ProvisionPlugin(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: GuestlessPlugin[ProvisionStepDataT, None]
Common parent of provision plugins
Store plugin name, data and parent step
- classmethod base_command(usage: str, method_class: type[Command] | None = None) Command
Create base click command (common for all provision plugins)
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- essential_requires() list[DependencySimple | DependencyFmfId | DependencyFile]
Collect all essential requirements of the guest implementation.
Essential requirements of a guest are necessary for the guest to be usable for testing.
By default, the plugin’s guest class, ProvisionPlugin._guest_class, is asked to provide the list of required packages via the Guest.requires() method.
- Returns:
a list of requirements.
- go(*, logger: Logger | None = None) None
Perform actions shared among plugins when beginning their tasks
- guest() Guest | None
Return provisioned guest
Each ProvisionPlugin has to implement this method. Should return a provisioned Guest() instance.
- how: str = 'virtual'
- opt(option: str, default: Any | None = None) Any
Get an option from the command line options
- classmethod options(how: str | None = None) list[Callable[[Any], Any]]
Return list of options.
- show(keys: list[str] | None = None) None
Show plugin details for given or all available keys
- class tmt.steps.provision.ProvisionQueue(name: str, logger: Logger)
Bases: Queue[ProvisionTask]
Queue class for running provisioning tasks
- enqueue(*, phases: list[ProvisionPlugin[ProvisionStepData]], logger: Logger) None
- class tmt.steps.provision.ProvisionStepData(name: str, how: str, order: int = 50, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None)
Bases: StepData
- role: str | None = None
- class tmt.steps.provision.ProvisionTask(logger: Logger, result: TaskResultT | None, guest: Guest | None, exc: Exception | None, requested_exit: SystemExit | None, phases: list[ProvisionPlugin[ProvisionStepData]], phase: ProvisionPlugin[ProvisionStepData] | None = None)
Bases: GuestlessTask[None]
A task to run provisioning of multiple guests
- go() Iterator[ProvisionTask]
Perform the task.
Called by the Queue machinery to accomplish the task. It expects the child class would implement run(), with go taking care of task/queue interaction.
- Yields:
since the task is not expected to run on multiple guests, only a single instance of the class is yielded to describe the task and its outcome.
- property name: str
A name of this task.
Left for child classes to implement, because the name depends on the actual task.
- phase: ProvisionPlugin[ProvisionStepData] | None = None
When a ProvisionTask instance is received from the queue, phase points to the phase that has been provisioned by the task.
- phases: list[ProvisionPlugin[ProvisionStepData]]
Phases describing guests to provision. In the provision step, each phase describes one guest.
- tmt.steps.provision.REBOOT_TIMEOUT: int = 600
How many seconds to wait for a connection to succeed after guest reboot. This is the effective value, combining the default and optional envvar,
TMT_REBOOT_TIMEOUT.
- tmt.steps.provision.SSH_MASTER_SOCKET_LENGTH_LIMIT = 84
SSH master socket path is limited to this many characters.
UNIX socket path is limited to either 108 or 104 characters, depending on the platform. See man 7 unix and/or kernel sources, for example.
SSH client processes may create paths with added “connection hash” when connecting to the master, that is a couple of characters we need space for.
- tmt.steps.provision.STAT_BTIME_PATTERN = re.compile('btime\\s+(\\d+)')
A pattern to extract btime from the /proc/stat file.
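As an illustration of how such a pattern can be used, a small self-contained sketch reading the boot time from /proc/stat on a Linux host:

    import re

    STAT_BTIME_PATTERN = re.compile(r'btime\s+(\d+)')

    with open('/proc/stat') as stat:
        match = STAT_BTIME_PATTERN.search(stat.read())

    if match:
        # btime is the boot time in seconds since the epoch.
        print(f'Boot time: {int(match.group(1))}')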
- tmt.steps.provision.configure_ssh_options() list[str | Path]
Extract custom SSH options from environment variables
- tmt.steps.provision.essential_ansible_requires() list[DependencySimple | DependencyFmfId | DependencyFile]
Return essential requirements for running Ansible modules
- tmt.steps.provision.format_guest_full_name(name: str, role: str | None) str
Render guest’s full name, i.e. name and its role
- tmt.steps.provision.normalize_hardware(key_address: str, raw_hardware: Any | None, logger: Logger) Hardware | None
Normalize a hardware key value.
- Parameters:
key_address – location of the key that’s being normalized.
logger – logger to use for logging.
raw_hardware – input from either command line or fmf node.