Tests
As a tester I want to store test metadata close to the test source code.
Tests, or L1 metadata, define attributes which are closely related to individual test cases such as the test script, framework, directory path where the test should be executed, maximum test duration or packages required to run the test.
In addition to the attributes defined here, tests also support common Core attributes which are shared across all metadata levels.
Examples:
summary: Check the man page
test: man tmt | grep friendly
require: grep
duration: 1m
tier: 1
tag: docs
check
Additional test checks
As a tester I want to employ additional checks before, during and after test execution. These checks would complement the actual test by monitoring various aspects and side-effects of the test execution.
In some cases we want to run additional checks while running a test. A nice example is a check for unexpected SELinux AVCs produced during the test, which can point to additional issues the user might run into. Other useful checks would be kernel panic detection, core dump collection or collection of system logs.
By default, the check results affect the overall test outcome.
To change this behaviour, use the result key, which accepts
the following values:
- respect
The check result is respected and affects the overall test result. This is the default.
- xfail
The check result is expected to fail (pass becomes fail and vice-versa).
- info
The check result is treated as an informational message and does not affect the overall test result.
Warning
Note that running one check multiple times for the same test is not yet supported.
Changed in version 1.38.0: the result key added
See Test checks Plugins for the list of available checks.
Examples:
# Enable a single check, AVC denial detection.
check: avc
# Enable multiple checks, by listing their names. A list of names
# is acceptable as well as a single name.
check:
- avc
- dmesg
# Enable multiple checks, one of them would be disabled temporarily.
# Using `how` key to pick the check.
check:
- avc
- kernel-panic
- how: test-inspector
  enable: false
# Expect the AVC check to fail
check:
- how: avc
  result: xfail
- how: dmesg
  result: respect
# Treat the dmesg check as informational only
check:
- how: dmesg
  result: info
Status: implemented and verified
Implemented by /tmt/checks
Implemented by /tmt/result.py
Verified by /tests/test/check/avc
Verified by /tests/test/check/dmesg
Verified by /tests/execute/result/check
component
Relevant fedora/rhel source package names
As a SELinux tester testing the ‘checkpolicy’ component I want to run Tier 1 tests for all SELinux components plus all checkpolicy tests.
It’s useful to be able to easily select all tests relevant for a given component or package. As tests do not always have to be stored in the same repository and because many tests cover multiple components, a dedicated field is needed. Must be a string or a list of strings. The component name usually corresponds to the source package name.
Examples:
# Single component
component: libselinux
# Multiple components
component: [libselinux, checkpolicy]
# Multiple components on separate lines
component:
- libselinux
- checkpolicy
Status: implemented
Implemented by /tmt/base.py
description
Detailed description of what the test does
As a developer I review existing test coverage for my component and would like to get an overall idea of what is covered without having to read the whole test code.
For complex tests it makes sense to provide a more detailed description to better clarify what is covered by the test. This can be useful for the test writer as well when reviewing a test written a long time ago. Must be a string (multi-line, plain text).
Examples:
description:
  This test checks all available wget options related to
  downloading files recursively. First a tree directory
  structure is created for testing. Then a file download
  is performed for different recursion depth specified by
  the "--level=depth" option.
Status: implemented
Implemented by /tmt/base.py
duration
Maximum time for test execution
As a test harness I need to know after how long a time I should kill a test which is still running, to prevent wasting resources.
In order to prevent stuck tests from consuming resources, we define a
maximum time for test execution. If the limit is exceeded, the
running test is killed by the test harness. Value extends the
format of the sleep command by allowing multiplication (*[float]).
First, all time values are summed together, and only then are they multiplied.
The final value is then rounded up to the whole number.
Note that the asterisk character * has a special meaning in YAML syntax and thus you need to put the value into quotes to make it a string.
Must be a string. The default value is 5m.
Added in version 1.34: Multiplication
Examples:
# Three minutes
duration: 3m
# Two hours
duration: 2h
# One day
duration: 1d
# Combination & repetition of time suffixes (total 4h 2m 3s)
duration: 1h 3h 2m 3
# Multiplication is evaluated last (total 24s: 2s * 3 * 4)
duration: "*3 2s *4"
# Use context adjust to extend duration for given arch
duration: 5m
adjust:
  duration+: 15m
  when: arch == aarch64
# Use context adjust to scale duration for given arch
duration: 5m
adjust:
- duration+: "*2"
  when: arch == aarch64
- duration+: "*0.9"
  when: arch == s390x
Status: implemented and verified
Implemented by /tmt/base.py
Verified by /tests/discover/duration
Verified by /tests/execute/duration
environment
Environment variables to be set before running the test
As a tester I need to pass environment variables to my test script to properly execute the desired test scenario.
Test scripts might require certain environment variables to be set. Although this can be done on the shell command line as part of the test attribute, it makes sense to have a dedicated field for this, especially when the number of parameters grows. This might be useful for virtual test cases as well. The plan environment overrides the test environment. Must be a dictionary.
Examples:
environment:
  PACKAGE: python37
  PYTHON: python3.7
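# A hypothetical test command consuming the variables defined above
# (the script name and its options are illustrative)
test: ./test.sh --package "$PACKAGE" --python "$PYTHON"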
Status: implemented and verified
Implemented by /tmt/base.py
Verified by /tests/core/env
framework
Test framework defining how tests should be executed
As a tester I want to include tests using different test execution frameworks in a single plan.
The framework defines how test code should be executed and how test results should be interpreted (e.g. checking exit code of a shell test versus checking beakerlib test results file). It also determines possible additional required packages to be installed on the test environment.
Currently shell and beakerlib are supported. Each
execute step plugin must list which frameworks it supports
and raise an error when an unsupported framework is detected.
Must be a string, by default shell is used.
- shell
Only the exit code determines the test result. Exit code 0 is handled as a test pass, exit code 1 is considered to be a test fail and any other exit code is interpreted as an error.
- beakerlib
Exit code and BeakerLib’s TestResults file determine the test result.
Examples:
# Test written in shell
framework: shell
# A beakerlib test
framework: beakerlib
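# With the shell framework the exit code of the test command decides
# the result: 0 means pass, 1 means fail, anything else is an error
# (the script name below is illustrative)
framework: shell
test: ./verify-service.sh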
Status: implemented and verified
Implemented by /tmt/base.py
Verified by /tests/execute/framework
manual
Test automation state
As a tester I need to store detailed manual instructions covering a test scenario I have to perform manually.
This attribute marks whether the test needs human interaction during its execution. Such tests are not likely to be executed in automation pipelines. In the future they may be executed in a semi-automated way, waiting for human interaction.
Its value must be a boolean. The default value is
false. When set to true, the test
attribute must point to a Markdown document following the
CommonMark specification.
This is a minimal example of a manual test document containing a single test with one test step and one expected result:
# Test
## Step
Do this and that.
## Expect
Check this and that.
The following sections are recognized by tmt and have a special meaning. Any other features of Markdown can be used, but tmt will just show them.
- Setup
Optional heading # Setup under which any necessary preparation actions are documented. These actions are not part of the test itself.
- Test
Required level 1 heading # Test or # Test .* starting with the word ‘Test’ marks the beginning of the test itself. Multiple Test sections can be defined in a single document.
- Step
Required level 2 heading ## Step or ## Test Step marking a single step of the test, must be in pair with the Expect section which follows it. Cannot be used outside of test sections.
- Expect
Required level 2 heading ## Expect, ## Result or ## Expected Result marking the expected outcome of the previous step. Cannot be used outside of test sections.
- Cleanup
Optional heading # Cleanup under which any cleanup actions which are not part of the test itself are documented.
- Code block
Optional, can be used in any section to mark code snippets. Code type specification (bash, python…) is recommended. It can be used for syntax highlighting and in the future for the semi-automated test execution as well.
See the manual test examples to get a better idea.
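For instance, a slightly longer document combining the optional and required sections described above could look like the following sketch (the step contents are purely illustrative):
# Setup
Install the package which is going to be tested.
# Test recursive download
## Step
Download the test page with the "--level=2" option.
## Expect
Only two levels of the directory tree are fetched.
## Step
Download the test page again without any recursion option.
## Expect
The whole directory tree is fetched.
# Cleanup
Remove all downloaded files.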
Examples:
manual: true
test: manual.md
Status: implemented
Implemented by /tmt/base.py
path
Directory to be entered before executing the test
As a test writer I want to define the directory from which the test script should be executed.
In order to have all files which are needed for testing
prepared for execution and available on locations expected by
the test script, automation changes the current working
directory to the provided path before running the test.
It must be a string containing a path from the metadata
tree root to the desired directory and must
start with a slash. If path is not defined, the directory
where the test metadata are stored is used as a default.
Examples:
path: /protocols/https
Status: implemented and verified
Implemented by /tmt/base.py
Verified by /tests/execute/basic
Relates to https://fmf.readthedocs.io/en/latest/features.html#virtual
recommend
Packages or libraries recommended for the test execution
As a tester I want to specify additional packages which should be installed on the system if available and no error should be reported if they cannot be installed.
Sometimes there can be additional packages which are not strictly needed to run tests but can improve test execution in some way, for example provide better results presentation.
Package names can also differ across product versions. Using this attribute it is possible to specify all possible package names, and only those which are available will be installed.
If possible, for the second use case it is recommended to specify such packages using the prepare step configuration which is usually branched according to the version and thus can better ensure that the right packages are correctly installed as expected.
Note that beakerlib libraries are supported by this attribute as well. See the require attribute for more details about available reference options.
Must be a string or a list of strings using package
specification supported by dnf which takes care of the
installation.
Examples:
# Single package
recommend: mariadb
# Multiple packages
recommend: [mariadb, mysql]
# Multiple packages on separate lines
recommend:
- mariadb
- mysql
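# A beakerlib library may be recommended as well (a sketch, see the
# require attribute for the available reference options)
recommend: library(openssl/certgen)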
Status: implemented and verified
Implemented by /tmt/base.py
Implemented by /tmt/steps/discover
Implemented by /tmt/steps/prepare
Verified by /tests/prepare/recommend
require
Packages, libraries or files required for test execution
As a tester I want to specify packages, libraries and files which are required by the test and need to be installed on or copied over to the system so that the test can be successfully executed.
In order to execute the test, additional packages may need to
be installed on the system. For example, gcc and make are
needed to compile tests written in C on the target machine. If
the package cannot be installed, test execution must result
in an error.
For tests shared across multiple components or product versions where required packages have different names it is recommended to use the prepare step configuration to specify required packages for each component or product version individually.
When referencing beakerlib libraries it is possible to use
both the backward-compatible syntax library(repo/lib)
which fetches libraries from the default location as well
as provide a dictionary with a full fmf identifier.
For the latter case specify type: library.
When referencing local files or directories use type: file
and define a list of paths relative to the fmf root directory.
These can be regular expressions to match multiple files or
directories or just a single file or directory name. By
default everything under test path is
copied over to the system.
By default, fetched repositories are stored in the discover
step workdir under the libs directory. Use optional key
destination to choose a different location. The nick
key can be used to override the default git repository name.
For debugging beakerlib libraries it is useful to reference
the development version directly from the local filesystem.
Use the path key to specify a full path to the library.
Must be a string or a list of strings using package
specification supported by dnf which takes care of the
installation or a dictionary if using fmf identifier to
fetch dependent repositories.
Examples:
# Require a single package
require: make
# Multiple packages
require: [gcc, make]
# Multiple packages on separate lines
require:
- gcc
- make
# Library from the default upstream location
require: library(openssl/certgen)
# Library from a custom remote git repository
require:
- type: library
  url: https://github.com/beakerlib/openssl
  name: /certgen
# Library from the local filesystem
require:
- type: library
  path: /tmp/library/openssl
  name: /certgen
# Use a custom git ref and nick for the library
require:
- type: library
  url: https://github.com/redhat-qe-security/certgen
  ref: devel
  nick: openssl
  name: /certgen
# Require local files needed for the library or test
require:
- type: file
  pattern:
  - /include
  - /Library/common/helper.sh
  - /files/photos/IMG.*
# Require whole fmf tree
require:
- type: file
  pattern: /
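# Store the fetched library under a custom destination directory
# (a sketch, the "custom-libs" directory name is illustrative)
require:
- type: library
  url: https://github.com/beakerlib/openssl
  name: /certgen
  destination: custom-libs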
Status: implemented and verified
Implemented by /tmt/base.py
Implemented by /tmt/steps/discover
Implemented by /tmt/steps/prepare
Verified by /tests/prepare/require
Verified by /tests/libraries/local
restart
Handling test crashes
As a tester I want to run tests that may trigger kernel panic or cripple their guest in other ways, often on purpose. I need a way to reliably continue with the testing process.
Some tests may focus on lower levels of system functionality, and perform actions that cause system crashes. And the crashes might be triggered on purpose, e.g. to verify a system can recover.
tmt on its own cannot detect a kernel panic, and it cannot pick from all the possible ways of handling such a situation. Therefore it offers tests a way to hint at how it should proceed:
See watchdog for a test-level check that can detect frozen guests and trigger a hard reboot before restarting the test.
- restart-on-exit-code: EXIT-CODES
When set, it lists test exit codes that should trigger the test restart.
Default: not set
- restart-max-count: LIMIT
How many times the test may be restarted before giving up. It must be at least 1, and the upper limit is 10.
Default: 1
- restart-with-reboot: true|false
When set, a hard reboot would be triggered before restarting the test.
Default: false
Warning
Be aware that this feature may be limited depending on how the guest was provisioned. See Hard reboot.
Added in version 1.33.
Examples:
# Enable test restart on very rare exit code
restart-on-exit-code: 79
# Enable test restart on exit code the test reports when detecting
# kernel panic. Hard reboot the guest before restarting the test so
# that it starts in a working environment again.
restart-on-exit-code: 79
restart-with-reboot: true
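# Allow up to three restart attempts before giving up
# (a sketch, the exit code and the limit value are illustrative)
restart-on-exit-code: 79
restart-max-count: 3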
Status: implemented and verified
Implemented by /tmt/steps/execute
Verified by /tests/execute/restart/basic
Verified by /tests/execute/restart/with-reboot
result
Specify how test result should be interpreted
As a tester I want to regularly execute the test but temporarily ignore test result until more investigation is done and the test can be fixed properly.
Even if a test fails it might make sense to execute it to be able to manually review the results (ignore test result) or to ensure the behaviour has not unexpectedly changed and the test is still failing (expected fail). The following values are supported:
- respect
test result is respected (fails when test failed) - default value
- xfail
expected fail (pass when test fails, fail when test passes)
- pass, info, warn, error, fail
ignore the actual test result and always report the provided value instead
- custom
test needs to create its own results.yaml or results.json file in the ${TMT_TEST_DATA} directory. The format of the file, notes and detailed examples are documented at Results Format. A minimal sketch is shown below.
- restraint
handle tmt-report-result, rstrnt-report-result and rhts-report-result commands as a separate test and report individual test result for each call, see also Report test result.
Added in version 1.35.
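For the custom case the test itself is responsible for writing the results file. A minimal sketch of such a ${TMT_TEST_DATA}/results.yaml, assuming the list-of-results layout documented under Results Format, could look like this (the result names are illustrative):
- name: /subtest/one
  result: pass
- name: /subtest/two
  result: fail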
Examples:
# Plain swapping fail to pass and pass to fail result
result: xfail
# Look for $TMT_TEST_DATA/results.yaml (or results.json) with custom results
result: custom
Status: implemented and verified
Implemented by /tmt/base.py
Verified by /tests/execute/result/basic
Verified by /tests/execute/result/check
Verified by /tests/execute/result/custom
Verified by /tests/execute/result/repeated
Verified by /tests/execute/result/special
Verified by /tests/execute/result/subresults
summary
Concise summary of what the test does
As a developer reviewing multiple failed tests I would like to quickly get an idea of what my change broke.
In order to efficiently collaborate on test maintenance it’s crucial to have a short summary of what the test does. Must be a one-line string, should be up to 50 characters long.
Examples:
summary: Test wget recursive download options
Status: implemented
Relates to https://stackoverflow.com/questions/2290016/git-commit-messages-50-72-formatting
Implemented by /tmt/base.py
test
Shell command which executes the test
As a test writer I want to run a single test script in multiple ways (e.g. by providing different parameters).
This attribute defines how the test is to be executed. It allows parametrizing a single test script and in this way creating virtual test cases.
If the test is manual, it points to the document describing the manual test case steps in Markdown format with defined structure.
Must be a string. This is a required attribute.
Bash is used as the shell and the options errexit and pipefail
are applied using set -eo pipefail to avoid potential errors
going unnoticed. You may revert this setting by explicitly
using set +eo pipefail. These options are not applied when
beakerlib is used as the framework.
Examples:
# Run a script
test: ./test.sh
# Run a script with parameter
test: ./test.sh --depth 1000
# Execute selected tests using pytest
test: pytest -k performance
# Run test using a Makefile target
test: make run
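# Explicitly revert the default 'set -eo pipefail' error handling
# (a sketch, the script name is illustrative)
test: set +eo pipefail; ./check-optional-feature.sh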
# Define a manual test
test: manual.md
manual: true
Status: implemented and verified
Implemented by /tmt/base.py
Verified by /tests/execute/basic
Verified by /tests/execute/codes
Verified by /tests/execute/deep
Verified by /tests/execute/duration
Verified by /tests/execute/exit-first
Verified by /tests/execute/filesubmit
Verified by /tests/execute/framework
Verified by /tests/execute/metadata
Verified by /tests/execute/multiline
Verified by /tests/execute/nonroot
Verified by /tests/execute/old
Verified by /tests/execute/reboot/basic
Verified by /tests/execute/reboot/efi
Verified by /tests/execute/reboot/freeze
Verified by /tests/execute/reboot/multi-part
Verified by /tests/execute/reboot/out-of-session
Verified by /tests/execute/reboot/reuse-provision
Verified by /tests/execute/reboot/shorten-timeout
Verified by /tests/execute/restart/basic
Verified by /tests/execute/restart/with-reboot
Verified by /tests/execute/restraint/report-log
Verified by /tests/execute/restraint/report-result
Verified by /tests/execute/restraint/tmt-abort
Verified by /tests/execute/result/basic
Verified by /tests/execute/result/check
Verified by /tests/execute/result/custom
Verified by /tests/execute/result/repeated
Verified by /tests/execute/result/special
Verified by /tests/execute/result/subresults
Verified by /tests/execute/rsync
Verified by /tests/execute/script
Verified by /tests/execute/tmt-scripts/fedora
Verified by /tests/execute/tmt-scripts/fedora-bootc
Verified by /tests/execute/tmt-scripts/fedora-bootc-force
Verified by /tests/execute/tmt-scripts/fedora-force
Verified by /tests/execute/tty
Verified by /tests/execute/upgrade/full
Verified by /tests/execute/upgrade/ignore-test
Verified by /tests/execute/upgrade/local
Verified by /tests/execute/upgrade/override
Verified by /tests/execute/upgrade/simple
Verified by /tests/execute/weird
tty
Test terminal environment
As a tester I want my test to have a terminal environment available, because it needs it for successful execution.
This attribute marks whether a terminal environment should be available during execution of the test. The terminal environment is provided by creating a pseudo-terminal and keeping it available for the executed test.
Warning
For the local provisioner no tty is allocated, and
this attribute is therefore ignored. Please open a new
issue to the project if you would like to get this fixed.
Its value must be a boolean. The default value is
false.
Added in version 1.30.
Examples:
test: script.sh
tty: true
Status: implemented and verified
Implemented by /tmt/base.py
Verified by /tests/execute/tty