= OEQA (v2) Framework =
== Introduction ==
This is version 2 of the OEQA framework. Base classes are located in the 'oeqa/core' directory and subsequent components must extend from these.
The main design consideration was to implement the needed functionality on top of the Python unittest framework. To achieve this goal, the following modules are used:
* oeqa/core/runner.py: Provides the OETestResult and OETestRunner base
  classes, extending the corresponding unittest classes. These classes
  support exporting results to different formats; currently RAW and XML
  are supported.
* oeqa/core/loader.py: Provides OETestLoader, extending the unittest
  TestLoader class. It also features a unified implementation of decorator
  support and test case filtering.
* oeqa/core/case.py: Provides the OETestCase base class, extending
  unittest.TestCase and giving access to the Test data (td), the Test
  context and Logger functionality (see the sketch after this list).
* oeqa/core/decorator: Provides OETestDecorator, a new class to implement
decorators for Test cases.
* oeqa/core/context: Provides OETestContext, a high-level API to load and
  run the tests of a certain Test component, and OETestContextExecutor, a
  base class that enables oe-test to discover and use the Test component.
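
For illustration, a minimal test case built on these base classes could look
like the sketch below. The class and test names are hypothetical, and the
exact attributes available (such as self.td and self.logger) should be
checked against oeqa/core/case.py:

  from oeqa.core.case import OETestCase

  class ExampleTest(OETestCase):
      # Hypothetical test case; OETestCase extends unittest.TestCase.

      def test_td_access(self):
          # self.td is the Test data described above; 'MACHINE' is only
          # an example key and may not be present.
          machine = self.td.get('MACHINE', '')
          self.logger.info('MACHINE under test: %s' % machine)
          self.assertIsInstance(machine, str)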
Also, a new 'oe-test' runner is located under 'scripts'; it scans for components that support OETestContextExecutor (see below).
== Terminology ==
* Test component: The area of testing in the Project, for example: runtime, SDK, eSDK, selftest.
* Test data: Data associated with the Test component. Currently the bitbake datastore is
  used as the Test data input.
* Test context: A context describing which tests need to be run and how to run them; it
  additionally provides access to the Test data and may have custom methods and/or attributes.
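
As a rough sketch of how these pieces fit together, a Test context can be
created from Test data and then asked to load and run tests (loadTests and
runTests are mentioned above; the constructor arguments, the loadTests
argument and the example values below are assumptions to be checked against
oeqa/core/context.py):

  import logging

  from oeqa.core.context import OETestContext

  logger = logging.getLogger('oe-test-example')
  # A plain dict stands in for the bitbake datastore values here.
  td = {'MACHINE': 'qemux86-64'}

  tc = OETestContext(td=td, logger=logger)
  tc.loadTests('./tests')   # path containing the test case modules
  result = tc.runTests()    # returns the test result object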
== oe-test ==
The new tool, oe-test, has the ability to scan the code base for Test components and provide a unified way to run test cases. Internally it scans the folders inside the oeqa module in order to find specific classes that implement a Test component.
== Usage ==
Executing the example test component
$ source oe-init-build-env
$ oe-test core
Getting help
$ oe-test -h
== Creating new Test Component ==
To add a new Test component, the developer needs to extend OETestContext/OETestContextExecutor (from context.py) and OETestCase (from case.py).
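
A skeleton for a hypothetical new component might look like the following;
the attribute names on the executor (_context_class, name, help,
description) mirror the pattern used by the core component and are
assumptions to be verified against oeqa/core/context.py:

  from oeqa.core.case import OETestCase
  from oeqa.core.context import OETestContext, OETestContextExecutor

  class ExampleTestContext(OETestContext):
      # Custom methods and/or attributes for the component go here.
      pass

  class ExampleTestContextExecutor(OETestContextExecutor):
      # Ties the component into oe-test discovery (attribute names are
      # assumed, see the note above).
      _context_class = ExampleTestContext
      name = 'example'
      help = 'example test component'
      description = 'Runs the hypothetical example test cases'

  class ExampleTest(OETestCase):
      # Test cases for the component extend OETestCase as usual.
      def test_nothing(self):
          self.assertTrue(True)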
== Selftesting the framework ==
Run all tests:
$ PATH=$PATH:../../ python3 -m unittest discover -s tests
Run a single test module:
$ cd tests/
$ ./test_data.py