poky/scripts/resulttool
Clara Kowalsky 5aeabd3217 resulttool: Add support to create test report in JUnit XML format
This adds the functionality to convert the results of the
testresults.json file to a unit test report in JUnit XML format. The
unit test report can be used in the CI/CD pipeline to display the test
results.

To use the resulttool scripts, first source oe environment, then run the
entry point script to look for help.
	$ resulttool

To generate the unit test report, execute the below
	$ resulttool junit <json_file>

By default the unit test report is stored as
<build_dir>/tmp/log/oeqa/junit.xml.
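The conversion the commit describes can be pictured with a short sketch. The flat `{test_name: status}` input shape and the element/attribute names below are simplifying assumptions for illustration, not the exact testresults.json schema or the tool's actual implementation:

```python
import xml.etree.ElementTree as ET

def results_to_junit(results):
    """Render a simplified {test_name: status} dict as a JUnit XML
    <testsuite> string (assumed shape, not the real OEQA schema)."""
    suite = ET.Element("testsuite", name="oeqa", tests=str(len(results)))
    failures = 0
    for name, status in results.items():
        case = ET.SubElement(suite, "testcase", name=name)
        if status == "FAILED":
            # JUnit consumers count <failure> children per testcase
            failures += 1
            ET.SubElement(case, "failure", message=status)
        elif status == "SKIPPED":
            ET.SubElement(case, "skipped")
    suite.set("failures", str(failures))
    return ET.tostring(suite, encoding="unicode")

print(results_to_junit({"ping.PingTest.test_ping": "PASSED",
                        "ssh.SSHTest.test_ssh": "FAILED"}))
```

A CI system pointed at the resulting junit.xml file can then render pass/fail counts per test case.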

(From OE-Core rev: 3f9be03946243feaa09b908d7010899769091fe6)

Signed-off-by: Clara Kowalsky <clara.kowalsky@siemens.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2024-08-29 21:58:19 +01:00

#!/usr/bin/env python3

# test results tool - tool for manipulating OEQA test result json files
# (merge results, summarise results, regression analysis, generate manual test results file)
#
# To look for help information.
#    $ resulttool
#
# To store test results from oeqa automated tests, execute the below
#    $ resulttool store <source_dir> <git_branch>
#
# To merge test results, execute the below
#    $ resulttool merge <base_result_file> <target_result_file>
#
# To generate a test report, execute the below
#    $ resulttool report <source_dir>
#
# To create a unit test report in JUnit XML format, execute the below
#    $ resulttool junit <json_file>
#
# To perform regression file analysis, execute the below
#    $ resulttool regression-file <base_result_file> <target_result_file>
#
# To execute manual test cases, execute the below
#    $ resulttool manualexecution
#
# By default, testresults.json for manualexecution is stored in /tmp/log/manual/
#
# Copyright (c) 2019, Intel Corporation.
#
# SPDX-License-Identifier: GPL-2.0-only

import os
import sys
import argparse
import logging

script_path = os.path.dirname(os.path.realpath(__file__))
lib_path = script_path + '/lib'
sys.path = sys.path + [lib_path]
import argparse_oe
import scriptutils
import resulttool.merge
import resulttool.store
import resulttool.regression
import resulttool.report
import resulttool.manualexecution
import resulttool.log
import resulttool.junit

logger = scriptutils.logger_create('resulttool')

def main():
    parser = argparse_oe.ArgumentParser(description="OEQA test result manipulation tool.",
                                        epilog="Use %(prog)s --help to get help on a specific command")
    parser.add_argument('-d', '--debug', help='enable debug output', action='store_true')
    parser.add_argument('-q', '--quiet', help='print only errors', action='store_true')
    subparsers = parser.add_subparsers(dest="subparser_name", title='subcommands', metavar='')
    subparsers.required = True
    subparsers.add_subparser_group('manualexecution', 'manual testcases', 300)
    resulttool.manualexecution.register_commands(subparsers)
    subparsers.add_subparser_group('setup', 'setup', 200)
    resulttool.merge.register_commands(subparsers)
    resulttool.store.register_commands(subparsers)
    subparsers.add_subparser_group('analysis', 'analysis', 100)
    resulttool.regression.register_commands(subparsers)
    resulttool.report.register_commands(subparsers)
    resulttool.log.register_commands(subparsers)
    resulttool.junit.register_commands(subparsers)

    args = parser.parse_args()
    if args.debug:
        logger.setLevel(logging.DEBUG)
    elif args.quiet:
        logger.setLevel(logging.ERROR)

    try:
        ret = args.func(args, logger)
    except argparse_oe.ArgumentUsageError as ae:
        parser.error_subcommand(ae.message, ae.subcommand)
    return ret

if __name__ == "__main__":
    sys.exit(main())
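Each resulttool module imported by main() exposes a register_commands() hook that attaches its subcommand and handler to the shared subparser set. A minimal sketch of that pattern with plain argparse (the 'hello' subcommand is illustrative, not a real resulttool command, and the real tool uses argparse_oe with subparser groups):

```python
import argparse

def hello(args, logger):
    # Handler signature matches what main() calls: args.func(args, logger)
    print("hello %s" % args.name)
    return 0

def register_commands(subparsers):
    # Each module registers its own subcommand and binds its handler
    parser = subparsers.add_parser("hello", help="example subcommand")
    parser.add_argument("name")
    parser.set_defaults(func=hello)

top = argparse.ArgumentParser()
subs = top.add_subparsers(dest="subparser_name")
register_commands(subs)
args = top.parse_args(["hello", "world"])
args.func(args, None)   # prints "hello world"
```

This keeps main() free of per-command argument definitions: adding a subcommand only requires importing the module and calling its register_commands().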