poky/scripts/resulttool
Yeoh Ee Peng 1fd5ebdb06 resulttool: enable merge, store, report and regression analysis
OEQA outputs test results into json files, and these files are
archived by the Autobuilder during QA releases. For example, each
oe-selftest run by the Autobuilder on a different host distro
generates a testresults.json file.

These scripts were developed as test result tools to manage
these testresults.json files.

Using the "store" operation, user can store multiple testresults.json
files as well as the pre-configured directories used to hold those files.
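
As a rough sketch only (this is not the resulttool.store implementation,
and the directory layout and function name below are made up), collecting
the testresults.json files found under a source directory could look like
this; committing them to the named git branch is not shown:

import os
import shutil

def gather_testresults(source_dir, dest_dir):
    # Copy every testresults.json found under source_dir into dest_dir,
    # prefixing each copy with its relative path so the copies do not collide.
    os.makedirs(dest_dir, exist_ok=True)
    for root, _, files in os.walk(source_dir):
        if "testresults.json" in files:
            prefix = os.path.relpath(root, source_dir).replace(os.sep, "_")
            shutil.copy(os.path.join(root, "testresults.json"),
                        os.path.join(dest_dir, "%s_testresults.json" % prefix))

gather_testresults("path/to/source_dir", "path/to/collected_results")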

Using the "merge" operation, user can merge multiple testresults.json
files to a target file.
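
At its core, merging two such files amounts to combining their top-level
entries. The sketch below assumes that behaviour and is not the
resulttool.merge code; the file paths are placeholders:

import json

def merge_results(base_path, target_path, output_path):
    # Load both files and combine their top-level test result entries.
    # Assumption: entries from the target file win when a key exists in both.
    with open(base_path) as f:
        merged = json.load(f)
    with open(target_path) as f:
        merged.update(json.load(f))
    with open(output_path, "w") as f:
        json.dump(merged, f, sort_keys=True, indent=4)

merge_results("base/testresults.json", "target/testresults.json",
              "merged/testresults.json")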

Using the "report" operation, user can view the test result summary
for all available testresults.json files inside a ordinary directory
or a git repository.
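
A minimal sketch of such a summary for an ordinary directory is shown below
(the git repository case and the actual resulttool.report output format are
not reproduced here):

import json
import os
from collections import Counter

def summarise_directory(source_dir):
    # Walk source_dir, load every testresults.json found, and count the
    # status of each test case across all test result entries.
    counts = Counter()
    for root, _, files in os.walk(source_dir):
        if "testresults.json" in files:
            with open(os.path.join(root, "testresults.json")) as f:
                for entry in json.load(f).values():
                    for data in entry.get("result", {}).values():
                        counts[data.get("status", "UNKNOWN")] += 1
    return counts

for status, count in summarise_directory("path/to/results").most_common():
    print("%-8s %d" % (status, count))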

Using the "regression-file" operation, user can perform regression
analysis on testresults.json files specified. Using the "regression-dir"
and "regression-git" operations, user can perform regression analysis
on directory and git accordingly.
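
In essence, a regression check compares each test case's status between a
base result and a target result. The following simplified sketch assumes
that and is not the resulttool.regression implementation:

import json

def find_regressions(base_path, target_path):
    # Flag test cases that were PASSED in the base results but are no longer
    # PASSED (or are missing entirely) in the target results.
    with open(base_path) as f:
        base = json.load(f)
    with open(target_path) as f:
        target = json.load(f)

    regressions = []
    for name, base_entry in base.items():
        target_cases = target.get(name, {}).get("result", {})
        for testcase, data in base_entry.get("result", {}).items():
            new_status = target_cases.get(testcase, {}).get("status", "MISSING")
            if data.get("status") == "PASSED" and new_status != "PASSED":
                regressions.append((name, testcase, new_status))
    return regressions

for name, testcase, status in find_regressions("base.json", "target.json"):
    print("%s: %s is now %s" % (name, testcase, status))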

These resulttool operations expect the testresults.json files to use
the JSON format below.
{
    "<testresult_1>": {
        "configuration": {
            "<config_name_1>": "<config_value_1>",
            "<config_name_2>": "<config_value_2>",
            ...
            "<config_name_n>": "<config_value_n>",
        },
        "result": {
            "<testcase_namespace_1>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            "<testcase_namespace_2>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            ...
            "<testcase_namespace_n>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
        }
    },
    ...
    "<testresult_n>": {
        "configuration": {
            "<config_name_1>": "<config_value_1>",
            "<config_name_2>": "<config_value_2>",
            ...
            "<config_name_n>": "<config_value_n>",
        },
        "result": {
            "<testcase_namespace_1>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            "<testcase_namespace_2>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            ...
            "<testcase_namespace_n>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
        }
    },
}
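
For illustration only, a file with a single hypothetical oe-selftest entry
in this format could be written as follows; every name and value here is
made up:

import json

example = {
    "oeselftest_example_20190221": {
        "configuration": {
            "TEST_TYPE": "oeselftest",
            "HOST_DISTRO": "example-distro-1.0"
        },
        "result": {
            "selftest.example.ExampleTests.test_something": {
                "status": "PASSED",
                "log": ""
            },
            "selftest.example.ExampleTests.test_something_else": {
                "status": "FAILED",
                "log": "failure logging would appear here"
            }
        }
    }
}

with open("testresults.json", "w") as f:
    json.dump(example, f, indent=4, sort_keys=True)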

To use these scripts, first source the OE build environment, then run
the entry point script to see the available commands and help.
    $ resulttool

To store test results from OEQA automated tests, execute the below
    $ resulttool store <source_dir> <git_branch>

To merge multiple testresults.json files, execute the below
    $ resulttool merge <base_result_file> <target_result_file>

To generate a test report, execute the below
    $ resulttool report <source_dir>

To perform regression file analysis, execute the below
    $ resulttool regression-file <base_result_file> <target_result_file>

To perform regression dir analysis, execute the below
    $ resulttool regression-dir <base_result_dir> <target_result_dir>

To perform regression git analysis, execute the below
    $ resulttool regression-git <source_dir> <base_branch> <target_branch>

[YOCTO# 13012]
[YOCTO# 12654]

(From OE-Core rev: 78a322d7be402a5b9b5abf26ad35670a8535408a)

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2019-02-21 12:34:00 +00:00

3.0 KiB
Executable File

#!/usr/bin/env python3

# test results tool - tool for testresults.json (merge test results, regression analysis)
#
# To look for help information.
#     $ resulttool
#
# To store test result from oeqa automated tests, execute the below
#     $ resulttool store <source_dir> <git_branch>
#
# To merge test results, execute the below
#     $ resulttool merge <base_result_file> <target_result_file>
#
# To report test report, execute the below
#     $ resulttool report <source_dir>
#
# To perform regression file analysis, execute the below
#     $ resulttool regression-file <base_result_file> <target_result_file>
#
# Copyright (c) 2019, Intel Corporation.
#
# This program is free software; you can redistribute it and/or modify it
# under the terms and conditions of the GNU General Public License,
# version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.
#
import os
import sys
import argparse
import logging
script_path = os.path.dirname(os.path.realpath(__file__))
lib_path = script_path + '/lib'
sys.path = sys.path + [lib_path]
import argparse_oe
import scriptutils
import resulttool.merge
import resulttool.store
import resulttool.regression
import resulttool.report
logger = scriptutils.logger_create('resulttool')

def _validate_user_input_arguments(args):
    if hasattr(args, "source_dir"):
        if not os.path.isdir(args.source_dir):
            logger.error('source_dir argument need to be a directory : %s' % args.source_dir)
            return False
    return True

def main():
    parser = argparse_oe.ArgumentParser(description="OpenEmbedded test results tool.",
                                        epilog="Use %(prog)s <subcommand> --help to get help on a specific command")
    parser.add_argument('-d', '--debug', help='enable debug output', action='store_true')
    parser.add_argument('-q', '--quiet', help='print only errors', action='store_true')
    subparsers = parser.add_subparsers(dest="subparser_name", title='subcommands', metavar='<subcommand>')
    subparsers.required = True
    subparsers.add_subparser_group('setup', 'setup', 200)
    resulttool.merge.register_commands(subparsers)
    resulttool.store.register_commands(subparsers)
    subparsers.add_subparser_group('analysis', 'analysis', 100)
    resulttool.regression.register_commands(subparsers)
    resulttool.report.register_commands(subparsers)

    args = parser.parse_args()
    if args.debug:
        logger.setLevel(logging.DEBUG)
    elif args.quiet:
        logger.setLevel(logging.ERROR)

    if not _validate_user_input_arguments(args):
        return -1

    try:
        ret = args.func(args, logger)
    except argparse_oe.ArgumentUsageError as ae:
        parser.error_subcommand(ae.message, ae.subcommand)
    return ret

if name == "main": sys.exit(main())