poky/scripts/resulttool
Mazliana beed7523b6 scripts/resulttool: enable manual execution and result creation
Integrated a “manualexecution” operation into the resulttool scripts.
The manual execution script is a helper to execute all manual
test cases from a base command; each case consists of user
guideline steps and their expected results. The last step asks
the user for the execution result, with passed/failed/blocked/
skipped as the input options. The given result is written to
testresults.json, including any error log from the user input
and the configuration, if there is one. The JSON test result
output is created using the OEQA library.
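The prompt flow described above can be sketched roughly as follows. This is a hypothetical helper, not the actual resulttool code: the function name and the "steps"/"expected_results" keys are assumptions for illustration.

```python
# Hypothetical sketch of the manual-execution prompt described above.
# The function name and the "steps"/"expected_results" keys are
# assumptions, not the real resulttool/OEQA API.
VALID_STATUSES = ("passed", "failed", "blocked", "skipped")

def execute_manual_case(case, ask=input):
    """Show each guideline step, then prompt for the execution result."""
    for idx, step in enumerate(case["steps"], start=1):
        print("Step %d: %s" % (idx, step["action"]))
        print("Expected: %s" % step["expected_results"])
    status = ""
    while status not in VALID_STATUSES:
        status = ask("Result (passed/failed/blocked/skipped): ").strip().lower()
    # Only ask for an error log when the case failed.
    log = ask("Log error: ") if status == "failed" else ""
    return {"status": status.upper(), "log": log}
```

Passing `ask` as a parameter keeps the sketch testable without a terminal; the real script reads from stdin.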

The configuration part is keyed in manually by the user. The
system allows the user to specify how many configurations they
want to add, and each one requires a configuration name and
value pair. From a QA perspective, "configuration" means the
test environments and parameters used during QA setup before
testing can be carried out. Examples of configurations: the
image used for boot up, the host machine distro, poky
configurations, etc.

The purpose of adding the configuration is to standardize the
output test result format between automation and manual execution.
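A minimal sketch of that name/value key-in, assuming a `collect_configurations` helper (illustrative only; the real prompts live in resulttool's manualexecution module):

```python
# Illustrative sketch of the configuration key-in described above;
# the function name and prompt wording are assumptions, not the real code.
def collect_configurations(ask=input):
    """Ask how many configurations to add, then read each name/value pair."""
    configs = {}
    count = int(ask("How many configurations do you want to add? "))
    for _ in range(count):
        name = ask("Configuration name: ").strip()
        value = ask("Configuration value: ").strip()
        configs[name] = value
    return configs
```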

To use these scripts, first source the OE build environment,
then run the entry-point script to see the help output.
        $ resulttool

To execute manual test cases, execute the below
        $ resulttool manualexecution <manualjsonfile>

By default, testresults.json is stored in <build_dir>/tmp/log/manual/
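For orientation, the stored testresults.json takes roughly the shape below. This is an approximation: the exact layout and key names come from the OEQA library, and all identifiers and values here are invented examples.

```python
import json

# Approximate testresults.json layout as described above: one entry per
# run, holding the user-supplied configuration plus per-testcase status
# and log. All identifiers and values below are invented examples.
results = {
    "manual_core-image-sato_20190221": {
        "configuration": {
            "IMAGE_BASENAME": "core-image-sato",
            "MACHINE": "qemux86-64",
        },
        "result": {
            "manual.sato.launch_terminal": {
                "status": "PASSED",
                "log": "",
            },
        },
    },
}
print(json.dumps(results, indent=2, sort_keys=True))
```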

[YOCTO #12651]

(From OE-Core rev: f24dc9e87085a8fe5410feee10c7a3591fe9d816)

Signed-off-by: Mazliana <mazliana.mohamad@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2019-02-21 12:34:00 +00:00


#!/usr/bin/env python3

# test results tool - tool for testresults.json (merge test results, regression analysis)
#
# To look for help information:
#   $ resulttool
#
# To store test results from oeqa automated tests, execute the below:
#   $ resulttool store <source_dir> <git_branch>
#
# To merge test results, execute the below:
#   $ resulttool merge <base_result_file> <target_result_file>
#
# To generate a test report, execute the below:
#   $ resulttool report <source_dir>
#
# To perform regression file analysis, execute the below:
#   $ resulttool regression-file <base_result_file> <target_result_file>
#
# To execute manual test cases, execute the below:
#   $ resulttool manualexecution <manualjsonfile>
#
# By default, testresults.json for manualexecution is stored in <build_dir>/tmp/log/manual/
#
# Copyright (c) 2019, Intel Corporation.
#
# This program is free software; you can redistribute it and/or modify it
# under the terms and conditions of the GNU General Public License,
# version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.

import os
import sys
import argparse
import logging

script_path = os.path.dirname(os.path.realpath(__file__))
lib_path = script_path + '/lib'
sys.path = sys.path + [lib_path]
import argparse_oe
import scriptutils
import resulttool.merge
import resulttool.store
import resulttool.regression
import resulttool.report
import resulttool.manualexecution

logger = scriptutils.logger_create('resulttool')

def _validate_user_input_arguments(args):
    if hasattr(args, "source_dir"):
        if not os.path.isdir(args.source_dir):
            logger.error('source_dir argument need to be a directory : %s' % args.source_dir)
            return False
    return True

def main():
    parser = argparse_oe.ArgumentParser(description="OpenEmbedded test results tool.",
                                        epilog="Use %(prog)s --help to get help on a specific command")
    parser.add_argument('-d', '--debug', help='enable debug output', action='store_true')
    parser.add_argument('-q', '--quiet', help='print only errors', action='store_true')
    subparsers = parser.add_subparsers(dest="subparser_name", title='subcommands', metavar='')
    subparsers.required = True
    subparsers.add_subparser_group('manualexecution', 'manual testcases', 300)
    resulttool.manualexecution.register_commands(subparsers)
    subparsers.add_subparser_group('setup', 'setup', 200)
    resulttool.merge.register_commands(subparsers)
    resulttool.store.register_commands(subparsers)
    subparsers.add_subparser_group('analysis', 'analysis', 100)
    resulttool.regression.register_commands(subparsers)
    resulttool.report.register_commands(subparsers)

    args = parser.parse_args()
    if args.debug:
        logger.setLevel(logging.DEBUG)
    elif args.quiet:
        logger.setLevel(logging.ERROR)

    if not _validate_user_input_arguments(args):
        return -1

    try:
        ret = args.func(args, logger)
    except argparse_oe.ArgumentUsageError as ae:
        parser.error_subcommand(ae.message, ae.subcommand)
    return ret

if __name__ == "__main__":
    sys.exit(main())