This page is a working draft describing how to execute test cases and generate reports for the Dovetail project.
Scripts architecture
Test cases from upstream projects will be validated and made to work in conjunction with each other as an organic whole. To achieve this goal, we need a set of scripts to communicate with these test cases: scripts that can execute the test cases under the appropriate circumstances and then analyze the various results to produce a human-friendly report. The main components are listed below; a minimal sketch of how they might fit together follows the list.
- config: basically the same environment config files as those in functest/yardstick, used to describe the SUT (system under test)
- testcases: test case definition files, mapping to the functest/yardstick test cases being executed; these test cases make up the certification
- certification: certification definition files, which determine the scenario of the SUT and the collection of test cases to run
- parser: parses the input files (config, certification, testcases), loads them, and checks that they are valid
- downloader: fetches the functest/yardstick repositories if they do not already exist locally
- runner: executes the test cases from functest/yardstick
- report: generates reports from the test results
- dashboard: presents the test report and shows the details of each test case
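The sketch below shows one hypothetical way these pieces could be wired together in Python. Every path, key, and command in it (the YAML file names, the `testcases`/`cmd`/`env` fields, the local clone destinations) is an illustrative assumption, not the actual Dovetail implementation.

```python
# A minimal, hypothetical sketch of the script pipeline described above.
# All file paths, keys, and commands are assumptions, not the real Dovetail layout.
import os
import subprocess

import yaml  # pip install pyyaml


def load_input(path):
    """parser: load one input file (config/certification/testcase) and check it."""
    with open(path) as f:
        data = yaml.safe_load(f)
    if not isinstance(data, dict):
        raise ValueError("invalid input file: %s" % path)
    return data


def ensure_repo(url, dest):
    """downloader: fetch a functest/yardstick repo only if it is missing."""
    if not os.path.isdir(dest):
        subprocess.check_call(["git", "clone", url, dest])


def run_testcase(testcase, config):
    """runner: hand one test case over to functest/yardstick and collect its result."""
    env = dict(os.environ)
    env.update(config.get("env", {}))  # assumed SUT settings as string pairs
    rc = subprocess.call(testcase["cmd"], shell=True, env=env)
    return {"name": testcase["name"], "passed": rc == 0}


def generate_report(results):
    """report: turn raw results into a human-friendly summary."""
    for r in results:
        print("%-30s %s" % (r["name"], "PASS" if r["passed"] else "FAIL"))


def main():
    config = load_input("config/env_config.yml")    # SUT information
    cert = load_input("certification/basic.yml")    # scenario + test case list
    ensure_repo("https://gerrit.opnfv.org/gerrit/functest", "repos/functest")
    ensure_repo("https://gerrit.opnfv.org/gerrit/yardstick", "repos/yardstick")
    results = [run_testcase(load_input("testcases/%s.yml" % name), config)
               for name in cert["testcases"]]
    generate_report(results)


if __name__ == "__main__":
    main()
```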
Testcase report analysis
As our certification reports are built from the reports of individual test cases, we have to examine those test cases one by one. The most important data is the test case details, which include the duration, sub-test-case names, and so on. The table below shows which of these data each test case's report provides: Y means the report contains that kind of data, N means it does not, and blank rows have not been analyzed yet.
functest report analysis
| Test case             | Duration | Sub-testcases | Details on success | Details on failure |
|-----------------------|----------|---------------|--------------------|--------------------|
| ping                  |          |               |                    |                    |
| healthcheck           |          |               |                    |                    |
| tempest_smoke_serial  | Y        | Y             | N                  | Y                  |
| vping_ssh             | Y        | N             | N                  | N                  |
| vping_userdata        | Y        | N             | N                  | N                  |
| rally_sanity          | Y        | Y             | N                  | N                  |
| odl                   | Y        | Y             | Y                  | Y                  |
| onos                  | Y        | Y             | Y                  | N                  |
| moon                  |          |               |                    |                    |
| multisite             |          |               |                    |                    |
| tempest_full_parallel |          |               |                    |                    |
| onos_sfc              | Y        | N             | N                  | N                  |
| vims                  | Y        | Y             | Y                  | N                  |
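Because these fields are not uniformly present, any report generator has to probe each result record defensively rather than assume a fixed schema. Below is a minimal Python sketch of that idea, assuming a flat JSON-like result record; the key names (`case_name`, `duration`, `details`, `tests`, `failures`) are illustrative assumptions, not the actual functest result schema.

```python
# Hypothetical extraction of the fields compared in the table above.
# The record layout (keys "duration", "details", ...) is an assumption;
# real functest results differ per test case, which is exactly the point.

def summarize(record):
    """Pick out duration/sub-testcase/failure data if present, else mark it missing."""
    details = record.get("details") or {}
    return {
        "name": record.get("case_name", "unknown"),
        "duration": record.get("duration", "N/A"),       # absent for some cases
        "sub_testcases": details.get("tests", "N/A"),
        "failures": details.get("failures", "N/A"),
    }


# Example with a made-up record shaped like a tempest_smoke_serial result:
example = {
    "case_name": "tempest_smoke_serial",
    "duration": "0:12:34",
    "details": {"tests": 110, "failures": 2},
}
print(summarize(example))
```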
yardstick report analysis