Anuket Project
Dovetail Testing Guide
This page is a temporary location to draft and validate the testing guide documentation. Once it is in decent draft shape, I will submit it to the documentation gerrit, and the respective section authors can complete the doc through gerrit review.
Reference: OVP and Dovetail Terminology
I will use the terms defined in the WIKI above. Those terms will also be copied to the documentation's terms section when they are in decent shape.
The intended audience of this document is the technical staff of vendors interested in seeking CVP certification using Dovetail testing, and third-party labs assisting their clients for the same purpose. Additional audiences include anyone interested in using Dovetail for testing purposes other than CVP certification. We assume the readers have a good technical background in IT (e.g. Linux, Docker containers, security), networking, and NFV, in addition to the test areas that CVP covers.
1 Creating a Compliant POD
Dovetail uses the Pharos Specification as a starting point for defining a compliant physical system as part of the CVP's System Under Test (SUT). The current Pharos Specification is still being developed and updated for Danube at this time. It also has goals and constraints that are outside of the CVP's current scope, for example those required for use in OPNFV community development. For these reasons, we provide a generalized guide here specifically for testers who need to create and configure a POD for the purpose of conducting CVP testing.
/* work in progress */
1.1 High Level Overview
At the top level, the CVP framework requires that the tester's lab consist of the System Under Test (SUT), a jumphost that has access to both the public Internet and the private Management Network (a.k.a. Operation and Management Network, or O&M), the necessary networking configuration, and security gateways. The tester may realize this with a firewall, as shown below, putting the jumphost in the DMZ. However, this configuration is just one possibility and a suggestion; it is not an explicit CVP requirement. Other methods of achieving the same goal of allowing the jumphost secure access to both the public Internet and the private Management Network are equally valid.
The OPNFV CVP server, shown on the right side of the diagram above, is included for reference only; setting it up is not the tester's responsibility for the purpose of CVP certification.
In a data center environment, a remote management infrastructure (a.k.a. Lights-Out Management) is an important part of running a large-scale system efficiently and reliably. Such a remote management infrastructure, however, is not an explicit requirement of CVP at this time. The test suite does not make direct reference to the existence of a remote management system, although it is a recommended component.
1.2 Compute Hardware
The following bare metal hardware is required:
- 1 jumphost
- 5 controller and compute hosts
We require all 5 controller and compute hosts to be bare metal hosts because the high availability and scalability tests require a bare metal environment. Other test cases may also be developed for bare metal environments only.
The jumphost does not have to be bare metal for CVP. However, this guide assumes it is a standalone bare metal machine for simplicity. It is the tester's responsibility to make the necessary configurations such that a virtual machine or a container can perform the same duties as a bare metal jumphost.
1.2.1 Jumphost
The jumphost is not part of the SUT in Dovetail, but is required for the Dovetail testing apparatus. Dovetail installs Docker based containers and other testing tools on this host to drive the automated testing procedures. This host must have the necessary network connections such that it has good access to both the Internet and the VIM's API interface to drive testing. While Dovetail does not specify a strict minimum hardware requirement, and Docker has very little in its "minimum" requirements, we still recommend at least a workstation-level host with a 64-bit processor that meets or exceeds the following spec: /* please verify these are reasonable recommendations ??? */
- Processor: 2-4 cores
- Memory: 32 GB RAM
- Hard disk space: 60 GB
1.2.2 Controller and Compute Hosts
The Controller and Compute hosts are collectively referred to as simply the Hosts. They are part of the SUT in Dovetail. The 5 controller and compute hosts should meet the following minimum hardware requirements.
CPU:
- Intel Xeon E5-2600v2 Series or newer
- AArch64 (64bit ARM architecture) compatible (ARMv8 or newer)
Firmware:
- BIOS/EFI compatible for x86-family blades
- EFI compatible for AArch64 blades
Local Storage:
The following describes the minimum for the Pharos spec, which is designed to provide enough capacity for a reasonably functional environment. We recommend the same as the minimum requirement for CVP testing.
- Disks: 2 x 1TB HDD + 1 x 100GB SSD (or greater capacity)
- The first HDD should be used for Operating System and additional software/tool installation
- The second HDD is configured for object storage (e.g. based on CEPH or others)
- The SSD should be used as the object store database journal (e.g. based on CEPH or others)
Performance testing requires a mix of compute nodes with CEPH (Swift+Cinder) and without CEPH storage. /* to confirm that this is Not Applicable for Dovetail in the current release because performance is not in scope. */
- Virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler)
Memory:
- 32GB RAM Minimum
Power Supply
- Single power supply acceptable (redundant power not required/nice to have)
1.3 Networking Hardware
The OPNFV community Pharos specification documents several options for the physical networking configuration of the NFVI POD. In this section, we use Pharos as a reference to describe the required networking hardware configuration and point out areas where it is not strictly required for the purpose of CVP testing. The tester should take these as guidelines, not strict requirements.
The NFV Infrastructure needs the following networks,
- Data Network
- Management Network
- Storage Network (optional)
- Administrative Network
- Lights-Out Management Network
1.3.1 Network Hardware
- 24 or 48 port Top of Rack (ToR) Ethernet switch with either 1 GbE or 10 GbE ports (or both): 1 unit is required. Dovetail currently does not test HA over switch failures.
- NICs: a combination of 1 GbE and 10 GbE based on the network topology options defined below. These NICs are required per server and can be on-board or PCIe modules.
- Connectivity for each data or control network is through a separate NIC. This simplifies switch configuration but requires more NICs on the server and more ports on the switch.
- BMC (Baseboard Management Controller) for lights-out management network using IPMI (Intelligent Platform Management Interface) is not required for Dovetail testing at this time.
The network topology options below have been validated in the OPNFV community and can be recommended. However, you are not required to use these options for the purpose of the CVP. Other network configurations are also valid and can be sufficient for CVP.
1.3.2 Network Option Examples
Option I: 4x1G Control, 2x10G Data, 48 Port Switch
1 x 1G for lights-out Management
1 x 1G for Admin/PXE boot
1 x 1G for control-plane connectivity
1 x 1G for storage
2 x 10G for data network (redundancy, NIC bonding, High bandwidth testing)
Option II: 1x1G Control, 2x 10G Data, 24 Port Switch
Connectivity to networks is through VLANs on the Control NIC
Data NIC used for VNF traffic and storage traffic segmented through VLANs
Option III: 2x1G Control, 2x10G Data, 2x10G Storage, 24 Port Switch
Data NIC used for VNF traffic
Storage NIC used for control plane and Storage segmented through VLANs (separate host traffic from VNF)
1 x 1G for lights-out management
1 x 1G for Admin/PXE boot
2 x 10G for control-plane connectivity/storage
2 x 10G for data network
1.4 Management
1.5 Preparing the Hosts for CVP Testing
This section provides general guidelines on how to prepare the SUT for conducting CVP testing using Dovetail. Since the SUT is a commercial vendor product that is based on an open source VIM (currently OpenStack), we will describe the preparation steps in the context of the open source community distributions only. The tester is advised to use the information here as reference only and to prepare the commercial SUT based on its own system specifics.
1.5.1 Host Operating Systems
The OPNFV CVP does not directly restrict the choice of host operating systems. The following operating systems have been used and tested to a certain degree in the OPNFV community.
- Ubuntu 16.04.2 LTS (Xenial) for x86_64
- Ubuntu 14.04 LTS (Trusty) for x86_64
- CentOS-7-1611 for x86_64
1.5.2 Openstack-based VIM Requirements
The current version of CVP uses Openstack as the only option of a Virtual Infrastructure Manager (VIM). The SUT must therefore use an Openstack based VIM. The following guidelines explain the compliant versions of Openstack and required functional components.
The current CVP test suite includes OpenStack DefCore testing through RefStack as a component. Therefore it is advised that testers also consult the documentation of DefCore qualification testing for additional guidance. The DefCore guideline version used by RefStack in CVP_1.0.0 is 2016.08.
1.5.2.1 Openstack Versions
For now, the following versions of open source OpenStack have been tested:
- Kilo
- Liberty
- Mitaka
- Newton
It is not guaranteed that versions not listed here, or commercial versions of OpenStack, are fully compatible with CVP_1.0.0.
The list can be updated as additional versions are tested.
1.5.2.2 Openstack POD Configuration
The SUT must be configured with 3 controller nodes in HA mode and 2 compute nodes.
- controller nodes (3 or more)
- compute nodes (2 or more)
- storage nodes (optional)
- network nodes (optional)
The above is the recommended deployment, but you can still try your own combination of nodes.
Topologies vary across deployments; below is a typical one:
/*insert diagram */
Additional explanations /* what else is needed here?? */
1.5.2.3 Required Openstack Components
The following Openstack components are required,
- Openstack Nova
- Openstack Neutron
- Openstack Glance
- Openstack Swift
- Openstack Keystone
- Openstack Cinder
1.5.3 Configuring Testing Resources in Openstack
/* creating projects, accounts, floating ip, networks, security policies, router,... all that are needed for conducting Dovetail testing */
/* any storage configuration needed? */
/* resource/configuration needed by IPv6 */
/* resource/configuration needed by HA */
Test Case Name | Test Case Name in Yardstick | Description | Category | Common Dependency | Special Dependency |
ha.tc001 | opnfv_yardstick_tc019 | Control Node Openstack Service High Availability | HA | | |
ha.tc003 | opnfv_yardstick_tc045 | Control Node Openstack Service High Availability - Neutron Server | HA | | |
ha.tc004 | opnfv_yardstick_tc046 | Control Node Openstack Service High Availability - Keystone | HA | | |
ha.tc005 | opnfv_yardstick_tc047 | Control Node Openstack Service High Availability - Glance Api | HA | | |
ha.tc006 | opnfv_yardstick_tc048 | Control Node Openstack Service High Availability - Cinder Api | HA | | |
ha.tc009 | opnfv_yardstick_tc051 | OpenStack Controller Node CPU Overload High Availability | HA | | |
ha.tc010 | opnfv_yardstick_tc052 | OpenStack Controller Node Disk I/O Block High Availability | HA | | |
ha.tc011 | opnfv_yardstick_tc053 | OpenStack Controller Load Balance Service High Availability | HA | | |
/* resource/configuration needed by sdnvpn */
Test Case Name | Test Case Name in SDNVPN | Description | Category | Common Dependency | Special Dependency |
sdnvpn.tc001 | testcase_1 | Control Node Openstack Service High Availability | SDNVPN | | |
sdnvpn.tc002 | testcase_2 | Control Node Openstack Service High Availability - Neutron Server | SDNVPN | | |
sdnvpn.tc003 | testcase_3 | Control Node Openstack Service High Availability - Keystone | SDNVPN | | |
sdnvpn.tc004 | testcase_4 | Control Node Openstack Service High Availability - Glance Api | SDNVPN | | |
sdnvpn.tc008 | testcase_8 | Control Node Openstack Service High Availability - Cinder Api | SDNVPN | | |
2 Conducting CVP Tests using Dovetail
2.1 Dovetail CVP Testing Overview
The Dovetail testing framework consists of two major parts: the testing client, which executes all test cases in a vendor lab (self-testing) or a third-party lab, and the server system under OPNFV's administration, which stores and displays test results based on the OPNFV Test API. The following diagram illustrates this overall framework.
/* here is a draft diagram that needs to be revised when exact information is known and fixed */
This section mainly focuses on helping the testers in the vendor's domain attempting to run the CVP tests using Dovetail.
The Dovetail client tool (or just the Dovetail tool, or Dovetail for short) can be installed on the jumphost either directly as Python software, or as a Docker(r) container. /* Comments on the pros and cons of the two options TBD. */
In 2.2, we describe the steps the tester needs to take to install Dovetail, either directly from source or as a Docker(r) container. Once Dovetail is installed and properly configured, the remaining test process is mostly identical for the two options. In 2.3, we go over the steps of actually running the test suite, and discuss how to view test results and make sense of them, for example, what the tester may do in case of unexpected test failures. Section 2.4 describes additional Dovetail features that are not strictly necessary for CVP testing but that users may find useful for other purposes. One example is running Dovetail for in-house testing as preparation before official CVP testing; another example is running Dovetail experimental test suites other than the CVP test suite. Experimental tests may be made available by the community for exercising less mature test cases or functionalities for the purpose of getting feedback for improvement.
2.2 Installing Dovetail
Before taking this step, testers should check the hardware and networking requirements of the POD, and the jumphost in particular, to make sure they are compliant.
In this section, we describe the procedure to install Dovetail client tool that runs the CVP test suite from the jumphost. The jumphost must have network access to both the public Internet and to the O&M (Operation and Management) network with access rights to all VIM APIs being tested.
2.2.1 Checking the Jumphost Readiness
While Dovetail does not have a hard requirement on a specific operating system type or version, the following have been validated by the community through some level of exercise in OPNFV labs or PlugFests.
- Ubuntu 16.04.2 LTS (Xenial) for x86_64
- Ubuntu 14.04 LTS (Trusty) for x86_64
- CentOS-7-1611 for x86_64
- Red Hat Enterprise Linux 7.3 for x86_64
- Fedora 24 Server for x86_64
- Fedora 25 Server for x86_64
This list can be updated once Dovetail has been verified on additional operating systems.
Non-Linux operating systems, such as Windows and Mac OS, have not been tested at this time.
The tester should also validate that the jumphost can reach the public Internet. For example,
% ping -c 4 8.8.8.8
% dig www.opnfv.org
2.2.2 Configuring the Jumphost Environment
/* First, openstack env variables to be passed to Functest */
The jumphost needs to have the right environment variables set to enable access to the OpenStack API. This is usually done through an OpenStack credential file. If you do not know how to fill in the variables, check the OpenRC files on the OpenStack controller nodes; normally they are located in /opt.
Sample OpenStack credential file environment_config.sh:
# Project-level authentication scope (name or ID); the admin project is recommended.
export OS_PROJECT_NAME=
export OS_TENANT_NAME=
# Authentication username, belonging to the project above; the admin user is recommended.
export OS_USERNAME=
# Authentication password. Use your own password.
export OS_PASSWORD=
# Authentication URL, one of the endpoints of the keystone service. If this is the v3 version, some extra variables are needed, as follows.
export OS_AUTH_URL='http://xxx.xxx.xxx.xxx:5000/v3'
# Default is 2.0. If the keystone v3 API is used, this should be set to 3.
export OS_IDENTITY_API_VERSION=3
# Domain name or ID containing the user above. Command to check the domain: openstack user show <OS_USERNAME>
export OS_USER_DOMAIN_NAME=default
# Domain name or ID containing the project above. Command to check the domain: openstack project show <OS_PROJECT_NAME>
export OS_PROJECT_DOMAIN_NAME=default
export DOVETAIL_HOME=$HOME/cvp
Export all these variables into the environment with:
% source <OpenStack-credential-file-path>
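After sourcing the file, it can be worth confirming that the variables were actually exported before launching any tests. Below is a minimal sketch; the exact set of variables checked is an assumption based on the sample credential file above.

```shell
# Verify that the OpenStack credential variables are present in the
# environment; print any that are missing. The required list below is
# an assumption based on the sample environment_config.sh above.
required="OS_PROJECT_NAME OS_USERNAME OS_PASSWORD OS_AUTH_URL"
missing=""
for var in $required; do
  # POSIX-safe indirect expansion of the variable named in $var
  eval "val=\${$var}"
  if [ -z "$val" ]; then
    missing="$missing $var"
  fi
done
if [ -n "$missing" ]; then
  echo "Missing credential variables:$missing"
else
  echo "All credential variables are set"
fi
```

If anything is reported missing, re-check the credential file and source it again.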
2.2.3 Installing Prerequisite on Jumphost
1. Dovetail requires Python 2.7 or later
Use the following steps to check if the right version of python is already installed, and if not, install it.
% python --version
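As a convenience, the version check can be scripted. The sketch below compares version strings with GNU `sort -V`; it assumes a `python` executable on the PATH and GNU coreutils.

```shell
# version_ge A B: succeed when version string A >= version string B,
# using GNU sort -V for version-aware ordering.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Extract the version from `python --version` (written to stderr on
# Python 2) and compare it against the 2.7 floor stated above.
installed=$(python --version 2>&1 | awk '{print $2}')
if version_ge "$installed" "2.7"; then
  echo "Python $installed satisfies the 2.7 requirement"
else
  echo "Python ${installed:-not-found} does not satisfy the 2.7 requirement"
fi
```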
2. Dovetail requires Docker 1.12.3 or later
Use the following steps to check if the right version of Docker is already installed, and if not, install it.
% docker version
Since the Docker installation process is fairly involved, you can refer to the following script.
It installs the latest version of Docker; if you do not intend to update an existing Docker installation, be careful about running the script:
% wget -qO- https://get.docker.com/ | sh
2.2.4 Installing Dovetail on Jumphost
A tester can choose one of the following two methods for installing and running Dovetail. In Method 1, we explain the steps to install Dovetail from source. In Method 2, an alternative using a Docker image with Dovetail preinstalled is introduced.
Method 1. Installing Dovetail directly
- Update and install packages
a) Ubuntu
sudo apt-get update
sudo apt-get -y install gcc git vim python-dev python-pip --no-install-recommends
b) CentOS and Red Hat
sudo yum -y update
sudo yum -y install epel-release
sudo yum -y install gcc git vim-enhanced python-devel python-pip
c) Fedora
sudo dnf -y update
sudo dnf -y install gcc git vim-enhanced python-devel python-pip redhat-rpm-config
Note: when testing the SUT's https service, some extra packages, such as apt-transport-https, are needed. This still remains to be verified.
- Installing Dovetail
Now we are ready to install Dovetail.
/* Version of dovetail is not specified yet? we are still using the latest in the master - this needs to be fixed before launch. */
First change directory to $DOVETAIL_HOME,
% cd $DOVETAIL_HOME
% sudo git clone https://git.opnfv.org/dovetail
% cd $DOVETAIL_HOME/dovetail
% sudo pip install -e ./
/* test dovetail install is successful */
% dovetail -h
Method 2. Installing Dovetail Docker Container
The Dovetail project also maintains a Docker image that has Dovetail test tools preinstalled.
% sudo docker pull opnfv/dovetail:<tag>
Currently the only available <tag> is 'latest'.
% sudo docker run --privileged=true -it -v <openrc_path>:<openrc_path> \
-v $DOVETAIL_HOME/results:$DOVETAIL_HOME/results \
-v /home/opnfv/dovetail/results:/home/opnfv/dovetail/results \
-v /home/opnfv/dovetail/userconfig:/home/opnfv/dovetail/userconfig \
-v /var/run/docker.sock:/var/run/docker.sock \
--name <DoveTail_Container_Name> \
opnfv/dovetail:<Tag> /bin/bash
The --name option is optional.
2.3 Running CVP Test Suite
2.3.1 Running Test Suite
The Dovetail client CLI allows the tester to specify which test suite to run. By default the results are stored under the local directory $DOVETAIL_HOME/dovetail/results.
% dovetail run --testsuite <test suite name> --openrc <path-to-openrc-file> /*?? */
Multiple test suites may be available. The test suites named "debug" and "proposed_tests" are provided for testing purposes only. For the purpose of running the CVP test suite, the test suite name follows this format,
CVP_<major>_<minor>_<patch> /* test if this format works */
For example, CVP_1_0_0:
% dovetail run --testsuite CVP_1_0_0
When the SUT's VIM (Virtual Infrastructure Manager) is Openstack, its configuration is commonly defined in the openrc file. In that case, you can specify the openrc file in the command line,
% dovetail run --testsuite CVP_1_0_0 --openrc <path-to-openrc-file>
In order to report official results to OPNFV, run the CVP test suite and report to the official OPNFV URL:
% dovetail run --testsuite <test suite name> --openrc <path-to-openrc-file> --report https://www.opnfv.org/cvp
The official server https://www.opnfv.org/cvp is still under development; in the meantime there is a temporary server to use: http://205.177.226.237:9997/api/v1/results
2.3.2 Special Configuration for Running HA test cases
The HA test cases need information about all the nodes of the OpenStack deployment, including every node's name, role, IP address, user, and key_filename or password. This information should be written in the file $DOVETAIL_HOME/dovetail/userconfig/pod.yaml. There is a sample file $DOVETAIL_HOME/dovetail/userconfig/sample_pod.yaml.
There are two methods for logging in to the nodes.
Method 1. Use a private key to log in to the nodes. For example, the info of node1:
name: node1
role: Controller
ip: 10.1.0.50
user: root
key_filename: /root/.ssh/id_rsa
Method 2. Use a password to log in to the nodes. For example, the info of node1:
name: node1
role: Controller
ip: 10.1.0.50
user: root
password: root
NOTE:
- The names of the nodes must be node1, node2, ...
- node1 must be a Controller node.
- If logging in with a private key, the key_filename must be /root/.ssh/id_rsa in the file $DOVETAIL_HOME/dovetail/userconfig/pod.yaml.
- If logging in with a private key, you must copy the private key to $DOVETAIL_HOME/dovetail/userconfig/
cp <private_key_for_login_nodes> $DOVETAIL_HOME/dovetail/userconfig/id_rsa
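For illustration, a complete pod.yaml for the SUT described in section 1.5.2.2 might look like the following. This is a sketch only: the node names follow the rules above, but the IP addresses and role values are illustrative assumptions, and the sample_pod.yaml shipped with Dovetail remains the authoritative reference for the exact format.

```yaml
# Illustrative pod.yaml sketch; IP addresses and the exact set of
# nodes are assumptions for this example.
nodes:
-
    name: node1            # node1 must be a Controller node
    role: Controller
    ip: 10.1.0.50
    user: root
    key_filename: /root/.ssh/id_rsa
-
    name: node2
    role: Controller
    ip: 10.1.0.51
    user: root
    key_filename: /root/.ssh/id_rsa
-
    name: node3
    role: Compute
    ip: 10.1.0.52
    user: root
    key_filename: /root/.ssh/id_rsa
```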
2.3.3 Making Sense of CVP Test Results
When a tester is performing trial runs, Dovetail stores results in a local file by default.
% cd $DOVETAIL_HOME/dovetail/results
1. Local files
a) Log file: dovetail.log
/* review the dovetail.log to see if all important information has been captured - in default mode without DEBUG */
/* the end of the log file has a summary of all test case test results */
Additional log files may be of interest: refstack.log, opnfv_yardstick_tcXXX.out, ...
b) Example: Openstack refstack test case
You can see the log details in refstack.log, which contains the passed/skipped/failed test case results; the failed test cases include rich debug information
for the users to see why each test case failed.
c) Example: OPNFV Yardstick test case
For the Yardstick tool, its log is stored in yardstick.log.
The logs for each Yardstick test case are stored in opnfv_yardstick_tcXXX.out, respectively.
2. OPNFV web interface
To be completed, pending work from the LF, the test community, etc.
2.3.4 Updating Dovetail or Test Suite
% cd $DOVETAIL_HOME/dovetail
% sudo git pull
% sudo pip install -e ./
This step is necessary if the Dovetail software or the CVP test suite has been updated.
2.4 Other Dovetail Usage
2.4.1 Running Dovetail Locally
Dovetail supports uploading results into a database. The database can be either a local database or the official one. Before you can use a local database, you need to create the local database and the testapi service. They can be installed on any machine that can talk to the jumphost. Docker 1.12.3 or later should be installed on this machine. There are four steps for creating the local database and testapi service.
step1. Set ports for database and testapi service
The default ports for the database and testapi are 27017 and 8000, respectively. Check whether they are already used by other services.
% netstat -nlt
If 27017 and 8000 are used by other services, you need to set other ports in step2.
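This check can be scripted. The helper below is a hypothetical sketch that scans `netstat -nlt` style output, supplied on stdin, for a given port; the captured line in the example is illustrative.

```shell
# port_in_use PORT: succeed when PORT appears as a bound local port in
# `netstat -nlt` style output read from stdin.
port_in_use() {
  grep -Eq "[:.]$1([[:space:]]|$)"
}

# Illustrative captured netstat line showing mongodb's default port taken.
captured="tcp  0  0 0.0.0.0:27017  0.0.0.0:*  LISTEN"
if echo "$captured" | port_in_use 27017; then
  echo "port 27017 is taken; set another mongodb_port in step2"
else
  echo "port 27017 is free"
fi
```

On a live machine the same helper would be fed real output, e.g. `netstat -nlt | port_in_use 8000`.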
step2. Run a Dovetail container
% sudo docker pull opnfv/dovetail:<Tag>
% sudo docker run -itd --privileged=true --name <DoveTail_Container_Name> \
-v /var/run/docker.sock:/var/run/docker.sock opnfv/dovetail:<Tag> /bin/bash
% sudo docker exec -it <DoveTail_Container_Name> /bin/bash
If you need to set ports for database and testapi service,
% export mongodb_port=<database_port>
% export testapi_port=<testapi_port>
step3. Create local database and testapi service
% cd /home/opnfv/dovetail/dovetail/utils/local_db/
% ./launch_db.sh <localhost_ip_address>
Exit this Dovetail container.
% exit
step4. Check the status of database and testapi service
The local database and testapi service are actually two containers, named mongodb and testapi. You can check whether these two containers are running.
% sudo docker ps -a
You can try to get data from the database to make sure everything is OK.
% wget <localhost_ip_address>:<testapi_port>/api/v1/results
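To sanity-check the reply programmatically, a quick sketch like the following can be used; the sample body is an assumption about the shape of the testapi /api/v1/results response.

```shell
# Check that a captured response body looks like a testapi results
# reply, i.e. JSON text mentioning a "results" field. The body below is
# a hypothetical sample, standing in for the output of the wget above.
body='{"pagination": {"total_pages": 0}, "results": []}'
case "$body" in
  *'"results"'*) echo "testapi response looks sane" ;;
  *)             echo "unexpected testapi response" ;;
esac
```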
2.4.2 Running Dovetail with an Offline SUT
2.4.3 Running Dovetail with Experimental Test Cases
2.4.4 Running Individual Test Cases or for Special Cases
1. Refstack client to run Defcore testcases
a) By default, the Defcore test cases run by refstack-client, which is consumed by Dovetail, use an automatically generated configuration file, i.e., refstack_tempest.conf.
In some circumstances, the automatically generated configuration file may not fit the SUT well; Dovetail provides a way for users to set the configuration file manually according to their own SUT.
In that case, the users should also define the Defcore test case file, i.e., defcore.txt, at the same time. The steps are shown below.
When the "Installing Dovetail Docker Container" method is used:
% sudo mkdir /home/opnfv/dovetail/userconfig
% cd /home/opnfv/dovetail/userconfig
% touch refstack_tempest.conf defcore.txt
% vim refstack_tempest.conf
% vim defcore.txt
The recommended way to set refstack_tempest.conf is shown at https://aptira.com/testing-openstack-tempest-part-1/
The recommended way to edit defcore.txt is to open https://refstack.openstack.org/api/v1/guidelines/2016.08/tests?target=compute&type=required&alias=true&flag=false and copy all the test cases into defcore.txt.
Then use “docker run” to create a container,
% sudo docker run --privileged=true -it -v <openrc_path>:<openrc_path> \
-v /home/opnfv/dovetail/results:/home/opnfv/dovetail/results \
-v /home/opnfv/dovetail/userconfig:/home/opnfv/dovetail/userconfig \
-v /var/run/docker.sock:/var/run/docker.sock \
--name <DoveTail_Container_Name> \
opnfv/dovetail:<Tag> /bin/bash
The --name option is optional.
There is then a need to adjust the CVP_1_0_0 test suite: in Dovetail, defcore.tc001.yml and defcore.tc002.yml are used for the automatic and manual running methods, respectively.
Inside the dovetail container,
% cd /home/opnfv/dovetail/compliance
% vim CVP_1_0_0.yml
to add defcore.tc002 and comment out defcore.tc001.
When the "Installing Dovetail directly" method is used:
% cd $DOVETAIL_HOME/dovetail
% mkdir userconfig
% cd userconfig
% touch refstack_tempest.conf defcore.txt
% vim refstack_tempest.conf
% vim defcore.txt
The recommended way to set refstack_tempest.conf and defcore.txt is the same as above in the "Installing Dovetail Docker Container" method section.
For the manual method of running Defcore test cases, there is a need to adjust the compliance_set test suite:
in Dovetail, defcore.tc001.yml and defcore.tc002.yml are used for the automatic and manual running methods, respectively.
% cd $DOVETAIL_HOME/dovetail/compliance
% vim CVP_1_0_0.yml
to add defcore.tc002 and comment out defcore.tc001
3 Dovetail Client CLI Manual
This section contains a brief manual for all the features available through the Dovetail client command line interface (CLI).
3.1 Check dovetail commands
% dovetail -h
Dovetail has three commands: list, run, and show.
3.2 List
3.2.1 List help
% dovetail list -h
3.2.2 List a test suite
The list command lists all test cases belonging to the given test suite.
% dovetail list compliance_set
% dovetail list debug
Here, ipv6, example and nfvi are test areas. If no <TESTSUITE> is given, all test suites will be listed.
3.3 Show
The show command gives detailed information about one specific test case.
3.3.1 Show help
% dovetail show -h
3.3.2 Show test case
3.4 Run
Dovetail supports running a named test suite, or one named test area of a test suite.
3.4.1 Run help
% dovetail run -h
There are several options:
- func_tag: set Functest's Docker tag, for example stable, latest, or danube.1.0
- openrc: give the path of the OpenStack credential file
- yard_tag: set Yardstick's Docker tag
- testarea: set a certain test area within a certain test suite
- offline: run without pulling the Docker images; this requires the jumphost to have these images locally, and ensures Dovetail can run in an offline environment
- report: push results to the database or store them in files
- testsuite: set the test suite to be tested
- debug: flag to show the debug log messages