This page is a temporary location to draft and validate the testing guide documentation. Once it is in decent draft shape, I will submit it to the documentation Gerrit, and the respective section authors can complete the doc through Gerrit review.
Reference: CVP OVP and Dovetail Terminology
I will use the terms defined in the above WIKI. Those terms will also be copied to the documentation's term section when they are in decent shape.
...
The jumphost is not part of the SUT in Dovetail, but is required by the Dovetail testing apparatus. Dovetail installs Docker-based containers and other testing tools on this host to drive the automated testing procedures. The host must have the necessary network connections so that it has reliable access to both the public Internet and the VIM's API interface. While Dovetail does not specify a strict minimum hardware requirement, and Docker itself has very modest minimum requirements, we still recommend at least a workstation-class host with a 64-bit processor that meets or exceeds the following specification: /* please verify these are reasonable recommendations ??? */
Processor: 2-4 cores
Memory: 32 GB RAM
Hard disk space: 60 GB
The jumphost can itself be a virtual machine or container. However, for simplicity, this document always assumes that it is a bare metal machine.
1.2.2 Controller and Compute Hosts
...
1.5 Preparing the Hosts for CVP Testing
This section provides general guidelines on how to prepare the SUT for CVP testing using Dovetail. Since the SUT is a commercial vendor product based on an open source VIM (currently OpenStack), we describe the preparation steps in the context of the open source community distributions only. The tester is advised to use the information here as a reference only and to prepare the commercial SUT based on its own system specifics.
1.5.1 Host Operating Systems
The OPNFV CVP does not directly restrict the choice of host operating systems. The following operating systems have been used and tested to some degree in the OPNFV community:
- Ubuntu 16.04.2 LTS (Xenial) for x86_64
- Ubuntu 14.04 LTS (Trusty) for x86_64
- CentOS-7-1611 for x86_64
...
/* resource/configuration needed by HA */
Test Case Name | Test Case Name in Yardstick | Description | Category | Common Dependency | Special Dependency |
ha.tc001 | opnfv_yardstick_tc019 | Control Node Openstack Service High Availability | HA | | None |
ha.tc003 | opnfv_yardstick_tc045 | Control Node Openstack Service High Availability - Neutron Server | HA | | |
ha.tc004 | opnfv_yardstick_tc046 | Control Node Openstack Service High Availability - Keystone | HA | | |
ha.tc005 | opnfv_yardstick_tc047 | Control Node Openstack Service High Availability - Glance Api | HA | | |
ha.tc006 | opnfv_yardstick_tc048 | Control Node Openstack Service High Availability - Cinder Api | HA | | |
ha.tc009 | opnfv_yardstick_tc051 | OpenStack Controller Node CPU Overload High Availability | HA | | Nova, Neutron, Heat, Cinder |
ha.tc010 | opnfv_yardstick_tc052 | OpenStack Controller Node Disk I/O Block High Availability | HA | | None |
ha.tc011 | opnfv_yardstick_tc053 | OpenStack Controller Load Balance Service High Availability | HA | Controller HA deployed | haproxy, Glance deployed |
/* resource/configuration needed by sdnvpn */
Test Case Name | Test Case Name in SDNVPN | Description | Category | Common Dependency | Special Dependency |
sdnvpn.tc001 | testcase_1 | Control Node Openstack Service High Availability | SDNVPN | | |
sdnvpn.tc002 | testcase_2 | Control Node Openstack Service High Availability - Neutron Server | SDNVPN | | |
sdnvpn.tc003 | testcase_3 | Control Node Openstack Service High Availability - Keystone | SDNVPN | | |
sdnvpn.tc004 | testcase_4 | Control Node Openstack Service High Availability - Glance Api | SDNVPN | | Glance deployed |
sdnvpn.tc008 | testcase_8 | Control Node Openstack Service High Availability - Cinder Api | SDNVPN | | Cinder deployed |
2 Conducting CVP Tests using Dovetail
...
The tester should also validate that the jumphost can reach the public Internet. For example,
% ping 8.8.8.8
% dig www.opnfv.org
...
Use the following steps to check if the right version of Docker is already installed, and if not, install it.
% docker version
Because the Docker installation process is fairly involved, you can use the convenience script below instead. Note that it installs the latest version of Docker; if you do not intend to upgrade an existing Docker installation, be careful before running the script:
% wget -qO- https://get.docker.com/ | sh
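The check-then-install logic above can be combined into a small guard, for example (a sketch; `maybe_install_docker` is a hypothetical helper name, and when Docker is missing it still pulls the latest release via the convenience script):

```shell
#!/bin/sh
# Run the convenience script only when no docker client is already present.
# Note: the get.docker.com script always installs the *latest* Docker release.
maybe_install_docker() {
    if command -v docker >/dev/null 2>&1; then
        echo "docker already installed:"
        docker version 2>/dev/null || true
    else
        echo "docker not found; installing latest release"
        wget -qO- https://get.docker.com/ | sh
    fi
}

maybe_install_docker
```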
2.2.4 Installing Dovetail on Jumphost
...
% cd $DOVETAIL_HOME
% sudo git clone https://git.opnfv.org/dovetail
% cd $DOVETAIL_HOME/dovetail
% sudo pip install -e ./
/* test dovetail install is successful */
% dovetail -h
Method 2. Installing Dovetail Docker Container
The Dovetail project also maintains a Docker image that has Dovetail test tools preinstalled.
...
Currently the only available <tag> is 'latest'.
% sudo docker run --privileged=true -it -v <openrc_path>:<openrc_path> \
-v $DOVETAIL_HOME/results:$DOVETAIL_HOME/results \
-v /home/opnfv/dovetail/results:/home/opnfv/dovetail/results \
-v /home/opnfv/dovetail/userconfig:/home/opnfv/dovetail/userconfig \
-v /var/run/docker.sock:/var/run/docker.sock \
--name <DoveTail_Container_Name> (optional) \
opnfv/dovetail:<Tag> /bin/bash
2.3 Running CVP Test Suite
...
HA test cases need information about all the nodes of the OpenStack deployment, including each node's name, role, IP address, user, and key_filename or password. This information should be written in the file $DOVETAIL_HOME/dovetail/userconfig/pod.yaml; a sample file is provided at $DOVETAIL_HOME/dovetail/userconfig/sample_pod.yaml.
There are two methods to log in to the nodes:
...
- The name of each node must be node1, node2, ...
- node1 must be a controller node.
- If a private key is used to log in, key_filename must be set to /root/.ssh/id_rsa in the file $DOVETAIL_HOME/dovetail/userconfig/pod.yaml.
- If a private key is used to log in, you must copy the private key to $DOVETAIL_HOME/dovetail/userconfig/:
% cp <private_key_for_login_nodes> $DOVETAIL_HOME/dovetail/userconfig/id_rsa
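Putting the rules above together, a pod.yaml for a small SUT with private-key login could look roughly like this (a sketch modeled on sample_pod.yaml; the IP addresses and the second node's role are placeholders, while the node names, key_filename path, and controller role of node1 follow the rules above):

```yaml
nodes:
-
    name: node1                       # names must be node1, node2, ...
    role: Controller                  # node1 must be a controller node
    ip: 192.168.1.10                  # placeholder management IP
    user: root
    key_filename: /root/.ssh/id_rsa   # required path for private-key login
-
    name: node2
    role: Compute                     # placeholder role
    ip: 192.168.1.11                  # placeholder management IP
    user: root
    key_filename: /root/.ssh/id_rsa
```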
2.3.3 Making Sense of CVP Test Results
...
% cd $DOVETAIL_HOME/dovetail
% sudo git pull
% sudo pip install -e ./
This step is necessary if the Dovetail software or the CVP test suite has been updated.
...
2.4.1 Running Dovetail Locally
Dovetail supports uploading results to a database. The database can be either a local database or the official one. Before you can use a local database, you need to create it together with the testapi service. They can be installed on any machine that can reach the jumphost. Docker 1.12.3 or later should be installed on that machine. There are four steps to create the local database and testapi service.
step1. Set ports for database and testapi service
The default ports of database and testapi are 27017 and 8000, respectively. Check whether they are used by other services already.
% netstat -nlt
If 27017 and 8000 are used by other services, you need to set other ports in step2.
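The port check can be wrapped in a small helper, for example (a sketch; `check_port` is a hypothetical function name, and netstat from net-tools is assumed to be available):

```shell
#!/bin/sh
# Report whether a TCP port already has a listener on this machine.
check_port() {
    if netstat -nlt 2>/dev/null | grep -q ":$1 "; then
        echo "port $1 is in use"
    else
        echo "port $1 is free"
    fi
}

# Default ports for the local database (mongodb) and testapi service.
check_port 27017
check_port 8000
```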
step2. Run a Dovetail container
% sudo docker pull opnfv/dovetail:<Tag>
% sudo docker run -itd --privileged=true --name <DoveTail_Container_Name> \
-v /var/run/docker.sock:/var/run/docker.sock opnfv/dovetail:<Tag> /bin/bash
% sudo docker exec -it <DoveTail_Container_Name> /bin/bash
If you need to set different ports for the database and testapi service:
% export mongodb_port=<database_port>
% export testapi_port=<testapi_port>
step3. Create local database and testapi service
% cd /home/opnfv/dovetail/dovetail/utils/local_db/
% ./launch_db.sh <localhost_ip_address>
Exit this Dovetail container.
% exit
step4. Check the status of database and testapi service
The local database and testapi service actually run as two containers named mongodb and testapi. You can check whether these two containers are running:
% sudo docker ps -a
You can try to get data from the database to make sure everything is OK.
% wget <localhost_ip_address>:<testapi_port>/api/v1/results
2.4.2 Running Dovetail with an Offline SUT
...
When the "Installing Dovetail Docker Container" method is used:
% sudo mkdir /home/opnfv/dovetail/userconfig
% cd /home/opnfv/dovetail/userconfig
% touch refstack_tempest.conf defcore.txt
% vim refstack_tempest.conf
% vim defcore.txt
The recommended way to set up refstack_tempest.conf is described at https://aptira.com/testing-openstack-tempest-part-1/.
The recommended way to edit defcore.txt is to open https://refstack.openstack.org/api/v1/guidelines/2016.08/tests?target=compute&type=required&alias=true&flag=false and copy all the test cases into defcore.txt.
Then use “docker run” to create a container,
% sudo docker run --privileged=true -it -v <openrc_path>:<openrc_path> \
-v /home/opnfv/dovetail/results:/home/opnfv/dovetail/results \
-v /home/opnfv/dovetail/userconfig:/home/opnfv/dovetail/userconfig \
-v /var/run/docker.sock:/var/run/docker.sock \
--name <DoveTail_Container_Name> (optional) \
opnfv/dovetail:<Tag> /bin/bash
There is a need to adjust the CVP_1_0_0 test suite: in Dovetail, defcore.tc001.yml and defcore.tc002.yml are used for the automatic and manual running methods, respectively.
Inside the dovetail container,
% cd /home/opnfv/dovetail/compliance
% vim CVP_1_0_0.yml
to add defcore.tc002 and comment out defcore.tc001.
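After that edit, the test case list would look roughly like this (a hypothetical sketch of the file's structure; only the two defcore entries are taken from the text above):

```yaml
# CVP_1_0_0.yml (excerpt, hypothetical structure)
testcases_list:
    # - dovetail.defcore.tc001    # automatic running method, commented out
    - dovetail.defcore.tc002      # manual running method, enabled
```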
When the "Installing Dovetail on Jumphost" method is used:
% cd $DOVETAIL_HOME/dovetail
% mkdir userconfig
% cd userconfig
% touch refstack_tempest.conf defcore.txt
% vim refstack_tempest.conf
% vim defcore.txt
The recommended way to set refstack_tempest.conf and defcore.txt is the same as described above in the "Installing Dovetail Docker Container" method section.
For running the Defcore test cases manually, the compliance_set test suite needs to be adjusted: in Dovetail, defcore.tc001.yml and defcore.tc002.yml are used for the automatic and manual running methods, respectively.
% cd $DOVETAIL_HOME/dovetail/compliance
% vim CVP_1_0_0.yml
to add defcore.tc002 and comment out defcore.tc001.
...
3.1 Check dovetail commands
% dovetail -h
Dovetail has three commands: list, run and show.
3.2 List
3.2.1 List help
% dovetail list -h
3.2.2 List a test suite
The list command lists all test cases belonging to the given test suite.
% dovetail list compliance_set
% dovetail list debug
ipv6, example, and nfvi are test areas. If no <TESTSUITE> is given, all test suites are listed.
...
The show command gives detailed information about a given test case.
3.3.1 Show help
% dovetail show -h
3.3.2 Show test case
3.4 Run
Dovetail supports running a named test suite, or one named test area of a test suite.
3.4.1 Run help
% dovetail run -h
There are some options:
...