...
- Daily job:
- It is executed at the Intel POD.
- Requires a traffic generator (Ixia).
- Runs every day if a new change was merged into the particular branch since the last daily job execution. A daily job run takes about 14 hours, but it can take over a day if the VM running IxNetwork is slow. Please see the FAQ section below for details.
A set of performance tests is executed for OVS with DPDK support, Vanilla OVS, VPP and SRIOV. The Ixia traffic generator is used to generate RFC2544 Throughput and Back2Back traffic.
- Merge job (similar to verify job):
- It is executed at the Intel POD or at Ericsson PODs.
- Does not require a traffic generator.
- Runs whenever patches are merged to the particular branch.
- Runs a basic set of integration testcases for OVS with DPDK support, Vanilla OVS and VPP.
- If documentation files were modified, then documentation is built.
- Verify job (similar to merge job):
- It is executed at the Intel POD or at Ericsson PODs.
- Does not require a traffic generator.
- Runs every time a patch is pushed to Gerrit. On success, the patch will be marked as verified (+1 for verification).
- Runs a basic set of integration testcases for OVS with DPDK support, Vanilla OVS and VPP.
- If documentation files were modified, then documentation is built.
...
They are executed at POD12 or at Ericsson PODs, as they don't require a traffic generator. POD12 is used as the primary Jenkins slave, because execution at the Ericsson build machines became unreliable once other projects started to use them more extensively. It seems that there is a clash on resources (hugepages); see the diagnostic sketch below. There was an attempt to avoid parallel execution of VSPERF and other jobs, but it didn't help. Contact for the Ericsson POD: ________
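If a hugepages clash is suspected on a shared build machine, the host's hugepage pools can be inspected directly. A minimal diagnostic sketch using standard Linux procfs/sysfs paths (not VSPERF-specific tooling):

```
# Overall hugepage counters (total, free, reserved) on the host
grep -i huge /proc/meminfo

# Per-size pool, e.g. the 2 MB pool: configured vs. free pages
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
```

If another job has consumed the free pages, DPDK-based components will fail to allocate their hugepage-backed memory.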
FAQ
Q: Why has the VERIFY JOB failed and my patch got -1 for verification?
...
A: This is caused by the VM where the IxNetwork GUI application is executed. In the past, VSPERF used Intel-POD3, where execution of the DAILY job was stable. That means performance results were stable across daily job executions and the execution always took about 12 hours. After the move to a different Intel lab and to Intel-POD12, the performance started to fluctuate and the daily job execution takes longer with each run. Several attempts to fix these issues were made, but the issues still persist. In order to shorten the DAILY job execution, it is required to log into the VM as the "vsperf_ci" user via remote desktop and to restart the IxNetwork GUI application.
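For reference, the remote desktop session can also be opened from a Linux host with an RDP client. The snippet below is a hypothetical example using the xfreerdp client (FreeRDP); the VM address is a placeholder, not the actual POD12 value:

```
# Hypothetical: open an RDP session to the IxNetwork VM as the vsperf_ci user
# <ixnetwork-vm-address> is a placeholder - use the real VM IP/hostname
xfreerdp /v:<ixnetwork-vm-address> /u:vsperf_ci
```

After logging in, restart the IxNetwork GUI application as described above.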
Q: What to do if Jenkins slave appears to be offline?
A: Check if the Jenkins slave process is running:
```
[root@pod12-node3 ~]# ps -ef | grep jenkins
jenkins 12995 1 0 Feb13 ? 00:09:40 java -jar slave.jar -jnlpUrl https://build.opnfv.org/ci/computer/intel-pod12/slave-agent.jnlp -secret <secret> -noCertificateCheck
root 17681 17647 0 15:23 pts/0 00:00:00 grep --color=auto jenk
```
You can also restart it if needed using the "monit stop" and "monit start" commands. Example output of the "monit status" command:
```
[root@pod12-node3 ~]# monit status
Monit 5.25.1 uptime: 73d 5h 29m
Directory 'jenkins_piddir'
status OK
monitoring status Monitored
monitoring mode active
on reboot start
permission 755
uid 1001
gid 1001
access timestamp Mon, 03 Dec 2018 09:54:12
change timestamp Wed, 13 Feb 2019 14:35:01
modify timestamp Wed, 13 Feb 2019 14:35:01
data collected Thu, 14 Feb 2019 15:23:51
Process 'jenkins'
status OK
monitoring status Monitored
monitoring mode active
on reboot start
pid 12995
parent pid 1
uid 1001
effective uid 1001
gid 1001
uptime 1d 0h 48m
threads 53
children 0
cpu 0.0%
cpu total 0.0%
memory 0.7% [443.8 MB]
memory total 0.7% [443.8 MB]
security attribute (null)
disk read 0 B/s [81.8 MB total]
disk write 0 B/s [6.8 GB total]
data collected Thu, 14 Feb 2019 15:23:51
System 'pod12-node3.opnfv.local'
status OK
monitoring status Monitored
monitoring mode active
on reboot start
load average [0.00] [0.00] [0.00]
cpu 0.0%us 0.0%sy 0.0%wa
memory usage 15.2 GB [24.1%]
swap usage 0 B [0.0%]
uptime 73d 5h 30m
boot time Mon, 03 Dec 2018 09:53:25
data collected Thu, 14 Feb 2019 15:23:51
```
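If the slave process needs to be restarted, the monit service name is 'jenkins' (as shown under "Process 'jenkins'" in the status output above). A minimal example, assuming monit manages the slave as on pod12-node3:

```
# Restart the Jenkins slave process via monit
monit stop jenkins
monit start jenkins

# Confirm the process is monitored and running again
monit status
```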
Q: What to do if the IxNetwork TCL Server is down or the connection failed?
A: Currently there are 3 VSPERF user accounts for IxNetwork in the Ixia VM. Follow the procedure below to overcome the issue. All IxNetwork TCL port numbers are pre-configured; you just need to restart the services.
1. Connect to the Ixia VM (Remote Desktop) using the 'vsperf_ci' login and password. Once it is connected and the VM is launched, the system should automatically start the IxNetwork service on TCL port 9126. Open the hidden icons arrow button in the task bar and place the mouse pointer on the IxNetwork icon to see whether it shows the TCL Port Configuration. If it is not started automatically, then double click on the IxNetwork icon and it will start the service at port 9126.
2. Connect to the Ixia VM (Remote Desktop) using the 'vsperf_sandbox' login and password. Once it is connected and the VM is launched, the system should automatically start the IxNetwork service on TCL port 9127. Open the hidden icons arrow button in the task bar and place the mouse pointer on the IxNetwork icon to see whether it shows the TCL Port Configuration. If it is not started automatically, then double click on the IxNetwork icon and it will start the service at port 9127.
3. Connect to the Ixia VM (Remote Desktop) using the 'vsperf_sandbox2' login and password. Once it is connected and the VM is launched, the system should automatically start the IxNetwork service on TCL port 9128. Open the hidden icons arrow button in the task bar and place the mouse pointer on the IxNetwork icon to see whether it shows the TCL Port Configuration. If it is not started automatically, then double click on the IxNetwork icon and it will start the service at port 9128.
If the above three IxNetwork TCL services are running fine, then you are good to go.
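Reachability of the three TCL services can also be checked remotely from the Jenkins slave with a simple TCP probe. A sketch assuming a netcat variant that supports -z; the Ixia VM address is a placeholder:

```
# Probe the pre-configured IxNetwork TCL ports (9126-9128)
# <ixia-vm-address> is a placeholder - use the real Ixia VM IP/hostname
for port in 9126 9127 9128; do
    nc -zv <ixia-vm-address> "$port"
done
```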
Ideas
Configure a 2nd Jenkins slave for execution of VSPERF jobs.
...