Anuket Project

Libvirt Executed tests

NOTE:

  • Tests cover CMT, CPU pinning info, CPU utilization, State metrics only.
  • Several tests were added for interaction coverage (restarting libvirtd, disabling metrics for a VM, stopping a VM, etc.).
  • Added tests for the MBM metric.
  • Added 'sanity' tests for other metrics: CPU cycles/instructions, cache misses/references, interface statistics, disk and memory data.
  • Added tests for disk errors, file system information and job statistics.

Test Environment details:

  • Bare metal, Ubuntu 16.04.1 LTS
  • Kernel version: 4.4.0-43-generic

Repo/branch used:

  • collectd/ feat_libvirt_upstreamed

Test preconditions:

  • libvirt version used: 2.4.0 (3.1.0)
  • VM is started:

    virsh start demo

    root@silpixa00390838:~/orest/csv# virsh list
    Id Name State
    ----------------------------------------------------
    6 demo running

Collectd is started with the csv write plugin enabled and the following virt plugin configuration:
Interval 2
LoadPlugin virt

<Plugin virt>
Connection "qemu:///system"
RefreshInterval 60
# Domain "demo"
# BlockDevice "name:device"
# BlockDeviceFormat target
# BlockDeviceFormatBasename false
# InterfaceDevice "name:device"
# IgnoreSelected false
# HostnameFormat name
# InterfaceFormat name
# PluginInstanceFormat name
Instances 1
#ExtraStats "cpu_util disk disk_err domain_state fs_info job_stats_background pcpu perf vcpupin memory_last_update"
</Plugin>
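
The configuration above can be sanity-checked and the csv output located from the shell; a minimal sketch, assuming collectd.conf lives at /etc/collectd/collectd.conf and the csv plugin writes under ~/orest/csv as in the outputs below:

    # Parse and validate the configuration only, without starting the daemon
    collectd -t -C /etc/collectd/collectd.conf
    # The csv plugin writes one file per metric per day under its data directory,
    # e.g. <DataDir>/<domain>/virt/
    ls ~/orest/csv/demo/virt/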

 

Table #1. New metrics test results.

Columns: # | Test Summary | Steps | Expected | Status
Test 1: Verify that virt plugin dispatches CMT metrics.
  1. Start collectd with virt plugin and write plugin enabled (Interval is set to 1 second):
    Interval 1
  2. Get cmt metric from VM.
    root@silpixa00390838:~/orest/dpdk_xstats-collectd# virsh domstats demo --perf
    Domain: 'demo'
    perf.cmt=196608
    perf.cpu_cycles=711466301
    perf.instructions=682427381
  3. Wait 10 seconds (this is done in order to catch a value written by collectd).
  4. Stop collectd
  5. Get collectd data:

    root@silpixa00390838:~/orest/csv# tail -f demo/virt/perf-perf_cmt-2016-12-27

    1482836073.410,229376.000000
    1482836074.408,327680.000000
    1482836075.408,425984.000000

  6. Verify that value perf.cmt=196608 is present in collectd data.

     

Since the cmt performance metric is continuously changing, it is difficult to catch equal values in virsh and in the collectd data. Therefore we verify that perf.cmt=196608 is present in the collectd data (a grep sketch follows this test's status).

 

Pass
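
The check in step 6 can be scripted; a minimal sketch, using the csv file name from the output above:

    # Hypothetical helper: confirm the value reported by virsh appears in the collectd csv data
    grep -q '196608' demo/virt/perf-perf_cmt-2016-12-27 \
      && echo 'perf.cmt value found in collectd data'
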
Test 2: Verify that virt plugin does not dispatch CMT metrics when CMT has been disabled in VM.
  1. Make sure that CMT metrics are dispatched by collectd:

    tail -f demo/virt/perf-perf_cmt-2016-12-29
    1483023819.786,1376256.000000
    1483023821.786,1376256.000000
    1483023823.786,1376256.000000

  2. Disable CMT metric in VM:
    virsh perf demo --disable cmt --live

  3. Verify that plugin stops dispatching cmt metrics:
    tail -f demo/virt/perf-perf_cmt-2016-12-29
    1483023819.786,1376256.000000
    1483023821.786,1376256.000000
    1483023823.786,1376256.000000

Virt plugin should stop dispatching CMT metric when CMT is dynamically disabled for VM.

PASS
Test 3: Verify that virt plugin dispatches CPU pinning info metrics.
 
  1. Get CPU pinning info using the virsh tool:

    root@silpixa00390838:~# virsh vcpupin demo
    VCPU: CPU Affinity
    ----------------------------------
    0: 0-15

  2. Make sure that CPU pinning info metric is dispatched for all CPUs by collectd:

    root@silpixa00390838:~/orest/csv# ls demo/virt/cpu_affinity-vcpu_0-cpu_*
    cpu_affinity-vcpu_0-cpu_0-2017-01-03 cpu_affinity-vcpu_0-cpu_12-2017-01-03 cpu_affinity-vcpu_0-cpu_2-2017-01-03 cpu_affinity-vcpu_0-cpu_6-2017-01-03
    cpu_affinity-vcpu_0-cpu_10-2017-01-03 cpu_affinity-vcpu_0-cpu_13-2017-01-03 cpu_affinity-vcpu_0-cpu_3-2017-01-03 cpu_affinity-vcpu_0-cpu_7-2017-01-03
    cpu_affinity-vcpu_0-cpu_11-2017-01-03 cpu_affinity-vcpu_0-cpu_14-2017-01-03 cpu_affinity-vcpu_0-cpu_4-2017-01-03 cpu_affinity-vcpu_0-cpu_8-2017-01-03
    cpu_affinity-vcpu_0-cpu_1-2017-01-03 cpu_affinity-vcpu_0-cpu_15-2017-01-03 cpu_affinity-vcpu_0-cpu_5-2017-01-03 cpu_affinity-vcpu_0-cpu_9-2017-01-03

  3. root@silpixa00390838:~/orest/csv# tail -f demo/virt/cpu_affinity-vcpu_0-cpu_0-2017-01-03
    1483439612.731,1.000000
    1483439614.729,1.000000

VCPU pinning info is dispatched for all CPUs:

root@silpixa00390838:~/orest/csv# tail -f demo/virt/cpu_affinity-vcpu_0-cpu_0-2017-01-03
1483439612.731,1.000000
1483439614.729,1.000000

VCPU-0 is pinned to all 16 CPUs, as shown by the virsh tool (a summarising sketch follows this test's status).

PASS
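
The per-CPU affinity files can be summarised in one pass to cross-check the mask reported by virsh vcpupin; a sketch, assuming the file names shown above:

    # Print the latest affinity value (1.0 = pinned, 0.0 = not pinned) for every CPU of vcpu 0
    for f in demo/virt/cpu_affinity-vcpu_0-cpu_*-2017-01-03; do
      printf '%s %s\n' "$f" "$(tail -n1 "$f" | cut -d, -f2)"
    done
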
Test 4: Verify that virt plugin changes the CPU pinning info metric when its value is changed.
  1. Get CPU pinning info using virsh tool:

    root@silpixa00390838:~# virsh vcpupin demo
    VCPU: CPU Affinity
    ----------------------------------
    0: 0-15

  2. Make sure that the pinning value for CPU-15 is equal to 1 in the collectd write plugin data:
    root@silpixa00390838:~/orest/csv# tail -f demo/virt/cpu_affinity-vcpu_0-cpu_15-2017-01-03
    1483440114.729,1.000000
    1483440116.729,1.000000
    1483440118.729,1.000000
    1483440120.729,1.000000

  3. Change CPU pinning using virsh tool:

    root@silpixa00390838:~# virsh vcpupin demo --vcpu 0 --cpulist 0-14

     

  4. Verify that CPU pinning info is changed to 0 in the collectd write plugin data:

    root@silpixa00390838:~/orest/csv# tail -f demo/virt/cpu_affinity-vcpu_0-cpu_15-2017-01-03
    1483440132.730,1.000000
    1483440134.729,1.000000
    1483440136.729,0.000000
    1483440138.729,0.000000

CPU pinning info is changed to 0 in the collectd data when its value is changed using the virsh tool.

PASS
Test 5: Verify that virt plugin dispatches CPU utilization per VCPU in nanosecond format.
  1. Start collectd with virt plugin and write plugin enabled.
  2. Get vcpu metric from VM.
  3. Wait collectd interval time for value update.
  4. Stop collectd and get collectd data.
  5. Compare utilizations for all vcpu in VM.
  1. Collectd, libvirt are running.
  2. virsh vcpuinfo U2
    VCPU:           0
    CPU time:       668.8s
  3. -
  4. tail -n2 U2/virt/virt_vcpu-0-2017-02-14
    1487092445.317,668860000000
    1487092450.319,668860000000
  5. Values are equal (in seconds); see the conversion sketch after this test's status.
 PASS
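
Since the csv stores vcpu time in nanoseconds while virsh vcpuinfo prints seconds, the comparison in step 5 is a unit conversion; a sketch using the file name from the output above:

    # Convert the newest csv sample (nanoseconds) to seconds for comparison with "CPU time: 668.8s"
    awk -F, 'END { printf "%.1f s\n", $2 / 1e9 }' U2/virt/virt_vcpu-0-2017-02-14
    # 668860000000 ns -> 668.9 s, i.e. within one sample of the virsh value
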
Test 6: Verify that a notification is raised when the Virtual Machine state changes.
  1. Start VM: virsh start vm_name
  2. Using exec plugin get notification message.
  3. Reset VM: virsh reset vm_name
  4. Using exec plugin get notification message.
  5. Suspend VM: virsh suspend vm_name

  6. Using exec plugin get notification message.
  7. Resume VM: virsh resume vm_name

  8. Using the exec plugin, get the notification message (a capture sketch follows this test's status).

Notification message with reason: "normal startup from boot" appears

Notification message with reason: "normal startup from boot" appears

Notification message with reason: "paused on user request" appears

Notification message with reason: "returned from paused state" appears

PASS
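
A state change can be provoked and the resulting notification inspected directly, assuming a NotificationExec script that appends incoming notifications to a file (such as the one configured in test #26 below):

    # Suspend the domain, give collectd a moment to dispatch, then inspect the captured notification
    virsh suspend demo
    sleep 2
    tail -n 12 /home/test/notifications   # path taken from the exec script in test #26
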
Test 7: Verify that virt plugin starts dispatching data for newly created VM within RefreshInterval.
  1. One VM is in running state:

    root@silpixa00390838:~# virsh list --all
    Id Name State
    ----------------------------------------------------
    4 demo running
    - demo1 shut off

  2. Set RefreshInterval to 10 seconds in collectd.conf:
    RefreshInterval 10

  3. Start collectd and immediately start second VM.
    root@silpixa00390838:~# virsh start demo1

  4. Make sure that data appears after the RefreshInterval (a timing sketch follows this test's status).

Collectd dispatches VM metrics after the RefreshInterval for the newly created VM.

PASS
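
How quickly the new domain's data appears can be timed from the shell; a sketch, assuming the csv layout used above:

    # Time how long it takes for demo1's first CPU sample to appear after starting the VM
    # (with RefreshInterval 10 this should be within ~10 s plus one read interval)
    virsh start demo1
    time bash -c 'until ls demo1/virt/ 2>/dev/null | grep -q percent-virt_cpu_total; do sleep 1; done'
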
Test 8: Verify that virt plugin stops dispatching data for deleted VM.
  1. Two VMs are running:

    root@silpixa00390838:~# virsh list
    Id Name State
    ----------------------------------------------------
    4 demo running
    5 demo1 running

  2. Set RefreshInterval to 10 seconds in collectd.conf:
    RefreshInterval 10

  3. Start collectd, immediately stop second VM and remove collectd data.
  4. Make sure that some collectd metrics are still dispatched (state metrics):

    root@silpixa00390838:~/orest/csv# ls; virsh destroy demo1; rm -rf ./*; sleep 2; tail -f demo1/virt/domain_state-2017-01-03;
    demo demo1
    Domain demo1 destroyed

    epoch,state,reason
    1483443087.341,5.000000,2.000000
    1483443089.340,5.000000,2.000000
    1483443091.341,5.000000,2.000000
    1483443093.340,5.000000,2.000000

  5. Verify that collectd metrics stop being dispatched after the RefreshInterval.

Virt plugin stops dispatching data after the VM is deleted, within the RefreshInterval.

PASS
Test 9: Verify that virt plugin resumes dispatching data after libvirtd has been restarted.
  1. Restart libvirtd service.

    root@silpixa00390838:~# systemctl restart libvirtd

  2. Wait until service is restarted.

    root@silpixa00390838:~# systemctl status libvirtd
    ● libvirt-bin.service - Virtualization daemon
    Loaded: loaded (/lib/systemd/system/libvirt-bin.service; enabled; vendor preset: enabled)
    Active: active (running) since Tue 2017-01-03 11:36:04 GMT; 1min 49s ago

  3. Verify that virt plugin resumes collecting metrics.

Virt plugin resumes collecting metrics after the libvirtd service has been restarted.

PASS
Test 10: Verify that virt plugin resumes dispatching data after VM has been restarted.
  1. Restart VM:

    root@silpixa00390838:~# virsh destroy demo
    Domain demo destroyed

    root@silpixa00390838:~# virsh start demo
    Domain demo started

     

  2. Tail one of VM metrics(CPU total utilization):

    root@silpixa00390838:~/orest/csv# tail -f demo/virt/percent-virt_cpu_total-2017-01-03
    1483443552.844,0.000000
    1483443554.844,0.031250

  3. Verify that virt plugin resumes collecting metrics.
Virt plugin resumes collecting metrics after the VM has been destroyed and started.

PASS
Test 11: Verify that libvirt plugin correctly displays CPU utilization in percent in regular mode.
  1. Start virt-top on server where VM is started
  2. Open libvirt plugin file where CPU utilization is stored
    tail -f percent-virt_cpu_total-2017-01-03
  3. Compare values.
    NOTE: virt-top rounds its output to one decimal place, and the values increment at different rates.

Values are very similar, except for a zero value every 10 seconds (a virt-top capture sketch follows this test's status).

Note: if the CPU is not loaded, values around zero will be retrieved.

PASS
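
For a like-for-like comparison, virt-top can be run non-interactively and its %CPU column compared with the collectd csv; a sketch (the virt-top batch options are an assumption about the locally installed build):

    # Capture 10 virt-top samples at a 2-second delay into a csv file, then watch the collectd value
    virt-top --script --csv /tmp/virt-top.csv -d 2 -n 10
    tail -f demo/virt/percent-virt_cpu_total-2017-01-03
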

 

Test 12: Verify that libvirt plugin correctly displays CPU utilization in CPU load mode.
  1. Start virt-top on server where VM is started
  2. Open libvirt plugin file where CPU utilization is stored
    tail -f percent-virt_cpu_total-2017-01-03
  3. Start stress tool on VM
    stress --cpu 2 --io 1 --vm 1 --vm-bytes 128M --timeout 20s
  4. Compare values.
    NOTE: virt-top rounds its output to one decimal place, and the values increment at different rates.
Values are very similar, except for a zero value every 10 seconds.

PASS

Note: except for the speed-up period, the values are approximately correct.

Test 13: Verify that libvirt plugin correctly displays CPU utilization in percent upon VM restart.
  1. Start virt-top on server where VM is started
  2. Open libvirt plugin file where CPU utilization is stored:
    tail -f percent-virt_cpu_total-2017-01-03
  3. Start stress tool on VM:
    stress --cpu 2 --io 1 --vm 1 --vm-bytes 128M --timeout 60s
  4. Start/Stop a VM within 60 seconds:
    virsh destroy <VM_name>; sleep 1; virsh start <VM_name>
  5. Compare values.
    NOTE: virt-top rounds its output to one decimal place, and the values increment at different rates.
Values are very similar, except for a zero value every 10 seconds.

PASS

 

Test 14: Verify that libvirt plugin doesn't update CPU utilization if collectd is disabled.
  1. Start virt-top on server where VM is started
  2. Open libvirt plugin file where CPU utilization is stored
    tail -f percent-virt_cpu_total-2017-01-03
  3. Compare values
    NOTE: virt-top rounds its output to one decimal place, and the values increment at different rates.
  4. Stop collectd
  5. Compare values
Update of the percent-virt_cpu_total file has stopped.

PASS
Test 15: Verify that libvirt plugin resumes updating CPU utilization when collectd is started.
  1. Start virt-top on server where VM is started
  2. Open libvirt plugin file where CPU utilization is stored
    tail -f percent-virt_cpu_total-2017-01-03
  3. Stop collectd
  4. Compare values
    NOTE: virt-top rounds its output to one decimal place, and the values increment at different rates.
  5. Start collectd
  6. Compare values

Update of the percent-virt_cpu_total file has resumed.

NOTE: it can take up to 10 seconds before the first value appears.

PASS
Test 16: Verify that CPU utilization values are correct over at least 30-40 seconds.
  1. Start virt-top on server where VM is started
  2. Open libvirt plugin file where CPU utilization is stored
    tail -f percent-virt_cpu_total-2017-01-03
  3. Start stress tool on VM
    stress --cpu 2 --io 1 --vm 1 --vm-bytes 128M --timeout 40s
  4. Compare values
    NOTE: virt-top rounds its output to one decimal place, and the values increment at different rates.
Values are very similar, except for a zero value every 10 seconds.

PASS

 

Test 17: Verify libvirt collectd plugin MBM metric behavior upon enable/disable of mbmt/mbml.
  1. Start collectd with virt and csv (or other write) plugin enabled, libvirt.
  2. Start vm and enable mbmt metric. Run some activity in VM:
    stress --cpu 2 --io 1 --vm 1 --vm-bytes 128M --timeout 20s
  3. Disable mbmt and enable mbml metric.
  4. Disable mbml metric.
  5. Enable both mbmt and mbml metrics (see the virsh perf sketch after this test's status).
  1. Collectd, libvirt are running.
  2. mbmt changes observed by write plugin and similar to perf MBM statistic.
  3. mbml changes observed by write plugin and similar to perf MBM statistic.
  4. Neither mbmt nor mbml metric changes observed in MBM statistic.
  5. Both mbmt and mbml metric changes observed by write plugin and similar to perf MBM statistic.
PASS
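
The mbmt/mbml toggling in steps 2-4 is done with virsh perf on the running domain; a sketch, with csv file names assumed to follow the perf-perf_<event> pattern seen in the CMT tests:

    # Enable total memory bandwidth monitoring, then switch to local-only, watching the csv output
    virsh perf demo --enable mbmt --live
    tail -n3 demo/virt/perf-perf_mbmt-*        # assumed file name pattern
    virsh perf demo --disable mbmt --enable mbml --live
    tail -n3 demo/virt/perf-perf_mbml-*        # assumed file name pattern
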
Test 18: Verify libvirt collectd plugin MBM metric updates every interval time set in collectd.conf.
  1. Start collectd with virt and csv (or other write) plugin enabled, libvirt.
  2.  Start vm and enable mbmt metric. Run some activity in VM: stress --cpu 2 --io 1 --vm 1 --vm-bytes 128M --timeout 20s
  3. Change interval in collectd.conf. Restart collectd.
  4. Repeat step#3 for different time intervals.
  1. Collectd, libvirt are running.
  2. mbmt changes observed by write plugin and similar to perf MBM statistic.
  3. MBM metrics updated every new interval and similar to perf MBM statistic.
  4. MBM metrics updated every new interval and similar to perf MBM statistic.
PASS
Test 19: Verify MBM metric upon collectd stop/start.
  1. Start collectd with virt and csv (or other write) plugin enabled, libvirt.
  2. Start VM with enabled mbmt/mbml metric (run some activity in VM).
  3. Stop collectd.
  4. Start collectd.
  1. Collectd, libvirt are running.
  2. MBM changes observed by write plugin and similar to perf MBM statistic.
  3. MBM metric observed by write plugin not updated, perf MBM statistic is changed.
  4. MBM changes observed by write plugin and similar to perf MBM statistic.
PASS
Test 20: Verify libvirt collectd plugin MBM metric by commenting/uncommenting 'virt' in collectd.conf.
  1. Start collectd with virt and csv (or other write) plugin enabled, libvirt.
  2. Start VM with enabled mbmt/mbml metric (run some activity in VM).
  3. Comment out the 'LoadPlugin virt' line in collectd.conf. Restart collectd.
  4. Uncomment the 'LoadPlugin virt' line in collectd.conf. Restart collectd.
  5. Comment out the '<Plugin virt>' block.
  6. Uncomment the '<Plugin virt>' block.
  1. Collectd, libvirt are running.
  2. MBM changes observed by write plugin and similar to perf MBM statistic.
  3. MBM metric observed by write plugin not updated, perf MBM statistic is changed.
  4. MBM changes observed by write plugin and similar to perf MBM statistic.
  5. MBM changes observed by write plugin and similar to perf MBM statistic (default values are taken?).
  6. MBM changes observed by write plugin and similar to perf MBM statistic.
PASS
Test 21: Verify MBM metric after libvirt service restart.
  1. Start collectd with virt and csv (or other write) plugin enabled, libvirt.
  2. Start VM with enabled mbmt/mbml metric (run some activity in VM).
  3. Stop libvirtd.
  4. Start libvirtd.
  1. Collectd, libvirt are running.
  2. MBM changes observed by write plugin and similar to perf MBM statistic.
  3. MBM metric observed by write plugin not updated, perf MBM statistic is changed.
  4. MBM changes observed by write plugin and similar to perf MBM statistic.
PASS
Test 22: Verify MBM metric after VM start/destroy.
  1. Start collectd with virt and csv (or other write) plugin enabled, libvirt.
  2. Start VM with enabled mbmt/mbml metric (run some activity in VM).
  3. Destroy (stop) VM.
  4. Start VM.
  1. Collectd, libvirt are running.
  2. MBM metric changes observed by write plugin and similar to perf MBM statistic.
  3. MBM metric observed by write plugin not updated.
  4. MBM metric changes observed by write plugin and similar to perf MBM statistic.
PASS
Test 23: Verify libvirt collectd plugin MBM metric from two VMs.
  1. Start collectd with virt and csv (or other write) plugin enabled, libvirt.
  2. Start two VMs with enabled mbmt/mbml metric (run some activity in VM).
  3. Run a stress test on both VMs.
  1. Collectd, libvirt are running.
  2. MBM changes observed by write plugin and similar to perf MBM statistic for both VMs.
  3. MBM changes observed by write plugin and similar to perf MBM statistic for both VMs.
PASS
Test 24: Verify MBM metric after VM reboot, suspend, resume.
  1. Start collectd with virt and csv (or other write) plugin enabled, libvirt.
  2. Start VM with enabled mbmt/mbml metric (run some activity in VM).
  3. Reboot VM (virsh reboot <domain name>).
  4. Suspend/resume VM (virsh suspend/resume <domain name>).
  1. Collectd, libvirt are running.
  2. MBM metric changes observed by write plugin and similar to perf MBM statistic.
  3. MBM metric observed by write plugin not updated.
  4. MBM metric changes observed by write plugin and similar to perf MBM statistic.
PASS
Test 25: Verify zero disk errors are collected by virt plugin.
  1. Start collectd with virt and write plugin enabled in collectd.conf.
    ExtraStats "disk_err"
  2. Start VM
  3. Get disk error information using the virsh tool and make sure no errors are present:
    virsh domblkerror silvixa00398939a
  4. Start parsing syslog and make sure that the plugin reports zero disk errors.
  5. Verify that the plugin does not collect any errors (a cross-check sketch follows this test's status).

Plugin does not collect any errors, and syslog shows that zero disk errors are reported by collectd.

PASS
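
A quick cross-check of step 3 against steps 4-5 from the shell; a sketch (the syslog grep pattern is a guess and depends on which log/write plugin is enabled):

    # virsh side: expect "No errors found"
    virsh domblkerror silvixa00398939a
    # collectd side: look for the virt plugin's disk error entries in syslog (pattern is an assumption)
    grep -i 'disk_err' /var/log/syslog | tail -n 3
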
Test 26: Verify that file system information reported by collectd corresponds to actual values of the VM.
  1. Start collectd with virt and write plugin enabled in collectd.conf.
    ExtraStats "fs_info"
  2. Make sure that exec plugin is enabled for capturing collectd notifications.
    exec_script:

    #!/bin/bash
    # Append every notification line received on stdin to a file
    rm -f /home/test/notifications
    while read -r x y
    do
      echo "$x$y" >> /home/test/notifications
    done


    collectd.conf for exec:

    <Plugin exec>
    Exec "test:test" "/home/test/exec_notification"
    NotificationExec "test:test" "/home/test/exec_notification"
    </Plugin>

  3. Get file system information using virsh utility:

    virsh domfsinfo silvixa00398939a
    Mountpoint Name Type Target
    -------------------------------------------------------------------
    / sda1 ext4 hda

  4. Get Notification data reported by collectd:

    Severity:OKAY
    Time:1490705042.261
    Host:silvixa00398939a
    Plugin:virt
    Type:file_system
    mountpoint:/
    name:sda1
    fstype:ext4
    ndevAlias:1
    devAlias:hda

    Filesystem information

  5. Verify that notification data corresponds to data retrieved by virsh utility.
Notification data corresponds to data retrieved by the virsh utility.

PASS
Test 27: Verify that job statistics are reported by virt plugin.
  1. Start collectd with virt and write plugin enabled in collectd.conf.
  2. Set collectd read interval to 0.5 second in order to catch job statistics before VM exits.
  3. Make sure that VM is in running state.
  4. Perform virsh managedsave command and get job stat information using virsh in parallel.
    virsh managedsave silvixa00398939a --bypass-cache &

    for x in {1..20}; do virsh domjobinfo silvixa00398939a; sleep 0.5; done

  5. Make sure that job information reported by collectd corresponds to values retrieved by virsh utility.

Job information reported by collectd corresponds to values retrieved by the virsh utility.

PASS

 

Table #2. Sanity test results.

Test preconditions:

  • The write and virt plugins are configured in collectd.conf.
  • The libvirtd and collectd services are running.
  • perf metrics are enabled (either in the VM definition or later on the running VM).

 

Columns: # | Test Summary | Steps | Expected | Status
Test 1: Verify CPU cycles/instructions upon enable/disable of perf.
  1. Get CPU cycles metric from VM.
    virsh domstats U2 --perf | grep -e cpu_cycles
      perf.cpu_cycles=3304247062191
  2. Compare CPU cycles values from virsh and the write plugin (CSV); see the sketch after this test's status.
  3. Disable  CPU cycles (virsh perf U2 --disable cpu_cycles).
  4. Enable CPU cycles metric (virsh perf U2 --enable cpu_cycles)
  5. Repeat same for CPU instructions metric.

  1. CPU cycles metric retrieved.
  2. CPU cycles metric is similar.
  3. CPU cycles metric is not updated by write plugin.
  4. CPU cycles metric is similar.
  5. Same as above for CPU instructions metric.
Pass
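
The comparison in step 2 can be done directly from the shell; a sketch, with the csv file name assumed to follow the perf-perf_<event> pattern used elsewhere in this report:

    # virsh side
    virsh domstats U2 --perf | grep cpu_cycles
    # collectd side (assumed file name)
    tail -n1 U2/virt/perf-perf_cpu_cycles-2017-02-15
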
Test 2: Verify CPU cycles/instructions upon collectd start/stop, change interval.
  1. Get CPU metrics from VM and collectd write plugin.
  2. Stop collectd. Get CPU metrics from VM and collectd write plugin.
  3. Start collectd. Get CPU metrics from VM and collectd write plugin.
  4. Change interval in range 10-60 seconds (restart collectd).
    Get CPU metrics from VM and collectd write plugin.
  1. CPU cycles and instructions are updated and similar.
  2. CPU cycles and instructions are not updated.
  3. CPU cycles and instructions are updated and similar.
  4. CPU cycles and instructions are updated every interval set.
Pass
Test 3: Verify CPU cycles/instructions upon libvirtd start/stop, VM start/destroy.
  1. Get CPU metrics from VM and collectd write plugin.
  2. Stop libvirtd. Get CPU metrics from VM and collectd write plugin.
  3. Start libvirtd. Get CPU metrics from VM and collectd write plugin.
  4. Stop VM (virsh destroy <vm>).
  5. Start VM (virsh start <vm>).
  1. CPU cycles and instructions are updated and similar.
  2. Metric cannot be retrieved from VM.
  3. CPU cycles and instructions are updated and similar.
  4. Metric cannot be retrieved from VM.
  5. CPU cycles and instructions are updated and similar.
     
PASS 
Test 4: Verify cache misses/references upon enable/disable of perf.
  1. Get cache misses metric from VM.
    virsh domstats U2 --perf | grep -e cache_misses
      perf.cache_misses=36683
  2. Compare cache misses values from virsh and write plugin (CSV).
    tail -n1 U2/virt/perf-perf_cache_misses-2017-02-15
    1487151170.074,36389.000000
  3. Disable  cache misses (virsh perf U2 --disable cache_misses).
  4. Enable cache misses metric (virsh perf U2 --enable cache_misses)
  5. Repeat same for cache references metric.

  1. Cache misses metric retrieved.
  2. Cache misses metric is similar.
  3. Cache misses metric is not updated by write plugin.
  4. Cache misses metric is similar.
  5. Same as above for cache references metric:
    perf.cache_references=1815957
    1487151170.074,1795347.000000
PASS
Test 5: Verify cache misses/references upon collectd start/stop, change interval.
  1. Get cache metrics from VM and collectd write plugin.
  2. Stop collectd. Get cache metrics from VM and collectd write plugin.
  3. Start collectd. Get cache metrics from VM and collectd write plugin.
  4. Change interval in range 10-60 seconds (restart collectd).
    Get cache metrics from VM and collectd write plugin.
  1. Cache misses/references are updated and similar.
  2. Cache misses/references are not updated.
  3. Cache misses/references are updated and similar.
  4. Cache misses/references are updated every interval set.
PASS
Test 6: Verify cache misses/references upon libvirtd start/stop, VM start/destroy.
  1. Get cache metrics from VM and collectd write plugin.
  2. Stop libvirtd. Get cache metrics from VM and collectd write plugin.
  3. Start libvirtd. Get cache metrics from VM and collectd write plugin.
  4. Stop VM (virsh destroy <vm>).
  5. Start VM (virsh start <vm>).
  1. Cache misses/references  are updated and similar.
  2. Metric cannot be retrieved from VM.
  3. Cache misses/references  are updated and similar.
  4. Metric cannot be retrieved from VM.
  5. Cache misses/references  are updated and similar.
PASS
Test 7: Verify disk metrics upon collectd start/stop, change interval.
  1. Get disk number of operations/bytes read/write metrics from VM and collectd write plugin.
    virsh domblkstat --human Ubuntu-QA hda [ | grep -e operations -e bytes ]
    tail -n4 disk_octets-hda-2017-02-15
    tail -n4 disk_ops-hda-2017-02-15
  2. Stop collectd. Get disk metrics from VM and collectd write plugin.
  3. Start collectd. Get disk metrics from VM and collectd write plugin.
  4. Change interval in range 10-60 seconds. Get disk metrics from VM and collectd write plugin.
  1. Disk operations/bytes are updated and similar.
  2. Disk operations/bytes are not updated.
  3. Disk operations/bytes are updated and similar.
  4. Disk operations/bytes are updated every interval set.
PASS
Test 8: Verify disk metrics upon libvirtd start/stop, VM start/destroy.
  1. Get disk number of operations/bytes read/write metrics from VM and collectd write plugin.
  2. Stop libvirtd. Get disk metrics from VM and collectd write plugin.
  3. Start libvirtd. Get disk metrics from VM and collectd write plugin.
  4. Stop VM (virsh destroy <vm>).
  5. Start VM (virsh start <vm>).
  1. Disk operations/bytes are updated and similar.
  2. Disk operations/bytes cannot be retrieved from VM.
  3. Disk operations/bytes are updated and similar.
  4. Disk operations/bytes cannot be retrieved from VM.
  5. Disk operations/bytes are updated and similar.
PASS
Test 9: Verify interface metrics upon collectd start/stop, change interval.
  1. Get interface statistics from VM and collectd write plugin.
    virsh domifstat Ubuntu-QA vnet0; ls | grep if_.*vnet0 | xargs -i -t tail -n4 {}
    (files are: if_dropped-vnet0-2017-02-15; if_errors-vnet0-2017-02-15; 
    if_octets-vnet0-2017-02-15; if_packets-vnet0-2017-02-15)
  2. Stop collectd. Get interface statistics from VM and collectd write plugin.
  3. Start collectd. Get interface statistics from VM and collectd write plugin.
  4. Change interval in range 10-60 seconds (restart collectd).
    Get interface statistics from VM and collectd write plugin.
  1. Interface statistics are updated and similar.
  2. Interface statistics are not updated.
  3. Interface statistics are updated and similar.
  4. Interface statistics are updated every interval set.
PASS
Test 10: Verify interface metrics upon libvirtd start/stop, VM start/destroy.
  1. Get interface statistics from VM and collectd write plugin.
  2. Stop libvirtd. Get interface statistics from VM and collectd write plugin.
  3. Start libvirtd. Get interface statistics from VM and collectd write plugin.
  4. Stop VM (virsh destroy <vm>).
  5. Start VM (virsh start <vm>).
  1. Interface statistics are updated and similar.
  2. Interface statistics cannot be retrieved from VM.
  3. Interface statistics are updated and similar.
  4. Interface statistics cannot be retrieved from VM.
  5. Interface statistics are updated and similar:
    vnet0 rx_bytes 2098
    vnet0 rx_packets 32
    vnet0 rx_errs 0
    vnet0 rx_drop 0
    vnet0 tx_bytes 1402
    vnet0 tx_packets 15
    vnet0 tx_errs 0
    vnet0 tx_drop 0

    tail -n1 if_dropped-vnet0-2017-02-15
    1487172921.637,0,0
    tail -n1 if_errors-vnet0-2017-02-15
    1487172921.637,0,0
    tail -n1 if_octets-vnet0-2017-02-15
    1487172921.637,2098,1402
    tail -n1 if_packets-vnet0-2017-02-15
    1487172921.637,32,15

PASS

 

Test 11: Verify memory metrics upon collectd start/stop, change interval.
  1. Get memory metrics from VM and collectd write plugin.
    virsh domstats Ubuntu-QA | grep balloon ; ls | grep memory | xargs -i -t tail -n3 {}
    (files are: memory-actual_balloon-2017-02-15; memory-last_update-2017-02-15; memory-rss-2017-02-15; memory-swap_in-2017-02-15; memory-total-2017-02-15)
  2. Stop collectd. Get memory metrics from VM and collectd write plugin.
  3. Make changes to memory configuration (virsh setmem Ubuntu-QA --size 2097152 --live).
  4. Start collectd. Get memory metrics from VM and collectd write plugin.
  5. Change interval in range 10-60 seconds (restart collectd).
    Get memory metrics from VM and collectd write plugin.
  1. Memory metrics are updated and similar.
  2. Memory metrics are not updated.
  3. Memory rss, actual metrics are updated in VM. No changes observed in write plugin.
  4. Memory metrics are updated and similar.
  5. Memory metrics are updated every interval set.

Notes:

  1. Maximum allowed memory is not dispatched by libvirt 3.1.0.
  2. Both memory-actual_balloon and memory-total may return the same value.
 

FAIL

Internal JIRA filed

Test 12: Verify memory metrics upon VM start/destroy.
  1. Get memory statistic from VM and collectd write plugin.
  2. Stop VM (virsh destroy <vm>). Change max memory limit (virsh setmaxmem Ubuntu-QA --size 4194304).
  3. Start VM (virsh start <vm>). Get memory statistic from VM and collectd write plugin.
  1. Memory metrics are updated and similar.
  2. Memory total metrics updated in configuration but not updated by collectd write plugin.
    virsh domstats Ubuntu-QA | grep balloon.maximum
  3. Memory total metrics are updated in VM and by the collectd write plugin (a comparison sketch follows this test's status).
PASS
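
The behaviour in steps 2-3 can be confirmed by comparing the maximum reported by virsh against the memory-total csv file (file name taken from test #11 above; run from the csv directory of the Ubuntu-QA domain):

    # virsh side: new maximum after setmaxmem
    virsh domstats Ubuntu-QA | grep balloon.maximum
    # collectd side: the latest memory-total sample should only change after the VM is restarted
    tail -n1 memory-total-2017-02-15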