...
- All test cases must be fully documented, in a common format, clearly identifying the test procedure and the expected results / metrics used to determine a “pass” or “fail” result for the test case (a sketch of such a description follows this list).
- DN: We currently list a set of things which must be documented for test cases - is this insufficient, in combination with the test strategy document?
- lylavoie - No, we need to have the actual list of what things are tested in Dovetail, and how those things are tested. Otherwise, how can we even begin to know whether the tool tests the things we think it does (i.e. validate the tool)?
- Tests and tool must support / run on both vanilla OPNFV and commercial OPNFV-based solutions (i.e. the tests and tool cannot use interfaces or hooks that are internal to OPNFV, e.g. something only available during deployment / install / etc.).
- DN: Again, there is already a requirement that tests pass on reference OPNFV deployment scenarios
- lylavoie: Yes, but it cannot do that by requiring access to something “under the hood.” This might be obvious, but it's an important requirement for Dovetail developers to know.
- Tests and tool must run independently of the installer (Apex, Joid, Compass) and architecture (Intel / ARM).
- DN: This is already in the requirements: "Tests must not require a specific NFVi platform composition or installation tool"
- Tests and tool must run independently of specific OPNFV components, allowing different components to be “swapped in”. An example would be using a storage backend other than Ceph.
- DN: This is also covered by the above test requirement
- Tool / Tests must be validated for purpose, beyond running on the platform (this may require each test to be run with both an expected positive and negative outcome, to validate the test/tool for that case).
- DN: I do not understand what this proposal refers to
- lylavoie - The tool and program must be validated. For example, if a test case's purpose is to verify that a specific API is implemented or functions in a specific way, we need to verify that the test tool actually tests that API/function. Put differently, we need to check that the test tool doesn't produce false passes or false fails on devices under test. This is far beyond a normal CI-type test (i.e. did it compile and pass some unit tests). See the validation sketch after this list.
- Tests should focus on functionality and not performance.
- Performance test output could be built in as “for information only,” but must not carry pass/fail metrics (see the result-record sketch after this list).
- DN: This is covered in the CVP already
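As an illustration of the documentation requirement above, below is a minimal sketch of the kind of fields a commonly formatted test case description could carry; the field names and the example content are hypothetical, not an agreed Dovetail schema.

```python
# Minimal sketch of a test case description record; field names and the
# example content are hypothetical, not an agreed Dovetail format.
EXAMPLE_TEST_CASE = {
    "id": "dovetail.example.tc001",  # hypothetical identifier
    "objective": "Verify that tenant networks are isolated from each other",
    "procedure": [
        "Create two tenants, each with one network and one VM",
        "Send probe traffic from tenant A's VM toward tenant B's VM",
    ],
    "expected_result": "Probe traffic between the tenants is dropped",
    "pass_criteria": "0% of probe packets delivered across tenant networks",
}
```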
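To make the “validated for purpose” requirement concrete, here is a minimal sketch, assuming each test case can be run as a callable against a reference target known to comply and one known not to; it flags false passes and false fails rather than just whether the case ran. The function and argument names are illustrative, not Dovetail APIs.

```python
def validate_for_purpose(run_test_case, compliant_target, noncompliant_target):
    """Check that a test case discriminates correctly: it should pass on a
    known-compliant target and fail on a known-non-compliant one.
    All names here are illustrative; this is not Dovetail code."""
    passes_compliant = run_test_case(compliant_target)        # expected: True
    passes_noncompliant = run_test_case(noncompliant_target)  # expected: False

    if not passes_compliant:
        print("false FAIL: the case rejects a known-compliant target")
    if passes_noncompliant:
        print("false PASS: the case accepts a known-non-compliant target")

    return passes_compliant and not passes_noncompliant

# Example (hypothetical): validate a conformance case against two lab systems.
# ok = validate_for_purpose(check_tenant_isolation, reference_pod,
#                           pod_with_isolation_disabled)
```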
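The “for information only” treatment of performance data could look like the result record sketched below, where the pass/fail verdict is computed solely from functional checks and performance numbers are attached without influencing it; the structure and names are assumptions for illustration only.

```python
# Sketch of a result record that keeps performance data informational only;
# the verdict is derived exclusively from functional checks. Hypothetical names.
def build_result(functional_checks_passed, perf_measurements):
    return {
        "verdict": "PASS" if functional_checks_passed else "FAIL",
        # Reported for information only; never consulted for the verdict.
        "informational_metrics": perf_measurements,
    }

# Example: a functional pass with throughput and latency reported but not judged.
result = build_result(True, {"throughput_mbps": 940, "latency_ms": 1.8})
```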
...