This wiki page is based on an email sent to the Anuket TSC on Sept 6, 2021.
Introduction
During the Lakelse release, some issues with Reference Conformance (RC) became apparent.
An important objective of the release process is to make the development status of deliverables visible to the TSC and the community. Despite steps in the release process intended to provide that visibility, RC lacks transparency. The following text discusses some ways to improve RC transparency.
RC Mapping to RA
RC is frequently described as being dependent on the Reference Architecture (RA). However, during the Lakelse release, it became apparent that RC is, at best, loosely coupled to RA. More accurately, RC seems to be based primarily on OpenStack releases. The apparent assumption is that this will result in sufficient, although unknown, coverage of RA. There doesn't appear to have been any attempt to understand or even read the RA requirements during the development of RC.
Milestone 2 (M2) in the release process requires a description of the coverage of RA by RC. However, RC does not provide a clear mapping of test cases to RA, making it unclear what the coverage is, how the coverage has changed from release to release, or how RC has changed in response to changes in RA.
Not only does this obfuscate coverage, but it also contradicts the requirements set forth by the RC documentation itself. For example: "The conformance specifications must provide the mapping between tests and requirements to demonstrate traceability and coverage."
In addition, the lack of mapping creates confusion for Reference Implementation (RI) development, since RI depends on both RA and RC. For example, if RC includes test cases not explicitly mapped to RA, then RI may spend time on test cases that are not actually required.
In order to meet the requirements documented in RC, I suggest that the RC work stream team create and publish a simple table that displays RA requirement references in one column, and corresponding test cases in another. This table should be updated for every release.
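As a purely illustrative sketch, such a table might look like the following; the requirement identifiers and test case names here are hypothetical placeholders, not actual RA requirements or RC tests:

    RA requirement reference      | RC test case(s)
    req.example.01 (hypothetical) | suite.example_test_a (hypothetical)
    req.example.02 (hypothetical) | suite.example_test_b, suite.example_test_c

A requirement with no entry in the second column would immediately show a coverage gap, and a test case that appears in no row would flag work that is not traceable to RA.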
Test Case Information
The RC documentation includes lists of test case names. However, there are no links to detailed information about the test cases.
For example, a requirement author, or other stakeholder, might want to understand how a requirement is being tested.
As another example, when the engineering team conducts a review of RA documentation for each release, they occasionally want to see how a requirement is being tested.
RC Validation Testing
We discussed validation testing at the Sept 6 Technical Discussion meeting. At the meeting I learned that "validation" is performed by running the test suites against a known-good environment and verifying that the test cases pass.
- Milestone 1 (M1) requires a validation plan for RC. Instead of a validation plan, I was given a pointer to the RC documentation. As a result, the community has no documented description of how RC is being validated.
- Based on feedback from the meeting, the validation testing does not include fault insertion to confirm that test cases fail when they should. This may result in false negatives, that is, tests that pass on infrastructure that does not actually conform.
- The validation testing seems to be conducted in just a single configuration/environment. This may lead to test cases becoming tuned to that one environment and, in turn, producing inaccurate results in a different but still valid environment.
- There is no dashboard or other convenient means to see the results of validation testing. Also, there are no periodic reports or other updates on the results.
Conclusions
Test Case Information
The RC documentation lists test case names but provides no links to detailed information about them. This could be corrected by adding links from the test case names to expanded information about each test case. In addition, test cases should include reference numbers to facilitate mapping to RA and to help differentiate similar-sounding test cases.
RC Validation Testing
RC currently lacks a validation plan, despite this being required at Milestone 1 (M1) of the release plan. This means that the TSC and the community at large have no documented way to understand how one of the project's most important release artifacts is being validated.
During the Technical Discussion meeting on Sept 6, we discussed validation testing. Based on that discussion, my understanding is that RC is validated by simply running the test cases on known-good infrastructure and verifying that there are no failures. Unfortunately, this leaves out a key aspect of validation testing, which is confirming that the tests also fail when they should. In addition, the validation is only run in one environment, which means that over time, test cases become optimized for that environment, potentially leading to inaccurate results on different, yet still valid, infrastructure.
Validation testing could be improved by inserting faults and verifying that test cases fail appropriately. It would also be helpful to run the tests in multiple environments to avoid optimizing tests on a single environment.
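As a minimal sketch of that kind of negative validation (the run-rc-suite command and the configuration file names below are hypothetical placeholders, not part of the actual RC tooling), the idea is simply to run the suite once against a known-good environment and once against an environment with a deliberately injected fault, and to confirm that the outcomes differ:

    # Minimal sketch of positive and negative validation runs.
    # "run-rc-suite" is a hypothetical wrapper around the RC test suites;
    # the real suites are invoked differently.
    import subprocess

    def run_suite(env_config):
        """Run the conformance suite against the environment described by env_config."""
        result = subprocess.run(["run-rc-suite", "--config", env_config])
        return result.returncode

    # Positive validation: a known-good environment should produce no failures.
    assert run_suite("known-good-env.yaml") == 0, "suite should pass on a valid environment"

    # Negative validation: an environment with an injected fault (for example, a
    # required service disabled) should be reported as failing.
    assert run_suite("fault-injected-env.yaml") != 0, "suite should fail on an invalid environment"

Repeating the same pair of checks in more than one lab would also address the single-environment concern noted above.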
Test Results
Although the test cases are run in a continuous integration (CI) environment, the community does not have readily available access to the results. The TSC, or a community member, should be able to independently see the results and determine the status of RC testing.
This issue could be resolved by creating a publicly available dashboard of RC testing results, and by providing regular status updates or reports to the TSC.