diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 5171b1c9e9..3cc3e13eab 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -2,7 +2,7 @@

 Contributions are welcome via GitHub Pull Requests. This document outlines the process to help get your contribution accepted.

-Any type of contribution is welcome; from new features, bug fixes, documentation improvements or even [adding charts to the repository](#adding-a-new-chart-to-the-repository) (if it's viable once evaluated the feasibility).
+Any type of contribution is welcome, from new features, bug fixes, [tests](#testing), documentation improvements or even [adding charts to the repository](#adding-a-new-chart-to-the-repository) (if it's viable once its feasibility has been evaluated).

 ## How to Contribute

@@ -53,13 +53,18 @@ Notice the `Author` and `Signed-off-by` lines match. If they don't your PR will

 1. Changes are automatically linted and tested using the [`ct` tool](https://github.com/helm/chart-testing) as a [GitHub action](https://github.com/helm/chart-testing-action). Those tests are based on `helm install`, `helm lint` and `helm test` commands and provide quick feedback about the changes in the PR. For those tests, the chart is installed on top of [kind](https://github.com/kubernetes-sigs/kind) and this step is not blocking (as opposed to 3rd step).
 2. Changes are manually reviewed by Bitnami team members.
-3. Once the changes are accepted, the PR is tested (if needed) into the Bitnami CI pipeline, the chart is installed and tested (verification and functional tests) on top of different k8s platforms.
+3. Once the changes are accepted, the PR is verified with [static analysis](https://github.com/bitnami/charts/blob/master/TESTING.md#Static-analysis) that includes lint and vulnerability checks. If that passes, the Bitnami team will review the changes and trigger the verification and functional tests.
 4. When the PR passes all tests, the PR is merged by the reviewer(s) in the GitHub `master` branch.
 5. Then our CI/CD system is going to push the chart to the Helm registry including the recently merged changes and also the latest images and dependencies used by the chart. The changes in the images will be also committed by the CI/CD to the GitHub repository, bumping the chart version again.

 ***NOTE***: Please note that, in terms of time, may be a slight difference between the appearance of the code in GitHub and the chart in the registry.

-[Here](https://docs.bitnami.com/kubernetes/faq/get-started/understand-charts-release-process/) you can find more information about this process.
+### Testing
+
+1. Read the [Test Strategy](https://github.com/bitnami/charts/blob/master/TESTING.md) guide.
+2. Determine the types of tests you will need based on the chart you are testing and the information in the test strategy.
+3. Before you create a pull request, make sure you meet the [Test Acceptance Criteria](https://github.com/bitnami/charts/blob/master/TESTING.md#Test-acceptance-criteria).
+4. If you meet them, congrats! Create a PR and wait for approval. Once the PR is approved, you will be able to see the results of the test execution on multiple cloud platforms (AKS, TKG, GKE).

 ### Adding a new chart to the repository

diff --git a/TESTING.md b/TESTING.md
new file mode 100644
index 0000000000..3dc6f0d468
--- /dev/null
+++ b/TESTING.md
@@ -0,0 +1,90 @@
+# Test Strategy
+
+The overall test strategy can be described as _minimalistic_, rather than extensive. The goal of the tests is to verify that the application is deployed properly, so we will verify the main application features.
+
+Decide the testing scope carefully, keeping in mind the maintainability of the tests and the value they provide.
+The tests described here are _deployment_ tests, since their goal is to verify that the software is correctly deployed with all of its inherent features. Both functional and non-functional characteristics are evaluated in these tests, focusing on the installation aspect:
+
+* The core asset features work properly when deployed
+* There are no regression bugs
+
+Before writing any test scenario, understand the primary purpose of the chart. Take a look at [the documentation about the chart under test](https://github.com/bitnami/charts/tree/master/bitnami) as well as the [docker image documentation](https://github.com/bitnami?q=docker&type=all&language=&sort=). This will give you a solid base for creating valuable test scenarios.
+
+## Common test cases
+
+* Login/Logout
+* Important files and folders are present in the docker image
+* CRUD
+* File upload
+* SMTP (if applicable)
+* Establish a connection
+* Plugins (if applicable)
+
+## How-to
+
+### Tools
+
+The current test system supports the set of tools listed below. Test execution is done in temporary environments, which are deployed and destroyed on the fly. Internal approval is needed to trigger the test execution in these environments. Keeping in mind the acceptance criteria, as well as the best practices for test automation with the supported tools, will help you get your PR approved.
+
+#### Test types
+
+* Functional tests
+* Integration tests
+
+#### Dynamic testing
+
+This part is of most interest to the contributor. This is where the test design, test execution and test maintenance efforts will be focused.
+
+If your asset has a user interface, you will **need** to include the following tests:
+
+* [Cypress](https://docs.cypress.io/guides/overview/why-cypress) (functional tests)
+* [Goss](https://github.com/aelsabbahy/goss/blob/master/docs/manual.md) (integration tests)
+
+#### Static analysis
+
+Static analysis is included for all assets as a part of the pipeline. Since this analysis is generic, it is defined internally and added as an action to the existing pipeline, so no additional work is needed on the contributor's side. The following types of static analysis are supported:
+
+* [Trivy](https://github.com/aquasecurity/trivy)
+* [Helm lint](https://helm.sh/docs/helm/helm_lint/)
+
+***NOTE***: Cypress and Goss tests need to be tailored per application under test.
+
+## Test acceptance criteria
+
+In order for your test code PR to be accepted, the following criteria must be fulfilled.
+
+### Generic
+
+- [ ] A minimum of 5 test cases per test type is needed
+- [ ] Key features of the asset need to be covered
+- [ ] Tests need to contain assertions
+- [ ] Tests need to be stateless
+- [ ] Tests need to be independent
+- [ ] Tests need to be retry-able
+- [ ] Tests need to be executable in any order
+- [ ] Test scope needs to be focused on the **installation** of the asset, not on testing the asset itself
+- [ ] Test code needs to be peer-reviewed
+- [ ] Aim to write one scenario per test type. If there is a need to divide the test code into two scenarios, the reasoning should be provided
+- [ ] Tests need to be as minimalistic as possible
+- [ ] Tests should run properly for future versions without major changes
+- [ ] Avoid hardcoded values
+- [ ] Use smart and uniform locator strategies
+- [ ] Prefer fluid waits over hardcoded waits
+- [ ] Favor URLs for navigation over UI interaction
+- [ ] Include only necessary files
+- [ ] Test code needs to be [maintainable](https://testautomationpatterns.org/wiki/index.php/MAINTAINABLE_TESTWARE)
+- [ ] Test names should be descriptive
+- [ ] Test data should be generated dynamically
+
+### Cypress
+
+- [ ] The test file name has the following format: Helm chart name + `_spec` (ex: `wordpress_spec.js`)
+- [ ] No `describe()` blocks in Cypress tests
+- [ ] Aim to have an assertion after every command to avoid flakiness, taking advantage of Cypress retry-ability
+- [ ] The test description is a sentence summarizing the expected result, starting with a verb, in third person, with no dot at the end (ex: `it('checks if admin can edit a site', () => { ... })`)
+- [ ] Respect the folder structure recommended by Cypress:
+  * [fixtures](https://docs.cypress.io/api/commands/fixture) - test data
+  * [integration](https://docs.cypress.io/api/commands/fixture) - test scenarios
+  * [plugins](https://docs.cypress.io/guides/tooling/plugins-guide) - plugin configuration, if applicable
+  * [support](https://docs.cypress.io/api/commands/fixture) - reusable behaviours and overrides
+  * [cypress.json](https://docs.cypress.io/guides/tooling/plugins-guide) - configuration values you wish to store
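+
+#### Example spec
+
+As an illustration of the criteria above, a minimal spec might look like the sketch below. It is only a sketch: the WordPress paths, selectors and fixture fields are assumptions made for the example, not part of any existing chart test suite. Note the absence of `describe()` blocks, the `it()` descriptions starting with a third-person verb, and the assertion placed right after each interaction.
+
+```javascript
+// cypress/integration/wordpress_spec.js - illustrative sketch only.
+// Assumes `baseUrl` in cypress.json points at the deployed chart and that
+// cypress/fixtures/users.json holds the admin credentials; the WordPress
+// paths and selectors below are assumptions, not taken from this repository.
+
+it('loads the login page', () => {
+  cy.visit('/wp-login.php');                 // navigate by URL rather than through the UI
+  cy.get('#loginform').should('be.visible'); // assertion right after the command
+});
+
+it('allows the admin user to log in', () => {
+  cy.fixture('users').then((users) => {      // test data kept in cypress/fixtures/, not hardcoded
+    cy.visit('/wp-login.php');
+    cy.get('#user_login').clear().type(users.admin.username);
+    cy.get('#user_pass').clear().type(users.admin.password, { log: false });
+    cy.get('#wp-submit').click();
+    cy.url().should('include', '/wp-admin'); // assert the login actually succeeded
+  });
+});
+```
+
+A reusable behaviour such as logging in could also be registered as a custom command (`Cypress.Commands.add('login', ...)`) in the `support/` folder, keeping each spec minimal and independent.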