Kubernetes End-to-end Testing for Everyone
Author: Patrick Ohly (Intel)
More and more components that used to be part of Kubernetes are now being developed outside of Kubernetes. For example, storage drivers used to be compiled into Kubernetes binaries, then were moved into stand-alone FlexVolume binaries on the host, and now are delivered as Container Storage Interface (CSI) drivers that get deployed in pods inside the Kubernetes cluster itself.
This poses a challenge for developers who work on such components: how can end-to-end (E2E) testing on a Kubernetes cluster be done for such external components? The E2E framework that is used for testing Kubernetes itself has all the necessary functionality. However, trying to use it outside of Kubernetes was difficult and only possible by carefully selecting the right versions of a large number of dependencies. E2E testing has become a lot simpler in Kubernetes 1.13.
This blog post summarizes the changes that went into Kubernetes 1.13. For CSI driver developers, it also covers the ongoing effort to make the storage tests available for testing of third-party CSI drivers. How to use them will be shown based on two Intel CSI drivers:
- Open Infrastructure Manager (OIM)
- PMEM-CSI
Testing those drivers was the main motivation behind most of these enhancements.
E2E testing consists of several phases:
- Implementing a test suite. This is the main focus of this blog post. The Kubernetes E2E framework is written in Go. It relies on Ginkgo for managing tests and Gomega for assertions. These tools support “behavior driven development”, which describes expected behavior in “specs”. In this blog post, “test” is used to reference an individual Ginkgo.It spec. Tests interact with the Kubernetes cluster using client-go.
- Bringing up a test cluster. Tools like kubetest can help here.
- Running an E2E test suite against that cluster. Ginkgo test suites can be run with the ginkgo tool or as a normal Go test with go test. Without any parameters, a Kubernetes E2E test suite will connect to the default cluster based on environment variables like KUBECONFIG, exactly like kubectl. Kubetest also knows how to run the Kubernetes E2E suite.
E2E framework enhancements in Kubernetes 1.13
All of the following enhancements follow the same basic pattern: they make the E2E framework more useful and easier to use outside of Kubernetes, without changing the behavior of the original Kubernetes e2e.test binary.
Splitting out provider support
The main reason why using the E2E framework from Kubernetes <= 1.12 was difficult was its dependencies on provider-specific SDKs, which pulled in a large number of packages. Just getting it compiled was non-trivial.
Many of these packages are only needed for certain tests. For example, testing the mounting of a pre-provisioned volume must first provision such a volume the same way as an administrator would, by talking directly to a specific storage backend via some non-Kubernetes API.
There is an effort to remove cloud provider-specific tests from core Kubernetes. The approach taken in PR #68483 can be seen as an incremental step towards that goal: instead of ripping out the code immediately and breaking all tests that depend on it, all cloud provider-specific code was moved into optional packages under test/e2e/framework/providers. The E2E framework then accesses it via an interface that gets implemented separately by each vendor package.
The author of an E2E test suite decides which of these packages get imported into the test suite (see the sketch after the list below). The vendor support is then activated via the --provider command line flag. The Kubernetes e2e.test binary in 1.13 and 1.14 still contains support for the same providers as in 1.12. It is also okay to include no packages, which means that only the generic providers will be available:
- “skeleton”: cluster is accessed via the Kubernetes API and nothing else
- “local”: like “skeleton”, but in addition the scripts in kubernetes/kubernetes/cluster can retrieve logs via ssh after a test suite is run
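For illustration, here is a minimal sketch of how a custom test suite might opt into provider support. The blank imports and their paths are assumptions based on the Kubernetes 1.13 source tree layout and may change in later releases.

```go
// e2e_test.go (fragment): opting into provider support.
// Each blank import registers one provider with the E2E framework;
// leaving them all out means only the generic "skeleton" and "local"
// providers are available.
package e2e

import (
	_ "k8s.io/kubernetes/test/e2e/framework/providers/aws"
	_ "k8s.io/kubernetes/test/e2e/framework/providers/gce"
)
```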
External files
Tests may have to read additional files at runtime, like .yaml manifests. But the Kubernetes e2e.test binary is supposed to be usable and entirely stand-alone because that simplifies shipping and running it. The solution in the Kubernetes build system is to link all files under test/e2e/testing-manifests into the binary with go-bindata. The E2E framework used to have a hard dependency on the output of go-bindata; now bindata support is optional. When accessing a file via the testfiles package, files will be retrieved from different sources (see the sketch after this list):
- relative to the directory specified with the --repo-root command line flag
- zero or more bindata chunks
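File sources get registered by the test suite itself. The following sketch assumes the testfiles API as it appears in the Kubernetes 1.13 tree (AddFileSource, RootFileSource, Read); exact names may differ in other releases.

```go
// e2e_test.go (fragment): registering file sources for the testfiles package.
package e2e

import (
	"k8s.io/kubernetes/test/e2e/framework"
	"k8s.io/kubernetes/test/e2e/framework/testfiles"
)

func init() {
	// Serve files from disk, relative to the --repo-root directory.
	testfiles.AddFileSource(testfiles.RootFileSource{Root: framework.TestContext.RepoRoot})
	// A suite that embeds files with go-bindata would additionally
	// register a bindata-backed file source here.
}
```

A test can then call testfiles.Read("test/e2e/testing-manifests/some-file.yaml") without caring where the file actually comes from.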
Test parameters
The e2e.test binary takes additional parameters which control test execution. In 2016, an effort was started to replace all E2E command line parameters with a Viper configuration file. But that effort stalled, which left developers without clear guidance on how they should handle test-specific parameters.
The approach in v1.12 was to add all flags to a central file in the framework, which does not work for tests developed independently from the framework. Since a PR that went into 1.13, the recommendation has been for each test to use the normal flag package to define its parameters in its own source code (see the sketch after the summary below). Flag names must be hierarchical, with dots separating different levels, for example my.test.parameter, and must be unique. Uniqueness is enforced by the flag package, which panics when registering a flag a second time. A new helper package in the framework simplifies the definition of multiple options, which are stored in a single struct.
To summarize, this is how parameters are handled now:
- The init code in test packages defines tests and parameters. The actual parameter values are not available yet, so test definitions cannot use them.
- The init code of the test suite parses parameters and (optionally) the configuration file.
- The tests run and now can use parameter values.
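As a concrete illustration, a test package might register such a parameter like this (all names here are made up for the example):

```go
// Package mytest (hypothetical) defines one test-specific parameter
// using only the standard library flag package.
package mytest

import "flag"

// The hierarchical name keeps parameters from different test packages
// in separate namespaces; flag panics on duplicate registrations.
var parameter = flag.String("my.test.parameter", "default-value",
	"description shown in the -help output")
```

Because registration happens in package init code, *parameter only holds its final value after the test suite has parsed the command line, i.e. inside the running tests.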
However, it was recently pointed out that it is desirable, and used to be possible, to not expose test settings as command line flags and to set them only via a configuration file. There is an open bug and a pending PR about this.
Viper support
Viper support has been enhanced. Like the provider support, it is completely optional. It gets pulled into an e2e.test binary by importing the viperconfig package and calling its flag-processing function after parsing the normal command line flags. This has been implemented so that all variables which can be set via command line flags are also set when the flag appears in a Viper config file. For example, the e2e.test binary accepts --viper-config=/tmp/my-config.yaml, and that file will set the my.test.parameter value when it has content like this (the value is only an example):

```yaml
my:
  test:
    parameter: some-value
```

In older Kubernetes releases, that option could only load a file from the current directory, the suffix had to be left out, and only a few parameters actually could be set this way. Beware that one limitation of Viper still exists: it works by matching config file entries against known flags, without warning about unknown config file entries, thus leaving typos undetected. A better config file parser for Kubernetes is still work in progress.
Creating items from .yaml manifests
In Kubernetes 1.12, there was some support for loading individual items from a .yaml file, but then creating that item had to be done by hand-written code. Now the framework has new methods for loading a .yaml file that has multiple items, patching those items (for example, setting the namespace created for the current test), and creating them. This is currently used to deploy CSI drivers anew for each test from exactly the same .yaml files that are also used for deployment via kubectl. If the CSI driver supports running under different names, then tests are completely independent and can run in parallel.
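As a sketch of how this can look inside a test, the following assumes create helpers with these approximate names on the framework object, as added in Kubernetes 1.13; the exact signatures may differ between releases, and the driver file name is made up.

```go
package storage // illustrative

import "k8s.io/kubernetes/test/e2e/framework"

// deployDriver deploys a CSI driver from the same .yaml files that are
// used with kubectl, patched for the current test. It returns a cleanup
// callback that removes everything again.
func deployDriver(f *framework.Framework) func() {
	cleanup, err := f.CreateFromManifests(func(item interface{}) error {
		// Per-item patching hook; the framework already rewrites
		// namespaces where appropriate, so nothing extra to do here.
		return nil
	}, "deploy/kubernetes/my-csi-driver.yaml")
	framework.ExpectNoError(err, "deploying the CSI driver")
	return cleanup
}
```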
However, redeploying a driver slows down test execution and it does not cover concurrent operations against the driver. A more realistic test scenario is to deploy a driver once when bringing up the test cluster, then run all tests against that deployment. Eventually the Kubernetes E2E testing will move to that model, once it is clearer how test cluster bringup can be extended such that it also includes installing additional entities like CSI drivers.
Upcoming enhancements in Kubernetes 1.14
Reusing storage tests
Being able to use the framework outside of Kubernetes enables building a custom test suite. But a test suite without tests is still useless. Several of the existing tests, in particular for storage, can also be applied to out-of-tree components. Thanks to the work done by Masaki Kimura, storage tests in Kubernetes 1.13 are defined such that they can be instantiated multiple times for different drivers.
But history has a habit of repeating itself. As with providers, the package defining these tests also pulled in driver definitions for all in-tree storage backends, which in turn pulled in more packages than were needed. This has been fixed for the upcoming Kubernetes 1.14.
Skipping unsupported tests
Some of the storage tests depend on features of the cluster (like running on a host that supports XFS) or of the driver (like supporting block volumes). These conditions are checked while the test runs, leading to skipped tests when they are not satisfied. The good thing is that this records an explanation why the test did not run.
Starting a test is slow, in particular when it must first deploy the CSI driver, but also in other scenarios. Creating the namespace for a test has been measured at 5 seconds on a fast cluster, and it produces a lot of noisy test output. It would have been possible to address that by skipping the definition of unsupported tests, but then reporting why a test isn’t even part of the test suite becomes tricky. This approach has been dropped in favor of reorganizing the storage test suite such that it first checks conditions before doing the more expensive test setup steps.
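A sketch of that pattern, with a hypothetical driverSupportsBlockVolumes capability check; framework.Skipf records the explanation in the test output:

```go
package storage // illustrative

import (
	"github.com/onsi/ginkgo"
	"k8s.io/kubernetes/test/e2e/framework"
)

// driverSupportsBlockVolumes is a hypothetical capability check.
func driverSupportsBlockVolumes() bool { return false }

var _ = ginkgo.It("should support block volumes", func() {
	if !driverSupportsBlockVolumes() {
		framework.Skipf("driver does not support block volumes")
	}
	// Only now follow the expensive setup steps (namespace creation,
	// driver deployment, volume provisioning) and the actual test.
})
```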
More readable test definitions
The same PR also rewrites the tests to operate like conventional Ginkgo tests, with test cases and their local variables in a single function.
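In that conventional style, each group of tests and the state they share live together in a single function, roughly like this (names are illustrative):

```go
package storage_test // illustrative

import "github.com/onsi/ginkgo"

var _ = ginkgo.Describe("provisioning", func() {
	var driver string // local state shared by the specs below

	ginkgo.BeforeEach(func() {
		driver = "my-csi-driver" // illustrative setup
	})

	ginkgo.It("provisions a volume", func() {
		_ = driver // ... test code using driver ...
	})
})
```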
Testing external drivers
Building a custom E2E test suite is still quite a bit of work. The e2e.test binary that will get distributed in the Kubernetes 1.14 test archive will have the ability to test already installed storage drivers without rebuilding the test suite. See this README for further instructions.
E2E test suite HOWTO
Test suite initialization
The first step is to set up the necessary boilerplate code that defines the test suite. In Kubernetes, this is done in the e2e.go and e2e_test.go files. It could also be done in a single e2e_test.go file. Kubernetes imports all of the various providers, in-tree tests, Viper configuration support, and bindata file lookup in e2e_test.go, while e2e.go controls the actual execution, including some cluster preparations and metrics collection.
A simpler starting point are the e2e_[test].go files from PMEM-CSI: that suite doesn’t use any providers, no Viper, no bindata, and imports just the storage tests.
Like PMEM-CSI, OIM drops all of the extra features, but is a bit more complex because it integrates a custom cluster startup directly into the test suite, which was useful in this case because some additional components have to run on the host side. By running them directly in the E2E binary, interactive debugging with dlv becomes easier.
Both CSI drivers follow the Kubernetes example and use the test/e2e directory for their test suites, but any other directory and other file names would also work.
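To make that boilerplate concrete, here is a minimal sketch of a stand-alone e2e_test.go, loosely modeled on the PMEM-CSI setup described above. It assumes the framework entry points as found in Kubernetes 1.13/1.14 (RegisterCommonFlags, RegisterClusterFlags, AfterReadingAllFlags); later releases changed some of them.

```go
// test/e2e/e2e_test.go: minimal stand-alone test suite boilerplate (sketch).
package e2e

import (
	"flag"
	"os"
	"testing"

	"github.com/onsi/ginkgo"
	"github.com/onsi/gomega"

	"k8s.io/kubernetes/test/e2e/framework"

	// Import the test packages that should be part of this suite.
	_ "k8s.io/kubernetes/test/e2e/storage"
)

func TestMain(m *testing.M) {
	// Register and parse the framework's flags (-kubeconfig, etc.).
	framework.RegisterCommonFlags()
	framework.RegisterClusterFlags()
	flag.Parse()
	framework.AfterReadingAllFlags(&framework.TestContext)
	os.Exit(m.Run())
}

func TestE2E(t *testing.T) {
	gomega.RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "My E2E suite")
}
```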
Adding E2E storage tests
Tests are defined by packages that get imported into a test suite. The only thing specific to E2E tests is that they instantiate a framework.Framework pointer (usually called f) with framework.NewDefaultFramework. This variable gets initialized anew in a BeforeEach for each test and freed in an AfterEach. It has an f.Namespace at runtime (and only at runtime!) which can be used by a test, as in the following sketch.
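The sketch assumes the client-go API of the Kubernetes 1.13 era (the List call gained a context parameter in later releases):

```go
package sanity // illustrative

import (
	"github.com/onsi/ginkgo"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/kubernetes/test/e2e/framework"
)

var _ = ginkgo.Describe("sanity", func() {
	// NewDefaultFramework registers BeforeEach/AfterEach hooks which
	// create and delete a fresh namespace around every test.
	f := framework.NewDefaultFramework("sanity")

	ginkgo.It("can list pods in the test namespace", func() {
		// f.Namespace and f.ClientSet are only valid here, at runtime.
		pods, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).List(metav1.ListOptions{})
		framework.ExpectNoError(err, "list pods")
		framework.Logf("found %d pods", len(pods.Items))
	})
})
```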
The PMEM-CSI storage test imports the Kubernetes storage test suite and sets up one instance of the provisioning tests for a PMEM-CSI driver which must be already installed in the test cluster. The storage test suite changes the storage class to run tests with different filesystem types. Because of this requirement, the storage class is created from a .yaml file.
Explaining all the various utility methods available in the framework is out of scope for this blog post. Reading existing tests and the source code of the framework is a good way to get started.
Vendoring
Vendoring Kubernetes code is still not trivial, even after eliminating many of the unnecessary dependencies. k8s.io/kubernetes is not meant to be included in other projects and does not define its dependencies in a way that is understood by tools like dep. The other k8s.io packages are meant to be included, but don’t follow semantic versioning yet or don’t tag any releases.
PMEM-CSI uses dep. Its Gopkg.toml file is a good starting point. It enables pruning (not enabled in dep by default) and locks certain projects onto versions that are compatible with the Kubernetes version that is used. When dep doesn’t pick a compatible version, then checking which revisions Kubernetes itself vendors helps to determine which revision might be the right one.
Compiling and running the test suite
go test ./test/e2e -args -help is the fastest way to test that the test suite compiles.
Once it does compile and a cluster has been set up, the command go test -timeout=0 -v ./test/e2e -ginkgo.v runs all tests. In order to run tests in parallel, use the ginkgo -p ./test/e2e command instead.
Getting involved
The Kubernetes E2E framework is owned by the testing-commons sub-project in SIG-testing. See that page for contact information.
There are various tasks that could be worked on, including but not limited to:
- Moving test/e2e/framework into a staging repo and restructuring it so that it is more modular (#74352).
- Simplifying e2e.go by moving more of its code into the framework.
- Removing provider-specific code from the Kubernetes E2E test suite (#70194).
Special thanks to the reviewers of this article:
- Olev Kartau (https://github.com/okartau)
- Mary Camp (https://github.com/MCamp859)