In a traditional software development process, where analysis, development and testing happen as separate phases, shared environments are the norm, and so there is often a one-to-one relationship between the name of an environment and the type of testing performed in it. For example, UAT (User Acceptance Testing) tends to come right at the very end of the process, just before production. If you are working on a back-end system there may well be no “U” in the UAT, so it really just becomes a more production-like test environment.
In a modern development process there is more of a distinction between the type of tests we are running and the environment in which we are running them. We are always trying to balance getting the fastest possible feedback on whether our changes are correct against ensuring that enough of the system is tested in a manner similar to production, so that we minimise any problems due to environmental differences.
In my C Vu article “The Developer’s Sandbox” I described a number of ways that you might partition a system (and its test data) to support different levels of non-unit testing. In essence I am mostly interested in running fast, automated test suites in some isolated manner to gain rapid feedback. However I also like to do a bit of manual exploratory testing, especially when making changes around deployment or infrastructure code. And demoing new features is important too, to ensure that we’re building “the right thing”.
What I’ve found is that there is often some confusion when talking about testing that conflates the suite of tests being exercised with the configuration of the system they are being run against. For example, I will try and run every automated test possible on my local machine before committing my changes. This means I’m probably running some combination of unit, component, integration, acceptance and system tests against a variety of mock and real components and services, depending on how expensive they are to use.
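To make that concrete, here is a minimal sketch in Python of how a single test might run against either a cheap in-memory fake or a real dependency depending on the configuration it is given. The names (an “order store”, a TEST_CONFIG variable) are entirely hypothetical and just for illustration; the point is only that the same test runs unchanged in both configurations.

import os
import unittest

# Hypothetical abstraction over an "order store" dependency, purely for
# illustration.
class InMemoryOrderStore:
    """Cheap in-process fake used in the fast, isolated configuration."""
    def __init__(self):
        self._orders = {}

    def save(self, order_id, order):
        self._orders[order_id] = order

    def load(self, order_id):
        return self._orders[order_id]

def create_order_store():
    """Choose the fake or the real dependency based on the test configuration."""
    if os.environ.get("TEST_CONFIG", "localhost") == "localhost":
        return InMemoryOrderStore()
    # A more production-like configuration would construct the real,
    # network-backed store here instead (omitted in this sketch).
    raise NotImplementedError("real store not wired up in this sketch")

class OrderRoundTripTests(unittest.TestCase):
    def test_saved_order_can_be_loaded(self):
        store = create_order_store()
        store.save("42", {"item": "widget", "quantity": 3})
        self.assertEqual(store.load("42")["quantity"], 3)

if __name__ == "__main__":
    unittest.main()

The test itself asserts the same behaviour either way; only the dependency sitting behind it changes.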
Similarly, on the build server we will run exactly the same suite of tests, but because we have more time we can use the real dependencies where possible and only rely on mocks where we have to. The closer the code gets to production, the closer the test environment has to get to production too.
Consequently there is no one-to-one relationship between the test suite configuration and the environment where it is run. By default we tend to optimise for the developer feedback loop, which means the out-of-the-box configuration is usually “localhost” everywhere [1]. In contrast, the build server, development and test environments will likely have real networks, databases, message queues, etc. in play, so the same suite of tests will exercise more of the real infrastructure and integration points for a more production-like quality of feedback, perhaps at the expense of speed. The point is that we aim to run the same tests and only vary the configuration. Hence, when talking about automated testing, we may need to qualify which environment configuration we are running with to avoid confusion.
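As an illustration of what “only vary the configuration” might look like, here is a small sketch (again in Python, with made-up endpoint names) of per-environment settings where the developer default is “localhost” everywhere and the build server swaps in real infrastructure:

# Hypothetical per-environment settings; the test suite itself never changes,
# only the configuration it is handed.
CONFIGS = {
    "localhost": {                          # developer machine, no network needed
        "database": "sqlite:///:memory:",
        "message_queue": "in-process",
    },
    "build-server": {                       # more production-like infrastructure
        "database": "postgresql://ci-db/orders",
        "message_queue": "amqp://ci-broker",
    },
}

def load_config(name="localhost"):
    """Default to the fast, fully local configuration (see footnote [1])."""
    return CONFIGS[name]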
One natural objection might be that it’s not right to call running the acceptance test suite on a developer’s local machine “acceptance tests”, as some element of the “acceptance” must come from it being run in a more production-like manner. Whilst I get the sentiment, I think that misses the point about developers leveraging the traditionally more costly tests in a constrained, but by no means useless, environment to gain earlier feedback around the functional behaviour. No, it doesn’t mean the change is signed-off and ready for production just because it works on my machine, but it does mean that at a fundamental level the change is sound and worthy of pushing further down the deployment pipeline.
[1] I always say that I should be able to unplug
from the network and go out into the garden where there is no Wi-Fi and still be
able to write code and have a high degree of confidence that it works. Modern
tooling (and a sane approach to developer licensing) makes that possible even
when databases, message queues, etc. are in the equation without having to
restrict ourselves to relying solely on unit testing.