hands-on software development
Impersonator pattern

The impersonator pattern is a testing architecture pattern. It addresses the problem of running integration and functional tests against integration environments that are unstable, slow, not always available, subject to frequent data changes or nonexistent, by providing an implementation that mimics the exact protocols and semantics of those environments while requiring minimal resources and providing full control over execution and managed data.

Testing against integration environments

Some classes of tests, such as integration tests, functional/acceptance tests and performance tests require testing against proper integration environments in order to validate system features. In practice, those environments have a combination of the following characteristics:

  • They are unstable: their uptime is not guaranteed, or they may suffer from performance or behavior variances that can cause random timeouts and inconsistent responses
  • They are slow: their response time is not fast enough to accommodate the demand required by automated testing; build scalability may demand more processing power than what integration servers provide
  • They are not always available: they may not be running all the time, as the resources required to provide their services are too expensive or only available during specific times
  • Their data changes frequently: for any number of reasons, the data required for testing may change due to external interactions, and the cost of preventing that may be prohibitive
  • They may not even exist, as teams may be working concurrently to build a solution that will be integrated at a later stage

In those cases, continuous integration can be seriously impaired. As tests break frequently and their ability to pass does not depend solely on the changes made to the system, chasing down problems becomes difficult.

To mitigate the problem those types of environments pose to testing, a stability reference is required to allow for:

  • Quick and scalable local builds
  • Creating a reference for the data tests expect from the integration environment
  • Proving the system works with reference data
  • Pinpointing quickly the causes of failure for tests broken in the integration environment

The stability reference environment requires an extra stage in the build pipeline, one which is completely standalone, as it doesn’t depend on any external environment.

The stability reference

The following implementation approaches can be used to allow for integration and functional testing outside the actual integration environment:

  • Using test doubles, which implement the same programmatic interfaces as actual components in the system, but do not talk with external integration points. For example, a DAO may be reimplemented to provide static data and never talk with a real database
  • Using servers that provide the same data, behavior and protocols as the actual servers in the integration environment. These servers may or may not be testing Fakes, as they may be suitable for production (eg. local deployments of the same servers used in the actual integration environment)

Test doubles require the creation of artifacts which contain embedded test code. This is not desirable, as it adds complexity to the build process and moves the build pipeline away from a single-artifact discipline. Also, by not exercising the stack required to interact with external systems, test doubles add very limited value for integration and functional testing.
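As a sketch of the test double approach described above, a DAO can be reimplemented to serve static data and never touch a database. The class names and data here are purely illustrative, not taken from any real codebase:

```python
# Illustrative test double: a DAO that implements the same programmatic
# interface as the real component, but serves static data instead of
# talking to a database. CustomerDao/StaticCustomerDao are made-up names.
from abc import ABC, abstractmethod


class CustomerDao(ABC):
    """Programmatic interface shared by the real DAO and the double."""

    @abstractmethod
    def find_name(self, customer_id: int) -> str: ...


class StaticCustomerDao(CustomerDao):
    """Test double: same interface, no database involved."""

    _DATA = {1: "Alice", 2: "Bob"}

    def find_name(self, customer_id: int) -> str:
        return self._DATA[customer_id]
```

Note that nothing in `StaticCustomerDao` exercises the real persistence stack, which is precisely the limitation discussed above.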

Servers that provide the same behavior and protocols as the actual integration environment are preferable for the stability reference. They may require the creation of custom software in order to replicate the same semantics as the integration servers, but they have the advantage of exercising the whole integration stack and do not require testing code to be built into artifacts (which is a good practice for build pipelines and testing in general).

These servers are loosely referred to as stubs, mocks, fakes, simulators, and so on, but in most cases these names do not help clarify the testing architecture in use. As a matter of fact, those names come from the unit testing space, in which compatibility is the primary goal in order to isolate and test behavior, whereas for integration and functional tests compatibility, speed, reliability and non-intrusiveness are all key.

To avoid naming ambiguities and establish specific semantics, those types of testing servers are going to be referred to as impersonators here. Also, it is assumed from here on that the stability reference is composed of an impersonated environment.


Simply put, impersonators are testing servers that provide the same data, protocol and semantics as their integration environment counterparts. Impersonated environments (or standalone environments) are comprised exclusively of impersonators and do not depend on external servers. Impersonators may or may not run in the application space, but they are always externalized from the application code.
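As a minimal illustration of the idea, an impersonator can be a real server speaking the same protocol (plain HTTP/JSON in this sketch) as its integration counterpart, while serving controlled data. The endpoint and payload below are hypothetical:

```python
# Minimal impersonator sketch: an actual HTTP server, external to the
# application code, answering with controlled data. The /customers/1
# endpoint and its payload are invented for illustration only.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class ImpersonatorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"customer": "Alice", "status": "active"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass


# Port 0 lets the OS pick a free port; the impersonator runs standalone.
server = HTTPServer(("127.0.0.1", 0), ImpersonatorHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The application under test talks to the impersonator exactly as it
# would talk to the real service, exercising the full protocol stack.
url = f"http://127.0.0.1:{server.server_port}/customers/1"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
server.shutdown()
```

Because the client goes through the same HTTP stack it would use in production, this kind of server adds value that an in-process test double cannot.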

Impersonators can be implemented using one of the following strategies:

  • Local deployments of the same software used in the integration environment, or a slimmed-down version of that software (e.g. using Oracle Express to impersonate an Oracle database)
  • Protocol-compatible stock servers (e.g. in-memory databases, SMTP servers, FTP servers, etc.)
  • Home-grown [lightweight] servers, which can use a number of strategies to acquire data:
    • Record-and-replay: using recording proxies or hooks into the application to create data flow snapshots as tests run over the integration environment (see self initializing fake for recording proxies)
    • Fixed data: using a handcrafted test fixture, made by querying integration servers or otherwise
    • Rules reimplementation: implementing rules to comply exactly with the semantics of the integration servers. A specific case of that are generators, which provide data using a known sequence.
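The record-and-replay strategy above can be sketched as follows; the recording proxy and the request/response shapes are hypothetical stand-ins for real integration calls:

```python
# Record-and-replay sketch: a recording proxy captures request/response
# pairs while tests run against the real integration point, producing a
# snapshot that later feeds a replaying impersonator. Names are invented.
import json


def recording_proxy(real_call, snapshot):
    """Wrap a real integration call and record each request/response pair."""
    def proxied(request):
        response = real_call(request)
        snapshot[json.dumps(request, sort_keys=True)] = response
        return response
    return proxied


def replaying_impersonator(snapshot):
    """Serve previously recorded responses; fail loudly on unknown requests."""
    def replay(request):
        key = json.dumps(request, sort_keys=True)
        if key not in snapshot:
            raise KeyError(f"no recorded response for {request!r}")
        return snapshot[key]
    return replay


# Recording phase: tests run against the real server through the proxy.
snapshot = {}
real_server = lambda req: {"symbol": req["symbol"], "quote": 42}
record = recording_proxy(real_server, snapshot)
record({"symbol": "ACME"})

# Replay phase: the impersonator answers without the real server.
replay = replaying_impersonator(snapshot)
```

Failing loudly on unrecorded requests is a deliberate choice here: silently improvising answers would defeat the purpose of a stability reference.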

Local deployments of production-compatible servers are sometimes convenient, but they require developers to install and configure servers on their machines (or depend on a single development server, which imposes a single point of failure) in order to perform changes to the system. This may or may not be a problem, depending on the installation complexity and the amount of resources required to have those servers performing well enough for testing. They do have the advantage of providing compatibility out of the box.

Protocol-compatible stock servers generally run in memory and perform well, but may have compatibility problems which are hard to predict at the beginning of the development process. They are usually chosen when they can be used as in-memory servers, which do not require local installations.

Home-grown servers may be challenging to build and/or set up. They are usually not too complicated for read-only systems. Transactional systems, however, require state management, which demands a careful strategy to differentiate and record transaction data. On the other hand, home-grown servers are under the full control of the developers, and are generally built to be lightweight.
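One simple shape for that state management concern: keep the reference fixture separate from live state and expose a reset hook, so every test starts from known data. The fixture and operation below are illustrative only:

```python
# Sketch of state management in a home-grown impersonator for a
# transactional system: transactions mutate in-memory state, and a reset
# hook restores the reference fixture between tests. All names/data are
# invented for illustration.
import copy

REFERENCE_FIXTURE = {"accounts": {"A": 100, "B": 50}}


class BankImpersonator:
    def __init__(self):
        self.reset()

    def reset(self):
        """Restore the reference data so every test starts from a known state."""
        self.state = copy.deepcopy(REFERENCE_FIXTURE)

    def transfer(self, src, dst, amount):
        accounts = self.state["accounts"]
        if accounts[src] < amount:
            raise ValueError("insufficient funds")
        accounts[src] -= amount
        accounts[dst] += amount


bank = BankImpersonator()
bank.transfer("A", "B", 30)
after = dict(bank.state["accounts"])

bank.reset()  # between tests: back to the reference data
restored = dict(bank.state["accounts"])
```

The deep copy on reset is what differentiates recorded reference data from transaction state accumulated during a test run.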

An important directive for creating impersonated environments is that all impersonators must have their own integration tests, in order to guarantee they behave as expected and to remove the possibility that a flaw in their implementation would cause the main application code to fail.
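That directive can be expressed as a contract test owned by the impersonator itself: a set of assertions describing the behavior the real server is known to exhibit, run against the impersonator. The expected values below are purely illustrative:

```python
# Illustrative contract test for an impersonator: the assertions encode
# behavior observed in the real integration server, so a flaw in the
# impersonator is caught here rather than surfacing as an application bug.
# QuoteImpersonator and its data are invented names.
def check_contract(server):
    """Assertions any implementation -- real or impersonated -- must pass."""
    assert server.lookup("ACME") == {"symbol": "ACME", "price": 42}
    try:
        server.lookup("UNKNOWN")
        raise AssertionError("expected a KeyError for unknown symbols")
    except KeyError:
        pass


class QuoteImpersonator:
    _DATA = {"ACME": {"symbol": "ACME", "price": 42}}

    def lookup(self, symbol):
        return self._DATA[symbol]  # raises KeyError, like the real server


check_contract(QuoteImpersonator())
```

The same `check_contract` function can periodically be pointed at the real integration server, verifying that the two implementations have not drifted apart.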

Use with continuous integration

If testing against integration environments is not guaranteed to work consistently, it is advisable to perform local builds against the standalone environment only, running all the tests against it (instead of using tricks such as smoke builds), as build scalability is achievable in that environment.

If the impersonated environment is properly implemented, builds against that environment should never break, which means there should be no excuse to have the stages in the pipeline associated with the standalone environment broken at any given point in time.

Testing against the actual integration environment is something that can be safely deferred to later stages in the pipeline. If tests are found to be broken there, one of the following possibilities should apply:

  1. The impersonators do not provide the same protocol and semantics as the integration servers
  2. Data has changed in the integration servers
  3. The integration servers were not available, or didn’t perform consistently

Throughout development, most broken builds would be related to reasons 2 and 3. The stability reference would provide means to identify the root causes for the broken tests and eliminate the need to look into the application code as a likely source of problems.

In order to check whether the data provided by the impersonators matches the data provided by the integration servers, it is convenient to implement a verify feature in each impersonator, so as to validate their data and quickly identify whether tests broke due to data variances in the integration environment.
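One possible shape for such a verify feature, assuming both environments can be queried for the same keys (the fetch functions and data below are hypothetical stand-ins for real calls):

```python
# Sketch of an impersonator "verify" feature: compare the impersonator's
# data with what the integration server currently returns and report the
# variances. The datasets here are invented for illustration.
def verify(impersonator_fetch, integration_fetch, keys):
    """Return the keys whose data diverged between the two environments."""
    return [k for k in keys
            if impersonator_fetch(k) != integration_fetch(k)]


impersonated = {"p1": {"price": 10}, "p2": {"price": 20}}
integration = {"p1": {"price": 10}, "p2": {"price": 25}}  # p2 changed upstream

variances = verify(impersonated.get, integration.get, ["p1", "p2"])
```

A non-empty variance report immediately points at reason 2 above (data changed in the integration servers), without anyone having to dig through application code.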

Additional benefits

Standalone environments are sometimes useful for showcases, as they are reliable. Depending on the strategy used to implement impersonators, they can run in machines disconnected from the development environment and allow for off-site showcases.

If impersonators do not require local software installations, the costs and risks associated with development environment setup can be drastically minimized, as a single checkout of the codebase would be sufficient to allow for developing changes to the system.

Taming the project complexity budget by focusing on continuous integration

The term “complexity budget” is used to express that there is a practical need for a cap on the amount of energy spent producing a design that is fit for purpose:

Any feature added to any system has to pass a basic test: If it adds complexity, is the benefit worth the cost? The more obscure or minor the benefit, the less complexity it’s worth. Sometimes this is referred to with the name “complexity budget”. A design should have a complexity budget to keep its overall complexity under control.

Ken Arnold, Generics Considered Harmful

The idea can also be used to decide which technical aspects are to be prioritized for a given software development strategy.

It seems that focusing too much on some technical aspects and forgetting others is most likely to cause a lot of pain to any project. The question is: which aspects, if dealt with correctly, are more likely to contribute to the success of a given project?

To answer that question, consider the following two projects:

  1. Project is “awesome”, as it uses a nice programming language plus frameworks and tools. Design is “perfect”, methodologies are used extensively, etc. Continuous integration is somewhat broken, and it takes quite a long time to understand whether a given change actually works and doesn’t break existing functionality. Due to unstable and slow integration points, builds frequently break with timeouts and inconsistent behavior, and no work has ever been done to solve that problem.
  2. Project is not that “nice”, as it uses an orthodox programming language, frameworks are somewhat outdated, and tools work alright but are not fancy at all. Code and overall design are not amazing, but they are not too hard to understand and maintain. Testing quality is high and continuous integration works flawlessly, though, and any change to the system can be guaranteed to work quickly (in the range of a few minutes) and safely.

If you had to pick one of those projects to work on, which one would it be? If you’re into actually delivering results, you’d probably pick project 2, as it provides the infrastructure to develop features in a sustainable way. If you’re into RDD and/or if you don’t really care about the actual solution you’re building, you’d probably lean towards project 1.

With that in mind, observe the following dependency graph of technical aspects:


This basically means that focusing on continuous integration (as in: the ability to quickly make sure the system is ready for production) would force the other aspects to be consistently managed. On the other hand, it doesn’t seem to be the case that spending most of the project’s complexity budget on basic aspects (such as dealing with programming languages and playing with frameworks) while leaving the other aspects aside would create an efficient development model.

It’s troubling to see that so much attention in most projects is given to technical aspects that have limited influence on software quality, whereas aspects such as proper testing and continuous integration are taken for granted and not prioritized. Maybe that’s because the most immediate problems found in developing software are related to coding, leading to a strong focus on “instant gratification” topics, such as programming languages, frameworks, code design, etc. At the same time, it may be counter-intuitive to imagine that “good code” could be produced by focusing on the ability to have the system ready for production at any given point in time.

As far as technical concerns influence the solution quality, your best bet to get a project to succeed is to limit its complexity budget by making sure every single technical choice supports a mode of development that is oriented to quickly guarantee the system is always working. Stay away from the initial temptation to aimlessly play with tools, programming languages and frameworks; focus on continuous integration first, derive the other choices from that, and you won’t get it wrong.