When NOT to use Mocks
Use mocks in SIT/SFT testing with caution
Mock services make many things easier.
- They are more stable than real backends
- They can handle high load
- They have a stable response time
- They are much cheaper than real environments
- They can be reused in the next release
Why not use mocks in all testing phases, then?
The risk is that the mocked services won't interact with your code the same way the real ones do in PROD.
We can make the mock responses' payloads and headers identical.
However, the real services' implementations or platforms may have small quirks that cause noticeably different behaviour.
Just a few examples I observed:
A service ran on an outdated platform with a broken implementation of the 'HTTP 100 Continue' status. When the client sent 'Expect: 100-continue', MockMotor reacted according to the modern specification, and the transaction succeeded. Against the real implementation, the exchange stalled and the transaction timed out.
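To make that failure mode concrete, here is a minimal sketch of the 'Expect: 100-continue' handshake with a toy local server standing in for the backend (the server, paths, and payloads are all illustrative, not MockMotor's implementation). A well-behaved server sends the interim 100 response before the client transmits the body; a broken one never answers at that step, and the client waits until it times out.

```python
import socket
import threading

def run_server(listener):
    """Toy server that honours 'Expect: 100-continue': it sends an
    interim 100 response before reading the request body."""
    conn, _ = listener.accept()
    request = b""
    while b"\r\n\r\n" not in request:
        request += conn.recv(1024)
    # The interim response that tells the client to proceed with the body.
    # A broken platform omits this, and the client stalls right here.
    conn.sendall(b"HTTP/1.1 100 Continue\r\n\r\n")
    conn.recv(1024)  # now the client sends the payload
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK")
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=run_server, args=(listener,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(
    b"POST /upload HTTP/1.1\r\n"
    b"Host: localhost\r\n"
    b"Content-Length: 5\r\n"
    b"Expect: 100-continue\r\n\r\n"
)
# Wait for the interim 100 before sending the body.
interim = client.recv(1024)
body_sent = b"100 Continue" in interim
client.sendall(b"hello")
final = client.recv(1024)
print(final.decode().splitlines()[0])  # HTTP/1.1 200 OK
client.close()
```

The key point is the client-side wait between the headers and the body: that is exactly the spot where the outdated platform left the transaction hanging.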
An otherwise standard SOAP service could not parse SOAP headers. Unknown to the project team, the monitoring tool injected a tracking header into each request. The mocked service performed perfectly - MockMotor had no issues with that SOAP header. However, once the code was moved to UAT, the transactions began to fail.
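To illustrate what such a request looks like, here is a sketch of an envelope with an injected tracking header, parsed the tolerant way. The 'trace:RequestId' header, the namespaces, and the 'GetBalance' operation are all hypothetical stand-ins; the point is that a compliant consumer reads the Body and skips Header entries it does not recognise, while the real service in this story choked on them.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

# A request as the monitoring tool would emit it: the original body plus
# an injected tracking header ('trace:RequestId' is a made-up example).
envelope = """\
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header>
    <trace:RequestId xmlns:trace="urn:example:tracing">abc-123</trace:RequestId>
  </soapenv:Header>
  <soapenv:Body>
    <GetBalance xmlns="urn:example:bank"><account>42</account></GetBalance>
  </soapenv:Body>
</soapenv:Envelope>"""

root = ET.fromstring(envelope)
headers = root.find(f"{{{SOAP_NS}}}Header")
body = root.find(f"{{{SOAP_NS}}}Body")

# A tolerant consumer processes the Body and merely notes unknown headers.
print("headers:", [child.tag for child in headers])
print("operation:", body[0].tag)
```

A mock built this way accepts the extra header without complaint, which is precisely why the problem stayed invisible until UAT.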
A client's connection pool was misconfigured, keeping connections open far longer than reasonable. The mocked environment did not catch the issue because MockMotor also reused the connections. When the client began calling the real implementation, the code encountered multiple network errors because the server had closed the connections.
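The sketch below reproduces that scenario with a toy backend that advertises keep-alive but drops every connection after one response (as a server with an aggressive idle timeout would), plus the defensive retry the mocked environment never forced anyone to write. The server, port, and paths are illustrative.

```python
import http.client
import socket
import threading

def flaky_server(listener):
    """Toy backend that advertises keep-alive but drops every
    connection after one response, like a real server with an
    aggressive idle timeout."""
    while True:
        conn, _ = listener.accept()
        conn.recv(65536)  # read the request (one segment suffices here)
        conn.sendall(
            b"HTTP/1.1 200 OK\r\n"
            b"Content-Length: 2\r\n"
            b"Connection: keep-alive\r\n\r\n"
            b"ok"
        )
        conn.close()  # closed despite the keep-alive promise

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(5)
port = listener.getsockname()[1]
threading.Thread(target=flaky_server, args=(listener,), daemon=True).start()

client = http.client.HTTPConnection("127.0.0.1", port)

def get(path):
    """GET that retries once on a fresh connection when the pooled
    one turns out to be dead."""
    global client
    try:
        client.request("GET", path)
        return client.getresponse().read()
    except (http.client.HTTPException, ConnectionError):
        # The pooled connection was closed by the server; reconnect.
        client.close()
        client = http.client.HTTPConnection("127.0.0.1", port)
        client.request("GET", path)
        return client.getresponse().read()

# The second call hits the dead pooled connection and recovers.
print(get("/a"), get("/b"))
```

Against a mock that keeps connections alive indefinitely, the `except` branch never runs, so a client missing it looks perfectly healthy until it meets the real server.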
Use Your Judgement
These potential issues don’t negate the benefits of mock services.
They just show that until you're familiar with a service, it's best to do functional and integration testing against a real environment.
But that caution doesn't apply to performance and load testing. Whatever quirks the backend services have, in performance testing we're interested only in their payloads and response times.