Why Mock Services?
Mocks let you do superhuman things where real backends tie you down at every step.
Isn't it an extra complication to have mock services in addition to real services?
Why can't we test against the real backend and be done with it, eh?
There are many compelling reasons to use mocks instead of real backends.
Real Backends are Costly
Real backends run on real hardware, which has to be paid for. Real backends require people to support them, configure them, and help you troubleshoot issues during testing.
Because of all these expenses, the owners of the service will want your project to pay for the usage. The running rate depends on the backend system type (can you imagine how much a Siebel environment costs?), but you can roughly estimate it at $500/week. For a 6-month project, you then need to budget some $12,000 for that service alone. If you require more services, the cost grows proportionally.
A mock service is virtually free. One MockMotor installation can provide thousands of mock services, yet run at the same low monthly cost.
Real Services are a Limited Resource
There are only so many test service instances. Some are already used by the service's own development team. Some are reserved by other projects. You may find that there is no test environment available for your project's schedule.
And if you have multiple backends you depend on, reserving them all can become a nightmarish puzzle.
To make matters worse, schedules tend to slide to the right. If your release is delayed, you have to extend the backends' reservations. But other projects may already be waiting for them, and your project risks losing the backends during the most critical last weeks.
Mock services are always available to you. You can always clone a service and create a new test environment just for your project. You do not need to ask anyone for a lease, and no one can suddenly take it away from you.
Real Data Seeding is a Pain
Have you ever troubleshot a project where the issue was caused by a subtle but important mismatch in how the data were seeded into two different backends? The data do not match, the code fails with weird errors, and a dozen people, from QA to developers to support, waste hours figuring it out. Resolving it doesn't feel like a win, trust me.
Even when done right, seeding all services in a test environment can take hours, sometimes days: each system has its own way to seed, and before seeding, you need to carefully verify that the data match the key data in the other systems.
In a mock environment, the data are always consistent! Mock services are seeded with a single Excel sheet that drives the responses for all services. It takes just a few minutes: upload the file, and you are done!
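The single-sheet idea can be illustrated in a few lines. This is a plain-Python sketch, not MockMotor's actual seeding mechanism; the sheet columns, account values, and the "billing" service name are all hypothetical:

```python
import csv
import io

# Hypothetical seed sheet: one table drives the responses of every
# mocked service, so the data can never disagree between backends.
SEED_CSV = """account_id,name,balance,status
1001,Alice,250.00,ACTIVE
1002,Bob,0.00,SUSPENDED
"""

def load_accounts(sheet_text):
    """Parse the seed sheet into an in-memory lookup table."""
    return {row["account_id"]: row
            for row in csv.DictReader(io.StringIO(sheet_text))}

def mock_billing_response(accounts, account_id):
    """Canned response the mocked billing service returns for an account."""
    row = accounts.get(account_id)
    if row is None:
        return {"error": "ACCOUNT_NOT_FOUND"}
    return {"account": row["account_id"],
            "balance": row["balance"],
            "status": row["status"]}

accounts = load_accounts(SEED_CSV)
print(mock_billing_response(accounts, "1001"))
print(mock_billing_response(accounts, "9999"))
```

Every mocked service built on the same sheet answers from the same rows, which is what keeps the environments consistent by construction.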
Real Environments are Unstable
Real services go down quite often, usually right in the middle of a critical test run.
Out of memory, a full disk, an accidental build, a scheduled but uncommunicated reboot, an account locked after multiple failed login attempts, traffic throttled due to a flood of requests, the database down, someone entering a recursive search criterion that tries to retrieve the whole Siebel DB (speaking from experience here)… the list goes on and on. Each failure is a risk to your project and, frankly, a huge annoyance.
Failed services disrupt testing. Most of these typical failure causes do not apply to mock services such as MockMotor, so they stay up.
Real Vendor Backend Troubleshooting is Slow
A backend gives you an error when it shouldn't, and the error doesn't make sense. Why does this happen? Are the data seeded incorrectly on the backend? Is there a technical issue with that environment? You have no way to know, so you engage their support.
You try their first-level support, which gives you nonsensical answers. You escalate and get a knowledgeable guy on the phone. The guy is annoyed that he got pulled into this and insists it must be your project's fault. You must be using the wrong call format. Or the wrong account. Or the wrong credentials. And please disable those reliable-messaging retries, they fill up the logs! After hours of going back and forth (while your testing is blocked), the guy notices that it was indeed their configuration issue, quickly fixes it, and the testing can finally continue. The whole experience leaves you feeling powerless.
If a mock service does not work, it is most likely your fault. But that also means you (and not a vendor) can fix it! The troubleshooting happens in real time and is very effective.
Real Environments are Low Capacity
Having access to a live backend still doesn't mean you can use it as you wish. Test environments have limited capacity, and often not much of it. A typical test backend is deployed on a small VM hosted on hardware two generations old. Hit it with larger requests, or anything resembling a load or performance test, and the environment goes down, and you get an angry call from the environment owner.
You can lease a load-test-grade backend, but it costs many times more in direct fees and support. It is also usually heavily contested.
MockMotor does not have to do the DB- and CPU-intensive work of a real backend. A single MockMotor instance easily handles tens or even hundreds of TPS and is fit for load testing.
Can't Test Special Cases against Real Backends
There are happy paths, and there are less happy ones. Our code has to be tested on both. What happens when the backend gets slow? When it times out? What if the backend is down for maintenance and returns a 503? What if the target service queue is full and returns a "retry later" code?
Testing all these scenarios against a real backend is hard, and often impossible. I cannot put the only shared test instance of SAP in a telecom into a "broken" mode to test my application's fault recovery. And even when the environment owner agrees to do negative testing, it is a very manual and time-consuming operation. Worse yet, you cannot really repeat it after each build: there are only so many times you can ask, "Can you shut down your LDAP link now, please?" before they have had enough.
With mocks, you can run any special test case! You can configure response delays, statuses, failures, invalid payloads: whatever your application should be able to handle.
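To make fault injection concrete, here is a hand-rolled sketch in plain Python, not MockMotor's configuration; the endpoint path and the fault-mode names are made up. The mock answers normally, stalls, or returns 503 depending on a switch:

```python
import json
import threading
import time
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Fault switch for the mock; a mocking tool would expose this per response.
FAULT_MODE = "ok"   # "ok" | "slow" | "unavailable"

class MockBackend(BaseHTTPRequestHandler):
    def do_GET(self):
        if FAULT_MODE == "unavailable":
            self.send_response(503)                  # down for maintenance
            self.send_header("Retry-After", "120")
            self.end_headers()
            return
        if FAULT_MODE == "slow":
            time.sleep(5)                            # exercise client timeouts
        body = json.dumps({"status": "OK"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                    # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MockBackend)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/billing"

ok_status = urllib.request.urlopen(url).status       # happy path

FAULT_MODE = "unavailable"                           # flip the mock to "broken"
try:
    urllib.request.urlopen(url)
    err_code = None
except urllib.error.HTTPError as e:
    err_code = e.code                                # client must handle this

server.shutdown()
print(ok_status, err_code)
```

The point is the flip in the middle: the same endpoint goes from healthy to "down for maintenance" in one assignment, with no environment owner to beg and no waiting.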
Can't Test Before Their Code is Ready
In projects where the backend code has to change to meet the objectives, there is a natural time dependency. You cannot start your testing until the backend team designs, implements, and deploys their code. For complex backends, such as certain billing systems, that moment can be months away. All you can do is wait. Oh, and hope that the SRS does not miss anything; otherwise it becomes a CR, with more costs and delays.
Mock services break this dependency.
A working mock prototype service is a massive help to both your team and the backend team. Implement the desired backend functionality as a mock, play with it, and see if you missed anything.
Can't Prototype Fast
You can also be on the other side of the dependency: other teams depend on you delivering a new service, and you're running late. The infrastructure is not ready, the developers are busy with PROD fixes, and the lead architect has just left for Amazon. Meanwhile, the VPs are piling on the pressure.
Set up a mock service that covers the happy-path functions to keep the other teams happy. With MockMotor, that would probably take you an hour. Meanwhile, you can focus on the actual implementation.
That service is typically enough for other teams to get off your back and start developing their code against yours. You honestly warn them that this is an early prototype, and that is met with general understanding; after all, nobody expects miracles.
Hard to Re-Use Regression Tests
The project comes to an end. Tests pass at 99.8%; soon it is UAT and then PROD. However, with new code comes new regression. In 4 months, there will be another project around the same functional area. In 6 months, one more, and in a year, a whole re-platforming effort. During each of these projects, and beyond, you need to execute regression tests to confirm the new code does not break the old.
So now you run into the whole data-seeding problem again, this time on a recurring basis. It was painful enough to seed the data once; it is ten times as painful to do it every time. And the pain only grows: with more functionality, you need to test more scenarios, which means more test accounts.
With mocks, however, the pain plateaus! Yes, you still have to prepare the test accounts and replies the first time. But then you can re-use the same mock accounts and responses in future projects. Nobody updates them into an unusable state; nobody drops the DB. These mocks are your stability anchor.
Real Environment Debugging is Hard
In a multi-tiered system, it is not always clear what interactions happen between two given systems. The flows evolve as functionality is added. New consumers, account types, and usage patterns appear. Documentation? Let's just accept we're not working for NASA and forget about it.
If you need to analyze what's sent and received during a particular flow in a live system, it is not an easy task. The link is encrypted, and you cannot easily inject yourself in the middle. The source and destination systems typically have no option to log all their traffic, because it would hurt performance too much. At best, they log the data objects rather than the on-the-wire requests, which is only partially helpful.
Point a system at a mock service instead, and you'll see every call in full. The interaction between the two systems could not be spelled out more clearly.