Vladimir Dyuzhev
MockMotor Creator
How to Record SOAP Traffic with MockMotor
SOAP traffic can be recorded and mocked based on request payload values.
We've already seen how MockMotor can record HTTP traffic.
A single recording response can create a separate mock for each combination of REST or HTTP parameters because they are readily available in the URL.
However, for SOAP, it is not as simple. A typical SOAP service has only one URL, and that URL contains no request parameters.
The SOAPAction
header can only tell us the operation name. MockMotor can't easily tell which SOAP request values
are significant for telling one account from another, and which are just noise for our purposes.
How can we, with minimal effort, record a separate SOAP mock for each of the test accounts?
With a little help, MockMotor can do that.
Let's see how.
Forward & Record
For the introduction to Forward & Record functionality, see the Record HTTP traffic blog entry and Forward documentation.
However, we didn't use payload values for HTTP recording. It's about time we did.
Key Payload Values
In an HTTP URL like http://example.com?x=100&y=200, MockMotor has no problem extracting the request keys x and y and adding them to the recorded mock's match script as x=='100' && y=='200'.
Equally easily, in a REST URL like http://example.com/profile/BE100/connection/2, MockMotor is able to extract the keys profile and connection and build a match script profile=='BE100' && connection=='2'.
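The automatic key extraction for query-style URLs can be approximated in a few lines of Python (a sketch of the behavior described above, not MockMotor's actual code):

```python
from urllib.parse import urlparse, parse_qsl

def url_match_script(url):
    # Extract each query parameter and turn it into a match clause,
    # mirroring the x=='100' && y=='200' scripts MockMotor records
    pairs = parse_qsl(urlparse(url).query)
    return " && ".join(f"{k}=='{v}'" for k, v in pairs)

print(url_match_script("http://example.com?x=100&y=200"))
# x=='100' && y=='200'
```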
However, for SOAP requests, this automation does not work. The key values are hidden in the payload, where MockMotor cannot recognize their special meaning. You need to help MockMotor a bit.
You can specify the recording keys’ XPaths - i.e. the locations in the request that contain special values that identify a test account - in the Match Script
field.
For each inbound request, MockMotor will evaluate the provided expressions and add them to the recorded mock's match script along with their evaluated value.
For instance, suppose the key values are sent in the request under OriginZip
and DestinationZip
, as in the example below:
<soapenv:Envelope ...>
<soapenv:Body>
<tran:getTransitTime>
<arg0>
<tran:OriginCity>Houston</tran:OriginCity>
<tran:OriginState>TX</tran:OriginState>
<tran:OriginZip>77092</tran:OriginZip>
<tran:DestinationCity>Seattle</tran:DestinationCity>
<tran:DestinationState>WA</tran:DestinationState>
<tran:DestinationZip>98103</tran:DestinationZip>
</arg0>
</tran:getTransitTime>
</soapenv:Body>
</soapenv:Envelope>
We can then add their paths $input//*:OriginZip and $input//*:DestinationZip to the Script section. This won't break the match, because these XPaths simply mean "check that these two elements exist in the payload" - and we know they do.
MockMotor, in the record mode, extracts their values (77092 and 98103) from the request and adds them to the match script of the created mock response as $input//*:OriginZip='77092' and $input//*:DestinationZip='98103'.
When the same operation is called with the opposite direction (Seattle to Houston), another mock is created with the match script $input//*:OriginZip='98103' and $input//*:DestinationZip='77092'
.
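The recording-key mechanics can be illustrated with a short Python sketch (an approximation, not MockMotor's implementation; the *: namespace wildcard is emulated with a namespace-agnostic lookup):

```python
import xml.etree.ElementTree as ET

REQUEST = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:tran="http://webservices.averittexpress.com/TransitTimeService">
  <soapenv:Body>
    <tran:getTransitTime>
      <arg0>
        <tran:OriginZip>77092</tran:OriginZip>
        <tran:DestinationZip>98103</tran:DestinationZip>
      </arg0>
    </tran:getTransitTime>
  </soapenv:Body>
</soapenv:Envelope>"""

def find_by_local_name(root, name):
    # Namespace-agnostic lookup, standing in for the *: wildcard
    for el in root.iter():
        if el.tag.split('}')[-1] == name:
            return el.text
    return None

def build_match_script(xml_text, keys):
    # For each key XPath, evaluate it against the request and pin
    # the evaluated value into the recorded mock's match script
    root = ET.fromstring(xml_text)
    clauses = [f"$input//*:{k}='{find_by_local_name(root, k)}'" for k in keys]
    return " and ".join(clauses)

print(build_match_script(REQUEST, ["OriginZip", "DestinationZip"]))
# $input//*:OriginZip='77092' and $input//*:DestinationZip='98103'
```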
Below is the recording mock for Averitt's Transit Time Service:
As you can see, it extracts two key values from the request - its origin and destination ZIP code - before forwarding the request to the live backend.
Recording Averitt's Transit Time Service
Let's do a working example.
Averitt is a transportation company, not unlike, say, FedEx.
What makes it my favourite for today's post is that they have an open SOAP service that calculates the estimated delivery time for a given pair of USA zip codes (for people who live elsewhere, those are postal codes).
The WSDL is here:
http://webservices.averittexpress.com/TransitTimeService?wsdl
The request takes a pair of locations, and returns the expected delivery time:
Request:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:tran="http://webservices.averittexpress.com/TransitTimeService">
<soapenv:Header/>
<soapenv:Body>
<tran:getTransitTime>
<arg0>
<tran:OriginCity>Seattle</tran:OriginCity>
<tran:OriginState>WA</tran:OriginState>
<tran:OriginZip>98103</tran:OriginZip>
<tran:DestinationCity>Houston</tran:DestinationCity>
<tran:DestinationState>TX</tran:DestinationState>
<tran:DestinationZip>77092</tran:DestinationZip>
</arg0>
</tran:getTransitTime>
</soapenv:Body>
</soapenv:Envelope>
Response:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Body>
<ns2:getTransitTimeResponse xmlns:ns2="http://webservices.averittexpress.com/TransitTimeService">
<return>
<ns2:OrigCity>Seattle</ns2:OrigCity>
<ns2:OrigState>WA</ns2:OrigState>
<ns2:OrigZip>98103</ns2:OrigZip>
<ns2:origServiceCenter>NORTH AMERICA LTL NETWORK</ns2:origServiceCenter>
<ns2:DestCity>Houston</ns2:DestCity>
<ns2:DestState>TX</ns2:DestState>
<ns2:DestZip>77092</ns2:DestZip>
<ns2:destServiceCenter>HOUSTON SERVICE CENTER</ns2:destServiceCenter>
<ns2:directShipment>no</ns2:directShipment>
<ns2:EstimatedDays>5</ns2:EstimatedDays>
<ns2:EstimatedDeliveryDate>04/16/2019</ns2:EstimatedDeliveryDate>
<ns2:Comment/>
</return>
</ns2:getTransitTimeResponse>
</soapenv:Body>
</soapenv:Envelope>
Now, it is not polite to pound the PROD Averitt service with our test requests.
We also want to force some location pairs to return error codes and timeouts to test the failure scenarios. The real service, however, works very reliably and is not eager to time out or return errors.
So we, of course, need to mock the responses.
Test Data
I've prepared a list of 40 locations that are used in the tests.
AL,Huntsville,35801
AK,Anchorage,99501
AZ,Phoenix,85001
CO,Denver,80201
CT,Hartford,06101
DE,Dover,19901
DC,Washington,20001
GA,Atlanta,30301
...
SD,Aberdeen,57401
TN,Nashville,37201
TX,Austin,78701
UT,Logan,84321
VT,Killington,05751
VA,Altavista,24517
WV,Beaver,25813
WI,Milwaukee,53201
WY,Pinedale,82941
I'm going to set up a forwarding mock and record the responses for each pair of these locations. That makes 1600 pairs and 1600 mocks - more than enough for the testing.
I want to mock each response separately because I don't want to spend time creating a data-driven mock.
Once all 1600 pairs are recorded, no more traffic is going to flow from our test environment to the Averitt service.
Then I can manually update statuses and response times for some of the pairs to simulate the errors and timeouts.
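As a quick sanity check on the pair count, a three-location stand-in shows the quadratic growth:

```python
from itertools import product

# Stand-in for the full 40-location list from zips.csv
zips = ["35801", "99501", "85001"]

# Every ordered (origin, destination) pair, including same-to-same
pairs = list(product(zips, repeat=2))
print(len(pairs))  # 3 locations -> 9 ordered pairs; 40 locations -> 1600
```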
Create a Service
First, we need to create a test service that is going to mock the Averitt API:
Create a Forward Response
How to configure the forwarding mock?
- It should match the POST (or Any) HTTP operation, as befits SOAP.
- It should match any SOAP operation.
- It should use XQuery scripting because the payloads are XML.
- It should have the Averitt service URL set in the Forward field.
- It should have Record Reply as Mock enabled.
- It should use the OriginZip and DestinationZip elements’ values as the recording keys.
Below is the forward response:
Each newly recorded mock is placed above the forwarding mock and begins handling the matching traffic immediately.
Here is the forwarding mock in the responses list - still alone:
Execute 1600 Requests
Now, let's execute the requests for each pair of ZIP codes.
Since I do not have any local mocks yet, the requests will be handled by the forwarding mock. It sends each request to the actual Averitt backend first. When the backend responds, MockMotor records the response and its properties as a new mock.
Any request with previously seen keys is served by a recorded mock and is not sent to the live backend.
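The dispatch logic described above can be sketched roughly like this (a hypothetical illustration, not MockMotor's real code): recorded mocks are checked top to bottom, and only an unmatched request reaches the forwarder.

```python
# First-match dispatch: recorded mocks sit above the forwarding mock,
# so a request whose keys have already been seen never hits the backend.
def dispatch(request, mocks, forward):
    for mock in mocks:                # recorded mocks, checked top to bottom
        if mock["match"](request):
            return mock["response"]
    return forward(request)           # no match: forward (and record)

# One recorded mock, keyed by an (OriginZip, DestinationZip) pair:
recorded = [
    {"match": lambda r: r == ("77092", "98103"), "response": "mock A"},
]

print(dispatch(("77092", "98103"), recorded, lambda r: "live"))  # mock A
print(dispatch(("98103", "77092"), recorded, lambda r: "live"))  # live
```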
Create a Client Script
Of course, being efficient (or lazy, which could be the same thing), I'm not going to execute 1600 requests manually. I'm going to use a Python script. In your test environment, you'd probably use your own application.
import subprocess

# Read the 40 test locations: state,city,zip per line
with open("zips.csv") as f:
    locations = [line.strip() for line in f.readlines()]

# One request per ordered pair of locations: 40 x 40 = 1600 requests
for s1 in locations:
    for s2 in locations:
        a1 = s1.split(",")
        a2 = s2.split(",")
        req = f"""<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:tran="http://webservices.averittexpress.com/TransitTimeService">
   <soapenv:Header/>
   <soapenv:Body>
      <tran:getTransitTime>
         <arg0>
            <tran:OriginCity>{a1[1]}</tran:OriginCity>
            <tran:OriginState>{a1[0]}</tran:OriginState>
            <tran:OriginZip>{a1[2]}</tran:OriginZip>
            <tran:DestinationCity>{a2[1]}</tran:DestinationCity>
            <tran:DestinationState>{a2[0]}</tran:DestinationState>
            <tran:DestinationZip>{a2[2]}</tran:DestinationZip>
         </arg0>
      </tran:getTransitTime>
   </soapenv:Body>
</soapenv:Envelope>
"""
        with open("req.xml", "wt") as reqf:
            reqf.write(req)
        subprocess.run(["curl", "-k", "--header", "Content-Type:text/xml",
                        "-d", "@req.xml",
                        "http://127.0.0.1:20072/averittexpress/transittime"])
The script calls Averitt mock service for each pair of ZIP codes found in the zips.csv
file:
AL,Huntsville,35801
AK,Anchorage,99501
AZ,Phoenix,85001
...
Run the Script
Run the script, and about 10 minutes later, every single pair of ZIP codes is recorded.
Review the Recorded Responses
Let's take a look at the service responses again:
The forwarding mock was called 1600 times, and there is a number (1600, of course) of recorded mocks above it.
Note that each of the recorded mocks has its own version of the match script, containing its own specific ZIP code values, e.g.:
$input//*:DestinationZip='53201' and $input//*:OriginZip='82941'
In addition to the response payload and the match script, MockMotor recorded:
- HTTP operation POST
- Top payload element getTransitTime (SOAPAction is not defined for this operation)
- HTTP status 200
- Content-Type text/xml;charset=utf-8
- Response time 203 ms
- The request to use when clicking the Debug button
Test It
Let's test the new mocks.
I execute the same script as before. Now the requests should be answered from the mocks, without calling the Averitt backend.
And indeed, you can see that the call count next to the mock responses is incrementing, while the one next to the Forward All response stays the same.
You can see the complete service (with a limited number of responses) on the demo MockMotor instance.
Performance Considerations
Having one mock per input may not provide the required response time for load testing.
MockMotor has to check each mock in turn to find a match. With 1600 mocks and a response time budget of 100ms, each check would have to take about 0.06ms. There is no easy way MockMotor can satisfy that, and it can be a showstopper for a load testing run.
For time-sensitive runs, you should instead set up a single mock for the getTransitTime operation and upload a set of mock accounts - one per ZIP code pair. Those accounts should contain the origin and destination ZIPs (to select the account) and all other values (to populate the response). The response then selects the account based on the OriginZip and DestinationZip values and populates the response payload with the values from that account.
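The account-based approach amounts to a keyed lookup instead of a linear scan. A minimal sketch (hypothetical account data and response shape, for illustration only, not MockMotor's account syntax):

```python
# Hypothetical account table keyed by (OriginZip, DestinationZip);
# the key selects the account, the other fields fill the response.
accounts = {
    ("98103", "77092"): {"EstimatedDays": "5"},
    ("77092", "98103"): {"EstimatedDays": "4"},  # illustrative value
}

def respond(origin_zip, dest_zip):
    # O(1) account lookup - no scan over 1600 separate mocks
    acct = accounts[(origin_zip, dest_zip)]
    return f"<ns2:EstimatedDays>{acct['EstimatedDays']}</ns2:EstimatedDays>"

print(respond("98103", "77092"))
# <ns2:EstimatedDays>5</ns2:EstimatedDays>
```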