Vladimir Dyuzhev, author of MockMotor


Can LLMs Generate Mocks?

LLMs interpolate, not extrapolate

LLMs are Good at Interpolation

LLMs can do wonderful things. They can generate images, texts and summaries, and find solutions for common problems.

All those tasks are interpolation. Some people, at some point in the past, expressed themselves in words, images or sounds, and an LLM finds a probable path between those existing expressions. A path that looks like a new artifact.

When working in a zone with sparse human input, an LLM becomes increasingly unreliable: it replaces solid choices with fuzzy ones and hallucinates.

And if we ask an LLM to extrapolate, to produce an artifact or solution for something people haven't done yet, the result is incorrect or simply gibberish.

This is what distinguishes ML models such as AlphaFold, which produce new knowledge, from LLMs that regurgitate old knowledge.

Can LLMs Produce Mocks?

If a human has produced mocks for a service before, an LLM can repeat the feat: there are token (word) probabilities to guide it in the right direction.

However, if a human already mocked a service, why would we need to re-mock it?

For a totally unfamiliar service, an LLM might map only a few features correctly; the rest would be wild hallucinations. Needless to say, for a strictly defined service, even a single hallucinated field or property can result in a broken mock.

Can AIs/ML in General Produce Mocks?

Yes!

Machine learning has a wide and helpful set of tools beyond LLMs. There are multiple ML algorithms. I, for example, use a basic Bayes model to identify likely account keys in requests, and I'm working on a nearest-neighbour model that links related requests in a flow.
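To make the idea concrete, here is a minimal sketch of how a Bayes model can flag likely account keys in request values. This is not MockMotor's actual model; the character-level features and the training values are invented for illustration.

```python
from collections import defaultdict
import math

def features(value: str) -> dict:
    # Toy features; real keys would need many more signals.
    return {
        "digits_only": value.isdigit(),
        "len_8_12": 8 <= len(value) <= 12,
        "has_dash": "-" in value,
    }

class NaiveBayes:
    """Scores how likely a request value is an account key."""

    def __init__(self):
        self.counts = {True: defaultdict(int), False: defaultdict(int)}
        self.totals = {True: 0, False: 0}

    def fit(self, samples):
        # samples: iterable of (value, is_account_key)
        for value, label in samples:
            self.totals[label] += 1
            for name, on in features(value).items():
                if on:
                    self.counts[label][name] += 1

    def score(self, value: str) -> float:
        # Log-odds that the value is an account key, with add-one smoothing.
        log_odds = math.log((self.totals[True] + 1) / (self.totals[False] + 1))
        for name, on in features(value).items():
            if not on:
                continue
            p_key = (self.counts[True][name] + 1) / (self.totals[True] + 2)
            p_other = (self.counts[False][name] + 1) / (self.totals[False] + 2)
            log_odds += math.log(p_key / p_other)
        return log_odds

# Invented training data: numeric IDs as keys, free text as non-keys.
model = NaiveBayes()
model.fit([
    ("100234567", True),
    ("445566778", True),
    ("john.doe@example.com", False),
    ("Toronto", False),
])
```

With this toy training set, `model.score("987654321")` comes out higher than `model.score("hello")`, so the numeric value would be flagged as a likely account key. The appeal of this approach over an LLM is that it is cheap, deterministic, and trained on the traffic of the actual service being mocked.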

I’m hopeful that ML will reduce the manual work in service simulation – ML, but not LLMs. Using an unfit tool (LLM) for mock generation would likely only cause frustration.

No part of this text was created by AI. It isn't the Butlerian Jihad, but it is something anyone can do.