Match Simulator: Modeling Lineup Changes Before Kick-Off
The biggest difference between a casual football fan's read on a fixture and an analyst's read is rarely the quality of the model. It is the willingness to ask "what if?". What if Haaland is rested? What if the away side is fighting relegation while the home side has nothing to play for? What if the team that just scored 4 in midweek is regressing to the mean?
Match Simulator is Tactiq's tool for asking those questions on demand. Instead of accepting the analysis the AI produces from automated inputs, Premium users can override three of the most consequential variables and re-run the probability calculation. The result is not a different model. It is the same model, with different premises, and the delta between the two outputs tells you how sensitive the fixture is to each variable.
This article walks through what the simulator actually does, when it changes the read meaningfully, and how to interpret the deltas it produces.
What you can override
The simulator exposes three input layers that the base analysis fixes from automated data:
Lineup absences. Mark up to two confirmed or expected starters as out. The system identifies them by their player ID in the team's squad data, removes their contribution to the team's expected chance creation and defensive structure, and re-runs the probability engine with the diminished side. This is the most surgical of the three overrides.
Motivation tier. Set either side to one of four tiers: title race, European place, mid-table, relegation battle. Motivation does not change the on-paper quality of the squad. It scales how strongly the model weighs late-season fatigue effects, rotation likelihood, and the historical pattern that teams with something to play for outperform their xG by a small but real amount.
Recent-form scaler. Apply a multiplier to either side's last-five-match form. Useful when a team's form is sharply diverging from its longer-term baseline and you want to model the fixture as if the team were performing closer to its true level rather than the recent hot or cold streak.
You can apply any combination of these three. The simulator runs the analysis, returns the new probability split, and displays a delta block beneath the result showing how much each output moved.
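To make the shape of these inputs concrete, here is a minimal Python sketch. Tactiq's actual API is not documented in this article, so every name, ID, and tier string below is an illustrative assumption, not the real interface:

```python
from dataclasses import dataclass, field

@dataclass
class SimulationOverrides:
    # Up to two player IDs marked as out of the expected lineup.
    absences: list[str] = field(default_factory=list)
    # One of: "title_race", "european_place", "mid_table", "relegation_battle".
    home_motivation: str | None = None
    away_motivation: str | None = None
    # Multiplier on a side's last-five-match form; 1.0 means no change.
    home_form_scaler: float = 1.0
    away_form_scaler: float = 1.0

    def __post_init__(self) -> None:
        # The simulator caps lineup-out overrides at two players.
        if len(self.absences) > 2:
            raise ValueError("at most two absences can be marked")

# Any combination is valid; unset fields fall back to the automated inputs.
overrides = SimulationOverrides(
    absences=["player_9041"],            # hypothetical player ID
    away_motivation="relegation_battle",
)
```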
The delta block, decoded
When the simulator returns its result, you do not just see "Home win 48 percent". You see "Home win 48 percent, +3.2 from base". That second number is the delta from the base analysis. It is the actual decision-relevant information.
The deltas are expressed in percentage points, not relative percentages. A change from 45 to 48 percent is a 3-point delta, not a 6.7 percent relative change. We display points because they are what fans actually compare in their heads, and because the percentage-of-percentage framing creates ambiguous gut reads.
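A quick arithmetic check of the two framings, using the numbers above:

```python
base, simulated = 45.0, 48.0                        # win probability, percent

delta_points = simulated - base                     # 3.0 percentage points
delta_relative = 100.0 * (simulated - base) / base  # ~6.7 percent, relative

# The delta block reports the first number: "+3.0", in points.
```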
Three thresholds are useful to keep in mind:
- Delta under 1.5 points. The override does not move the probability meaningfully. The fixture is not sensitive to that input. You can ignore the override and trust the base analysis.
- Delta between 1.5 and 4 points. The input matters. Whether you accept the override changes how you read the fixture. This is the band where most simulations land.
- Delta above 4 points. The input matters a lot. Often this means a single key absence on a tight fixture, or a motivation gap on a fixture where one side is fighting for survival. Treat the simulated probability as the more decision-relevant read, but write down the assumption you made.
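Those bands are easy to encode if you want them as a checklist. A small helper, purely illustrative (the band edges come from this article, not from a published Tactiq constant):

```python
def classify_delta(delta_points: float) -> str:
    """Bucket a simulation delta using the three thresholds above."""
    magnitude = abs(delta_points)
    if magnitude < 1.5:
        return "not sensitive: trust the base analysis"
    if magnitude <= 4.0:
        return "sensitive: the override changes the read"
    return "highly sensitive: write down the assumption you made"
```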
When lineup overrides change the read
Lineup-out overrides are the most surgical because they swap a specific player's contribution out of the calculation. But they only change the read meaningfully when the player carries an outsized share of the team's expected output.
Three rough categories:
Carries little weight. A rotated full-back, a third-choice central midfielder, a backup keeper. Removing them barely moves the probability. The simulator confirms this: deltas usually under 1 point.
Carries moderate weight. A first-choice central defender, a regular winger, a starting holding midfielder. Removing them shifts the probability by 1 to 3 points, depending on the fixture's tightness.
Carries heavy weight. A 30-plus goal striker, a starting goalkeeper at a top side, a creative number 10 who registers half the side's expected assists. Removing them can swing the probability 3 to 6 points or more.
The simulator does not tell you whether the player is actually out. It tells you what happens to the probability if they are. Pair that output with the injury and suspension feed (which is automated and updates ahead of kick-off) for a complete read.
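One rough way to gauge which category a player falls into is their share of the team's attacking expected output. A minimal sketch, assuming you have per-player and team xG and xA totals; Tactiq's internal contribution model is not public and also covers defensive structure, which this simple share does not:

```python
def expected_output_share(player_xg: float, player_xa: float,
                          team_xg: float, team_xa: float) -> float:
    """Rough share of a team's attacking expected output one player carries."""
    return (player_xg + player_xa) / (team_xg + team_xa)

# A striker with 28 xG and 5 xA on a team totalling 70 xG and 50 xA
share = expected_output_share(28, 5, 70, 50)  # ~0.275: heavy-weight territory
```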
When motivation matters
Motivation tier is the trickiest override to apply because it is a judgment call on context that the model cannot read from the fixture data alone.
The clearest case is a final-day fixture where one side is in the title race and the other is mid-table with nothing to play for. The model's automated read sees both as equal-stakes, even though they obviously are not. Setting the title-race side to "title race" and the other to "mid-table" applies the historical correction, and the delta tells you how much that correction moves the probability.
The harder case is mid-season fixtures where motivation gaps exist but are subtler. A side one point above the relegation zone playing a side already mathematically safe with three weeks left. There is a real motivation gap there, but how much does it move the probability? The simulator quantifies it.
A heuristic that works: apply the override only when, in your judgment, the two sides clearly have different stakes, and only on fixtures within the last 10 matchdays of a season. Mid-season motivation gaps are smaller than end-of-season ones, and the model's calibration reflects that.
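Stated as code, the heuristic is a two-condition gate. A trivial sketch; the 10-matchday cutoff is this article's rule of thumb, and `stakes_differ` stands in for your own judgment, not a model output:

```python
def motivation_override_warranted(stakes_differ: bool,
                                  matchdays_remaining: int) -> bool:
    """Apply the motivation override only on a clear stakes gap, late season."""
    return stakes_differ and matchdays_remaining <= 10
```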
When recent-form scalers help
The recent-form scaler is the override most prone to misuse. Hot and cold streaks are real, and form does carry signal, but the human bias is to over-weight the last three matches and under-weight the longer baseline.
The model already weighs recent form. If a side has been outperforming its xG for five straight matches, the base analysis already reflects that. The scaler is for cases where you have specific information the model does not. Examples:
A team has been on a hot streak that you can attribute to a soft fixture run, and the next fixture is against a side they always struggle against. Apply a scaler under 1.0 to model regression toward the longer baseline.
A team has been on a cold streak driven by injuries to multiple key players, all of whom are now back. The base analysis cannot see that those players are returning. Apply a scaler over 1.0 to model the bounce-back.
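Mechanically, the scaler is just a multiplier on the last-five form input. A sketch of that step, with an assumed clamp range since the simulator's actual allowed bounds are not documented here:

```python
def scaled_form(last_five_form: float, scaler: float) -> float:
    """Apply a recent-form scaler, clamped to an assumed sane range."""
    clamped = max(0.5, min(scaler, 1.5))
    return last_five_form * clamped

# Cold streak driven by injuries that have now resolved: nudge form back up.
adjusted = scaled_form(last_five_form=0.8, scaler=1.15)
```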
The scaler is a precision tool, not a default override. If you find yourself applying it on every fixture, you are over-fitting to short-run noise.
Putting it together
A practical workflow looks like this. You open a fixture. You read the base analysis. You note the win probabilities. You decide which inputs you have specific information about, and which you do not. You apply only those overrides. You read the deltas.
If the deltas are small, the fixture is not particularly sensitive to those inputs, and the base analysis is the read. If the deltas are large, the fixture's read depends heavily on whether your overrides hold, and you should write down the assumption that drives the new probability.
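End to end, the workflow is two runs and a comparison. In this sketch, `run_simulation` is a hypothetical stand-in for the product call, stubbed with illustrative numbers chosen to echo the "+3.2 from base" example above:

```python
def run_simulation(fixture_id: str, overrides=None) -> dict[str, float]:
    """Hypothetical stub standing in for Tactiq's probability engine."""
    if overrides is None:
        return {"home_win": 45.0, "draw": 27.0, "away_win": 28.0}
    return {"home_win": 48.2, "draw": 26.1, "away_win": 25.7}

base = run_simulation("fx_1234")
sim = run_simulation("fx_1234", overrides={"absences": ["player_9041"]})

for outcome in ("home_win", "draw", "away_win"):
    delta = sim[outcome] - base[outcome]
    print(f"{outcome}: {sim[outcome]:.1f}% ({delta:+.1f} pts from base)")
```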
The simulator is not a way to make probabilities say what you want them to say. It is a way to ask the model what happens conditional on a specific scenario, and to compare that scenario against the model's automated read. That comparison is the value.
In Premium tier, simulations save to history alongside base analyses, with the override tags preserved. After a few weeks of using the tool, you can review which overrides actually moved fixtures, which ones did not, and how often your overridden read tracked the actual result better than the base read. The simulator becomes a feedback loop, not a one-shot answer.
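One standard way to run that review is a proper scoring rule such as the Brier score, applied to the saved base and simulated probabilities. The snippet below is an illustrative sketch with made-up history rows, not a built-in Tactiq report:

```python
def brier(prob_pct: float, happened: bool) -> float:
    """Brier score for one binary outcome; lower is better."""
    p = prob_pct / 100.0
    return (p - (1.0 if happened else 0.0)) ** 2

# Hypothetical rows: (base home-win %, simulated home-win %, home won?)
history = [(45.0, 48.2, True), (61.0, 55.4, False), (33.0, 36.1, True)]

base_score = sum(brier(b, won) for b, _, won in history) / len(history)
sim_score = sum(brier(s, won) for _, s, won in history) / len(history)
# sim_score < base_score means your overridden reads tracked results better.
```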