What Is npxG? Non-Penalty Expected Goals Explained
Open any modern football analytics dashboard and you'll see two columns next to a striker's name. One says xG. The other says npxG. The numbers differ, sometimes by a lot. A Premier League striker with 18 xG this season might have 14.5 npxG, with the 3.5-goal gap coming entirely from his club's penalty allocation. If you're reading the xG column without understanding which one you're looking at, you're reading a story with a penalty premium silently baked in.
This article walks through what npxG is, why stripping penalties out of xG is usually the honest move, and the traps that catch analysts who make the switch without thinking about what else hasn't been cleaned up. By the end, the next time someone throws around an xG stat, you'll know whether it's the headline version or the version that actually describes open-play quality.
What npxG actually is
Non-penalty expected goals is xG with penalty shots removed. That's it. It's not a separate metric trained differently; it's raw xG with the subset of shots that came from the penalty spot subtracted.
The mechanical calculation:
- Sum up every shot's xG across the window you care about (a match, a season, a career).
- Subtract every penalty shot's xG from that sum.
What's left is npxG. Penalty xG is close to a constant across providers, usually in the range of 0.76 to 0.78. The exact value depends on which historical sample the provider calibrates against, but the fluctuation is small enough that you can think of a penalty as roughly three-quarters of a goal waiting to happen.
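The subtraction above fits in a few lines. This is a minimal sketch, not any provider's actual pipeline; the shot records and field names are illustrative assumptions:

```python
# Minimal sketch of the npxG subtraction; the shot records and the
# is_penalty flag are illustrative, not a specific provider's schema.
PENALTY_XG = 0.76  # roughly constant across providers (0.76-0.78)

shots = [
    {"xg": 0.12, "is_penalty": False},
    {"xg": 0.35, "is_penalty": False},
    {"xg": PENALTY_XG, "is_penalty": True},
]

total_xg = sum(s["xg"] for s in shots)
npxg = total_xg - sum(s["xg"] for s in shots if s["is_penalty"])
print(round(total_xg, 2), round(npxg, 2))  # 1.23 0.47
```

The same three lines of arithmetic work at any aggregation level: a match, a season, or a career is just a longer list of shots.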
The reason npxG exists as its own column is that penalty opportunities are distributed across players and teams in ways that don't reflect general attacking quality. A team that earns a lot of fouls in the box will generate more penalties regardless of how well they pass or finish in open play. A striker who's the designated penalty-taker accumulates xG his teammates wouldn't have accumulated even if they'd shared the open-play load equally. Strip penalties and both effects disappear.
What's left is the number people actually want when they ask "how good has this team or this player been in open play?"
Why the gap matters
A few real-world patterns make the difference concrete.
Penalty-taker inflation. A striker who takes every penalty for a side that earns 9 penalties per season starts with 9 × 0.76 = 6.84 xG before his open-play shots are counted. If that striker has a 15 xG season, his npxG is 8.16. The honest open-play read of his quality is 8.16, not 15. Compare him to a striker on a lower-penalty-earning side who also posted 8.0 xG with zero penalty involvement and you're comparing like with like.
Team xG differential. Two teams can have similar season xG lines while being different sides in open play. Team A: 55 xG with 8 penalties earned (6.08 xG from penalties, 48.92 npxG). Team B: 52 xG with 1 penalty earned (0.76 xG from penalties, 51.24 npxG). The headline has Team A ahead. In open-play chance creation, Team B is ahead.
Cross-league comparison. Referee tendencies on penalty awards differ between leagues. La Liga has historically awarded more penalties per match than the Premier League. A La Liga side's raw xG benefits from that; their npxG doesn't. Cross-league xG comparisons without the npxG adjustment can mislead by enough to change the conclusion.
Early-season sample. In small samples, one penalty swings a striker's xG percentile sharply. A player with 2.3 xG across five matches and one penalty converted has 1.54 npxG. Stripped, his underlying rate looks far more modest. This is why scouting reports running early-season comparisons almost always work in npxG when stakes are high.
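Each of these scenarios reduces to the same subtraction. As a quick sketch, with all figures taken from the examples above and 0.76 used as the penalty constant:

```python
PENALTY_XG = 0.76

# Penalty-taker inflation: 9 penalties counted before any open-play shot
taker_penalty_xg = round(9 * PENALTY_XG, 2)        # 6.84
taker_npxg = round(15 - taker_penalty_xg, 2)       # 8.16

# Team xG differential: similar headlines, different open-play sides
team_a_npxg = round(55 - 8 * PENALTY_XG, 2)        # 48.92
team_b_npxg = round(52 - 1 * PENALTY_XG, 2)        # 51.24

# Early-season sample: one penalty dominates a five-match window
early_npxg = round(2.3 - PENALTY_XG, 2)            # 1.54

print(taker_npxg, team_a_npxg, team_b_npxg, early_npxg)
```

Nothing here requires a model: once per-shot xG exists, the penalty premium is plain arithmetic.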
The pattern across these cases is the same. Raw xG answers "what was the total quality of every shot this side or player took?" npxG answers "what was the quality of everything this side or player did to create chances in live play?" The second question is usually the more useful one.
How npxG is commonly calculated
All public xG models produce per-shot xG with a flag identifying penalty shots. Building npxG is trivial once per-shot data exists: filter out rows with is_penalty = true and sum what's left.
Two small design decisions vary across providers:
Missed penalty treatment. Some models keep the penalty-xG value in the player's cumulative xG even when the shot missed (the reasoning: the shot existed and carried high quality, so it should count toward xG). Others strip it. The first approach means a player who misses penalties develops a larger-looking "underperformed xG" gap than one who converts them. If you're reading a player's season npxG to judge his finishing, check which convention the provider uses. The more defensible choice is to count taken penalties in raw xG but not in npxG regardless of outcome, because npxG is explicitly about open-play work.
Rebounds from penalty misses. A saved penalty that rebounds to a teammate who scores: does the rebound shot get the full xG it deserves? Most providers treat it as a normal open-play shot from its location and assign xG based on the shot context. That's correct. But some simpler models lump the rebound into a penalty sequence and treat it differently. For users reading modern data feeds, this is a non-issue; for users reading older historical data, it's worth knowing.
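The two missed-penalty conventions can be made concrete in a few lines. This is an illustrative sketch under assumed field names, not any provider's actual code:

```python
# Sketch of the two missed-penalty conventions described above.
# Field names ("xg", "is_penalty", "scored") are assumptions.
def raw_xg(shots, count_missed_penalties=True):
    """Raw xG; optionally strip penalties that were not scored."""
    return sum(
        s["xg"] for s in shots
        if count_missed_penalties
        or not (s["is_penalty"] and not s["scored"])
    )

def npxg(shots):
    """npxG excludes every penalty, taken or missed."""
    return sum(s["xg"] for s in shots if not s["is_penalty"])

# One open-play goal plus one missed penalty
shots = [
    {"xg": 0.20, "is_penalty": False, "scored": True},
    {"xg": 0.76, "is_penalty": True, "scored": False},
]
print(round(raw_xg(shots), 2))                                # 0.96
print(round(raw_xg(shots, count_missed_penalties=False), 2))  # 0.2
print(round(npxg(shots), 2))                                  # 0.2
```

Note that npxG is identical under both conventions; the divergence only shows up in the raw xG column, which is exactly why cross-provider comparisons need the footnote check.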
Tactiq reads event-level match data from licensed sports feeds covering 1,200-plus competitions. The per-shot data includes the penalty flag, which lets both raw xG and npxG be computed cleanly for the match analysis. How those two signals combine with the rest of what the product looks at stays within the app.
Where npxG still misleads
Switching from xG to npxG is a cleanup, not a cure. The raw metric's other weaknesses still apply, and a few new ones appear.
Set pieces other than penalties still distort. npxG strips penalties but leaves in corners, direct free kicks, indirect free kicks, and throw-in set plays. For a team built around set-piece excellence, npxG still includes that value. If you want "open-play xG" in the strict sense, you need to strip all set-piece shots, not just penalties. Some providers publish an "open-play xG" column separately. npxG is halfway there, not all the way.
Drawn penalty context is lost. A striker who's elite at drawing penalties through clever movement and body use is contributing real value to his team. That contribution disappears in npxG because the drawn-penalty event produces a penalty shot taken by someone, not a shot taken by the drawer in live play. The drawer's npxG reads lower than his actual attacking contribution. Comparing two strikers on npxG alone, where one draws fouls and the other doesn't, undervalues the first.
Designated-taker effects. npxG is a player-level number, and the penalty-earner often isn't the penalty-taker. Stripping the penalty from the taker's xG doesn't add it back to the earner's account. If you're trying to evaluate which forward is genuinely more productive for his team, the taker effect understates the earner's value in npxG just as raw xG overstates the taker's.
Missed-penalty handling inconsistency. As noted above under missed-penalty treatment, providers vary in whether a missed penalty's xG stays in the player's total. Comparing two players across providers who handle this differently produces apples-to-oranges conclusions.
The headline number still doesn't reflect shooter quality. A striker's npxG measures the quality of the open-play chances he shot from. It doesn't say whether he converted them better or worse than average. That's a finishing question, answered by the npxG-to-non-penalty-goals gap, not by npxG alone.
Small samples still lie. One big open-play chance in a single match can lift a team's npxG from 0.9 to 1.5. That shift doesn't tell you the team was better over 90 minutes. It tells you one good chance happened. A rolling window of several matches remains the baseline.
Cup and tournament fixtures still carry higher variance. Stripping penalties doesn't change the fact that a cup-final npxG read is less reliable than a mid-season league-game npxG read. Confidence scales with comparable sample depth, not with which xG column you're reading.
The usable rule that comes out of this: npxG is the cleaner of the two numbers for cross-team and cross-player comparisons focused on open-play ability. It's still a probability, still subject to the same sample-size and context warnings as raw xG, and still needs to be read alongside its companion metrics.
How Tactiq uses the npxG signal in the analysis
Tactiq treats npxG the way this article has just described it: as a refinement of the underlying-performance picture, not a standalone verdict.
Inside a match analysis, the difference between a team's recent raw xG and its recent npxG is one of the signals the analysis reads when evaluating team form. A team whose raw xG has been high but whose npxG has been modest is earning a large share of its xG from penalties. A team whose npxG is stable and close to its raw xG is sustaining open-play quality. Those two read differently on the match card even if the raw xG column looks similar.
The specifics stay inside the product: how Tactiq's analysis weights raw xG against npxG across the rest of what it sees, which sample windows it uses, and how it flags unstable signals. Published methodology gets copied and miscalibrated within weeks; what reaches the user is a confidence-qualified read with the reasoning explained in plain language, not a textbook.
What the user sees on the match card:
- An expected goals figure per side, with a recent-form trend indicator.
- Probability triples for the match outcome (home, draw, away), qualified by a visible confidence indicator.
- A written analysis that names the open-play picture in plain English: "Home side's recent creation has held up in open play, though the penalty awards that lifted the earlier headline xG have dried up, bringing the headline figure back in line with the underlying pattern."
- No external market data anywhere. No redirects to third-party platforms. No virtual currency. Statistical analysis only.
The analysis doesn't surface raw npxG numbers on screen; it surfaces the interpretation of what the raw-versus-non-penalty gap implies about the team's open-play quality.
How to read npxG like a pro
Five habits turn npxG from a second column into a useful lens.
- Pair raw xG and npxG whenever both are available. The gap between them is the penalty premium. Penalty-heavy teams and strikers look different in the two columns for good reason.
- Use npxG for cross-team open-play comparison. Set-piece profiles vary by league; penalty frequencies vary by referee tendencies. npxG reduces at least one of those distortions.
- Don't evaluate penalty-takers on npxG alone. A designated penalty-taker's value includes the fact that he converts penalties reliably. npxG strips the credit and undersells him for that role. For taker evaluation, look at both columns.
- Be careful with missed-penalty handling across providers. Read the footnotes on the dashboard. A "player underperformed xG" narrative driven by counted missed penalties is a different story than one driven by genuine poor open-play finishing.
- Apply the same rolling-window discipline as xG. Four to eight matches. One fixture is anecdote, not pattern.
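The rolling-window habit from the last bullet can be sketched in a few lines. The per-match values here are invented for illustration:

```python
def rolling_npxg(per_match_npxg, window=6):
    """Rolling mean of per-match npxG; windows of 4-8 matches
    damp single-fixture noise without hiding real form shifts."""
    return [
        round(sum(per_match_npxg[i - window:i]) / window, 2)
        for i in range(window, len(per_match_npxg) + 1)
    ]

# Invented per-match npxG values: one big-chance spike in match 4
matches = [0.9, 1.1, 0.8, 2.4, 1.0, 0.9, 1.2, 1.1]
print(rolling_npxg(matches))  # [1.18, 1.23, 1.23]
```

The 2.4-npxG spike barely moves the rolling read, which is the point: one good chance is anecdote, and the window treats it as such.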
Together these habits turn npxG from a minor variant into a genuinely useful view on open-play football.
The takeaway
npxG is xG with the penalty constant stripped away. It's cleaner for most comparisons, particularly cross-team and cross-player reads focused on open-play ability. It's still a probability, not a verdict, and the broader xG discipline (rolling windows, context sensitivity, confidence qualifiers) applies equally to it.
Used as an upgrade over raw xG for the questions where open-play quality is what matters, it's the honest number. Used as a single-match oracle or as a leaderboard stat without context, it misleads in exactly the same ways raw xG misleads.
Tactiq is built to read the underlying-performance picture with that context held in place. The analysis reflects the raw-versus-non-penalty gap where it matters, surfaces it in plain language on the match card, and never mixes the statistical read with external market data. 1,200-plus competitions, 32-language localisation, free tier of eight analyses per day, no credit card required.
If you've been following this series, you've now read the foundations in three layers: how AI predicts football matches, what xG actually measures, and the complete xA guide on the creation side. npxG sits alongside xG and xA as the third metric in the underlying-performance toolkit, and the four articles together cover the ground the rest of the blog keeps building on.