Glancing at last weekend’s Premier League results through the lens of expected goals (xG), you might be tempted to dismiss the metric entirely.
Across the 10 fixtures, xG pointed to the actual result in only three matches. Manchester United’s 0-1 home defeat to Everton is a prime example: United generated 2.27 xG compared to Everton’s 0.16 yet still lost.
But xG does not capture contextual events such as Everton going down to 10 men or defending deep for long spells.
What it does highlight is United’s inability to fashion clear-cut chances from their 25 attempts, with Mason Mount (0.44 xG) and Luke Shaw (0.38 xG) accounting for their best opportunities.
So, if xG appears so inconsistent at match level, should we rethink its importance? Not necessarily.
In week 11, xG matched seven results; in week 10, it matched eight. Since the 2021–22 season, xG has predicted results correctly 59% of the time. This season it stands at 57.5%—only one or two results shy of the long-term trend.
Crucially, xG is not designed to predict scores but to evaluate the quality of chances a team creates and how efficiently they convert them.
This becomes clearer when comparing expected performance to actual league standings. Based strictly on xG, Aston Villa, Sunderland and Tottenham would all be languishing near the bottom of the table. In reality, they sit fourth, seventh and ninth. Villa have excelled at scoring from long range, while goalkeepers Robin Roefs and Guglielmo Vicario have outperformed xG by repeatedly denying high-quality opposition chances.
This raises a natural question: why bother with xG at all?
A better way to interpret it is this: if you walk away from a match feeling “we should never have lost” or “if we keep playing like this, we’ll be fine,” your team likely produced a higher xG than goals scored. Conversely, “I can’t believe we won that” usually indicates a low xG performance.
Across a season, teams generally regress toward their xG—though form, finishing streaks and defensive errors affect the pace of that regression. Liverpool, for instance, would be fifth based on xG but sit 12th due to underperformance in key moments.
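The regression idea can be illustrated with a quick simulation: treat each shot as an independent coin flip with scoring probability equal to its xG, and compare how far goals drift from xG over a single match versus a 38-game season. This is a toy model with an invented shot profile (13 shots per match at 0.11 xG each, roughly a mid-table output); real finishing is streakier and shots are not truly independent.

```python
import random

random.seed(42)

def simulate_goals(shot_xgs):
    """Each shot scores independently with probability equal to its xG."""
    return sum(1 for xg in shot_xgs if random.random() < xg)

# Hypothetical team: 13 shots per match, each worth 0.11 xG (~1.43 xG/match).
shots = [0.11] * 13
match_xg = sum(shots)
season_xg = 38 * match_xg

# Single matches: goals scatter widely around xG.
matches = [simulate_goals(shots) for _ in range(5000)]
match_mae = sum(abs(g - match_xg) for g in matches) / len(matches)

# Full 38-game seasons: totals land much closer to xG in relative terms.
seasons = [sum(simulate_goals(shots) for _ in range(38)) for _ in range(1000)]
season_mae = sum(abs(t - season_xg) for t in seasons) / len(seasons)

print(f"match: xG {match_xg:.2f}, typical miss {match_mae:.2f} "
      f"({match_mae / match_xg:.0%} of xG)")
print(f"season: xG {season_xg:.1f}, typical miss {season_mae:.1f} "
      f"({season_mae / season_xg:.0%} of xG)")
```

Match-level deviations run to well over half the team's xG, while season totals typically miss by only around a tenth of it, which is why the metric is far more trustworthy over a full campaign than for any single scoreline.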
So what shapes xG? And how trustworthy is it as a performance indicator?
Several tactical trends this season help explain why xG sometimes diverges from match outcomes.
Teams have placed greater emphasis on choreographed set-pieces—corners, free-kicks and long throw-ins. Because set-pieces are pre-planned “mini-games,” their execution often results in higher-quality real-world outcomes than xG models assume.
Opta’s model factors in shooting distance, goalkeeper positioning, defensive pressure, shot type and angles. Yet set-pieces frequently involve crowded penalty areas and multiple defenders, which depress xG values—even when a routine creates a well-engineered scoring chance.
When a team successfully manufactures a set-piece shot, their design has worked. In such scenarios, they possess an advantage that xG may underestimate.
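The feature-based structure described above can be sketched as a simple logistic model. The coefficients here are invented for illustration only (Opta's actual model is proprietary and fitted to hundreds of thousands of historical shots), but they show mechanically how crowded penalty areas depress a set-piece chance's value.

```python
import math

def toy_xg(distance_m, angle_deg, defenders_in_cone, is_header):
    """Illustrative logistic xG model: probability a shot is scored.

    Coefficients are made up for demonstration; a real model is
    fitted to large volumes of historical shot data.
    """
    z = (
        1.5
        - 0.12 * distance_m               # farther out -> lower chance
        + 0.02 * angle_deg                # wider goal angle -> higher chance
        - 0.45 * defenders_in_cone        # bodies in the way depress value
        - 0.60 * (1 if is_header else 0)  # headers convert less often
    )
    return 1 / (1 + math.exp(-z))

# Crowded set-piece header vs a clean open-play strike from similar range:
corner_header = toy_xg(distance_m=8, angle_deg=30, defenders_in_cone=4, is_header=True)
open_play = toy_xg(distance_m=8, angle_deg=30, defenders_in_cone=1, is_header=False)
print(f"set-piece header: {corner_header:.2f}, open-play shot: {open_play:.2f}")
```

In this sketch the same shooting position is worth far less as a contested header than as an unpressured strike, even when the set-piece routine has deliberately engineered that header, which is exactly the underestimation the article describes.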
Opta’s xG model assumes shots are taken by an “average player.” This standardised approach contributes to under- or overprediction depending on team quality.
Elite teams tend to design routines that put their best finishers on the end of chances. Lower-level teams do not always have that luxury.
Research backs this up: xG models typically overpredict the scoring output of weaker teams while underpredicting that of stronger sides with more clinical finishers.
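The "average player" effect can be mimicked by scaling each shot's scoring probability with a crude finishing-skill multiplier. This is a toy device, not how production models adjust, and the shot values and multipliers below are invented for illustration.

```python
def expected_goals(shot_xgs, finishing_multiplier=1.0):
    """Sum per-shot scoring probabilities, scaled (and capped at 1.0) by a
    crude finishing-skill multiplier. 1.0 = the model's average player."""
    return sum(min(xg * finishing_multiplier, 1.0) for xg in shot_xgs)

# A season's worth of hypothetical chances for one striker: 200 shots, 36 xG.
chances = [0.10, 0.35, 0.08, 0.22, 0.15] * 40

baseline = expected_goals(chances)        # what the average-player model predicts
elite = expected_goals(chances, 1.15)     # clinical finisher beats the model
weak = expected_goals(chances, 0.85)      # below-average finisher falls short
print(f"model xG: {baseline:.0f}, elite finisher: {elite:.0f}, weak finisher: {weak:.0f}")
```

From an identical set of chances, the clinical finisher lands several goals above the model's prediction and the weak finisher several below it, mirroring the systematic over- and underprediction the research describes.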
A second trend this season is the rise of shots taken from distance against deep defensive blocks.
Tottenham have dramatically overperformed their xG, frequently scoring from situations where defenders retreat and leave space 20–25 yards from goal. Aston Villa—one of the league’s biggest xG outliers—regularly score through similar scenarios despite creating some of the league’s lowest-quality chances.
These shots generate low xG values due to distance and perceived defensive pressure, but they often go unchallenged and are taken by high-level ball strikers—producing goals that the model does not fully account for.
Direct free-kicks contribute minimally to xG because the average player is unlikely to score from such situations. Yet teams with elite dead-ball specialists routinely outperform their expected totals.
This trend is neither new nor surprising: free-kick excellence has always existed outside xG assumptions and continues to distort match-level projections.
Aston Villa have outperformed their xG by 4.06 goals this season—trailing only Spurs (8.84) and Burnley (5.26). Their 4-0 win over Bournemouth captures the phenomenon perfectly.
Despite scoring four times, Villa posted just 1.88 xG, while Bournemouth generated 2.01. All four goals stemmed from low-value chances: two from distance and two from planned set-pieces, illustrating why Villa consistently exceed expected returns.
While xG struggles to explain one-off matches influenced by special moments or tactical wrinkles, it remains highly predictive over a full campaign.
Last season, the four teams with the best xG records finished first, second, fourth and fifth. Tactical outliers such as long-range goals, set-piece routines or standout goalkeeper performances occur less frequently than open-play chances and require significant individual quality to sustain.
Across a season, teams that consistently generate high-quality opportunities close to goal almost always finish near the top.
Football’s unpredictability—flashpoints, moments of brilliance and defensive lapses—is what makes the sport compelling. xG does not attempt to remove the chaos; it simply quantifies the underlying process.
Over time, that process still matters.