I have an idea for a change to the Laws of cricket to improve an aspect of Test match play. It would be an attempt to make (even) more Test cricket attractive to watch – more specifically, to restrict those passages of play that don’t feel vital, tense and keenly contested. It centres on competitiveness. For that reason, it goes wholly against the grain of the international game and, I accept, would not be adopted. It’s a political thing, it seems. But the idea came to me not as a challenge to the game’s status quo, but at home.
So, it starts personal: I am no longer always alone watching cricket. My sons join me and with their company comes responsibility and anxiety. Will the cricket – will England’s performance – sustain their interest? I’m not really worried about No.1 son. He is in too deep and has found the multiple layers of the sport, which can provide distraction from bad cricket or a poor England. No.2 son is more of a worry. Unlike his brother, he’s not an autodidact. What he knows is what we’re watching and how we interpret it for him. It could go wrong. And for that reason, I want Test cricket to show its better side.
The one aspect that I would change is a function of Test cricket’s tendency for one side’s advantage in a match to be exponential, not linear. This is routinely seen in the margins of victory between two closely matched teams. They are not the handful of runs that, on paper, separate them, but hundreds of runs, with the result and margin often reversed the following week. It’s not the margins of victory themselves that are the problem I want to highlight, but a particular passage of play that occurs as the team in the ascendancy turns their advantage into an unchallengeable lead.
The recent (2019/20) Trans-Tasman Trophy series was an archetype. In each of the three Tests, Australia held a first innings lead, amounting to 250, 319 and 198 (reduced from 203 after a penalty incurred by Australia in the third innings). From that commanding position, Paine’s team set out to bat again and build their advantage.
The outcome of this tactical ploy is a dissipation of competitive tension throughout the third innings of the match. The fielding team may attack briefly with the new ball, but if incisions aren’t made quickly, the innings proceeds with the teams at arm’s length, not truly locked in battle. Runs are accumulated against either bowlers not exerting themselves or second- and third-string bowlers. The fielding team is trying to slow the scoring – by defensive methods – not in the interests of forcing an error, but simply eating up time, or just plain time-wasting. Batsmen may play attractive innings, but there’s a strong sense of cashing in on the situation rather than shaping the game. The match proceeds like a car coasting down a hill in neutral – something may happen at the bottom of the hill, but there’s not much propelling it, or standing in its way.
‘Tune in later’, I’d advise a youngster trying to get to grips with Test cricket, ‘when the meaningful stuff starts’, hoping they will bother to return when the fourth innings begins.
A solution in Game Theory?
The problem of one competitor gaining an advantage that is detrimental to the spectators’ experience of the contest is not peculiar to cricket (although the duration of this period might be). From the academic field of game theory has arisen the idea of re-instilling interest in a contest by giving the team that falls behind a catch-up opportunity.
The first example is not just about maintaining competitive tension, but also equity. In football (soccer) knock-out matches that go to penalties, a heavy advantage is enjoyed by the team that kicks first. To level this playing field and promote tighter penalty shoot-outs, it is proposed that the sequence of kicks changes from ABABABABAB to ABBAABBAAB, so each team has the opportunity to take the lead. Baseball is another sport that has received game theory advice. In this case, the recommendation is to vary one of the fundamental components of the sport. The team that leads, it is proposed, should have its innings reduced from three outs to two outs. This is catch-up theory at its crudest: limiting the scoring opportunities for the team that finds itself ahead. The ideas have not been adopted.
Game theory’s catch-up ideas remain just that. They do, though, provide material that might be applied to Test cricket if we want to rid it of its third innings malaise.
The problem analysed
Before outlining the options available to Test cricket, I have some data on the extent of the problem, drawn from all Test matches in the last decade.
Of the 427 Tests in the sample, almost two-thirds (65.5%) had first innings leads exceeding 100, and more than one-third (34.7%) exceeding 200.
The size of first innings lead closely correlates with the match result. Unsurprisingly, the larger the lead, the greater the prospects of victory and less likelihood of defeat. Once the lead tops 125 runs, the chance of defeat falls below 10%. That threshold is reached even earlier – above 100 – for sides batting first who gain a lead in the initial innings of the match.
Cricket has, of course, a provision in its Laws to prevent the aimless drift in two-innings matches which feature a dominant side at the half-way point. Law 14 states: “the side which bats first and leads by at least 200 runs shall have the option of requiring the other side to follow their innings.”
21% of the matches in the past decade met the conditions that would enable the side batting first to require their opponents to follow-on. Captains of the side with the advantage opted to do so less than half the time (46%). From the scorecard data, their decision was influenced by the size of their lead, and the number of overs they had been in the field to achieve their advantage. Other factors undoubtedly played a part as well: series situation, bowlers’ fitness, weather conditions, etc.
More than one-in-nine Test matches in the 2010s featured a dominant side choosing to build up its lead in an often successful effort to take the sting, the jeopardy and the interest out of the remainder of the game. It’s a significant minority of all matches. Had I not had access to the statistics, I would have guessed the proportion was higher. It seems such a common occurrence – a blight on the sport that has me in its grip.
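That ‘more than one-in-nine’ figure follows from the two proportions above and can be checked directly. A minimal sketch, using only the figures quoted in this post:

```python
# Checking the 'more than one-in-nine' figure from the decade's data.
# Figures from the text: 427 Tests, 21% met the follow-on conditions,
# and captains enforced the follow-on 46% of the time.
total_tests = 427
eligible_share = 0.21      # matches where the follow-on was available
enforced_share = 0.46      # of those, how often it was enforced

eligible = total_tests * eligible_share        # roughly 90 matches
declined = eligible * (1 - enforced_share)     # follow-on available but not taken

share_declined = declined / total_tests        # share of all Tests
print(round(share_declined, 3))                # 0.113 - more than one match in nine
```

At 11.3% of all Tests, the dominant side batting again is indeed a touch more frequent than one match in nine (11.1%).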
Catch-up cricket – advantages and disadvantages
I want to argue that if one side establishes a hefty lead, then it is better for the game that the other side bats next – either to bring the game to a swift conclusion, or to straight away challenge the side that has fallen behind to stage a comeback. Either way, all of the cricket is vital and well contested. This is what happens when the side batting second has the hefty first innings lead. These suggestions then, only apply to situations where it is the side batting first that holds the advantage.
I can conceive of a range of options for avoiding the third innings cruise:
- the side with a first innings deficit always bats third
- set an arbitrary first innings lead that would require the side behind to bat third – a return to the mandatory follow-on law
- invert the follow-on rule, so that the captain of the team in deficit decides whether to bat next.
The downside of each of these is the potential for manipulation – gifting runs or wickets to gain a positional advantage: eg allowing the deficit to fall below 200 so that the follow-on isn’t enforced. Let’s park that objection and look at other arguments against forcing the team behind the game back out to the middle to bat again.
There is a strongly held notion that the side that has won the first innings advantage has won the right to determine the sequence of the match – eg avoid batting fourth on a pitch that is likely to be deteriorating. There are political echoes to this understanding of the sport, which I’ll return to later. For now, I’ll restrict my response to noting that this ‘deserved’ advantage may have been the result of a good deal of fortune: winning the toss, batting/bowling in more favourable conditions. More fundamentally, I would counter this objection with the assertion that the sport should be structured to foster competition, not reward a particular team for where they find themselves part-way through the game.
A second argument, which is I suspect more persuasive to the players, is that forcing a side into the field for back-to-back innings risks injuries and fatigue to its bowlers and fielders. The risk is real, but it applies also to the team against whom the opposition amass a score of 500 or more in a single innings over five or six sessions. We expect that fielding captain to manage his or her resources without offering them respite. Shouldn’t we expect the captain of the stronger team – with the sizeable first innings lead – to do the same?
In the knowledge that being the superior team could lead to longer stretches in the field, stronger teams may select more balanced sides, with more bowling options, to drive home the advantage won in the first innings. On winning the toss, they may choose to bowl first, avoiding the possibility of an enforced follow-on, giving the weaker team first use of the pitch. It may change the nature of pitches that home boards task their groundskeepers with preparing. The risk of injury and fatigue is genuine, but so is the ability of cricketers to adapt, possibly in ways that enhance the game.
It can further be argued that the enforced follow-on may shorten games, denying action to spectators with 4th day tickets and advertising revenue to broadcasters, and providing sustenance to those wanting to lop a day from the scheduled duration of all Tests. The evidence of the 2010s is that matches where sides with deficits over 200 runs were required to follow-on did wrap up faster – by an average of 50 fewer second innings overs (in excess of half a day’s play). I am not persuaded that we need Tests to be any longer than it takes for one side to dismiss the other twice for fewer runs than it has scored. That is the essence of the sport.
I acknowledged earlier that forcing teams to do something they don’t want to do could bring about match manipulation – gifting of runs or wickets. To assess this risk, it is worth understanding what is at stake for the captain of the side on top, who prefers to bat again rather than enforce the follow-on. There is the concern, already mentioned, about the physical demands on bowlers.
Another factor in that captain’s thinking is wanting to avoid batting last, when the pitch conditions will be most difficult for batting. Over the last decade, the fourth innings batting average across all Tests is 4.8 runs per wicket below that for the third innings – 48 runs across a full ten wickets. Applying that statistic to a first innings lead effectively adjusts a 200 run advantage to 152. It is this sort of calculation that could beget manipulation.
Imagine the team batting second is approaching the (now mandatory) follow-on score: as the ninth wicket falls, they are 14 runs short (the average 10th wicket partnership in the first innings of the side batting second in the 2010s). The fielding captain could subtly gift those runs to ensure his bowlers get a rest and he avoids the disadvantage of batting last. An average 10th wicket partnership would then realise its own 14 runs, plus the 14 gifted – 28 runs more than the lowest total had the captain taken the 10th wicket without conceding any runs. Add four more runs to represent the average partnership score once 14 runs are made: 32. The advantage of batting third over fourth is quickly whittled away.
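The arithmetic of that scenario can be laid out explicitly. A minimal sketch, using the averages quoted above (the 14-run average 10th wicket partnership, and the four further runs expected once a partnership reaches 14):

```python
# Sketch of the run concession in the follow-on 'gifting' scenario.
# Averages from the text: a 10th wicket partnership is worth 14 runs,
# and a partnership that reaches 14 adds roughly 4 more.
avg_last_wicket_stand = 14   # expected 10th wicket partnership
gifted_runs = 14             # runs deliberately conceded to pass the mark
extra_once_at_14 = 4         # expected further runs once 14 are on the board

# Versus taking the 10th wicket immediately for no runs, the fielding
# captain concedes:
conceded = avg_last_wicket_stand + gifted_runs + extra_once_at_14
print(conceded)  # 32 runs shaved off the first innings lead
```

Set against the 48-run value of batting third rather than fourth (4.8 runs per wicket across ten wickets), conceding 32 runs leaves only a slender net advantage.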
The reverse argument can be made for the batting team, who may want to manipulate proceedings to maximise their score without passing the mandatory follow-on score. Perhaps both sides would enter an ultra-attacking phase, one willing to risk conceding runs but accepting wickets falling; the other accepting the runs but willing to see their innings close.
It would be an audacious or desperate captain who deliberately reduced their first innings advantage, or increased their deficit. Their control of the degree to which they concede ground to the opposition would not be precise and could just turn out to be match-losing or win-forfeiting. Nonetheless, match-fixing gives us evidence that some players will under-perform for some future or other benefit. If the risk of changing the follow-on law were to introduce the prospect of tactical under-performance in the expectation of creating a superior match situation, it probably wouldn’t be worth it.
An alternative approach would be to leave alone the laws over following-on and innings sequences and take other steps to prevent the third innings drift. This could be done by giving the choice at the outset of the game of whether to bat or bowl not to the winner of the coin toss, but to the weaker team, trusting them to seek the advantage of batting first (note 1). A number of ways could be employed to identify the weaker team, including the ICC rankings or the score to date in the series. I would recommend, at the first Test of the series, using away status as a proxy for ‘weaker’. Thereafter, the choice to bat or bowl first would devolve to the team behind in the series, or stay with the away team if the series score were level.
The political dimension
Making the game more competitive in most sports is an issue of equity and entertainment. In international cricket, it is squarely political. In trying to come up with solutions to the third innings malaise, with its passages of play that I would find hard to justify as worth my younger son spending his time watching, I came up against a far stronger barrier than health and safety concerns. International cricket is not run to be competitive. More than that, it is run to be uncompetitive. Catch-up proposals that could, ever so slightly, tilt the balance of a match, have no hope of success when the fabric of the game takes the advantage of some nations, institutionalises it and makes it a matter of active policy. If the health of the wider international sport is not prioritised then it is futile expecting changes that benefit weaker teams mid-match to find any traction.
The nations that participate in international cricket find themselves in the early twenty-first century unequal: population, resources, playing facilities, history, climate, etc. Advantage isn’t truly earned in Test and international cricket. It is an accident of geography, empire, national determination and economic development amongst many other factors. Onto that inequality we graft decision-making authority, match scheduling, access to competitions, distribution of funds and migration of players in ways that entrench relative advantage. But still we praise the strong for exerting their strength and pity the weak for not overcoming their disadvantages. International cricket needs something more fundamental than a catch-up device – a fully-articulated handicap system would be more suitable.
I referred earlier to the objection to the mandatory follow-on that the team with the first innings advantage had earned the right to decide whether or not they would bat next. Underlying it are two ideas that are joined by a golden thread to the politics of international cricket: 1) those with the advantage have decision-making authority; 2) the advantage they hold is deserved. The first idea is base realpolitik and as applied to match-play, relates to nothing intrinsic in the sport. In other words, cricket would lose nothing, if, at the stroke of a pen, the Laws were amended and the authority to decide who bats in the third innings of a match was invested not in the captain of the side with the advantage, but his or her opposite number. The second is the conservative sleight of hand that encourages the status quo to go unexamined: the wealthy and the powerful are deserving of their advantage, when even the shallowest digging below the surface would expose the combination of privilege and good fortune that really accounts for their status.
Back at home
If politics is to continue to prevent cricket becoming the best sport it could be, I don’t think I should shield this fact from my younger son. In future, as a team starts its second innings, aiming just to bloat its already hefty lead in the game, I’ll draw this to No.2 son’s attention. “Look. They have chosen to bat again, to take the game out of reach. It’s what the powerful do: they defend their advantage.”
If Test cricket cannot always be entertaining, let it be educational.
Note 1: for an assessment of the advantage of winning the toss (aka batting first), read criconometrics
This year’s selection of blog posts is as diverse as each of the previous eight years’ selections, featuring authors from four continents, content spanning the international and recreational game, cricket of the early twentieth century, the modern and near future. There are themes, though: the summer’s World Cup provides material for four pieces; statistical insights inform three and concerns about how the game is run are found in three. The qualification for the Select XI remains that they should be independent and unremunerated writing from the web. Bloggers featured in any of my previous annual round-ups are excluded.
Red Ball Data (@EdmundBayliss) has been one of the pleasures of 2019, providing frequent, ingenious investigations of cricket tactics and performance, with a focus on the red ball game, but in the example I’ve selected, looking at the interaction of one format (T20) with the others: On the decline of Test Batting
To counter-balance the rationalist approach, here is the romantic viewpoint: Mahesh dissects and celebrates a single shot from this year’s Test cricket:
Kusal Perera takes the smallest of strides forward, without the slightest of pretensions to get near the line of the ball, backs his hands to work at his eye’s command, and deposits the ball onto the roof of the stands over extra cover with a most pristine swing of the bat. Dean Elgar pulls down his sunglasses to see how far the ball has gone. Aleem Dar completes the formality of signaling a six but keeps staring towards extra cover as if he is trying to visualise that moment of perfection again.
From this one shot, the author builds an argument for the Six – a shot he describes as unnecessary to Test cricket – as testimony to the adventurous spirit of sports players.
Six appeared on the 81 all out site, alongside my favourite player appreciation of 2019, Yuvraj Singh and the journey from hope to possibility. Aftab Khanna describes the catalysing impact Yuvraj had on the India ODI team of the early 2000s. The core of his success with the bat was having the cleanest of swings:
There was a smooth, unbroken backswing, a stable head at contact, and a clean follow-through with a pristine extension of the arms. In the middle of the cacophony of a packed stadium, Yuvraj brought the tranquillity of the golf course to the art of swinging.
Yuvraj is joined in the Select XI by Wayne Madsen, whose milestone of reaching 10,000 runs for Derbyshire was marked in this affectionately written piece by Steve Dolman (@Peakfanblog).
2019/20 marks the start of the next four year cycle of building towards the 2023 ODI World Cup. Dan Weston (@SAAdvantage) used this as an opportunity to look backwards to the 2019 tournament cycle and forwards to address the personnel changes Pakistan and South Africa would need to make. In Managing the Overhaul, Weston mines his domestic limited overs database to pinpoint the players who could help those nations have more impact in the next World Cup. An article to read, bookmark and review in four years’ time.
The 2019 World Cup also features in You Couldn’t Write the Script, by Nicholas Freestone. The source of celebration is not simply England’s trip to the final, but the game’s reappearance on free-to-air TV in the UK. Freestone writes about the rise and fall of Channel 4’s cricket coverage, which ran alongside and informed his school days’ fascination with the sport. Days before the final, he concludes:
The sun will be shining on Lord’s and, no matter the result, it will be one of my greatest days as a cricket fan. It was over half my lifetime ago that Channel 4 was showing live England cricket, something that was seminal in my childhood.
Now it is back, and it will be something special – one day in which I can relive the greatest sporting coverage that I will ever see in my lifetime.
Blogging is for amateurs, aspiring professionals and for those who have already made it to the media profession. Peter Della Penna is a CricInfo correspondent, who used his own website to publish an account of watching a World Cup tie with Evan, a friend from his childhood in New Jersey. The Americans watching Afghanistan in England on the 4th of July might not earn Della Penna a paycheck but it makes for a fine, long-form blog, that weaves together a number of themes topical and timeless. Of the latter, there is the insight a cricket watching veteran can gain from accompanying a debutant:
Of all the things Evan could have come across at a cricket match to pique his interest, the Duckworth-Lewis-Stern Method would have been near the bottom of the list of things I would have ever imagined drawing him in. But it happened.
Neil Manthorp, South African cricket broadcaster, is a dedicated blogger. In recent months he has detailed the crisis in South African cricket governance. My selection comes from the middle of this year: Manners’ reflection on that final in July, notable for its cool detachment as it considers What will the Greatest Final be remembered for?
They [the New Zealand players] held their nerve as well as they held their catches. Time and time again when the intensity of the moment demanded that somebody, or something, should crack, it was not them. They had learned the lesson from the last final, four years ago, when the occasion was too much.
But somebody did crack.
Going back 100 years, the Wisden of 1919 had little cricket to cover and featured obituaries of many players who didn’t make it to the resumption of regular play that spring. For its cricketers of the year, Wisden selected five public school boys. In Whatever happened to? John Winn traced the careers of these five promising youngsters, headed by future England captain, Percy Chapman.
Returning to cricket statistics, an increasingly fertile area for cricket blogging, Playing for Lunch and Tea charts whether batsmen really do alter their play as intervals approach. Karthik’s (@karthiks) exemplary analyses – on this topic and many others – provide eye-catching proof that this truism is statistically supported.
The Club Cricket Development Network (@ClubDevel) is about the only thing I have ever found of value on LinkedIn. It shares experience and good practice on issues that affect club cricket and has taken a representative and lobbying role with the ECB. This piece – The strange death of English cricket – skewers the innumeracy that underlies the grand strategy for the future of the UK’s national summer game and exposes the agenda that makes a particular number attractive to those in authority.
I will end this round-up, in imitation of Wisden, with my nomination of the World’s Leading Cricket Blogger of the Year – aka the blogger whose output has given me most reading pleasure in these last twelve months. Limited Overs is the work of Matt Becker who, from his home in Minnesota, bridges the personal and the global meaning of cricket, with a tender mix of emotion, humour and sincerity. Catching up on Becker’s recent posts, I came across this – characteristic of much of his writing – and apt to use in conclusion of this piece, this year, this decade:
Cricket is an old game. And with that age comes ghosts. And with those ghosts comes weight, and a sense of belonging to something great. I am not sure what that something is. Whether it is time or history or God or the universe. But when we allow ourselves to feel cricket’s ghosts, that is when the game becomes more than a game, and then we have no choice but to keep coming back, to keep that wonderful sense of doleful joy alive in everything we see.
In the UK, you may be paying £100 or more to watch the England team play in a one day international (ODI). Based on the ODIs of recent years, you have around a one in ten chance of seeing a match with a tight, even thrilling climax. On the other hand, you are three times as likely to see one of the teams trot to a comfortable victory (margin of over 100 runs or with ten or more overs to spare).
The spectating experience depends on much more than whether the match delivers jeopardy to the very end. But the competitiveness of the format is topical and a feature that the game’s administrators appear to want to promote.
The evidence for my assertion that there is a one in ten chance of seeing a thrilling finish to an ODI can be reviewed in my post ‘Thrilling finishes and the 50 over game’. In this article, I extend that analysis by updating the sample to February 2019 and by reviewing the competitiveness of ODI series.
In the 12 months since my earlier post, there has been something of a revival of the tight ODI. Spectators in this period have had a one-in-five chance of seeing a game with a thrilling finale. The criteria I use for defining tight matches comprise: a tie; a victory batting first by fewer than 10 runs; if chasing, winning in the final over or with eight or nine wickets lost.
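Those criteria are mechanical enough to express as a small function. A sketch, with hypothetical field names for a match record (none of these names come from a real dataset):

```python
# Sketch of the 'tight finish' criteria described above. The parameters
# (result, runs margin, final-over flag, wickets lost) are hypothetical
# names chosen for illustration.
def is_tight(result, runs_margin=None, won_in_final_over=False, wickets_lost=None):
    """Apply the article's criteria for a tight ODI finish."""
    if result == "tie":
        return True
    if result == "batting_first_win":
        return runs_margin < 10            # single-figure victory margin
    if result == "chasing_win":
        # winning in the final over, or having lost eight or nine wickets
        return won_in_final_over or wickets_lost in (8, 9)
    return False

print(is_tight("tie"))                                  # True
print(is_tight("batting_first_win", runs_margin=25))    # False
print(is_tight("chasing_win", wickets_lost=9))          # True
```

Run over a season of scorecards, a classifier like this is all that is needed to reproduce the one-in-five figure quoted above.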
On the other hand, there has been no let-up in the incidence of crushing victories: 34% by margins of over 100 runs or with more than ten overs to spare.
One-sided or closely matched series?
This analysis is based on the 79 series of three or more ODIs played between two teams since the 2015 World Cup and completed by the end of January 2019 (note 1). It omits shorter bilateral encounters and tournaments involving three or more sides – all of which are included in the match analysis chart above.
The table below summarises the results by series duration. One-half of series remained undecided heading into the final match. Sample sizes are small, but shorter series (three matches) were more likely to deliver a final match with everything to play for.
The unwanted spawn of the uncompetitive series is the dead rubber. Matches that had no bearing on the series outcome occurred 44% of the time that they could potentially have happened. Of the 52 dead rubber matches that went ahead, eleven ended up as consolation victories for the series loser. Six of the games (11.5%) produced ‘tight’ finishes, but 22 (42.3%) were crushing victories. The value of these games, other than selling air-time and bringing international cricket to more towns and cities, is questionable.
A useful benchmark of competitiveness can be found in Test cricket. In Test series in the same period, 55% of the 36 series with three or more matches were wrapped up before the final game was played, creating dead rubber games. ODI series, therefore, have recently been more competitive than the Test match equivalent. Moreover, the Test match draw raises the probability of teams playing that format reaching the last game of a series with the result undecided.
Looking more broadly – at pure probability – gives further evidence that ODI series are not particularly uncompetitive. A ‘best of three’ coin toss would produce a definitive result with the first two tosses one-half of the time; five percentage points higher than that seen in three match ODI series.
The five (or more) match series presents a more mixed picture. A definitive series result was obtained from the first three games in over one-third (34%) of match-ups – compared to 25% in a ‘best of five’ coin toss. The seven clean sweeps (18%) represent three times the likelihood of five coin tosses ending all heads or all tails. Yet, 45% progressed to a fifth match decider, exceeding the expected 37.5% in the coin toss scenario.
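The coin-toss benchmarks quoted in the last two paragraphs follow from simple enumeration. A sketch, running through all thirty-two possible outcomes of a five-match series between perfectly matched sides:

```python
from itertools import product

# Enumerate all 2^5 result sequences between two evenly matched sides
# (the 'coin toss' series) and count how often the series is settled early.
decided_in_three = sweeps = fifth_game_decider = n = 0
for seq in product("AB", repeat=5):
    n += 1
    if len(set(seq[:3])) == 1:       # one side wins the first three games
        decided_in_three += 1
    if len(set(seq)) == 1:           # 5-0 clean sweep
        sweeps += 1
    if seq[:4].count("A") == 2:      # 2-2 after four: the final game decides
        fifth_game_decider += 1

print(decided_in_three / n)    # 0.25   - series over after three games
print(fifth_game_decider / n)  # 0.375  - everything riding on game five
print(sweeps / n)              # 0.0625 - whitewash
```

The observed figures – 34% decided in three, 18% swept, 45% going to a decider – sit either side of these benchmarks, which is the mixed picture described above.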
In conclusion, ODI series sustain interest to their conclusion relatively frequently. The problem the format faces perhaps isn’t uncompetitiveness, but inflexibility of scheduling. On the occasions that a series is decided early, the remaining fixtures have been booked with broadcasters and grounds, who have sold ad space and tickets. The show must go on, even if intensity and interest decline.
Note 1: 3 match series in which either the first or second scheduled match was abandoned or cancelled are excluded (ie considered as 2 match encounters). If the third match was a victim of the weather, the series is included in the analysis.
…Allan Donald standing, turning, dropping his bat, then running, but too, too late, as the celebrating Australians converge (Edgbaston, 1999)…
…Ajmal Shahzad clubs a first-ball six as England’s 8th wicket pair gather 13 runs from the final over to tie the game on the final ball (Bengaluru, 2011)…
…Grant Elliott launches Dale Steyn over long-on, over the boundary from the penultimate ball of the 2015 semi-final… (Auckland, 2015)
These are the dramatic conclusions to One Day Internationals (ODIs) that come to my mind when I think of what makes the 50 over format so exciting. Matches that have run for hours but are decided by a pressure-forced error or a single audacious act. Games after which all 22 players can look back and each reflect on just one thing that, done differently – pushed to turn a long single into a two; not bowled that wide; collected the ball cleanly on the boundary – might have made all the difference.
Major tournaments are felt to be smouldering, not truly catching alight, until they feature at least one of these thrilling finishes. TV stations shelving the next scheduled programme so they can stay with the action until the very end. Pictures of fans chewing fingernails, or covering their eyes from the spectacle that both holds them rapt and that they cannot bear to watch.
Thrilling finishes seem to be the essence of one day, limited overs cricket. Yet how representative are they of the format? How often is the team batting second still chasing in the final over, or with their lower order batsmen stretching for the target? To approach an answer to these questions, I have analysed results and victory margins for ODIs since the last World Cup (March 2015 – January 2018). To provide some context for that analysis I have completed similar reviews of national, list A competitions and two non-full member 50 over tournaments from this year.
To be engaging, ODI cricket doesn’t have to culminate in a final over where all three results are possible. 100 overs gives plenty of opportunity for fortune to swing back and forth, with the final decisive swing happening deep into the second innings and producing a convincing margin of victory, rather than a nail-biting conclusion. An individual innings or bowling spell may blow away the opposition, yet provide adequate reward for the spectator or viewer. But still the sport yearns for the crazed uncertainty of a match that hinges on cricket’s high pressure equivalent of the duel.
The ODI sample I have assessed comprises 312 completed matches: 157 won by the side batting first; 153 by the side chasing; and two ties. The tied matches (0.6% of the total) qualify automatically as thrilling finishes.
Looking at the matches won by the side batting first, 13 (8.2%) were won by single figure margins (fewer than 10 runs) and so were likely to be in the balance going into the final over. Another 12 had victory margins of between 10 and 19 runs and so delivered some degree of jeopardy for players and fans deep into the game.
The chart below shows the distribution of victory margins for sides batting first (one decile is 10% of the matches in this sample). Not only are tight finishes relatively rare, but substantial wins are the norm: the median victory is by 70 runs and almost 30% of matches are won by over 100 runs.
ODIs won by the team chasing were unresolved until the final over on 10 occasions (6.7% – excluding five matches decided by Duckworth Lewis when the side batting second was already ahead of the par figure when weather intervened). 27 (18.2%) reached the target in the penultimate over. Over half (14) of these had five wickets or more in hand, suggesting a well-calibrated chase rather than genuine uncertainty over the result.
The tactic of chasing teams pacing their innings according to the target set, rather than the optimum score they might achieve, can make victory margins based on balls remaining in the innings misleading. Nonetheless, the chart below, showing the distribution of balls remaining in matches won by chasing teams, again shows that convincing victories are far more common than thrilling conclusions. The median margin is almost five overs and more than 30% of these games are won with 10 or more overs to spare.
A chasing side, of course, risks losing a game by being bowled out. There were eight (5.4%) instances where the game was won by one or two wickets. Five of these are already recognised as tight finishes as they finished in the last or penultimate over. At the other extreme, 35 games were won with the loss of three or fewer wickets.
Of the 312 completed matches in the sample, 28 (9.0%) appear to have delivered a truly tight game to the end, giving about a one in eleven chance of seeing a thrilling finish. Those do not seem unreasonable odds of a game staying alive until its very last passage of play.
More concerning is that 30% of the sample produced games that were not just comfortable victories but, achieved by margins of over 100 runs or with more than 10 overs to spare, were veritable blow-outs. Excluding matches involving non-Test playing nations made little difference to the incidence of crushing defeats/victories.
International sport has in-built inequalities with the population size and wealth of countries acting as constraints on their performance. The same is less true (although it remains a feature) of domestic sport where counties, states, provinces and clubs are able to recruit to strengthen sides and players migrate to where there are better opportunities to play. List A (i.e. top level domestic 50 over competitions) matches, therefore, provide something of a control sample to test whether the frequency of one-sided ODIs is a function of the match format or of international competition.
I drew my sample of domestic 50 over matches from the most recently completed List A competitions in Australia, South Africa, Pakistan (2017/18), England (2017), India, New Zealand and West Indies (2016/17). The results of 315 completed matches were analysed.
Using the same criteria for a thrilling finish (victory margin: batting first < 10 runs; chasing in last over or by 1 or 2 wickets; or a tie), there were 51 (16.2%) games that stayed alive until the very end. With odds of a little over one in six, List A matches produced tight finishes nearly twice as frequently as ODIs.
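The classification used above reduces to a short rule. Here is a sketch in Python, with hypothetical match records; the field names are my own invention, not drawn from any real dataset:

```python
def is_thrilling(match):
    """Classify a completed 50-over match as a 'thrilling finish' using the
    criteria in the text: a tie; a win batting first by fewer than 10 runs;
    or a successful chase finishing in the last over or with 1-2 wickets in hand."""
    if match["result"] == "tie":
        return True
    if match["result"] == "bat_first_win":
        return match["run_margin"] < 10
    if match["result"] == "chase_win":
        return match["balls_remaining"] < 6 or match["wickets_in_hand"] <= 2
    return False

# Hypothetical examples
matches = [
    {"result": "bat_first_win", "run_margin": 7},
    {"result": "chase_win", "balls_remaining": 40, "wickets_in_hand": 5},
    {"result": "chase_win", "balls_remaining": 3, "wickets_in_hand": 4},
    {"result": "tie"},
]
print(sum(is_thrilling(m) for m in matches))  # 3 of these 4 qualify
```

Applied over a full results table, the proportion of `True` values gives the headline incidence figures quoted in this post.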
At the other extreme, trouncings were also rarer – but only slightly. 28.6% of the matches were won by 100 runs or more or with 10 or more overs to spare.
There were significant variations between the national competitions. England and New Zealand produced closer matches – shown below with the median margin of victory for each competition. The incidence of games curtailed by bad weather and decided on the Duckworth Lewis system may have played a part in creating closer finishes in those two countries.
Returning to international competition, two recent tournaments provided contrasting records for tightness of matches. At the 2018 under 19 World Cup, the median margins of victory were:
– batting first: 101 runs (ODI median: 70 runs)
– batting second: 63 balls, 7 wickets (ODI median: 29.5 balls, 6 wickets)
Only two of the 48 matches in the tournament (4%) met my criteria for a very close finish: batting first – victory by less than 10 runs; batting second – victory in final over or by two wickets or less.
Fans of thrilling finishes should pay attention to World Cricket League, Division 2. Six of the eighteen matches in the recent tournament qualified as very close finishes, with one team featuring in four of those games. On that basis, Nepal deserves to be the favourite team of every cricket fan who cherishes the tension of a 50 over game fulfilling its potential of going down to the wire.
As Karun Nair surged to a triple-hundred in his third Test innings, commentators dared the Indian selectors to drop the young batsman for the next Test, when the three more senior players, whose injuries had opened the way for his debut, would have returned to fitness. That’s a recurring selection dilemma – form versus seniority; promise against proven ability. Nair’s situation raises another dilemma, one that I find even more interesting, but suspect selectors do not.
Measurement of the impact of fielders has become topical. It has found official recognition in Cricket Australia’s publication of a fielding index. To appreciate the broader scope of the subject and its potential there’s no better source than Jarrod Kimber’s ESPNCricinfo post, Why doesn’t cricket have proper metrics for fielding?
Now that fielding performance is being subjected to more intense analytical scrutiny, it follows that its impact on batting and bowling performance also needs to be understood. This post presents some options for adjusting batting statistics to take account of certain aspects of fielding performance, drawing on data collected from the India v England series and reported in the Declaration Game post, A series of missed opportunities.
The end of series batting tables showed the dominance of Virat Kohli, the impact of Nair’s mammoth innings and the continuing prolific run-scoring of Root, Pujara, Bairstow and Vijay. But in a series of 49 dropped catches and missed stumpings, how dependent were the batsmen on the competence of the fielders? I have assessed the impact of missed chances on the output of the 18 players who scored more than 100 runs across the series.
42 missed chances were distributed across these 18 batsmen. No distinction is made between chances of different levels of difficulty. All chances that went to hand (or body), or flew between fielders stationed close together are counted, but not those that looped just out of reach or through areas where one might have expected captains to have placed fielders. Also excluded are missed run outs and missed opportunities relating to umpiring decisions or to the operation of the Decision Review System.
The bar to the far right of the chart represents Alastair Cook who benefited from the highest number of misses: six; Ben Stokes is one place to his left.
More interesting than a count of drops is how the missed chances affected batting performance. One measure of this is the number of runs a batsman would have scored had each innings ended when he gave his first chance. The full height of the column in the graph below shows the total number of runs scored by the player in the series. The filled blue part of the column shows the number of ‘chanceless’ runs accumulated by each batsman; runs scored after a missed chance are depicted by an unfilled (white) area.
On this measure, Root supplants Kohli as the most prolific batsman in the series, with the Indian captain falling to third place behind Pujara. Nair and Jennings have the highest proportion of their runs stripped away by this metric. At the other extreme, Rahul and Patel were unaffected, having not benefited from any missed chances.
With four comfortable victories, India’s batsmen had fewer innings than England’s. Standardisation can be achieved by converting the measure into a batting average – the ‘chanceless’ average – by dividing by the number of dismissals.
Patel, Kohli and Rahul are the three players who maintain ‘chanceless’ averages (orange columns) above 50. Kohli’s average falls by 43% when only chanceless runs are counted. But it is Nair who suffers the steepest drop: from a conventional average of 160 to just 17.
By including only the runs scored before giving the first chance of an innings, this measure has the drawback of giving no credit for runs that played a part in the match result. I have calculated a second alternative measure of batting performance: the batting average per chance (orange columns). Total runs scored are divided by dismissals plus missed chances.
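Both adjusted measures reduce to one-line formulas. A sketch, using illustrative figures in the spirit of the series data rather than any player’s actual record:

```python
def chanceless_average(chanceless_runs, dismissals):
    """Runs scored before the first missed chance of each innings,
    divided by dismissals."""
    return chanceless_runs / dismissals

def average_per_chance(total_runs, dismissals, missed_chances):
    """Total runs divided by (dismissals + missed chances), so every
    life a batsman is given counts like a dismissal."""
    return total_runs / (dismissals + missed_chances)

# Illustrative batsman: 320 runs, 2 dismissals, dropped twice,
# with only 34 runs scored before the first chance of each innings.
conventional = 320 / 2                # 160.0
print(chanceless_average(34, 2))      # 17.0
print(average_per_chance(320, 2, 2))  # 80.0
```

The design difference is clear from the formulas: the first measure discards everything after the first life, while the second keeps all runs but inflates the denominator.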
Kohli, the dominant figure of the series with the bat, returns to the top of the list, followed by Patel. Nair, showing how he made England pay for their errors in the 5th Test, rises to third-place. Cook is near the bottom of the list, having managed just 23 runs per chance.
Adjusting measures of batting performance in this way offers some insights: it shows how certain players’ success relied upon the opposition making fielding errors, while others enjoyed no good fortune of that kind at all, and some failed to capitalise on the luck that came their way. In this series, there is also a pronounced levelling out of individual batting performance when chances given are taken into account. The spread of averages in this sample narrowed from a factor of 11 separating top from bottom on the conventional measure to a factor of eight on the chanceless average. This type of analysis may, with a far larger sample, start to factor out elements of luck in batting performance measures.
A single Test series is, of course, far too small a sample for drawing statistically robust conclusions. Yet it is exactly the sample most pressing on the attentions of international team selectors, particularly when assessing the contribution of players new to the Test arena. My contention is that selectors and other observers are better served by a batting measure that attempts to control for the varied dose of luck experienced by players than by the conventional and crude batting average.
“In the Test match in Mumbai, there was a lot said about the fact we played four seamers and two spinners… [but] if we’d caught our catches, we wouldn’t have been talking about our combination; we’d have been talking about how we probably had a chance of winning a game of Test cricket. But consistently, we’ve missed chances – and you can’t afford to do that against the best teams in their home conditions.”
Paul Farbrace – Assistant Coach (speaking after 5th Test at Chennai)
The focus on England’s dropped catches in the series in India is understandable given that, in four of the five Tests, one or more of India’s first innings century makers was dropped early in their innings. Vijay, Kohli (twice), Jayant Yadav and Karun Nair accumulated a combined 649 runs from five innings after an initial escape. England committed seven drops in those five innings and a further eleven across the whole series. Understandable but, in the round, is it justified?
Using ESPNcricinfo’s ball-by-ball commentary, I have recorded each chance of a catch given during the series. I have included any chances that went to hand (or body) and those described as passing between two adjacent fielders. Excluded are balls that looped out of reach, or fell short of, fielders making reasonable attempts, as well as those that passed where one might have expected there to be a fielder, but there was not.
The raw results are shown in the table below. India committed 26 drops compared to England’s 18 and converted a lower proportion of chances into catches.
In October 2016, Charles Davis published in The Cricket Monthly a summary of the results of his analysis of almost 15 years of fielding errors in Tests – Tracking the Misses. Courtesy of Davis, it is possible to put into context the numbers from the India v England series (NB Davis included stumpings in his data, which I have not).
Davis found around 25% of opportunities were missed in the field – an average of seven per Test match. In this series, 31% were missed – 8.8 per Test. Both sides under-performed their recent records (England 24.8%; India 27.2%). This comparison does support the view that fielding errors were a feature of the series. But is it simply losers’ regret that has the England team pointing at missed opportunities? They did, after all, out-perform India in terms of the proportion of catches taken.
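The rates quoted here are simple ratios. A sketch; the 44 drops across five Tests are the combined series figures given above, while the split of misses against catches in the second example is purely illustrative:

```python
def miss_rate(missed, taken):
    """Proportion of chances that went down: missed / all chances."""
    return missed / (missed + taken)

def misses_per_test(missed, tests):
    """Average number of missed chances per Test."""
    return missed / tests

# Combined series figures from the post: 44 drops (26 + 18) across 5 Tests.
print(misses_per_test(44, 5))  # 8.8

# Davis's ~25% benchmark corresponds to, e.g., 7 misses against 21 catches.
print(miss_rate(7, 21))  # 0.25
```

A 31% miss rate on 44 drops also implies roughly 140 chances were created across the series in total.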
England, as hinted at above with the roll call of India’s century makers who were dropped, bore a higher average cost for the chances they missed. The mean number of runs scored by an Indian batsman after a drop was 44 (median 22). The equivalent figure for England was 28 (median 21) [footnote 1].
The contrast is most acute when looking at the two captains. Cook was dropped six times (the most of any player) but only added 134 runs. Kohli made 282 runs after the three misses he benefited from.
The two captains were also the most frequent offenders. Cook shelled four of his seven chances; Kohli could not hang onto five of his ten catches.
This analysis supports the conclusion that England, had they taken their opportunities, would have shifted somewhat the balance of the series. However, I believe there are associated conclusions that are probably more profound about the cricket England and India played.
India’s ability to limit the damage of their fielding errors was a great strength: their bowlers were able to continue to create opportunities. England’s bowlers, on the other hand, lacked the penetration to keep their opponents under the kind of pressure that would, sooner rather than later, lead to another wicket-taking opportunity. Moreover, England were significantly more reliant on their fielders for taking wickets. 72% of the wickets taken by England in the series were catches. India’s equivalent figure was almost twenty percentage points lower (53%). Ashwin and Jadeja, in particular, threatened the England batsmen’s stumps to an extent unmatched by the England attack.
The argument that England’s fortunes were hampered by their inability to take the catching chances that came their way obscures the greater insight that England were over-reliant on snatching any opportunities falling to their fielders because they were unable to trouble India’s batsmen often enough and in a sufficient variety of ways.
Footnote 1 – in calculating the number of runs scored by a batsman after a drop, I have subtracted the score when dropped from either their innings-end score or, in the case of batsmen dropped more than once in a single innings, from their score when they were dropped again.
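The footnote’s calculation can be sketched as a small function; the example innings is hypothetical:

```python
def runs_after_drops(drop_scores, final_score):
    """Runs a batsman added after each drop in one innings, per the footnote:
    subtract the score when dropped from the score at the next drop, or from
    the innings-end score for the last drop of the innings."""
    scores = sorted(drop_scores) + [final_score]
    return [scores[i + 1] - scores[i] for i in range(len(drop_scores))]

# Hypothetical innings: dropped on 5 and again on 60, out for 82.
print(runs_after_drops([5, 60], 82))  # [55, 22]
```

Summing the per-drop figures for each side gives the mean and median costs quoted above.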
Jonathan Trott and Ian Bell wore down the Indian bowling in a partnership lasting nearly 80 overs on days four and five of the final Test at Nagpur in December 2012. The Warwickshire pair’s efforts were instrumental in defending England’s 2-1 series lead, recognised at the time as a great achievement and one that has not diminished since. Not only was it the last time India have lost a home series, but the Nagpur Test was the last time India have failed to win a home Test (other than a match reduced to less than two days play).
Since England’s visit, India have won 12 of the 13 Tests they have hosted. During this time, India have:
- never conceded a first innings deficit. India’s average first innings lead has been 157.
- won five games in less than three days play. Matches have lasted an average of 316 overs.
- won four matches while losing ten or fewer wickets. On average India have lost 14 wickets per victory.
- dismissed the opposition for under 200 thirteen times. They have conceded 300 or more only twice.
- recorded 14 individual hundreds and conceded just one (Michael Clarke).
- taken 19 individual innings hauls of five or more wickets and been on the other end of seven.
Home advantage has rarely been as telling in Test cricket as in the 2010s. But none of the other highly ranked Test nations have a home record as compelling as India’s since 2013:
- Australia: won 12, drawn 4, lost 0
- England: won 16, drawn 5, lost 7
- South Africa: won 11, drawn 4, lost 4
- Pakistan (in UAE): won 9, drawn 3, lost 4
- Sri Lanka: won 10, drawn 2, lost 5
India’s record as a host is even stronger than those of the West Indies in the 1980s and Australia in the 2000s – albeit over a shorter period than the peaks of those two dominant sides.
The source of that supremacy is readily apparent from a tabulation of aggregate bowling figures. India’s spin bowlers have taken almost twice as many wickets as their opposition’s, at less than half the average and more than one run per over more economically. The home team’s pace bowlers are also the more effective.
Five spinners have played for India in these series, but two players dominate: Ravi Ashwin (99 wickets at 16.56) and Ravindra Jadeja (61 wickets at 16.47).
Looked at from the perspective of the batting (top 7 in the order), this picture, of course, persists: almost twice the batting average at a scoring rate faster by 25%. Cheteshwar Pujara (1124 at 62.44), Murali Vijay (895 at 42.61) and Virat Kohli (853 at 44.89) are the heaviest scorers. Ashwin and Jadeja have each contributed over 300 runs as well.
[Table: Batting (top 7) – Runs, Average, Strike rate, 100s, 50s]
To understand the causes of this run of home dominance it needs first to be acknowledged that it has come at the expense of four countries for whom the sub-continent conditions are particularly challenging: West Indies, Australia, South Africa and New Zealand. It is seven years since India hosted any of its neighbouring Asian nations for a Test series – Sri Lanka in 2009/10. Pakistan last visited nine years ago and Bangladesh, of course, have not yet had the honour.
Yet, India is no longer an exotic final frontier for the cricketers of non-Asian countries. There is now an annual migration in April. The format (T20) is different, but the climate, the pitches and the players are all made familiar. That familiarity has not, though, carried through into Test performances in the country. AB de Villiers (258 at 36.85), David Warner (195 at 24.37), Kane Williamson (135 at 33.75), Shane Watson (99 at 16) and Chris Gayle (100 at 25) are some of the highest profile IPL contract-holders who have under-achieved as batsmen in Tests in India since 2012.
India’s method of success, more often than not in this period, has been to choke their visitors on dry, dusty pitches favourable to spin bowlers. Slow bowling, the country’s traditional strength, has brought it unprecedented home success recently. To appreciate the change that has occurred, it is helpful to revisit where this post began – at Nagpur in December 2012. There, on a slow, dead pitch that grew gradually more worn over the five days, England secured a draw and the series victory. Three years later, South Africa played on the same ground. The match was over on the third day; 33 of the 40 wickets to fall were to spinners. India bowled only 17 overs of pace, without picking up a wicket.
The majority of pitches prepared for Tests in the period under review have been amenable to spin from the first day. In the case of Nagpur, a hot and dry location, this has produced far more compelling Test cricket than the alternative: the flat, unyielding and slow surface of 2012, on which England batted for longer than the whole of the 2015 South Africa Test lasted. (Note 1)
Looking ahead to the England series, the local climate can be expected to deliver arid conditions for the first, third and fourth Tests (although October was wetter than normal in Gujarat, the state hosting the first Test). The visitors may prefer the option of a dead pitch on which they can dig in and force a draw, particularly for the first Test. It would be understandable and preferable from the neutral’s standpoint if the pitch preparation led to Ashwin taking the new ball and igniting puffs of dust early in the game. Rajkot, Mohali and Mumbai all appear to have the dry and hot weather that readily creates pitches on which this Indian team has been impregnable.
Average monthly rainfall

| Test | City | Date | Oct rain | Oct days of rain | Nov rain | Nov days of rain | Dec rain | Dec days of rain |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Rajkot | 9-13 Nov | 19 mm | 1 | 6 mm | 1 | | |
| 2 | Visak’nam | 17-21 Nov | 258 mm | 8 | 115 mm | 3 | | |
| 4 | Mumbai | 8-12 Dec | 56 mm | 3 | 17 mm | 1 | 5 mm | 1 |
| 5 | Chennai | 16-20 Dec | 279 mm | 11 | 407 mm | 12 | 191 mm | 6 |
But the Indian sub-continent encompasses a wide range of climatic types. Average monthly rainfall in Visakhapatnam (2nd Test) and Chennai (5th Test) in the build-up to, and during their matches, is significantly greater than the summer rainfall in the damp north-west of England. The pitches, barring sustained and significant effort from the ground staff, will inevitably be moister and more friendly to seam bowling at those grounds (assuming the weather is in line with norms). We will get a feel for the extent to which the groundsmen in the country are willing, or required, to bend nature to the demands of India’s continued impregnability when the series reaches these two centres.
Test cricket benefits from a strong and interested Indian Test team. The sport also gains from fast-moving, exciting matches. I hope, though, that the pitches played on in this and future series reflect the diversity of India’s environment. And, even if England cannot breach India’s impregnability, stiffer challenges may come in the next 15 months with planned visits from Bangladesh, Pakistan and Sri Lanka.
Note 1: Thank you to Nakul Pande for this observation, via Twitter, about Nagpur 2012 v Nagpur 2015.
Sarfaraz Khan, Gidron Pope, Alzarri Joseph, Avesh Khan, Jack Burnham. Names that have earned recognition for performances at the Under 19 World Cup this month. But will they, and their peers at this tournament, be the successors to Brendon McCullum, Mitch Johnson, MS Dhoni and Kumar Sangakkara in the wider consciousness of world cricket?
An analysis of previous Under 19 World Cup participants will not tell us specifically whether, say, Keemo Paul will become better known for his exploits as a senior than junior international cricketer. It will, though, cast some light on the development of international cricketers.
For every member of a full nation squad at the Under 19 World Cups of 2000, 2002, 2004 and 2006, I have recorded the highest level of senior cricket attained in their career. The ten years that have elapsed since the most recent of those tournaments make it unlikely that any of the 555 players will reach a new peak. Unlikely, but not impossible: Stephen Cook, a graduate of the 2002 tournament, made his Test debut this year.
Four levels of senior cricket have been identified, in ascending order: i) professional limited overs (List A or national T20 tournament), ii) first class, iii) international limited overs (ODI or T20) and iv) Test. With very few exceptions, this grading represents progression through a player’s career – i.e. he will also have played the forms of cricket below the highest level I have recorded for him.
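That hierarchy amounts to taking a maximum over an ordered scale. A sketch; the level labels are my own shorthand for the four categories:

```python
# The four-level grading from the text, in ascending order.
LEVELS = [
    "professional limited overs",
    "first class",
    "international limited overs",
    "Test",
]

def highest_level(levels_played):
    """Return the highest of the four levels a player appeared at,
    or None if he played no professional senior cricket."""
    ranks = [LEVELS.index(lvl) for lvl in levels_played if lvl in LEVELS]
    return LEVELS[max(ranks)] if ranks else None

print(highest_level(["first class", "professional limited overs"]))  # first class
```

Grouping the 555 players by this function’s output produces the summary analysis that follows.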
Within each level, there is a broad range of attainment, measured by appearances. For example, from the 2000 tournament, grouped together at the first class level are Mark Wallace (England) with 249 appearances and Gareth Irwin (New Zealand) who played a single first class match in 2002/03. (Irwin is one of the exceptions to my hierarchy, as he did not appear in professional limited overs fixtures.) It might be fairer, therefore, to think of each group as containing players who have passed a common threshold, rather than attaining the same level.
The summary analysis of the 555 players shows that 45% have gone on to play international cricket (not all with the nation they represented at the Under 19 age group). 5% have not played any professional senior cricket.
I would have hypothesised that the conversion rate of under 19 internationals to senior internationals increased over this period, reflecting the more structured approach taken towards the development of youth cricketers. The results don’t support that hypothesis: the proportion of under 19 players going on to play international cricket has fluctuated: 2000 – 48%; 2002 – 40%; 2004 – 46%; 2006 – 42%.
There are some stark country-by-country differences. The youngsters of Bangladesh and Zimbabwe have had a higher likelihood of becoming full internationals – two-thirds in the latter case – perhaps reflecting that selection in those countries is from a smaller pool of players. On the other hand, barely one-quarter of those who have appeared at under 19 World Cups for England have played for the senior team. Unsurprisingly, England, with its 18 first class counties, has had no players fail to reach the senior professional game – nor did Pakistan, South Africa and India.
I also looked at whether performance at the under 19 World Cup was a good predictor of future prospects by narrowing the analysis to the top run scorer and wicket taker for each of the ten nations at the four tournaments. 50 of the 81 players in this sample (64%) have played senior international cricket, compared to 55% of the total population, which is less of an increase than I would have expected. The outlier is New Zealand’s Jonathan McNamee, who was their top scorer at the 2000 tournament, but has no senior professional record.
At the team level, success in the under 19 tournament has not been associated with having teams chock-full of future international cricketers. Looking at the eight finalists in these four tournaments, 43% (Test: 31%; limited overs: 12%) of their squad members went on to play senior international cricket, compared with 45% (35%; 10%) of the total.
I was also interested in understanding the proportion of players who reach Test level who have been participants at the junior World Cup. My method provides an estimate, not a precise figure. I extracted the number of Test debutants for each nation in the period 2002-2012. The chart below shows the number of players in the four under 19 World Cups who went on to play Test cricket and the proportion they are of the total debutants in the eleven year period. It provides a rough, rather than definitive, picture as some participants in those four tournaments had debuts before and after the eleven year period; and some players from the 2008 and 2010 tournaments probably had debuts during the period.
Approximate though this analysis is, it does show that England and Zimbabwe are outliers. Around half the Test debutants from the other eight nations had played in the four under 19 World Cups. For England, that figure was below one-quarter. At the other extreme, those players accounted for over 90% of Zimbabwe’s Test debutants.
There are positive and negative connotations to these two extremes. England’s position could be evidence that it performs poorly at identifying future talent, or that its junior cricketers mature at a later age. It could be a strength that international selection remains open to players emerging from outside the elite juniors. England may have the resources to invest in a broader base of juniors, making precise selection at 19 difficult. Experience of international cricket as a teenager may even be damaging for English juniors, or their development may be interrupted by injury. The opposite of each of these arguments can be made for Zimbabwe. The data cannot help us with this key point. I would be interested in the views of readers.
In conclusion, the data analysis shows:
- Unless from England or India, an Under 19 World Cup participant has close to, or better than, an evens chance of playing senior international cricket.
- The first class game should definitely be within reach – if not already attained.
- Having a strong tournament (relative to your teammates), desirable in its own right, boosts by a modest amount a player’s likelihood of moving onto senior international cricket.
- At Test level, there is a heavy dependence on Under 19 World Cup graduates, with around one-half of the debutants in the years following tournaments having participated in the junior World Cup.
- England and Zimbabwe are, respectively, less and more likely to choose Test debutants from Under 19 World Cup players.
It seems uncontroversial to state that batsmen are more likely to be dismissed immediately after an interval than when they have settled back into the new session. But similarly well-worn aphorisms – the nervous nineties and batsmen tending to fall soon after sharing a sizeable partnership – have been shown not to stand up to statistical scrutiny. This post, therefore, attempts to apply numerical analysis to the received wisdom that in Test matches batsmen are more vulnerable immediately after resuming play.
Before introducing the numbers, it’s worth reflecting why this common understanding is so readily accepted by cricket followers. I think there are two mutually reinforcing factors at play, each of which could be supported by associated statistical evidence.
The first factor is that batsmen are at their most vulnerable early in an innings. Owen Benton, in his post ‘When is a batsman “in”?’, demonstrated that the likelihood of a Test opening or middle order batsman falling before scoring another five runs is at its highest when he is on a score below five. It can be argued that this early-innings fallibility revisits the batsman in the analogous position of re-starting an innings after a break in play.
The second factor is that it is accepted good tactical practice for the fielding side to start the session with its most potent bowlers. While there are no statistics to hand to demonstrate that this tactic is actually applied regularly, nor that those bowlers are more threatening immediately after a break, it would be straightforward to compare the career strike rates of the bowlers opening after the resumption against other bowlers used in that innings.
To test the proposition that wickets fall more frequently after a break in play, I selected a random sample of Test matches played since May 2006 (the date from which cricinfo.com scorecards recorded the score at every break in play). Details on the sampling method are provided at the foot of this post.
From the sample of 20 Tests, I noted the incidence of wickets falling in the three overs following (and prior to) 436 breaks in innings, including lunch, tea, drinks breaks, close of play and weather interruptions. Excluded from this figure are any breaks which coincided with the start of a team’s innings.
All results are strike rates expressed as wickets per over. In the period 2006-2015, wickets fell on average in Test cricket at 0.08 per over. As the chart below depicts, there was a 50% increase in the strike rate in the first over after a break in play (0.125). This effect wore off rapidly, so that the second over after the resumption saw a strike rate (0.090) that was barely above the period average and equivalent to the sample average (0.091).
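The strike rates quoted are simple wickets-per-over counts across the sample of breaks. A sketch of the tabulation, using a hypothetical mini-sample of eight breaks rather than the 436 actually analysed:

```python
from collections import defaultdict

def strike_rate_by_over(breaks):
    """Wickets per over in the 1st, 2nd and 3rd overs after each break.
    `breaks` is a list of tuples of wickets falling in those three overs,
    e.g. (1, 0, 0) for a break followed by a wicket in the first over only."""
    totals = defaultdict(int)
    for wkts in breaks:
        for over_no, w in enumerate(wkts, start=1):
            totals[over_no] += w
    n = len(breaks)
    return {over_no: totals[over_no] / n for over_no in totals}

# Hypothetical mini-sample of 8 breaks
sample = [(1, 0, 0), (0, 0, 0), (0, 1, 0), (1, 0, 0),
          (0, 0, 0), (0, 0, 1), (0, 0, 0), (0, 1, 0)]
print(strike_rate_by_over(sample))  # {1: 0.25, 2: 0.25, 3: 0.125}
```

Comparing the first over’s rate against the period-wide 0.08 wickets per over is what produces the 50% uplift reported above.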
The result for the 1st over after a break in play is statistically significant. The sample size doesn’t enable the analysis by type of break in play to be anything other than indicative, but is presented below for interest – based upon the first three overs of the restart.
Weather breaks appear to be the most damaging to a batsman’s prospects, but the 20 Test sample only featured 11 weather breaks. There does not appear to be any relationship to the duration of the break. For example, the overnight break was associated with a lower strike rate than the brief evening drinks break.
The sample results do seem to bear out the received wisdom that batsmen are vulnerable immediately following a break in play. However, the brevity of the impact – a single over – doesn’t strongly support the two explanations offered above.
If batsmen find a new session is like starting a new innings, then the effect would be visible in the second over, as six deliveries is unlikely to be sufficient for both batsmen to pass this phase.
If the phenomenon were caused by the more potent (and refreshed) bowlers, it too would be discernible in the second and third overs of the new session (the second bowled by the other fresh strike bowler).
There remains an explanation, and it’s a prosaic one, often reached for by commentators seeing a batsman fall soon after a break. It may simply be that the batsman’s concentration has been interrupted and not sufficiently refocused for the first over of the restart. There’s a message here for players (prepare psychologically for the new session) and for spectators (don’t dither; get back to your seat for the restart).
383 Test matches were played in the period from May 2006. Based on an estimated 9,000 breaks in play, with an expected strike rate per 18 deliveries of 0.3, 478 breaks in play were needed to give a result with a margin of error of 0.04 at a 95% confidence level. Excel’s random integer function was used to pick numbers between the Test match references of the first (1802) and last (2181) in the sample period. It is worth noting that the random sample was based on Test matches, not breaks in play.
Using the number of relevant breaks in play observed in the 20 Test sample, a lower estimate of the total number of breaks in the population of Tests was calculated: 8,600. The adjusted sample size was 417, fewer than the 436 breaks on which data was collected.
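The 478 figure can be reproduced with the standard sample-size formula for a proportion, applying a finite population correction – a sketch, assuming p = 0.3, a margin of error of 0.04 and a normal approximation (z = 1.96 at 95% confidence):

```python
import math

def sample_size(p, e, N, z=1.96):
    """Sample size for estimating a proportion p with margin of error e,
    with finite population correction for a population of N breaks in play."""
    n0 = z**2 * p * (1 - p) / e**2          # uncorrected sample size
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # finite population correction

# Figures from the note above: p = 0.3, e = 0.04, N = 9,000 estimated breaks.
print(sample_size(0.3, 0.04, 9000))  # 478
```

The uncorrected figure is about 504; the correction for a population of 9,000 breaks brings it down to the 478 quoted.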