
Masterpoint Notes

by Matthew Kidd

A small blog on masterpoint issues.

First 1000 masterpoints revisited (2013 data)

February 14, 2014

Three years ago I examined how many masterpoints players gain on average in a year based on how many masterpoints they have at the start of the year. The most interesting result was a steepening of the curve starting at roughly 1000 MP, suggestive of a qualitative difference between players above and below this cutoff. I redid the calculation for the 2012 masterpoint data. Here are the results for 2013.

The 2013 data is the first truly clean dataset. In previous years there were small issues with missing data, which I corrected for in ways that I believe had very little impact on the overall results. Nonetheless, I’m pleased to present clean data.

As with the 2012 data, and as explained there, the curves are now accurately labeled “Win MPs…”: if a player does not win any masterpoints in an event, the ACBL has no record that the player played in it, and by extension, if a player does not win any masterpoints in a month, the ACBL has no record that the player played at all that month.

[Figure: average masterpoints gained in 2013, binned by masterpoints at start of the year]
[Figure: number of months in 2013 that active players won masterpoints]

There was an increase of roughly 6,500 active players, up to 149,000 in 2013 from 142,500 in 2012. This seems to reflect a real change, since the calculation methods used for the 2013 data are the same as those for the 2012 data. Both years use the straightforward method of checking whether the masterpoint total for each player increased in each month to determine activity in that month.
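For anyone who wants to reproduce the activity test, here is a minimal sketch in Python. The data layout is hypothetical; it assumes each player’s monthly masterpoint totals have already been collected into a list of snapshots (13 snapshots bracket the 12 months of a year).

```python
# Minimal sketch of the activity test described above. The data layout is
# hypothetical: 13 monthly snapshots of a player's masterpoint total
# bracket the 12 months of the year.

def months_active(monthly_totals):
    """Count the months in which the masterpoint total increased, i.e.
    the months for which the ACBL recorded masterpoints won."""
    return sum(1 for prev, curr in zip(monthly_totals, monthly_totals[1:])
               if curr > prev)

# Example: a player who won masterpoints in 3 of 12 months.
totals = [100.0, 100.0, 102.5, 102.5, 102.5, 104.0, 104.0,
          104.0, 104.0, 104.0, 107.2, 107.2, 107.2]
print(months_active(totals))  # -> 3
```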

The steepening of the curves around 1000 MP remains robust. The fits are very slightly revised:

expected mp earnings = exp( ln(mp) × 0.1966 + 3.0851 )     for players with 10-1000 MP
expected mp earnings = exp( ln(mp) × 0.7800 - 1.0567 )      for players with 1000+ MP

Here mp is the number of masterpoints at the start of a year, ln is the natural logarithm, and exp is the natural exponent.
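For convenience, the two fits transcribe directly into code. The function below is just that transcription; nothing beyond the coefficients above is assumed.

```python
import math

def expected_mp_earnings_2013(mp):
    """Expected masterpoints won in 2013 given the holding at the start
    of the year, per the two log-log fits above (valid for 10+ MP)."""
    if mp < 10:
        raise ValueError("the fits cover players with 10+ MP")
    if mp <= 1000:
        return math.exp(0.1966 * math.log(mp) + 3.0851)
    return math.exp(0.7800 * math.log(mp) - 1.0567)

for mp in (10, 100, 1000, 10000):
    print(mp, round(expected_mp_earnings_2013(mp), 1))
```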

The slopes in particular are very similar to the 0.1993 and 0.7974 calculated from the 2012 data. This makes sense. No big masterpoint changes, for example further curtailment of “triple point” charity games, have been enacted since the start of 2012.

District 22 players continue to track the national average. Mike Passell has now joined Jeff Meckstroth in the “last bin” on the plot, which runs from ~68,000 to 100,000 MP.

Analysis details: The analysis period was from masterpoints recorded at the start of January 2013 to those recorded at the start of January 2014.

Download plot data as tab-delimited text.
Download plot data in Excel format.

Masterpoints awarded in each month of 2013

The plot below shows the total number of masterpoints awarded in each month of 2013. The summer months of July and August are very strong, even stronger than in 2012. September continues to be a low point.

By the way, this is the first truly clean set of data. In 2012 I was missing the February masterpoint file and so had to split the January-to-March masterpoint increase in half as an estimate for both January and February. As it turns out, that might not have been so bad: the 2013 data suggest that masterpoints in January and February are earned at almost exactly the same rate per day once the different number of days in each month is factored in. For the 2009-2010 dataset, a similar kludge was required for October and November of 2009.

[Figure: total masterpoints awarded by month in 2013]

Plus ça change, plus c’est la même chose

“If spending big sums of money—and, incidentally, keeping up with Goren—is not necessarily a great American game, keeping up with the Joneses is, and it is on this level particularly that duplicate bridge has boomed. First, the holder of a master point automatically qualifies as a figure of awe in a neighborhood bridge game. He can and will join such a game with feigned condescension, acting like Sam Snead entering a Flag Day tournament at Happy Knoll. Once playing, he will be allowed to explain with cool erudition his own tactics to his rapt audience, and to tut-tut at the mistakes they have made. He will have, in short, a glorious chance to show off.”

This is from a 1961 Sports Illustrated article titled Every Man A Bridge Master. It’s a good read, though you’ll probably need to zoom to read it.

First 1000 masterpoints revisited (2012 data)

September 17, 2013

Two years ago I examined how many masterpoints players gain on average in a year based on how many masterpoints they have at the start of the year. The most interesting result was a steepening of the curve starting at roughly 1000 MP, suggestive of a qualitative difference between players above and below this cutoff. Since then there have been changes to the masterpoint allocation, notably a curtailment of the triple point charity games. Also, my technical understanding of the input data has changed slightly. Below are the results for the 2012 masterpoint data.

One important difference is that the curves are now labeled “Win MPs…” in a certain number of months of the year rather than “Played” (at least once) in a certain number of months. This labeling is more accurate: if a player does not win any masterpoints in an event, the ACBL has no record that the player played in it, and by extension, if a player does not win any masterpoints in a month, the ACBL has no record that the player played at all that month. The evidence for this assertion is detailed in Masterpoint Reports Decoded.

[Figure: average masterpoints gained in 2012, binned by masterpoints at start of the year]
[Figure: number of months in 2012 that active players won masterpoints]

The other major change is an increase of roughly 14,000 players, far beyond the growth in membership over a two year period. The increase reflects a change in how a player is determined to have won masterpoints in a given month. The new analysis uses the straightforward method of checking whether the masterpoint total for each player increased in each month. The previous analysis relied on the “last active” field in the monthly masterpoint report being current, which is presumably indicative of masterpoints being won in the preceding month. These two methods largely agree but differ enough to cause concern. My guess is that most of the difference results from the reporting of masterpoints won online, such that the new analysis more accurately reflects the activity of players who primarily play online. A smaller part of the discrepancy may result from late reporting of masterpoints. For example, if a club misses the October submission cutoff, the increase from September masterpoints will show up in the October totals but the last active date might remain in September, causing the two methods to disagree. Players who get behind on paying ACBL dues might also cause discrepancies.

The steepening of the curves around 1000 MP remains robust. The fits are slightly revised:

expected mp earnings = exp( ln(mp) × 0.1993 + 3.0351 )     for players with 10-1000 MP
expected mp earnings = exp( ln(mp) × 0.7974 - 1.1993 )      for players with 1000+ MP

Here mp is the number of masterpoints at the start of a year, ln is the natural logarithm, and exp is the natural exponent.

District 22 players continue to track the national average. Jeff Meckstroth has moved into the “last bin” on the plot which runs from ~68,000 to 100,000 MP.

The need to estimate playing frequency with a metric correlated with a separate factor, player skill, is problematic but unavoidable. The essence of the problem is this: how do we know whether a player who won masterpoints in eleven of twelve months played during the twelfth month but didn’t win any masterpoints, or simply didn’t play at all? For many players this is barely an issue. The depth of ACBL awards is 40% of the field, and multiple flights mean that a typical game pays masterpoints, however few, to about half the field. So a player who plays once a week and is average for the field has only a ½ × ½ × ½ × ½ ≈ 6% chance of not winning any masterpoints during the month. Players tend to gravitate to a game they can win rather than playing in a strong game to improve their skills, so this is a realistic scenario. However, a player who scratches only a third of the time has almost a 20% chance of not winning any masterpoints when playing once per week. And players who play less frequently than once per week could easily play during a month and not win any masterpoints.
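The arithmetic above generalizes to any scratch rate and frequency of play, assuming independent sessions:

```python
def p_no_masterpoints(p_scratch, sessions_per_month):
    """Chance of winning zero masterpoints in a month, assuming
    independent sessions with a fixed per-session scratch probability."""
    return (1.0 - p_scratch) ** sessions_per_month

print(p_no_masterpoints(0.5, 4))    # 0.0625 (~6%): average player, weekly
print(p_no_masterpoints(1 / 3, 4))  # ~0.198 (~20%): scratches a third of the time
print(p_no_masterpoints(0.5, 2))    # 0.25: average player, twice a month
```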

There is no way to answer the question of didn’t-win versus didn’t-play in a given month for any one player. The best one could hope for is to statistically assign some fraction of the 8-11 month players to the every-month curve and some fraction of the 1-7 months per year players to the 8-11 months and every-month curves. I’ve tried a number of approaches to do this and none has proven mathematically convincing. My tentative conclusion is that the better part of the 8-11 month players are actually playing every month; however, I still expect the average number of games they play per month is much lower than for the group that wins masterpoints every month. As for the 1-7 month group, I think most of them are not playing bridge at all during one or more months.

Analysis details: The analysis period was from masterpoints recorded at the start of January 2012 to those recorded at the start of January 2013. Because I was missing the February 2012 masterpoint report, the number of months in which a player won masterpoints is actually based on winning masterpoints during December 2011, during January or February 2012, and during each of the ten months from March 2012 to December 2012. This detail should have negligible impact on the results. Although December 2011 activity is used in the estimation of the frequency of play, masterpoints won in that month are not counted in the total that each player won in 2012.

The “last activity” date for each player in the monthly masterpoint report file is a bit confusing. For example, if a player wins masterpoints in August 2012, the report might not be submitted until after the September 7th deadline for the September masterpoint report, in which case the points will appear in the October 2012 masterpoint report and the “last activity” date will be set to November 2012. It’s a bit like magazine publishing, where the publication date is often ahead of the date on which one receives the issue. Nonetheless, it is easy for computer code to compensate for this behavior so that the player is known to have won masterpoints in August. But even with this correction, there are some discrepancies, as noted above.

Download plot data as tab-delimited text.
Download plot data in Excel format.

Masterpoints awarded in each month of 2012

The plot below shows the total number of masterpoints awarded in each month of 2012. The summer months of July and August are quite strong. The dip in September is notable and does seem to confirm the empirical observation of directors that clubs struggle in the fall. Still, it is only a one-month dip. My guess is that bridge players take their vacations in September; it’s a great month to travel if you’re retired and can pick any time. The September dip is broad based, cutting across all masterpoint bins and frequency-of-play cuts (plot not shown).

With more than a year of masterpoint data since the curtailment of triple point charity games took effect on July 1, 2010, we can take a good look at the effect of that decision on masterpoint awards. Before the curtailment, the ACBL was awarding an average of 613,400 MP per month. In 2012, the ACBL awarded an average of 561,800 MP per month, a drop of 8.4%. Membership is up very slightly, such that the drop in the MP award per player is closer to 10%.

[Figure: total masterpoints awarded by month in 2012]

Basing masterpoint awards on the strength of the field

On page 27 of the January 2011 Bulletin, the new ACBL president Craig Robinson says his plans include working on changes to the ACBL masterpoint plan.

The latter [changes to the masterpoint plan] is undertaken every five years, and Robinson says this time around, there will be an emphasis on “measuring the strength of field” to help determine masterpoint awards. “We are working on how to do that fairly,” he says.

Currently, the strength of the field is considered in only the crudest possible manner, e.g. awards in a BCD event are less than those in an AX event (with the number of tables also factored in). I think Mr. Robinson is moving in the right direction. Even a simple adjustment based on the geometric mean of the masterpoint holdings of the players in a field would be a big step forward.

The first 1000 masterpoints are the hardest


It is no surprise that on average the more masterpoints you have, the more you gain per year. But what do the actual numbers look like and is there more to the story? The first figure below shows the average number of masterpoints gained for players grouped in each of 30 logarithmically spaced bins, six per decade. No one has yet crossed into the final bin starting around ~68,000, though Jeff Meckstroth and Mike Passell are very close. The first bin contains all players who started the year with less than 1 MP.

In order to get a handle on the impact of playing frequency, the data set is split into three components: players who played at least once every month (blue), players who played in 8-11 months (green), and players who played in 1-7 months (red). Players who did not play during the last year, i.e. were not active, were excluded from the analysis. The lower stacked bar chart shows the number of players in each category in each bin. The ~35,000 blue players are the true “regulars.” Error bars are included for the regulars; they represent the error on the mean, i.e. the standard deviation of the bin divided by sqrt(N-1).

The most striking feature of this plot is the sudden steepening of the curve around 1000 masterpoints, highlighted in gold. This steepening suggests a qualitative difference in the players above that level. My guess is that after ~1000 masterpoints one has truly mastered the fundamentals of the game. Alternatively, it could mean these players are fiends who play nearly every day of the week and attend many tournaments. But I do not give as much weight to the second explanation because the steepening of the curve happens at ~1000 MP rather than, say, ~10,000 MP; while 1000+ MP players are certainly enthusiastic, they are by no means largely pros. Approximately 600 players have more than 10,000 MP; ~32,000 have more than 1000 MP and ~12,000 have more than 2000 MP. Taking the conservative cut of 2000 MP, there are perhaps 20 potential superstars (12,000 / 600) for each actual superstar, probably limited more by desire and other considerations than by raw talent.

Ah, but to get to 1000 MP, there’s the rub! The first thousand are the hardest; the 18% of active players with 1000+ MP are swallowing up 48% of the masterpoints. How many masterpoints should you expect in a year? That depends on your frequency of play. If you are a regular player starting with fewer than 10 MP, you should expect to earn about 40 MP. If you have between 10 and 1000 MP (silver highlight) you should expect between 40 and 95 MP. It is possible to be more precise by fitting a line to each region. However, a linear fit on a log-log plot is actually a power-law fit. This leads to the two formulas below for the expected earnings per year, where mp is the number of masterpoints at the start of a year, ln is the natural logarithm, and exp is the natural exponent.

expected mp earnings = exp( ln(mp) × 0.1919 + 3.2170 )     for players with 10-1000 MP
expected mp earnings = exp( ln(mp) × 0.7223 - 0.4755 )      for players with 1000+ MP
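Recovering coefficients of this form is a one-liner once the data is in log space, since a straight-line fit on a log-log plot is a power-law fit. The bin values below were generated from the 10-1000 MP fit above purely to demonstrate the recovery; they are not the real bin means.

```python
import numpy as np

mp = np.array([10.0, 32.0, 100.0, 320.0, 1000.0])
gain = np.array([38.8, 48.5, 60.4, 75.5, 93.9])  # from the 10-1000 MP fit

# Straight-line fit in log-log space, i.e. a power-law fit.
slope, intercept = np.polyfit(np.log(mp), np.log(gain), deg=1)
print(round(slope, 4), round(intercept, 4))  # ~0.1919, ~3.2170

# Equivalently: expected gain = exp(intercept) * mp ** slope
```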

Are District 22 players different? Not really. The purple line shows the D22 regulars. It tracks the blue curve quite closely except where the statistics are small.

[Figure: average masterpoints gained per year, binned by masterpoints at start of year]

Analysis details: The analysis period was from masterpoints recorded at the start of October 2009 to those recorded at the start of October 2010. This period is slightly impacted by the curtailment of triple point charity games which went into effect July 1, 2010. It will be interesting to regenerate this figure a year from now to fully understand the impact of that change. Because I was missing one of the monthly reporting files, a regular player is not actually someone who has played in 12 of the last 12 months but rather someone who has played in all 12 of the last 13 months for which I have activity data; the meaning of 1-7 and 8-11 months is similarly altered. This detail should have negligible impact on the results and does not affect the calculation of total masterpoints gained for the year.
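For the curious, the binning and error-bar computation work roughly as in the sketch below. The per-player arrays are synthetic stand-ins since the underlying data is not public; only the bin edges and the error formula follow the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real data: masterpoints at the start of the
# year and the gain over the year, one entry per active player.
start_mp = rng.lognormal(mean=4.0, sigma=2.0, size=5000).clip(0.01, 99_999)
gain = rng.gamma(shape=2.0, scale=start_mp ** 0.3)

# 30 logarithmically spaced bins, six per decade, from 1 to 100,000 MP.
# np.digitize assigns 0 to the underflow bin for players with < 1 MP.
edges = np.logspace(0, 5, 31)
idx = np.digitize(start_mp, edges)

for b in range(31):
    g = gain[idx == b]
    if len(g) < 2:
        continue
    sem = g.std() / np.sqrt(len(g) - 1)  # error on the mean, as above
    print(b, len(g), round(g.mean(), 1), round(sem, 2))
```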

The ACBL tracked ~289,000 player numbers as of October 2010 but only had around ~165,000 members. Presumably the additional ~124,000 player numbers are retained in case players decide to rejoin the ACBL. Of the ~165,000 members, only ~138,000 (84%) played during the last year. Around ~9,000 active players were excluded from consideration because they were not members during the entire period examined, either because they joined the ACBL partway through or dropped their membership. 82 players were excluded because they lost masterpoints during the examined period. I don’t know how these individuals lost masterpoints; perhaps it was due to a combination of scoring revisions, administrative errors, and the use of out-of-date versions of ACBLscore with the wrong masterpoint formulas. Some may have been penalized for cheating. 60 players were excluded because their gain in masterpoints was due to transferring their lifetime accumulated points and/or achievements from another bridge organization to a roughly equivalent number of ACBL masterpoints. In some cases the transfer is obvious, for example a gain of 5000+ MP. However, to automate the process, I assumed any player who gained more than 120% of the masterpoints achieved by the 2009 Mini-McKenney winner of the appropriate bracket was a transfer.

In any given month, ~93,000 (56%) of players are active. The number of months per year that active players play is shown in the histogram below. I am using the number of months in which a player plays as a surrogate for the frequency of play because it is the data I have available; I do not have direct access to the ACBL database. It would be interesting to regenerate these results using the actual number of sessions per player, possibly distinguishing between club and tournament play.

[Figure: number of months per year that active players play]

From October 2009 to October 2010, the ACBL gained ~15,000 members and lost ~8,000. Of the members lost, ~1,800 died, ~1,700 had never played, and ~5,200 had not played during the last year. There may be some overlap between the categories, particularly the first and the third. As for the ~27,000 players who did not play during the last year, some had never played, e.g. they received a student membership and never used it. Others may be on their way to dropping their membership, and still others may no longer be able to play easily but still enjoy receiving the magazine and being part of an organization in which they have long held membership.

Download plot data as tab-delimited text.
Download plot data in Excel format.

Masterpoint inflation reversed

Revised September 16, 2013

The impact of the new rules curtailing triple point charity games, which went into effect July 1, 2010, is beginning to show up in the masterpoint statistics. The plot below shows the total number of masterpoints awarded each month from August 2009 through September 2010. These numbers are based on the reports released around the 6th of each month; thus the September bar represents masterpoints recorded between September 6, 2010 and October 6, 2010. Since clubs are often a couple of weeks behind in reporting masterpoints, it is only now that the full impact of the new regulations becomes apparent.

Compared to last August, masterpoint awards are down 18%. Compared to last September, they are down 10%. Compared to the 2010 average before the new regulations went into effect, masterpoint awards are down 21%. Since both clubs and tournaments award masterpoints and the new regulations only affect clubs, club masterpoint awards have probably fallen at least 30%, though I do not have the numbers to say this conclusively.

[Figure: total masterpoints awarded by month, 2009-2010]

I don’t know what explains the small December-to-January discontinuity. Maybe people play a bit more bridge during the holidays to escape their in-laws and then get busy in the new year and cannot play as much. The discontinuity could also be explained by a reporting issue; perhaps clubs are extra careful to get up to date with their masterpoint submissions at the end of the year, even if they are a couple of months behind.

Note: the similarity of the October 2009 and November 2009 peaks is a data-massaging artifact; I only had the delta between October and December and simply divided it by two to generate monthly data.


This plot was significantly revised in September 2013. The previous plot showed only about 70% of the total masterpoints awarded each month and displayed greater monthly variability than is actually the case. The error arose from failing to account for player deaths. The ACBL retains a player’s number in the monthly masterpoint file in case they rejoin the organization; however, when a player dies the ACBL removes the player’s number from the monthly masterpoint file. Fewer than 700 players die each month, but the impact on the total number of masterpoints in the file is significant because a lifetime total is removed in each case.

Towards a Rating System

The June 2010 issue of the ACBL Bulletin contains a letter from Rona Goldberg in New York, who proposes replacing the masterpoint system with a rating system.

I have been reading a lot lately about masterpoints, achieving Life Master status, masterpoint races and the Barry Crane Top 500. It is now time to enter the 21st century. As do the chess and Scrabble organizations, the ACBL should give members a ranking based on performance. Among other factors, accomplishments can be based on the rankings of the opponents.

I am not savvy enough to develop the formula, but I know we have members who are. We can convert current masterpoint totals and evaluate future performances based on results and the rankings of the opponents – e.g., doing well against opponents with higher rankings will count for more than doing well against opponents with lower rankings.

Let’s get rid of the current system and acknowledge accomplishments based on performance.

I basically agree. Moreover, I have a decent technical understanding of how the chess rating formulas work, and I have tracked down the statistician, Mark Glickman, who probably knows more than anyone about the United States Chess Federation (USCF) rating system and a hell of a lot about rating systems in general. I’ve been thinking over a few details that I would like to bounce off him when they are more fully baked.

Unlike chess, bridge has the added complication that the fundamental competitive unit is usually a pair rather than an individual and yet of course we seek to rate individuals. One approach would be to simply rate established pairs and leave it at that. This is computationally feasible and would by itself be interesting. Maybe San Diego could have the Oakley-Walters Top 50 list… or should the list be named differently?

I think there are two fundamental approaches to determining individual ratings, what might be termed the forward and backward approaches. The forward approach starts with individual ratings which adjust depending on the implied strength of the partnership versus the implied strength of the opponents and the results. I think this is the approach the online services generally take; certainly it is what OKbridge did in its early days, in the early 1990s when I dabbled in online bridge, and it is computationally relatively simple. The problem is that the implied partnership strength is just a guess based on the typical performance of a pair of players with Rating A and Rating B, even if the formula is more sophisticated than the average of their ratings. Such a system does not account for individual partnership synergies, positive or negative. And that can lead to practical problems, for example stronger players who refuse to play with weaker players because they feel their rating will drop, even though the rating system accounts for the expected performance of their partnership.
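To make the forward approach concrete, here is a bare-bones Elo-style sketch. The averaging of partner ratings, the 400 scale, and the K-factor of 32 are chess conventions borrowed purely for illustration; nothing here is what OKbridge or anyone else actually used.

```python
# Bare-bones Elo-style sketch of the forward approach. All constants are
# chess conventions borrowed for illustration, not anyone's real system.

def expected_score(r_us, r_them):
    return 1.0 / (1.0 + 10.0 ** ((r_them - r_us) / 400.0))

def update_pair(r1, r2, opp1, opp2, score, k=32.0):
    """score: 1 for a win, 0.5 for a tie, 0 for a loss."""
    pair = (r1 + r2) / 2.0  # implied partnership strength (a guess!)
    opp = (opp1 + opp2) / 2.0
    delta = k * (score - expected_score(pair, opp))
    return r1 + delta, r2 + delta

# An even match (1600 average vs. 1600 average) that our pair wins:
print(update_pair(1500, 1700, 1600, 1600, score=1))  # (1516.0, 1716.0)
```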

The backwards approach is to infer individual ratings from partnership ratings. This amounts to solving a very large but sparse system of over-constrained equations, which is typically done with a least squares fit. If the implied average partnership rating is assumed to be a linear function of the individual ratings, we enter well-trodden mathematical territory. How well would this work in practice, and is it computationally feasible for all ~160,000 ACBL members? I don’t know yet. But I am considering running some simulations in Matlab with different assumptions about the distribution of the number of regular partnerships per player and the general network topology, i.e. the extent of non-local interactions. Non-local connections, which are helpfully promoted by sectionals, regionals, and nationals, are important for establishing globally meaningful ratings. If the calculations look plausible on, say, a $5000 dual quad-core box with 48 GB of memory, which the ACBL could readily afford, it would be worth trying to get the ACBL to perform the calculation using real data.
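A toy version of the backward approach, assuming the simplest possible linear model (a pair’s rating is the average of its members’ ratings), looks like the sketch below. Real data would contribute thousands of such equations; the four pairs here are made up.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

# Made-up observations: (player i, player j, observed partnership rating).
players = ["A", "B", "C", "D"]
pairs = [(0, 1, 1550.0), (1, 2, 1620.0), (2, 3, 1480.0), (0, 2, 1600.0)]

# Each pair contributes one equation: (r_i + r_j) / 2 = pair rating.
A = lil_matrix((len(pairs), len(players)))
b = np.zeros(len(pairs))
for row, (i, j, pair_rating) in enumerate(pairs):
    A[row, i] = 0.5
    A[row, j] = 0.5
    b[row] = pair_rating

# Sparse least squares solve of the over-constrained system.
ratings = lsqr(A.tocsr(), b)[0]
for name, r in zip(players, ratings):
    print(name, round(r, 1))  # A 1530, B 1570, C 1670, D 1290
```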

Should a rating system replace the masterpoint award system? Probably. But I wouldn’t hurry to do it as Ms. Goldberg suggests. It’s not computationally burdensome to run the old system in parallel with a new system for a few years. We can afford the luxury of easing into a rating system as the quirks are ironed out.

Rex Latus Speaks Out Against Excessive K.O. Team Awards

In the June 2010 issue of the Contract Bridge Forum, Rex Latus, the District 22 President, reviewed masterpoint history, masterpoint inflation, and the recent curtailment of the triple point charity game madness. He noted in part the following:

As a result of these [masterpoint award] changes new players are rocketing up the rank ladder before they have time to really learn the game. This forces them into brackets and strats they are unprepared for, and some have lost interest and quit the game.

Inflation has devalued the masterpoint. We now see the ripple effect of good intentions gone astray. Effective January 1, 2010, changes were made to Life Master requirements. The masterpoint requirement has risen from 300 to 500 with corresponding increases in black, silver, red, and gold points. Is this progress? Is inflation a good thing? Are today’s new players better off than yesterday’s?

Then he went on to address the excessive awards for K.O. Teams:

This is one [issue] down and one to go. Let’s hope the ACBL next addresses the masterpoint award disparity between KOs and pairs events. It’s time to level the playing field. After all, is it only about masterpoints … or is it “For the love of the game?”

I’m happy to see Rex taking on the other big masterpoint distortion. Although the definition of a level playing field may seem subjective, Robert Frick has done some serious thinking on the matter and stated the principle that “the total number of masterpoints awarded should not depend on whether they [the players] organize themselves into teams or pairs. They are not suddenly better bridge players, or more deserving of masterpoints, just because they decide to play teams rather than pairs.” This seems obvious to me. He notes in part:

Unfortunately, the ACBL's formula for computing overall awards is based on tables rather than competitors. It’s just a bug in the formula (emphasis added). But as a result, overall awards of a team event are approximately twice that of a pairs event.

For example, suppose 80 people show up for a Regional event. If they organize themselves into pairs, they are awarded a total of 69.06 masterpoints in overalls, which is .86 masterpoints per person. If they organize themselves into teams, they are awarded 128.12 masterpoints in overalls, which is an average of 1.73 masterpoints per person.

Since Mr. Frick’s website appears to be a few years old, his calculations may be slightly out of date but qualitatively the problem is still very much the same.

Deep down I think the ACBL knows what it needs to do. And there are clearly ACBL members who grasp the principles and mathematics behind a logical award system. It is just a question of whether the ACBL finds the willpower to tackle the problem head on.

Why did communism collapse? Was it the economic punishment exacted by the arms race with the U.S. (the Jeane Kirkpatrick school of thought), the U.S.S.R.’s hard currency crisis when Saudi Arabia flooded the world with cheap oil in the 1980s (see Gaidar, 2007), or the inability to compete technologically while restricting information flow? All of these were probably contributing factors. But in part, many people seem simply to have stopped believing in the system. Even the leadership. Masterpoints should not be allowed to suffer the same fate unless they are to be replaced with something better.

Triple Point Charity Games Curtailed

The ACBL finally took action on the triple point charity game distortion at the Reno NABC (March 2010). Hooray. The following is excerpted from Ken Monzingo’s remarks, published in the May issue of the Contract Bridge Forum and also on his website.

New Terms for Charity & Special Games

In 2004 the floodgates were opened to allow near unlimited special and charity games in clubs. Although that paste is still out of the tube, to restore some sanity we had another one of those “nobody’s satisfied” compromise issues. The heated debate went from no change at all, all the way down to doing away with special games completely. Of course neither of those extremes is viable, but there was a lot of soul searching about the landslide masterpoints we are now awarding in clubs.

The compromise was that the month of February is reserved for Junior Fund Games, the month of April is reserved for charity games, and the month of September is reserved for International Fund games. In those months any and all ACBL sanctioned club sessions may be special games for the named funds. In the remaining nine months of the year one game per month per sanctioned session may be a special game for those purposes. Masterpoints for all these special games will now be awarded at 70% of sectional rating with a cap of 6.00 MPs.

Also, any club, in any calendar year, that runs one (or more) allowed special local charity game that is sanctioned for extra masterpoints must make available for public inspection an accounting of all funds raised in such games no later than Feb. 28 of the following year.

Motion passed 23-2, effective July 1, 2010.

Ken’s comment that “neither of those extremes is viable” is curious. What exactly would happen if the ACBL went cold turkey? Would people really stop playing? Yes, the Junior Fund and the International Fund, which support bridge activities widely seen as desirable, would have to be funded in some other manner, either by a direct increase in dues or more stealthily by raising sanction fees. But that seems more desirable than a major distortion in the award system.

At times, discussions of masterpoint inflation recall one’s grandparents’ stories about having to walk to school uphill both ways in the snow. Moreover, in principle the inflation could be calculated quite accurately year to year and used to compute “real masterpoints,” just as economists calculate real dollars indexed to a given base year. I even suggested this once to Richard Oshlag, who is responsible for the databases and backend web services at the ACBL. He thought it was a fine idea, though spending time on it is not his call to make.
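Given such an index, the “real masterpoint” calculation would be a one-liner. The index values below are invented purely to illustrate the deflation; the ACBL publishes nothing of the sort.

```python
def real_masterpoints(nominal_mp, year, index, base_year=2000):
    """Deflate an award to base-year terms, given an inflation index of
    average award sizes per year (invented numbers below)."""
    return nominal_mp * index[base_year] / index[year]

index = {2000: 1.00, 2005: 1.20, 2010: 1.55}  # hypothetical index values
print(real_masterpoints(1.50, 2010, index))   # ~0.97 "year-2000" MP
```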

But triple point charity games introduce a fundamental inequity not across time, but rather amongst clubs. For example, how much meaning does the Ace of Clubs ranking have if it turns out that most of the winners picked up most of their points in charity games? Maybe those masterpoints should be recalculated as ordinary club awards for the purpose of this ranking. But the more one attempts to correct for an inequity piecemeal, the more one wants to tear down the entire edifice.

Triple point charity games also kick off an arms race between clubs. If one club runs frequent charity games, other clubs come under pressure to do the same despite their principles, lest their attendance fall. Yet in the end, when all clubs have caved in, no one is better off than before because the value of masterpoints is relative. Meanwhile tournaments lose some of their luster because the charity game awards are 70% of a sectional award. Actually, someone is better off than before, namely the wealthier regions whose players can readily afford the extra $1 per game. It’s easy to scoff at this if you have lived in wealthy San Diego for a long time, but I have played bridge in other parts of the country, and I can assure you that each dollar increase in card fees caused serious clamoring from people on fixed incomes.

I have no trouble with the ACBL engaging in charitable activities but those activities should not be entangled with the award system.

2010 San Diego Regional Statistics

For the second year in a row I have worked with Ken Monzingo, Betty Bratcher, and Sergio Mendivil to post all the results from the San Diego regional online (2010, 2009) with integrated hand records using ACBLmerge. This is a significant but manageable task though it always turns out to be a bit more work than “anticipated” because I end up making improvements to the program.

One new thing this year is the computation of field strength. Since the masterpoint system is really an award system rather than a rating system, one might question how meaningful such statistics are. However, despite the entanglements of longevity, frequency of play, and grade inflation (especially triple point charity games), masterpoint holdings must have some correlation with skill level, especially when averaged over all players in an entire event. And in a Rumsfeldian sort of way it is the system we have if not the one we want to have.

Still, should one use an arithmetic mean (the ordinary average), a geometric mean, or something else entirely? There is no perfect answer, but the first two have the advantage of being easy to compute, so I presented both. Intuitively, I feel the geometric mean is more meaningful than the arithmetic mean. In a geometric mean, the averaging is done on a logarithmic scale. For example, the geometric mean of two players with 10 and 1000 MP respectively is 100 MP, not the arithmetic 505 MP. Less experienced players drag down the geometric mean more than the arithmetic mean. A field with a high geometric mean should in principle be uniformly fairly tough and offer few gifts.
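The geometric mean is just the exponential of the mean of the logarithms, which is why it suits logarithmically distributed quantities like masterpoint holdings:

```python
import math

def geometric_mean(values):
    """exp of the mean of the logs, i.e. the n-th root of the product."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

holdings = [10, 1000]
print(sum(holdings) / len(holdings))  # arithmetic mean: 505.0
print(geometric_mean(holdings))       # geometric mean: ~100.0
```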

The fact that the histogram of masterpoint holdings on a logarithmic scale for tournament attendees (see below) is approximately Gaussian (bell-curve shaped) lends support to considering the geometric mean. Notice Jeff Meckstroth and Eric Rodwell filling in the bin centered at ~56,000 MP. Arguably, to get a feeling for the entire population, the histogram should be calculated for player-sessions rather than players. This is done in the second plot, where the distribution shifts to the right because on average the stronger players play more sessions. This plot is also approximately Gaussian. For comparison, consider the logarithmic distributions for all ACBL players ever and for all ACBL players who have played during the last two months, the latter constituting 66% of the current membership of ~164,000. Both distributions are quite far from Gaussian.

Note: as a practical matter, all the masterpoint distributions below exclude players with fewer than 1 MP.

[Figure: histogram of masterpoints for 2010 San Diego regional players]
[Figure: histogram of masterpoints for 2010 San Diego regional player-sessions]
[Figure: histogram of masterpoints for ACBL players who played during the last two months]
[Figure: histogram of masterpoints for all ACBL players]
[Figure: histogram of number of sessions played]

February 2010 Bridge Bulletin Letters

As if on cue, the February 2010 Bridge Bulletin arrived two days after I wrote the previous entry; it contains five letters to the editor touching on masterpoints and the meaning of the Life Master rank.

Lou Stern suggests that passing a test or series of tests should be a requirement for the gold card. Henry Francis reminds us what a herculean task making Life Master once was. Frank Walsh notes that ~50 years ago he received a club rating point slip for 0.06 MP for coming in third; this year the same result was worth 1.50 MP in a club charity game. He suggests a ~7500 MP requirement for Life Master would roll back the clock, accounting for masterpoint inflation. Dennis Cohen also cites masterpoint inflation as a major shock when returning to bridge after a 17-year hiatus.

Masterpoint Reform


As a lead-up to the January 2010 sectional, I wrote a couple of articles about the ACBL reward system (Masterpoints or Glory; Against 499er, 799er, 999er Events). Since then I have found Robert Frick’s masterpoint reform website, which covers the matter in detail. Mathematically inclined visitors may wish to head directly to the page titled A Formula for Masterpoints. If you care about this issue, bear in mind that we have a new president at the ACBL, Rich DeMartino. Now is a good time to speak up.