
Masterpoint Notes

by Matthew Kidd

A small blog on masterpoint issues.

Mixed feelings on masterpoint inflation

For currencies, deflation is regarded as far worse than anything short of hyperinflation. Why spend money today if it will be worth more later? Once everyone reaches this conclusion, the economy crashes. Partly to avoid the risk of deflation and partly as a mild goad to spend and invest in profitable activities, economists generally regard a small inflation rate, say 2-3%, as a good thing. Should the same apply to masterpoints? The Masterpoint Committee itself cannot decide. Stu Goodgold made the following comment on Bridge Winners:

… having served on the Masterpoint Committee near the end of their 2 year effort to update the MPs for pair and other events, I can definitely say there was an extensive study done on how the new formulas would work against previous regional events.

Even on the Masterpoint Committee there are two schools of thought: one that MP should not be inflated by changes in the formulas, and another that MP inflation is a way of life and is good for encouraging lower ranked players to play more.

The latter position suggests there would be a collapse of the “economy” if some degree of inflation were not built into the system. I don’t believe this. Masterpoints aren’t equivalent to a currency because they cannot be exchanged. At best there is only an indirect trade in terms of one’s bragging rights and the partners (or clients) that one might attract based on the perception of one’s skill level from a masterpoint total alone. Introducing inflation is akin to asserting that more recent results are more important. Certainly this should be the case for a rating system, but should it be the case for an award system? Perhaps it makes sense if the average skill level is rising. But is it? My perception is that bidding has advanced in the last thirty years but I’m not convinced that card play has. Perhaps a 20-50% inflation over the same 30-year period is justified, but certainly not the inflation we have seen.

Unit Championships demoted

Unit Championships took a hit in the new masterpoint formula. The R factor for them fell from 8.5 to 7 when the January 2015 changes went into effect. This change is noted in the proposal but the R factor for Unit Championships is undocumented in the January 2015 version of Masterpoint Award Rules & Regulations. Sam Whitten at ACBL headquarters confirmed both the omission and the new value of 7.

In absolute terms, the payout for Unit Championship pair games is still increasing by about 3% because all tournament pair game awards are boosted 25%. The exact calculation is (7 / 8.5) × 1.25 ≈ 1.03 ⇒ a 3% boost. But in relative terms, Unit Championships are now on par with Club Class One tournament games, a group that includes Club Championships, Inter-Club Championships, and the Junior, Charity, International, Educational, and Grass Roots Fund games, and significantly behind Club Class Two tournament games such as Club Appreciation, Club Membership, and Upgraded Club Championship games.

Masterpoint Formula Tweaks approved

The masterpoint tweaks discussed below were approved at the second reading during the fall meeting of the National Board of Directors in Providence, RI.

Also, the now-accepted proposal contains the cryptic comment, “Notice that this also changes how X strata (now the A2 strata) awards are calculated. It is strictly on the percentage that X contributes to the total A flight,” in the section that discusses fixing the concurrent table kludge. Richard DeMartino (D25, New England) clarified its meaning in his July 2014 District Direction report.

X Strata Awards: Presently, there is a serious problem in the way X awards are computed. What causes this problem is that under present rules the X award must be 105% of the first place award for the top strata in the game below. Consequently, a very small stratum will receive a higher award than perhaps a far larger game below. Those in the X category have a chance at winning the A overall awards which is a big benefit but now the award will be computed based on a factor times the full A award and is no longer guaranteed to be higher than the top award for the lower flight.

Proposed Masterpoint Formula Tweaks for 2015

The ACBL Masterpoint Committee has unanimously approved a set of changes to how masterpoints are calculated at tournaments and for special club games that are based on some fraction of the sectional rating (note: ordinary club games are treated differently; see Chapter 4 of the ACBL Handbook of Rules and Regulations). The changes will go into effect on January 1, 2015 if the National Board of Directors approves them at its fall meeting at the NABC in Providence, RI. The proposal has the details, but a careful reading of Masterpoint Award Rules & Regulations (Mar 2013) for the current formulas is necessary to fully understand the proposal. Below are the key changes, followed by some background that is missing from these sterile documents.

  1. Pair games increase by 25% relative to team games (but see #5).
  2. Sectionals gain at expense of STACs.
  3. Special club games decline relative to sectionals.
  4. Limited games decrease slightly relative to open games.
  5. Concurrent (or “subordinate”) table kludge removed.
  6. Scaling with the number of tables changes (regionals only?).

The first item increases the payout for pair games relative to team games. The pair-team balance has been off for years, a problem that I’ve heard is attributable to Rich DeMartino. It should be noted that the base multiplier for pairs goes up while the multiplier for teams is unchanged. In principle this introduces a small amount of masterpoint inflation but the impact is less clear when all changes are accounted for.

Many changes are proposed to the multiplier for the tournament classification (the R factor). The National and Regional values would remain the same. The Sectional and Sectional Tournament at Club (STAC) multipliers are currently identical. The proposal boosts the sectional multiplier 10% while decreasing the STAC multiplier by the same amount. The logic is that a sectional is a more controlled environment, with duplicated boards and ACBL tournament directors. All special club games, e.g. Club Appreciation, Membership, ACBL-Wide Charity, Fund, and Senior Pairs, will be decreased in value relative to sectionals, though the net impact is to leave them almost unchanged since the sectional R factor is going up 10%.

Limited events pay out less than open events in a manner smoothly dependent on the upper limit. The proposal makes a small change whose intent seems to be to compensate for masterpoint inflation.

The fifth change is to remove the “concurrent table” kludge. If you look at ACBL posted tournament results (example), you will see wording like, “19 tables / based on 34 tables”. This means that the event actually had 19 tables but a value of 34 was used in the formula for calculating the B factor. This is because the awards for the top (unlimited) flight of an open event are based on the table count for the event and all concurrent limited events, for example a B/C/D Swiss run at the same time as (concurrent with) an A/X Swiss, or an open pairs run at the same time as Gold Rush pairs (0-750, say) and maybe even 299er pairs. The assumption is that the weaker concurrent events draw players away from the open event, players who, had they been rolled into the open event, would easily have been beaten by the winning pair or team* and by nearly all the overall winners of the open event. In other words, the concurrent table kludge is a zero-order correction for the fact that an open event that is almost completely depleted of players with fewer than, say, 750 MP is typically stronger than an open event not similarly depleted.

Bob Hamman once complained about matchpoints versus IMPs: “…Now take the same two fighters, blindfold them and tie one hand behind their backs. Divide the [boxing] ring diagonally with a solid barrier. Now go down to the local tavern and collect 20 drunks. Place 10 drunks on each side of the ring and let the fighters go at it. Whoever knocks out his drunks first is the winner. That’s matchpoints!” With the current masterpoint formula Hamman’s complaint might be updated as, “Now go down to the local taverns and collect a few dozen drunks and let them brawl outside the ring; the tournament manager who collects them best creates the biggest winner. That’s the ACBL!”

The obvious technical fix is to compute the strength of the field and scale the award based on that strength (not necessarily linearly). Despite the shortcomings of treating an award system as a rating system, there is some correlation between masterpoints and skill. Masterpoints averaged over a large field should be a reasonable estimate of average skill. This may require a geometric average instead of a linear average, or even a more complicated transformation rooted in average observed skill differences at different masterpoint levels. Serious Strength of Field (SoF) proposals have been put forth (example) but they never seem to get anywhere. Our own D22 representative, Ken Monzingo, has railed repeatedly against such proposals, though his basic argument doesn’t seem to go beyond, “Let’s not break a masterpoint system that is working.” Sure, in the big picture the current masterpoint system keeps a lot of people playing. But any award system, even a worse one, would probably achieve that to some degree. Creating an equitable system based on logical principles shouldn’t break anything and may well increase participation.
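To make the idea concrete, here is a minimal sketch of what a strength-of-field multiplier built on the geometric mean might look like. The 500 MP reference value and the square-root damping are invented for illustration; nothing here reflects an actual ACBL formula or proposal.

```python
import math

def sof_multiplier(field_mp, reference_mp=500):
    """Scale awards by field strength, estimated as the geometric mean
    of the players' masterpoint holdings. The reference value and the
    square-root damping are illustrative choices, not ACBL policy."""
    geo_mean = math.exp(sum(math.log(mp) for mp in field_mp) / len(field_mp))
    return math.sqrt(geo_mean / reference_mp)

weak_field = [50, 80, 120, 300, 450]
strong_field = [800, 1500, 3000, 5000, 12000]
print(sof_multiplier(weak_field))    # about 0.54: awards scaled down
print(sof_multiplier(strong_field))  # about 2.4: awards scaled up
```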

I find it concerning that the zero-order concurrent table kludge is being removed without an attempt to replace it with anything better. As screwball as concurrent tables are, simply removing concurrent tables may make things worse. Or perhaps it will lay the groundwork for an SoF factor. It’s hard to know what the people on the Masterpoint Committee are thinking. They seldom provide any commentary.

It would seem that awards should scale with the number of participants, i.e. teams, pairs, or individuals, depending on the type of event. This can be formulated as scaling with the number of tables after compensating for the type of event. But nothing is ever simple with the masterpoint formulas. The current formula for the B factor is B = (Tables + 10) / 60 for the first 60 tables. My guess is that the +10 is included to keep players in small tournament games from getting discouraged by small payouts. The proposal replaces this with the following formula:

(3 × the first 15 tables) + (2 × the next 10 tables) + (# of remaining tables < 100)

Though the documentation isn’t very clear, I read this change as applying only to the top strata in regionally-rated open events. I think the intent of this formula is to partly compensate for removing the concurrent table kludge. It effectively assumes a certain number of concurrent tables without being subject to the payout fluctuations caused by a varying number of actual concurrent tables.
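For the numerically inclined, here is a sketch of both scalings. The old B factor is documented; for the new formula I have coded only my reading of the weighted table count, since the proposal does not spell out the normalization. The test values include the 19 and 34 tables from the results example above.

```python
def b_factor_old(tables):
    """Current B factor: B = (Tables + 10) / 60, for the first 60 tables."""
    return (min(tables, 60) + 10) / 60

def weighted_tables_new(tables):
    """My reading of the proposed weighting: 3x the first 15 tables,
    2x the next 10, 1x the remainder, capped at 100 tables total."""
    tables = min(tables, 100)
    return (3 * min(tables, 15)
            + 2 * min(max(tables - 15, 0), 10)
            + max(tables - 25, 0))

for t in (10, 19, 34, 60):
    print(f"{t:3d} tables: old B = {b_factor_old(t):.2f}, "
          f"new weighted count = {weighted_tables_new(t)}")
```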

Kludge is built on kludge without reference to principles. Strength of Field continues to be ignored at best and actively dismissed at worst.

One last thing: the committee wimped out on addressing the S-factor issue.


*The implicit assumption behind concurrent tables is not always correct! Neil Chua and I won a two-session, 32-table open pairs event at the 2014 San Diego regional. We both had few enough masterpoints that we could have played in the concurrent Gold Rush pairs event.

Comments from Bob Heller
(D7 National Board Representative)

Bob Heller, the District 7 National Board of Directors representative and 2013 chair of the Masterpoint Policy Committee, made some informative comments in a long conversation in January 2014 on Bridge Winners.

[In 2013] we made some baby steps toward a major overhaul of the entire masterpoint picture, aiming for logic and consistency as well as review of what traditionally has been done. This obviously is no small task, and our CEO, Robert Hartman, is heading a committee/work group to piece everything together this year in hopes of having the ACBL board approve numerous substantial improvements this summer or fall (2014). This is indeed a different kind of committee, with the CEO, a longtime tournament director and field supervisor, and two members of the league’s tournament department serving along with five board members. Views of the board members are quite varied. Some give great weight to history and tradition; some look through a mathematician’s eye; some are most concerned with the Flight B, C and advancing players; some have been on the board a long, long time; and one is serving his first year. Small and medium size districts with small and medium size tournaments have most of the positions. This, too, functions as the official Masterpoint Committee this year. Which individuals fit where is not really an issue (tho if someone wishes to email me, I’d be happy to explain where I’m coming from). Responses to some comments:

  1. “Subordinate tables” have created skewed MP awards, with A/X wildly different depending on whether there is a 50- or 60-table Gold Rush or B/C/D played concurrently. This aspect should pretty much disappear.
  2. “X” has been misunderstood and misinterpreted since Day 1. It never was intended to be a separate flight. It always was meant to be a sub-set of Flight A, or a Strat B in the context of that game. If there were no “X,” all of those players would be in A. They’d have no choice. “X” players have been showered with large numbers of masterpoints they never should have received. They won’t be taken away, but equity will be restored.
  3. Two-session events paying 1.5 times the MPs of a one-session event [The S-factor issue] is being hotly debated. Everyone seems to have a different opinion. I don’t do well in Vegas, but I’m betting on the “over,” possibly up to 1.75.
  4. Pair game MP awards have been inherently inferior to team game awards. Many of us believe we are years or decades overdue for a fix, and the debate is how great a fix. (The major factor in awards is the number of tables. With 25 tables of teams, there are 25 entries or contestants. With 25 tables of pairs, there are 50 entries or contestants. Why would we think fifth place is worth the same in both events?)
  5. The comment [on Bridge Winners] about lower brackets of KOs winning excessive masterpoints is about 5 years out of date. The change in formula from tables at and below the bracket level to one based entirely on masterpoint average fixed that. Not many folks today believe that lower brackets “over pay.” Remember, these are four-session events.

    (Two less relevant items not included.)

Hooray for item #1, the source of many distortions. Anything strength-of-field related, as is currently done for bracketed K.O.s, must be better and far easier to explain and justify. Just try reading pages 9 and 10 of Masterpoint Award Rules & Regulations and see if your head doesn’t spin.

Item #4 is a huge issue. I remain in firm support of Robert Frick’s statement as a guiding principle.

If 80 people show up to play bridge, the total number of masterpoints awarded should not depend on whether they organize themselves into teams or pairs. They are not suddenly better bridge players, or more deserving of masterpoints, just because they decide to play teams rather than pairs.

What I heard was that Rich DeMartino screwed up the balance between pairs and teams years ago and we live with this mess today. Maybe it will get fixed this year.

S-factor revisited

After running a two-session event at the 2009 La Jolla memorial weekend sectional and noticing how small the masterpoint payoff was, I contacted the ACBL and learned about the S-factor from Butch Campbell, though he was unable to provide an explanation of the logic behind it. I wrote up what I learned in the article Masterpoints or Glory.

Henry Bethe discussed the S-factor a bit in a long conversation in January 2014 on Bridge Winners.

First the 1.5 factor for two sessions. I was on a committee that wrestled with this. Back perhaps around 1980, perhaps a few years earlier; I can’t date it precisely. I argued for more - 1.75, perhaps. Perhaps you need the background that one-session sectional events on Saturday were a rarity at that time. And that the ratio had been 1.33! You also need to know that in those days second got 50% of first, not 70%. So the 1.5 factor meant that winning a two session event paid the equivalent of winning one single session event and finishing second in the other. This was never updated when the relationship of lower places to first was changed. Note, by the way, that the MP reward for winning a two session event is the equivalent to winning and finishing third in two one session events.

Typical, very typical. No one understands the machine anymore.
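Still, Bethe’s arithmetic is easy to check. A minimal sketch, assuming third place now pays 50% of first (the figure his last sentence implies):

```python
first = 1.0
second_old, second_now, third_now = 0.5, 0.7, 0.5  # fractions of 1st place

# Old scale: S = 1.5 matched winning one session and placing second
# in the other (1 + 0.5).
print(first + second_old)  # 1.5

# Current scale: a win plus a second is now worth 1.7, so S = 1.5
# only matches a win plus a third.
print(first + second_now)  # 1.7
print(first + third_now)   # 1.5
```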

However, Steve Bloom says Rich DeMartino recently sent out e-mails asking for opinions/suggestions about this S-factor for multiple sessions. You can contact him at district25director@acbl.org. Note: cut-and-paste may not work for the e-mail address due to an e-mail harvesting prevention measure.

Gold-plated business opportunity

Here is a funny exchange from a long conversation in January 2014 on Bridge Winners about masterpoints.

Gary Hann

The old adage continues to be true: “You can't take it with you” :)

Love the game with a passion; when one transitions to the Bridge Game Above, the masterpoint counter resets…

Steve Bruno

Actually a friend of mine and I about ten years ago contemplated going into the business of etching gold-plated plaques which showed a person’s masterpoint total at the time of his or her death. They would be permanently affixed to the tombstone. The motto of our company would have been “you can take them with you”. But, even though we thought there would be a high demand for such a product, we couldn't bring ourselves to perpetuate the false value of masterpoints.

First 1000 masterpoints revisited (2013 data)

Three years ago I examined how many masterpoints players gain on average in a year based on how many masterpoints they have at the start of the year. The most interesting result was a steepening of the curve starting at roughly 1000 MP, suggestive of a qualitative difference between players above and below this cutoff. I redid this calculation for the 2012 masterpoint data. Here are the results for 2013.

The 2013 data is the first truly clean set of data. In previous years I have had small issues with missing data, which I corrected for in ways that I think have very little impact on the overall result. Nonetheless, I’m pleased to present clean data.

As with the 2012 data, and as explained there, the curves are now accurately labeled “Win MPs…” because if a player does not win any masterpoints in an event, the ACBL has no record that the player played in the event, and by extension if a player does not win any masterpoints in a month, the ACBL has no record that the player played at all during the month.

[Plot: average masterpoints gained in 2013, binned by masterpoints at the start of the year]
[Plot: number of months in 2013 that active players won masterpoints]

There was an increase of roughly 6,500 active players, up to 149,000 in 2013 from 142,500 in 2012. This seems to reflect a real change since the calculation methods used for the 2013 data are the same as those for the 2012 data. Both years use the straightforward method of checking whether the masterpoint total for each player increased in each month to determine activity in each month.

The steepening of the curves around 1000 MP remains robust. The fits are very slightly revised:

expected mp earnings = exp( ln(mp) × 0.1966 + 3.0851 )     for players with 10-1000 MP
expected mp earnings = exp( ln(mp) × 0.7800 - 1.0567 )      for players with 1000+ MP

Here mp is the number of masterpoints at the start of a year, ln is the natural logarithm, and exp is the natural exponent.
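Because the fits are linear in log-log space, they are power laws and easy to evaluate. A minimal sketch (treating 1000 MP as the exact cutover is my simplification; the two fits do not quite meet there):

```python
import math

def expected_mp_2013(mp):
    """Expected masterpoints earned in 2013 given a start-of-year holding
    of mp, per the fits above. The hard cutover at 1000 MP is my
    simplification; the two fits do not quite meet there."""
    if mp >= 1000:
        return math.exp(0.7800 * math.log(mp) - 1.0567)
    if mp >= 10:
        return math.exp(0.1966 * math.log(mp) + 3.0851)
    raise ValueError("the fits only cover players with 10+ MP")

for mp in (10, 100, 1000, 10000):
    print(f"{mp:5d} MP -> ~{expected_mp_2013(mp):.0f} MP gained per year")
```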

The slopes in particular are very similar to the 0.1993 and 0.7974 calculated from the 2012 data. This makes sense. No big masterpoint changes, for example further curtailment of “triple point” charity games, have been enacted since the start of 2012.

District 22 players continue to track the national average. Mike Passell has now joined Jeff Meckstroth in the “last bin” on the plot which runs from ~68,000 to 100,000 MP.

Analysis details: The analysis period was from masterpoints recorded at the start of January 2013 to those recorded at the start of January 2014.

Download plot data as tab delimited text.
Download plot data in Excel format.

Masterpoints awarded in each month of 2013

The plot below shows the total number of masterpoints awarded in each month of 2013. The summer months of July and August are very strong, even stronger than in 2012. September continues to be a low point.

By the way, this is the first truly clean set of data. In 2012 I was missing the February masterpoint file and so had to split the January to March masterpoint increase in half as an estimate for both January and February. As it turns out, that might not have been so bad: the 2013 data suggest that masterpoints in January and February are earned at almost exactly the same rate per day once the different number of days in each month is factored in. For the 2009-2010 dataset, a similar kludge was required for October and November of 2009.

[Plot: total masterpoints awarded by month in 2013]

Plus ça change, plus c’est la même chose

“If spending big sums of money—and, incidentally, keeping up with Goren—is not necessarily a great American game, keeping up with the Joneses is, and it is on this level particularly that duplicate bridge has boomed. First, the holder of a master point automatically qualifies as a figure of awe in a neighborhood bridge game. He can and will join such a game with feigned condescension, acting like Sam Snead entering a Flag Day tournament at Happy Knoll. Once playing, he will be allowed to explain with cool erudition his own tactics to his rapt audience, and to tut-tut at the mistakes they have made. He will have, in short, a glorious chance to show off.”

This is from a 1961 Sports Illustrated article titled Every Man A Bridge Master. It’s a good read, though you’ll probably need to zoom to read it.

First 1000 masterpoints revisited (2012 data)

Two years ago I examined how many masterpoints players gain on average in a year based on how many masterpoints they have at the start of the year. The most interesting result was a steepening of the curve starting at roughly 1000 MP, suggestive of a qualitative difference between players above and below this cutoff. Since then there have been changes to the masterpoint allocation, notably a curtailment of the triple point charity games. Also, my technical understanding of the input data has changed slightly. Below are the results for the 2012 masterpoint data.

One important difference is that the curves are labeled “Win MPs…” in a certain number of months of the year rather than “Played” (at least once) in a certain number of months of the year. This labeling is more accurate because if a player does not win any masterpoints in an event, the ACBL has no record that the player played in the event, and by extension if a player does not win any masterpoints in a month, the ACBL has no record that the player played at all during the month. The evidence for this assertion is detailed in Masterpoint Reports Decoded.

[Plot: average masterpoints gained in 2012, binned by masterpoints at the start of the year]
[Plot: number of months in 2012 that active players won masterpoints]

The other major change is an increase of roughly 14,000 players, which is far beyond the growth in membership over a two year period. The increase reflects a change in how a player is determined to have won masterpoints in a given month. The new analysis uses the straightforward method of checking whether the masterpoint total for each player increased in each month. The previous analysis relied on the “last active” field in the monthly masterpoint report being current, which is presumably indicative of masterpoints being won in the preceding month. These two methods are largely in agreement but differ enough to cause concern. My guess is that most of the difference results from the reporting of masterpoints won online, such that the new analysis more accurately reflects the activity of players who primarily play online. A smaller part of the discrepancy may result from late reporting of masterpoints. For example, if a club misses the October submission cutoff, the increase from September masterpoints will show up in the October totals but the last active date might remain in September, causing the two methods to disagree. Players who get behind in paying ACBL dues might also cause discrepancies.

The steepening of the curves around 1000 MP remains robust. The fits are slightly revised:

expected mp earnings = exp( ln(mp) × 0.1993 + 3.0351 )     for players with 10-1000 MP
expected mp earnings = exp( ln(mp) × 0.7974 - 1.1993 )      for players with 1000+ MP

Here mp is the number of masterpoints at the start of a year, ln is the natural logarithm, and exp is the natural exponent.

District 22 players continue to track the national average. Jeff Meckstroth has moved into the “last bin” on the plot which runs from ~68,000 to 100,000 MP.

The need to estimate playing frequency with a metric correlated with a separate factor, player skill, is problematic but unavoidable. The essence of the problem: how do we know whether a player who won masterpoints in eleven of twelve months played during the twelfth month but didn’t win any masterpoints, or simply didn’t play at all? For many players this is barely an issue. The depth of ACBL awards is 40% of the field. Multiple flights mean that a typical game pays masterpoints, no matter how few, to about half the field. So a player who plays once a week and is average in the field has only a ½ × ½ × ½ × ½ ≈ 6% chance of not winning any masterpoints during the month. Players tend to gravitate to a game they can win in instead of playing in a strong game to improve their skills, so this is a realistic scenario. However, a player who scratches only a third of the time has almost a 20% chance of not winning any masterpoints when playing once per week. And players who play less frequently than once per week could easily play during a month and not win any masterpoints.
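The arithmetic behind those percentages, as a quick sketch (assuming four independent sessions per month and a probability p of scratching in any one session):

```python
# Chance of winning no masterpoints in a month of weekly play.
for p in (1/2, 1/3):
    blank = (1 - p) ** 4
    print(f"p = {p:.2f}: {blank:.1%} chance of a blank month")
```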

There is no way to answer the question of didn’t win versus didn’t play in a given month for any one player. The best one could hope for is to statistically assign some fraction of the 8-11 month players to the every-month curve and some fraction of the 1-7 months per year players to the 8-11 months and every-month curves. I’ve tried a number of approaches to do this and none has proven mathematically convincing. My tentative conclusion is that the better part of the 8-11 month players are actually playing every month; however, I still expect the average number of games they play per month is much lower than that for the group that wins masterpoints every month. As for the 1-7 month group, I think most of them are not playing bridge at all during one or more months.

Analysis details: The analysis period was from masterpoints recorded at the start of January 2012 to those recorded at the start of January 2013. Because I was missing the February 2012 masterpoint report, the number of months a player won masterpoints in is actually based on winning masterpoints during December 2011, during January or February 2012, and during each of the ten months from March 2012 to December 2012. This detail should have negligible impact on the results. Although December 2011 activity is used in the estimation of the frequency of play, masterpoints won in that month are not counted in the total that each player won in 2012.

The “last activity” date for each player in the monthly masterpoint report file is a bit confusing. For example, if a player wins masterpoints in August 2012, the masterpoint report might not be submitted until after the September 7th monthly deadline for the September masterpoint report, in which case the points will appear in the October 2012 masterpoint report and the “last activity” date will be set to November 2012. It’s a bit like magazine publishing, where the publication date is often ahead of the date on which one receives the publication. Nonetheless, it is easy for computer code to compensate for this behavior such that the player is known to have won masterpoints in August. But even with this correction, there are some discrepancies as noted above.

Download plot data as tab delimited text.
Download plot data in Excel format.

Masterpoints awarded in each month of 2012

The plot below shows the total number of masterpoints awarded in each month of 2012. The summer months of July and August are quite strong. The dip in September is notable and does seem to confirm the empirical observation of directors that clubs struggle in the fall. Still, it is only a one-month dip. My guess is that bridge players take their vacation in September; it’s a great month to travel if you’re retired and can pick any time. The September dip is broad based, cutting across all masterpoint bins and frequency of play cuts (plot not shown).

With more than a year of masterpoint data since the curtailment of triple point charity games effective July 1, 2010, we can take a good look at the effect of that decision on masterpoint awards. Before the curtailment, the ACBL was awarding an average of 613,400 MP / month. In 2012, the ACBL awarded an average of 561,800 MP / month. That is a drop of 8.4%. Membership is up very slightly, such that the drop in the MP award per player is closer to 10%.

[Plot: total masterpoints awarded by month in 2012]

Basing masterpoint awards on the strength of the field

On page 27 of the January 2011 Bulletin, the new ACBL president Craig Robinson says his plans include working on changes to the ACBL masterpoint plan.

The latter [changes to the masterpoint plan] is undertaken every five years, and Robinson says this time around, there will be an emphasis on “measuring the strength of field” to help determine masterpoint awards. “We are working on how to do that fairly,” he says.

Currently, the strength of the field is considered in only the crudest possible manner, e.g. awards in a B/C/D event are less than those in an A/X event (where the number of tables is also factored in). I think Mr. Robinson is moving in the right direction. Even a simple adjustment based on the geometric mean of the masterpoints of the players in a field would be a big step forward.

The first 1000 masterpoints are the hardest


It is no surprise that on average the more masterpoints you have, the more you gain per year. But what do the actual numbers look like, and is there more to the story? The first figure below shows the average number of masterpoints gained for players grouped in each of 30 logarithmically spaced bins, six per decade. No one has yet crossed into the final bin, which starts at ~68,000 MP, though Jeff Meckstroth and Mike Passell are very close. The first bin contains all players who started the year with less than 1 MP.

In order to get a handle on the impact of playing frequency, the data set is split into three components: players who played at least once every month (blue), played in 8-11 months (green), and in 1-7 months (red). Players who did not play during the last year, i.e. were not active, were excluded from the analysis. The lower stacked bar chart shows the number of players in each category in each bin. The ~35,000 blue players are the true “regulars.” Error bars are included for the regulars; they represent the error on the mean, i.e. the standard deviation of the bin divided by sqrt(N-1).

The most striking feature of this plot is the sudden steepening of the curve around 1000 masterpoints, highlighted in gold. This steepening suggests a qualitative difference in the players above that level. My guess is that after ~1000 masterpoints one has truly mastered the fundamentals of the game. Alternatively, it could mean these players are fiends who play nearly every day of the week and attend many tournaments. But I do not lend as much weight to the second explanation because the steepening of the curve happens at ~1000 MP rather than at, say, ~10,000 MP; while it is true that 1000+ MP players are certainly enthusiastic, they are by no means largely pros. Approximately 600 players have more than 10,000 MP; ~32,000 have more than 1000 MP and ~12,000 have more than 2000 MP. Taking the conservative cut of 2000 MP, there are perhaps 20 potential superstars (12,000 / 600) for each actual superstar, probably limited more by desire and other considerations than by raw talent.

Ah, but to get to 1000 MP, there’s the rub! The first thousand are the hardest; the 18% of active players with 1000+ MP are swallowing up 48% of the masterpoints. How many masterpoints should you expect in a year? That depends on your frequency of play. If you are a regular player starting with fewer than 10 MP, you should expect to earn about 40 MP. If you have between 10-1000 MP (silver highlight) you should expect between 40 and 95 MP. It is possible to be more precise by fitting a line to each region. However, a linear fit on a log-log plot is actually a power-law fit: expected earnings = exp(b) × mp^a. This leads to the two formulas below for the expected earnings per year, where mp is the number of masterpoints at the start of a year, ln is the natural logarithm, and exp is the natural exponent.

expected mp earnings = exp( ln(mp) × 0.1919 + 3.2170 )     for players with 10-1000 MP
expected mp earnings = exp( ln(mp) × 0.7223 - 0.4755 )      for players with 1000+ MP

Are District 22 players different? Not really. The purple line shows the D22 regulars. It tracks the blue curve quite closely except where the statistics are small.

[Plot: average masterpoints gained per year, binned by masterpoints at the start of the year]

Analysis details: The analysis period was from masterpoints recorded at the start of October 2009 to those recorded at the start of October 2010. This period is slightly impacted by the curtailment of triple point charity games which went into effect July 1, 2010. It will be interesting to regenerate this figure a year from now to fully understand the impact of that change. Because I was missing one of the monthly reporting files, a regular player is not actually someone who has played in 12 of the last 12 months but rather someone who has played in all 12 of the last 13 months for which I have activity data; the meaning of 1-7 and 8-11 months is similarly altered. This detail should have negligible impact on the results and does not affect the calculation of total masterpoints gained for the year.

The ACBL tracked ~289,000 player numbers as of October 2010 but only had around ~165,000 members. Presumably the additional ~124,000 player numbers are retained in case players decide to rejoin the ACBL. Of the ~165,000 members, only ~138,000 (84%) played during the last year. Around ~9,000 active players were excluded from consideration because they were not members during the entire period examined, either because they joined the ACBL partway through or dropped their membership. 82 players were excluded from consideration because they lost masterpoints during the examined period. I don’t know how these individuals lost masterpoints. Perhaps it was due to a combination of scoring revisions, administrative errors, and the use of out-of-date versions of ACBLscore with the wrong masterpoint formulas. Some may have been penalized for cheating. 60 players were excluded because their gain in masterpoints was due to transferring their lifetime accumulated points and/or achievements from another bridge organization to a roughly equivalent number of ACBL masterpoints. In some cases the transfer is obvious, for example a gain of 5000+ MP. However, to automate the process, I assumed any player who gained more than 120% of the masterpoints achieved by the 2009 Mini-McKenney winner of the appropriate bracket was a transfer.

In any given month, ~93,000 players (56%) are active. The number of months per year that active players play is shown in the histogram below. I am using the number of months in which a player plays as a surrogate for the frequency of play because it is the data I have available; I do not have direct access to the ACBL database. It would be interesting to regenerate these results using the actual number of sessions per player and possibly with a distinction between club and tournament play.

[Plot: number of months per year that active players play]

From October 2009 to October 2010, the ACBL gained ~15,000 members and lost ~8,000. Of the members lost, ~1,800 died, ~1,700 had never played, and ~5,200 had not played during the last year. There may be some overlap between the categories, particularly the first and the third. As for the ~27,000 players who did not play during the last year, some had never played, e.g. they had received a student membership and never used it. Others may be on their way to dropping their membership, and still others may no longer be able to play easily but still enjoy receiving the magazine and continuing to be part of an organization in which they have long held membership.

Download plot data as tab delimited text.
Download plot data in Excel format.

Masterpoint inflation reversed

Revised

The impact of the new rules curtailing triple point charity games, which went into effect July 1, 2010, is beginning to show up in the masterpoint statistics. The plot below shows the total number of masterpoints awarded each month from August 2009 through September 2010. These numbers are based on the reports released around the 6th of each month; thus the September bar represents masterpoints recorded between September 6, 2010 and October 6, 2010. Since clubs are often a couple of weeks behind in reporting masterpoints, it is only now that the full impact of the new regulations becomes apparent.

Compared to last August, masterpoint awards are down 18%. Compared to last September, masterpoint awards are down 10%. Compared to the 2010 average before the new regulations went into effect, masterpoint awards are down 21%. Since both clubs and tournaments award masterpoints and the new regulations only affect clubs, club masterpoint awards have probably fallen at least 30%, though I do not have the numbers to say this conclusively.

[Plot: total masterpoints awarded by month, 2009-2010]

I don’t know what explains the small December to January discontinuity. Maybe people play a bit more bridge during the holidays to escape their in-laws and then get busy in the new year and cannot play as much bridge. The discontinuity could also be explained by a reporting issue; perhaps clubs are extra careful to get up to date with their masterpoint submissions at the end of the year even if they are a couple of months behind.

Note: the similarity of the October 2009 and November 2009 peaks is a data massaging artifact because I only had the delta between October and December and simply divided by two to generate monthly data.


This plot was significantly revised in September 2013. The previous plot showed only about 70% of the total masterpoints awarded each month and displayed greater monthly variability than is actually the case. The error arose from failing to account for player deaths. The ACBL retains a player’s number in the monthly masterpoint file in case they rejoin the organization; however, when a player dies the ACBL removes the player’s number from the monthly masterpoint file. Fewer than 700 players are dying each month, but the impact on the total number of masterpoints in the file is significant because a lifetime total is being removed in each case.

Towards a Rating System

The June 2010 issue of the ACBL Bulletin contains a letter from Rona Goldberg in New York, who proposes replacing the masterpoint system with a rating system.

I have been reading a lot lately about masterpoints, achieving Life Master status, masterpoint races and the Barry Crane Top 500. It is now time to enter the 21st century. As do the chess and Scrabble organizations, the ACBL should give members a ranking based on performance. Among other factors, accomplishments can be based on the rankings of the opponents.

I am not savvy enough to develop the formula, but I know we have members who are. We can convert current masterpoint totals and evaluate future performances based on results and the rankings of the opponents – e.g., doing well against opponents with higher rankings will count for more than doing well against opponents with lower rankings.

Let’s get rid of the current system and acknowledge accomplishments based on performance.

I basically agree. Moreover, I have a decent technical understanding of how the chess rating formulas work, and I have tracked down the statistician, Mark Glickman, who probably knows more than anyone about the United States Chess Federation (USCF) rating system and a hell of a lot about rating systems in general. I’ve been thinking over a few details that I would like to bounce off him when they are more fully baked.

Unlike chess, bridge has the added complication that the fundamental competitive unit is usually a pair rather than an individual and yet of course we seek to rate individuals. One approach would be to simply rate established pairs and leave it at that. This is computationally feasible and would by itself be interesting. Maybe San Diego could have the Oakley-Walters Top 50 list… or should the list be named differently?

I think there are two fundamental approaches to determining individual ratings, what might be termed the forward and backward approaches. The forward approach starts with individual ratings which adjust depending on the implied strength of the partnership versus the implied strength of the opponents and the results. I think this is the approach the online services generally take; certainly it was what OKbridge did in its early days, in the early 1990s when I dabbled in online bridge, and it is computationally relatively simple. The problem is that the implied partnership strength is just a guess based on the typical performance of a pair of players with Rating A and Rating B, even if the formula is more sophisticated than the average of their ratings. Such a system does not account for individual partnership synergies, positive or negative. And that can lead to practical problems; for example, stronger players may refuse to play with weaker players because they feel their rating will drop, even given the rating system’s accounting for the expected performance of their partnership.
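A toy version of the forward approach, in the spirit of Elo: the partnership’s implied strength is taken as the plain average of the two individual ratings (the simplest possible choice) and both partners’ ratings move by the same amount. All names and constants here are illustrative, not any actual system.

```python
def expected_score(r_us, r_them, scale=400):
    """Elo-style expected score for a partnership rated r_us vs r_them."""
    return 1 / (1 + 10 ** ((r_them - r_us) / scale))

def update(ratings, us, them, score, k=16):
    """Update individual ratings after a pair-vs-pair result.
    score is 1 for a win, 0.5 for a tie, 0 for a loss."""
    r_us = (ratings[us[0]] + ratings[us[1]]) / 2
    r_them = (ratings[them[0]] + ratings[them[1]]) / 2
    delta = k * (score - expected_score(r_us, r_them))
    for p in us:
        ratings[p] += delta
    for p in them:
        ratings[p] -= delta

ratings = {"Ann": 1500, "Bob": 1500, "Cy": 1600, "Dot": 1600}
update(ratings, ("Ann", "Bob"), ("Cy", "Dot"), score=1.0)
print(ratings)  # the upset moves Ann and Bob up, Cy and Dot down
```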

The backward approach is to infer individual ratings from partnership ratings. This amounts to solving a very large but sparse system of over-constrained equations, which is typically done using a least squares fit. If the implied average partnership rating is assumed to be a linear function of the individual ratings, we enter well-trodden mathematical territory. How well would this work in practice, and is it computationally feasible for all ~160,000 ACBL members? I don’t know yet. But I am considering running some simulations in Matlab with different assumptions about the distribution of the number of regular partnerships per player and the general network topology, i.e. the extent of non-local interactions. Non-local connections, which are helpfully promoted by sectionals, regionals, and nationals, are important for establishing globally meaningful ratings. If the calculations look plausible on, say, a $5000 dual quad-core box with 48 GB of memory, which the ACBL could readily afford, it would be worth trying to get the ACBL to perform the calculation using real data.
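In code, the backward approach reduces to a sparse least-squares problem. A minimal sketch with made-up pairs and partnership ratings, assuming the partnership rating is the plain average of the two individual ratings:

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

# Each row asserts that the average of two individual ratings should
# match an observed partnership rating. Pairs and ratings are test data.
pairs = [(0, 1), (0, 2), (1, 2), (2, 3), (1, 3)]
pair_ratings = np.array([1500.0, 1600.0, 1550.0, 1700.0, 1650.0])

rows = np.repeat(np.arange(len(pairs)), 2)
cols = np.array(pairs).ravel()
vals = np.full(2 * len(pairs), 0.5)  # average = 0.5 * (r_i + r_j)
A = coo_matrix((vals, (rows, cols)), shape=(len(pairs), 4)).tocsr()

individual = lsqr(A, pair_ratings)[0]  # least-squares individual ratings
print(individual.round(0))
```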

Should a rating system replace the masterpoint award system? Probably. But I wouldn’t hurry to do it as Ms. Goldberg suggests. It’s not computationally burdensome to run the old system in parallel with a new system for a few years. We can afford the luxury of easing into a rating system as the quirks are ironed out.

Rex Latus Speaks Out Against Excessive K.O. Team Awards

In the June 2010 issue of the Contract Bridge Forum, Rex Latus, the District 22 President, reviewed masterpoint history, masterpoint inflation, and the recent curtailment of the triple point charity game madness. He noted in part the following:

As a result of these [masterpoint award] changes new players are rocketing up the rank ladder before they have time to really learn the game. This forces them into brackets and strats they are unprepared for, and some have lost interest and quit the game.

Inflation has devalued the masterpoint. We now see the ripple effect of good intentions gone astray. Effective January 1, 2010, changes were made to Life Master Requirements. The masterpoint requirements have risen from 300 to 500 with corresponding increases in black, silver, red, and gold points. Is this progress? Is inflation a good thing? Are today’s new players better off than yesterday’s?

Then he went on to address the excessive awards for K.O. Teams:

This is one [issue] down and one to go. Let’s hope the ACBL next addresses the masterpoint award disparity between KOs and pairs events. It’s time to level the playing field. After all, is it only about masterpoints … or is it “For the love of the game?”

I’m happy to see Rex taking on the other big masterpoint distortion. Although the definition of a level playing field may seem subjective, Robert Frick has done some serious thinking on the matter and stated the principle that, “the total number of masterpoints awarded should not depend on whether they [the players] organize themselves into teams or pairs. They are not suddenly better bridge players, or more deserving of masterpoints, just because they decide to play teams rather than pairs.” This seems obvious to me. He notes in part:

Unfortunately, the ACBL's formula for computing overall awards is based on tables rather than competitors. It’s just a bug in the formula (emphasis added). But as a result, overall awards of a team event are approximately twice that of a pairs event.

For example, suppose 80 people show up for a Regional event. If they organize themselves into pairs, they are awarded a total of 69.06 masterpoints in overalls, which is .86 masterpoints per person. If they organize themselves into teams, they are awarded 128.12 masterpoints in overalls, which is an average of 1.73 masterpoints per person.

Since Mr. Frick’s website appears to be a few years old, his calculations may be slightly out of date but qualitatively the problem is still very much the same.

Deep down I think the ACBL knows what it needs to do. And there are clearly ACBL members who grasp the principles and mathematics behind a logical award system. It is just a question of whether the ACBL finds the willpower to tackle the problem head on.

Why did communism collapse? Was it the economic punishment exacted by the arms race with the U.S. (the Jeane Kirkpatrick school of thought), the U.S.S.R.’s hard currency crisis when Saudi Arabia flooded the world with cheap oil in the 1980s (see Gaidar, 2007), or the inability to compete technologically while restricting information flow? All of these were probably contributing factors. But in part, many people seemed to have simply stopped believing in the system. Even the leadership. Masterpoints should not be allowed to suffer the same fate unless they are to be replaced with something better.

Triple Point Charity Games Curtailed

The ACBL finally took action on the triple point charity game distortion at the Reno NABC (March 2010). Hooray. The following is excerpted from Ken Monzingo’s remarks, published in the May issue of the Contract Bridge Forum and also on his website.

New Terms for Charity & Special Games

In 2004 the floodgates were opened to allow near unlimited special and charity games in clubs. Although that paste is still out of the tube, to restore some sanity we had another one of those “nobody’s satisfied” compromise issues. The heated debate went from no change at all, all the way down to doing away with special games completely. Of course neither of those extremes is viable, but there was a lot of soul searching about the landslide masterpoints we are now awarding in clubs.

The compromise was that the month of February is reserved for Junior Fund Games, the month of April is reserved for charity games, and the month of September is reserved for International Fund games. In those months any and all ACBL sanctioned club sessions may be special games for the named funds. In the remaining nine months of the year one game per month per sanctioned session may be a special game for those purposes. Masterpoints for all these special games will now be awarded at 70% of sectional rating with a cap of 6.00 MPs.

Also, any club, in any calendar year, that runs one (or more) allowed special local charity game that is sanctioned for extra masterpoints must make available for public inspection an accounting of all funds raised in such games no later than Feb. 28 of the following year.

Motion passed 23-2, effective July 1, 2010.

Ken’s comment that, “neither of those extremes is viable” is curious. What exactly would happen if the ACBL went cold turkey? Would people really stop playing? Yes, the Junior Fund and International Fund, which support bridge activities widely seen as desirable, would have to be funded in some other manner, either by a direct increase in dues or more stealthily by raising sanction fees. But that seems more desirable than a major distortion in the award system.

At times, discussions of masterpoint inflation recall those stories of one’s grandparents about having to walk uphill each way in the snow to and from school. Moreover, in principle the inflation could be calculated quite accurately year to year and used to calculate “real masterpoints” just as economists calculate real dollars indexed to a given base year. I even suggested this once to Richard Oshlag, who is responsible for the databases and backend web services at the ACBL. He thought it was a fine idea though spending time on it is not his call to make.

But triple point charity games introduce a fundamental inequity not across time, but rather amongst clubs. For example, how much meaning does the Ace of Clubs ranking have if it turns out that most of the winners picked up most of their points in charity games? Maybe those masterpoints should be recalculated as ordinary club awards for the purpose of this ranking. But the more one attempts to correct for an inequity piecemeal, the more one wants to tear down the entire edifice.

Triple point charity games also kick off an arms race between clubs. If one club runs frequent charity games, other clubs come under pressure to do the same despite their principles, lest their attendance fall. Yet in the end, when all clubs have caved in, no one is better off than before because the value of masterpoints is relative. Meanwhile tournaments lose some of their luster because the charity game awards are 70% of a sectional award. Actually, someone is better off than before, namely the wealthier regions whose players can readily afford the extra $1 per game. It’s easy to scoff at this if you have lived in wealthy San Diego for a long time, but I have played bridge in other parts of the country, and I can assure you that each dollar increase in card fees caused serious clamoring from people on fixed incomes.

I have no trouble with the ACBL engaging in charitable activities but those activities should not be entangled with the award system.

2010 San Diego Regional Statistics

For the second year in a row I have worked with Ken Monzingo, Betty Bratcher, and Sergio Mendivil to post all the results from the San Diego regional online (2010, 2009) with integrated hand records using ACBLmerge. This is a significant but manageable task though it always turns out to be a bit more work than “anticipated” because I end up making improvements to the program.

One new thing this year is the computation of field strength. Since the masterpoint system is really an award system rather than a rating system, one might question how meaningful such statistics are. However, despite the entanglements of longevity, frequency of play, and grade inflation (especially triple point charity games), masterpoint holdings must have some correlation with skill level, especially when averaged over all players in an entire event. And in a Rumsfeldian sort of way it is the system we have if not the one we want to have.

Still, should one use an arithmetic mean (the ordinary average), a geometric mean, or something else entirely? There is no perfect answer, but the first two have the advantage of being easy to compute, so I presented both of them. Intuitively, I feel the geometric mean is more meaningful than the arithmetic mean. In a geometric mean, the mean is taken in a logarithmic manner. For example, the geometric mean of two players with 10 and 1000 MP respectively is 100 MP, not the 505 MP given by the arithmetic mean. Less experienced players drag down the geometric mean more than the arithmetic mean. A field with a high geometric mean should in principle be uniformly fairly tough and offer few gifts.

The fact that the histogram of masterpoint holdings on a logarithmic scale for tournament attendees (see below) is approximately Gaussian (bell curve shaped) lends support to considering the geometric mean. Notice Jeff Meckstroth and Eric Rodwell filling in the bin centered at ~56,000 MP. Arguably, to get a feeling for the entire population, the histogram should be calculated for player-sessions rather than players. This is done in the second plot, where the distribution shifts to the right because on average the stronger players play more sessions. This plot is also approximately Gaussian. For comparison, consider the logarithmic distributions for all ACBL players ever and for all ACBL players who have played during the last two months, the latter constituting 66% of the current membership of ~164,000. Both distributions are quite far from Gaussian.

Note: as a practical matter, all the masterpoint distributions below exclude players with fewer than 1 MP.

[Plots: histograms of masterpoints for 2010 San Diego regional players and player-sessions, for ACBL players who played during the last two months, and for all ACBL players, plus a histogram of the number of sessions played]

February 2010 Bridge Bulletin Letters

As if on cue, the February 2010 Bridge Bulletin arrived two days after writing the previous entry and has five letters to the editor touching on masterpoints and the meaning of the life master rank.

Lou Stern suggests that passing a test or series of tests should be a requirement for the gold card. Henry Francis reminds us what a herculean task making Life Master once was. Frank Walsh notes that ~50 years ago he received a club rating point slip for 0.06 MP for coming in third; this year the same result was worth 1.50 MP in a club charity game. He suggests a ~7500 MP requirement for Life Master would roll back the clock, accounting for masterpoint inflation. Dennis Cohen also cites masterpoint inflation as a major shock when returning to bridge after a 17-year hiatus.

Masterpoint Reform


As a lead-up to the 2010 January sectional, I wrote a couple of articles about the ACBL reward system (Masterpoints or Glory, Against 499er, 799er, 999er Events). Since then I found Robert Frick’s masterpoint reform website that covers the matter in detail. Mathematically inclined visitors may wish to head directly to the page titled A Formula for Masterpoints. If you care about this issue, bear in mind that we have a new president, Rich DeMartino, at the ACBL. Now is a good time to speak up.