Discussion:
Bridge deals stat format
Douglas
2019-07-26 20:28:07 UTC
Permalink
Some weeks ago I looked at the playBridge.com web site. Specifically, the
"Shuffle Project" tab. Since 07-06-2011 that project has accumulated bridge
hand shapes. They are now approaching 50 billion total hands dealt.

As an intellectual exercise, I turned their then-current counts dealt for
each of the 38 hand shapes with a positive total (out of the 39 possible
hand shapes) into a spreadsheet listing of expected value (EV), probability,
and standard deviation from EV for each of those 38.

When completed, it struck me as a lot of information to take in at once.

It happens that the first three most common hand shapes in theory add to
almost precisely 50% of the total 100% probability for all 39 possible
hand shapes.

I took the 3rd most common hand shape, 5-4-3-1 first. Opposite it, I placed
the 4th most common hand shape, 5-4-2-2, and added 7-3-2-1 + 7-2-2-2, to
create two nearly equal probability categories.

Then I took the 2nd most common hand shape, 5-3-3-2. Opposite it, I placed
the 5th most common hand shape, 4-3-3-3, and added 4-4-4-1 + 5-4-4-0 +
6-5-1-1, to create two more nearly equal probability categories.

Finally I placed all the remaining 29 hand shapes together opposite the most
common hand shape, 4-4-3-2.

I envision this as a mirror image hand shape probability accumulator format.
It only needs 10 hands dealt entries to populate the 6 stat output categories.
It also allows for the analysis of hand records of fewer than 10 deals; that
is a ridiculously small number, which means this format will work with every
single current hand record I have ever seen. No doubt someone will inform
me of an exception.
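For anyone who wants to try the same layout, here is a minimal sketch of the
category bookkeeping in Python; the shape groupings are the ones described
above, and the percentages are the standard a-priori hand-shape probabilities
(rounded), not figures taken from playBridge.

# A sketch of the mirror-image accumulator described above. Category
# probabilities are standard a-priori hand-shape odds in percent (rounded);
# the pairings follow the description above, not any playBridge output.

A_PRIORI = {
    "4-4-3-2": 21.55, "5-3-3-2": 15.52, "5-4-3-1": 12.93,
    "5-4-2-2": 10.58, "4-3-3-3": 10.54, "4-4-4-1": 2.99,
    "5-4-4-0": 1.24,  "6-5-1-1": 0.71,  "7-3-2-1": 1.88,
    "7-2-2-2": 0.51,
}

CATEGORIES = {
    "5-4-3-1":           ["5-4-3-1"],
    "mirror of 5-4-3-1": ["5-4-2-2", "7-3-2-1", "7-2-2-2"],
    "5-3-3-2":           ["5-3-3-2"],
    "mirror of 5-3-3-2": ["4-3-3-3", "4-4-4-1", "5-4-4-0", "6-5-1-1"],
    "4-4-3-2":           ["4-4-3-2"],
    # The 29 remaining shapes mirror 4-4-3-2; their combined count is simply
    # the total number of hands minus everything tallied above.
}

for name, shapes in CATEGORIES.items():
    expected_pct = sum(A_PRIORI[s] for s in shapes)
    print(f"{name:18s} expected {expected_pct:6.2f}%")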

I now have less information to take in at once, and for me, this condensed
format is noticeably more informative.

Douglas
Douglas
2019-07-29 16:47:52 UTC
Permalink
When looking at the playBridge.com "Shuffle Project" hand-shape listing of
nearly 50 billion hands, one could be forgiven for thinking the expected hands
are very similar to those actually occurring. Many of the percentages match
across columns. It is this fact which led me to dig deeper into this report.

A count near 50 billion has 11 digits. The percentages reported carry only 6
digits, so the 5 missing low-order digits can hide a great deal of absolute
difference. For example, a percentage quoted to the nearest 0.0001% of 50
billion hands corresponds to steps of 50,000 hands.

I used the state of the report on July 6, 2019. It updates frequently, so
today's numbers will differ slightly.

The most important single hand shape is the most common in theory: 4-4-3-2.
In my report there is a difference of almost 141,000 between its expected
value and the number actually dealt. That works out to roughly 1.54 standard
deviations below E.V.
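That 1.54 can be checked with a simple binomial model. In the sketch below,
the 50 billion total is approximate, the 141,000 shortfall is the rounded
difference just mentioned, and 21.55% is the standard a-priori probability of
a 4-4-3-2 hand.

import math

# Binomial sanity check of the z-score quoted above.
N = 50_000_000_000   # approximate number of hands in the Shuffle Project
p = 0.2155           # a-priori probability of a 4-4-3-2 hand
shortfall = 141_000  # hands below expectation, rounded figure from above

sd = math.sqrt(N * p * (1 - p))  # standard deviation of the 4-4-3-2 count
print(f"sd ~ {sd:,.0f} hands, z ~ {-shortfall / sd:.2f}")  # about -1.5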

I totaled all 38 proportionate occurrence probabilities. The total is 0.499.

These are two significant pieces of evidence that the random number source
used to create these nearly 50 billion bridge hands is pseudo-random.

Douglas
nrford100
2019-08-14 10:36:50 UTC
Permalink
Post by Douglas
...
These are two significant pieces of evidence that the random number source used to
create these nearly 50 billion bridge hands is pseudo-random.
I'm not sure that I understand the point of your post. If it is that computer "random" number generators are only pseudo-random, then that is a well known fact.
Charles Brenner
2019-08-16 06:23:00 UTC
Permalink
Post by Douglas
When looking at the playBridge.com "Shuffle Project" hand shape listing of
nearly 50 billion hands, one could be forgiven thinking the expected hands
seem very similar to those which are occurring. Many of the percentages match
across columns. It is this fact which led me to dig deeper into this report.
50 billion has 11 significant digits. The percentages reported only have 6
digits. 5 missing smallest digits of 50 billion can cover a great deal of
absolute difference.
I used the state of the report on July 6, 2019. It updates frequently, so
today's numbers will differ slightly.
The most important single hand shape is the most common in theory: 4-4-3-2.
In my report there is a difference of almost 141,000 between its expected
value and the number actually dealt. That works out to roughly 1.54 standard
deviations below E.V.
The standard deviation is the number to look at. It is a smallish number, no evidence here of bias.

Your story is a little hard to follow precisely, but assuming your clumping of the 38 categories down to some smaller number such as 6 was settled on before you examined any measures of bias, you still examined several different datasets. I understand from that that you had 12 or 18 clumps. That's enough that the expectation is to see several clumps a standard deviation or more from expectation, so there's nothing remarkable in the data you report.
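As a rough illustration, treating the clump z-scores as independent standard
normals (only an approximation, since the shape categories are not
independent):

import math

def two_sided_p(z):
    # P(|Z| >= z) for a standard normal Z
    return math.erfc(z / math.sqrt(2))

p = two_sided_p(1.54)              # about 0.124
for k in (6, 12, 18):              # plausible numbers of clumps examined
    print(f"{k:2d} clumps: expect {k * p:.1f} with |z| >= 1.54, "
          f"P(at least one) = {1 - (1 - p) ** k:.2f}")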
Post by Douglas
I totaled all 38 proportionate occurrence probabilities. The total is 0.499.
Now you've lost me. As I read your description of what you did the total "probability" (I assume that you mean observed sample frequency) must as a matter of arithmetic be exactly 1.
Douglas
2019-08-11 07:13:28 UTC
Permalink
Apparently this subject is too abstract so far.

My six categories are reducible to two. The two most common hand shapes,
4-4-3-2 and 5-3-3-2, form one; they total about 37% of expected probability.
The other category takes in 33 of the less common hand shapes; they also
total about 37% of expected probability (mirror-image probability categories).

Results from almost 50 billion playBridge deals:

Most common: Minus 1.50 standard deviation from expected probability.
Least common: Plus 1.01 standard deviation from expected probability.

Results from 11,040 Dutch bridge club hand-dealt deals:

Most common: Plus 1.52 standard deviation from expected probability.
Least common: Minus 2.63 standard deviation from expected probability.
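For anyone wishing to reproduce this two-category check on their own hand
records, here is a sketch; the counts in it are made-up placeholders, not the
Dutch club's or playBridge's actual figures.

import math

def category_z(observed, total_hands, p):
    # z-score of an observed category count against a binomial expectation
    expected = total_hands * p
    sd = math.sqrt(total_hands * p * (1 - p))
    return (observed - expected) / sd

# Hypothetical example: 144 hands (36 deals) with made-up counts.
total = 144
p_top_two = 0.3707   # a-priori chance of 4-4-3-2 or 5-3-3-2
count_top_two = 60   # placeholder count, not real data
print(f"most common two shapes: z = {category_z(count_top_two, total, p_top_two):+.2f}")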

I have accumulated four contiguous groupings of hand-dealt bridge deals
over the years. Each conforms to the pattern noted above: a substantial
excess of 4-4-3-2 and 5-3-3-2 hand shapes paired with a substantial deficit
in the least common hand shapes.

I have now accumulated results from six different computer bridge-dealing
programs in varying quantities, usually 36, 360, or 3,600 deals at a time.
The larger the sample size, the more closely their results conform to the
nearly 50 billion playBridge deals above.

How can this be true? Does this not in some way violate the generally accepted
principles of classical probability theory?

This is enough for now.

Douglas
Steve Willner
2019-08-19 21:09:32 UTC
Permalink
Post by Douglas
Apparently this subject is too abstract so far.
I think the problem is your writing. I didn't have a clue what you
meant until this message, and I'm not sure I understand now.
Post by Douglas
My six categories are reducible to two: The two most common hand shapes; 4-4-3-2
and 5-3-3-2 are one. They total 37% of expected probability. The other
category is almost all 33 of the least common hand shapes. They total about 37%
of expected probability (mirror image probability categories).
Must be a typo there. If there are only two categories, they must total
to 100%.
Post by Douglas
Most common: Minus 1.50 standard deviation from expected probability.
Least common: Plus 1.01 standard deviation from expected probability.
Looks consistent with random expectation to me and more importantly to
Charles. If you want us to look closer, provide a table of number of
hands of each shape. (You can consolidate the rare ones under "other.")
I'd use chi-square to determine expectation, but it won't surprise me
if Charles has a better idea. Also say what hands you are counting.
All North hands? All dealer hands? All hands? Something else?
Post by Douglas
Most common: Plus 1.52 standard deviation from expected probability.
Least common: Minus 2.63 standard deviation from expected probability.
Many writers have claimed that hand-dealing produces flatter
distributions than random, and this seems consistent. My own
statistics, collected in the ACBL, agree with random expectations.
Again, can you provide a table of distributions and frequencies? Also,
as asked above, what exactly is being counted? As Charles has
repeatedly pointed out, it makes a difference.

What do we know about Dutch practices? Do players typically reshuffle
before returning cards to the board? Sort cards? Just pick them up in
play order? Depending on what they do, there may be reasons for
non-randomness.
Post by Douglas
I have accumulated four contiguous groupings of hand dealt bridge deals
over the years. They each comport to the pattern of substantial excess 4-4-3-2
and 5-3-3-2 hand shapes paired with substantially diminished least common hand
shapes noted above.
What exactly are the data, and how were they collected?

In the ACBL, it has been common for decades for players to shuffle hands
before returning them to the board. My hypothesis is that this is
sufficient to randomize suit lengths in subsequent dealing. If players
elsewhere don't do that, deviations from randomness might result.
Douglas
2019-08-14 16:09:51 UTC
Permalink
Post by nrford100
Post by Douglas
These are two significant pieces of evidence that the random number source used to
create these nearly 50 billion bridge hands is pseudo-random.
I'm not sure that I understand the point of your post. If it is that
computer "random" number generators are only pseudo-random, then that is
a well known fact.
It would be reasonable to take as fact that a computer caused the information
on almost 50 billion hands to appear on the internet.

Because it is almost 50 billion hands, it is reasonable to infer that a
high-speed source of random numbers was used to create the hands on a
computer somewhere. It is extremely improbable that so many hands could be
created by hand, and recorded, in so few years.

The question remains, was that high speed source pseudo-random, or "true?"

By true, I mean one that would somewhat accurately simulate "natural"
hand-dealt deals.

I think this is where the math people who frequent this group get lost. They
do not seem to grasp the meaning of sampling with replacement, and without
replacement, as applied to the almost 50 billion sampling results reported.

If this were the report of almost 50 billion "true" hands, it would be
significantly different. The most obvious difference in plain sight would be
that several of the 38 categories currently showing results would instead be
empty. That would be the extremely probable practical effect, in this case,
of sampling with replacement using "true" random numbers.

Douglas
KWSchneider
2019-08-15 18:12:51 UTC
Permalink
I’ve dealt more than 50 billion hands myself over the past 20 years. If pushed, I think I can deal 25 million an hour and 50 billion in 3 months.

Pseudo random of course...

Again, what is the point of your post? The vast majority of hands found online are dealt by computer.
Barry Margolin
2019-08-15 19:37:00 UTC
Permalink
Post by Douglas
By true, I mean one that would somewhat accurately simulate "natural"
hand-dealt deals.
Why would you want to do that? Hand dealing is probably not as random as
it should be. Even though pseudo-random isn't truly random, it's almost
certainly better than hand dealing.
--
Barry Margolin
Arlington, MA
Douglas
2019-08-15 21:53:29 UTC
Permalink
Post by KWSchneider
I’ve dealt more than 50 billion hands myself over the past 20 years.
If pushed, I think I can deal 25 million an hour and 50 billion in
3 months.
Yes. But did you keep any kind of record of them? And if so, is it a
useful record?
Post by KWSchneider
Pseudo random of course...
Yes. But it does not have to be so. Because there is an alternative
to that.
Post by KWSchneider
Again, what is point of your post? The vast majority of hands found
online are dealt by computer.
A point: I am suggesting to you a useful way to analyze statistically even
the smallest hand record.

A point: We now have a useful point of accessible reference to compare
against. For instance, if this is the outcome for pseudo-random deals
in general, one could adjust applicable bridge odds from the current
hypothetical.

A point: I am pointing out the difference between what the presented
stats seem to support, i.e. that there is little difference between
theoretical expectation and what actually happened, and the actual
significant differences.

I suspect there are other points.

Douglas
Douglas
2019-08-15 23:48:32 UTC
Permalink
Post by Barry Margolin
Why would you want to do that? Hand dealing is probably not as random as
it should be. Even though pseudo-random isn't truly random, it's almost
certainly better than hand dealing.
I think there is a simple way to test your assertion.

Think back to the time when you last were playing hand dealt deals. Do
you ever remember once, even, when you successfully predicted the complete
details of a deal before you picked up the cards to play? Even most? Any?

Douglas
Douglas
2019-08-16 08:08:39 UTC
Permalink
Post by Charles Brenner
The standard deviation is the number to look at. It is a smallish number,
no evidence here of bias.
This is no standard deviation for an academic sampling of 30, or a medical
journal sampling of 100, or hundreds, or even thousands. In those samplings,
the level of uncertainty traditionally requires the usual two-sided 1.96
level of confidence as significant confirmation.

This is a large enough sampling that the 1.54 amount is so well established
as an average as to be a very probable predictor of what it will be if the
sampling continues on to 100 billion, or on to 1 trillion, and so on.

In short, at this point it is a near certainty. And, as such, it is very much
a well-established, significant bias.

I do not understand your "clumping," or your "different datasets" ideas at all.

I do have the final result of my analysis down to three numbers. One is of
historical interest to bridge players, and keeps me from confounding an
important variable in my analysis.

I know, I know. You do not understand that final paragraph.

Douglas
Charles Brenner
2019-08-16 18:19:04 UTC
Permalink
Post by Douglas
Post by Charles Brenner
The standard deviation is the number to look at. It is a smallish number,
no evidence here of bias.
This is no standard deviation for an academic sampling of 30, or a medical
journal sampling of 100, or hundreds, or even thousands. In those samplings the
level of uncertainty traditionally requires the usual two-sided 1.96 level
of confidence as significant confirmation.
Sometimes, sort of.

Where did you learn your statistics?

I'm quite serious in asking because I now realize that it was you with whom I recently had a discussion about singleton probabilities. Eventually it became clear that you learned the wrong formula for the probability of a disjunction ("or"), i.e. if Pr(A) and Pr(B) are the probabilities of events A and B, someone taught you that
Pr(A or B) = Pr(A) + Pr(B)
(which applies only when A and B are mutually exclusive; in general Pr(A or B) = Pr(A) + Pr(B) - Pr(A and B), as you can easily learn from the Internet).

You have been quite emphatic about your confidence in your own statistical expertise. Again please, where does it come from?

Charles
Douglas
2019-08-16 21:40:21 UTC
Permalink
Post by Charles Brenner
Post by Charles Brenner
The standard deviation is the number to look at. It is a smallish number,
no evidence here of bias.
Where did you learn your statistics?
The usual basic courses in College and grad school. Plus lots of reading and
experimenting since 1983-84.

So, now I get to ask you back. Have you ever thoroughly evaluated a nearly
50 billion stat sample before? Are you cognizant of basic sample size effects?

I happen to have several years' experience preparing numerous U.S. federal
court cases with enormous sums of money at stake, and I was once qualified,
and testified, as an expert statistics witness in a U.S. federal court.
Post by Charles Brenner
Sometimes, sort of.
This is simply too obscure for me to understand.

I wonder if you have any idea about how wonderfully deflective you appear
to me with your, so far, occasional writings to me like this.
Post by Charles Brenner
Post by Charles Brenner
no evidence here of bias.
I love this short sentence. I wonder if you realize the riff I could go
off on turning your rather obvious attitude back at you.

I will instead focus on the truth level expressed in it.

It may well be not enough evidence to satisfy you, but on its face it
is at least "some" evidence supported by reproducible facts. There is some
substantial distance between "no" and "some" in the usual discourse where I exist.

If you want to seriously undercut me, how about you display that wondrous math
expertise of yours, and illustrate how my estimate of 1.54 will probably NOT be
reliable when this particular sampling reaches 100 billion bridge hands. You know, how all the categories will keep changing achieved probabilities in the same up and down fashion as if it were still a mere 1,000, or 5,000 sampling.

At that point, you and I could possibly begin to be useful to each other.

Douglas
Charles Brenner
2019-08-17 15:35:44 UTC
Permalink
Post by Douglas
Post by Charles Brenner
Post by Charles Brenner
The standard deviation is the number to look at. It is a smallish number,
no evidence here of bias.
Where did you learn your statistics?
The usual basic courses in College and grad school. Plus lots of reading and
experimenting since 1983-84.
Thanks for that. I didn't ask my question very well but can infer from your answer that you assimilated statistics as it arose tangentially in your science or engineering courses, not from any specifically statistics course.

Nothing wrong with that - I never took a statistics course myself but that was apparently of no concern to the UCLA math department as they assigned me to teach the basic stats course while I was doing my PhD. As a result I of course acquired a formal grounding in the tedious theory of tailed probabilities, p-values, and significance testing.
Post by Douglas
So, now I get to ask you back. Have you ever thoroughly evaluated a nearly
50 billion stat sample before? Are you cognizant of basic sample size effects?
I don't scoff at experimentation. I suspect that even greats like Gauss sometimes stumbled on discoveries as a result of looking at examples and noticing patterns.

But observing patterns is only a start. Without proof, without a theorem stating what exactly is true, observations can be muddled, vague, or mistaken.

So when you ask about my data analysis - yes I have professional experience including at least one influential publication (on the mathematical evaluation of Y chromosomal DNA evidence) that required statistical analysis of a lot of data.

But playing with data can only be a starting point toward statistical expertise, and my argument with some of your claims and apparent claims is that your math is wrong.
Post by Douglas
Post by Charles Brenner
Sometimes, sort of.
This is simply too obscure for me to understand.
Especially since you forgot to include what it was in reply to, namely your
Post by Douglas
In those sampling the
level of uncertainty traditionally requires the usual two-sided 1.96 level
of confidence as significant confirmation.
I meant "confirmation" is to strong a word. Your "1.96 level of confidence" is wrongly worded but I can understand that by 1.96 refer to a z-score which *corresponds* to 95% confidence. I would usually interpret a 95% confidence result as worthy of further investigation but not of confirmation. That's why I wrote "sort of."
Post by Douglas
Post by Charles Brenner
Post by Charles Brenner
no evidence here of bias.
It may well be not enough evidence to satisfy you, but on its face it
is at least "some" evidence supported by reproducible facts.
You are correct so far. I blush to admit that my "no" was written not like a mathematician's but rather borrows from the statistical phrase "no significance".
Post by Douglas
There is some substantial distance between "no" and "some"
Sometimes but not in this case. Finally we are getting down to brass tacks.
Post by Douglas
If you want to seriously undercut me, how about you display that wondrous math
expertise of yours, and illustrate how my estimate of 1.54 will probably NOT be
reliable when this particular sampling reaches 100 billion bridge hands. You know, how all the categories will keep changing achieved probabilities in the same up and down fashion as if it were still a mere 1,000, or 5,000 sampling.
At that point, you and I could possibly begin to be useful to each other.
Douglas
You seem to be saying that although a z-score of 1.54 calculated from a small sample may be not very significant (true), you believe that if we get the same z-score from a very large sample it will be significant. That would be 100% false. The merit of a z-score in this situation is precisely that it is a statistic whose interpretation is independent of sample size.

For example: Suppose we want to test a null hypothesis that real dealing is statistically identical to mathematically random dealing with respect to proportion of balanced hands dealt. We examine some number of deals and find that the discrepancy (excess or deficiency of observed versus expected) of balanced hands corresponds to a z-score of 1.54. We look in a table and find that a z-score that large or larger is a 12% event. Regardless of sample size, that means that what we observed is not particularly remarkable because a 12% coincidence isn't amazing.
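A quick simulation makes the same point; the "balanced hand" proportion below
is used purely as an example statistic, and the sample sizes are arbitrary.

import numpy as np

# Monte Carlo check: under the null hypothesis, |z| >= 1.54 is roughly a
# 12% event at any sample size. The statistic tracked is the count of
# balanced hands (4-3-3-3, 4-4-3-2, 5-3-3-2), chosen only as an example.
rng = np.random.default_rng(0)
p = 0.4761  # a-priori chance of a balanced hand
for n in (1_000, 100_000, 10_000_000):
    counts = rng.binomial(n, p, size=20_000)
    z = (counts - n * p) / np.sqrt(n * p * (1 - p))
    frac = np.mean(np.abs(z) >= 1.54)
    print(f"n = {n:>10,}: fraction with |z| >= 1.54 = {frac:.3f}")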

Have I misunderstood what you claim?

Or do we have contradictory views of what these statistics mean and how they work? If this latter I suggest that pointing to well-written Wikipedia articles would be better than me or you writing a mathematical demonstration.

Furthermore, what you seem to be suggesting (several posters have responded that it's not clear what, and they have a point) involves a handful of different data sets, ways to distill the data, and things to look at. If you look at enough different data sets it is inevitable that one of them provides strong evidence against even a true hypothesis. Hence it looks to me that you may in part be falling into the fallacy that a Bonferroni correction helps to avert. Do you recall learning that concept?
EllisMorgan
2019-08-17 06:39:08 UTC
Permalink
Post by Douglas
Some weeks ago I looked at the playBridge.com web site. Specifically, the
"Shuffle Project" tab. Since 07-06-2011 that project has accumulated bridge
hand shapes. They are now approaching 50 billion total hands dealt.
As an intellectual exercise, I turned their then-current counts dealt for
each of the 38 hand shapes with a positive total (out of the 39 possible
hand shapes) into a spreadsheet listing of expected value (EV), probability,
and standard deviation from EV for each of those 38.
When completed, it struck me as a lot of information to take in at once.
It happens that the first three most common hand shapes in theory add to
almost precisely 50% of the total 100% probability for all 39 possible
hand shapes.
I took the 3rd most common hand shape, 5-4-3-1 first. Opposite it, I placed
the 4th most common hand shape, 5-4-2-2, and added 7-3-2-1 + 7-2-2-2, to
create two nearly equal probability categories.
Then I took the 2nd most common hand shape, 5-3-3-2. Opposite it, I placed
the 5th most common hand shape, 4-3-3-3, and added 4-4-4-1 + 5-4-4-0 +
6-5-1-1, to create two more nearly equal probability categories.
Finally I placed all the remaining 29 hand shapes together opposite the most
common hand shape, 4-4-3-2.
I envision this as a mirror image hand shape probability accumulator format.
It only needs 10 hands dealt entries to populate the 6 stat output categories.
It also allows for the analysis of hand records of fewer than 10 deals; that
is a ridiculously small number, which means this format will work with every
single current hand record I have ever seen. No doubt someone will inform
me of an exception.
I now have less information to take in at once, and for me, this condensed
format is noticeably more informative.
Douglas
You could ask in (say) sci.stat.math if you want the opinions of an
expert in statistics rather than a fellow bridge player.
Douglas
2019-08-17 18:15:18 UTC
Permalink
Post by Charles Brenner
You seem to be saying that although a z-score of 1.54 calculated from
a small sample may be not very significant (true), you believe that if
we get the same z-score from a very large sample it will be significant.
I apologize if I am being too picky: I would change "will" in the last
sentence above to "can." And I think it is significant in this specific instance.

Because I posit it has been the same minus 1.54 for some time now, possibly
plus or minus 0.01 (because of internal rounding). The evidence is the amount,
plus the persistence of the amount. Think of it as semi-permanent bias.

If the random number source used to create these deals is indeed pseudo-
random, somewhere past the middle of its "period," the 1.54 will begin a
slide toward zero.
Post by Charles Brenner
That would be 100% false. The merit of a z-score in this situation is
precisely that it is a statistic whose interpretation is independent of
sample size.
I express the same thought, I think, as a z-score is standardized.

I have thought of a way to begin testing my posit. The first step I am
scheduling for a week from tomorrow. Who knows, I may well end with egg
on my face quickly.

BTW, I was an auditor in my former working life. I am apparently unable to
escape that point of view.

Douglas
Charles Brenner
2019-08-17 20:39:38 UTC
Permalink
Post by Douglas
Post by Charles Brenner
You seem to be saying that although a z-score of 1.54 calculated from
a small sample may be not very significant (true), you believe that if
we get the same z-score from a very large sample it will be significant.
I apologize if I am being too picky: I would change "will" in the last
sentence above to "can." And I think it is in this specific instance.
Because I posit it has been the same minus 1.54 for some time now, possibly
plus or minus 0.01 (because of internal rounding). The evidence is the amount,
plus the persistence of the amount. Think of it as semi-permanent bias.
If the null hypothesis is true then z must converge to 0. If not, if the dealing is biased by however small an amount, z converges to infinity. I see no way that z can stabilize at 1.54; you may be falling for an illusion.

You seem to be confusing the *confidence* that there is bias with the *amount* of bias. The z-score is a measure of the former, not of the latter. The amount of bias is simply the limit of (expected-observed)/expected.
Post by Douglas
If the random number source used to create these deals is indeed pseudo-
random, somewhere past the middle of its "period," the 1.54 will begin a
slide toward zero.
It might or it might not, depending on whether the subset of deals sampled by the full period of the pseudo-random number sequence mimics the null hypothesis.

Side note: If you are thinking about periodicity of the random number sequence, then you are thinking about a situation that violates the axiom that trials are independent events which is a foundation of the whole theory you are trying to apply.

Still, we can imagine what would happen to z if the trials are a periodic sequence and the answer is the same I wrote above: If observed=expected, exactly, over a period, then z will converge to 0. If not, if the discrepancy over a full period of say 10^10 deals is even 1 balanced hand more than expected, then after 10^100 periods -- i.e. 10^110 trials -- the discrepancies will have accumulated to 10^100 which is so much bigger than the standard deviation of the number of trials (on the order of 10^55, the square root of 10^110) that z will be enormous - not 1.54. Hence even if the bias is minuscule, eventually statistics would inform us to a virtual certainty that the bias is non-zero.
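The same arithmetic in a few lines, using the illustrative figures above (one
extra balanced hand per period is an assumption for the sake of argument, not
a measured bias):

import math

# The growth argument in a few lines. Illustrative assumptions from above:
# a period of 1e10 deals producing exactly one extra balanced hand per
# period, repeated for 1e100 periods.
period = 1e10
extra_per_period = 1.0
periods = 1e100

n = period * periods                      # 1e110 trials in total
discrepancy = extra_per_period * periods  # 1e100 extra balanced hands
p = 0.4761                                # chance of a balanced hand
sd = math.sqrt(n * p * (1 - p))           # on the order of 5e54
print(f"z ~ {discrepancy / sd:.3g}")      # astronomically large, not 1.54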

In summary, I have explained why z=1.54 does not become strong evidence of bias even if we imagine it persisting forever (i.e. with ever greater sample sizes), and also explained why it cannot persist forever.
Douglas
2019-08-17 23:00:06 UTC
Permalink
Post by Charles Brenner
Side note: If you are thinking about periodicity of the random number
sequence, then you are thinking about a situation that violates the
axiom that trials are independent events which is a foundation of the
whole theory you are trying to apply.
Where is there independence of trials when the hands are dealt
hypergeometrically using hypergeometrically arranged random numbers?

Pseudo-random numbers have the characteristics of a finite period and of
being algorithmically determined. No exceptions.

I suggest a useful distinction for our purposes is to divide pseudo-random
"generators" into two categories: those with a period up to the number of
possible unique bridge deals, and those with a period greater than that.
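For reference, the number of possible unique bridge deals that such a period
would be measured against is straightforward to compute:

from math import factorial

# Number of distinct bridge deals: 52! / (13!)^4
deals = factorial(52) // factorial(13) ** 4
print(deals)             # 53644737765488792839237440000
print(f"{deals:.3e}")    # about 5.364e+28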

I wonder if you can now see the big piece that ties this all together
statistically?

Meanwhile, well done. Sincerely.

Douglas
Douglas
2019-08-20 01:25:34 UTC
Permalink
Post by Steve Willner
Post by Douglas
My six categories are reducible to two: The two most common hand shapes;
4-4-3-2 and 5-3-3-2 are one. They total 37% of expected probability.
The other category is almost all 33 of the least common hand shapes.
They total about 37% of expected probability (mirror image probability
categories).
Must be a typo there. If there are only two categories, they must total
to 100%.
I guess I am not allowed to focus on the most important 74% of the total
100% where you exist. How interesting. Maybe you could point me toward
where that is written. Maybe somewhere in the English Wikipedia? It is
the current paragon of generally accepted knowledge, as I understand it.
Post by Steve Willner
Post by Douglas
Most common: Minus 1.50 standard deviation from expected probability.
Least common: Plus 1.01 standard deviation from expected probability.
Looks consistent with random expectation, to me and more importantly to
Charles.
Not the point. You might want to note the reversal of categories from
the Dutch bridge club stat results. That is the key point.
Post by Steve Willner
I'd use chi-square to determine expectation, but it won't surprise me
if Charles has a better idea. Also say what hands you are counting.
All North hands? All dealer hands? All hands? Something else?
So you use the Chi-square to determine expectation? How terribly unique
of you. I do not know another person in the world who does. Do you find
yourself feeling lonely?

Possibly you actually meant you use it to determine a value that lies
between zero and one, inclusive. And furthermore, you accept that value
as a valid probability amount. Well, good luck with that.

As to the hands, the answer is in the portion of my posting you quote
in your posting. Try playBridge.com and the "Shuffle Project" tab.
Post by Steve Willner
Post by Douglas
Most common: Plus 1.52 standard deviation from expected probability.
Least common: Minus 2.63 standard deviation from expected probability.
Many writers have claimed that hand-dealing produces flatter
distributions than random, and this seems consistent. My own
statistics, collected in the ACBL, agree with random expectations.
I remember your writing about your own "statistics" in this group quite
a few years ago. They were a collection of notes about hands you had
at bridge tables taken at occasional ("random?") times.

You seem to take the view that a value less than 1.96 (plus or minus)
meets random expectations. I wonder what you do when a value greater than
1.96 (plus or minus) shows up, as it does about 5 times out of every
100 systematic measurements. Does that mean everything meets with your
agreement?

I note I have an error in my posting snippet: The Dutch bridge club
results are from 11,040 hands.

Everything I am talking about in this thread is denominated in bridge
hand units.
Post by Steve Willner
What do we know about Dutch practices? Do players typically reshuffle
before returning cards to the board? Sort cards? Just pick them up in
play order? Depending on what they do, there may be reasons for
non-randomness.
I am sure the Dutch bridge players appreciate being questioned about their
shuffling practices.

I am going to go to the heart of this combination of myth and superstition
you are attempting to propagate here.

All you have to do is demonstrate an example of the "non-randomness" you
speak blithely of in your posting. Maybe you could demonstrate a single
bridge deal to others where you pick up the cards from a previous bridge
deal in any order you choose, shuffle the cards as ineffectively as you can,
and deal them out any way you like (face down now!). Now pick up any one hand of
the four, and tell the others watching this demonstration what this hand
consists of before facing it for you and the others to see what it actually
contains. See if you can do even substantially better than chance, much
less significantly better.

How about a mere explanation of what is "non-randomness?" Like, how would
I know it if it bit me in the kneecap?
Post by Steve Willner
Post by Douglas
I have accumulated four contiguous groupings of hand dealt bridge deals
over the years. They each comport to the pattern of substantial excess 4-4-3-2
and 5-3-3-2 hand shapes paired with substantially diminished least common hand
shapes noted above.
What exactly are the data, and how were they collected?
1. In an article from the UK which eventually found its way into the ACBL
Bulletin, and which I posted a number of comments about in this group
some years ago. I remember the names of "Abington and Whitny" associated
with it. I think they are two towns near Oxford, UK.

2. I think it was a bridge club in New York which advertised using only hand-
dealt deals for years. It finally stopped operating. I remember it being called
the Cavendish bridge club. I downloaded their hand records regularly for more than a year, and recorded, and analyzed, their reported stats. I posted some comments about them in this group some years ago you might be able to retrieve.

3.https://www.nbbclubsites.nl/sites/default/files/groepen/5073/bestanden/kaartverdeling%20na%2011040%20handen%20per%2026-2-2018.pdf

4. Again, look to playBridge.com and the "Shuffle Project" tab.

Douglas
Steve Willner
2019-08-21 21:30:42 UTC
Permalink
Neither of us is understanding what the other has written. I'll add a
little information, which I hope will be clear, then quit.
Post by Douglas
So you use the Chi-square to determine expectation?
Of course not. Expectations are from the random dealing probabilities,
which are easy to look up.

Chi-square measures deviation of an observed set of deals from
expectation. In particular, it says _if_ the deals are truly random
(null hypothesis), how likely is it that the observed distributions
would deviate from expectation by as much as they do or more.
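Here is a minimal sketch of such a test, assuming scipy is available; the
observed counts are placeholders for a hypothetical 1,000-hand record, and
the expected proportions are the standard a-priori shape frequencies with
the rare shapes lumped under "other".

import numpy as np
from scipy.stats import chisquare

# Goodness-of-fit test of the kind described above. Observed counts are
# made-up placeholders; expected proportions are a-priori shape frequencies.
shapes   = ["4-4-3-2", "5-3-3-2", "5-4-3-1", "5-4-2-2", "4-3-3-3", "other"]
observed = np.array([220, 160, 130, 105, 100, 285])
probs    = np.array([0.2155, 0.1552, 0.1293, 0.1058, 0.1054, 0.2888])
expected = probs / probs.sum() * observed.sum()

for s, o, e in zip(shapes, observed, expected):
    print(f"{s:8s} observed {o:4d} expected {e:6.1f}")

stat, p_value = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")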
Post by Douglas
an article from the UK which eventually found its way into the ACBL
Bulletin, and which I posted a number of comments about in this group
some years ago. I remember the names of "Abington and Whitny" associated
with it. I think they are two towns near Oxford, UK.
There were two articles, I think, or at least two sets of data. The
first set had decidedly non-random distributions -- too many flat hands.
After the change requiring players to shuffle cards before putting
them back in the board, the distributions were consistent with random.

There is supposed to be an article in a math journal reporting
non-random distributions of hand dealing. One person said Persi
Diaconis was an author, but I haven't found any such article. If anyone
knows more, I'd very much appreciate help tracking down the reference.
Steve Willner
2019-08-30 21:11:29 UTC
Permalink
Post by Steve Willner
There is supposed to be an article in a math journal reporting
non-random distributions of hand dealing.
Found the reference, sent to me in 1998 by David Martin. Article is
"Mathematics of Duplicate Bridge Tournaments" by J. R. Manning in
_Bulletin of the Institute of Mathematics and its Applications_, vol 15,
pp 201-206, 1979 Aug/Sep issue. I'm trying to find a copy of the
article -- having procrastinated since 1998 -- but haven't so far. The
Bulletin seems to have been little-circulated here in the US but better
in the UK and Commonwealth, and as far as I can tell is no longer
published. If anyone has access to it, I'd appreciate a copy of the
article. Meanwhile, I'm still looking via what resources I have.

It may be amusing how I found the reference. Yesterday, a stack fell
off my desk and onto the floor. I decided to dump all the things I
didn't need, and while going through them happened on a printout of
David's message with the reference. Better to be lucky than good, both
at bridge and other matters!
Peter Smulders
2019-08-30 22:28:48 UTC
Permalink
Post by Steve Willner
There is supposed to be an article in a math journal reporting
non-random distributions of hand dealing.
Found the reference, sent to me in 1998 by David Martin.  Article is
"Mathematics of Duplicate Bridge Tournaments" by J. R. Manning in
_Bulletin of the Institute of Mathematics and its Applications_, vol 15,
pp 201-206, 1979 Aug/Sep issue.  I'm trying to find a copy of the
article -- having procrastinated since 1998 -- but haven't so far.  The
Bulletin seems to have been little-circulated here in the US but better
in the UK and Commonwealth, and as far as I can tell is no longer
published.  If anyone has access to it, I'd appreciate a copy of the
article.  Meanwhile, I'm still looking via what resources I have.
It may be amusing how I found the reference.  Yesterday, a stack fell
off my desk and onto the floor.  I decided to dump all the things I
didn't need, and while going through them happened on a printout of
David's message with the reference.  Better to be lucky than good, both
at bridge and other matters!
That is the famous article by Manning on how to quantify the balance of
bridge movements. As far as I know it is the first publication that
shows one round of arrow switches is optimal to turn a 7-table
Mitchell into a one-winner movement.

The article is available at the EBU website
https://www.ebu.co.uk/documents/laws-and-ethics/articles/bridge-movements.pdf

But it has nothing to do with the randomness of hand dealing.

Douglas
2019-08-20 05:37:48 UTC
Permalink
Douglas ended his last posting with this sentence.
4. Again, look to playBridge.com and the "Shuffle Project" tab.
I plead tiredness after such a long answering post.

playBridge.com data is not hand-dealt.

Corrected:

4. My carefully dealt and recorded 493 deals which involved
no shuffling, to ensure their randomness. Again, these deals
were discussed in this group years ago. I feel no need to
relitigate them at this moment.

Douglas
Douglas
2019-08-22 00:34:20 UTC
Permalink
Post by Steve Willner
Chi-square measures deviation of an observed set of deals from
expectation. In particular, it says _if_ the deals are truly random
(null hypothesis), how likely is it that the observed distributions
would deviate from expectation by as much as they do or more.
This is causing me acute discomfort just reading it for understanding.

If the author added the kind of deviation to sentence one, I would
agree with that sentence. Sentence two is what pains me. If I posted
such a sentence, this group's math persons would rightfully be all
over me. An "if" rather than "is" undefined "truly random" null
hypothesis conditioned by "likely" rather then a specified confidence
level. I shudder to think what Charles Brenner would do to me if
I were the author.
Post by Steve Willner
There were two articles, I think, or at least two sets of data.
The first set had decidedly non-random distributions -- too many
flat hands.
After the change requiring players to shuffle cards before putting
them back in the board, the distributions were consistent with random.
You might be interested in knowing the almost 50 billion deals recorded
by the playBridge.com shuffle project has three hand-shapes which exceed
the generally accepted critical Z-value of 1.96. Not by a little bit
either; 2.12, 2.50, and 2.94.

Using your point of view, this makes the almost 50 billion "decidedly
non-random."

I appreciate you quitting this thread.

Douglas
Charles Brenner
2019-08-22 19:04:46 UTC
Permalink
Post by Douglas
You might be interested in knowing the almost 50 billion deals recorded
by the playBridge.com shuffle project has three hand-shapes which exceed
the generally accepted critical Z-value of 1.96. Not by a little bit
either; 2.12, 2.50, and 2.94.
I see my suggestion to study up on Bonferroni was not followed. I am disappointed in grasshopper.
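For the record, the correction is simple to apply; a sketch, assuming
(simplistically) 38 independent shape categories:

from scipy.stats import norm

k = 38        # number of shape categories tested at the same time
alpha = 0.05  # desired family-wise error rate

# Bonferroni: test each category at alpha/k instead of alpha.
z_crit = norm.ppf(1 - (alpha / k) / 2)   # two-sided per-category threshold
print(f"per-category critical z ~ {z_crit:.2f}")           # about 3.2

# Even under the null, among 38 categories one expects roughly
print(f"expected count with |z| > 1.96: {k * alpha:.1f}")  # about 1.9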
Douglas
2019-08-23 06:09:30 UTC
Permalink
Post by Charles Brenner
I see my suggestion to study up on Bonferroni was not followed.
I am disappointed in grasshopper
chirp I was merely extending Willner's apparent logic as an
example. I think the nearly 50 billion hands are perfectly
(pseudo-) random. chirp

Douglas
Charles Brenner
2019-08-24 02:03:58 UTC
Permalink
Post by Douglas
Post by Charles Brenner
I see my suggestion to study up on Bonferroni was not followed.
I am disappointed in grasshopper
chirp I was merely extending Willner's apparent logic as an
example. I think the nearly 50 billion hands are perfectly
(pseudo-) random. chirp
I'm not buying that oblique reply. It was your paragraph

"You might be interested in knowing the almost 50 billion deals recorded
by the playBridge.com shuffle project has three hand-shapes which exceed
the generally accepted critical Z-value of 1.96. Not by a little bit
either; 2.12, 2.50, and 2.94."

particularly the words "generally accepted", "critical" and "Not by a little bit" that convinced me that you had not informed yourself on how to interpret z- (or equally, p-) values, and that is what triggered my comment.

Perhaps you looked something up afterwards. Is that it?
Douglas
2019-08-24 09:21:11 UTC
Permalink
"particularly the words "generally accepted", "critical" and "Not by
a little bit" that convinced me that you had not informed yourself on how to
interpret z- (or equally, p-) values, and that is what triggered
my comment."
I'm sorry, but this is argumentation just for the sake of arguing.

By now I've read many thousands of articles with stat outcomes
included within them. 99+% of them used one of the several
representations for 1.96 (two-sided) as their critical (decision)
value. Most commonly, 0.05 p.

As to the three values that do not meet your sense of a "bit more,"
by happenstance they fairly closely correspond with 0.025, 0.0125,
and 0.00625 p respectively. I am doing this out of my head, because
I run into them with some frequency.

I am genuinely sorry that you once more seem to me to only seek to
deflect what I am saying. You will excuse me if I choose to ignore
you for at least a little while.

I invite you to join in any time you have something useful to say.

Douglas
Charles Brenner
2019-08-24 17:40:50 UTC
Permalink
Post by Douglas
"particularly the words "generally accepted", "critical" and "Not by
a little bit" that convinced me that you had not informed yourself on how to
interpret z- (or equally, p-) values, and that is what triggered
my comment."
By now I've read many thousands of articles with stat outcomes
included within them. 99+% of them used one of the several
representations for 1.96 (two-sided) as their critical (decision)
value. Most commonly, 0.05 p.
I'll give you one thing. There surely are thousands of articles that use statistics nearly as blindly as you claim.
Post by Douglas
As to the three values that do not meet your sense of a "bit more,"
by happenstance they fairly closely correspond with 0.025, 0.0125,
and 0.00625 p respectively.
Aha. You had a good reason to remove and ignore my repeated hints about Bonferroni. You don't need any stinkin' mathematics or insight. Surely a superficial and confused knowledge of "cookbook" statistics is plenty!
Post by Douglas
I am genuinely sorry that you once more seem to me to only seek to
deflect what I am saying. You will excuse me if I choose to ignore
you for at least a little while.
I invite you to join in any time you have something useful to say.
It's not your house Douglas.
Douglas
2019-08-26 08:17:18 UTC
Permalink
Post by Douglas
Some weeks ago I looked at the playBridge.com web site. Specifically, the
"Shuffle Project" tab. Since 07-06-2011 that project has accumulated bridge
hand shapes. They are now approaching 50 billion total hands dealt.
As an intellectual exercise, I turned their then-current counts dealt for
each of the 38 hand shapes with a positive total (out of the 39 possible
hand shapes) into a spreadsheet listing of expected value (EV), probability,
and standard deviation from EV for each of those 38.
When completed, it struck me as a lot of information to take in at once.
It happens that the first three most common hand shapes in theory add to
almost precisely 50% of the total 100% probability for all 39 possible
hand shapes.
I took the 3rd most common hand shape, 5-4-3-1 first. Opposite it, I placed
the 4th most common hand shape, 5-4-2-2, and added 7-3-2-1 + 7-2-2-2, to
create two nearly equal probability categories.
Then I took the 2nd most common hand shape, 5-3-3-2. Opposite it, I placed
the 5th most common hand shape, 4-3-3-3, and added 4-4-4-1 + 5-4-4-0 +
6-5-1-1, to create two more nearly equal probability categories.
Finally I placed all the remaining 29 hand shapes together opposite the most
common hand shape, 4-4-3-2.
I envision this as a mirror image hand shape probability accumulator format.
It only needs 10 hands dealt entries to populate the 6 stat output categories.
It also allows for the analysis of hand records of fewer than 10 deals; that
is a ridiculously small number, which means this format will work with every
single current hand record I have ever seen. No doubt someone will inform
me of an exception.
I now have less information to take in at once, and for me, this condensed
format is noticeably more informative.
Douglas
I have my six categories in good order now. I have made the categories
slightly different. They are not quite so pristine in evenness of expected
probability, but all the categories are now in continuously descending order.
Also more helpful: only six values need to be entered to complete the data
entry per analysis format.

Douglas