Discussion:
Need fast ways to build a double dummy database
peter cheung
2009-11-05 00:11:56 UTC
Is there a fast program that will take a file of deals as input and
output the deals with all 20 double dummy trick results? Since all new
CPUs are quad core, I would like to be able to run three to four
copies working on different deal files as input.

I would like a setup where all I need to do is load and run the
program under Windows and supply a file of my deals as input. If it
can generate random deals on its own, that will be fine too.

Currently my setup can only run one instance, and it takes about one
second per deal while using only 15% of the CPU and 15% of the memory
of my new PC (i7 950 chip and 12 GB of memory).
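For example, something along these lines is what I have in mind; here
solve_deals.exe is only a placeholder for whatever command-line solver
gets used, and the one-deal-per-line file format is an assumption:

import subprocess
from pathlib import Path

WORKERS = 4                      # roughly one worker per core
DEALS_FILE = Path("deals.txt")   # one deal per line (assumed format)
SOLVER = "solve_deals.exe"       # placeholder for a real DD solver

lines = DEALS_FILE.read_text().splitlines()
chunk = (len(lines) + WORKERS - 1) // WORKERS
procs = []
for i in range(WORKERS):
    part = Path(f"deals_{i}.txt")
    part.write_text("\n".join(lines[i * chunk:(i + 1) * chunk]))
    # Each instance gets its own input and output file so they never collide.
    procs.append(subprocess.Popen([SOLVER, str(part), f"results_{i}.txt"]))

for p in procs:
    p.wait()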
Jürgen R.
2009-11-05 10:51:18 UTC
Post by peter cheung
Is there a fast program that will take a file of deals as input and
output the deals with all 20 double dummy trick results? [...]
Probably the bottleneck isn't the processor but disk read/write.
That can be improved upon rather easily, but you would need the
source code.
What is the purpose of the exercise?
Jari Böling
2009-11-05 12:42:34 UTC
Post by peter cheung
Is there a fast program that will take a file of deals as input and
output the deals with all 20 double dummy trick results? [...]
Thomas Andrews' deal program does both deal generation and double
dummy analysis (in two separate steps, so if it can read your deals
you can use it for only the latter), and from a DOS prompt. And you
can run as many DOS prompts as you like (Start -> Run -> cmd). The
home page of deal is http://bridge.thomasoandrews.com/deal/.
KWSchneider
2009-11-05 13:26:19 UTC
Post by Jari Böling
Thomaso Andrews deal program does both deal generation and double
dummy analysis (in two separate steps, so if it can read your deals
you can use it for only the latter), and from a dos-prompt. And you
can run as many dos-prompts as you like (start -> run -> cmd). The
home page of deal is http://bridge.thomasoandrews.com/deal/.
Make sure that you have separate instances of the program and temp
files [separate directories]. This is the set-up that I use...

Kurt
Lorne
2009-11-06 17:20:58 UTC
Post by peter cheung
Is there a fast program that will take a file of deals as input and
output the deals with all 20 double dummy trick results? [...]
I do not think it is currently possible.

If you are using Bo Haglund's DDS (the only DDS I know of that can
achieve a deal a second) or other software that uses it, then I am
almost certain it can't be done. I tried to write a multithreaded
program to run separate instances in each thread, using one thread per
processor, and the different instances interfered with each other's
memory allocation. When I spoke to Bo Haglund about it he confirmed
that his memory allocation was not thread safe and he suspected he
needed a substantial rewrite to allow multiple instances of his DDS to
run at the same time.
KWSchneider
2009-11-06 18:57:41 UTC
Post by Lorne
If you are using Bo Haglund's DDS (the only DDS I know of that can
achieve a deal a second) or other software that uses it, then I am
almost certain it can't be done. [...] he confirmed that his memory
allocation was not thread safe and he suspected he needed a
substantial rewrite to allow multiple instances of his DDS to run at
the same time.
Lorne - have you compared Bo Haglund's engine against the current GIB
DD engine? My comparisons show the GIB engine to be faster, using DEAL
3.1.7 against the "latest" Haglund engine.

By caching the intermediate values to minimize calls to the DD engine
[see below] and "including" the GIB DD engine [which replaces the
Haglund engine] when using deal::tricks, I've seen 25-30% faster
results with GIB [the latest released this year].

Code for driving the DD engine so as to take advantage of caching:

# Loop over all 20 declarer/denomination combinations for the current
# deal; after the first call, the engine can reuse cached intermediate
# results, so the remaining calls are comparatively cheap.
foreach denom {clubs diamonds hearts spades notrump} {
    foreach hand {north east south west} {
        set tricks [deal::tricks $hand $denom]
        ...
    }
}


Cheers,
Kurt
Lorne
2009-11-07 01:04:25 UTC
Post by KWSchneider
Lorne - have you compared Bo Haglund's engine against the current GIB
DD engine? My comparisons show the GIB engine to be faster, using DEAL
3.1.7 against the "latest" Haglund engine.

*****************
No I have not.

Is the GIB DD engine available to download, or is it only available if
you buy the bridge playing software? I have never found a place to get
just the engine itself, which is why I use Bo Haglund's (which is free
and 10x faster than Deep Finesse).
peter cheung
2009-11-07 06:12:43 UTC
My setup uses Masakatsu Sugino's psbridge as the Windows front end and
GIB's bridge.exe or gib.exe as the DD engine.
The GIB is an early version (2002); since then the interface has
changed and I cannot use any newer version.

I have tried for two days to put Bo Haglund's DDS routine into my
program and cannot get it to work. I took out all the memory
allocation and just pinned everything to fixed memory locations. With
most new PCs having 4 GB of memory, allocating memory yourself is no
longer the way to go; let the operating system manage the memory for
you. I used to have a very complicated sorting algorithm for large
files and only put 20,000 deals in memory. Now I just put 20 million
deals in memory and let the operating system handle it. My current
record size for double dummy deals is less than 100 bytes, and for
single dummy deals less than 200 bytes, and if the operating system is
smart they should all stay in physical memory and never get swapped
out to disk with 12 GB of real memory.
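As a rough sketch of the arithmetic behind that record size (an
illustration only, not my actual file format): each of the 52 cards
needs 2 bits to say which hand holds it, and each of the 20 double
dummy results fits in 4 bits, so a deal plus all its results packs
into about 23 bytes.

def pack_record(holders, tricks):
    # holders: 52 values in 0..3, which hand holds each card (fixed card order)
    # tricks:  20 values in 0..13, the double dummy results (fixed order)
    deal_bits = 0
    for i, h in enumerate(holders):
        deal_bits |= h << (2 * i)          # 2 bits per card -> 13 bytes
    result_bits = 0
    for i, t in enumerate(tricks):
        result_bits |= t << (4 * i)        # 4 bits per result -> 10 bytes
    return deal_bits.to_bytes(13, "little") + result_bits.to_bytes(10, "little")

# 12 million such records is roughly 280 MB, which fits in RAM comfortably.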

One way to run more than one copy is to set up a VM and run multiple
instances of Windows under Virtual PC.
This is theoretically possible and supposedly easy to set up under
Windows 7, and it is how the XP compatibility environment is set up
under Windows 7.

I have just started installing Windows 7 and have two laptops and two
desktop computers up and running. I am working on getting at least the
laptops working and ran into a big problem: I cannot find Windows
Media Player version 12, which is supposed to come with the top three
editions of Windows 7. My wife uses it to play videos to teach Sunday
school singing. I tried to load the VM and XP mode on an old computer
and Windows 7 said the computer does not have enough resources to
support that. I'll try it on my new computer when I have the time.

Also, for those of you who use C or C++ under Windows, get Visual
Studio 2010 Beta 2; it has a new compiler that improves the speed of
some C++ code by up to 50%. I searched the web and there is only one
posting that says Microsoft had not done this in the last two releases
of Visual Studio and is catching up in this new version.

Peter Cheung
Carl
2009-11-07 16:30:09 UTC
Post by peter cheung
My setup uses Masakatsu Sugino's psbridge as the Windows front end and
GIB's bridge.exe or gib.exe as the DD engine.
The GIB is an early version (2002); since then the interface has
changed and I cannot use any newer version.
Peter Cheung
You would benefit by revising your code to accept the newer GIB bridge
engine, in my experience. You can pass one text file to the executable
that will process all 20 cases with one DOS call. Plus the internal
search routines are much faster than the older "free" version.

You could also buy a few older PCs at next to nothing that could do
nothing but run multiple instances of a compiled DOS shell (see the
code I emailed to you) and crank out several completed deals per
second per instance per CPU. When I was assembling my 350 million non-
random deals, that's what I did. A PC with a 2 GHz CPU and 512 MB of
RAM can be had for $50 at flea markets.

Carl
Jürgen R.
2009-11-08 11:22:59 UTC
Post by Carl
[...] When I was assembling my 350 million non-random deals, that's
what I did. A PC with a 2 GHz CPU and 512 MB of RAM can be had for $50
at flea markets.
Why does anybody need such a gigantic database?
Carl
2009-11-08 15:48:37 UTC
Post by Jürgen R.
Why does anybody need such a gigantic database?
I was attempting to place a value on each of the 8192 possible
holdings in a suit, for each hand shape and hand strength. So I always
specified the West hand and then used Richard Pavlicek's dealer
program to generate the other three hands.

I reduced the significant cards in each suit to AKQJT987 and that
reduced the total combinations by quite a lot. I estimated the final
product at 350 million records, but I put the project on hold
somewhere around 35% complete.

Since I started with the most common shapes and worked toward the
freaks, I suspect the tail end of the sampling would have gone much
much quicker, and perhaps I'll pick this up again some time in the
future.
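As a rough illustration of how much the AKQJT987 reduction saves (a
toy calculation, not my actual code): once the 6-5-4-3-2 are treated
as interchangeable spot cards, a suit holding is just a subset of the
eight significant ranks plus a count of spot cards.

SIGNIFICANT = 8          # A K Q J T 9 8 7 kept as distinct ranks
SPOTS = 5                # 6 5 4 3 2 treated as interchangeable

full = 2 ** 13                              # 8192 possible holdings in one suit
reduced = (2 ** SIGNIFICANT) * (SPOTS + 1)  # subset of honors x spot-card count
print(full, reduced)                        # 8192 vs 1536, roughly a factor of 5 fewer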
peter cheung
2009-11-09 03:47:44 UTC
I currently have about 12 million deals in use (6 million from Carl,
thanks again) and have another 5 million deals done but not used
regularly.
I normally run simulations using just those 12 million deals, which
are stored with the deal and the 20 double dummy results.
If a result is unusual, I use GIB's 700K double dummy database to
verify it.

The reason for using a huge double dummy database is the speed of
getting simulation results.

E.g. the most often asked question is about a 1NT opening and what
partner should do with different types of hands.
If partner has very few HCP but a nice shape or a long suit, should
partner transfer and play a part-score in a suit, or pass 1NT?
Should partner play in a suit game or 3NT?
With what kind of hand should partner just invite game, and should the
invitation be in a suit or in notrump?
There are hundreds of questions like those, with the added
complication that different players open notrump with different HCP
ranges and hand types.
Recently, due to computer simulation results, players have started to
open a lot more hands with 1NT, including hands with a 5-card major, a
6-card minor, no stopper in one or two suits, a singleton, or a
singleton ace or king. As you can see, there are thousands of
combinations just for 1NT opening situations.

With a large double dummy database you can get an answer to most of
those questions in less than 2 minutes. If the number of matching
hands is too small, you still at least get an idea of the answer. You
can either generate more deals for that situation, or decide that
since it happens so infrequently an approximate answer is good enough
and no further study is needed.

The last time I ran one situation, a notrump opener with 12 HCP
opposite 12 HCP in partner's hand, it took 2 minutes because a lot of
deals fall into that situation.
If I had more than 50 million deals I would generate a database that
stores not only the deal and the 20 double dummy results but also
other hand attributes, such as suit lengths, HCP in each suit, and the
scores for the deal under the 4 vulnerability conditions, to speed up
the process and try to keep the search for most situations within 2
minutes. This is possible now, and was not a few years ago, due to the
increase in CPU speed and in memory and hard disk space.
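A minimal sketch of the kind of query I mean (the field names and
thresholds here are only illustrative, not my actual record layout):

# Each stored record is assumed to carry precomputed fields like these.
deals = [
    {"hcp_n": 15, "hcp_s": 9, "shape_n": "4333", "tricks_nt_n": 9},
    {"hcp_n": 16, "hcp_s": 8, "shape_n": "4432", "tricks_nt_n": 8},
    # ... millions more in the real database
]

match = [d for d in deals
         if 15 <= d["hcp_n"] <= 17       # the 1NT opener
         and 8 <= d["hcp_s"] <= 9]       # partner's invitational range

if match:
    made = sum(d["tricks_nt_n"] >= 9 for d in match)
    print(f"{made}/{len(match)} deals make 3NT by North double dummy")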
peter cheung
2009-11-09 04:06:39 UTC
Just FYI, some additional information I have on Windows 7.
First of all, it is much better than all previous versions in general
and I strongly recommend that people upgrade. I have been using it for
2 weeks on 4 machines and it has yet to crash. Very stable compared
with all previous versions.
The boot time is at least 2 times faster. My laptop used to take
almost 5 minutes to boot; now, with a new faster hard disk and
Windows 7, it takes less than 1 minute.

I found out the hard way that there is a normal version of Windows 7
and a version N. Version N has a release date of October and the
regular version has a release date of August, going by the file
creation dates. I thought version N must be better and that N stood
for "newer". I was dead wrong. N stands for a stripped-down version
without Windows Media Player and many more features, such as
performance analysis, and I speculate it exists due to antitrust
requirements. The worst part is that you cannot upgrade the N version
to a regular version; you have to do a fresh, complete new install.
This cost me at least a wasted man-week. So if you have access to MSDN
downloads, be very careful which file you download from Microsoft.

Great news for XP compatibility mode: I got it running on one computer
and it is very easy to install and use, and is very efficient. I can
now run those non-thread-safe programs under XP mode and have two
copies running. Then I thought, if 2 is good, 3 must be better, so I
installed a copy of Vista under the VM software in Windows 7. Big
problem: the transfer rate for a disk file is about 15 KB a second,
and I have not seen anything this slow in 20 years. It took me almost
a day to set this up, and over an hour to copy 3 GB of data from DVD
(faster than from disk, so I burned the data from disk to DVD to copy
it into this VM). Once set up, running a program seems to go at about
95% of normal speed. Too bad I have to take this machine down and
reinstall the regular version of Windows 7 in the next few days.
KWSchneider
2009-11-09 21:07:17 UTC
Post by Carl
I was attempting to place a value on each of the 8192 possible
holdings in a suit, for each hand shape and hand strength. [...]
Carl,

So, if I understand you correctly - you generate "deals" [which are
totally analysed from all directions, in all 5 denominations, hence 20
results per deal - although you only need 10 results] and store these
in a database for future data mining and analysis.

Seems simpler to create and play the specific hands you need - because
the processing power is increasing and you can generate very specific
results quickly [I can generate 5,000 deals with all 5 denominational
results [in one direction] - in 10-30 minutes depending on the
complexity of the deal].

Although I have kept every deal that I've analysed [millions], I'm not
convinced that mining it later would be faster than regenerating new
deals per my current specification.

Also, one of the problems with your methodology [and Thomas Andrews']
is that you cannot vary a specific parameter while keeping the other
parameters constant. For example, say you wanted to determine the
impact of each suit length [trump and offsuit] in DUMMY opposite a
fixed shape [or fixed point count or both] in declarer's hand. Not
easy to do by brute force; it is a lot simpler to vary only the
parameters necessary to find the result, weight the result properly by
frequency, and use statistical regression analysis to determine the
coefficients of regression.

All you need is a properly defined "baseline" [of tricks] and
statistically determine/regress the impact that each independent
variable [suit length, points in suit, controls, singletons, voids,
whatever] has on the dependent variable [trick differential].

I've been doing this for 9 years and it seems to work. My problem is
publishing [like for most of us...] since I have a huge amount of
data.

Cheers,
Kurt
peter cheung
2009-11-10 04:12:54 UTC
Post by KWSchneider
So, if I understand you correctly - you generate "deals" [which are
totally analysed from all directions, in all 5 denominations, hence 20
results per deal - although you only need 10 results] and store these
in a database for future data mining and analysis.
You need all 20 results. This enables you to calculate the EW results
as well.
On each deal I can calculate, for each of the 4 vulnerability
conditions, whether a sacrifice is profitable or not, so there are 8
situations in total.

Also, I can rotate the hands and get 4 deals. That is 4 times the
chance of satisfying the search condition.
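The rotation itself is trivial; a small sketch (purely illustrative,
not my actual code) of shifting every hand one seat and remapping the
stored double dummy table to match:

SEATS = ["N", "E", "S", "W"]

def rotate(deal, dd):
    # deal: dict seat -> hand; dd: dict (declarer seat, strain) -> tricks
    shift = {s: SEATS[(i + 1) % 4] for i, s in enumerate(SEATS)}  # N->E, E->S, ...
    new_deal = {shift[s]: hand for s, hand in deal.items()}
    # The hand that was North now sits East, so North's declarer results
    # become East's results, and so on for the other seats.
    new_dd = {(shift[s], strain): t for (s, strain), t in dd.items()}
    return new_deal, new_dd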
Post by KWSchneider
Seems simpler to create and play the specific hands you need - because
the processing power is increasing and you can generate very specific
results quickly [I can generate 5,000 deals with all 5 denominational
results [in one direction] - in 10-30 minutes depending on the
complexity of the deal].
I assume most of the time is used to calculate the double dummy
results, so if you need all 20 results to analyse a deal it will take
40 to 120 minutes for 5000 deals.
Again, as I said, I can get most of my results in 2 minutes.
For some conditions there will be many thousands of matching hands,
and for some very specific situations there may be very few.
Those (like a 12-1-0-0 distribution) can only be handled by generating
specific hands.
Since they happen so infrequently, they are only useful for academic
discussion and have no practical use in actual play.
When you generate specific hands you do not get the general
statistical information.
Post by KWSchneider
Although I have kept every deal that I've analysed [millions], I'm not
convinced that mining it later would be faster than regenerating new
deals per my current specification.
You cannot mine a set of non-randomly generated deals. Let's say you
search for all 4333 hands in a non-random set of deals.
You come up with, say, x% making certain contracts. Since you do not
know how those hands were generated, the result is almost useless.
You may have a lot of hands with very high HCP in one hand because you
did research on, say, 2NT openings, and you end up with a lot more
4333 hands that have 20 to 21 HCP.
Post by KWSchneider
Also, one of the problems with your methodology [and Thomas Andrews]
is that you cannot vary a specific parameter while keeping other
paramters constant. For example, say you wanted to determine the
impact of each suit length [trump and offsuit] in DUMMY opposite a
fixed shape [or fixed point count or both] in declarer's hand. Not
easy to do be brute force, a lot simpler to vary the paramters
necessary to find the result and properly weight the result by
frequency - and use statistical regression analysis to determine the
coefficients of regression.
My program is Windows-form based. It is very flexible.
You can specify most conditions I can think of on each of the four
hands, plus NS and EW combined.
There are limitations to any form-based program versus a script-based
program, but it is a lot easier to use if you do not have extremely
complicated conditions.
I have three forms for each hand definition.
The one I use most often allows two sets of 4 situations:
one set is hands to be excluded and one set is hands to be included.
Each set has 4 lines of input.
Each input has an HCP range and 8 specific hand conditions.
The specific hand conditions fall into 4 main groups.
One is opening bid definitions, like 1NT or 1S, described by hand
distribution only.
One is hand shape, like 4333 or 5521.
One is conditions on any one suit, including HCP, length, any
combination of honors, and long and short suit points.
One is double dummy result conditions.

The output is very extensive, about 100 pages.
Summary results are only about 3 pages.
Specific results include, e.g., statistics on only the hands that make
exactly 3NT, 4H/4S, a small slam, or a grand slam.
Post by KWSchneider
All you need is a properly defined "baseline" [of tricks] and
statistically determine/regress the impact that each independent
variable [suit length, points in suit, controls, singletons, voids,
whatever] has on the dependent variable [trick differential].
Since you have been doing this for a long time, you probably know more
than I do about how to regress and calculate trick differentials.
I have not been able to do that; maybe you can explain a little more
about what you did in this area.
Let's say you have calculated the tricks a certain kind of deal makes,
e.g. a 15 HCP 1NT opening opposite 8 HCP.
How do you calculate the effect of each independent variable [suit
length, points in suit, controls, singletons, voids]?
Thomas Andrews has some algorithms that put weights on a lot of
specific suit combinations, with one set for notrump and one set for
suit contracts.
Those values are general, for, say, notrump hands, and I am not sure
how you can apply them to calculate or adjust a general simulation for
hands with a specific condition (like partner's hand having one
singleton).
His hand distribution weights or values may be useful for projecting
or adjusting the result when, say, partner's hand is 4432, but I do
not know how to do that.
Post by KWSchneider
I've been doing this for 9 years and it seems to work. My problem is
publishing [like for most of us...] since I have a huge amount of
data.
Cheers,
Kurt-
KWSchneider
2009-11-10 15:23:52 UTC
Post by peter cheung
Since you have been doing this for a long time, you probably know more
than I do about how to regress and calculate trick differentials.
I have not been able to do that; maybe you can explain a little more
about what you did in this area. [...] How do you calculate the effect
of each independent variable [suit length, points in suit, controls,
singletons, voids]?
Calculating the impact of specific parameters is relatively easy, and
you can use Excel. The process is this:

1) Establish the parameters that you wish to determine - say suit
length [shape] in opener for 15 point hands vs 8 pt dummy hands.
2) Deal and play a large quantity of unrestricted 15 point vs 8 point
dummy hands [at least 100K]. This forms your baseline for tricks - for
this SPECIFIC case.
3) Now we have to vary the parameters - next, deal/play a decent
number of hands [at least 20K, in equal amounts; we will properly
"weight" them later] using each "shape" that opener can hold
independently, ALL with 15 points, and dummy with unrestricted shape,
ALL with 8 points.
4) DEAL BUT DO NOT PLAY one million unrestricted 15 vs 8 point hands
and sort them by shape frequency.
5) Analyze and regress:
a) Choose the parameters you wish to use as independent parameters [I
would actually use suit lengths, but you could use anything like
"suits with 5+ cards", "singletons", "doubletons"]
b) Determine the differential in tricks for each of your restricted
deals FROM the baseline [positive or negative]
c) In a matrix arrangement, under a column for each independent
variable and opposite a row representing each deal set you made in
step 3), indicate the number of times the independent parameter occurs
for this deal [so for a 4333 shape you would have ONE 4-card suit and
THREE 3-card suits, if you were using exact suit lengths as your
independent parameters. You could add ZERO voids and ZERO doubletons
if you wanted to include these as well - the more the merrier (more
accurate), but the correlation becomes less useful due to complexity].
d) You now have to minimize, for example with the Solver tool in
Excel, the weighted discrepancy between the actual and the projected
trick differentials, i.e. the weighted sum of squared errors:

Sum over deals of [weighting x (actual trick differential - projected
trick differential)^2]

where, for each deal:

projected trick differential = a1*[independent parameter 1] +
a2*[independent parameter 2] + etc.

The fitted constants a1, a2, a3, ... represent the impact that each
independent variable has on the dependent variable.

You could then do the same for dummy by taking a specific shape for
opener [say 4333] and use that as a baseline while you vary the shape
of dummy.
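For anyone who prefers code to Excel, step 5 looks roughly like this
(a toy sketch: the trick differentials are made-up placeholders, only
the shape frequencies are approximately real, and the weighted
least-squares form is my reading of the procedure, not a copy of my
spreadsheet):

import numpy as np

# One row per restricted deal set from step 3; columns are the chosen
# independent parameters (here: 5+ card suits, doubletons, singletons,
# one of the alternatives mentioned in step 5a).
X = np.array([
    [0, 0, 0],   # 4-3-3-3
    [0, 1, 0],   # 4-4-3-2
    [1, 1, 0],   # 5-3-3-2
    [1, 0, 1],   # 5-4-3-1
], dtype=float)

# Trick differential of each set from the unrestricted baseline (placeholders).
y = np.array([0.00, 0.05, 0.20, 0.35])

# Frequency weights from step 4 (approximate frequencies of these shapes).
w = np.array([0.105, 0.216, 0.155, 0.129])

# Weighted least squares: scale each row by sqrt(weight) and solve.
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
print(coef)   # a1, a2, a3 = estimated trick impact of each parameter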

For some results that I've posted, and a more detailed explanation of
the process, please see www.bridgeruminations.com. I've just started
to publish my 9 years of work. My biggest problem is converting Excel
graphs to HTML in a "polished" manner.

Cheers,
Kurt
KWSchneider
2009-11-10 20:36:18 UTC
Post by peter cheung
You have a very interesting concept and I'll study it in detail in the
next few days, or it may take me weeks to complete.
I have finished a first look at your web site and studied the first
page carefully, especially the 4333 NT section.
It appears to me that it is very useful for computer playing programs.
Your work is concentrated on the average trick-taking of NT contracts.
I believe you may find even more useful results in an analysis of suit
contracts using the average number of tricks.
In suit contracts not only does double dummy trick-taking agree with
online playing records, the average number of tricks is also an
excellent number to extrapolate to all kinds of situations. The
distributional shape of the tricks made in suit contracts is similar
for most situations, so if you know the average you have a pretty good
idea of the % of making game or slam, etc.
In NT contracts there are many situations where the distributional
shape of the tricks made changes, sometimes closer to the average,
sometimes very flat.
E.g. long suit points or long suits do not affect the average number
of tricks taken by much.
I was amazed 10 years ago, when I first started running correlation
coefficients, to see that long suit points have almost no relationship
to the number of tricks taken in NT contracts. (Correlation
coefficients for NT contracts over 10 million random deals: SShort
-0.0279, SLong -0.0231, HCLS 0.7594, HCL 0.8400, HCP 0.8775, LTC
-0.5671, controls 0.8297.) But if you have a long suit it increases
your chances of making 3NT while also increasing your chances of going
down by more tricks, resulting in almost no change in the average
number of tricks taken.
In suit contracts I have not found any parameter that behaves like
that. (That does not mean there is none, just that I have not found
one yet.)
So your work can really be applied directly to suit contract bidding.
Looking forward to seeing more results from you on suit contracts.
You can help me either by coming up with results that support my
research results or, even better, by finding my mistakes.
That is why in scientific studies you need at least two different
groups using two different sets of conditions before a result can be
accepted as accurate.
It is very easy to have an error in the software code that produces
small errors in the results and is overlooked by the developer.
Peter - my original work from 6-9 years ago was ONLY on suit
contracts. My NT work is the most recent. I'll rework my older stuff
and post it to my website.

I correlated the following for suit contracts:

1) general correlation [best suit contract] based on shape and honors
- BOTH length and shortness with 98% correlation
2) specific correlation based on trump length and honors in declarer,
and off-suit shape and honors in declarer - if you send me an email,
I'll send you a PDF summary of these results.
3) very specific correlation for trump length and shape for dummy
[HERE I ran into some issues with my procedure - when you are varying
the "strength" of either the declarer or the dummy WHILE KEEPING THE
OTHER FIXED, you are introducing an additional variable. Note that
Thomas Andrews work [NOT Binky] was based on "controlling" the
declarer's hand and floating the other hands [so if declarer has 16
points, then each other hand will have 8 points on average]. This
method is great when you are only "constraining" one hand. Once you
start to constrain BOTH hands, you need to establish TWO baselines,
since as declarer is getting stronger, responder is not necessarily
getting weaker [the defenders are].

Bottom line - for suit contracts [from memory, I'm at work].

a) Each trump is worth 0.75 tricks
b) Trump honors are worth 1/4-1/2 trick more than non-trump honors
c) Aces are worth more than 1 trick
d) 3card suits are a negative [in any shape]
e) 2&4card suits are neutral [in any shape]
f) 5card suits and singletons are both positives
g) Voids and 6+ suits are very positive [except 6 card suits are a
negative as an offsuit, especially with short trump]

Cheers,
Kurt
Carl
2009-11-10 15:24:17 UTC
Post by KWSchneider
So, if I understand you correctly - you generate "deals" [which are
totally analysed from all directions, in all 5 denominations, hence 20
results per deal - although you only need 10 results] and store these
in a database for future data mining and analysis.
Cheers,
Kurt
Very briefly, I was hoping for a way to pre-process some information
that would provide a boost in speed to real-time simulation for bridge
playing programs. I quickly came to realize I could not outrun Moore's
Law.

My database was/is useful for what-if scenarios if one hand was
specified. Once you start to add conditions on the other hands, the
data becomes less useful. Today, the real-time simulation rate is
pretty darn good.
KWSchneider
2009-11-10 15:27:29 UTC
Post by Carl
Very briefly, I was hoping for a way to pre-process some information
that would provide a boost in speed to real-time simulation for bridge
playing programs. [...]
Carl - we posted at almost identical times and you may have missed my
post above. You may want to visit my website as well [listed in the
post].

Cheers,
Kurt
Carl
2009-11-10 23:39:57 UTC
Post by KWSchneider
Carl - we posted at almost identical times and you may have missed my
post above. You may want to visit my website as well [listed in the
post].
Cheers,
Kurt
Oh yeah, lots of good stuff there, thanks. Someday maybe Bill Gates
will hire the lot of us and do a Deep Blue version for bridge. It's
not intractable, it's just hard.

Carl
peter cheung
2009-11-11 02:00:19 UTC
I have more interest in the bidding part of bridge than in playing the
hands. (I have a deficiency in memory and I just cannot remember
cards.)
I wrote my first bridge bidding program in 1969, when I was at
Berkeley, using LISP.
With my knowledge and the improved speed of computers, I believe I can
write a program that can bid at an expert level if the opponents pass
all the time. I already have ideas for hand evaluation methods that
can be better than the systems human players use. With a huge double
dummy database, on-the-fly real-time simulation with double dummy
analysis that can do 1000 deals in a few seconds, and hand evaluation
methods and point count systems that are accurate to 3 digits, human
players will have a hard time competing in non-competitive auctions.
The problem with the opponents' bidding is that there are so many
different systems, and you need a very extensive database of how to
react to all those different meanings of a bid. In addition, their bid
may not be within their description. Chess and a lot of other games
that have been solved do not have this problem: you can see all the
pieces and it is just a matter of calculating the best move. My guess
is that I may not live long enough to see the day a computer program
beats the best human players. Humans can come up with all kinds of
strategies and new and strange bidding systems to defeat a computer
program. But it will be fun when a human player asks the computer
program what a bid means. The answer is that there are more than
10,000 possible combinations, and our HCP count is A = 4.35176 (this
number is better than 4.0 by a mile), adjusted 10,000 ways (and that
is the real gain, how to adjust based on the other cards and the
bidding) due to other factors, etc., which no human player can
understand, and even another computer program will have a difficult
time. So the rule will be that all computer programs must submit a
program that automatically supplies the bidding system information to
the opposing computer programs. Human players can use a computer to
analyse the meaning of all the bids. This sounds like science fiction
but will come one day in the next 100 years.
Now, play of the hand is another story. I am very sure computers can
get to be as good as a human player soon. It may take computers with
100,000 times the power of our desktop PCs and breakthroughs in
algorithms to beat a human player, but that is possible in the next 20
to 40 years.
Carl
2009-11-11 02:27:51 UTC
Post by peter cheung
(I have a deficiency in memory and I just cannot remember cards)
I have been battling this since about 2001 when I noticed I could no
longer recite the hand just played in order. Now, I usually cannot
count side-suit cards past trick 3. So I have also focused on bidding.

I agree with your observations. Competitive bidding between computers
has ready answers, although tough to implement. Competitive bidding
human to/from computer is going to require a new interface language or
standard, or some way to satisfy the rules. It might even require a
new set of rules for this case.

I think this will come sooner than 100 years. Petaflop computers are
approaching the computational equivalent of the human brain*, so the
hardware is near at hand; the software is lacking.

* I read this somewhere but it could be off base. This is an informal
conversation.


Carl
e***@gmail.com
2017-06-02 16:29:38 UTC
Is your database available somewhere? I would really like to start
researching the bridge bidding topic, but I don't know where to find
labeled data...
KWSchneider
2009-11-09 20:29:06 UTC
Post by Lorne
Is the GIB DD engine available to download, or is it only available if
you buy the bridge playing software?
I own GIB [$79] and upgraded the GIB engine for free recently. Deal
3.1.7 allows a direct comparison of the two engines, since it
"invokes" GIB instead of Bo's engine when you include the GIB.tcl
subroutine. I ran and timed a 1,000-deal notrump DD analysis and GIB
outperformed by 30-40% [a while back, I don't remember the exact
times].

Cheers,
Kurt
Carl
2009-11-10 16:38:37 UTC
Post by peter cheung
Is there a fast program that will take a file of deals as input and
output the deals with all 20 double dummy trick results? [...]
I have an off the wall question for Peter. What if we had a double
dummy solver that was faster than what we have now, but not quite as
accurate? At what point would the inaccuracies outweigh the speed
advantages?

Peter will probably tell me that's impossible to answer, but here's
what I am thinking about. Say we reduce the rank of the lower cards
such that each strain is AKQJ T987 nnnnn and I don't think it matters
what n is set to as long as it is < 7. The solver will have to be
rewritten to accommodate this state. It would be great to rewrite it
such that the quantity of significant cards can be varied.

Question 1: Will this appreciably speed up the double dummy solver?
Question 2: Will the error induced by the insignificant cards affect
our analyses enough to matter?

Say I want to know how often we make 3NT with 15 HCPs opposite 8 HCPs
and both hands are 4-4-3-2. What sort of error in the answer is enough
for me to determine that I need to switch back to the 100% accurate
solver?

I am thinking of a solver that can be adapted to the needs of the
particular analysis at hand. Is there any point in looking into
something like this, or am I way out in left field?

Carl
peter cheung
2009-11-10 20:20:24 UTC
Post by Carl
Question 1: Will this appreciably speed up the double dummy solver?
It will speed it up a little bit, but not by much.
Say 2 to 7 are identical in rank; that is only 6 cards.
The effect works suit by suit, and you reduce the plays to consider by
one whenever there is another card of equal rank in the same suit.
Now, all programs already treat connected cards as equal in rank, so
you only save when the two cards are not connected (4 and 2, but not 4
and 3).
My first rough guess is an improvement of 15%.
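A small sketch of that counting argument (purely illustrative, not
code from any actual solver): within one suit, cards held by the same
hand that are adjacent in rank collapse into a single play, and
merging 2-7 into one rank only removes the extra plays among
unconnected small cards.

def distinct_plays(suit_holding, small_equal=False):
    # suit_holding: ranks held, 2 (deuce) .. 14 (ace), e.g. [14, 11, 4, 2]
    # small_equal:  if True, treat 2-7 as a single rank, as Carl suggests
    ranks = sorted(7 if small_equal and r <= 7 else r for r in suit_holding)
    plays, prev = 0, None
    for r in ranks:
        if prev is None or r > prev + 1:   # not adjacent/equal -> a new play
            plays += 1
        prev = r
    return plays

# Holding A J 4 2: 4 distinct plays normally, 3 once 2-7 are merged;
# holding A J 4 3 stays at 3 either way, which is why the saving is modest.
print(distinct_plays([14, 11, 4, 2]), distinct_plays([14, 11, 4, 2], True))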
Post by Carl
Question 2: Will the error induced by the insignificant cards affect
our analyses enough to matter?
To begin with, double dummy analysis is accurate when the simulation
is about something general.
For that purpose, or for the purpose of bidding and playing by a
computer program, it should be accurate enough.

Anything that is too specific makes double dummy analysis more of an
academic study; it must be used very carefully, and extrapolating the
result can sometimes lead to wrong conclusions.