[personal profile] peterbirks
Weirdly, it isn't the counterintuitive 13/27 answer that caused the controversy (thus showing that you guys actually do understand numbers), but whether the starting point is a half or a third (thus showing that we don't really understand semantics, language and philosophy).

I've been thinking about this "two children" problem, and I think I've come up with an interesting, and probably solvable (given a few assumptions about certain likelihoods) poser.


Let's apply this to gambling!

A man walks in. Slaps his palm down on the table on top of two coins and looks at them. He says to you (Mr Aardvark) "There are two coins here. At least one of them is a head."

A second man (a Mr Birks, who knows no more than you do about this state of affairs) now says: "I will offer you 3-to-2 on both coins being heads".

Do you take the bet?

Well, you, Mr Aardvark, are in the 50:50 camp. So obviously you do.

As it happens, Mr Birks wins the bet (the second coin is a tail), and you are ten quid down.

"Mr Aardvark", scoffs Mr Birks, "You are a fool! Everyone knows that the chance of the other coin being a head is one-third!"

But you, Mr Aardvark, are not nonplussed. "Double or nothing?" you say.

"Hah HAH!" says Mr Birks. "This is going to pay for my holiday!"

The first man then walks out of the door. Ten seconds later he comes back in. He repeats the exercise, and says:

"At least one of these coins is a tail."

At this point, Mr Birks (foolishly) says "ahh, it's exactly the same problem, so clearly the chance of the second coin being a tail is still 1/3." He offers you 3-to-2, and you absolutely lump on, because now you are certain (incorrectly, as it happens, but we can cover that later) that the chance of the other coin being a tail is 50%.

Let's take the sample of four.

(A) Coins are HH. Man says "At least one of these coins is a head"
(B) Coins are TT. Man says "At least one of these coins is a tail"
(C) Coins are TH. Man says "At least one of these coins is a tail"
(D) Coins are TH. Man says "At least one of these coins is a head"

The "law of restricted choice" in A and B causes the probability of the other coin to be the same to move up to 50% from 33%.



But, suppose the man comes in the second time, tosses the two coins and puts his palm down, and, once again, says... "At least one of the coins is a head". Once again, Mr Birks offers you 3-to-2 on the other coin being a head. Do you, Mr Aardvark, still lump on?

This is a tougher prospect.

Clearly, therefore, what matters here (in terms of making money) is the set of "choices of what to say" available to the man who tosses the coin.

Suppose he comes in, tosses the two coins, looks at them. Says nothing. And then tosses the two coins again. At which point he says "At least one of the coins is a head".

Even you, Mr Aardvark, will deduce from the fact that the man said nothing after the first toss that there is a state of affairs that commits him to silence. Thus, when he makes the second toss and states "At least one of them is a head", you will induce (perhaps incorrectly, but it's a reasonable induction) that his previous silence was because he had tossed two tails.

At this point you, Mr Aardvark, will conclude that the chance of the other coin being a head is 1/3.

In real life, of course, Mr Birks would deduce, as soon as the man said the second time "at least one of them is a tail" (having said the first time that "at least one of them is a head"), that the chance of the other coin being a tail was, because of the law of restricted choice, 50%, not 33.3%.


The key, therefore, is what the man is instructed to say. Even the above is simplistic.

I have, in fact, assumed two possibilities (nos (1) & (4) below) when there are more than this. Remember, we are talking about real prop bets here.

1) The man is told (or has decided) to say either "at least one is a head" or "at least one is a tail". If it's a head and a tail, he can say either.
2) The man is instructed to say "at least one is a tail" ONLY if both are tails. Otherwise, he says "at least one is a head".
3) The man is instructed to say "at least one is a head" ONLY if both are heads. Otherwise, he says "at least one is a tail".
4) The man is instructed to say "at least one is a head" if at least one is a head. Otherwise, he stays silent and tosses again.
5) The man is instructed to say "at least one is a tail" if at least one is a tail. Otherwise, he tosses again.

There are many other combinatorial possibilities, and clearly it would be erroneous to apply an equal probability to all of them. So let's just stick to cases (1) and (4) for the moment.
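For reference, here is a small enumeration sketch (my own, and nothing more than a sketch) of what each of the five instructions above does to the chance that the hidden coin matches the statement you actually hear. The spread of answers is exactly why applying equal probability to all the rules would be so dangerous.

```python
from fractions import Fraction
from itertools import product

tosses = list(product("HT", repeat=2))  # HH, HT, TH, TT, each with prior 1/4

def speaks(rule, toss):
    # Returns {statement: probability of making it} for this toss under each
    # rule. Rules 4 and 5 stay silent and re-toss on the "wrong" result, so
    # their silent branch never reaches the listener (conditioning on him
    # speaking is the same as waiting for the re-tosses to finish).
    heads, tails = toss.count("H"), toss.count("T")
    if rule == 1:
        if heads == 2: return {"head": Fraction(1)}
        if tails == 2: return {"tail": Fraction(1)}
        return {"head": Fraction(1, 2), "tail": Fraction(1, 2)}
    if rule == 2:
        return {"tail": Fraction(1)} if tails == 2 else {"head": Fraction(1)}
    if rule == 3:
        return {"head": Fraction(1)} if heads == 2 else {"tail": Fraction(1)}
    if rule == 4:
        return {"head": Fraction(1)} if heads >= 1 else {}
    if rule == 5:
        return {"tail": Fraction(1)} if tails >= 1 else {}

for rule in (1, 2, 3, 4, 5):
    for statement in ("head", "tail"):
        p_statement = Fraction(0)
        p_both_match = Fraction(0)
        for toss in tosses:
            p = Fraction(1, 4) * speaks(rule, toss).get(statement, Fraction(0))
            p_statement += p
            if toss[0] == toss[1]:  # both coins show the stated face
                p_both_match += p
        if p_statement:
            print(rule, statement, p_both_match / p_statement)
# Rule 1 gives 1/2 either way; rules 2 and 3 give 1/3 one way and certainty
# the other; rules 4 and 5 give the classic 1/3.
```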


Now, let's go back to the first coin toss again. He tosses the coins, and says "At least one is a head". What is the chance that the other is a head?

If we take the simple sample here (choices of (1) and (4) being the "actual state of affairs"), we could say: "He might have been told to stay silent if neither was a head, or he might have been told to say either 'at least one is a head' or 'at least one is a tail' (his choice) if it was a head and a tail. These are equally likely states of affairs (under the principle of 'in the beginning, everything was even money')".

Now, this is where it gets interesting. In effect, if we are talking about having a bet on this, we know that there's an evens chance that it's 33.3% (state of affairs 4 is true), and an evens chance that it's 50:50 (state of affairs 1 is true).

This gives us the intermediate, real, I'm-prepared-to-put-my-money-on-it chance of 41.67% that the other coin is a head!
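For what it's worth, a quick sketch of that arithmetic; the equal weighting of the two states of affairs is the "even money" assumption above, and a fuller treatment would also re-weight the two rules by how likely each is to produce the statement actually heard, which is part of what makes the later calculations trickier.

```python
from fractions import Fraction

# Equal prior weight on the two candidate rules, as assumed above.
w_rule_1 = Fraction(1, 2)  # free choice of statement on a mixed toss
w_rule_4 = Fraction(1, 2)  # says "head" unless forced into silence by two tails

# Chance the other coin is a head under each rule, then the weighted mix.
mix = w_rule_1 * Fraction(1, 2) + w_rule_4 * Fraction(1, 3)
print(mix, float(mix))  # 5/12, i.e. 41.67%
```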

Once you add in the other "possible states of affairs", things become more complicated. I can return to that later.

Now, if the man goes out, comes back in, and says "At least one is a tail", I think that the odds shift directly to 50%.

However, if he once again says "at least one is a head", we have an interesting calculation.

It's my guess (and it is a guess -- I haven't seriously worked it out) that this would shift the best "prop bet" likelihood to about 37.75% that the other coin is a head.

For every time the man comes in and says "at least one is a head", rather than saying "at least one is a tail" or saying nothing and tossing again, you halve the distance between your previous percentage and 33.33%.
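Here is my attempt at modelling that, and it is only a sketch: the per-visit likelihoods are my own assumption (under rule (1) he says "head" on half of his visits; under rule (4) he can only ever say "head" when he speaks), and different modelling choices give slightly different numbers from the 37.75% guess or from exact halving. The broad claim survives, though: each repeat of "at least one is a head" drags the figure down towards 33.33%.

```python
# Start where the argument above leaves off: 41.67% after the first
# "at least one is a head", with equal weight on rule (1) and rule (4).
w1, w4 = 0.5, 0.5

# Assumed likelihood of hearing "head" on a further visit under each rule.
LIK1, LIK4 = 0.5, 1.0

def chance_other_is_head(w1, w4):
    return w1 * (1 / 2) + w4 * (1 / 3)

print("after visit 1:", round(chance_other_is_head(w1, w4), 4))  # 0.4167
for visit in range(2, 7):
    w1, w4 = w1 * LIK1, w4 * LIK4            # update the rule weights
    w1, w4 = w1 / (w1 + w4), w4 / (w1 + w4)  # renormalise
    print(f"after visit {visit}:", round(chance_other_is_head(w1, w4), 4))
# 0.3889, 0.3667, 0.3519, ... creeping towards 0.3333.
```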

If anyone wants to test this financially, I am happy to walk into a room and be the coin tosser and statement maker while two people bet against each other!


PJ

Date: 2010-07-09 01:24 pm (UTC)
From: [identity profile] jiggery-pokery.livejournal.com
"The key, therefore, is what the man is instructed to say."

Ahh, that's what I feared.

I haven't spent time thinking about this yet, but more than anything else I'm reminded of the Monty Hall problem (http://en.wikipedia.org/wiki/Monty_Hall_problem). Specifically, see the "Other host behaviours" section. Depending upon the way you ask the host to behave, you can get the probability to be pretty much anything you want.

I suspect that it's probably possible to use a similar line of thought on the "boy born on Tuesday" issue, but I haven't worked through the logic here.

Date: 2010-07-09 01:30 pm (UTC)
From: [identity profile] jiggery-pokery.livejournal.com
Obviously I had not read any of the comments to your previous post at the time I posted that one.

Date: 2010-07-09 01:48 pm (UTC)
From: [identity profile] real-aardvark.livejournal.com
Give me some credit, Birks -- I did stay awake until 3am proving to myself (fairly rigorously) that I was wrong on Monty Hall.

I haven't got around to re-evaluating my (presumably false, because Those Who Know in academia, etc, disagree with me) two boys stance. Basically it boiled down to the use of language and my interpretation, based on the specific use of language, that the law of restricted choice didn't apply in the case quoted. I mean, everybody is capable of handling a "classic" four-state HH HT TH TT problem, because, as you say, the maths are trivial. (Doesn't always help me, though, as my arse-first "solution" to Monty Hall demonstrates all too clearly.)

I'll concede an assumption of defeat on yesterday's BB (I'm not being mealy mouthed, just admitting that testing that assumption is pretty fruitless) and have a think about this current formulation, which has interesting subtleties.

1 In 4, Baby, 4 In 1

Date: 2010-07-09 02:55 pm (UTC)
From: [identity profile] real-aardvark.livejournal.com
OK. As a preliminary, you meant "deduce" by "induce," but I'm sure I did the same thing at some point yesterday. As an aside, have you noticed how often one finds oneself re-using a word in, say, a four para piece not because it's the right word, but simply because one used it earlier on? If there's a cure for this affliction, I wish somebody would let me know. It's driving me crazy.

For added stringency, case (1) needs a verbal toughening up -- something along the lines of the man tossing a coin whilst outside to predetermine his choice of words if the result inside is HT or TH. But it's fair to assume that some such aleatory procedure takes place.

I actually agree with everything you say after "Now, let's go back to the first coin toss again." I had to think quite hard about it, but I'm fairly sure I'd have reached each separate conclusion myself, given the wording. Of course that's easy for me to say, because you've already written it down ... Not entirely sure about all the preceding stuff, but then I haven't looked carefully enough at it.

The interesting thing, of course, is that the world changes the minute the man walks in, tosses the coins, and says nothing. At that point you've eliminated (1) and the odds permanently shift to 33%. If you expect Aardvark not to spot this and consequently not adjust his bet, there's a certain amount of arbitrage available on the previous bets to make sure he thinks he's winning...

To return briefly to the BB problem, I found this (http://mathforum.org/library/drmath/view/52186.html) interesting on the topic of "choosing" the candidate family. To simplify, if you pick a boy out of the pool to determine the family, the chances are 50:50. If you pick the family out of the pool and find a boy, the chances are 33:66.

Except ... if the set of all families with two children has a cardinality of one -- there's only one such family in the whole world -- then the chances are, apparently, simultaneously 50:50 and 33:66. Which just goes to show how difficult it is to formulate these problems.

Re: 1 In 4, Baby, 4 In 1

Date: 2010-07-09 03:19 pm (UTC)
From: [identity profile] real-aardvark.livejournal.com
(To further comment on my final paragraph, note that Gardner (as quoted in wikipedia (http://en.wikipedia.org/wiki/Boy_or_Girl_paradox#cite_note-gardner-0)) got the phraseology right.)

(If you're going to insist on specifying the gender, no matter what, then I believe that the Bayesian formulation in the wikipedia article is the way to go.)

Re: 1 In 4, Baby, 4 In 1

Date: 2010-07-10 01:51 pm (UTC)
From: [identity profile] peterbirks.livejournal.com
I had some minor thoughts about induce vs deduce. I decided on "induce" and I still do not think that it was wrong so to do.

PJ

Re: 1 In 4, Baby, 4 In 1

Date: 2010-07-10 05:12 pm (UTC)
From: [identity profile] real-aardvark.livejournal.com
It's a side issue, but still. Your comment in question:

"Thus, when he makes the second toss and states "At least one of them is a head", you will induce (perhaps incorrectly, but it's a reasonable induction) that his previous silence was because he had tossed two tails."

Induction in the mathematical sense does pretty much what it says on the tin. You start with the assumption of no historical knowledge, and you build up from one or more minimalist preconditions. (Usually it's one, but as long as it covers the entire problem space for preconditions, that's fine.) The reason you use induction is because (a) it eliminates all things that you do not know -- or else it fails, by definition -- and (b) you can use established rules to build from a lemma to a proof.

In this case, you are clearly (and correctly) using deduction, which is mathematically a process that involves historical knowledge. I'm not sure I've got that definition completely right, so let me assert that proof by induction is a recursive proof, whereas proof by deduction is linear.

I'm prepared to consider that it is possible to present a "proof by deduction" as a "proof by induction." Within the set of proofs (given axioms blah blah don't forget Goedel blah blah) there may be proofs that are, idempotently, both inductive and deductive.

Let's say you found one of those.

In that case, using a common verb that reminds people of using mechanical and/or pharmaceutical means for causing a birth to occur is almost certainly an unfortunate alternative to using another common verb that reminds people of Sherlock Holmes.

Re: 1 In 4, Baby, 4 In 1

Date: 2010-07-10 05:54 pm (UTC)
From: [identity profile] peterbirks.livejournal.com
As you say, it's a side issue. I wouldn't have been unhappy with using "deduce" either, even though I don't think it's quite right.

"Deduce" is right (and "induce" is wrong) when you say "It's a number from 1 to 8, but it isn't 8". I therefore deduce that it's "a number from 1 to 7".

I "induce" the general case that the silence was because he had two tails, because his statement "at least one of them is a head" after his previous silence acts in a different logical way on the previous silence after a coin toss from the way the statement "but it isn't 8" acts on the previous statement "it's a number from 1 to 8".

It was possible, for example, that the man was operating under a rule of which we were unaware (but which was not the rule that, if he threw two tails, he remained silent and tossed again) that required him to toss the coins again.

Therefore I felt that I was not using mathematical deduction in the Holmesian sense (eliminating all cases until whatever remained, must, however improbable, be the truth).

I was, instead, inducing a state of affairs in the world from a single instance.

The unfortunate fact that "induction" has other medical associations is an irritation, but I don't think I should allow that to discourage me from using it in the sense which does exist.

PJ

Re: 1 In 4, Baby, 4 In 1

Date: 2010-07-10 06:36 pm (UTC)
From: [identity profile] real-aardvark.livejournal.com
Actually, it should. Communication is communication. (How's that for a piece of Sartrean wit?)

Or, in other words, and God knows I've been reading you for 30 years because you obviously believe in this, language is about clarity.

(It's still a side-issue, though.)

I can't think of a single instance of a logical or mathematical framework that allows you to "induc[e] a state of affairs in the world from a single instance in the future."

Well, I can, actually. That single instance (and as I say, it doesn't have to be a single instance -- as long as it covers the problem domain) is by necessity the single instance from which you infer the subsequents. Given my assumption that there is a part of the proof space where deduction is exactly equivalent to induction (and don't forget, I'm not sure there is one; I'm just allowing you that possibility), then, I believe, you are taking a deductive proposition such as

D <- C <- B <- A

and, in your words, "inducing [actually inducting, I think] a state of affairs in the world from a single instance."

I wish to put forth the proposition (in a Jim Morrison sort of way) that you cannot induce D from A. You cannot. Dum de dum de soft machine dum de dum ...

Your single instance is in the historical present. Your conclusions are based upon analysis of the historical past.

That would be deduction -- not induction.

Induction is normally represented as a recursive process. There is no recursion present in your argument.

On the side: No, you weren't using Holmesian elimination. You weren't even suggesting that you were. I used that as a metaphor, and you are throwing it back as an accusation. What you were doing was to use a set of (reasonably assumed) preconditions and arguing based on those preconditions. That still makes it a deduction when you are talking about the set of results based upon those preconditions.

Basically, I think what I'm saying is that you can inductively generate a statistical spread on "first principles," without history. If you are going to use (perfectly validly) a set of historical results (which already represents a statistical spread), then it's deduction. If only because you know more. (Which is what I thought was the point at issue in the first place.)

Of course, this might only go to prove that I was right to do two A-levels in Maths & Mechanics rather than one in pure maths and one in stats.

Can't say I feel the loss after all these years.

Date: 2010-07-09 03:30 pm (UTC)
From: [identity profile] jiggery-pokery.livejournal.com
There's an interesting ("interesting if you're a boring person!", which I am) analogy to those situations you sometimes see in TV poker, and quite possibly in real-life ftf poker, where you have folded to a villain who you believe to have a specific pocket pair. Villain offers to show you one of his cards and it is indeed of the specific rank. These are more probably questions of villain's psychology (and choice of card revelation protocol) than of probability, but:

1) Is the chance that he has the pocket pair different if you've chosen which of the two cards to see to the chance that he has the pocket pair if he's chosen which of the two cards to show you?

2) Is the chance that he has the pocket pair different if it's your idea to pick which card to see than if it's his idea to pick which card to see?

Date: 2010-07-09 05:26 pm (UTC)
From: [identity profile] real-aardvark.livejournal.com
Not entirely sure what the difference between (1) and (2) is, but ignoring psychology and protocol, and going for (1), and reducing it to the by now familiar {HH HT TH TT}, and assuming that your desired pocket pair is HH, my answer is disturbingly simple:

If you chose card 1, you're 50/50.
If you chose card 2, you're 50/50.

You are therefore 50/50.

If villain chooses, you sort of have to specify whether he knows the pair you're looking for or not. (We can assume that he knows you're looking for a pair.)

We can assume that he wants you to turn the other card over.

If he knows that you're looking for HH, then he'll flip an H (and you've excluded TT). You are therefore 33/66, because this is a matter of combinations and not the number of Hs available.

If he just knows you're looking for a pair, then his choice is totally random. However, he hasn't flipped a T, so you're still left with {HH HT TH}. In this case, however, his random selection encompassed all six cards, four of which are H. In two of those cases, you hit the pocket pair. Your chances are therefore 50/50.
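For what it's worth, a small enumeration of both reveal protocols in the toy {HH HT TH TT} model (my own code, and it assumes all four deals are equally likely, which a real deck only approximates) backs up those two figures:

```python
from fractions import Fraction
from itertools import product

hands = list(product("HT", repeat=2))  # HH, HT, TH, TT, each with prior 1/4

def p_HH_given_H_shown(protocol):
    p_shown_H = Fraction(0)
    p_shown_H_and_HH = Fraction(0)
    for hand in hands:
        prior = Fraction(1, 4)
        if protocol == "villain shows an H whenever he has one":
            show_H = Fraction(1) if "H" in hand else Fraction(0)
        else:  # "a card is revealed at random"
            show_H = Fraction(hand.count("H"), 2)
        p_shown_H += prior * show_H
        if hand == ("H", "H"):
            p_shown_H_and_HH += prior * show_H
    return p_shown_H_and_HH / p_shown_H

print(p_HH_given_H_shown("villain shows an H whenever he has one"))  # 1/3
print(p_HH_given_H_shown("a card is revealed at random"))            # 1/2
```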

Well, that's my analysis anyway. Probably gruesomely wrong, as usual. Now, if you're going for a 52 card pack, less known cards, with a given range at a certain percentage confidence, and you're not dealing with {HH HT TH TT}, and and and ... then all bets are off.

Except in the case where villain knows that you're looking for HH, and you can replace 'T' with 'o' where 'o' doesn't necessarily represent a pair. In which case, your odds if you let villain pick are still 33/66.

Date: 2010-07-09 05:30 pm (UTC)
From: [identity profile] real-aardvark.livejournal.com
(Er, not quite on the last para. On the alternative view that villain knows you're looking for a pair, but doesn't know which one, then A and B coalesce on the assumption that 'o' is not a pair. In that case your odds are 33/66, because villain actually has more information, implicitly in the cards.)

Date: 2010-07-11 12:49 am (UTC)
From: [identity profile] jiggery-pokery.livejournal.com
If he has the pocket pair, then you're always going to see what you want to see.

If he doesn't have the pocket pair, then either (a) he would prefer you to see he had it or (b) he would prefer you to see he didn't have it.

In case (a),

1) If you've chosen, then you've got a 50% chance of seeing the card you want. If he's chosen, then you've got a 100% chance of seeing the card you want.

In case (b),

1) If you've chosen, then you've got a 50% chance of seeing the card you want. If he's chosen, then you've got a 0% chance of seeing the card you want.

I can't work it out now because I'm off to bed, but there might be some way of drawing up a tree diagram which goes through the eight possibilities and comes up with an appropriate function. Possibly I will have a go at this tomorrow.

I'm sure I had a point with question 2 at the time, but I can't remember what it was and on reflection it may well not make a difference.

Date: 2010-07-11 07:15 am (UTC)
From: [identity profile] peterbirks.livejournal.com
You did not specify that you had told villain what the pair was that you believed he had. Once you have done that then, yes, it changes things.

It basically depends on whether villain wants you to think you made the right decision or the wrong decision when you folded. Usually one wants people in tourneys to think that folding was the right decision, so, yes, villain would choose to show you the card that confirmed you were right to fold more than he would choose to show you the "other" card.

Date: 2010-07-10 02:34 pm (UTC)
From: [identity profile] peterbirks.livejournal.com
It's a law of restricted choice question, but as phrased I think it's difficult to be certain that my instinctive answer ("the chance is the same, as far as you are aware, whether you pick the card or he does") is correct.

The famous example of this, of course, was when a player offered to let his opponent pick one of his hole cards for $25. The player who paid the $25 thought that seeing one of the hole cards would give him more information about his opponent's hand, whereas in fact it did not. I'll dig out the example when I have time. I'm fairly sure that it is in Al Alvarez's first book on poker, "The Biggest Game In Town".

This does bring up a minor philosophical point. Suppose I am drawing to a flush with one card to come. We casually say that "your chance of hitting is 9/46", but, since the order of the cards is predetermined, we could (if we knew the order of the cards still in the deck) say that "your chance is 1" or "your chance is 0".

Both of us could be "right" simultaneously, simply because we (who know the order of the cards) are essentially inhabiting a different universe (in probabilistic terms) than the person who does not know the order of the cards. This has a large number of real-world applications, obviously.

PJ

Date: 2010-07-10 03:08 pm (UTC)
From: [identity profile] slowjoe.livejournal.com
It's Biggest Game in Town, Strauss vs AN Other, can't find my copy, so, from memory:

Board is x72, Strauss has raised PF with 72, and I think Mr. Other has played back at him, such that Strauss thinks that his two pair is behind. So Strauss bets, then he offers to show a card for $25 while villain is considering a call. The implication is that he has a set, so he gets his opponents to fold a better hand.

(I suspect it might be the turn and the turn paired the board, vs an overpair.) AN Other MIGHT be Jesse Alto.

Date: 2010-07-10 03:56 pm (UTC)
From: [identity profile] jiggery-pokery.livejournal.com
The 9/46 approach is a long-term view, the 1-or-0 approach is a one-instance, short-term view. However, I'm very interested to see (and I've only seen this in recent years; I don't know whether this is because it really is happening more frequently or because I'm being exposed to it more frequently) more and more players prepared to "run it twice" after a player has gone all in, particularly on crucial rivers and turns. This, to me, seems to bias things towards long-term probabilities and away from the results of a one-time gamble, when so much of poker is about one-time gambles.

It makes me wonder whether there would be players interested in playing a poker variant with the extra rule that, at any point before the deal of the final card, any player still in the pot can propose that the hand concludes at that point. If all players still in the pot agree, then all pocket cards are revealed and the pot is shared in proportion to the probability of each player winning the hand, based on the cards yet undealt. Effectively, it's not so much "run it twice", as "run it through the entire deck". Perhaps this is a rare gap in the market by which a poker site might yet distinguish itself. Is this still an interesting game to you? Do you think there would be many players who would enjoy this?
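In case the mechanics aren't clear, here is a minimal sketch of how that settle-up might work for the heads-up, one-card-to-come case; the settle() function and the hero_wins() predicate are hypothetical placeholders (a real implementation would need a full hand evaluator applied to both revealed hands), but the arithmetic is just "share of the pot = share of the remaining deals you win".

```python
from fractions import Fraction

def settle(pot, unseen_cards, hero_wins):
    # Deal every remaining card in turn, score each outcome, and split the
    # pot in proportion. Ties count as half a win each.
    wins = sum(1 for c in unseen_cards if hero_wins(c) == "hero")
    ties = sum(1 for c in unseen_cards if hero_wins(c) == "split")
    hero_share = pot * Fraction(2 * wins + ties, 2 * len(unseen_cards))
    return hero_share, pot - hero_share

# Toy example: hero is on a flush draw with 9 outs among 44 unseen cards
# (both hands revealed, four board cards out), and no split is possible.
unseen = [f"out{i}" for i in range(9)] + [f"blank{i}" for i in range(35)]
print(settle(100, unseen, lambda c: "hero" if c.startswith("out") else "villain"))
# -> hero banks 225/11 of a 100 pot, about 20.45; villain takes the rest.
```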

Date: 2010-07-10 04:20 pm (UTC)
From: [identity profile] peterbirks.livejournal.com
There is a severe potential collusion problem with "run it twice", and I don't think it should be allowed. Your "variant" points out the collusion aspect even more starkly.

If X & Y agree that, whenever they are all in against each other, they will "run it infinite times", but that, whenever either X or Y is in a pot against player C they will only run it once, then player C may have the same long-term expectation, but he faces far higher volatility.

Since volatility has a (negative) value (stocks with high beta are valued less than equivalent stocks with low beta), such an agreement between X and Y works to the joint benefit of X and Y and to the detriment of C. It is, therefore, collusion.
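To put rough numbers on the volatility point, here is a crude simulation sketch: it treats each "run" as an independent even-money flip for an equal slice of the pot, which ignores card-removal effects and any real edge, but it shows the shape of the argument -- expectation unchanged, volatility falling as the number of runs rises.

```python
import random
import statistics

def all_in_result(pot, runs):
    # Each run awards pot/runs to whoever wins that run; every run is
    # modelled as an independent 50:50 (real runs aren't quite independent,
    # since the cards come off the same deck, but the point survives).
    return sum(pot / runs for _ in range(runs) if random.random() < 0.5)

random.seed(1)
pot, trials = 100, 100_000
for runs in (1, 2, 10):
    results = [all_in_result(pot, runs) for _ in range(trials)]
    print(runs, round(statistics.mean(results), 1), round(statistics.stdev(results), 1))
# Expectation stays around 50 in every case; the standard deviation drops
# from ~50 (one run) to ~35 (two runs) to ~16 (ten runs).
```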

In live poker games, if you are playing high-low PLO or something, the effect is even more marked. C might be well up on the night. X & Y have pulled out more cash. Raising and reraising commences. C folds, at which point X & Y run it twice and, nearly always, split it.

Or, to put it another way, suppose X & Y raised like nutters, getting C to fold, and then split the pot between them in the toilets immediately afterwards, no matter who "won" the hand?

PJ

Date: 2010-07-10 05:29 pm (UTC)
From: [identity profile] real-aardvark.livejournal.com
I think it's more of a category error. It's an extremely unattractive variant, as far as I can see, because it completely trashes any line of thought that leads you to the "share the pot, proportionately" stage.

To put it another way, if you've worked a perfectly sensible line based on perfectly sensible judgement and a reasonable approximation of opponents' range and a fair understanding of the way their stacks and your stack affect the odds and a good few guesses on the side pots, if any, and some idiot comes up and says:

Game over, boys! I've got an overpair on the turn! Let's stop betting right now, because I don't feel like risking a possible flush draw on the river. We'll just share what we've got...

Hell, I might be totally ignorant about poker, but this is a seriously unattractive proposition.

Date: 2010-07-11 12:53 am (UTC)
From: [identity profile] jiggery-pokery.livejournal.com
I don't agree with the assertion that people who choose to play poker necessarily give volatility an inherent negative value. Heck, most people who choose to play games involving any non-deterministic elements think that volatility generates entertainment which makes it worth their time, as well as the enjoyment from making the decisions that were required in the game.

Have you ever gambled money on a coin flip in a circumstance where you have convinced yourself that it's an even money proposition - for instance, you're flipping a coin that you provided yourself and believed to be fair? I'm prepared to believe that you're rational enough and EV-focused enough that you haven't, or at least not while you were sober, but I bet that many poker players have.

Date: 2010-07-11 07:09 am (UTC)
From: [identity profile] peterbirks.livejournal.com
Volatility has a negative economic value, because it increases the required size of your bankroll to have the same chance of not going busto. So an agreement between two players to run it twice against each other but not against a third player is collusion between the first two, whether or not the third is an action-junkie.

PJ

Date: 2010-07-11 11:01 pm (UTC)
From: [identity profile] miltonkeynesman.livejournal.com
I was enjoying the recent blogs on probability, but got a bit lost at the end on the poker discussion. Try this one: if you had said that the last two football World Cup Finals would be contested by four different European countries, none of which was Germany, how many people would have expected England to be one of them? Fortunately, I can just remember 1966; for those of us watching in black & white, England were in the red shirts.
Cheers old mate.
Richard
