## Mathematical Recreations


#### High Card Wins

The following game is a variation of a puzzle that I found on Michael Shackleford's Math Problems site. Problem 60 reads as follows:

Computer helpful. Two players are each dealt a card face down. Each player may look at his own card. The highest card wins. Cards are valued as in poker with aces being low. The first player may either keep his card or switch with the second player. The second player may keep his card, whether it be his original card or one that the first player gave him after switching, or trade it with the next card on the deck, which is also face down. The loser pays the winner \$1 and if both cards are equal then no money exchanges hands. For the sake of simplicity assume an infinite number of decks. Both players are infinitely logical. At what point should the first player switch? At what point should the second player switch if the first player doesn't switch? What is the expected gain of the first player?

The game that I want to consider is only slightly different. Rather than have all ties end in a draw, the second player wins all ties, with just one exception. If the first player elects to trade cards and they turn out to be equal, then player 2 must draw the third card in an attempt to break the tie. This slight rule change makes a profound difference in the solution, and that difference is the entire point of this topic.
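To make the modified rules concrete, here is a minimal Python sketch of one round, assuming threshold-style strategies. The function and parameter names are my own, not part of the original puzzle; after a trade, Player 2 has seen both cards, so her post-trade decision is played optimally as described below.

```python
import random

def play_round(rng, p1_trades, p2_draws):
    """One round of the modified game; returns True iff Player 1 wins.

    p1_trades(i) -> should Player 1 trade when dealt i?
    p2_draws(j)  -> should Player 2 draw when dealt j and Player 1 kept his card?
    Cards run 1 (ace) .. 13 (king); infinite decks make every deal independent.
    """
    p1 = rng.randint(1, 13)
    p2 = rng.randint(1, 13)
    if p1_trades(p1):
        p1, p2 = p2, p1
        # After a trade Player 2 has seen both cards: she draws whenever she
        # holds the worse card, and she *must* draw on a tie (modified rule).
        if p2 <= p1:
            p2 = rng.randint(1, 13)
    elif p2_draws(p2):
        p2 = rng.randint(1, 13)
    return p1 > p2  # Player 2 wins all remaining ties
```

With `p1_trades = lambda i: i <= 7` and `p2_draws = lambda j: j <= 8`, a long Monte Carlo run approaches the 1097/2197 ≈ 0.4993 figure that appears in the analysis below.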

#### A simple approach

There are only three decision points in this game. Chronologically, the first decision point is Player 1's: Should Player 1 elect to trade with Player 2? The other two decision points are Player 2's. If Player 1 elects to trade, should Player 2 switch her card for the top card in the deck? If Player 1 doesn't elect to trade, should Player 2 switch her card for the top card in the deck?

We'll start with Player 2's decision after Player 1 elects to trade cards. At that point, Player 2 is either holding a winning position (Player 1's original card was greater than Player 2's original card) or a losing position (Player 2's original card was the same as or greater than Player 1's original card). If Player 2 is holding a winning position, she will never exchange her card, because she has nothing to gain. On the other hand, Player 2 will always exchange her card if she holds a losing position.

So, we've solved one of the three decision points. Using this information, we can determine Player 1's chance of winning if he elects to trade his card with Player 2. Let i be the value of Player 1's card (from 1 to 13). Let j be the value of Player 2's card. Player 1's probability of winning is zero if j < i. If, however, j ≥ i, then Player 2 will be forced to switch with the top card in the deck, and Player 1's chance of winning will be (j–1)/13. (Remember, Player 2 wins in a tie, so even if j = 13, Player 2 could still draw another king (value 13) to win.)

Player 1's chance of winning after a trade, P1(i, trade), is given by

 (1) P1(i, trade)  =  (1/13) ∑j ≥ i (j – 1)/13  =  (14 – i)(11 + i)/(2 · 13²)

There are at least two straightforward ways to do the sum. I did it by recognizing that j – 1 depends linearly on j, that there are 14 – i terms, and that the average term is [(i – 1) + (13 – 1)]/2 = (i + 11)/2.
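As a sanity check of my own (not part of the original article), the closed form agrees with the defining sum for every card value, using exact rational arithmetic:

```python
from fractions import Fraction

# Compare the direct sum defining P1(i, trade) with the closed form.
for i in range(1, 14):
    direct = Fraction(1, 13) * sum(Fraction(j - 1, 13) for j in range(i, 14))
    closed = Fraction((14 - i) * (11 + i), 2 * 13**2)
    assert direct == closed
```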

Now, if we can determine Player 1's chance of winning when he keeps his card, then we can compare the two options to determine the correct decision. But Player 1's chance of winning depends on when Player 2 elects to switch her card with the top card from the deck. Similarly, Player 2's decision about whether or not to switch with the top card from the deck depends on when Player 1 elects to switch cards with Player 2. So these two decisions are directly coupled.

Let's look briefly at Player 2's decision. When Player 2 is making her decision, Player 1's decision is already complete. There are 13 possible situations that Player 2 could be in at this point, since the only knowledge that she has is the value of her own card and the fact that Player 1 did not want to switch cards. Suppose Player 2 is holding a card with value g, and that her chance of winning is greater if she switches that card with the top card from the deck than it is if she keeps her original card. Then she should switch when holding a card with value g, and she should also switch when holding any card with value less than g. Let's take g to be the greatest value at which Player 2 should switch. Then Player 2 will always switch if j ≤ g, and Player 2 will always hold if j ≥ g + 1. (Then g/13 is the overall probability that Player 2 will switch with the top card from the deck after Player 1 has elected not to switch cards.)

This analysis of Player 2's decision isn't quite rigorous, but it is essentially correct. We've now determined that Player 2's decision can be characterized by a single parameter g. Since both players are infinitely logical, Player 1 will know the value of g, and he will use that knowledge to make his decision.

A similar analysis indicates that Player 1's decision is also characterized by a single parameter. If it makes sense for Player 1 to trade when holding a card with value f, then it makes sense for Player 1 to trade when holding any card with value less than f. We take f to be the greatest value at which Player 1 should trade with Player 2.

The first set of graphs below shows how Player 2 would make her decision, if she knows the value of Player 1's decision point f.

##### Graphs of Player 1's probability of victory as a function of Player 2's decision point g for different Player 1 strategies f

Player 1 trades A–5 and holds 6–K (f = 5)  Player 2 should trade A–7 and hold 8–K (g = 7) giving P1 = 1051/2197.

Player 1 trades A–6 and holds 7–K (f = 6)  Player 2 should trade A–8 and hold 9–K (g = 8) giving P1 = 1082/2197.

Player 1 trades A–7 and holds 8–K (f = 7)  Player 2 should trade A–8 and hold 9–K (g = 8) giving P1 = 1097/2197.

Player 1 trades A–8 and holds 9–K (f = 8)  Player 2 should trade A–9 and hold 10–K (g = 9) giving P1 = 1096/2197.

Player 1 trades A–9 and holds 10–K (f = 9)  Player 2 should trade A–9 and hold 10–K (g = 9) giving P1 = 1074/2197.

Player 1 trades A–10 and holds J–K (f = 10)  Player 2 should trade A–10 and hold J–K (g = 10) giving P1 = 1029/2197.

From these graphs, we would infer that Player 1 should choose f = 7 because that gives Player 1 the best possible outcome (P1 = 1097/2197). In response, Player 2 should choose g = 8.
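The table values can be reproduced by exact enumeration. The following sketch is my own code (the helper name `p1_win` is not from the original); it sums over both deals and, where a draw occurs, over the drawn card:

```python
from fractions import Fraction

def p1_win(f, g):
    """Exact probability that Player 1 wins when he trades any card up to f
    and Player 2, facing a held card, draws on anything up to g."""
    total = Fraction(0)
    for i in range(1, 14):              # Player 1's card
        for j in range(1, 14):          # Player 2's card
            if i <= f:                  # trade: Player 1 now holds j
                if j >= i:              # Player 2 must or should draw;
                    total += Fraction(j - 1, 13)   # P1 wins if it lands below j
            elif j <= g:                # no trade; Player 2 draws
                total += Fraction(i - 1, 13)
            elif i > j:                 # both stand pat; Player 2 wins ties
                total += 1
    return total / 169

assert p1_win(7, 8) == Fraction(1097, 2197)
assert p1_win(8, 8) == Fraction(1098, 2197)
```

Minimizing `p1_win(7, g)` over g does return g = 8, matching the graph for f = 7.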

The next set of graphs below shows how Player 1 would make his decision, if he knows the value of Player 2's decision point g.

##### Graphs of Player 1's probability of victory as a function of Player 1's decision point f for different Player 2 strategies g

Player 2 trades A–6 and holds 7–K (g = 6)  Player 1 should trade A–8 and hold 9–K (f = 8) giving P1 = 1128/2197.

Player 2 trades A–7 and holds 8–K (g = 7)  Player 1 should trade A–8 and hold 9–K (f = 8) giving P1 = 1113/2197.

Player 2 trades A–8 and holds 9–K (g = 8)  Player 1 should trade A–8 and hold 9–K (f = 8) giving P1 = 1098/2197.

Player 2 trades A–9 and holds 10–K (g = 9)  Player 1 should trade A–7 and hold 8–K (f = 7) giving P1 = 1102/2197.

Player 2 trades A–10 and holds J–K (g = 10)  Player 1 should trade A–7 and hold 8–K (f = 7) giving P1 = 1120/2197.

Player 2 trades A–J and holds Q–K (g = 11)  Player 1 should trade A–6 and hold 7–K (f = 6) giving P1 = 1154/2197.

From these graphs, we would infer that Player 2 should choose g = 8 because that gives Player 1 the worst possible outcome (P1 = 1098/2197). In response, Player 1 should choose f = 8.

If Player 1 settles on a strategy first, then Player 2 can ensure that Player 1's chance of winning is no more than 1097/2197, but if Player 2 settles on a strategy first, then Player 1 can ensure that his chance of winning is at least 1098/2197. This does not make sense, so what was wrong with the analysis?

#### A more rigorous approach

Put simply, the problem with the simple approach is that the decision point is not necessarily an integer. Player 1 does not have to choose between f = 7 and f = 8. He can do something in between. That is, he can (and should) always trade 7 and below, and always keep 9 and above, but trade 8 some of the time. You may freely skip this section without losing the thread.

A rigorous approach to this problem recognizes that Player 1 and Player 2 may have a probability of trading associated with each card value. Let fi be the probability that Player 1 will trade a card with value i. This gives

 (2) f  =  (1/13) ∑i fi

where f is the overall probability that Player 1 will trade his card with Player 2.

Similarly, we can let gj be the probability that Player 2 will switch a card with value j for the top card from the deck (after Player 1 has declined to trade).

We find that Player 1's chance of winning when holding a card with value i is given by

 (3) P1(i, hold)  =  (1/13) ∑j < i ( (1 – gj) + gj (i – 1)/13 )  +  (1/13) ∑j ≥ i gj (i – 1)/13

Even if we don't know the specific values of gj, we can set bounds on the value of P1(i, hold), and we can compare these bounds with P1(i, trade) from Equation (1).

The minimum value of P1(i, hold) is attained when gj = 1 for all j < i and gj = 0 for all j ≥ i. So we find

 (4) P1(i, hold)  ≥  (1/13) ∑j < i (i – 1)/13  =  (i – 1)²/13²

So, Player 1 should always hold when 2(i – 1)² ≥ (14 – i)(11 + i). This is true for i ≥ 9. So we know that fi = 0 for all i ≥ 9.

For the other bound, it's helpful to ask "What is the lowest value that Player 1 would ever keep?" Let this value be x. For values less than x, Player 2 will always trade; that is, gj = 1 for all j < x. In addition, we already know that Player 2 will never trade a 13, because doing so can never improve her holding. Let's look at Player 1's probability of winning when holding this minimum value x.

 (5) P1(x, hold)  =  (1/13) ∑j < x (x – 1)/13  +  (1/13) ∑x ≤ j ≤ 12 gj (x – 1)/13  ≤  12 (x – 1)/13²

It must be possible for P1(x, hold) to be greater than P1(x, trade), so 24(x – 1) > (14 – x)(11 + x). This only holds for x ≥ 7, so Player 1 will always trade when holding 6 or less, and fi = 1 for all i ≤ 6.
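Both threshold inequalities are easy to confirm by brute force; this is a quick check of my own, not part of the original argument:

```python
# Eq. (4) bound: Player 1 must hold when 2*(i-1)**2 >= (14-i)*(11+i).
assert [i for i in range(1, 14) if 2*(i - 1)**2 >= (14 - i)*(11 + i)] == [9, 10, 11, 12, 13]

# Eq. (5) bound: holding can beat trading only when 24*(x-1) > (14-x)*(11+x).
assert [x for x in range(1, 14) if 24*(x - 1) > (14 - x)*(11 + x)] == [7, 8, 9, 10, 11, 12, 13]
```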

At this point, we've determined all but two of the fi, so Player 1's strategy is defined by the two undetermined parameters f7 and f8. Now let's look at Player 2's decision more directly.

Consider Player 2's choice when holding j. We'll need the conditional probability that Player 1 holds a particular value, given that Player 1 did not trade cards. The probability that Player 1 holds a 13 at this point is simply (1/13)/(1 – f): the probability that he was dealt and kept a 13, divided by the probability that he kept any card. Player 1 has the same probability of holding any of the other values from 9 through 12. By the same reasoning, the probability that Player 1 holds a 7 is ((1 – f7)/13)/(1 – f). Since 1 – f = (7 – f7 – f8)/13, we find that the probability that Player 1 holds any particular value i is given by

 (6) pi  =  (1 – fi)/(7 – f7 – f8)

where fi = 0 for i ≥ 9.

Player 2's chance of winning if she elects to keep a card with value j is given by

 (7) P2(j)  = ∑i  ≤  j pi

Player 2's chance of winning if she elects to draw is given by

 (8) P2(draw)  =  (1/13) ∑j P2(j)  =  (28 – 7f7 – 6f8)/(13 (7 – f7 – f8))
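The closed form for P2(draw) can be confirmed against the defining sums with exact rationals. This is my own check; the helper name is arbitrary:

```python
from fractions import Fraction

def p2_draw_prob(f7, f8):
    # Conditional distribution of Player 1's held card (Eq. 6): he always
    # keeps 9-13, keeps a 7 with probability 1 - f7, an 8 with 1 - f8.
    keep = {i: 1 - (f7 if i == 7 else f8 if i == 8 else 0) for i in range(7, 14)}
    denom = sum(keep.values())                   # = 7 - f7 - f8
    p = {i: keep[i] / denom for i in keep}
    # P2(j) from Eq. (7), then average over the drawn card as in Eq. (8).
    return sum(sum(p[i] for i in p if i <= j) for j in range(1, 14)) / 13

for f7 in (Fraction(0), Fraction(1, 2), Fraction(1)):
    for f8 in (Fraction(0), Fraction(5, 7), Fraction(1)):
        closed = (28 - 7*f7 - 6*f8) / (13 * (7 - f7 - f8))
        assert p2_draw_prob(f7, f8) == closed
```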

We should note that Player 2's chance of winning when keeping a card with value j increases with j, while her probability of winning if she draws from the deck does not depend on j. We'll see in a moment that P2(10) > P2(draw) for all values of f7 and f8. Therefore, the same will also be true for all P2(j) where j ≥ 10.

The value of P2(10) is given by

 (9) P2(10)  =  (4 – f7 – f8)/(7 – f7 – f8)

When comparing P2(10) with P2(draw), we can multiply each by 13(7 – f7 – f8). We then find that 52 – 13f7 – 13f8 > 28 – 7f7 – 6f8 for all values of f7 and f8.

Similarly, we find that P2(8) is always less than P2(draw)

 (10) P2(8)  =  (2 – f7 – f8)/(7 – f7 – f8)

Note that 26 – 13f7 – 13f8 < 28 – 7f7 – 6f8 for all values of f7 and f8.

Player 2's strategy is now defined by only one undetermined parameter, g9.

We can now put a much better upper bound on P1(i, hold) since Player 2 will always hold a card that is 10 or more. Comparing with Equation (4), we find

 (11) P1(x, hold)  ≤  9 (x – 1)/13²

and 18(x – 1) > (14 – x)(11 + x) for all x ≥ 8, so we have learned that the lowest card that Player 1 may keep is an 8, and f7 = 1. Therefore, Player 1's strategy is also defined by only one undetermined parameter, f8.
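Again, brute force confirms the range (a small check of my own):

```python
# With the tighter bound of Eq. (11): holding can beat trading only when
# 18*(x-1) > (14-x)*(11+x), which first happens at x = 8.
assert [x for x in range(1, 14) if 18*(x - 1) > (14 - x)*(11 + x)] == [8, 9, 10, 11, 12, 13]
```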

#### A randomized strategy

If Player 1 never trades with Player 2 when he's dealt an 8 (that is, if f8 = 0) then Player 2 will never draw from the deck when she's dealt a 9 (that is, g9 = 0). In the first set of graphs above, this corresponds to the graph labeled "Player 1 trades A–7 and holds 8–K (f = 7)" which shows that Player 2 should trade an 8, but not a 9.

If, on the other hand, Player 1 always trades with Player 2 when he's dealt an 8, then Player 2 will always draw from the deck when she's dealt a 9. This corresponds to the next graph labeled "Player 1 trades A–8 and holds 9–K (f = 8)" which shows that Player 2 should trade a 9, but not a 10.

Now, if Player 1 only "occasionally" trades with Player 2 when he's dealt an 8, then Player 2 can't afford to switch strategies. She'll still hold onto a 9 if Player 1 doesn't trade cards, and Player 1 gets a slightly better margin by breaking his pattern. On the other hand, if Player 1 trades too often with Player 2 when he's dealt an 8, then Player 2 will switch strategies and start drawing from the deck when she holds a 9. When should Player 2 switch? (At what value of f8?)

I'm going to introduce one more piece of notation. Let Pf, g be Player 1's overall probability of winning when Player 1 and Player 2 are using thresholds f and g, respectively. For example, the minimum value in the graph labeled "Player 1 trades A–7 and holds 8–K (f = 7)" is P7, 8. If f and/or g is unspecified, then I'm considering a randomized strategy for Player 1 and/or Player 2 with parameter f8 or g9, respectively.

If Player 2 never draws from the deck when holding a 9, then Player 1's overall probability of winning is

 (12) Pf, 8  =  (1 – f8) P7, 8 + f8 P8, 8  =  (1 – f8) · 1097/2197 + f8 · 1098/2197

If Player 2 always draws from the deck when holding a 9, then Player 1's overall probability of winning is

 (13) Pf, 9  =  (1 – f8) P7, 9 + f8 P8, 9  =  (1 – f8) · 1102/2197 + f8 · 1096/2197

Player 2 should draw from the deck when holding a 9 if Pf, 9 is less than Pf, 8. This happens when 1102 – 6f8 < 1097 + f8, that is, when f8 > 5/7. Choosing f8 = 5/7 gives the best possible overall result for Player 1. When Player 1 is dealt an 8, he should trade with Player 2 5/7 of the time, and his overall probability of winning is (1097 + 5/7)/2197 = 7684/15379.
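The crossover can be verified exactly, working with numerators over 2197 as in the text; this sketch and its helper names are my own:

```python
from fractions import Fraction

hold9 = lambda f8: 1097 + f8         # numerator of Pf,8: Player 2 keeps her 9
draw9 = lambda f8: 1102 - 6 * f8     # numerator of Pf,9: Player 2 draws on a 9

f8 = Fraction(5, 7)                  # the crossover point
assert hold9(f8) == draw9(f8) == Fraction(7684, 7)
assert hold9(f8) / 2197 == Fraction(7684, 15379)
```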

Note that Player 1 gets the same probability of winning regardless of whether or not Player 2 draws from the deck when holding a 9. Of course, if Player 2 always draws (or never draws) when holding a 9, then Player 1 can take advantage of the situation by adjusting his own strategy. How often should Player 2 trade in a 9 to keep Player 1 from taking advantage of the situation?

 (14) P7, g  =  (1 – g9) P7, 8 + g9 P7, 9  =  (1 – g9) · 1097/2197 + g9 · 1102/2197

 (15) P8, g  =  (1 – g9) P8, 8 + g9 P8, 9  =  (1 – g9) · 1098/2197 + g9 · 1096/2197

Player 1 should trade cards with Player 2 when he's dealt an 8 if 1098 – 2g9 > 1097 + 5g9. That is, when g9 < 1/7. Choosing g9 = 1/7 gives the best possible overall result for Player 2. This makes Player 1's overall chance of winning equal to 7684/15379, regardless of the value of f8.

With a bit more thought, we can write the complete equation for Pf, g.

 (16) Pf, g  =  (1 – f8)(1 – g9) P7, 8 + (1 – f8) g9 P7, 9 + f8 (1 – g9) P8, 8 + f8 g9 P8, 9  =  (1097 + f8 + 5g9 – 7f8g9)/2197

Finishing with just a little calculus: at the optimal pair of strategies, the partial derivatives of Pf, g with respect to f8 and g9 must both be zero (otherwise one player could improve by adjusting his or her own parameter), and we can readily see that this gives f8 = 5/7 and g9 = 1/7, with Pf, g = 7684/15379.
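As a final check of my own, the saddle point can be confirmed in exact arithmetic from Equation (16): fixing either player's equilibrium parameter makes the value independent of the other player's choice.

```python
from fractions import Fraction

def P(f8, g9):
    # Equation (16): Player 1's overall probability of winning.
    return (1097 + f8 + 5*g9 - 7*f8*g9) / Fraction(2197)

f8_star, g9_star = Fraction(5, 7), Fraction(1, 7)
value = Fraction(7684, 15379)

for t in (Fraction(0), Fraction(1, 3), Fraction(1)):
    assert P(f8_star, t) == value   # Player 2 cannot move the value
    assert P(t, g9_star) == value   # Player 1 cannot move the value
```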


Thanks,
Steve