For those who don’t know, The Riddler is a weekly set of two puzzles published by FiveThirtyEight on math, logic, and probability. I’ve been following the puzzles for quite a while, but have decided to start to write up my work and answers to particularly interesting puzzles—or puzzles that could have a fun extension component—here (whenever I have enough time to do so).

Today, I’m tackling The Riddler Classic for this weekend, which reads as follows:

From Jordan Miller and William Rucklidge, three-deck monte:

You meet someone on a street corner who’s standing at a table on which there are three decks of playing cards. He tells you his name is “Three Deck Monte.” Knowing this will surely end well, you inspect the decks. Each deck contains 12 cards …

- Red Deck: four aces, four 9s, four 7s
- Blue Deck: four kings, four jacks, four 6s
- Black Deck: four queens, four 10s, four 8s

The man offers you a bet: You pick one of the decks, he then picks a different one. You both shuffle your decks, and you compete in a short game similar to War. You each turn over cards one at a time, the one with a higher card wins that turn (aces are high), and the first to win five turns wins the bet. (There can’t be ties, as no deck contains any of the same cards as any other deck.)

Should you take the bet? After all, you can pick any of the decks, which seems like it should give you an advantage against the dealer. If you take the bet, and the dealer picks the best possible counter deck each time, how often will you win?

In brief, you should **not** take the bet (assuming even odds), and it’s not too difficult to see why: whichever deck you pick, one of the remaining decks has 8 of its 12 cards ranked higher than 8 of yours. For example, the red deck’s aces beat everything in the black deck, but the black deck’s queens and 10s beat the red deck’s 9s and 7s (and its 8s beat the 7s, too). In a way, this game is a probabilistic version of rock, paper, scissors, where `red > blue > black > red` (by probabilistic, I mean that even with the “weaker” deck you’ll still win occasionally). Assuming the dealer always picks the deck that tends to trump yours, the interesting question, of course, is just *how often* you would win. One way to approach this question is through simulation. I wrote the following code in R to tackle the problem:

```r
red   <- c(rep(14, 4), rep(9, 4), rep(7, 4))   # aces, 9s, 7s
blue  <- c(rep(13, 4), rep(11, 4), rep(6, 4))  # kings, jacks, 6s
black <- c(rep(12, 4), rep(10, 4), rep(8, 4))  # queens, 10s, 8s

player_deck <- red  # change to switch decks

# The dealer always counters with the deck that trumps yours
if (identical(player_deck, red)) {
  dealer_deck <- black
} else if (identical(player_deck, blue)) {
  dealer_deck <- red
} else {
  dealer_deck <- blue
}

N <- 100000  # change to increase/decrease sims

# Pre-shuffle both decks for every simulated game
player_draw <- lapply(1:N, function(i) sample(player_deck))
dealer_draw <- lapply(1:N, function(i) sample(dealer_deck))

hands <- list()
for (i in 1:N) {
  hand <- NA
  j <- 1
  # Keep turning over cards until either side reaches 5 wins
  while (length(which(hand == 0)) < 5 & length(which(hand == 1)) < 5) {
    hand[j] <- ifelse(player_draw[[i]][j] > dealer_draw[[i]][j], 1, 0)
    j <- j + 1
  }
  hands[[i]] <- hand
}

# 1 if the player reached 5 wins first, 0 otherwise
hands_winner <- sapply(1:N, function(i) ifelse(sum(hands[[i]]) >= 5, 1, 0))
mean(hands_winner)
```
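As a quick sanity check on the rock-paper-scissors claim, the per-card win probabilities can be tabulated directly. This is a small sketch separate from the simulation above:

```r
# mean(outer(a, b, ">")) is the chance a random card from deck a
# beats a random card from deck b, across all 12 x 12 pairings.
red   <- c(rep(14, 4), rep(9, 4), rep(7, 4))
blue  <- c(rep(13, 4), rep(11, 4), rep(6, 4))
black <- c(rep(12, 4), rep(10, 4), rep(8, 4))

mean(outer(red, blue, ">"))    # red beats blue:   80/144 = 5/9
mean(outer(blue, black, ">"))  # blue beats black: 80/144 = 5/9
mean(outer(black, red, ">"))   # black beats red:  80/144 = 5/9
```

So whichever deck you choose, the dealer’s counter-deck wins any single comparison with probability 5/9, and yours wins with probability 4/9.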

To give you a sense of the parameters here, `player_draw` is a shuffling of the player’s deck, `dealer_draw` is a shuffling of the dealer’s deck, and `hand` is a vector recording the turns of an individual game, where you draw cards until one person wins 5 times (comparing the `j`th card of the `i`th shuffled deck in `player_draw` to the `j`th card of the `i`th shuffled deck in `dealer_draw`). Subsequently, `hands_winner` is a vector of binary values corresponding to the winner of an individual game, where 1 refers to the player winning and 0 to the dealer winning. As a result, the mean of `hands_winner` is the proportion of times you would win against the dealer. As you might expect, regardless of the deck you pick, the proportion of games the player wins against the dealer is the same, **about 30%**. The following graph represents the means of 1,000 simulations of 1,000 games played:
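If you’d like to reproduce that batch-of-batches experiment, a compact sketch might look like the following (`play_game` is my own condensed reimplementation of the game loop above, not code from the original block):

```r
play_game <- function(player, dealer) {
  p <- sample(player)                     # shuffle both decks
  d <- sample(dealer)
  wins   <- cumsum(p > d)                 # player's running win count
  losses <- seq_along(p) - wins           # dealer's running win count
  k <- which(wins == 5 | losses == 5)[1]  # first turn someone hits 5
  wins[k] == 5                            # TRUE if the player got there first
}

red   <- c(rep(14, 4), rep(9, 4), rep(7, 4))
black <- c(rep(12, 4), rep(10, 4), rep(8, 4))

set.seed(1)
# 1,000 batches, each recording the player's win rate over 1,000 games
batch_means <- replicate(1000, mean(replicate(1000, play_game(red, black))))
hist(batch_means)  # clusters around the simulated win rate
```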

Why the answer is as low as 30% is another interesting question. My first instinct was 33%, since only a third of the player’s cards (the aces, in red vs. black) beat everything in the dealer’s deck, but the single-card odds are actually better than that: the aces beat all 12 black cards and the 9s beat the four 8s, so a random red card beats a random black card with probability (4 × 12 + 4 × 4)/144 = 64/144 = 4/9, about 44%. The drop from 44% to 30% comes from the structure of the game: it’s played not as individual card draws but as a first-to-5 series, and a longer series compounds the stronger deck’s per-card edge. I think it’s the same logic as why a “worse” sports team is more likely to win a best-of-1 series than a best-of-7 series: there’s more room for the “worse” team to get lucky and pull off an upset in the former case. (The cards also being dealt without replacement makes the turns within a game dependent, which shifts the numbers further.) If you can express this more formally, please drop a comment below.
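A rough back-of-the-envelope calculation illustrates the amplification effect. If each turn were an independent coin flip that the weaker deck wins with probability 4/9, then winning a first-to-5 game would be the same as winning at least 5 of 9 turns (ties are impossible, so someone always reaches 5 wins by turn 9):

```r
# P(weaker side wins >= 5 of 9 independent turns at p = 4/9)
pbinom(4, size = 9, prob = 4/9, lower.tail = FALSE)  # ~0.366
```

Under this independence approximation the weaker side wins about 36.6% of games, already well below the per-card 44%; the remaining gap down to the simulated ~30% would come from the without-replacement dependence between turns within a game.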
