* Clubs: face value

* Diamonds: face value plus 13

* Hearts: face value plus 26

* Spades: face value plus 39

It may take some practice at first to be able to look at a card and quickly know its numerical value, so that sorting through the deck stays quick.
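The suit-offset mapping above is easy to sketch in code. This is a minimal illustration; the function and dictionary names are my own, not part of the original description:

```python
# Map each suit to its offset per the scheme above:
# Clubs +0, Diamonds +13, Hearts +26, Spades +39.
SUIT_OFFSET = {"C": 0, "D": 13, "H": 26, "S": 39}

def card_value(rank, suit):
    """rank: 1 (Ace) through 13 (King); suit: one of 'C', 'D', 'H', 'S'.

    Returns the card's position in the 1-52 sort order.
    """
    return SUIT_OFFSET[suit] + rank
```

For example, the Ace of Clubs maps to 1 and the King of Spades to 52, so every card gets a unique number in 1-52.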

While I understand the algorithm and can execute it fine, it may be helpful for others to see it in action. A video demonstrating the algorithm posted online (YouTube, Vimeo, etc.) would be exceptionally helpful.

Shuffling goes hand-in-hand with sorting, so here’s a deterministic algorithm for maximizing the Shannon Entropy in a deck by hand.

Holding the deck facedown, flip the bottom card around, so it is face up on the bottom of the deck.

Start counting at 1 with the top facedown card.

Insert the card at the bottom of the deck if:

The count is divisible by 7 (count % 7 = 0), or

The count contains the number 7 (7, 17, 27, etc.)

Otherwise, insert the card randomly somewhere in the middle of the deck.

When the bottom faceup card reaches the top of the deck, insert it facedown at a random position in the deck.
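The steps above can be simulated in a few lines. This is a sketch under my own modeling assumptions: index 0 is the top of the deck, and the face-up marker card is tracked by remembering which card started on the bottom.

```python
import random

def seven_shuffle(deck):
    """Simulate the count-based shuffle described above.

    deck[0] is the top; deck[-1] is the bottom. The face-up marker
    is modeled by remembering which card began on the bottom.
    """
    deck = list(deck)
    marker = deck[-1]            # bottom card, flipped face up
    count = 0
    while True:
        card = deck.pop(0)       # take the top face-down card
        if card == marker:
            # marker reached the top: reinsert it face down anywhere
            deck.insert(random.randint(0, len(deck)), card)
            return deck
        count += 1
        if count % 7 == 0 or "7" in str(count):
            deck.append(card)    # count rule hit: send to the bottom
        else:
            # otherwise insert somewhere strictly in the middle
            deck.insert(random.randint(1, len(deck) - 1), card)
```

Note that cards sent "to the bottom" land below the face-up marker, which is what gradually walks the marker up to the top and terminates the shuffle.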

This algorithm is similar to the Fisher-Yates shuffle; however, rather than using an RNG to determine where each card is placed in the array (deck), you are the RNG. The counts in [7, 14, 17, 21, 27, 28, 35, 37, 42, ...] deterministically send cards to the bottom of the deck, which is what gradually brings the faceup marker card to the top.
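For comparison, here is the textbook Fisher-Yates shuffle the comment refers to, where an RNG (rather than the counting rule) picks each card's position:

```python
import random

def fisher_yates(deck):
    """Classic Fisher-Yates: swap each position with a uniformly
    random position at or after it, yielding a uniform permutation."""
    deck = list(deck)
    for i in range(len(deck) - 1):
        j = random.randint(i, len(deck) - 1)
        deck[i], deck[j] = deck[j], deck[i]
    return deck
```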

Calculating all 52 deltas between sequential cards (wrapping from the bottom card back to the top card), building a histogram over the 51 possible delta values, and summing -p_i * ln(p_i) over each value yields a maximum Shannon entropy of approximately 3.88.
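A sketch of that entropy calculation, assuming cards are numbered 1-52 and deltas are taken mod 52 (the function name and these conventions are mine):

```python
import math
from collections import Counter

def deck_entropy(deck):
    """Shannon entropy of the histogram of deltas between sequential
    cards, wrapping from the bottom card back to the top card."""
    n = len(deck)
    deltas = [(deck[(i + 1) % n] - deck[i]) % n for i in range(n)]
    counts = Counter(deltas)
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```

A factory-ordered deck scores 0 (every delta is 1), while a well-shuffled deck approaches the ~3.88 maximum quoted above.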

In practice, following this shuffle algorithm, I quite consistently see entropy values of about 3.3 to 3.6. This is equivalent to riffle shuffling the deck 11-12 times. I also find it a touch slower to execute than 11-12 riffle shuffles. The process is essentially a riffle shuffle done one card at a time, rather than by cutting the deck in half.

Where 11-12 riffle shuffles can leave a degree of uncertainty about the entropy added to the deck, this shuffling algorithm gives a high degree of certainty that you have maximized the randomness in the deck sequence.

Anyway, FYI.

I really liked your algorithm, but I noticed that it takes up a ton of table space — i.e., like a hash table, you’re gaining speed at the expense of efficient memory use. You say you get a manageable 8 or 9 piles, but in practice I wind up with more like 12. Running some computer simulations tells me it does range between around 8 and 15 piles. That’s a lot of wasted table space, and also it is slow to merge if I have to eyeball a lot of cards at once.

So I tried experimenting with Ada’s suggestion, and threw away some methods that required too much mental effort or too many extra steps. Here’s the version I’ve hit on, which adds one extra merge step.

1. Grab about 1/4 of the cards. Do the sort you mentioned above on only those cards. Set the sorted pile aside.

2. Repeat three more times.

3. Use the merge step to merge the four piles.

Technically all these methods are n log n, so in principle it can’t possibly waste very much time to do some extra smaller sorts. The table space is greatly reduced, typically around 4-6 piles (not counting the piles I set aside, which at least don’t have to be organized). I do have to look at every card twice as many times, but I also have fewer cards on the table to think about during the merge phase, so it may not slow me down much at all.

https://en.wikipedia.org/wiki/Patience_sorting

In particular, Wikipedia mentions the interesting fact that the number of piles k is, on average, on the order of sqrt(n).
