Sorting a deck of cards the fast way

I have a deck of cards on my desk at work, which I play with to keep my hands busy while thinking about tricky problems. I sometimes find myself trying to implement different sorting algorithms to put the deck in order and to no great surprise, they all tend to suck.

The algorithmically fast sorts, like quick-sort or merge-sort, require repeated splitting of the deck until you essentially have all 52 cards laid out in front of you, which isn’t very practical. The algorithmically slow ones, like insertion, selection and bubble sort, let you keep all the cards in your hands, but are held back by their fundamental complexity.

In practice, to sort a deck of cards I’ll usually do a bucket + human sort: deal the cards out into four buckets, one for each suit, and then pick up the 13 cards for each suit and sort them in hand, using no fixed algorithm. This tends to be fast in practice, but lacks a certain elegance. It doesn’t scale from a deck of cards to an arbitrary set of numbers to be sorted on a computer.
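
For reference, here’s a throwaway Python sketch of that bucket + human sort, with cards as (suit, face) tuples and Python’s built-in sort standing in for the freestyle in-hand sorting:

```python
def bucket_human_sort(deck):
    """Deal into four suit buckets, then sort each suit 'in hand'."""
    buckets = {}
    for suit, face in deck:           # cards as (suit, face) tuples
        buckets.setdefault(suit, []).append(face)
    out = []
    for suit in sorted(buckets):      # pick the buckets up one suit at a time
        out.extend((suit, face) for face in sorted(buckets[suit]))
    return out
```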

An ideal sorting algorithm would be a) easy to implement with a physical deck of cards and b) algorithmically fast (i.e. O(n*log(n))) when thought of abstractly.

After endless noodling around I stumbled on the following algorithm which neatly fits both criteria. I was pleasantly surprised when I did, because usually when I think I’ve found a new sorting algorithm I think a bit harder and it turns out I’ve just reinvented quick-sort! So, here’s my shiny new* algorithm.

Phase 1: Dealing

  1. Deal the first card from the deck face up on the table. This is the first pile.
  2. Take the next card from the deck. If it is smaller than the card in the first pile, place it on top of the first pile. If it’s larger than the card in the first pile, create a second pile to the right of the first pile.
  3. Deal each subsequent card, placing it on the pile which has the smallest top card that is larger than the card being dealt. If the top cards of all the piles are smaller than the card being dealt, create a new pile to the right of the other piles.
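
Here’s a minimal Python sketch of the dealing phase, with cards as plain integers and each pile stored with its top card last (the names are just illustrative):

```python
def deal(deck):
    """Phase 1: deal into piles, keeping each pile sorted (low on top)
    and the top cards sorted low-to-high, left-to-right."""
    piles = []                      # each pile is a list with its top card last
    for card in deck:
        for pile in piles:
            if pile[-1] > card:     # the left-most top larger than the card is
                pile.append(card)   # also the smallest such top (Property 2)
                break
        else:
            piles.append([card])    # every top is smaller: new right-most pile
    return piles
```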

Phase 2: Collecting

  1. Pick up the smallest card. It will be on top of the left-most pile.
  2. Pick up the next smallest card; it will be on top of one of the piles.
  3. Continue picking up cards in order. The next card will always be on top of one of the piles.
  4. Play a game of cards!
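
And a matching sketch of the manual collection, where we just scan the top cards for the smallest (ignoring Property 2, as discussed below):

```python
def collect(piles):
    """Phase 2 (manual flavour): repeatedly take the smallest top card.
    Property 1 guarantees the next card is always on top of some pile."""
    out = []
    while piles:
        i = min(range(len(piles)), key=lambda j: piles[j][-1])
        out.append(piles[i].pop())
        if not piles[i]:
            del piles[i]            # drop emptied piles
    return out

# e.g. collect(deal([5, 2, 8, 1, 9, 3])) -> [1, 2, 3, 5, 8, 9]
```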

So, let’s look at how it works. The key is in the invariant properties we maintain. Each time we deal a new card in Phase 1, the piles maintain two invariants:

  1. Each pile is sorted (low→high, top→bottom)
  2. The top cards of the piles are sorted (low→high, left→right)

If you look at the dealing phase, you’ll notice we always place cards either on top of a larger card or into a new pile of their own (Property 1). We also make sure we place each card on the smallest possible top card, which maintains Property 2.

It turns out that for manual sorting with a deck of cards, Property 2 is not so important, so we break it in the collection phase, but we’ll come back to it in the algorithmic analysis.

Property 1 is what lets us do the collection phase quickly. Since each pile is sorted, the smallest remaining card must be on top of one of the piles. If it were below another card, that pile would not be sorted! So, we know that we can always find the next card, and we can quickly pick up all the cards in order.

Practical complexity

In practice I’ve found I generally end up with 8 or 9 piles. This is a small enough number that working out where to deal each card is very quick, as is finding which card to pick up next in the collection phase. The end result is that each card is only handled twice: once when we deal it and once when we collect it. This means the algorithm is effectively O(n), given that the physical movement of cards totally dwarfs the ~O(n*log(n)) visual processing steps required.

Algorithmic complexity

Now let’s look at how this method performs in a purely algorithmic context, and make sure it is indeed O(n*log(n)). We’ll deal with each phase separately.

In the dealing phase, for each card we need to do an insertion into a sorted list (i.e., the list of top cards on the piles). Using a binary search, each insertion can be done in log(k) steps, where k is the number of piles. The worst case is a deck sorted smallest to largest, which results in a new pile for each card. So in the worst case we have k=n, and we need to do n insertions, leading to O(n*log(n)) operations. In the general case we have O(n*log(k)), where k is the number of piles.
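
In Python terms, the bisect module can do that binary search for us; a hedged sketch, keeping a mirror list of the top cards:

```python
import bisect

def deal_fast(deck):
    """Dealing with an O(log k) binary search over the sorted top cards."""
    piles, tops = [], []            # tops[i] mirrors piles[i][-1]
    for card in deck:
        i = bisect.bisect_right(tops, card)  # first top larger than card
        if i == len(piles):
            piles.append([card])    # all tops smaller: new right-most pile
            tops.append(card)
        else:
            piles[i].append(card)
            tops[i] = card          # a smaller top: Property 2 preserved
    return piles
```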

In the collection phase we need to be a little bit careful and add one extra step. When working manually we break the left→right sorting (Property 2) as we pick up cards. This isn’t a problem, since we can visually find the next smallest card even if the top cards are unsorted. Algorithmically, though, if we broke Property 2 the collection phase would require a search through k elements to pick up each card, leading to O(n*k) performance, so we need to maintain Property 2 in log(k) time at each step.

If we maintain Properties 1 and 2, then the smallest card will always be on top of the left-most pile. So, algorithmically, we take this card. This exposes a new top card on the left-most pile. To maintain Property 2 we now take this pile and insert it amongst the other piles to keep the left→right ordering. This operation is again an insertion into a sorted list, which can be done in log(k) operations. This means our modified collection process has O(n*log(k)) complexity. The insertion step is crucial algorithmically, but is not worth the physical cost when manually sorting cards.
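
A sketch of that modified collection (Python list inserts are O(k) shifts, so this mirrors the logic rather than the true asymptotics; a balanced tree over the piles, or equivalently a min-heap keyed on top cards, gets the honest log(k) step):

```python
import bisect

def collect_fast(piles):
    """Phase 2 (algorithmic flavour): always take the left-most top card,
    then re-insert that pile amongst the others to restore Property 2."""
    out = []
    while piles:
        pile = piles.pop(0)                # smallest card is here (Property 2)
        out.append(pile.pop())
        if pile:
            tops = [p[-1] for p in piles]  # still sorted left-to-right
            i = bisect.bisect_left(tops, pile[-1])
            piles.insert(i, pile)          # slot the pile back in order
    return out
```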

Both phases can therefore be done in O(n*log(k)) steps, with a worst case of O(n*log(n)). So it’s algorithmically fast!

Conclusion

In summary, this algorithm is super efficient for manually sorting a deck of cards, requiring exactly two handlings of each card. It also matches other fast sorting algorithms in a purely abstract sense, with a worst-case complexity of O(n*log(n)).

So next time you need to sort a deck of cards, whip out this trick and impress your friends with how quickly and efficiently you can put things in order. No doubt they’ll be extremely jealous of your mad skills!

* It’s new to me anyway. No doubt someone’s invented this before me, so if you’ve seen it and know what it’s called please let me know; I haven’t bothered to go hunting for it myself.

8 thoughts on “Sorting a deck of cards the fast way”

  1. I’m going to call it timsort until further notice. Imagine how jealous they’ll be when I tell them that I know *the* Tim of timsort fame.

  2. It seems to be a novel mergesort variant.

    A variant: limit k, and if you would otherwise create a new pile, merge the two smallest piles together. This doesn’t necessarily help in the instance of physically sorting cards, but I think it improves the degenerate cases (because you’re doing lots of cheap little merges).
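
    In code it might look something like this (purely a sketch, reading “the two smallest piles” as the two with the fewest cards, and reusing the pile layout from the post’s sketches):

    ```python
    def merge_piles(a, b):
        """Merge two piles stored top-card-last (i.e. descending from
        index 0), preserving the sorted order of Property 1."""
        merged, ia, ib = [], 0, 0
        while ia < len(a) and ib < len(b):
            if a[ia] >= b[ib]:
                merged.append(a[ia]); ia += 1
            else:
                merged.append(b[ib]); ib += 1
        return merged + a[ia:] + b[ib:]

    def deal_capped(deck, max_piles=6):
        """Dealing with a cap on k: a would-be new pile first forces a
        merge of the two piles holding the fewest cards."""
        piles = []
        for card in deck:
            for pile in piles:
                if pile[-1] > card:
                    pile.append(card)
                    break
            else:
                if len(piles) == max_piles:
                    a, b = sorted(sorted(range(len(piles)),
                                         key=lambda p: len(piles[p]))[:2])
                    # the merged pile keeps the left position, so the top
                    # cards stay sorted left-to-right (Property 2)
                    piles[a] = merge_piles(piles[a], piles.pop(b))
                piles.append([card])
        return piles
    ```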

  3. Hi Tim. I know this post is pretty old, but I’ve also been really interested in sorting physical cards efficiently. One, because I’m a software engineer and I just like sorting stuff; and two, because I’ve worked a lot on teaching my teenage son to program, so we do a lot of card stuff too. I picked up a deck of trading cards with names on them, and I keep them at my desk to play with once in a while.

    I really liked your algorithm, but I noticed that it takes up a ton of table space — i.e., like a hash table, you’re gaining speed at the expense of efficient memory use. ;) You say you get a manageable 8 or 9 piles, but in practice I wind up with more like 12. Running some computer simulations tells me it ranges between around 8 and 15 piles. That’s a lot of wasted table space, and it’s also slow to merge if I have to eyeball a lot of cards at once.

    So I tried experimenting with Ada’s suggestion, and threw away some methods that required too much mental effort or too many extra steps. Here’s the version I’ve hit on, which adds one extra merge step.

    1. Grab about 1/4 of the cards. Do the sort you mentioned above on only those cards. Set the sorted pile aside.
    2. Repeat three more times.
    3. Use the merge step to merge the four piles.

    Technically all these methods are n log n, so in principle it can’t possibly waste very much time to do some extra smaller sorts. The table space is greatly reduced, typically around 4-6 piles (not counting the piles I set aside, which at least don’t have to be organized). I do have to look at every card twice as many times, but I also have fewer cards on the table to think about during the merge phase, so it may not slow me down much at all.
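
    A sketch of that in Python, reusing deal() and collect() from the post’s sketches and letting heapq.merge do the final k-way merge:

    ```python
    import heapq

    def chunked_sort(deck, chunks=4):
        """Sort each quarter with the post's deal/collect, then do one
        k-way merge of the sorted runs."""
        size = -(-len(deck) // chunks)      # ceiling division
        runs = [collect(deal(deck[i:i + size]))
                for i in range(0, len(deck), size)]
        return list(heapq.merge(*runs))
    ```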

  4. I know this post is almost 4 years old; however, if you’re looking for a name, I suggest the “Leslie Sort”. I don’t know of any other sorting algorithm with that name, so it should work well.

    While I understand the algorithm, and can execute it fine, it may be helpful for others to see it in action. A video demonstrating the algorithm posted online (YouTube, Vimeo, etc.) would be exceptionally helpful.

    Shuffling goes hand-in-hand with sorting, so here’s a deterministic algorithm for maximizing the Shannon Entropy in a deck by hand.

    1. Holding the deck face down, flip the bottom card around so it is face up on the bottom of the deck.
    2. Start counting at 1 with the top face-down card.
    3. Insert the card at the bottom of the deck if:
       * the count is divisible by 7 (count % 7 = 0), or
       * the count contains the digit 7 (7, 17, 27, etc.).
    4. Otherwise, insert the card randomly somewhere in the middle of the deck.
    5. When the face-up card reaches the top of the deck, insert it face down at a random position in the deck.

    This algorithm is similar to the Fisher-Yates shuffle; however, rather than using an RNG to determine where each card is placed in the array (the deck), you are the RNG. The counts in [7, 14, 17, 21, 27, 28, 35, 37, 42, ...] deterministically send cards to the bottom, gradually bringing the face-up card to the top of the array.
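
    Here’s a rough Python simulation of the procedure (the face-up card is modelled as a stop marker; assumes a full 52-card deck, and the names are mine):

    ```python
    import random

    MARKER = object()                   # stands in for the face-up bottom card

    def seven_shuffle(deck):
        deck = list(deck)
        face_up = deck.pop()            # flip the bottom card face up...
        deck.append(MARKER)             # ...and track where it sits
        count = 0
        while True:
            card = deck.pop(0)          # take the top card
            if card is MARKER:          # the face-up card reached the top:
                deck.insert(random.randrange(len(deck) + 1), face_up)
                return deck             # re-insert it face down and stop
            count += 1
            if count % 7 == 0 or '7' in str(count):
                deck.append(card)       # a "7" count: card goes to the bottom
            else:                       # otherwise somewhere in the middle
                deck.insert(random.randint(1, len(deck) - 1), card)
    ```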

    Calculating all 52 deltas between sequential cards (wrapping from the bottom card back to the top), building a histogram over the 51 possible delta values, and summing “-pi*ln(pi)” over the histogram gives a maximum Shannon Entropy value of ~= 3.88.
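
    A sketch of that measurement, assuming each card has already been mapped to a unique value 1 through 52 (see the reply below):

    ```python
    import math
    from collections import Counter

    def deck_entropy(deck):
        """Shannon entropy of the 52 wrap-around deltas between
        sequential cards (deck values are a permutation of 1..52)."""
        n = len(deck)
        deltas = [(deck[(i + 1) % n] - deck[i]) % n for i in range(n)]
        p = [c / n for c in Counter(deltas).values()]
        return -sum(pi * math.log(pi) for pi in p)
    ```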

    In practice, following this shuffle algorithm, I see entropy values of ~= 3.3 to ~= 3.6 quite consistently. This is equivalent to riffle shuffling the deck 11-12 times. Also, I find this a touch slower to execute than 11-12 riffle shuffles. The process is essentially a riffle shuffle, just one card at a time, rather than binary cutting the deck.

    Where 11-12 riffle shuffles can leave a degree of uncertainty about the entropy added to the deck, this shuffling algorithm gives a high degree of certainty that you have maximized the randomness in the deck sequence.

    Anyway, FYI.

    • Also, it should be mentioned that with your “Leslie Sort” algorithm, you need to assign a numerical value to every card, so each has a unique value of 1 through 52. Any way to do this is fine. Probably a good “standard” would be to use alphabetical ordering of suits, aka “Bridge Order”:

      * Clubs: face value
      * Diamonds: face value plus 13
      * Hearts: face value plus 26
      * Spades: face value plus 39

      It may take some practice at first to be able to look at a card and quickly know its numerical value, so that sorting through the deck remains quick.
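
      A minimal sketch of that mapping (assuming Ace = 1 through King = 13; the names are mine):

      ```python
      SUIT_OFFSET = {'clubs': 0, 'diamonds': 13, 'hearts': 26, 'spades': 39}

      def card_value(face, suit):
          """Unique 1..52 value for a card, using bridge order of suits."""
          return face + SUIT_OFFSET[suit]   # face: Ace=1 ... King=13

      # e.g. card_value(12, 'hearts') == 38  (the Queen of Hearts)
      ```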
