Pentagons with Rational Trigonometry

The other day I wanted to work out the properties of a pentagon, for reasons which I might go into in a future post. In particular I wanted to know the horizontal and vertical dimensions of a pentagon with unit length “spokes”.

I started scratching out diagrams and ended up with a mess of lines and stupid values like \sin\left(\frac{\pi}{5}\right). It all quickly got out of hand and gave me no precise numbers: everything was in terms of trig functions and pi. Don’t get me wrong, pi’s a great number to work with if you’ve got circles, but I’m just dealing with straight lines here; I shouldn’t need transcendental numbers!

So, I reverted to good old rational trigonometry. Rational trigonometry uses the concept of spread to measure the separation of lines, and quadrance to measure the separation of points. If the angle between two lines is \theta then the spread, s, is equal to \sin^2\theta. The quadrance between two points is just the distance squared. It turns out that using these measures makes calculations much simpler than in traditional trigonometry.
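To make these measures concrete, here’s a minimal Python sketch (my own illustration, not Wildberger’s notation; it assumes points and direction vectors with integer coordinates so that everything stays rational):

    from fractions import Fraction

    def quadrance(a, b):
        """Quadrance between points a and b: the squared distance."""
        return (b[0] - a[0])**2 + (b[1] - a[1])**2

    def spread(u, v):
        """Spread between lines with direction vectors u and v.

        Numerically equal to sin^2 of the angle between the lines,
        but computed with nothing beyond rational arithmetic.
        """
        cross = u[0] * v[1] - u[1] * v[0]
        return Fraction(cross**2, (u[0]**2 + u[1]**2) * (v[0]**2 + v[1]**2))

    # Lines at 45 degrees have spread sin^2(pi/4) = 1/2:
    assert spread((1, 0), (1, 1)) == Fraction(1, 2)

No square roots and no trig functions appear anywhere: quadrances and spreads built from rational points are themselves rational.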

The diagram below shows a regular pentagon with unit length (and thus unit quadrance) spokes. We shall use rational trigonometry to compute exact values for all the labels shown.

The first (and most complex) step is to compute s_1 and s_2. We note that s_1 is the spread of one fifth of a full revolution and s_2 is the spread of one fifth of a half revolution. Five times either angle is a full or a half revolution, both of which have spread zero. Since the fifth spread polynomial S_5 maps the spread of an angle to the spread of five times that angle, s_1 and s_2 are both solutions of

S_5(s) = s(16s^2 - 20s + 5)^2 = 0.

We use the quadratic formula on the factor 16s^2 - 20s + 5 to find the two non-zero solutions of this equation: s = \frac{20 \pm \sqrt{400 - 320}}{32} = \frac{5 \pm \sqrt{5}}{8}. Matching the larger spread to the larger angle, we find

s_1 = \frac{5 + \sqrt{5}}{8}
s_2 = \frac{5 - \sqrt{5}}{8}.

A corollary of the triple spread law, applied to right angled triangles, is that the two non-right spreads sum to 1. This gives

s_3 = 1 - s_2 = \frac{3 + \sqrt{5}}{8}.

By the spread law (in any triangle the spreads are proportional to their opposing quadrances, \frac{s_a}{Q_a} = \frac{s_b}{Q_b} = \frac{s_c}{Q_c}) we can now directly find

Q_2 = s_2 = \frac{5 - \sqrt{5}}{8}
Q_3 = s_3 = \frac{3 + \sqrt{5}}{8}
Q_4 = s_1 = \frac{5 + \sqrt{5}}{8}
Q_5 = 1 - s_1 = \frac{3 - \sqrt{5}}{8}.

Applying the triple quad formula to a bisected line we get

Q_1 = 4Q_2 = \frac{5 - \sqrt{5}}{2}.
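For anyone who hasn’t met it, the triple quad formula says that three collinear points with mutual quadrances Q_{AB}, Q_{BC} and Q_{AC} satisfy

(Q_{AB} + Q_{BC} + Q_{AC})^2 = 2\left(Q_{AB}^2 + Q_{BC}^2 + Q_{AC}^2\right).

Setting Q_{AB} = Q_{BC} = Q_2 for the bisected line and Q_{AC} = Q_1, the expansion cancels down to Q_1^2 = 4Q_2Q_1, which is the Q_1 = 4Q_2 used above.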

Pythagoras’ theorem, which in rational trigonometry says the quadrances of the two legs of a right angled triangle sum to the quadrance of the hypotenuse, now gives us

Q_6 = Q_1 - Q_4 = \frac{15 - 5\sqrt{5}}{8} = 5Q_5.

Once again applying the spread law we get

Q_7 = \frac{s_3}{s_1} = \frac{5 + \sqrt{5}}{10} = \frac{4}{5}Q_4
Q_8 = \frac{1 - s_1}{s_1} = \frac{5 - 2\sqrt{5}}{5}.
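If you’d like a machine to double-check the two spreads, here’s a quick sympy sketch (the code and the use of sympy are my additions; the classical \sin^2 values appear purely as a cross-check):

    # Check that s1 and s2 are roots of the fifth spread polynomial
    # and agree with the classical sin^2 values.
    from sympy import sqrt, sin, pi, expand

    s1 = (5 + sqrt(5)) / 8   # spread of one fifth of a full revolution
    s2 = (5 - sqrt(5)) / 8   # spread of one fifth of a half revolution

    S5 = lambda s: s * (16*s**2 - 20*s + 5)**2

    assert expand(S5(s1)) == 0 and expand(S5(s2)) == 0
    assert (s1 - sin(2*pi/5)**2).equals(0)
    assert (s2 - sin(pi/5)**2).equals(0)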

So, there we have it, all the interesting dimensions of a pentagon without once having to resort to transcendental numbers or functions.

If you want to know more about rational trigonometry I strongly suggest watching this series of short lectures by Norman Wildberger.

Sorting a deck of cards the fast way

I have a deck of cards on my desk at work, which I play with to keep my hands busy while thinking about tricky problems. I sometimes find myself trying to implement different sorting algorithms to put the deck in order and to no great surprise, they all tend to suck.

The algorithmically fast sorts, like quick-sort or merge-sort, require repeated splitting of the deck until you essentially have all 52 cards laid out in front of you, which isn’t very practical. The algorithmically slow ones, like insertion, selection and bubble sort, let you keep all the cards in your hands, but are held back by their fundamental O(n^2) complexity.

In practice, to sort a deck of cards I’ll usually do a bucket + human sort: deal the cards out into four buckets, one for each suit, and then pick up the 13 cards for each suit and sort them in hand, using no fixed algorithm. This tends to be fast in practice, but lacks a certain elegance. It doesn’t scale from a deck of cards to an arbitrary set of numbers to be sorted on a computer.

An ideal sorting algorithm would be a) easy to implement with a physical deck of cards and b) algorithmically fast (i.e. O(n*log(n))) when thought of abstractly.

After endless noodling around I stumbled on the following algorithm which neatly fits both criteria. I was pleasantly surprised when I did, because usually when I think I’ve found a new sorting algorithm I think a bit harder and it turns out I’ve just reinvented quick-sort! So, here’s my shiny new* algorithm.

Phase 1: Dealing

  1. Deal the first card from the deck face up on the table. This is the first pile.
  2. Take the next card from the deck. If it is smaller than the card in the first pile, place it on top of the first pile. If it’s larger than the card in the first pile, create a second pile to the right of the first pile.
  3. Deal each subsequent card, placing it on the pile which has the smallest top card that is larger than the card being dealt. If the top cards of all the piles are smaller than the card being dealt, create a new pile to the right of the other piles. (A code sketch of this phase follows below.)
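Here’s how the dealing phase might look in Python (a sketch of my own; deal and the lists-as-piles representation are just illustrative choices). The bisect search implements the “smallest top card larger than the card being dealt” rule:

    import bisect

    def deal(deck):
        """Phase 1: deal into piles. piles[i][-1] is the top card of
        pile i; tops[i] mirrors it and stays sorted low -> high,
        left -> right."""
        piles, tops = [], []
        for card in deck:
            # Left-most pile whose top card is larger than this card.
            i = bisect.bisect_right(tops, card)
            if i == len(piles):
                piles.append([card])   # no larger top card: new pile
                tops.append(card)
            else:
                piles[i].append(card)  # place on the smallest larger top
                tops[i] = card
        return piles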

Phase 2: Collecting

  1. Pick up the smallest card. It will be on top of the left-most pile.
  2. Pick up the next smallest card; it will be on top of one of the piles.
  3. Continue picking up cards in order. The next card will always be on top of one of the piles.
  4. Play a game of cards!

So, let’s look at how it works. The key is in the invariant properties which we maintain. Each time we deal a new card in Phase 1, the piles maintain two invariants:

  1. Each pile is sorted (low→high, top→bottom)
  2. The top cards of the piles are sorted (low→high, left→right)

If you look at the dealing phase, you’ll notice we always place cards either on top of a larger card or into a new pile of their own (Property 1). We also make sure we place each card on the smallest possible top card, which maintains Property 2.

It turns out that for manual sorting with a deck of cards, Property 2 is not so important, so we break it in the collection phase, but we’ll come back to it in the algorithmic analysis.

Property 1 is what lets us do the collection phase quickly. Since each pile is sorted, the smallest remaining card must be on top of one of the piles. If it was below another card then that pile would not be sorted! So, we know that we can always find the next card and we can quickly pick up all the cards in order.
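In code, the hand version of the collection phase is just a repeated scan of the top cards (again my own sketch; like the manual process, it deliberately ignores Property 2):

    def collect(piles):
        """Phase 2, as done by hand: repeatedly take the smallest top
        card. Property 1 guarantees the smallest remaining card is on
        top of some pile, so scanning the tops always finds it."""
        out = []
        while piles:
            i = min(range(len(piles)), key=lambda j: piles[j][-1])
            out.append(piles[i].pop())
            if not piles[i]:
                del piles[i]       # drop emptied piles
        return out

With the deal sketch above, collect(deal(deck)) hands back the whole deck in increasing order.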

Practical complexity

In practice I’ve found I generally end up with 8 or 9 piles. This is a small enough number that working out where to deal each card is very quick, as is finding which card to pick up next in the collection phase. The end result is that each card is only handled twice: once when we deal it and once when we collect it. This means the algorithm is effectively O(n), given that the physical movement of cards totally dwarfs the ~O(n*log(n)) visual processing steps required.

Algorithmic complexity

Now let’s look at how this method performs in a purely algorithmic context, and make sure it is indeed O(n*log(n)). We’ll deal with each phase separately.

In the dealing phase, for each card we need to do an insertion into a sorted list (i.e., the list of top cards on the piles). Using a binary search, each insertion can be done in log(k) steps, where k is the number of piles. The worst case is a deck sorted smallest to largest, which will result in a new pile for each card. So in the worst case we have k=n, and we need to do n insertions, leading to O(n*log(n)) operations. In the general case we have O(n*log(k)) where k is the number of piles.
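Using the deal sketch from earlier, the two extremes are easy to see (ascending input is the pile-count worst case, descending the best):

    >>> [len(p) for p in deal([1, 2, 3, 4, 5])]
    [1, 1, 1, 1, 1]
    >>> [len(p) for p in deal([5, 4, 3, 2, 1])]
    [5]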

In the collection phase we need to be a little bit careful and add one extra step. When working manually we break the left→right sorting (Property 2) as we pick up cards. This isn’t a problem, since we can visually find the next smallest card even if the top cards are unsorted. Algorithmically, if we broke Property 2, the collection phase would require a search through k elements to pick up each card, leading to O(n*k) performance, so we need to maintain Property 2 in log(k) time at each step.

If we maintain Properties 1 and 2 then the smallest card will always be on the left-most pile. So, algorithmically, we take this card. This exposes a new top card on the left-most pile. To maintain Property 2 we now take this pile and insert it amongst the other piles to keep the left→right ordering. This operation is again an insertion into a sorted list, which can be done in log(k) operations. This means our modified collection process has O(n*log(k)) complexity. The insertion step is crucial algorithmically, but is not worth the physical cost when manually sorting cards.
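One way to realise this in code is to keep the piles in a min-heap keyed on their top card (the heap is my substitution for the sorted list of piles; popping gives the “left-most” pile, and pushing a pile back in is the log(k) insertion step described above):

    import heapq

    def collect_fast(piles):
        """Phase 2 with Property 2 maintained via a min-heap of piles."""
        # The index i only breaks ties, so equal top cards never force
        # a comparison between the pile lists themselves.
        heap = [(pile[-1], i, pile) for i, pile in enumerate(piles)]
        heapq.heapify(heap)
        out = []
        while heap:
            top, i, pile = heapq.heappop(heap)  # pile with smallest top
            out.append(pile.pop())
            if pile:
                heapq.heappush(heap, (pile[-1], i, pile))  # re-insert
        return out

As with the manual version, collect_fast(deal(deck)) returns the deck in order, now in O(n*log(k)) steps.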

So, both phases can be done in O(n*log(k)) steps, with a worst case of O(n*log(n)). So it’s algorithmically fast!

Conclusion

In summary, this algorithm is super efficient for manually sorting a deck of cards, requiring exactly 2 handlings of each card. It also matches other fast sorting algorithms in a purely abstract sense by having a worst case complexity of O(n*log(n)).

So next time you need to sort a deck of cards, whip out this trick and impress your friends with how quickly and efficiently you can put things in order. No doubt they’ll be extremely jealous of your mad skills!

* It’s new to me anyway. No doubt someone’s invented this before me, so if you’ve seen it and know what it’s called please let me know, I haven’t bothered to go hunting for it myself.