# Intro to Python Context Managers

Python context managers are a powerful tool for expressing a common programming pattern in a simple, concise manner. Most commonly they are used to handle opening and closing files, but this is just the beginning of what they can be used for.

Consider the following code snippet, which measures the time it takes to make a list of the first ten thousand square numbers.
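
A minimal version of that snippet, using `time.time()` to do the timing, might look like this (the exact original listing isn’t reproduced here, so treat this as a sketch):

```python
import time

# Enter: initialise our timer
start = time.time()

# Process: build the list of the first ten thousand square numbers
squares = [i ** 2 for i in range(10000)]

# Exit: stop our timer and print the result
print('Elapsed time: {:.6f}s'.format(time.time() - start))
```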

We have three steps here: 1) we initialise our timer, 2) we perform some processing, and 3) we stop our timer and print the result. We’ll call these three steps enter, process and exit.

The important thing to note here is that the enter and exit steps are codependent. They come as a logical pairing, and it doesn’t make sense to do one without the other.

In this example the enter and exit blocks are physically close to one another, so it is easy to see how they match up, but if we were timing a larger block of code it might not be so obvious. It would be great if we could bundle them together somehow.

Also, we might want to use this same timing pattern in a bunch of places throughout our code. It would be nice if we could put the enter/exit code into something like a function.

The solution is to create a context manager.

In this example timer() is a function which returns a context manager. The details of how it does this are all taken care of by the @contextmanager decorator. The key element to notice is that a yield statement must be placed between the enter and exit code.

To use the context manager we use the with statement, which is followed by an indented block of code containing the process code.
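
A sketch of such a `timer()` function, with the enter and exit code sitting either side of the `yield` and the process code indented under the `with`:

```python
import time
from contextlib import contextmanager


@contextmanager
def timer():
    # Enter: initialise our timer
    start = time.time()
    yield  # control passes to the block under the `with` statement
    # Exit: stop our timer and print the result
    print('Elapsed time: {:.6f}s'.format(time.time() - start))


# Process code, wrapped by the enter/exit code of the context manager
with timer():
    squares = [i ** 2 for i in range(10000)]
```

In practice you would wrap the `yield` in a `try`/`finally` so the exit code still runs if the process code raises an exception.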

By factoring the enter/exit code into a context manager function we have created a timing device which can be reused throughout our code. Furthermore, the codependent enter and exit code are kept together, which reduces the likelihood of introducing errors when changes need to be made to the logic.

Finally, the process code is identifiable as a single unit which is “wrapped” by the enter/exit code of the context manager, as it is indented as a block below the with statement.

The enter/process/exit pattern is one which shows up in a range of places, and context managers are a neat way to implement this pattern. There are many complex things which can be done with context managers, but these examples show a simple way to get up and running with them to refactor a common, simple pattern.

The examples shown work in Python versions 2.6 and 2.7. Stay tuned for future posts which will go into more details about how to use context managers in more advanced ways.

# Strongly Connected Components

The problem of finding the strongly connected components of a directed graph has been solved in an algorithmically optimal manner for over 40 years. In this article we look at an algorithm which is efficient not only algorithmically, but also from a memory usage perspective.

## Directed Graphs

One of the fundamental data structures in computer science is the directed graph. It finds applications in everything from quantum mechanics to physical mail delivery.

A directed graph can be thought of as a collection of circles connected by arrows. An arrow tells us that it is possible to get from one circle to another. If we can get from one circle to another circle by following a series of arrows (in the correct direction) then we say that these circles are connected.

In many cases we might be able to get from circle “A” to circle “B” by following arrows, but we might not be able to get from B back to A. In this case we say that A and B are weakly connected. If it’s possible to get from A to B and also from B back to A then we say that they are strongly connected.

If A is strongly connected to both B and C then we know that B and C are also strongly connected to each other, since we can get between them by going via A.

By grouping together circles which are strongly connected to each other we can break a graph up into strongly connected components (SCCs).

It turns out the problem of breaking up a graph into strongly connected components is one which pops up all the time in areas including disease transmission and ocean current analysis.

In the rest of this article we’ll look at algorithms to solve this problem and how they’re implemented in Scipy to be super fast and efficient.

## SCC Algorithms

The canonical SCC algorithm was published by Robert Tarjan in 1972 [PDF]. His paper describes a recursive algorithm which is based on a depth first search. Many implementations of this algorithm can be found in a variety of languages.

Tarjan’s algorithm is algorithmically optimal, since the time taken to compute the SCCs is linear in both the number of circles (nodes) and arrows (edges). One of its drawbacks however is that it is a recursive algorithm. When implemented in most languages, recursion is both slow and memory intensive due to the need to maintain a complete call stack.

One way to mitigate this problem is to convert the recursive algorithm into an iterative one. It can be shown that this allows the problem to be solved with a per-node memory usage of 5 words plus 2 bits, or 162 bits per node when working with 4 byte words. This equates to approximately 19.3MB per million nodes. This of course assumes that the algorithm has been implemented with memory optimisation in mind, which is often not the case!

Now 5 words per node sounds pretty good, but if we could shave this down by a word or two we could make significant savings! Is such an improvement possible?

In 2005 David Pearce analysed this exact question and came to the conclusion that with a bit of tweaking, Tarjan’s original algorithm could be modified to only require 3 words per node (or 11.4MB per million nodes) [PDF]. So far this is the most efficient (in terms of memory usage) algorithm known to solve the SCC problem.

Pearce’s paper presented pseudocode for his algorithm in recursive format in order to contrast it with Tarjan’s original algorithm. This recursive algorithm was implemented within Scipy in version 0.11.0 (scipy.sparse.csgraph.connected_components).

This implementation suffered from being recursive and used orders of magnitude more memory than the theoretically possible 3 words per node. In version 0.12.0 of Scipy I implemented the iterative version with optimal memory usage. This version was able to handle graphs around 7,500 times larger than the original before hitting the memory limits of a standard PC.
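
As a quick sketch of the interface (the four-node graph here is my own toy example), passing `connection='strong'` to `connected_components` asks for strongly connected components rather than weakly connected ones:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Directed graph on four nodes: 0 -> 1 -> 2 -> 0 forms a cycle,
# and 3 -> 1 hangs off it, so the SCCs are {0, 1, 2} and {3}.
graph = csr_matrix(np.array([[0, 1, 0, 0],
                             [0, 0, 1, 0],
                             [1, 0, 0, 0],
                             [0, 1, 0, 0]]))

n_components, labels = connected_components(graph, directed=True,
                                            connection='strong')
print(n_components)  # 2
```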

Let’s have a look at how this algorithm works and how we are able to run it with such a low memory overhead.

## Pearce’s SCC Algorithm – Details

We start with two counters, index and label. We initialise these to 1 and N respectively, where N is the number of nodes in our graph.

We now pick an arbitrary node as a starting point and perform a depth first search (DFS) traversal of our graph. At each node we assign the current value of index to the node and increment index by one. In this way we assign incrementing numbers to each node which is weakly connected to our starting node. We call the number assigned to each node its link number.

The clever part of the algorithm comes when we have processed all the children of a node and are ready to backtrack up the tree to continue the DFS traversal. When backtracking past a node, we consider all of its children and find the smallest link number of these child nodes. If this is smaller than the link number of the current node, we update the link number of the current node and put the node onto a stack.

If none of the children have a smaller link number then we have found a root node. None of the children of this node point to a node with a smaller link number than the current one, which means that they can’t possibly be strongly connected to any nodes “lower” in the graph.

In this case we assign the value of label to the root node. We also assign label to all the nodes in the stack with a link number greater than or equal to the root’s link number, removing them from the stack at the same time. Once we’ve done all this we decrease label by 1 and decrease index by the number of nodes we just updated.

Once we have completed this backtracking step for all the nodes in our depth first traversal from the original node we will have assigned a label to all the nodes (weakly) connected to the original node. There might still be nodes which are completely disconnected from the original node, so we repeat the process for every node in our graph, skipping nodes which have been assigned a label already.

At the end of all this each node will have a label assigned to it, and all nodes which are strongly connected to each other will have the same label. As such, we’ve solved the problem of determining the SCCs of the graph.
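
The steps above can be transcribed directly into a recursive Python sketch (this follows the description in this article, not the iterative Scipy implementation; the names are my own):

```python
def pearce_scc(adj):
    """Assign SCC labels using the link-number scheme described above.

    adj maps each node to a list of its successor nodes.  Returns a dict
    mapping each node to its SCC label; strongly connected nodes share
    a label.
    """
    n = len(adj)
    rindex = {v: 0 for v in adj}   # link number, later overwritten by label
    stack = []                     # backtrack stack
    state = {'index': 1, 'label': n}

    def visit(v):
        root = True
        rindex[v] = state['index']
        state['index'] += 1
        for w in adj[v]:
            if rindex[w] == 0:          # unvisited child: recurse
                visit(w)
            if rindex[w] < rindex[v]:   # child has a smaller link number
                rindex[v] = rindex[w]
                root = False
        if root:
            # v is a root: label it and every stacked node whose link
            # number is >= v's, winding back index as we go.
            state['index'] -= 1
            while stack and rindex[stack[-1]] >= rindex[v]:
                w = stack.pop()
                rindex[w] = state['label']
                state['index'] -= 1
            rindex[v] = state['label']
            state['label'] -= 1
        else:
            stack.append(v)

    for v in adj:                  # restart from every unvisited node
        if rindex[v] == 0:
            visit(v)
    return rindex


labels = pearce_scc({0: [1], 1: [2], 2: [0], 3: [1]})
print(labels)   # nodes 0, 1 and 2 share one label; node 3 gets its own
```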

The following tool allows you to step through this algorithm for an example directed graph.

## Memory Usage Analysis

We can now look at how much memory we need for this algorithm.

For each node we use one word of memory to store the link number/label.

To perform the DFS we need to keep a stack of nodes. We only need to visit each node once. As such, if we’re trying to push a node which is already on the stack we would like to be able to extract the node from inside the stack and move it to the top of the stack. We can implement this data structure with a doubly linked list in a fixed-size array, which requires 2 words per node. The operation of looking for an element within the stack and moving it to the top can be performed in constant time, maintaining the linear algorithmic complexity of the complete algorithm.
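
A sketch of that structure (my own illustration, not the Scipy code): two fixed arrays of n words hold the forward and backward links, so pushing a node that is already on the stack just relinks it in constant time. The extra `NOT_ON` sentinel keeps the sketch simple; a strict 2-words-per-node version can infer membership from the algorithm's link numbers instead.

```python
NIL = -1       # end-of-list marker
NOT_ON = -2    # node is not currently on the stack


class MoveToTopStack:
    """Stack over node ids 0..n-1 with O(1) move-to-top on re-push."""

    def __init__(self, n):
        self.next = [NIL] * n       # link towards the bottom of the stack
        self.prev = [NOT_ON] * n    # link towards the top; NOT_ON = absent
        self.top = NIL

    def push(self, v):
        if self.prev[v] != NOT_ON:  # already on the stack: unlink it first
            if self.prev[v] == NIL:
                return              # v is already the top; nothing to do
            p, nx = self.prev[v], self.next[v]
            self.next[p] = nx
            if nx != NIL:
                self.prev[nx] = p
        # Link v in at the top.
        self.next[v] = self.top
        if self.top != NIL:
            self.prev[self.top] = v
        self.prev[v] = NIL
        self.top = v

    def pop(self):
        v = self.top
        self.top = self.next[v]
        if self.top != NIL:
            self.prev[self.top] = NIL
        self.next[v] = NIL
        self.prev[v] = NOT_ON
        return v

    def empty(self):
        return self.top == NIL


s = MoveToTopStack(4)
for v in (0, 1, 2, 0):     # the second push of 0 moves it to the top
    s.push(v)
print([s.pop() for _ in range(3)])   # [0, 2, 1]
```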

We also need to keep a stack of backtracked nodes, which would usually require another word per node. As we can observe from the interactive tool above, nodes on the backtrack stack have already been removed from the DFS stack. This means we can share the memory used by the DFS stack and implement the backtrack stack as a singly linked list in a fixed-size array, requiring no extra memory.

This gives us a total of 3 words per node, as claimed by Pearce. As such we can claim that the SCC algorithm implemented within Scipy is optimal from a memory usage perspective, whilst maintaining linear computational complexity.

# Time, what is time?

So earlier this evening Commander Chris Hadfield, a Canadian astronaut on the ISS, tweeted the question

I gave a quick reply which ended up generating quite a bit of discussion, so I thought I might write up a slightly more detailed answer to the question.

The problem as posed relates to the special theory of relativity. Objects in relative motion will measure time running at a different rate. In this case, the astronauts travelling on the ISS are travelling quite quickly relative to an observer on the ground.

We can measure this time dilation using the Lorentz transformation. We begin by taking the speed of the space station and using it to calculate a quantity called $\gamma$.

$\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}.$

We use the values
$v = 8km/s = 8000m/s\\ c = 300,000,000m/s$

and get

$\gamma = \frac{1}{\sqrt{1 - \frac{8000^2}{300000000^2}}}\\ = \frac{1}{\sqrt{0.9999999992888889}}\\ = 1.0000000003555556\\$

So this is our time dilation factor, which tells us how much slower the clocks on the space station are running.

Now, if the astronauts have been up there for what we on Earth would measure as 5 months, how much time have the astronauts experienced with their slow running clocks? Well, we can work that out with the equation

$t = \frac{t_0}{\gamma}$.

Here we let $t_0$ be 5 months, or $5 \times 30 \times 24 \times 60 \times 60 = 12960000s$. So the time experienced by the astronauts will be

$t = \frac{12960000s}{1.0000000003555556}\\ = 12959999.995392s.$

This is very close to exactly 5 months, but it’s just a tiny bit smaller. How much smaller is it? Well,

$\Delta t_{SR} = t_0 - t\\ = 12960000s - 12959999.995392s\\ = 0.004608s$

This is much less than a single second! In fact, it’s a touch over 4.6 milliseconds.

So there we go, the astronauts will be 4.6 milliseconds younger than they would be if they’d stayed planted on Earth. The time difference is due to time dilation, a direct consequence of their movement relative to Earth and the theory of special relativity.

But this is only part of the picture! My answer to the original question only took into account the relative speeds of the space station and an observer on Earth. If we delve deeper, we find that gravity also has an effect on the rate at which clocks tick. When standing at sea level, we’re around 6371km from the centre of the Earth. But the space station orbits at around 370km above sea level, which puts it 6741km from the centre of the Earth. Because of this difference in distance, the space station experiences a weaker gravitational field than we do here on terra firma.

The effect of gravity on time can be calculated by using the equations from general relativity. Gravity causes time to run more slowly, so we need to work out how strong this effect is both on Earth, and on the space station. We’ll use the following equation to work out the time dilation factor due to gravity.

$\gamma = \frac{1}{\sqrt{1 - \frac{r_0}{R}}}.$

In this equation $r_0$ is the Schwarzschild radius of the Earth, which is based on its gravity. It essentially tells us how far away the event horizon would be if the Earth collapsed into a black hole. It turns out to be quite small, just 9mm, or 0.009 metres.

The variable $R$ is the distance from the centre of the Earth. Based on the numbers above we can work out that for someone standing at sea level we get

$\gamma_E = \frac{1}{\sqrt{1 - \frac{0.009}{6371000}}}\\ = \frac{1}{\sqrt{0.9999999985873489}}\\ = 1.0000000007063257.$

For a person on the space station, we get

$\gamma_S = \frac{1}{\sqrt{1 - \frac{0.009}{6741000}}}\\ = \frac{1}{\sqrt{0.9999999986648865}}\\ = 1.000000000667557.$

We can now work out how long 5 Earth months would take on the space station, by considering the relative clock speeds on Earth and the space station.

$t = t_0\times\frac{\gamma_E}{\gamma_S}\\ = 12960000s \times \frac{1.0000000007063257}{1.000000000667557}\\ = 12960000.000502443s.$

So we see that the effect of general relativity causes the astronauts to experience more time than if they’d stayed on Earth! How much more time?

$\Delta t_{GR} = t_0 - t\\ = 12960000s - 12960000.000502443s\\ = -0.000502443s,$

or around 0.5 milliseconds.

So if we put together the effects of both general relativity (gravity) and special relativity (speed differences) we get the total time difference,

$\Delta t_{tot} = \Delta t_{SR} + \Delta t_{GR}\\ = 0.004608 - 0.000502443\\ = 0.004105557s.$

So there we have it, the answer to the Commander’s question is 4.1 milliseconds! 5 months in space to travel into the future by less than the blink of an eye. Worth it? Totally!