In this post, I describe Heap's algorithm, an alternative to Narayana Pandita's algorithm for generating the permutations of a sequence of elements, which I have already written about.
Comparison with the alternative
This is one of the most efficient ways to generate the permutations of a list, and it is more efficient than Narayana Pandita's algorithm when both are properly implemented.
This algorithm does not generate permutations in lexicographic order, which is required in some scenarios.
This algorithm does not require any comparison between elements of the list it is permuting, which makes it more appealing for sequences whose elements have no defined order or are difficult to compare.
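This matters in Python, where some types define no ordering at all. Complex numbers, for instance, cannot be sorted or compared, yet a comparison-free generator handles them without trouble. A small illustration, using itertools.permutations since it likewise never compares elements:

```python
from itertools import permutations

values = [1 + 1j, 2j, 3]

# Complex numbers define no ordering, so comparison-based approaches fail.
try:
    sorted(values)
except TypeError:
    print("complex numbers cannot be ordered")

# A generator that never compares elements handles them fine.
print(len(list(permutations(values))))  # 6 permutations
```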
Based on implementations found on the Internet, I wrote two PEP 8 compliant Python 3 versions of Heap's algorithm. The first one, which is also the most elegant in my opinion, is a recursive implementation. As one might expect, the cost of the several levels of recursion makes it substantially more expensive than it needs to be.
```python
def swap(elements, i, j):
    elements[i], elements[j] = elements[j], elements[i]


def generate_permutations(elements, n):
    if n == 0:
        yield elements
    else:
        for i in range(n - 1):
            # Generate permutations with the last element fixed.
            yield from generate_permutations(elements, n - 1)
            # Swap the last element; which position it is swapped with
            # depends on the parity of n.
            if n % 2 == 0:
                swap(elements, i, n - 1)
            else:
                swap(elements, 0, n - 1)
        # Generate the last permutations after the final swap.
        yield from generate_permutations(elements, n - 1)


def permutations(elements):
    yield from generate_permutations(elements, len(elements))
```
This code uses the yield from syntax, added in Python 3.3. You can read more about it here.
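In short, yield from delegates to a sub-generator, yielding each of its values in turn. A minimal illustration:

```python
# `yield from` delegates to a sub-generator; the call below is equivalent
# to writing `for x in inner(): yield x`.
def inner():
    yield 1
    yield 2


def outer():
    yield 0
    yield from inner()


print(list(outer()))  # [0, 1, 2]
```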
The algorithm is not trivial to understand. Essentially, it locks the rightmost element and recursively permutes all the other elements, then performs an intelligently chosen swap involving the rightmost element, repeating the process until every element has occupied the rightmost position.
The second implementation is non-recursive, following the version described by Robert Sedgewick in Permutation Generation Methods.

```python
def swap(elements, i, j):
    elements[i], elements[j] = elements[j], elements[i]


def generate_permutations(elements, n):
    # As by Robert Sedgewick in Permutation Generation Methods.
    c = [0] * n
    yield elements
    i = 0
    while i < n:
        if c[i] < i:
            if i % 2 == 0:
                swap(elements, 0, i)
            else:
                swap(elements, c[i], i)
            yield elements
            c[i] += 1
            i = 0
        else:
            c[i] = 0
            i += 1


def permutations(elements):
    return generate_permutations(elements, len(elements))
```
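Because Heap's algorithm produces permutations in a non-obvious order, a quick sanity check against the standard library is reassuring. The sketch below restates the iterative implementation so it runs on its own; note that the generator yields the same list object every time, mutated in place, so each result must be copied (here via tuple) before the next one is generated:

```python
from itertools import permutations as itertools_permutations


def swap(elements, i, j):
    elements[i], elements[j] = elements[j], elements[i]


def heap_permutations(elements):
    # Iterative Heap's algorithm, restated from the listing above so this
    # snippet is self-contained. Each yield exposes the same list object,
    # mutated in place.
    n = len(elements)
    c = [0] * n
    yield elements
    i = 0
    while i < n:
        if c[i] < i:
            if i % 2 == 0:
                swap(elements, 0, i)
            else:
                swap(elements, c[i], i)
            yield elements
            c[i] += 1
            i = 0
        else:
            c[i] = 0
            i += 1


# Sanity check against the standard library: the same set of permutations
# is produced, just in a different order.
results = {tuple(p) for p in heap_permutations([1, 2, 3, 4])}
expected = set(itertools_permutations([1, 2, 3, 4]))
assert results == expected and len(results) == 24
```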
I have benchmarked the lexicographic permutation algorithm and both implementations above using a sequence of 10 integers and a sequence of 10 strings of 100 characters that differed only in their last character. Note that although an input of length 10 may seem small, it already produces 10! = 3,628,800 permutations.
The benchmarking results follow.
Lexicographic with 10 integers took 10.99 seconds.
Recursive Heap's with 10 integers took 14.10 seconds.
Non-recursive Heap's with 10 integers took 4.29 seconds.
Lexicographic with 10 strings took 11.66 seconds.
Recursive Heap's with 10 strings took 14.43 seconds.
Non-recursive Heap's with 10 strings took 4.31 seconds.
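The exact numbers depend on hardware and interpreter version. A minimal harness along these lines (a sketch, not the script that produced the figures above, with itertools.permutations standing in for the generators under test) measures a generator on comparable inputs:

```python
import timeit

# Stand-in for whichever generator is being measured (e.g. one of the
# implementations above); the standard library version is used here so
# this sketch runs on its own.
from itertools import permutations


def exhaust(generator, data):
    # Consume every permutation so the full cost of generation is measured.
    count = 0
    for _ in generator(data):
        count += 1
    return count


# Inputs like those described above: 10 small integers, and 10 strings of
# 100 characters differing only in their last character.
integers = list(range(10))
strings = ["a" * 99 + chr(ord("a") + i) for i in range(10)]

for label, data in (("integers", integers), ("strings", strings)):
    seconds = timeit.timeit(lambda: exhaust(permutations, list(data)), number=1)
    print(f"10 {label}: {seconds:.2f} seconds")
```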
As you can see, the cost of recursion makes the more elegant implementation of Heap’s algorithm even slower than the lexicographic permutation generator. However, the non-recursive implementation is substantially more efficient. It is also visible from these numbers that the performance hit from the use of big strings (which are more expensive to compare than small integers) was bigger on the lexicographic permutation generator than on the generators that do not compare elements.