# Sliding Maximum

In this post I will present a simple and somewhat generic solution for the sliding maximum problem.

The problem is the following: given a sequence of elements and a window size K, find the maximum among every K consecutive elements.

Note that the sliding minimum can be seen as a negated sliding maximum problem, just like the maximum spanning tree can be seen as the minimum spanning tree of the negated graph.

# Implementation

Below is a generic C++ sliding window I implemented that takes a comparator as a template parameter. This allows it to be instantiated for both the sliding maximum and the sliding minimum.

```cpp
#include <cstddef>
#include <deque>

template <typename T, typename Comparator>
class SlidingWindow {
  struct Block {
    Block(T v, std::size_t w) : value(v), width(w) {}
    T value;
    std::size_t width;
  };
  Comparator comp;
  std::deque<Block> data;

 public:
  void push(const T &t) {
    std::size_t width = 1;
    while (!data.empty() && comp(data.back().value, t)) {
      // Absorb the full width of every discarded block, not just one unit,
      // so that the widths always add up to the window size.
      width += data.back().width;
      data.pop_back();
    }
    data.emplace_back(t, width);
  }
  T get() const { return data.front().value; }
  void pop() {
    // Either reduce the width of the best block (front), or drop it.
    if (data.empty()) {
      return;
    }
    if (data.front().width > 1) {
      data.front().width--;
    } else {
      data.pop_front();
    }
  }
};
```


All operations are amortized O(1), making this solution O(N) for N elements. Using STL trees (such as std::multiset) instead, we cannot do better than O(N log N).

# Sliding Maximum Example

Using 20 terms and a window of width 4, we have the following table:

| Current | Maximum | Minimum |
|---------|---------|---------|
| 16      | 16      | 16      |
| 1       | 16      | 1       |
| 2       | 16      | 1       |
| 13      | 16      | 1       |
| 9       | 13      | 1       |
| 11      | 13      | 2       |
| 15      | 15      | 9       |
| 14      | 15      | 9       |
| 4       | 15      | 4       |
| 8       | 15      | 4       |
| 3       | 14      | 3       |
| 7       | 8       | 3       |
| 6       | 8       | 3       |
| 12      | 12      | 3       |
| 5       | 12      | 5       |
| 18      | 18      | 5       |
| 19      | 19      | 5       |
| 17      | 19      | 5       |
| 20      | 20      | 17      |
| 10      | 20      | 10      |
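As a sanity check for the table, here is a compact Python rendition of the same monotonic-deque technique (a sketch, not the C++ code above); the `better(a, b)` parameter tells whether a new element `a` makes an older element `b` useless:

```python
from collections import deque


def sliding_extremes(values, k, better):
    """Report the extreme of the trailing window after each element."""
    window = deque()  # Pairs of (value, index), best element at the front.
    result = []
    for i, v in enumerate(values):
        # Evict elements that can never be the window extreme again.
        while window and better(v, window[-1][0]):
            window.pop()
        window.append((v, i))
        # Drop the front once it slides out of the window.
        if window[0][1] <= i - k:
            window.popleft()
        result.append(window[0][0])
    return result


values = [16, 1, 2, 13, 9, 11, 15, 14, 4, 8, 3, 7, 6, 12, 5, 18, 19, 17, 20, 10]
maxima = sliding_extremes(values, 4, lambda a, b: a >= b)
minima = sliding_extremes(values, 4, lambda a, b: a <= b)
```

Passing a different comparator swaps the maximum problem for the minimum one, mirroring the template parameter of the C++ class.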

# BigWord

This post is an update on a small side project I just started. It is called BigWord and it aims to find the biggest word in a dictionary based on a multiset of letters.

# Sample Usage

For instance, it finds out that the biggest English words made with the letters from “linuxwars” are “urinals” and “insular” and that the biggest English words made with “applepies” are “pappies” and “applies”.

```
$ ./bigword linuxwars
> urinals
> insular
$ ./bigword applepies
> pappies
> applies
```


# Implementation Notes

The existing code is written in C++14 and should run on any system with a reasonable C++14 compiler. The program relies on the public domain word list shipped with Fedora, but you may use any plain text dictionary you want.

It loads all words that can be constructed from the provided letter multiset, ignoring words that are too big in order to improve performance, and then traverses the word vector from longest to shortest, trying to match the biggest possible subsets of the provided multiset.
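As an illustration of this filtering step, here is a condensed Python sketch of the idea (a hypothetical helper, not the project's actual code):

```python
from collections import Counter


def biggest_words(letters, words):
    """Return the longest words whose letters fit inside the multiset."""
    pool = Counter(letters)
    # Counter subtraction drops non-positive counts, so the result is
    # empty exactly when the word fits inside the pool.
    fits = [word for word in words if not Counter(word) - pool]
    best = max(map(len, fits), default=0)
    return [word for word in fits if len(word) == best]
```

For example, `biggest_words("linuxwars", ["urinals", "insular", "cat", "xxxxx"])` keeps only the words buildable from the letters and reports the longest ones.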

Even though it is fast enough for a fairly big word list, I think there must be a more efficient way to find the biggest subsets. Currently, the first example above finishes in about 200 ms on a list of 479,000 words.

Ideally, I would like even the worst-case queries to finish in less than 100 ms for word lists with less than one million words, as suggested in Usability Engineering by Jakob Nielsen.

Some possible improvements are dumping the processed word vector after the first execution and reusing it in subsequent executions, and sorting the input text file by word length so that IO can stop earlier. However, these options add undesired complexity to the setup required from the user.

The following code snippet shows how letter counts are currently compared.

```cpp
static bool is_contained(const LetterCount &a, const LetterCount &b) {
  if (a.letter_count > b.letter_count) {
    return false;
  }
  size_t remaining = b.letter_count - a.letter_count;
  for (size_t i = 0; i < alphabet_size; i++) {
    if (a.counters[i] > b.counters[i]) {
      return false;
    }
    // By catching excessive differences we may fail early.
    const size_t difference = b.counters[i] - a.counters[i];
    if (difference > remaining) {
      return false;
    }
    remaining -= difference;
  }
  return true;
}
```


There may be a faster way to do it, but I am failing to spot it.

You can find the project source code (licensed under the ISC license) at the GitHub repository.

# Heap’s Algorithm

In this post, I describe an alternative to Narayana Pandita's algorithm, which generates the permutations of a set of elements and about which I have already written.

# Comparison with the alternative

This is one of the most efficient ways to generate the permutations of a list. It is more efficient than Narayana Pandita's algorithm when both are properly implemented.

This algorithm does not generate permutations in lexicographic order, which is required in some scenarios.

This algorithm does not require any comparisons between the elements of the list it is permuting, making it more appealing for sequences without a defined order or whose elements are expensive to compare.

# Python implementations

Based on implementations found on the Internet, I wrote two PEP 8 compliant Python 3 versions of Heap's algorithm. The first one, which is also the most elegant one in my opinion, is a recursive implementation. As one might expect, the cost of the several levels of recursion makes it substantially more expensive than it needs to be.

## Recursive implementation

```python
def swap(elements, i, j):
    elements[i], elements[j] = elements[j], elements[i]


def generate_permutations(elements, n):
    if n == 0:
        yield elements
    else:
        for i in range(n - 1):
            # Generate permutations with the last element fixed.
            yield from generate_permutations(elements, n - 1)
            # Swap the last element; the swapped position depends on
            # the parity of n, as required by Heap's algorithm.
            if n % 2 == 0:
                swap(elements, i, n - 1)
            else:
                swap(elements, 0, n - 1)
        # Generate the last permutations after the final swap.
        yield from generate_permutations(elements, n - 1)


def permutations(elements):
    yield from generate_permutations(elements, len(elements))
```



This code uses the `yield from` syntax, added to Python 3.3 by PEP 380.

The algorithm is not trivial to understand. Essentially, it locks the rightmost element and recursively permutes all the others, then performs a carefully chosen swap involving the rightmost element and repeats the process until every element has been in the rightmost position.

## Non-recursive implementation

```python
def swap(elements, i, j):
    elements[i], elements[j] = elements[j], elements[i]


def generate_permutations(elements, n):
    # As described by Robert Sedgewick in Permutation Generation Methods.
    c = [0] * n
    yield elements
    i = 0
    while i < n:
        if c[i] < i:
            if i % 2 == 0:
                swap(elements, 0, i)
            else:
                swap(elements, c[i], i)
            yield elements
            c[i] += 1
            i = 0
        else:
            c[i] = 0
            i += 1


def permutations(elements):
    return generate_permutations(elements, len(elements))
```
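One caveat when consuming either generator: `yield elements` hands out the same mutable list every time, so each permutation must be copied if it is to be kept. The demo below repeats the non-recursive implementation so that it is self-contained, and collects the permutations of three elements as tuples:

```python
def swap(elements, i, j):
    elements[i], elements[j] = elements[j], elements[i]


# Non-recursive Heap's algorithm, repeated here so the demo is
# self-contained; it matches the implementation above.
def heap_permutations(elements):
    n = len(elements)
    c = [0] * n
    yield elements
    i = 0
    while i < n:
        if c[i] < i:
            if i % 2 == 0:
                swap(elements, 0, i)
            else:
                swap(elements, c[i], i)
            yield elements
            c[i] += 1
            i = 0
        else:
            c[i] = 0
            i += 1


# Copy each permutation (here, into a tuple) before storing it.
seen = {tuple(p) for p in heap_permutations([1, 2, 3])}
```

The set `seen` ends up with all 3! = 6 distinct permutations, confirming that each one is produced exactly once.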



## Benchmarking

I have benchmarked the lexicographic permutation algorithm and both implementations above using a sequence of 10 integers and a sequence of 10 strings of 100 characters which only differed in the last character. Note that although an input of length 10 may seem small to an unaware reader, there are 10! = 3,628,800 results generated even for such a small input.

The benchmarking results follow.

```
Lexicographic with 10 integers took 10.99 seconds.
Recursive Heap's with 10 integers took 14.10 seconds.
Non-recursive Heap's with 10 integers took 4.29 seconds.

Lexicographic with 10 strings took 11.66 seconds.
Recursive Heap's with 10 strings took 14.43 seconds.
Non-recursive Heap's with 10 strings took 4.31 seconds.
```


As you can see, the cost of recursion makes the more elegant implementation of Heap’s algorithm even slower than the lexicographic permutation generator. However, the non-recursive implementation is substantially more efficient. It is also visible from these numbers that the performance hit from the use of big strings (which are more expensive to compare than small integers) was bigger on the lexicographic permutation generator than on the generators that do not compare elements.

# Balancing Chemical Equations

In this post, I describe an efficient way to balance chemical equations using linear algebra.

# Procedure

From any chemical equation, we can derive a vector equation that describes the number of atoms of each element present in the reaction. Row reducing the matrix of this vector equation leads to the general solution, from which we can easily derive the solution with the smallest integer coefficients.

# Example

Find the smallest integer coefficients that balance the following chemical equation.

```
MnS + As2Cr10O35 + H2SO4 → HMnO4 + AsH3 + CrS3O12 + H2O
```

## Solution

From the chemical equation above, we can derive the following matrix, in which each column corresponds to one compound and each row to one element. Notice that the first row is for Mn, the second is for S, and so on. However, the order you choose does not matter, as long as all columns respect the same order.

```
1    0    0   -1    0    0    0
1    0    1    0    0   -3    0
0    2    0    0   -1    0    0
0   10    0    0    0   -1    0
0   35    4   -4    0  -12   -1
0    0    2   -1   -3    0   -2
```


The last four columns all have negative atom counts because they have been subtracted from both sides of the original vector equation.

After row reducing the matrix with Octave, we get

```
1.00000   0.00000   0.00000   0.00000   0.00000   0.00000  -0.04893
0.00000   1.00000   0.00000   0.00000   0.00000   0.00000  -0.03976
0.00000   0.00000   1.00000   0.00000   0.00000   0.00000  -1.14373
0.00000   0.00000   0.00000   1.00000   0.00000   0.00000  -0.04893
0.00000   0.00000   0.00000   0.00000   1.00000   0.00000  -0.07951
0.00000   0.00000   0.00000   0.00000   0.00000   1.00000  -0.39755
```


However, this is not of much help, as we want to end up with integer coefficients. Therefore, we change the number display format to rational with `format rat`. The last column then becomes six rationals with the same denominator:

```
-16/327
-13/327
-374/327
-16/327
-26/327
-130/327
```


After multiplying the last column by 327 and changing its sign, we get the coefficients for the balanced equation.

```
16
13
374
16
26
130
```


Note that the free variable is the coefficient of H2O, which is set to 327 because we are interested in the smallest value that makes all other coefficients integers.

Wrapping up, the balanced equation has coefficients 16, 13, 374, 16, 26, 130, 327, in this order. Not something easy to find by trial and error.
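The whole procedure can also be scripted. The sketch below uses SymPy instead of Octave (an assumption of mine, not what the post used) and extracts the smallest integer solution from the null space of the matrix:

```python
from sympy import Matrix, lcm

# Columns: MnS, As2Cr10O35, H2SO4, HMnO4, AsH3, CrS3O12, H2O.
A = Matrix([
    [1,  0, 0, -1,  0,   0,  0],   # Mn
    [1,  0, 1,  0,  0,  -3,  0],   # S
    [0,  2, 0,  0, -1,   0,  0],   # As
    [0, 10, 0,  0,  0,  -1,  0],   # Cr
    [0, 35, 4, -4,  0, -12, -1],   # O
    [0,  0, 2, -1, -3,   0, -2],   # H
])

# The null space is one-dimensional and holds the balancing ratios.
v = A.nullspace()[0]

# Scale by the least common multiple of the denominators to obtain
# the smallest integer coefficients.
coefficients = v * lcm([entry.q for entry in v])
```

Working with exact rationals sidesteps the floating-point display issue entirely.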

# Lexicographic Permutations

This post describes how to generate the lexicographic permutations of a sequence. The lexicographic order is a generalization of the way the alphabetical order of words is based on the alphabetical order of their component letters. It consists primarily in defining a total order over the sequences of elements of a finite totally ordered set. In other words, it is a way to order sequences based on how their elements compare.

# Algorithm description

There exist several ways to generate all permutations of a given sequence. The simple algorithm which I will discuss here is based on finding the next permutation in lexicographic ordering, if it exists, or reversing the last permutation to get back to the minimal permutation. This method goes back to Narayana Pandita in 14th century India.

The algorithm is quite straightforward and may be memorized:

1. Find the biggest `i` such that `a[i] < a[i + 1]`;
2. Find the biggest `j` greater than `i` such that `a[j] > a[i]`;
3. Swap `a[i]` and `a[j]`;
4. Reverse the elements from `a[i + 1]` to the last element.


If the first step fails (because no such index exists), the current permutation is the last one.
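To make the four steps concrete, here is a short trace of one application of the rules (the indices were found by inspection):

```python
# A worked example of the four steps on the permutation [1, 2, 5, 4, 3].
a = [1, 2, 5, 4, 3]
# Step 1: the biggest i with a[i] < a[i + 1] is i = 1 (2 < 5).
i = 1
# Step 2: the biggest j > i with a[j] > a[i] is j = 4 (3 > 2).
j = 4
# Step 3: swap a[i] and a[j].
a[i], a[j] = a[j], a[i]          # a is now [1, 3, 5, 4, 2]
# Step 4: reverse everything after position i.
a[i + 1:] = reversed(a[i + 1:])  # a is now [1, 3, 2, 4, 5]
```

The result, [1, 3, 2, 4, 5], is the next permutation of [1, 2, 5, 4, 3] in lexicographic order.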

Given any permutation of a list, this generates the next one. Note, however, that when given the greatest lexicographic permutation, the first step fails, so this case should be detected and the sequence reversed to wrap around to the first permutation.

This algorithm is simple to implement correctly, computationally efficient, and it only generates each distinct permutation once, which is convenient when there are many repeated elements.

# Python implementation

Below is an in-place Python 3 implementation of the described algorithm. If you do not want mutability, wrap the algorithm calls with a function that copies the list. However, take into consideration that for very large lists this may use a significant amount of memory if you are interested in several permutations.

```python
def swap(elements, i, j):
    elements[i], elements[j] = elements[j], elements[i]


def reverse(elements, i, j):
    for offset in range((j - i + 1) // 2):
        swap(elements, i + offset, j - offset)


def next_permutation(elements):
    last_index = len(elements) - 1
    if last_index < 1:
        return

    i = last_index - 1
    while i >= 0 and not elements[i] < elements[i + 1]:
        i -= 1

    # If there is no greater permutation, return to the first one.
    if i < 0:
        reverse(elements, 0, last_index)
    else:
        j = last_index
        while j > i + 1 and not elements[j] > elements[i]:
            j -= 1

        swap(elements, i, j)
        reverse(elements, i + 1, last_index)
```


Documentation was omitted for the sake of brevity. One could also consider this an example of code as documentation.

## Algorithm output

Generating eight permutations of the string ‘abc’ in ascending lexicographic order produces the following sequence:

```
abc, acb, bac, bca, cab, cba, abc, acb, ...
```


Notice that after the sixth permutation we get back to the first one, as there are only six distinct permutations of the string ‘abc’.

## Algorithm complexity

This algorithm uses constant additional space (as it is in-place). A single call has linear time complexity in the worst case, but over a full cycle of permutations the amortized cost per call is constant.

# Using the C++ standard library

If you can use the C++ STL, you have access to std::next_permutation.

## Example

```cpp
#include <algorithm>
#include <iostream>
#include <string>

int main() {
  std::string text = "abbc";
  do {
    std::cout << text << '\n';
  } while (std::next_permutation(text.begin(), text.end()));
}
```

This prints:

```
abbc
abcb
acbb
babc
bacb
bbac
bbca
bcab
bcba
cabb
cbab
cbba
```


As per the documentation, the return value of the function is true if the new permutation is lexicographically greater than the old, or false if the last permutation was reached and the range was reset to the first permutation.