# Implementing Binary Search

If you are a programmer, I’m sure you’ve encountered the term binary search at some point. I know what binary search is, but I’m writing this post to solidify my understanding of it. I also want to compare how it might be naively implemented across my three favorite languages, C, Ruby, and Haskell — because naive code is the best code (to learn from)!

# Binary Subdivision

You can skip this section if you want — I merely want to record how I first met and fell in love with the concept of binary subdivision. I first discovered it when I was in high school. It was a very pleasant discovery, though at the time I did not fully realize what I had accidentally stumbled upon.

The situation was this: my Biology teacher, Mr. Kennedy, told everyone to draw an even 4x4 square grid on the palms of our hands. Everyone was supposed to draw their own 4x4 grid — and that was when I asked myself, “how can I draw the most even-looking grid without using a ruler?” You see, I could not use a ruler because I was using my right hand to draw onto my left hand — and to hold a ruler properly I would need a third hand! So there was the problem.

On the one hand, I could not simply draw the four horizontal and four vertical lines one after the other, left to right, top to bottom, because I knew that I would introduce a great deal of additive error.1 I did not want to draw an ugly, lopsided square.

It took me a few seconds, but I came up with a plan. I first drew a single large square. Then knowing that I could easily eyeball with good accuracy where the middle point of that square was horizontally, I drew a vertical line down the middle. I then did the same thing in the other axis vertically. I repeated this procedure a few more times, recursively subdividing each smaller rectangular shape into halves, finally ending up with a nice-looking grid. I glanced around the room, and later looked at other students’ palms to see if they had discovered this “divide by $\frac{1}{2}$” trick, but to my dismay no one had done it; I knew this was the case because everybody else’s square was sloppy.

I cannot honestly say if this was the very first time I realized the underlying geometric concepts at play here, but I can affirmatively say that it really was the first time I systematically applied such an elegant solution to a given problem. I wonder if most people who draw grids even bother to think of it as a problem.

To me, binary subdivision is the underpinning principle behind binary search. Continued subdivision by halves is like exponentiation in reverse; pretty soon, you end up with extremely tiny numbers. This is where the power of binary search comes from! Simply put, binary search is like binary subdivision, except that you get to subdivide toward the location of whatever you’re looking for. Isn’t that cool?
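To make the palm-grid trick concrete, here is a small C sketch (the subdivide function and its parameters are hypothetical, purely for illustration): it recursively emits the midpoint of an interval, then subdivides each half — the same drawing order I used on my palm.

```c
/*
 * Recursively subdivide the interval [lo, hi] by midpoints, `depth`
 * levels deep, appending each division point to `out` in drawing order.
 * Two levels over one edge of a square yield the three interior lines
 * of a 4-column grid: the middle first, then the middles of each half.
 */
void subdivide(double lo, double hi, int depth, double *out, int *n)
{
    double mid;
    if (depth == 0)
        return;
    mid = (lo + hi) / 2.0;
    out[(*n)++] = mid;                     /* draw the middle line */
    subdivide(lo, mid, depth - 1, out, n); /* refine the left half */
    subdivide(mid, hi, depth - 1, out, n); /* refine the right half */
}
```

Calling subdivide(0.0, 4.0, 2, out, &n) fills out with 2, 1, 3: the middle line first, then the quarter lines, just as on the palm.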

# The Problem

The problem is simple — given a sorted list KEYS of items (for simplicity’s sake, positive integers), determine whether key is in it and, if so, at which position. The catch is that you know nothing about the contents of KEYS — only that it is sorted from smallest to largest.

# Naive Approach — Linear Search

The simplest way of doing this is by linear search. It is probably the novice programmer’s first reaction. You just start from the first item in KEYS, and run a for-loop all the way across the list, looking at every single item. There are now two possibilities — either (1) you indeed discover key, and report back your position (aka the “index”, usually in the form of the familiar i variable in C/C++/Java code), or (2) you find nothing. If you are clever, you can optimize the search in the second situation by breaking out of the for-loop if the items you are comparing are larger than key; after all, KEYS is sorted, so we know for a fact that the later numbers are only going to get bigger, so there is no point in trying to find key in that crowd of numbers.

But think of the consequences — what’s the worst case scenario? Imagine you have 1 trillion items, and that key is not in it, because let’s say key is a much bigger number than the biggest number in KEYS — but of course you haven’t run the linear search yet, so you don’t know that. Given this situation, you would have to search the entire list of all numbers in KEYS before reaching condition (2) described above.

If you wanted to get a little more clever and avoid this problem of searching all 1 trillion items, you could tell the algorithm to refuse to enter the for-loop if key lies outside the bounds of KEYS. Checking the bounds is easy and takes constant time: you merely check the first and last items’ values (again, as KEYS is sorted), and those are your bounds. This way, if key lies outside the bounds, you can guarantee that it is not in KEYS, no matter how many items KEYS has.
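As a sketch of the optimized linear search described above (the function name and types here are my own, not from the implementations later in this post):

```c
#include <stddef.h> /* size_t */

/*
 * Linear search over a sorted array, with the two optimizations described
 * above: a constant-time bounds check up front, and an early break once
 * the items grow past `key`. Returns the index of `key`, or -1 if absent.
 */
int linear_search(const unsigned *keys, size_t len, unsigned key)
{
    size_t i;

    /* A sorted list's bounds are just its first and last elements. */
    if (len == 0 || key < keys[0] || key > keys[len - 1])
        return -1;

    for (i = 0; i < len; i++) {
        if (keys[i] == key)
            return (int)i;
        if (keys[i] > key) /* sorted: later items only get bigger */
            break;
    }
    return -1;
}
```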

And, this is it. There is nothing more to optimize using this method (let’s not get into parallelization). What else can you do, really, when searching linearly through every item from beginning to end?

# Inverted Bounds-Checking, aka Binary Search

The heading of this subsection might have already given you a hint as to what binary search entails. (Although, if you’ve read the Binary Subdivision section, you should have a good idea anyhow.) Binary search takes the concept of bounds-checking and applies it repeatedly, recursively, against KEYS. The only difference when I say “bounds-checking” in the context of binary search is that we do not care about the values of those bounds — merely that they are the bounds. This is because all we do is divide the sorted list by $\frac{1}{2}$ every time and take the middle index middle_index, which points as close as possible to the middle element (halfway between the two bounds). Indeed, the only way to compute a valid middle_index is to know the bounds (the size of the list). We keep subdividing recursively until KEYS[middle_index] == key.

The following is the pseudocode.

```
KEYS = some random list of numbers
key = the number we're looking for in KEYS

binary_search(KEYS, key):
    middle_index = get_middle_index(length of KEYS)
    lower_KEYS = get_below(middle_index, KEYS)
    upper_KEYS = get_above(middle_index, KEYS)
    if key < KEYS[middle_index]
        binary_search(lower_KEYS, key)
    else if key > KEYS[middle_index]
        binary_search(upper_KEYS, key)
    else
        return middle_index
    end
```

There are some obvious holes in the code above.

First, we always assume that KEYS is made up of multiple elements, and that its halves lower_KEYS and upper_KEYS also have multiple elements in them. In the extreme case, KEYS might be an empty list, which would make the whole thing explode.

Second, the get_middle_index(), get_below(), and get_above() functions remain undefined.

Aside: You might be wondering about the final else branch. We could write

```
else if key == KEYS[middle_index]
```

instead of just else, but that is redundant. We already test for the two other conditions of key being less than or greater than KEYS[middle_index]; having excluded those, we are left only with the third condition, key == KEYS[middle_index], evaluating to TRUE — hence we write just else.

Addressing the holes above, we get the next version.2

```
KEYS = some random list of numbers
first_index = 0
last_index = 999 (our KEYS is 1000 elements big)
key = the number we're looking for in KEYS

binary_search(KEYS, key, first_index, last_index):
    list_size = (last_index - first_index) + 1
    if list_size == 0
        return KEY_NOT_FOUND
    end

    middle_index = (list_size / 2) + first_index

    if key < KEYS[middle_index]
        binary_search(KEYS, key, first_index, middle_index - 1)
    else if key > KEYS[middle_index]
        binary_search(KEYS, key, middle_index + 1, last_index)
    else
        return middle_index
    end
```

There are some obvious differences — mainly that we now concern ourselves with the first and last index numbers of the list, and work with these indices instead of with the entire list KEYS itself. The get_below() and get_above() functions are gone, replaced by the index bounds first_index, middle_index - 1 and middle_index + 1, last_index, respectively. As you can see, working with these index numbers directly avoids a lot of abstraction. Actually, the list_size abstraction can be further reduced in terms of indices, so that list_size == 0 can be rewritten as first_index > last_index.3
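If it helps, here is an iterative C sketch of the same index-based algorithm (a hypothetical variant, not the recursive version used later in this post); its loop condition first <= last is just the inverse of the first_index > last_index emptiness test:

```c
/*
 * Iterative sketch of the index-based binary search above. Looping while
 * `first <= last` is the same as returning "not found" as soon as
 * `first > last`, i.e. as soon as the sublist is empty.
 */
int binary_search_iter(const unsigned *keys, unsigned key, int first, int last)
{
    while (first <= last) { /* the sublist is non-empty */
        /* Same midpoint as the pseudocode: (list_size / 2) + first. */
        int mid = ((last - first + 1) / 2) + first;
        if (key < keys[mid])
            last = mid - 1;
        else if (key > keys[mid])
            first = mid + 1;
        else
            return mid;
    }
    return -1; /* first > last: empty sublist, key not found */
}
```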

# Theoretical Performance

You can probably see why binary search is so powerful. It repeatedly divides the search region into $\frac{1}{2}$ of its original size. It’s sort of like Zeno’s Dichotomy Paradox, except that it takes the “absurdity” of Zeno’s argument and turns it to its advantage. To me, these somewhat related, even tangential, connections make binary search that much more elegant.

Consider this: a list that has 100,000 elements will only take, in the worst case, around 16 calls. Compare that to linear search, which will take at most 100,000 calls or iterations (if our item happens to be at the very last index).4 The worst-case number of subdivisions binary search performs on a list of $\mathit{KEYS\_TOTAL}$ elements is $\lfloor\log_2\mathit{KEYS\_TOTAL}\rfloor$. Because this relationship is logarithmic (exponentiation in reverse), we can rest assured that we can cut down a very large list quickly.5
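You can verify these numbers with a few lines of C (worst_case_steps is a hypothetical helper, written only to count the halvings):

```c
/*
 * Count how many times a list of n elements can be halved before only
 * one element remains -- the worst-case number of subdivisions binary
 * search performs, i.e. floor(log2(n)).
 */
int worst_case_steps(long n)
{
    int steps = 0;
    while (n > 1) {
        n /= 2;
        steps++;
    }
    return steps;
}
```

For example, worst_case_steps(100000) yields 16, matching the figure above, and even a trillion elements need only 39 halvings.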

# Naive Implementations

## Preliminary Details

I said at the beginning of the post that I would show a naive implementation in C, Ruby, and Haskell. I could have simply written a binary_search() function (and only that function) for all three languages — but instead I chose to write full standalone programs for all three that print out the same results. Because they are all standalone programs, you can easily tweak some settings (namely, the KEYS_TOTAL value) and see how the code scales.6 All versions use the new PCG family of pseudorandom number generators (RNGs), created by Prof. Melissa E. O’Neill, author of the great paper The Genuine Sieve of Eratosthenes.7

## C Version (Linux)

```c
/*
 * Compile with `gcc -o binary-search-c -Wall -Wextra -Wpedantic --std=gnu11
 * -O2 binary-search.c`. Check via valgrind with `valgrind --leak-check=full
 * --show-leak-kinds=all -v path/to/binary`.
 *
 * Usage: just execute the binary as-is without any arguments. To test the RNG,
 * call with the argument "rng".
 */

#include <stdbool.h>   /* bool */
#include <stdio.h>
#include <stdint.h>    /* UINT32_MAX */
#include <stdlib.h>    /* malloc() */
#include <string.h>    /* strcmp() */
#include <inttypes.h>  /* uint32_t, PRIu32 */

typedef uint32_t u32;
typedef uint64_t u64;

/*
 * "-1" is an invalid value to be used as an index for an array (the index
 * number is what binary_search() looks for).
 */
enum {KEY_NOT_FOUND = -1};
const int KEYS_TOTAL = 1000000;

/* *Really* minimal PCG32 code / (c) 2014 M.E. O'Neill / pcg-random.org */

typedef struct { u64 state; u64 inc; } pcg32_random_t;

u32 pcg32_random_r(pcg32_random_t *rng)
{
    u64 oldstate = rng->state;
    rng->state = oldstate * 6364136223846793005ULL + (rng->inc | 1);
    u32 xorshifted = ((oldstate >> 18u) ^ oldstate) >> 27u;
    u32 rot = oldstate >> 59u;
    return (xorshifted >> rot) | (xorshifted << ((-rot) & 31));
}

u32 uniform32(u32 range, pcg32_random_t *rng)
{
    u32 rand_limit, rand_excess;
    u32 x;
    rand_excess = ((UINT32_MAX % range) + 1) % range;
    rand_limit = UINT32_MAX - rand_excess;
    while ((x = pcg32_random_r(rng)) > rand_limit)
        ;
    return x % range;
}

/*
 * Populate an array with increasing numbers; we randomly choose whether to
 * skip to the next number by an increment of 1 or 2, so as to initialize the
 * array slightly differently each time this function is called.
 */
u32 init_array(u32 *keys, bool has_key, pcg32_random_t *rng)
{
    int i, j;
    for (i = 0, j = 0; i < KEYS_TOTAL; i++) {
        j += uniform32(2, rng);
        keys[i] = i + j;
    }

    /*
     * If we want to choose a key, randomly choose one from one of the
     * existing elements; otherwise, return an impossible key (where
     * "impossible" means a key whose value lies outside the range of values
     * that exist in the array).
     */
    if (has_key)
        return keys[uniform32(KEYS_TOTAL, rng)];
    else
        /* Impossible key = biggest key + 1 */
        return keys[KEYS_TOTAL - 1] + 1;
}

int binary_search(u32 *keys, u32 key, int min, int max)
{
    int list_size;
    int mid;

    list_size = (max - min) + 1;
    if (list_size == 0)
        return KEY_NOT_FOUND;

    mid = (list_size / 2) + min;

    if (key < keys[mid]) {
        return binary_search(keys, key, min, mid - 1);
    } else if (key > keys[mid]) {
        return binary_search(keys, key, mid + 1, max);
    } else {
        return mid;
    }
}

int main(int argc, char **argv)
{
    int i, min, max;
    int mid;
    bool has_key;
    u32 *keys;
    u32 key;
    pcg32_random_t rng = {0x1234567890abcdefULL, 0x1234567890abcdefULL};

    /* RNG self-test. */
    if (argc > 1 && strcmp(argv[1], "rng") == 0) {
        printf("Running RNG self-test.\n");
        printf("%"PRIu32"\n", pcg32_random_r(&rng));
        for (i = 0; i < 1000000; i++) {
            pcg32_random_r(&rng);
        }
        printf("%"PRIu32"\n", pcg32_random_r(&rng));
        for (i = 0; i < 100; i++) {
            printf("%"PRIu32"\n",
                   uniform32((UINT32_MAX / 2) + (UINT32_MAX / 3), &rng));
        }
        keys = malloc(KEYS_TOTAL * sizeof(u32));
        for (i = 0; i < 10; i++) {
            has_key = (bool)uniform32(2, &rng);
            key = init_array(keys, has_key, &rng);
            printf("last number in array %d for key %"PRIu32": ", i, key);
            printf("%"PRIu32"\n", keys[KEYS_TOTAL - 1]);
        }
        printf("Done.\n");
        printf("END C VERSION\n");
        free(keys);
        return 0;
    }

    /* Allocate space for our big array of keys. */
    keys = malloc(KEYS_TOTAL * sizeof(u32));

    /* Stress-test binary_search(). */
    for (i = 0; i < 20; i++) {
        has_key = (bool)uniform32(2, &rng);
        key = init_array(keys, has_key, &rng);
        min = 0;
        max = KEYS_TOTAL - 1;
        mid = binary_search(keys, key, min, max);
        printf("%02d - ", i + 1);
        if (mid == KEY_NOT_FOUND) {
            printf("key '%"PRIu32"' not found.\n", key);
        } else {
            printf("key '%"PRIu32"' found at keys[%d].\n", key, mid);
        }
    }

    printf("END C VERSION\n");
    free(keys);
    return 0;
}
```

Overview:

• pcg32_random_r() is PCG’s minimal implementation. This is the RNG we depend on to get identical randomly-generated data in the Ruby and Haskell versions.
• uniform32() tames raw RNG output like that of pcg32_random_r(); it removes the bias that a simple modulo operation would introduce. Hence, we use uniform32() for all our RNG needs.
• init_array() takes an empty array of fixed size and populates it with semi-random numbers. I say semi-random because the numbers chosen to populate the array, in sequence, are steadily bumped up with the j variable, eliminating the need to sort the array afterwards in preparation for passing it to binary_search().
• Finally, we have binary_search() itself, written to closely match the pseudocode presented above.

I’ve tried to keep the code simple. You may find it disturbing that KEY_NOT_FOUND shares a type with a valid index (mid) itself. This kind of overloading of the return value is common in C, and is part of what gives C its bare-metal speed — at the cost of (probable) disaster, of course.
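If that overloading bothers you, a conventional alternative is to report success through a bool and pass the index back through an out-parameter. The following is a hypothetical sketch in the same recursive shape as binary_search() above, not something the program actually does:

```c
#include <stdbool.h>

/*
 * Same recursive shape as binary_search(), but "found" is reported as a
 * bool and the index travels through *out, so no in-band sentinel like
 * KEY_NOT_FOUND is needed.
 */
bool binary_search_checked(const unsigned *keys, unsigned key,
                           int min, int max, int *out)
{
    int list_size = (max - min) + 1;
    int mid;

    if (list_size == 0)
        return false;

    mid = (list_size / 2) + min;
    if (key < keys[mid])
        return binary_search_checked(keys, key, min, mid - 1, out);
    if (key > keys[mid])
        return binary_search_checked(keys, key, mid + 1, max, out);
    *out = mid;
    return true;
}
```

The caller can no longer accidentally index an array with the "not found" value, because the index is only written when the search succeeds.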

## Ruby Version

```ruby
# LICENSE: PUBLIC DOMAIN

# Interact with `irb -I . -r path/to/this/file`.

# Usage: just execute the script as-is without any arguments. To test the RNG,
# call with the argument "rng".

U32_MAX = 0xffffffff
U64_MAX = 0xffffffffffffffff
U32_MOD = U32_MAX + 1
U64_MOD = U64_MAX + 1

KEYS_TOTAL = 1000000

class PCG32
  def initialize(state, inc)
    @state = state
    @inc = inc
  end

  def pcg32_random_r
    oldstate = @state
    @state = (((oldstate * 6364136223846793005) % U64_MOD) +
              (@inc | 1)) % U64_MOD
    xorshifted = (((oldstate >> 18) ^ oldstate) >> 27) % U32_MOD
    rot = oldstate >> 59
    (xorshifted >> rot) | ((xorshifted << ((-rot) & 31)) % U32_MOD)
  end

  def uniform32(range)
    rand_excess = ((U32_MAX % range) + 1) % range
    rand_limit = U32_MAX - rand_excess
    while ((x = self.pcg32_random_r) > rand_limit)
    end
    x % range
  end
end

def init_array(keys, has_key, rng)
  j = 0
  for i in 0..(KEYS_TOTAL - 1)
    j += rng.uniform32(2)
    keys[i] = i + j
  end

  if has_key
    keys[rng.uniform32(KEYS_TOTAL)]
  else
    keys[KEYS_TOTAL - 1] + 1
  end
end

def binary_search(keys, key, min, max)
  list_size = (max - min) + 1
  if list_size == 0
    return nil
  end

  mid = (list_size / 2) + min

  if key < keys[mid]
    binary_search(keys, key, min, mid - 1)
  elsif key > keys[mid]
    binary_search(keys, key, mid + 1, max)
  else
    mid
  end
end

# Begin program

rng = PCG32.new(0x1234567890abcdef, 0x1234567890abcdef)

# RNG self-test.
if ARGV == ["rng"]
  puts "Running RNG self-test."
  puts rng.pcg32_random_r
  for n in 0..999999
    rng.pcg32_random_r
  end
  puts rng.pcg32_random_r
  for n in 0..99
    puts rng.uniform32((U32_MAX / 2) + (U32_MAX / 3))
  end
  for n in 0..9
    has_key = rng.uniform32(2) == 1
    keys = []
    key = init_array(keys, has_key, rng)
    puts "last number in array #{n} for key #{key}: #{keys[KEYS_TOTAL - 1]}"
  end
  puts "Done."
  puts "END RUBY VERSION"
  exit
end

keys = []

# Stress-test the 'binary_search' method.
for i in 0..19
  has_key = rng.uniform32(2) == 1
  key = init_array(keys, has_key, rng)
  min = 0
  max = KEYS_TOTAL - 1
  mid = binary_search(keys, key, min, max)
  printf("%02d - ", i + 1)
  if mid.nil?
    puts "key '#{key}' not found."
  else
    puts "key '#{key}' found at keys[#{mid}]."
  end
end
puts "END RUBY VERSION"
```

This version, like the Haskell version, tries to follow the C version as much as possible. One drawback is that because Ruby’s integers are arbitrary-precision rather than fixed-width, we have to make liberal use of the modulo operator % to emulate integer overflow. We could instead do a bitwise AND (&) with a mask, but that would be more verbose.

## Haskell Version

```haskell
{-# LANGUAGE RecordWildCards #-}

--
-- Compile with `ghc --make -Wall -Werror -O2 -dynamic -o binary-search-hs
-- binary-search.hs`. For better conformance with the C and Ruby versions, we
-- use snake_case instead of camelCase wherever there is a direct parallel.
--
-- Interact with `ghci path/to/this/file`.
--
-- Usage: just execute the binary as-is without any arguments. To test the RNG,
-- call with the argument "rng".

module Main where

import Control.Monad
import Data.Bits
import Data.Word
import System.Environment
import System.Exit
import Text.Printf

u32_max :: Word32
u32_max = 0xffffffff

keys_total :: Int
keys_total = 1000000

data PCG32 = PCG32
    { state :: Word64
    , inc :: Word64
    }

pcg32_random_r :: PCG32 -> (Word32, PCG32)
pcg32_random_r rng@PCG32{..} = (result, rng {state = state'})
  where
    state' :: Word64
    state' = state * 6364136223846793005 + (inc .|. 1)
    xorshifted :: Word32
    xorshifted = fromIntegral $ shiftR (xor (shiftR state 18) state) 27
    rot :: Word32
    rot = fromIntegral $ shiftR state 59
    result :: Word32
    result = fromIntegral $ (shiftR xorshifted $ fromIntegral rot)
        .|. (shiftL xorshifted $ fromIntegral ((-rot) .&. 31))

uniform32 :: Word32 -> PCG32 -> (Word32, PCG32)
uniform32 range rng = find_within_range rng
  where
    rand_excess :: Word32
    rand_excess = mod ((mod u32_max range) + 1) range
    rand_limit :: Word32
    rand_limit = u32_max - rand_excess
    find_within_range rng' =
        if x > rand_limit
            then find_within_range rng''
            else (mod x range, rng'')
      where
        (x, rng'') = pcg32_random_r rng'

init_array :: Int -> Bool -> PCG32 -> ([Word32], Word32, PCG32)
init_array keys_size has_key rng0 = (keys, key, rng3)
  where
    (keys', rng1) = genKeysList [] 0 0 rng0
    -- Need to reverse the list, because Haskell (like all Lispy languages?)
    -- builds a list backwards when using the cons (:) operator.
    keys = reverse keys'
    genKeysList :: [Word32] -> Int -> Int -> PCG32 -> ([Word32], PCG32)
    genKeysList arr i j0 rng =
        if i < keys_size
            then genKeysList ((i' + j2') : arr) (i + 1) j2 rng'
            else (arr, rng)
      where
        i' = fromIntegral i
        j2' = fromIntegral j2
        (j1, rng') = uniform32 2 rng
        j2 = j0 + fromIntegral j1
    (key, rng3) =
        if has_key
            then let (idx, rng2) = uniform32 (fromIntegral keys_total) rng1
                 in (keys !! fromIntegral idx, rng2)
            else (keys !! (keys_total - 1) + 1, rng1)

-- We use min' and max' because the non-apostrophe versions name-clash with
-- Prelude's own functions. We could hide Prelude's imports, but that seems too
-- roundabout.
binary_search :: [Word32] -> Word32 -> Int -> Int -> Maybe Int
binary_search keys key min' max'
    | list_size == 0 = Nothing
    | key < keys !! mid = binary_search keys key min' (mid - 1)
    | key > keys !! mid = binary_search keys key (mid + 1) max'
    | otherwise = Just mid
  where
    list_size = (max' - min') + 1
    mid = (div list_size 2) + min'

main :: IO ()
main = do
    let rng0 = PCG32
            { state = 0x1234567890abcdef
            , inc = 0x1234567890abcdef
            }
    args <- getArgs
    when (args == ["rng"]) $ do
        putStrLn "Running RNG self-test"
        let (num0, rng1) = pcg32_random_r rng0
        putStrLn $ show num0
        rng2 <- foldM warmupRng rng1 [0 .. 999999 :: Int]
        let (num1, rng3) = pcg32_random_r rng2
        putStrLn $ show num1
        rng4 <- foldM testUniform32 rng3 [0 .. 99 :: Int]
        _ <- foldM testArray rng4 [0 .. 9 :: Int]
        putStrLn "Done."
        putStrLn "END HASKELL VERSION"
        exitSuccess
    _ <- foldM testBinarySearch rng0 [0 .. 19 :: Int]
    putStrLn "END HASKELL VERSION"
  where
    warmupRng rng _ = return . snd $ pcg32_random_r rng
    testUniform32 rng _ = do
        putStrLn $ show num
        return rng'
      where
        (num, rng') = uniform32 (div u32_max 2 + div u32_max 3) rng
    testArray rng0 i = do
        putStrLn $ "last number in array " ++ show i ++ " for key "
            ++ show key ++ ": " ++ show (keys !! (keys_total - 1))
        return rng2
      where
        (res, rng1) = uniform32 2 rng0
        has_key = res == 1
        (keys, key, rng2) = init_array keys_total has_key rng1
    testBinarySearch rng0 i = do
        printf "%02d - " (i + 1)
        case foundMid of
            Just mid -> putStrLn $ "key '"
                ++ show key
                ++ "' found at keys["
                ++ show mid
                ++ "]."
            Nothing -> putStrLn $ "key '"
                ++ show key
                ++ "' not found."
        return rng2
      where
        (res, rng1) = uniform32 2 rng0
        has_key = res == 1
        (keys, key, rng2) = init_array keys_total has_key rng1
        min' = 0
        max' = keys_total - 1
        foundMid = binary_search keys key min' max'
```

It pained me not to use Haskell’s much faster, more efficient Array data structure instead of plain lists (the ones constructed with square brackets []). And, I have to admit that it is written in a strange style; I’ve preserved the names of the variables from C and Ruby where I could, even though mixing snake_case with camelCase results in utter ugliness. I also restrained myself from using the State monad to keep track of PCG32’s state. For you non-Haskellers, that means that I manually passed the RNG state around (as you can see with rng0, rng1, rng2, etc.) as arguments and return values, because I did not want to place another barrier against quickly grasping the code. Do you really want monad transformers in a “naive” implementation?8

The most immediately striking thing about writing the Haskell version was just how stateful the uniform32() and init_array() functions are. Their C/Ruby brethren perform lots of variable mutation in those parts, and are consequently difficult to reason about from a pure (type system) perspective. All of the silent type promotions in C were blatantly exposed by the Glasgow Haskell Compiler (GHC), making it necessary for me to write all of those explicit fromIntegral type promotions myself in pcg32_random_r and init_array.

But even with all of these explicit conversions and the un-idiomatic style (naming conventions aside), I find the Haskell version much easier to understand. Just compare how clean binary_search looks in Haskell versus the other two! And the fact that you can define nested functions with the where clause makes hole-driven development a piece of cake.

# Conclusion and Hopes

I hope you’ve enjoyed looking at the various implementations of binary search. Binary search is certainly something you can write on your own, although getting the surrounding technicalities correct can be a chore — but isn’t that always the case when trying to observe the behavior of an algorithm in practice? You can look at the cute 10- or 15-line pseudocode on Wikipedia all day, but how can you be sure that it works? This focus on real-world examples has been a driving principle behind all of my blog posts, and I hope it has helped you understand the algorithm better.

Binary search is something you can apply in real life, too. For me, I came into contact with it again when I learned about git bisect. I also try to use binary search when I code; for example, if a large block of code does not work, I delete large chunks, making the deletions ever finer, until I get to the source of the problem. You can think of these examples as binary search, where the key is the (as yet unknown) bad commit or line of code you have to fix. You can be your own algorithm! Isn’t that cool?

Thanks for reading, and happy hacking!

1. It’s like in the children’s “Telephone” game, where the error of one person gets magnified at every step, until the end when the message gets so garbled up it becomes comical.↩︎

2. “Hole-driven-development”, as I like to call it, is a top-down approach to development. You first define the larger pieces, and continuously define the smaller sub-pieces, until you reach atomic pieces (those pieces which cannot be further sub-divided). You might have noticed that this style of writing code has an eerie parallel to the whole (no pun intended!) discussion about binary subdivision, and so forth.

As an aside, in the Haskell community, hole-driven Haskell takes the same approach, but first you define the behaviors of the functions through its type signatures, and leave the implementation details undefined. This way, you can use the compiler’s type system to help you define what you want as you go; this is certainly a step up from unassisted hole-driven development that we are doing with the pseudocode here.↩︎

3. The condition first_index > last_index might not make sense at first. This was the pseudocode on Wikipedia at the time I wrote this, and it didn’t make sense to me either. But think of it this way: binary search involves repeatedly dividing the list into halves, so first_index and last_index get closer and closer to each other. The point is that the distance between these two markers shrinks, along with the list’s subparts. We can’t simply check whether these two points meet, by writing first_index == last_index, because of the base case of a 1-element list. Such a list has first_index as 0 and last_index also as 0 — because there is only 1 index! In this case, the condition first_index == last_index is inadequate as a check for an empty list.

If you look at the two recursive calls to binary_search(), you will notice that the new first_index and last_index arguments depend on middle_index, and it’s this interplay with middle_index that forces last_index to eventually become smaller than first_index. If you work the algorithm through some small cases, you will see this happen.↩︎

4. Linear search does have one advantage: on a sorted list, it plays well with branch prediction, because the comparison keeps going the same way until we either get a match or reach an element greater than the search key. But as the search space grows, binary search will beat linear search hands down.↩︎

5. However, be mindful of the fact that binary search relies on the input list being sorted. Sorting a list is itself a fundamental problem in computer science, and there are numerous sorting algorithms, as well as data structures designed to keep their contents sorted. In the real world, I think 90% of your time is going to be spent sorting the list first, by which point the speed benefits of binary search won’t hold much influence. And if the search space is always small, you could easily get away with linear search — why bother adding complexity where you don’t need it?↩︎

6. You could trivially add on a proper command-line argument handling mechanism. In particular, KEYS_TOTAL is dying to be decoupled from the program’s internals — but I leave that as an exercise to you. (Hint: use a command-line option parsing library!)↩︎

7. MELISSA E. O’NEILL (2009). The Genuine Sieve of Eratosthenes. Journal of Functional Programming, 19, pp 95-106. doi:10.1017/S0956796808007004. Online draft version.↩︎

8. What if I had indeed made use of the State monad, you ask? Well, first I wouldn’t need to pass in, or get back, the RNG state variables. I would just run the RNG-state-changing functions inside the State monad (actually, probably the StateT monad transformer, as we’re in the IO monad anyway), using get/put to read and write the RNG state.↩︎