How to Use Big-O Notation

As we design functions, we have to consider how they will work in the real world. There are two common considerations as we compare solutions:

  • Time complexity – The amount of time it will take for our solution to complete
  • Space complexity – The amount of memory that our solution will consume

Unfortunately, we can't afford to test every solution under the exact conditions it will operate in (and, let's face it, we usually don't know those conditions in advance), so we have to find a way to estimate complexity without actual data. This is where mathematics comes to the rescue.

Big-O Notation

There are three related notations for describing complexity:

  • The lower bound – the function grows at least this fast (Big-Omega notation)
  • The upper bound – the function grows at most this fast (Big-O notation)
  • The tight bound – when the upper and lower bounds match (Big-Theta notation)

Big-O notation shows us how a function scales with bigger and smaller data sets. We use the variable n to represent the size of the input, and then we look at the function to determine how a change in n changes the time or space requirements. Common answers look like this:

  • O(1) – The size of the data has no effect on the time or space required
  • O(log(n)) – The difference in cost between larger data sets is less than that between smaller data sets (looks like a logarithmic curve)
  • O(n) – Changing the size increases cost linearly (consistent increase)
  • O(n^2) – Larger data sets increase the cost much more than smaller data sets (looks like a parabola)
  • O(n^y) – For y > 2, larger data sets REALLY increase the cost

That probably doesn’t make a lot of sense, so let’s work through a few examples.

Example 1: A for loop

Let's look at a simple for loop:

int i;
int n;    /* n is the size of the data set */

for(i=0; i<n; i++)
    ;     /* some constant-time work happens here */


In this case, we can easily determine the big-O notation, because we know that the for loop will execute once for every value of n. That means that the actual time cost of the for loop is x*n, where x is the time it takes to go through the for loop. The x is constant and (theoretically) much smaller than n, so we ignore it.

Thus, the big-O notation is:

O(n)

Example 2: The bubble sort

Now we'll look at a sorting algorithm. We know that the common comparison-based sorts land between O(n*log(n)) and O(n^2), because nice mathematicians have proved the lower bound for us. This algorithm (a simple selection-style sort, often introduced alongside bubble sort) takes a set of unsorted data and returns a set of sorted data:

int i,j;
int data[n];
int hold;

//Go through all the data in the set
for(i=0; i<n; i++)
    //We're going to put the smallest value in the "i-th" position
    //compare against all the unsorted data
    for(j=i; j<n; j++)
        //If we find something smaller, swap it into the "i-th" place
        if(data[j] < data[i])
        {
            hold = data[i];
            data[i] = data[j];
            data[j] = hold;
        }
    //Now the "i-th" position has the smallest value in the remaining set

If we think about it, as the algorithm goes through the set, it has to go through most of the set at each stage. Mathematically, it looks like this:

n + (n-1) + (n-2) + (n-3) + … + (n – (n-1))

If we reorganize and simplify (it's an arithmetic series), it looks like this:

(n^2 + n) / 2

The n term is much smaller than n^2, and the constant factor 1/2 doesn't change how the cost grows, so we can safely ignore both, leaving us with a Big-O notation of:

O(n^2)

The Formula

When you need to solve for Big-O notation, you go through these steps:

  1. Write out your function
  2. Step through and figure out how many loops depend on the size of the data (n)
  3. Find the mathematical formula that describes just these loops
  4. Simplify
  5. Consider only the largest factor for Big-O notation
  6. Write a paper to convince your professor that this was really hard
  7. Win

Cleaning up – Comment out old code

If you’ve spent any appreciable time in the programming world, you’ve probably reorganized your code any number of times. When I’m writing code, I usually slap things out as I think of them, resulting in code that (while easy enough to read) defies the principle of unity. You’ll see me write things like this:

int main()
{
    int i, j, k=0;

    char * message = "Says something\n";


        printf("%s", message);
    char * pandamonium = NULL;


    struct a_structure * thisone = (struct a_structure *) malloc(sizeof(struct a_structure));
}


You see my point. While there’s nothing functionally wrong with this code, we usually want to put all of our variables up top in one “data” section. This is something of a holdover from assembly languages (which have distinct sections for data and code), but it’s not a bad practice at all. Generally, we try to keep this standard.

Reorganizing these things, though, often introduces errors, and they can be a hair unpredictable – I recently dealt with a reorg that invariably threw segfaults, even though the code appeared functionally equivalent to the original.

There’s one way to ensure that you always have code that works, even as you reorganize it:

Comment the old code out instead of deleting it.

It’s a simple process:

  1. Copy your code out, so now you have two identical copies of the same blocks.
  2. Use either block comments or preprocessor conditionals (I use #if 0 … #endif, but /*…*/ works just as well) to isolate one of those working copies
  3. Rearrange the un-commented code, and test relatively often.
  4. If you can’t figure out why the reorg is broken, at least you don’t have to rewrite the function.

When you’re 100% happy with your reorg, you can delete the commented-out code, but until then the cost of duplicate code is too low to consider against the cost of rewrites.

Pointer Arithmetic Matters

What's the first thing you think of when I say the word "pointer"? Generally, you either picture a particular register (unlikely) or a hexadecimal number representing a location in memory (likely). Personally, I'm in the latter category, which is why this problem arose.

I was in the middle of programming, and I decided to allocate an array of a given structure. If we pretend that my structure was called “panda”, the code would look something like this:

//I had some number of structures to allocate, but I wasn't sure in advance how many
int j = get_quantity();
struct panda * NP = (struct panda *) calloc (j, sizeof(struct panda));

All very straightforward so far. I figured out how many structures I would need and allocated a contiguous (and zero-filled) memory space for them. This basically created an array of struct panda’s, with every field of every struct panda set to 0.

Here’s where it got a bit tricky. I was going to populate all of this space using a loop that looked something like this:

//Repeat once for every struct I allocated
for(i=0; i < j; i++)
        fill_panda(NP + (i * sizeof(struct panda)), …);

This code morsel threw a SegFault every time.

Even though there’s not much there, it took me hours to figure out how I screwed this up.

Here’s a hint: it’s right there in fill_panda.

Give up?

You see, I was acting as though NP were a (64-bit) hexadecimal number corresponding to a byte in memory. If that were 100% true, there would be no problem. However, NP isn't just some hexadecimal value – it's a POINTER.

Not just any pointer, but a pointer to a struct panda.

It turns out that C pointers know the size of the type they point to. If you say NP + 1, you get the address of the next struct panda – functionally, NP + 1 is the same as &NP[1].

I basically dynamically allocated an array, right? And we can use array notation on an array, right? Good.

So, when I meant to say:

&NP[ i ]

I actually said:

&NP[ i * sizeof(struct panda) ]

Whoops. Because struct panda is not the size of a single character, I was guaranteed to go out of the bounds of my allocated space.

To drive the point home, so no one can forget:
NP + 1 == &NP[1].
NP == &NP[0].
NP + i == &NP[i].

Don’t overthink it, like I did.

No post today, but a bit of advice

This bears repeating:

Even when there's no way for a function to fail, return an integer.

Why? Because it allows the user to go on autopilot. If all of your functions return int (except for a few explicitly-named constructors, which can return the desired struct), the end user can just work your code into his error-handling framework without complaint.

When some return int, some return char *, some return uint8_t, and some return void, it makes the user think about how to deal with every function, adding complexity where it doesn’t need to be.

Complexity is the enemy. Keep it simple.
