A small introduction to Big O Notation

It's said that everyone in the 21st century should be able to code because it teaches one to think. However, there's much more to writing code than meets the eye, especially when building applications that need to scale and eventually support huge amounts of data and huge numbers of users.

Just as not everyone who rides a bike is a cyclist, not everyone who writes code can be termed a software engineer. A major difference exists between the two.

That said, engineers are concerned with efficiency and craftsmanship: how things are done, how they can be done better, how elegantly the code presents itself, and how well the work can be scaled and maintained. We all change code: some because it just doesn't make sense, some because it could be cleaner, and others because it just needs to perform better!

So we change code for many reasons, great! But how do we know that the code that was written initially is actually as bad as we made it out to be? Or, even better, how do we know that our newly written "better performing" code actually is better performing?

One cannot prove what one cannot measure, so how would we measure the performance or efficiency of a piece of code, or better yet, an algorithm?

Big O Notation

This is the language we use to articulate how long an algorithm takes to run. In other words, it describes the performance and/or complexity of an algorithm. The measurement is expressed in relation to the size of the data set the algorithm is dealing with.

So we aim to answer questions like: how quickly does the running time grow as the size of the data set increases? We phrase run-time growth in terms of the size of the data set, so it is common to hear phrases like "the run time grows on the order of the size of the data set" (O(n)).

To make it simpler, let's have a look at some common orders of growth.

1. O(1)

This indicates that an algorithm will always execute in the same amount of time, irrespective of the size of the input. The situation can be depicted with the following function:

bool CheckElement(List<int> numbers)
{
    // Only ever inspects the first element, so the running time
    // does not depend on how many elements the list holds.
    return numbers[0] == 2;
}


2. O(n)

This describes an algorithm whose performance grows in direct proportion to the number of elements in the data set. If one item takes a second, it's expected that 10 items will take 10 seconds.

void PrintElement(List<int> numbers)
{
    // Visits each element exactly once, so the running time
    // grows linearly with the size of the list.
    foreach (var num in numbers)
    {
        Console.WriteLine(num);
    }
}

3. O(n^2) (n squared)

This describes an algorithm whose performance is in direct relation to the square of the number of elements in the data set. This is typically seen in nested comparisons or iterations over a data set (nested loops). The deeper the nesting goes, the higher the exponent grows (n^3, n^4, and so on).
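As a minimal sketch (the method name and scenario are my own illustration, not from the article), a duplicate check that compares every pair of elements is a classic O(n^2) case:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical example: returns true if any value appears more than once.
// The outer loop runs n times and the inner loop up to n times for each
// outer pass, so the total work grows on the order of n * n.
bool ContainsDuplicate(List<int> numbers)
{
    for (int i = 0; i < numbers.Count; i++)
    {
        for (int j = i + 1; j < numbers.Count; j++)
        {
            if (numbers[i] == numbers[j])
                return true;
        }
    }
    return false;
}
```

Doubling the size of the list roughly quadruples the number of comparisons, which is exactly the growth the O(n^2) label captures.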



The examples and possible scenarios can grow into an exhaustive list and lead into various other topics; however, this article was meant merely to explore the concept of Big O Notation at the most basic level. For further reading, I recommend:

  1. "Data Structures and Algorithms Made Easy in Java"
  2. Khan Academy: Computer Science Algorithms
  3. Data Structures and Algorithms (2nd Edition)




