Big O notation is one of the topics programmers find hardest, largely because of the mathematics behind it. Let us try to understand what exactly Big O notation means, along with its examples, advantages, limitations, and more.

#### What is Big O notation?

**In simple words, Big O notation is the most commonly used notation for describing the performance of an algorithm in terms of its order of growth.**

In practice, we are usually more interested in the general order of growth of an algorithm than in its exact running time.

If we have two different algorithms that solve the same problem, where one executes in 10 iterations and the other in 20, the difference between them is not significant.

However, if the first algorithm executes 10 iterations and the other 1000, then the difference is a matter of real concern.

[Chart: common Big-O complexity classes, from O(1) to O(n!). Source: HackerEarth]

We have seen that the number of statements a function executes for n elements of data is itself a function of the number of elements, expressed as f(n).

Even if the equation derived for a function may be complex, a dominant factor in the equation is sufficient to determine the order of magnitude of the result and hence the efficiency of the algorithm.

This factor is the **big-O**, as in **‘on the order of’**, and is expressed as O(n). In Big O notation, **O** stands for **‘order of’** and is concerned with what happens for very large values of n.

For instance, if a sorting algorithm performs n^{2} operations to sort just n elements, then that algorithm would be described as an O(n^{2}) algorithm.

When expressing the complexity of an algorithm using Big O notation, constant multipliers are not taken into consideration, so an **O(4n)** algorithm is equivalent to **O(n)**.

If f(n) and g(n) are two functions defined for positive integers, then

f(n) is O(g(n)) if there exist constants c and n_{0} such that

**f(n) <= c·g(n) for all n >= n_{0}**

This implies that f(n) does not grow faster than g(n), or g(n) is an upper bound on the function f(n).
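Under this definition, showing that a function is O(g(n)) amounts to exhibiting a witness pair (c, n_{0}). A minimal Python sketch, using the made-up example f(n) = 3n + 5, which is O(n) with c = 4 and n_{0} = 5:

```python
# Hypothetical example: f(n) = 3n + 5 is O(n),
# witnessed by c = 4 and n0 = 5, since 3n + 5 <= 4n for all n >= 5.

def f(n):
    return 3 * n + 5

def g(n):
    return n

c, n0 = 4, 5

# Spot-check the inequality over a large range of n.
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
print("f(n) <= c*g(n) holds for all checked n >= n0")
```

Any larger c (or n_{0}) would also serve as a witness; the definition only requires that some pair exists.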

Here, c is a constant whose value depends on factors such as:

- CPU
- Main memory and its access time
- Quality of the interpreter or the compiler
- Programming language
- Algorithm

**Note: f(n) = O(g(n)) is read as “f(n) is big oh of g(n)” or “f(n) is on the order of g(n)”.**

The constant values are ignored because the main purpose of Big O notation is to analyze an algorithm in a general fashion, so anomalies that appear only for small input sizes are simply disregarded.

#### Big O Notation Categories

Big O notation places an algorithm into one of a small set of categories. Let us take some common complexities and see the situations in which they occur.

**1. O(1): Constant Time Algorithm**

Constant time algorithms have a running time complexity of O(1). This is the case when we obtain, say, the first element of a collection, or look up a key directly, as in the average case of a **hash table**. Here the running time is independent of the input size.
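As a short Python sketch of constant-time operations (the list and dictionary here are made-up examples):

```python
# Constant-time operations: the cost does not depend on the input size.

def first_element(items):
    return items[0]          # indexing a Python list is O(1)

prices = {"apple": 30, "banana": 10}
print(prices["apple"])       # average-case O(1) hash-table lookup -> 30
print(first_element([5, 2, 9]))   # -> 5
```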

**2. O(n): Linear Time Algorithm**

Linear time algorithms have a running time complexity of O(n). This is the case when each element in a set of data has to be processed.

Here the running time is directly proportional to n, i.e., if the input size doubles, the running time also doubles, e.g. the worst case of the **linear search algorithm** or the best case of the **bubble sort algorithm.**
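A minimal linear search in Python, illustrating the worst case where the entire input must be scanned:

```python
def linear_search(items, target):
    """Return the index of target, or -1 if absent: O(n) in the worst case."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

print(linear_search([4, 7, 1, 9], 9))   # target is last -> 3 (worst case)
print(linear_search([4, 7, 1, 9], 5))   # absent -> -1 (also scans all n)
```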

**3. O(log n): Logarithmic Time Algorithm**

Logarithmic time algorithms have a running time complexity of O(log n). This is the case in algorithms where a set of data is repeatedly divided in half and only one half (typically via a middle element) is processed further, e.g. the **binary search algorithm** or search in a balanced binary tree.
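A standard iterative binary search sketch in Python (the input must already be sorted):

```python
def binary_search(sorted_items, target):
    """Repeatedly halve the search range: O(log n) comparisons."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # -> 3
```

Each iteration halves the remaining range, so at most about log2(n) + 1 iterations run.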

**4. O(n log n): Linear Logarithmic Time Algorithm**

Linear logarithmic (linearithmic) time algorithms have a running time complexity of O(n log n). This is the case when a set of data is repeatedly divided in half and each half is processed independently, e.g. the best case of the **quick sort algorithm**.
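As a sketch of this divide-and-process pattern, here is merge sort, another O(n log n) algorithm (shown instead of quicksort because its halving is exact in every case):

```python
def merge_sort(items):
    """Split in half, sort each half, merge: O(n log n) in every case."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))   # -> [1, 2, 5, 7, 9]
```

There are log2(n) levels of halving, and each level does O(n) merge work, giving O(n log n) overall.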

**5. O(n^{2}): Quadratic Time Algorithm**

Quadratic time algorithms are polynomial time algorithms, whose running time complexity is O(n^{k}) with k > 1; here k = 2.

This is the case when the full set of data has to be traversed for each element of the set, or in other words, when all pairs of data items are processed, e.g. the worst case of bubble sort. Quadratic algorithms are practical only for small data sets because n^{2} increases rapidly as n increases.
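A minimal bubble sort in Python; the nested loops make the pairwise, O(n^{2}) work explicit:

```python
def bubble_sort(items):
    """Compare adjacent pairs on every pass: O(n^2) comparisons in the worst case."""
    items = list(items)          # work on a copy
    n = len(items)
    for i in range(n):
        # After pass i, the last i elements are already in place.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 2, 9, 1]))   # -> [1, 2, 5, 9]
```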

**6. O(n^{3}): Cubic Time Algorithm**

This is another variant of polynomial time algorithms, O(n^{k}) with k = 3. It is the case when all triplets of data items are processed.
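A small illustration in Python: counting zero-sum triplets (a made-up task) examines all C(n, 3) ≈ n^{3}/6 triples, which is O(n^{3}):

```python
from itertools import combinations

def count_zero_sum_triplets(nums):
    """Examine every triplet of items: roughly n^3 / 6 checks, i.e. O(n^3)."""
    return sum(1 for a, b, c in combinations(nums, 3) if a + b + c == 0)

print(count_zero_sum_triplets([-1, 0, 1, 2, -2]))   # -> 2
```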

**7. O(2^{n}): Exponential Time Algorithm**

Exponential time algorithms have a running time complexity of O(2^{n}). This is the case when all possible subsets of a set of data are generated.
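A short sketch that generates every subset; the result list doubles with each element, so even listing the output is O(2^{n}):

```python
def all_subsets(items):
    """Generate every subset: 2^n subsets, so O(2^n) work just to list them."""
    subsets = [[]]
    for x in items:
        # Each existing subset spawns a copy with x appended: the list doubles.
        subsets += [s + [x] for s in subsets]
    return subsets

print(len(all_subsets([1, 2, 3])))   # -> 8, i.e. 2^3
```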

**8. O(n!): Factorial Time Algorithm**

This is the case when all the possible permutations of a set of data are generated.
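A quick illustration using Python's standard `itertools.permutations`; note how fast n! grows (already 3,628,800 for n = 10):

```python
from itertools import permutations

# n! orderings of n items: O(n!) work just to enumerate them.
perms = list(permutations([1, 2, 3]))
print(len(perms))   # -> 6, i.e. 3!
print(perms[0])     # -> (1, 2, 3)
```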

#### Big Oh Notation Examples

Let us see the different number of operations for different functions of n with the help of the following Big O table.

n | O(1) | O(log n) | O(n) | O(n log n) | O(n^{2}) | O(n^{3})
---|---|---|---|---|---|---
1 | 1 | 1 | 1 | 1 | 1 | 1
2 | 1 | 1 | 2 | 2 | 4 | 8
4 | 1 | 2 | 4 | 8 | 16 | 64
8 | 1 | 3 | 8 | 24 | 64 | 512
16 | 1 | 4 | 16 | 64 | 256 | 4096
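The table can be reproduced with a few lines of Python (note that the table treats log2(1) as 1 rather than 0):

```python
import math

# Reproduce the growth table: operation counts for each order of growth.
for n in [1, 2, 4, 8, 16]:
    log_n = max(1, int(math.log2(n)))   # the table floors log2(1) up to 1
    print(n, 1, log_n, n, n * log_n, n**2, n**3)
```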

Now you can see why Big O notation is a convenient way to express the worst-case scenario for a given algorithm, although it can be used to express the average-case scenario as well.

**Calculate Big O notation for the given f(n).**

**f(n) = n(n + 1)/2**

The function can be expanded as 1/2(n^{2}) + 1/2(n).

**Step 1:** Set the coefficient of each term to 1, so now we have n^{2} + n.

**Step 2:** Keep the largest term and discard the others, so discard n and the big O notation can be given as O(g(n)) = O(n^{2}).
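A quick empirical check of this result in Python, using c = 1 and n_{0} = 1 as the witness constants:

```python
# Check that f(n) = n(n + 1)/2 is O(n^2): f(n) <= 1 * n^2 for all n >= 1.
def f(n):
    return n * (n + 1) // 2

assert all(f(n) <= n**2 for n in range(1, 10_000))
print(f(100))   # -> 5050
```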

#### Advantages of Big O Notation

Although we have now seen what Big O notation can do, it is still worth glancing over its advantages.

- Independent of the computer architecture
- Largely independent of the interpreter or compiler
- Easy to understand and communicate

#### Limitations of Big O Notation

There are some limitations to expressing the complexity of algorithms with Big O notation. They are listed here:

- Many algorithms are simply too difficult to analyze mathematically.
- There may not be sufficient information to determine an algorithm's behaviour in the average case.
- Big O notation sometimes ignores important constants. For instance, if one algorithm takes O(n^{3}) time to run and another takes O(100n^{3}) time, both have equal time complexity according to Big O notation.
- Big O notation tells us how an algorithm grows with the size of the problem, not its absolute efficiency.

If you have any doubts or suggestions about Big O notation, share them with our readers in the comment section below. You can find more about Big O notation on **Wikipedia**.