Big O Notation

Hello and welcome to our discussion of algorithm analysis and efficiency. In this post we'll introduce big O notation, learn how to simplify big O expressions, define time and space complexity, evaluate some code using big O, and see where logarithms come in.

Let's consider the example where we want to write a function that calculates the sum of all numbers from 1 to (and including) n.

function addUpTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

Let's take the same function and time it with performance.now() to see how long it takes to run.

function addUpTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

let t1 = performance.now();
addUpTo(1000000000);
let t2 = performance.now();
console.log(`Time Elapsed: ${(t2 - t1) / 1000} seconds.`)

The Problem with Time

  • Different machines will record different times

  • The same machine will record different times!

  • For fast algorithms, speed measurements may not be precise enough.

If not time, then what? Rather than counting seconds, which are so variable...

Let's count the number of simple operations the computer has to perform!

function addUpTo(n) {
  let total = 0;  // = is 1 assignment
  for (let i = 1; i <= n; i++) { // = is 1 assignment, <= is n comparisons, ++ is n additions and n assignments  
    total += i; // += is n additions and n assignments 
  }
  return total;
}

Counting is hard!

Depending on what we count as an operation, the number of operations required to compute the sum of all numbers from 1 to n can range from 2n to 5n + 2. Regardless of the exact count, though, the number of operations grows in proportion to n: if n doubles, the number of operations roughly doubles too.

Introducing Big O Notation

Big O notation provides a formalized way of expressing how the runtime of an algorithm grows as its input size grows. It gives us a precise vocabulary for comparing algorithms by their broad trends rather than by exact timings.

Big O Definition

An algorithm is said to be O(f(n)) if the number of basic operations required by the computer to execute the algorithm eventually becomes less than a constant multiple of f(n) as the input size, n, increases.

  • f(n) could be linear (f(n) = n)

  • f(n) could be quadratic (f(n) = n^2)

  • f(n) could be constant (f(n) = 1)

  • f(n) could be something entirely different!

Examples

function addUpTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

The number of operations is (eventually) bounded by a constant multiple of n (say, 10n), so this function is O(n).
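
For contrast, there's also a constant-time way to get the same sum. A minimal sketch (this second function isn't part of the loop-based example above) uses the arithmetic identity 1 + 2 + ... + n = n * (n + 1) / 2, so the work doesn't grow with n at all:

function addUpToConstant(n) {
  // 1 multiplication, 1 addition, 1 division -- always 3 operations, no matter how big n is
  return (n * (n + 1)) / 2;
}

console.log(addUpToConstant(6)); // 21, same result as addUpTo(6), but O(1)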

Simplifying Big O Expressions

When analyzing the time complexity of an algorithm, there are some general rules of thumb that can be used to simplify Big O expressions. These rules of thumb are derived from the formal definition of Big O notation.

Constants don't matter, and smaller terms don't matter: only the fastest-growing term survives.

Examples: O(2n) = O(n), O(500) = O(1), O(1000n + 50) = O(n)
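
As a quick illustration of dropping constants (a made-up example, not code from above), a function with two back-to-back loops does roughly 2n units of work, but we still call it O(n):

function countUpAndDown(n) {
  // first loop: about n operations
  for (let i = 0; i < n; i++) {
    console.log("up:", i);
  }
  // second loop: about n more operations
  for (let j = n - 1; j >= 0; j--) {
    console.log("down:", j);
  }
}
// roughly 2n operations in total, which simplifies to O(n)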

Big O Shorthands

  • Analyzing complexity with big O can get complicated

  • There are several rules of thumb that can help

  • These rules won't ALWAYS work but are a helpful starting point

  • Arithmetic operations are constant

  • Variable assignment is constant

  • Accessing elements in an array (by index) or object (by key) is constant

  • In a loop, the complexity is the length of the loop times the complexity of whatever happens inside the loop
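
The last rule is the one that changes the overall complexity most often. As a rough sketch (the function name here is just for illustration), a loop nested inside another loop over the same input multiplies the work:

function printAllPairs(n) {
  for (let i = 0; i < n; i++) {      // outer loop runs n times
    for (let j = 0; j < n; j++) {    // inner loop runs n times for every outer iteration
      console.log(i, j);             // so this line runs about n * n times
    }
  }
}
// n iterations times n iterations: O(n * n) = O(n^2)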

Space Complexity

We have mainly emphasized the analysis of the time complexity of an algorithm, which refers to the study of how the runtime of an algorithm increases as the size of the input grows.

However, it's also possible to apply big O notation to analyze the space complexity of an algorithm, which means examining how much extra memory we have to allocate to execute the code within our algorithm.

Sometimes you'll hear the term auxiliary space complexity to refer to space required by the algorithm, not including space taken up by the inputs.

Unless otherwise noted, when we talk about space complexity, technically we'll be talking about auxiliary space complexity.

Space Complexity in JavaScript

There are some rules to keep in mind when analyzing space complexity in JavaScript:

  • Most primitives (booleans, numbers, undefined, null) are constant space

  • Strings require O(n) space (where n is the string length)

  • Reference types are generally O(n), where n is the length (for arrays) or the number of keys (for objects)

Example:

function sum(arr) {
  let total = 0;   // one number stored (total)
  for (let i = 0; i < arr.length; i++) {  // one more number stored (i)
    total += arr[i];
  }
  return total;
}  // O(1) space
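
For comparison, here's a sketch of a function whose extra memory does grow with the input (the name double is mine, chosen for illustration). It builds a new array as long as the input, so the space used is proportional to n:

function double(arr) {
  let newArr = [];                  // this array grows with the input
  for (let i = 0; i < arr.length; i++) {
    newArr.push(2 * arr[i]);        // one new element stored per input element
  }
  return newArr;
}  // the new array holds n numbers: O(n) space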

Logarithms

We've already learned about some of the most frequently encountered time complexities in algorithms, including O(1), O(n), and O(n^2).

However, sometimes the big O notation involves more complex mathematical expressions. One such expression that appears quite commonly is the logarithm.

Here's a rule of thumb:

The (base-2) logarithm of a number roughly measures the number of times you can divide that number by 2 before you get a value that's less than or equal to one. (In big O analysis the base doesn't matter, since changing it only introduces a constant factor, so we usually just write log.)

Example: log(8) = 3, because 8 / 2 / 2 / 2 = 1.
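
A minimal sketch of that rule of thumb (this helper isn't from the post; it's only here for intuition): count how many times you can halve a number before it drops to 1 or below.

function roughLog2(n) {
  let count = 0;
  while (n > 1) {
    n = n / 2;   // halve the value...
    count++;     // ...and count the halving
  }
  return count;
}

console.log(roughLog2(8)); // 3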

Certain searching algorithms, such as binary search, exhibit a logarithmic time complexity, meaning that the time required to perform the search grows logarithmically with the size of the input.
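
Binary search is the clearest example: each comparison discards half of the remaining (sorted) array, so the number of steps grows like log(n). A minimal sketch of the idea (my own implementation, assuming a sorted array of numbers):

function binarySearch(sortedArr, target) {
  let left = 0;
  let right = sortedArr.length - 1;
  while (left <= right) {
    let mid = Math.floor((left + right) / 2);
    if (sortedArr[mid] === target) return mid;     // found it
    if (sortedArr[mid] < target) left = mid + 1;   // discard the left half
    else right = mid - 1;                          // discard the right half
  }
  return -1; // not found
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 9)); // 4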

Efficient sorting algorithms, such as merge sort and quicksort, often involve logarithmic time complexity, as they use divide-and-conquer strategies that split the input into smaller and smaller subarrays, with each split involving a logarithmic cost.

In some cases, recursion can result in logarithmic space complexity, particularly when the recursive function divides the problem into smaller and smaller subproblems, resulting in a logarithmic call stack.

Conclusion

Big O notation is a tool that we use to evaluate the performance of an algorithm. By using Big O notation, we can gain a broad understanding of the time or space complexity of the algorithm.

It's important to note that Big O notation is concerned with general trends in the algorithm's performance, such as whether the time or space complexity grows linearly, quadratically, or remains constant, rather than exact measurements.

Furthermore, the time or space complexity, as measured by Big O notation, is dependent solely on the algorithm itself and not the hardware used to run it. This means that an algorithm that has a certain time or space complexity on one machine will have the same complexity on a different machine with different hardware specifications.

Thank you for taking the time to read this blog post about Big O Notation. I trust that the insights shared have been beneficial for you.
