My boyfriend is a software engineer — What’s Big O?

spareproj
3 min read · Jun 15, 2024

--

This is a series of educational pieces on computer science-related topics that he insists I — someone without a computer science background — know.

So I got lectured for not remembering the complete meaning of BIG O. So I'm here to write this down and let you know too, in case, you know, anyone asks.

What’s considered better?

Who’s a better software engineer?

Whose code is better?

What’s considered “better”?

Unlike the complex vagaries of life and humans, determining whether one algorithm is superior to another can be done clearly via BIG O NOTATION. I'm going to scream it in all caps.

There are many different algorithms that can be used to perform the same task and arrive at the same outcome.

You can have an algorithm that takes 10 hours and consumes 10x more space, and another that does the same job in less than a minute using less space.
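To make this concrete (a tiny sketch of my own, not from the lecture): here are two ways to add up the numbers from 1 to n. The loop does n additions; the formula does one multiplication no matter how big n gets. Same task, same answer, very different amounts of work.

```python
def sum_loop(n):
    # Adds the numbers one by one: the work grows with n.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # Gauss's closed-form formula: a single calculation,
    # no matter how large n is.
    return n * (n + 1) // 2
```

Both return 5050 for n = 100, but only one of them would keep you waiting if n were a billion.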

An algorithm is considered superior to another if it uses less time and space to perform the same task.

BIG O NOTATION is the mathematical notation used to represent this efficiency of an algorithm.

It describes, in the worst-case scenario, how much time and space the algorithm will require as the input size increases.

Common BIG O NOTATIONS for time

These are 3 easy-to-understand BIG O NOTATIONS, describing 3 different kinds of algorithms.

O(1): Constant time complexity. An algorithm with O(1) complexity takes the same time to run regardless of input size.

as input size increases, there’s no change in runtime
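A little sketch of O(1) in Python (my own example, not his): grabbing the first item of a list takes one step whether the list holds 3 items or 3 million.

```python
def first_item(items):
    # Indexing into a list is a single operation:
    # the runtime does not depend on how long the list is.
    return items[0]
```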

O(n): Linear time complexity. An algorithm with O(n) complexity has a runtime that grows linearly with the input size.

as input size increases, runtime increases proportionally: double the input, double the runtime
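A sketch of O(n) (again, my own toy example): searching an unsorted list for a value. In the worst case, when the value is last or missing entirely, every single item gets checked once.

```python
def contains(items, target):
    # Worst case: the target is absent, so the loop
    # touches every item exactly once. Twice the items,
    # twice the checks.
    for item in items:
        if item == target:
            return True
    return False
```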

O(log n): Logarithmic time complexity. An algorithm with O(log n) complexity has a runtime that grows logarithmically with input size.

as input size increases, runtime grows at a slower rate
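The classic O(log n) example is binary search on a sorted list (a standard illustration, not anything specific from the lecture). Each comparison throws away half of the remaining range, so even a million items need only about 20 steps.

```python
def binary_search(sorted_items, target):
    # Each pass halves the search range, so the number of
    # passes grows like log2(n) rather than n.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid          # found: return its index
        elif sorted_items[mid] < target:
            lo = mid + 1        # target is in the upper half
        else:
            hi = mid - 1        # target is in the lower half
    return -1                   # not present
```

Note the catch: the list must already be sorted for this to work.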

Of these 3, which do you think is the superior algorithm?

In the software world, we like to imagine the worst-case scenarios.

For algorithms, the worst-case scenario is when a huge input size is required. If input size grows astronomically, an algorithm that is able to control its runtime and space requirements would be the better option.

Going by this logic, O(1) should generally be considered the best time complexity because the runtime doesn’t increase at all no matter the input size.

But very few algorithms fit this criterion. Most algorithms, especially the useful ones with broad-range usability, have some dependency on the input size.

Hence, algorithms that follow O(log n) are the second-best out of these 3 options.
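A quick back-of-the-envelope check of why this ranking holds (my own arithmetic, using a hypothetical input of a million items):

```python
import math

n = 1_000_000  # a hypothetical "astronomical" input size

steps = {
    "O(1)":      1,                      # always one step
    "O(log n)":  math.ceil(math.log2(n)),  # about 20 steps
    "O(n)":      n,                      # a million steps
}
```

Twenty steps instead of a million is why O(log n) earns second place.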

Of course, there are many other, more complex BIG O NOTATIONS out there, and depending on the situation a different one may be the better fit.

But this is where my understanding ends.
