Data Structures & Algorithms: A Beginner’s Grasp of Big O Notation
A brief overview of Big O for beginners by a beginner.
In good practice, when we write code, we are constantly thinking about how we can optimize it, whether for better readability, speed, or concision. We often think, at least I did, that the fewer lines of code you write, the more optimized it is. However, this is not entirely true, and it's a great segue into our discussion of Big O.
So what is Big O?
I thought you’d never ask! Big O allows you to understand the efficiency of your code by measuring its time and space complexity. It describes how time and space, or memory, scale with respect to some input variable n in an algorithm. Confused? It’s okay, this is quite a complex concept to grasp, but once it clicks it gets easier to understand, trust me (not that I truly, fully understand it, but after reading Cracking the Coding Interview and watching some YouTube videos, it makes a little more sense lol).
Let’s first break down the notation with the slower growing functions listed first:
- Constant: O(1)
- Logarithmic: O(log n)
- Linear: O(n)
- Quadratic: O(n²)
There are definitely more to mention; however, in this post we will stick to the four listed, as they are the easiest to explain and the most common to come across. So, before we see them in action, let’s take a look at this graph, which is extremely helpful to refer to as you learn more about Big O.
As you may notice from this graph, the slower growing functions perform more optimally, as they do the least work as n gets large. Now, when I say fastest, I don’t necessarily mean in terms of seconds. Why is that? Well, it’s because counting time in seconds is unpredictable: it depends on what you run the algorithm on, such as what type of hardware your computer has. And even on the same computer, the times will differ from run to run, so the measurement is not precise. Instead, we can count the number of operations our software must perform in the algorithm to determine its complexity. For a sense of scale: at n = 1,000, a logarithmic algorithm does roughly 10 operations, a linear one roughly 1,000, and a quadratic one roughly 1,000,000.
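To see what "counting operations" means, here is a minimal sketch where I instrument a function by hand. `sumArray` is my own made-up example, not from any library:

```typescript
// Count operations instead of seconds: the count depends only on n,
// not on what hardware the code happens to run on.
function sumArray(nums: number[]): number {
  let operations = 0;
  let total = 0;
  operations++; // count the assignment above
  for (const num of nums) {
    total += num;
    operations++; // one addition per element
  }
  console.log(`n = ${nums.length}, operations ≈ ${operations}`);
  return total;
}

sumArray([1, 2, 3]);              // n = 3,   operations ≈ 4
sumArray(new Array(100).fill(1)); // n = 100, operations ≈ 101
```

The operation count grows in step with n, which is exactly the relationship Big O is trying to capture.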
We want to think about how our code performs as the input n goes towards infinity, which is why we describe algorithm runtimes as linear, quadratic, etc. So when we say an algorithm is O(n) or O(n²) or O(1) and so on, we are saying that as n grows, the number of operations, and the time it takes, grows roughly in proportion with the function f(n) = n, f(n) = n², f(n) = 1, etc. We think about space complexity in the same manner, asking ourselves: how much additional memory do we need to allocate in order to run our algorithm as n grows?
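To make the four classes from the list above concrete, here are small sketches of what each one can look like in code. All of the function names are my own examples:

```typescript
// O(1) — constant: same amount of work no matter how big the input is
function getFirst(nums: number[]): number | undefined {
  return nums[0]; // one array access by index
}

// O(log n) — logarithmic: binary search halves the search space each step
// (assumes `nums` is already sorted)
function binarySearch(nums: number[], target: number): number {
  let lo = 0;
  let hi = nums.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (nums[mid] === target) return mid;
    if (nums[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1; // not found
}

// O(n) — linear: touches every element exactly once
function sum(nums: number[]): number {
  let total = 0;
  for (const num of nums) total += num;
  return total;
}

// O(n²) — quadratic: a loop inside a loop, checking every pair
function hasDuplicate(nums: number[]): boolean {
  for (let i = 0; i < nums.length; i++) {
    for (let j = i + 1; j < nums.length; j++) {
      if (nums[i] === nums[j]) return true;
    }
  }
  return false;
}
```

Space complexity works the same way: `sum` above only ever allocates one extra variable, so it uses O(1) extra space, whereas a function that builds a new array the same length as its input, like `nums.map((x) => x * 2)`, needs O(n) extra space.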
Some quick tips or shorthands you can try to think about when determining the time and space complexity of an algorithm:
- Arithmetic operations are constant.
- Variable assignment is constant.
- Accessing elements in an array by index or object by key is constant.
- In a loop, the complexity is the length of the loop times the complexity of whatever happens inside the loop (see the annotated sketch right after this list).
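Putting those shorthands together on a small made-up function of mine:

```typescript
function average(nums: number[]): number {
  let total = 0;                          // variable assignment: O(1)
  for (let i = 0; i < nums.length; i++) { // the loop runs n times...
    total += nums[i];                     // ...array access + arithmetic inside: O(1) each
  }
  return total / nums.length;             // arithmetic: O(1)
}
// length of the loop (n) × work inside the loop (O(1)) = O(n) overall
```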
These shorthands will be a good beginner guide to understanding how efficient your code is and will help you, in the long run, to understand Big O notation. This beginner guide video on Big O is extremely helpful when trying to understand the overall concepts in this topic!
Now that we are getting into tips about Big O, there are also four important rules to know:
- If an algorithm has two different steps, add the steps
- Drop constants
- Different inputs = different variables
- Drop non-dominant terms
So for rule number one, let’s say we have an algorithm with two steps, where the first step is O(a) and the second step is O(b). To determine the Big O notation for the algorithm as a whole, we add the steps, giving O(a+b). For rule two, we drop constants because we are only looking at how things scale roughly; constants don’t really matter since all we are trying to determine is whether the relationship of the time and space complexity is constant, linear, quadratic, etc., so O(2n) just becomes O(n). Rule number three is mostly common sense: if you have two different inputs, like two different arrays, then we use different variables to represent them, like a and b. For rule number four, we drop the non-dominant terms because, as n grows, the non-dominant term becomes so small in comparison to the dominant term that it barely affects the total, so O(n² + n) simplifies to O(n²).
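Here is a sketch of all four rules in action; the functions are my own made-up examples:

```typescript
// Rules 1 & 3: different steps add up, and different inputs get
// different variables
function printBoth(letters: string[], nums: number[]): void {
  for (const letter of letters) console.log(letter); // O(a), a = letters.length
  for (const num of nums) console.log(num);          // O(b), b = nums.length
}
// total: O(a + b), not O(n) — the two inputs are independent

// Rules 2 & 4: drop constants and non-dominant terms
function printPairsThenAll(nums: number[]): void {
  for (let i = 0; i < nums.length; i++) {
    for (let j = 0; j < nums.length; j++) {
      console.log(nums[i], nums[j]); // runs n² times
    }
  }
  for (const num of nums) {
    console.log(num); // runs n more times
  }
}
// counting operations gives n² + n, but as n grows the n term barely
// matters next to n², so we drop it: O(n² + n) → O(n²)
```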
So…what’s the Big O deal?
It’s important to talk about how your code performs because we strive to write algorithms that are efficient. Time (& space) is money! The more optimized our code is, the faster it runs and the less space it takes up. Let’s say you’re on the job and your team needs to build a particular functionality efficiently: to determine which algorithm is most efficient to utilize, you can use Big O! Or let’s say your code slows down: you are able to identify the parts of your code that are inefficient, which can aid in your debugging process and overall allow your application to run smoothly. The last reason why Big O is so important to learn is that IT COMES UP IN TECHNICAL INTERVIEWS. Now, I’ve only done one mock technical interview, but my interviewer, who has years and years of experience working at top companies, told me that you’ll most likely be given a problem and asked to solve it to the best of your ability. Once you’ve completed that, the interviewer will ask you, “how can you optimize this code?”, at which point you would use your now basic knowledge of Big O to optimize your code with respect to time and space complexity.
Conclusion
I have only really touched the surface of this complex concept. I definitely need to learn a lot more and practice identifying algorithms that run optimally vs. those that don’t, and how I can apply that when writing my own code. If I made a mistake on anything in this blog post or confused something with something else, feel free to let me know! I’m still learning and hope to continue to deepen my understanding of Big O so that I can exponentially grow as a developer! Happy coding!