## The Big O Notation - Time Complexity

I've just begun learning algorithms... Yes, just begun as I am not a Computer Science graduate (although I wish I had... some of my regrets in life. A story for another time.). I recently read swyx's blog post about learning in public and was inspired by it, thus I decided to blog about the things that I've learned whilst going through this course.

I won't go through a lot on explaining the definition of the Big O Notation and its use, as that information is readily available on Wikipedia and is quite understandable. The TL;DR of it is that it's used to measure the complexity of an algorithm, which is really useful when optimizing your code.

Let's take for example that we want to calculate the sum from 1 to a user-defined number, *n*. Me, as a person whose brain does not respond with formulas at first glance, would create a function like so:

P.S. I'll be using non-idiomatic (no fancy functions like map, reduce, etc.) JavaScript for all of my code samples, as it's something that anyone could execute in their browser and should be easy to understand.

```javascript
function sumOfOneTo(n) {
  let total = 0;
  for (let index = 0; index <= n; index++) {
    total += index;
  }
  return total;
}
```

This would work, and should be enough on the first go. But as we pass larger values to the function, you would notice that it gets "slower".

Now let's look into the *other* solution that does the exact same thing, albeit, with a sprinkle of math:

```javascript
function sumOfOneTo(n) {
  return (n * (n + 1)) / 2;
}
```

Wow, that's way less code... But that's not the point here, and no, I won't go into details as to how we arrived at this formula. What we have to point out though is the complexity of this function. We would notice that all we have here are **3 operations** (1 multiplication, 1 addition, and 1 division) **regardless of the size of n**.
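As a quick sanity check, both versions should return the same answer for any non-negative `n`. I've renamed them here (`sumLoop` and `sumFormula`) purely so they can sit side by side in one snippet:

```javascript
// Loop-based version from the first example
function sumLoop(n) {
  let total = 0;
  for (let index = 0; index <= n; index++) {
    total += index;
  }
  return total;
}

// Formula-based version from the second example
function sumFormula(n) {
  return (n * (n + 1)) / 2;
}

console.log(sumLoop(10));    // 55
console.log(sumFormula(10)); // 55
```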

Meanwhile, for the first solution, we had 4 assignments, 2 additions, and 1 comparison... oh wait, no... it's more than that.

We actually have 2 assignments, 2 additions, and 1 comparison that would be executed `n` times, plus 2 initial assignments which do not get re-executed on each iteration of the loop. So if the value of `n` is 5, it'll be 5 * 5 operations + 2 operations.

So how many is that? 27 operations. If we pass in 10, it'll be 10 * 5 + 2 = **52 operations**. (Strictly speaking, the `<=` makes the loop body run `n + 1` times, but a handful of extra operations doesn't change the picture.)

With that said, it's very obvious that the first solution is way slower than the second solution, because the first solution's complexity is tied to the value of `n` while the second's isn't.
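That counting model (5 operations per loop iteration, plus 2 setup assignments) can be written down directly. This is just my own illustrative tally, not anything standard:

```javascript
// Operation count for the loop version of sumOfOneTo(n):
// 2 initial assignments, then 5 operations per iteration
// (1 comparison, 2 additions, 2 assignments).
function countOperations(n) {
  return 5 * n + 2;
}

console.log(countOperations(5)); // 27
```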

That is what the Big O notation is all about. What I didn't quite get yet is the definition of the Big O... As per the course that I am currently taking:

> We say that an algorithm is O(f(n)) if the number of simple operations the computer has to do is eventually less than a constant times f(n), as n increases.

What I understood though is that in the first example we had above, the number of operations is bounded by the value of `n`. Thus, the Big O notation of the first one is `O(n)`. It doesn't matter how many operations it has, which is 7 in our example, since what causes the complexity is the fact that the function is tied to the value of `n`.

So basically, all we have to look into is whether there is any relation between the number of times the loop would execute our operations and the value of the function's parameter.

If a loop executes a predetermined number of times regardless of the value of the function parameter, we can safely disregard that number.

With that in mind, the second example code, although having 3 operations, will be `O(1)`, since we know beforehand the number of times those operations will be called.

That also means that the ff. code would be `O(1)`:

```javascript
let sum = 0;

// We very well know that the ff. loop will have a constant number of
// operations, regardless of any input
for (let index = 0; index < 5; index++) {
  sum += index;
}

// The same goes with the ff., since the maximum number of loops is
// capped at 5 no matter how large n gets
for (let index = 0; index < Math.min(n, 5); index++) {
  sum += index;
}
```

If we used `Math.max(n, 5)` though, it would be `O(n)`, since the loop would run `Math.max(n, 5)` times: at least 5 iterations, but with no upper bound other than the value of `n` itself. We cannot pre-define the maximum number of iterations, as it is determined by the parameter.
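To make that concrete, here's a small sketch of the `Math.max` variant (the function name `sumUpToMax` is just my own, for illustration):

```javascript
// The iteration count is Math.max(n, 5): it grows without bound as
// n grows, so this loop is O(n).
function sumUpToMax(n) {
  let sum = 0;
  for (let index = 0; index < Math.max(n, 5); index++) {
    sum += index;
  }
  return sum;
}

console.log(sumUpToMax(3));  // 10 (runs 5 times: 0+1+2+3+4)
console.log(sumUpToMax(10)); // 45 (runs 10 times: 0+1+...+9)
```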

But how about nested loops?

```javascript
for (let index = 0; index < n; index++) {
  for (let index2 = 0; index2 < n; index2++) {
    sum += index;
  }
}
```

With just the outer loop, we already know it is `O(n)`, but we also have an inner loop of `O(n)`. If we combine the two, the inner loop's `n` operations will be executed `n` times by the outer loop. In math, multiplying `n` by itself gives us an exponent: we would have n^2 operations here, or `O(n^2)`.

If it was a triple nested loop, it would've been `O(n^3)`.
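Here's a quick sketch of that triple-nested case; the counter just tallies how many times the innermost body runs (again, purely for illustration):

```javascript
// Three nested loops, each bounded by n, so the innermost body
// runs n * n * n = n^3 times: O(n^3).
function countInnermostRuns(n) {
  let count = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      for (let k = 0; k < n; k++) {
        count++;
      }
    }
  }
  return count;
}

console.log(countInnermostRuns(4)); // 64, i.e. 4^3
```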

By this time, you must have an idea of which function is the most performant... O(1) would be the most performant, as this is the algorithm wherein the operations execute a constant number of times regardless of the value of `n`, while O(n) is more performant than O(n^2).
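To get a feel for how quickly these grow apart, here's a rough comparison of the modeled operation counts; the exact constants don't matter, only the shape of the growth:

```javascript
// O(1) stays flat, O(n) grows linearly, O(n^2) explodes.
function growth(n) {
  return { constant: 1, linear: n, quadratic: n * n };
}

for (const n of [10, 100, 1000]) {
  const g = growth(n);
  console.log(`n = ${n}: O(1) -> ${g.constant}, O(n) -> ${g.linear}, O(n^2) -> ${g.quadratic}`);
}
```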

### Conclusion

And again, remember that with the Big O notation, we are looking at the bigger picture: how the function's number of operations depends on its parameter. **If we have around 1000 operations, but we are certain that it will only ever have 1000 operations, that function would have a complexity of O(1). Yes, it's not O(1000); constants like that get dropped, so we just write 1.**

If the function's number of operations depends on its parameters, wherein the maximum number of operations has no ceiling aside from the value of the parameter, that function's complexity would be O(n). If you have nested loops, just count the nesting depth, and you'll have the value of the exponent. That is, if there is a nested loop 3 levels deep, with the maximum number of iterations dependent on the value of the function parameter, we would have O(n^3).

Now, as a bonus, the ff. would still be O(1):

```javascript
function test() {
  let sum = 0;
  for (let index = 0; index < 5; index++) {
    sum += index;
  }
  for (let index2 = 0; index2 < 5; index2++) {
    sum += index2;
  }
  return sum;
}
```

Why? Well, since this function has a fixed number of operations. The two loops run one after the other rather than nested, so their counts add up instead of multiplying: 5 + 5 iterations is still a constant, no matter the input.