It’s been a while since I started thinking about going back to the basics and brushing up on core computer science concepts. And I thought, before jumping into the pool of heavyweight topics like data structures, operating systems, OOP, databases, and systems design (seriously, the list is endless), I should probably pick the topic we all kinda want to avoid: algorithm complexity analysis.

Yes! The concept that gets overlooked most of the time, because most of us developers are thinking, “Hmm, I probably won’t need to know that when I actually code!”

Well, I’m not sure if you’ve ever felt the need to understand how algorithm analysis actually works. But if you did, here’s my attempt to explain it in the clearest way possible. I hope it helps someone like me.

What is algorithm analysis anyway, and why do we need it?
Before diving into algorithmic complexity analysis, let’s first get a brief idea of what algorithm analysis is. Algorithm analysis deals with comparing algorithms based on the number of computing resources each algorithm uses.

What we want to gain from this exercise is the ability to make an informed decision about which algorithm wins in terms of efficient use of resources (time or memory, depending on the use case). What does that even mean?

Let’s take an example. Suppose we have a function product() that multiplies all elements of an array except the element at the current index, and returns the new array. If I pass [1,2,3,4,5] as input, I should get [120, 60, 40, 30, 24] as the result.
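The original code for this function isn’t shown here, so here’s a minimal sketch of what such a nested-loop product() could look like (the function name comes from the text; the implementation details are my assumption):

```python
def product(nums):
    """Return a new array where each element is the product of all
    the other elements in the input. Naive nested-loop version."""
    result = []
    for i in range(len(nums)):
        prod = 1
        for j in range(len(nums)):
            if i == j:
                # Skip the current index by multiplying by 1,
                # leaving the running product unchanged.
                prod *= 1
            else:
                prod *= nums[j]
        result.append(prod)
    return result

print(product([1, 2, 3, 4, 5]))  # [120, 60, 40, 30, 24]
```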

The above function uses two nested for loops to calculate the desired result. The outer loop picks the element at the current position. The inner loop multiplies every element of the array into a running product – except when the outer loop’s index matches the inner loop’s index. In that case, it simply multiplies by 1 to keep the product unchanged.

Are you able to follow? Great!

It’s a simple method that works well, but can we make it a little better? Can we modify it so that we don’t need two nested loops? Maybe by storing and reusing the results from each pass?

Let us consider the following method. In this revised version, the principle is that, for each element, we calculate the product of the values to its right and the product of the values to its left, and simply multiply those two values. Pretty neat, isn’t it?
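Again, the revised code isn’t shown in the text, so here’s one way the left-product/right-product idea could be sketched in two non-nested passes (variable names are my own):

```python
def product(nums):
    """Return a new array where each element is the product of all
    the other elements, using two sequential (non-nested) passes."""
    n = len(nums)
    result = [1] * n

    # First pass (left to right): result[i] ends up holding the
    # product of all elements to the LEFT of index i.
    left = 1
    for i in range(n):
        result[i] = left
        left *= nums[i]

    # Second pass (right to left): multiply in the product of all
    # elements to the RIGHT of index i.
    right = 1
    for i in range(n - 1, -1, -1):
        result[i] *= right
        right *= nums[i]

    return result

print(product([1, 2, 3, 4, 5]))  # [120, 60, 40, 30, 24]
```

Each pass visits every element once, so the total work grows linearly with the input size instead of quadratically.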

Here, instead of using nested loops to calculate the values on each run, we use two non-nested loops, which reduces the overall complexity from O(n²) to O(n) (we’ll get to what that notation means later).

We can safely guess that the latter algorithm performs better than the former. So far, so good? Excellent!

At this point, we can take a quick look at the different types of algorithm analysis that are out there. We don’t need to go into micro-level details; just a basic understanding of the technical jargon will do.

Depending on when the algorithm is analyzed (before implementation or after implementation), algorithm analysis can be divided into two types:

A Priori Analysis – As the name suggests, in a priori analysis (Latin for “from the earlier”), we analyze the space and time requirements of an algorithm before running it on any specific system. So basically, it’s a theoretical analysis of an algorithm. The efficiency of the algorithm is measured under the assumption that all other factors, for example processor speed, are constant and have no effect on the result.

A Posteriori Analysis – A posteriori analysis of an algorithm is done only after it has been run on a physical system. The selected algorithm is implemented in a programming language and executed on a target machine. The results directly depend on the system configuration and change from system to system.

In industry, we rarely do a posteriori analysis, as software is usually built for anonymous users who may run it on very different systems.

Since measured time and space can vary from system to system, a priori analysis is the most practical method for finding algorithm complexities. In a priori analysis, we look only at the asymptotic behavior of the algorithm (we’ll come to that later), which expresses complexity in terms of input size rather than system configuration.

Now that we have a basic understanding of algorithm analysis, we can move on to our main topic: algorithmic complexity. Keeping the scope of this post in mind, we’ll focus on a priori analysis. So let’s get started.