Hello there, welcome back. For today's discussion I will be talking about the very first thing we need to settle before diving deep into how easy Data Structures should be: Big O Notation. Let's think about it this way: it is not enough to write a piece of code that works and assume that is all there is to it. Trust me, after some time you will realize there might be much better ways to write that same code. On that note, when writing any code for production there are important factors to take note of:
- How fast or slow is this code as my application scales?
- How space efficient is this champion code I have put together?
- How easy is this code for other developers to read?
- A few other equally important things to consider; I hope you get the point. Thank you.
With those few points in mind, we need to know how to interpret these results in a standard manner, and that brings us to the definition of Big O Notation.
In my own words, Big O Notation describes the performance complexity of a function or piece of code in relation to speed, space, and readability. By performance I mean the time and memory a task takes based on its data input, because as the data grows we expect your code to be affected in both of the stated ways (time, space).
Big O Notation is considered the worst case scenario, or the upper bound of the running time of an algorithm, function, or piece of code. In that respect, the best case scenario is called Omega Notation (Ω-notation), which represents the lower bound of the running time of the algorithm. So what about the average case scenario? Don't worry, I will tell you: it's called Theta Notation (Θ-notation). All of the above are what we call Asymptotic Notations, which are the standard mathematical representations of the performance of one and the same function, code, or algorithm.
Mostly we tend to focus on the worst case scenario of your algorithm, because honestly no one cares when your application is performing at its best, but hahaha, we will all suddenly care when something goes wrong. So it's very important you factor in all these cases before you decide you're done and head to bed.
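To make those three notations concrete, here is a minimal linear search sketch (the function name and sample values are mine, just for illustration): searching for a value in a list is Ω(1) when you get lucky and it's the first element, and O(n) when it's the last one or missing entirely.

function linearSearch(list, target){
  for(let i = 0; i < list.length; i++){
    if(list[i] === target) return i; // found it, stop early
  }
  return -1; // not found at all
}

linearSearch([7, 2, 9], 7); // best case (Ω(1)): first element, one check
linearSearch([7, 2, 9], 5); // worst case (O(n)): missing, every element checked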
Now that we know our algorithms are affected by time and space, we might be tempted to say that if we write code that is fast enough, we should be done. Well, in most cases that is how it feels, but speed will definitely vary across different computers; even the same computer running the same code multiple times will report different operation times, so how much more across different sets of computers with different processing power and speed? Let's take a look at this in real code.
Consider the following problem: write a program that sums all the numbers from 1 up to and including a given number n.
This sounds simple, right? You can pause the reading and try your hand at it; as for me, I would like to continue, with your permission.
Alright, for this simple problem there are a lot of ways to solve it, but I am going to focus on two solutions for the comparison.
Solution 1
function sumAllNumbers(number){
  // Gauss' formula: 1 + 2 + ... + n equals n * (n + 1) / 2
  return number * (number + 1) / 2;
}
A simple console.log(sumAllNumbers(3)) should test the above code, but for clarity I would like to add a performance check to it to make it more comprehensive.
let t1 = performance.now();
console.log(sumAllNumbers(3)); // 1 + 2 + 3
let t2 = performance.now();
let timeTaken = (t2 - t1) / 1000; // performance.now() reports milliseconds
console.log(`Time Elapsed: ${timeTaken.toFixed(5)} seconds`);
Adding the above code to the first solution should give you the time taken for it to run. Now let's look at the next solution.
Solution 2
function sumAllNumbers(number){
  let total = 0;
  // add every number from 1 up to and including the input
  for(let i = 1; i <= number; i++){
    total += i;
  }
  return total;
}
Again, to see your code show some magic, you need to test it against the clock.
let t1 = performance.now();
console.log(sumAllNumbers(3)); // 1 + 2 + 3
let t2 = performance.now();
let timeTaken = (t2 - t1) / 1000;
console.log(`Time Elapsed: ${timeTaken.toFixed(5)} seconds`);
You may not see any significant change in the time, but try sumAllNumbers(1000000000) and, okay, some things begin to change. So, like we said earlier, different solutions will give different times, and running them on different hardware may give you different time frames too. You can now clearly see it's not about the exact time anymore as the data keeps growing. So rather than exact time, you determine the notation from the number of operations (+, -, *, =, / etc.) to know whether the algorithm is doing well or badly for the task, and we are going to learn how to determine that as well.
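If you would rather see those operation counts than the wall-clock time, one quick way (this counting variant is my own sketch, not part of the real solution) is to keep a tally inside the loop:

function sumAllNumbersCounted(number){
  let total = 0;
  let operations = 0; // our own tally of loop passes
  for(let i = 1; i <= number; i++){
    total += i;
    operations++; // one addition per pass
  }
  console.log(`${operations} operations for input ${number}`);
  return total;
}

sumAllNumbersCounted(3);       // logs: 3 operations for input 3
sumAllNumbersCounted(1000000); // logs: 1000000 operations for input 1000000

The number of operations grows in lockstep with the input, and that growth, not the milliseconds, is what the notation captures.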
By now you realize the first algorithm seems to be much faster than the second one, so how do we represent all this talk from above? Well, the standard form of Big O Notation is O(f(n)). Yes, you saw that right: the O represents the notation, f is the function written as a solution, and n is the input of the function, which, as I said earlier, determines how well or badly the solution will do as it grows in quantity, i.e. with scaling.
- f(n) = n is said to be Linear
- f(n) = n^2 is said to be Quadratic
- f(n) = 1 is said to be Constant
- f(n) could equally be something different from these three, but we will focus on these for now (see the quick side-by-side sketch after this list).
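As a quick side-by-side sketch of those three (the function names are mine for illustration):

// Constant, O(1): one operation no matter how big the array gets
function firstElement(array){
  return array[0];
}

// Linear, O(n): the work grows in step with the array length
function sumAll(array){
  let total = 0;
  for(let i = 0; i < array.length; i++){
    total += array[i];
  }
  return total;
}

// Quadratic, O(n^2): a loop inside a loop, covered in detail below
function allPairs(array){
  for(let i = 0; i < array.length; i++){
    for(let j = 0; j < array.length; j++){
      console.log(array[i], array[j]);
    }
  }
}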
Based on the two solutions: because the first one runs in about the same time no matter the input, with no significant change, we say its worst case scenario, or Big O Notation, is Constant, i.e. O(1). The operation runs only once; even if the input is large, it's always a single step.
And in the second solution the time taken is much more variable: the function has to go through the loop once for every step of the input, which means that with an input of about a billion, the function has to repeat that same work a billion times. That puts this algorithm's worst case scenario at Linear, i.e. O(n): the operation runs n times, where n is the input.
let’s take one more example which will be of O(n^2) or Quadratic Big O notation
Print all pairs from a given number n
function printAllPairs(number){
  for(let i = 0; i < number; i++){
    for(let j = 0; j < number; j++){
      console.log(i, j); // every combination of i and j
    }
  }
}

printAllPairs(3);
In this last case we have two for loops, but take a good look: one is inside the other, which squares the amount of work to do. If the second loop were outside instead, simply two loops one after the other, that would not be considered squared. So in this special case we have O(n^2), denoted the Quadratic worst case, or Big O Notation, scenario. The sketch below shows the difference.
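Here is a small sketch of the two arrangements side by side, so you can see why the nesting matters:

// Nested: for every i we rerun the entire j loop -> n * n = O(n^2)
function nestedLoops(n){
  for(let i = 0; i < n; i++){
    for(let j = 0; j < n; j++){
      console.log(i, j);
    }
  }
}

// Sequential: one loop after the other -> n + n = 2n, which simplifies to O(n)
function sequentialLoops(n){
  for(let i = 0; i < n; i++){
    console.log(i);
  }
  for(let j = 0; j < n; j++){
    console.log(j);
  }
}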
So far we’ve just been talking about Time complexity but what about the Space complexity then, remember they all form part of the Big O notation.
Big O Notation and Space Complexity
This is obviously about the amount of space a function or algorithm occupies over its life cycle. Our focus should be on the space taken by the algorithm itself, not the space taken by its inputs; this is what we call Auxiliary Space Complexity. I like to look at it as a mind-your-business concept, because here we care about the space taken by the algorithm, not the input variables, since we know those will definitely be affected as the data input grows. That said, let's note the following principles, and as we move along we will see how best to read space complexity (a small sketch follows the list):
- Most primitive types (booleans, numbers, undefined, null) are Constant, O(1), space
- Strings require O(n) space (where n is the string length)
- Reference types like arrays and objects are also O(n) (where n is the array length or the number of keys)
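Here is that promised sketch (the function names are mine): the first function only ever creates two variables, no matter how long the array is, while the second builds a new array that grows with the input.

// O(1) auxiliary space: just two numbers, regardless of the array length
function sumArray(array){
  let total = 0;
  for(let i = 0; i < array.length; i++){
    total += array[i];
  }
  return total;
}

// O(n) auxiliary space: the new array grows with the input
function doubleArray(array){
  let doubled = [];
  for(let i = 0; i < array.length; i++){
    doubled.push(2 * array[i]);
  }
  return doubled;
}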
Logarithms
You remember I was saying the main three Big O Notations are not the only ones we have? Well, this is the time we talk about another family, and it is built on logarithms, written x = log_b(n), which are the inverse of exponentiation. Please don't ask me what that is. When we say exponents we refer to raising a base to a power, a square, a cube, and so on, i.e. b^x = n, for example 2^3 = 8. The inverse of that should ideally bring us back down to the exponent for a given base, so in that representation we can say log_2(8) = 3. Kindly read more on this mathematics to understand it better. Hint: Google it.
So, technically, we are saying the base-2 logarithm of a number roughly measures the number of times you can divide that number by 2 before you get a value that's less than or equal to one.
log_2(8) = 3 ———> 2^3 = 8
log_2(value) = exponent ———> 2^exponent = value
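To tie that definition back to code, here is a tiny helper of my own that literally counts the halvings:

function countHalvings(value){
  let steps = 0;
  while(value > 1){
    value = value / 2; // halve the number
    steps++;           // and count one division
  }
  return steps;
}

console.log(countHalvings(8)); // 3, because 8 -> 4 -> 2 -> 1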
Logarithmic time complexity is a great one, of course: O(log n) sits between O(1) and O(n), and O(n log n) sits between O(n) and O(n^2).
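The classic example of O(log n) running time is binary search on a sorted array, since every comparison throws away half of what is left. A minimal sketch:

function binarySearch(sortedArray, target){
  let low = 0;
  let high = sortedArray.length - 1;
  while(low <= high){
    let mid = Math.floor((low + high) / 2);
    if(sortedArray[mid] === target) return mid; // found it
    if(sortedArray[mid] < target) low = mid + 1; // discard the left half
    else high = mid - 1;                         // discard the right half
  }
  return -1; // not in the array
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // 3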
Below is a chart that can help you better understand and picture the Big O Notations.
The end. Let's rest for now. Remember, in this business we go to bed not when we're tired but when we're done.