Complexity of an Algorithm

The Space and Time complexity of an algorithm can be used to determine its effectiveness. While most of us know that there are various ways to solve any problem in programming, understanding how efficiently an algorithm works adds value to your programming. Evaluating a program or algorithm with the help of Time and Space Complexity helps us decide whether it will perform optimally under given conditions. Consequently, we become more efficient programmers.

Understanding the Algorithmic Complexity

Algorithmic Complexity is a measure of how long an algorithm would take to complete given an input of size n. If an algorithm needs to scale, it should compute the output within a finite and practical time bound, even for large values of n. For this reason, complexity is determined asymptotically as n approaches infinity. While complexity is generally stated in terms of time, it is sometimes also evaluated in terms of space, which describes the algorithm's memory requirements.

Analysis of the complexity of an algorithm is practical when comparing algorithms or seeking improvements. Algorithmic complexity lies within a branch of theoretical computer science called computational complexity theory. It is significant to state that we are concerned with the order of the complexity of an algorithm, not the actual execution time in terms of milliseconds.

Algorithmic Complexity is also known as Complexity or Running Time.

What is the significance of Time Complexity?

First of all, let us understand what defines an algorithm.

In Computer Programming, an Algorithm is a finite sequence of well-defined instructions, typically executed on a computer, to solve a class of problems or perform a computation. By this definition, a sequence of defined instructions must be given to the computer for it to execute the algorithm and perform the activity. Moreover, since we are free to choose any of the available programming languages, the instructions can take any syntax, subject to the performance boundaries of the selected language. We also specified that the algorithm runs on a computer, which introduces further variation: the operating system, processor, hardware, and more can all influence the manner in which an algorithm performs.

Now that we know various factors can influence the outcome of executing an algorithm, it is wise to understand how efficiently such programs perform an activity. To assess this, we calculate both the Space and Time complexity of an algorithm.

The Space Complexity of an algorithm is a measure of the amount of space or memory the algorithm occupies during execution, as a function of the length of the input. Similarly, the Time Complexity of an algorithm is a measure of the amount of time the algorithm takes to execute, as a function of the length of the input. Now that we know the significance of Time Complexity, it is time to understand what it is and how we can evaluate it.

In short, Time Complexity measures the time taken to execute each statement of code in an algorithm. If a statement is set to run repeatedly N times, then the total time taken equals N multiplied by the time required for a single execution of that statement.

Let us consider the following codes to print a statement in Python:

Code 1:

print("This is a simple statement.")

Output:

This is a simple statement.

Code 2:

for i in range(10):
    print("This is a simple statement.")

Output:

This is a simple statement.
This is a simple statement.
This is a simple statement.
This is a simple statement.
This is a simple statement.
This is a simple statement.
This is a simple statement.
This is a simple statement.
This is a simple statement.
This is a simple statement.

The first algorithm prints the statement only once; the time taken for its execution was measured as close to 0 nanoseconds. The second algorithm prints the same statement, but this time it executes inside a FOR loop 10 times. Here, the time taken to execute both lines of code, the FOR loop and the print statement, was 5 milliseconds. Moreover, the time taken increases as the value of N increases, since the statement gets executed N times.

Note: The above code was executed in a Python Jupyter Notebook on a Windows 11 64-bit Operating System with an Intel(R) Core(TM) i5-10300H CPU @ 2.50GHz. The measured time can vary with different hardware, a different Operating System, or a different programming language.

By now, we can conclude that whenever an algorithm uses statements that get executed only once, it will always need the same amount of time. When a statement is inside a loop, the time required increases with the number of times the loop executes. And whenever an algorithm combines single statements with loop statements or nested loop statements, the time increases in proportion to the number of times each statement gets executed.
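One way to observe this behavior on your own machine is to time the statements directly with Python's time.perf_counter(). The sketch below (run_loop is a hypothetical helper added here for illustration) compares a single statement against a loop; the absolute numbers will differ across hardware and operating systems:

```python
import time

def run_loop(n):
    # a statement executed n times inside a loop
    total = 0
    for i in range(n):
        total += i
    return total

# timing a single statement versus a loop; absolute values vary by machine
start = time.perf_counter()
print("This is a simple statement.")
single_time = time.perf_counter() - start

start = time.perf_counter()
run_loop(1_000_000)
loop_time = time.perf_counter() - start

print(f"Single statement: {single_time:.6f} s, loop of 1,000,000: {loop_time:.6f} s")
```

As N grows, the loop's time grows with it, while the single statement's time stays roughly constant.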

Now that we understand the overall concept of time complexity and its significance, it is time to understand how we can determine the relationship between the input and time, given a statement in an algorithm. In order to define the relationship between the two, we are going to learn about Big-O Notation.

What is Big-O Notation?

  • In theoretical terms, Big-O notation is used to examine the performance/complexity of an algorithm.
  • Big-O notation describes the upper bound of the performance of an algorithm, i.e., the worst-case scenario.
  • Big-O notation also considers asymptotic behavior, which means the performance of the algorithm when the size of the input becomes extremely large.
  • The computational complexity, O(f), measures the order of the resources employed (CPU time, RAM, and more) as a function of the magnitude of the input data.
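To make the "order, not milliseconds" idea concrete, here is a small sketch: an algorithm performing 3n + 2 basic operations is still O(n), because constant factors and lower-order terms are dropped as n grows (count_ops is a hypothetical helper for illustration):

```python
def count_ops(n):
    # models an algorithm performing 3n + 2 basic operations
    ops = 2               # two setup statements
    for _ in range(n):
        ops += 3          # three statements per iteration
    return ops

# doubling the input roughly doubles the work: the hallmark of O(n)
print(count_ops(1000))   # 3002
print(count_ops(2000))   # 6002
```

The ratio of work for doubled input approaches exactly 2 as n grows, which is why the 3 and the 2 disappear in the O(n) notation.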

Now, let us start exploring the different types of Time Complexities.

What are the Different Types of Time Complexities?

There are different types of time complexities used in Algorithms as shown below:

  1. Constant Time Complexity
  2. Linear Time Complexity
  3. Logarithmic Time Complexity
  4. Quadratic Time Complexity
  5. Cubic Time Complexity
  6. Exponential Time Complexity
  7. Quasilinear Time Complexity
  8. Factorial Time Complexity

and many more complex notations exist, depending on the type of function involved.

Let us now discuss some of the time complexities in detail.

Constant Time Complexity - O (1)

The time complexity of an algorithm is said to be constant when the algorithm is not dependent on the input size n. The runtime of such an algorithm remains the same irrespective of the input size n. This time complexity is denoted as O (1).

Let us consider the following snippet of code illustrating the same.

Example:

# defining a function to return the first element of the array
def getFirstElement(list):
    return list[0]

# main function
if __name__ == '__main__':
    # initializing the arrays
    ArrayOne = [10, 12, 34, 4, 16, 23, 69, 13, 8, 22]
    ArrayTwo = [13, 41, 20, 64, 13, 50, 66]

    # calling the function and printing the first element of the array
    print("First Element of the First Array is :", getFirstElement(ArrayOne))
    print("First Element of the Second Array is :", getFirstElement(ArrayTwo))

Output:

First Element of the First Array is : 10
First Element of the Second Array is : 13

Explanation:

In the above snippet of code, we have defined a function as getFirstElement() that accepts a list as an argument. In this function, we have returned the first data element of the list. For the main function, we have initialized two arrays with 10 elements and 7 elements, respectively, and called the getFirstElement() function by passing the initialized arrays to it and printing the first element of both arrays.

The above code shows that irrespective of the length of the array (n), the runtime to get the first element is the same. If the run time is considered as 1 unit of time, then it takes only 1 unit to fetch the first element, regardless of the array's length. Thus, the function comes under constant time with order O (1).

Linear Time Complexity - O (n)

The time complexity of an algorithm is said to be linear when the run time of the algorithm increases linearly with the length of the input. This statement implies that whenever a function has an iteration that iterates over an input size of n, the algorithm will have a time complexity of order O (n).

Let us consider the following snippet of code illustrating the same.

Example:

# defining a function to print the elements of the array
def printElements(list):
    # using the FOR loop to iterate through the list
    for value in list:
        # printing the elements
        print(value)

# main function
if __name__ == '__main__':
    # initializing the array
    myArray = [10, 12, 34, 4, 16, 23, 69, 13, 8, 22]

    # calling the function and printing all elements of the array
    print("Elements of the Array are : ")
    printElements(myArray)

Output:

Elements of the Array are :
10
12
34
4
16
23
69
13
8
22

Explanation:

In the above snippet of code, we have defined a function as printElements() that accepts a list as an argument. In this function, we have used the FOR loop to iterate through the list and print the data element of the list. For the main function, we have initialized an array with 10 elements. We then printed a statement for the users and called the printElements() function by passing the initialized array to it to print the elements of an array.

The above code shows that the run time will increase linearly on the basis of the length of the array (n). If the run time is considered 1 unit of time, then it will take n times 1 unit to process the array. Thus, the function runs linearly with the input size, which comes with order O (n).

Logarithmic Time Complexity - O (log n)

The time complexity of an algorithm is said to be logarithmic if the algorithm reduces the size of the input data in each step. This implies that the number of operations performed is not proportional to the input size; instead, it grows much more slowly, since each step cuts down the remaining data. This time complexity is denoted as O (log n).

Algorithms having such time complexity are generally found in operations on binary trees and in binary search functions. Such an algorithm searches for a given value in an array by repeatedly splitting the array into two halves and continuing the search in one of them. This ensures that the operation is not performed on every data element of the list.

Let us consider the following snippet of code illustrating the same.

Example:

# defining a function to search a value in data using the binary search
def binarySearch(list, search_value):
    # finding the size of the list using the len() function
    list_size = len(list)
    # initializing the left and right pointers
    left_pointer = 0
    right_pointer = list_size - 1

    # using the while loop to iterate through the array from left pointer to right pointer
    while left_pointer <= right_pointer:
        # defining the middle pointer
        middle_pointer = (left_pointer + right_pointer) // 2

        # if the value to be searched is less than the value of the middle pointer, updating the right pointer to one less than the middle pointer
        if search_value < list[middle_pointer]:
            right_pointer = middle_pointer - 1

        # if the value to be searched is greater than the value of the middle pointer, updating the left pointer to one more than the middle pointer
        elif search_value > list[middle_pointer]:
            left_pointer = middle_pointer + 1

        # if the value to be searched is equal to the value of the middle pointer, returning the middle_pointer
        else:
            return middle_pointer

    # if the value is not found in the list, raising a ValueError
    raise ValueError('Value is not found in the list.')

# main function
if __name__ == '__main__':
    # initializing the list
    myArray = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    # initializing the value to be searched
    value_to_be_searched = 9
    # calling the binarySearch() function and storing the result
    returned_index_value = binarySearch(myArray, value_to_be_searched)

    # printing the result for the users
    print("The index value of", value_to_be_searched, "in", myArray, "=", returned_index_value)

Output:

The index value of 9 in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] = 8

Explanation:

In the above snippet of code, we have defined a function as binarySearch() to search for a value using the binary search algorithm. This function accepts two arguments: the list and the target value to be searched in the list. Inside this function, we determine the size of the list and initialize two pointers, left_pointer and right_pointer. We then use a while loop to iterate through the array from the left pointer to the right pointer. Within this loop, we define the middle pointer as the midpoint between the left and right pointers. If the target value is less than the middle value, we shift the right pointer to one less than the middle pointer; if it is greater, we shift the left pointer to one more than the middle pointer; and if it equals the middle value, we return the middle pointer as the resultant index. We raise a ValueError if the target value is not in the list. For the main function, we initialize the array and the value to be searched, call the binarySearch() function with these two arguments, store the result in a variable, and finally print the result for the users.

The above code shows that the run time will be logarithmic as the size of the input data is reduced in each step. Thus, the function runs in logarithmic time with respect to the input data, which comes with order O (log n).
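We can also verify this logarithmic behavior by counting the worst-case halving steps for various input sizes and comparing them to log2(n). The helper below, binary_search_steps(), is a hypothetical function added for illustration:

```python
import math

def binary_search_steps(n):
    # counts the worst-case number of halvings for a list of n elements
    steps = 0
    remaining = n
    while remaining > 0:
        remaining //= 2   # each comparison halves the search space
        steps += 1
    return steps

for n in [10, 100, 1000, 1_000_000]:
    print(f"n = {n}: about {binary_search_steps(n)} steps (log2(n) ~ {math.log2(n):.1f})")
```

Notice that a million-element list needs only about 20 halvings, which is why logarithmic algorithms scale so well.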

Quadratic Time Complexity - O (n²)

The time complexity of an algorithm is said to be non-linear or quadratic when the run time of that algorithm increases quadratically (n²) with respect to the length of the input. Nested loops, in general, fall under this order: one loop takes O (n), and if the function involves a loop within a loop, then it goes like O (n) * O (n) = O (n²).

Similarly, if there are 'm' nested loops defined in the function, then the order is given by O (nᵐ); such functions are known as polynomial time complexity functions.
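A minimal sketch of how nested loops produce polynomial orders: the inner statement below runs n * n times for an input of size n (count_pairs is a hypothetical helper for illustration):

```python
def count_pairs(data):
    # two nested loops over the same input: the inner body runs n * n times
    count = 0
    for a in data:
        for b in data:
            count += 1
    return count

print(count_pairs(range(10)))    # 100 = 10^2
print(count_pairs(range(20)))    # 400 = 20^2
```

Doubling the input quadruples the work, which is the signature of O (n²).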

Bubble Sort is a good example of quadratic time complexity: each value in the list must be compared to all the other values in the list. Let us consider the following example illustrating the same.

Example:

# defining a function to sort the list in ascending order using the bubble sort
def bubbleSort(list):
    # defining a flag
    swappedFlag = True
    # using the while loop to iterate through the list
    while swappedFlag:
        # updating the flag value to False
        swappedFlag = False
        # using the for loop to iterate through the list and swapping the positions of the elements to sort them in ascending order
        for i in range(len(list) - 1):
            if list[i] > list[i + 1]:
                list[i], list[i + 1] = list[i + 1], list[i]
                swappedFlag = True

# main function
if __name__ == '__main__':
    # initializing a sample array
    myArray = [12, 21, 17, 16, 14, 18, 15, 13, 42, 10]

    # printing the initial array for the users
    print("The Elements of Array before Sorting :", myArray)

    # calling the bubbleSort() function to sort the given array
    bubbleSort(myArray)

    # printing the final sorted array for the users
    print("The Elements of Array after Sorting :", myArray)

Output:

The Elements of Array before Sorting : [12, 21, 17, 16, 14, 18, 15, 13, 42, 10]
The Elements of Array after Sorting : [10, 12, 13, 14, 15, 16, 17, 18, 21, 42]

Explanation:

In the above snippet of code, we have defined a function as bubbleSort() that accepts a list as an argument to sort that given list in ascending order using the Bubble Sort method. We have defined a flag inside this function and initially set it to True. We then used the WHILE loop to iterate through the elements in the array until the flag becomes False. Inside this loop, we have updated the initial value of the flag and set it to False. We then used the FOR loop to iterate again through the array to compare the currently selected element to the other elements in the array and swap their position if the current element is greater than the succeeding element. We then set the flag to True. We have initialized the array to be sorted in ascending order for the main function and printed it for the users. We then called the bubbleSort() function, passed the given array as its argument, and printed the final array for the users.

The above code shows that the run time will be quadratic because of the nested loops used to sort the elements of the array. Thus, the function runs in quadratic time with respect to the input data, which comes with order O (n²).

Exponential Time Complexity - O (2ⁿ)

The time complexity of an algorithm is said to be exponential when the work doubles with each addition to the input data set. This kind of time complexity is generally observed in brute-force algorithms. It is denoted as O (2ⁿ).

In cryptography, a brute-force attack may systematically check all possible characters of a password by iterating through the candidate combinations. Because this is an exponential algorithm, it becomes incredibly resource-expensive to brute-force a long password compared to a shorter one. This is one of the reasons a long password is considered more secure than a short one.
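The following sketch, assuming a lowercase-only alphabet and a hypothetical helper brute_force(), shows why each extra character multiplies the attacker's work by the alphabet size:

```python
import itertools
import string

def brute_force(target, alphabet=string.ascii_lowercase):
    # tries every combination of increasing length until the target is found
    attempts = 0
    for length in range(1, len(target) + 1):
        for guess in itertools.product(alphabet, repeat=length):
            attempts += 1
            if ''.join(guess) == target:
                return attempts
    return None

# each extra character multiplies the search space by 26
print(brute_force("zz"))   # 702: all 26 one-character guesses + 26*26 two-character guesses
```

For a 10-character lowercase password, the search space is already 26¹⁰, which is why exponential cost protects long passwords.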

One of the examples of exponential time complexity can be the recursive calculation of Fibonacci Numbers. Let us consider the following snippet of code illustrating the same:

Example:

# defining a function to return Fibonacci number using recursion
def fibNum(n):
    # returning n if the value of n is less than or equal to 1
    if n <= 1:
        return n
    # recursive call
    return fibNum(n - 1) + fibNum(n - 2)

# main function
if __name__ == '__main__':
    # printing the statement for the users
    print("Fibonacci Series (First Ten Numbers) : ")
    # printing the first ten numbers of Fibonacci series
    for i in range(10):
        print(fibNum(i))

Output:

Fibonacci Series (First Ten Numbers) :
0
1
1
2
3
5
8
13
21
34

Explanation:

In the above snippet of code, we have defined a function as fibNum() that accepts n as an argument to return the Fibonacci number of the given place value. Inside this recursive function, we have returned n if the value of n is less than or equal to 1. We have then set up a recursive call to the function, adding the previous two values of the Fibonacci series. For the main function, we printed a statement and then used the FOR loop to print the first ten Fibonacci numbers by calling the fibNum() function for i times.

The above code shows that the run time will be exponential because each call to fibNum() spawns two further recursive calls. Thus, the function runs in exponential time with respect to the input data, which comes with order O (2ⁿ).

Quasilinear Time Complexity - O (n log n)

The time complexity of an algorithm is said to be quasilinear when each of the n operations on the input data has a logarithmic time complexity. This type of time complexity is generally observed in sorting algorithms. Some examples of such algorithms are mergesort, timsort, and heapsort. This quasilinear time complexity is denoted as O (n log n).

Let us consider an example illustrating the implementation of the mergesort algorithm, which has quasilinear time complexity:

Example:

# defining a function to sort the list using the mergesort algorithm
def mergeSort(list):
    # if the list size is less than or equal to 1, return
    if len(list) <= 1:
        return

    # defining a middle point of the list
    middlePoint = len(list) // 2

    # separating the list elements to the left and right side of the middle point
    leftData = list[:middlePoint]
    rightData = list[middlePoint:]

    # calling the mergeSort() function for left side data and right side data separately
    mergeSort(leftData)
    mergeSort(rightData)

    # initializing the left, right and list index
    leftIndex = 0
    rightIndex = 0
    listIndex = 0

    # using the WHILE loop to merge the two sorted halves back in ascending order
    while leftIndex < len(leftData) and rightIndex < len(rightData):
        if leftData[leftIndex] < rightData[rightIndex]:
            list[listIndex] = leftData[leftIndex]
            leftIndex += 1
        else:
            list[listIndex] = rightData[rightIndex]
            rightIndex += 1
        listIndex += 1

    # appending any remaining elements of the left or right half
    if leftIndex < len(leftData):
        del list[listIndex:]
        list += leftData[leftIndex:]
    elif rightIndex < len(rightData):
        del list[listIndex:]
        list += rightData[rightIndex:]

# main function
if __name__ == '__main__':
    # initializing an unsorted array
    myArray = [29, 11, 7, 16, 52, 38, 33, 23, 14, 40]
    # printing the unsorted array for the users
    print("Unsorted Array :", myArray)
    # calling the mergeSort() function for the given array
    mergeSort(myArray)
    # printing the sorted array as the result for the users
    print("Sorted Array (After Performing Merge Sort) :", myArray)

Output:

Unsorted Array : [29, 11, 7, 16, 52, 38, 33, 23, 14, 40]
Sorted Array (After Performing Merge Sort) : [7, 11, 14, 16, 23, 29, 33, 38, 40, 52]

Explanation:

In the above snippet of code, we have defined a function as mergeSort() that accepts an argument as a list to sort it using the merge sort algorithm. Within this function, we have used the if conditional statement to check if the size of the list is less than or equal to 1 and returned. We have then defined the middle pointer for the list and initialized two lists containing the elements to the left and right sides of the middle pointer. We then called the mergeSort() function twice and passed the lists containing the left-side elements and right-side elements. We have then initialized the left, right, and list index as 0 and used the WHILE loop to iterate through the lists and sort them accordingly. For the main function, we have initialized an unsorted array consisting of ten elements and printed it for the users. We then called the mergeSort() and passed the given array as an argument. At last, we have printed the results for the users.

The above code shows that the run time will be quasilinear: the list is halved at each level of recursion, giving about log n levels, and each level performs linear merging work. Thus, the function runs in quasilinear time with respect to the input data, which comes with order O (n log n).
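A rough cost model, assuming power-of-two input sizes for simplicity, shows why merge sort's total work is n log n: each of the ~log2(n) recursion levels merges all n elements once (merge_sort_work is a hypothetical cost model for illustration, not the sorter itself):

```python
import math

def merge_sort_work(n):
    # models the element moves: n per level, over about log2(n) levels
    if n <= 1:
        return 0
    half = n // 2
    return n + merge_sort_work(half) + merge_sort_work(n - half)

for n in [8, 64, 1024]:
    print(f"n = {n}: {merge_sort_work(n)} moves, n*log2(n) = {int(n * math.log2(n))}")
```

For power-of-two sizes the model matches n*log2(n) exactly, confirming the O (n log n) order.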

Factorial Time Complexity - O (n!)

The time complexity of an algorithm is said to be factorial if the algorithm grows in a factorial manner on the basis of the size of the input data. For example,

2! = 2 × 1 = 2
3! = 3 × 2 × 1 = 6
4! = 4 × 3 × 2 × 1 = 24
5! = 5 × 4 × 3 × 2 × 1 = 120
6! = 6 × 5 × 4 × 3 × 2 × 1 = 720
7! = 7 × 6 × 5 × 4 × 3 × 2 × 1 = 5,040
8! = 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 = 40,320
9! = 9 × 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 = 362,880
10! = 10 × 9 × 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 = 3,628,800

As we can observe, the growth rate of the above input is very fast, even if its size is small.
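Python's math.factorial() makes it easy to compare this growth against the exponential case; even at n = 20, the factorial dwarfs 2ⁿ:

```python
import math

# comparing exponential growth (2**n) with factorial growth (n!)
for n in [5, 10, 15, 20]:
    print(f"n = {n}: 2^n = {2 ** n}, n! = {math.factorial(n)}")
```

At n = 20, 2ⁿ is about one million while n! already exceeds two quintillion, which is why factorial-time algorithms are only feasible for very small inputs.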

A good example of an algorithm having factorial time complexity is Heap's algorithm, which is used to generate all the possible permutations of n objects.

Heap found a systematic method for selecting, at each step, a pair of data elements to swap in order to generate every possible permutation of these data elements exactly once.

Let us consider the following example illustrating the implementation of the Heap algorithm to generate all the possible permutations of the given data.

Example:

# defining a function to implement Heap's algorithm
def heapPermutation(list, size):
    # if the size of the list is equal to one
    if size == 1:
        # printing the list and returning
        print(list)
        return

    # iterating through the size of the list to generate all the possible permutations
    for i in range(size):
        # calling the heapPermutation() function recursively
        heapPermutation(list, size - 1)

        if size % 2 == 0:
            list[i], list[size - 1] = list[size - 1], list[i]
        else:
            list[0], list[size - 1] = list[size - 1], list[0]

# main function
if __name__ == '__main__':
    # initializing the array
    myArray = [1, 2, 3, 4]

    print("The Possible Permutations of", myArray, "using the Heap Algorithm are as follows:")
    # calling the heapPermutation() function to generate all the possible permutations for the given array
    heapPermutation(myArray, len(myArray))

Output:

The Possible Permutations of [1, 2, 3, 4] using the Heap Algorithm are as follows:
[1, 2, 3, 4]
[2, 1, 3, 4]
[3, 1, 2, 4]
[1, 3, 2, 4]
[2, 3, 1, 4]
[3, 2, 1, 4]
[4, 2, 3, 1]
[2, 4, 3, 1]
[3, 4, 2, 1]
[4, 3, 2, 1]
[2, 3, 4, 1]
[3, 2, 4, 1]
[4, 1, 3, 2]
[1, 4, 3, 2]
[3, 4, 1, 2]
[4, 3, 1, 2]
[1, 3, 4, 2]
[3, 1, 4, 2]
[4, 1, 2, 3]
[1, 4, 2, 3]
[2, 4, 1, 3]
[4, 2, 1, 3]
[1, 2, 4, 3]
[2, 1, 4, 3]

Explanation:

In the above snippet of code, we have defined a function as heapPermutation() that accepts two arguments - the list and its size. Within this function, we have checked if the size of the list is equal to one and printed the list for the user. After that, we iterated through the size of the list and generated all the possible permutations using the heap algorithm. For the main function, we have initialized an array and printed a statement for the users. We then called the heapPermutation() function and passed the initialized array and its size as its arguments.

Note that the number of permutations, and hence the work, grows in a factorial manner with the size of the input data. Therefore, we can conclude that the above algorithm has a factorial time complexity of O (n!).

