In the previous section, we learned about time estimation based on algorithm complexity, where complexity is measured through asymptotic functions. In this section, we rank the common time complexities from best to worst.
Knowing the order of algorithm complexity is important because it lets us analyze, evaluate, and design better algorithms.
Rank 1: Constant Time Complexity
We should say that the best algorithms have time complexity O(1), a constant. This means the running time is either independent of the input size or bounded above by some constant. Nothing is meaningfully faster than O(1), because O(1/n) is essentially the same as O(1) under the definition of being bounded above by a constant. Examples of operations (baby steps) with constant time complexity are listed below; a short code sketch follows the list:
- Assignment to a variable
- Procedure entry
- Procedure exit
- Return value from a function (return x;)
- Accessing Array Index (float x = Arr[1000];)
- Determining if a number is even or odd
- Swapping the values of two variables
- Accessing a HashMap value (average case)
- Inserting a node at the head of a Linked List
- Pushing onto and popping from a Stack
- Insertion and removal from a Queue
- Finding the parent or left/right child of a node in a tree stored in an Array
- Jumping to the next/previous element, or deleting a given element, in a Doubly Linked List
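As a minimal sketch (the variable and function names here are ours, for illustration only), a few of these constant-time baby steps look like this in C++; each statement costs the same no matter how much data the program holds elsewhere:

```cpp
#include <list>
#include <stack>
#include <unordered_map>

int main() {
    float arr[2000] = {0};
    float x = arr[1000];               // array indexing: one address computation, O(1)

    std::unordered_map<int, int> table{{7, 42}};
    int v = table.at(7);               // hash map lookup: O(1) on average

    std::list<int> lst{1, 2, 3};
    lst.push_front(0);                 // insertion at the head of a linked list: O(1)

    std::stack<int> st;
    st.push(v);                        // push on a stack: O(1)
    st.pop();                          // pop from a stack: O(1)

    (void)x;                           // silence unused-variable warnings
    return 0;
}
```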
Rank 2: Double Logarithmic
Based on the limited list of asymptotic functions in the previous section, the second best running time complexity in our list is O(log(log(n))), double-logarithmic. Examples of algorithms and data structures whose time complexity is double-logarithmic, or carries a log(log(n)) factor, are:
- Cheriton-Tarjan MST algorithm
- X-fast trie and Y-fast trie: data structures for storing integers from a bounded domain
- Tango tree algorithm
- Van Emde Boas tree
- multisplay tree data structure
- Hopcroft-Fortune closest pair of points algorithm
- Interpolation search (average case on uniformly distributed data; see the sketch after this list)
See also: what causes double-log complexity
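A minimal C++ sketch of interpolation search, the simplest entry above (the function name is ours): instead of always probing the middle as binary search does, it probes where the key would sit if the values were spread uniformly, which is what yields the O(log(log(n))) average on uniform data:

```cpp
#include <vector>

// Returns the index of key in the sorted vector a, or -1 if absent.
// Average O(log log n) on uniformly distributed keys; worst case O(n).
int interpolationSearch(const std::vector<int>& a, int key) {
    int lo = 0, hi = (int)a.size() - 1;
    while (lo <= hi && key >= a[lo] && key <= a[hi]) {
        if (a[hi] == a[lo])                    // all remaining values equal
            return (a[lo] == key) ? lo : -1;
        // Probe position estimated by linear interpolation between endpoints.
        int pos = lo + (int)((long long)(key - a[lo]) * (hi - lo)
                             / (a[hi] - a[lo]));
        if (a[pos] == key) return pos;
        if (a[pos] < key) lo = pos + 1;
        else              hi = pos - 1;
    }
    return -1;                                 // not found
}
```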
Rank 3: Logarithmic
Next, the third best running time complexity is O(log(n)), logarithmic. Examples of algorithms with logarithmic time complexity are:
- Binary Search (see the sketch after this list)
- Search, insert, and delete in a balanced binary search tree (BST)
- Calculating Fibonacci numbers by matrix exponentiation (fast doubling)
- Fermat primality test (O(log(n)) modular multiplications per witness)
- Manually looking up a name in a phone book
See also: what causes log n complexity; what O(log(n)) means exactly
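A minimal C++ sketch of binary search on a sorted array: each comparison discards half of the remaining range, so at most about log2(n) comparisons are needed:

```cpp
#include <vector>

// Returns the index of key in the sorted vector a, or -1 if absent.
int binarySearch(const std::vector<int>& a, int key) {
    int lo = 0, hi = (int)a.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;    // midpoint, computed without overflow
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;  // key can only be in the upper half
        else              hi = mid - 1;  // key can only be in the lower half
    }
    return -1;                           // not found
}
```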
Rank 4: Square Root
The next best running time complexity is O(sqrt(n)), square root. Examples of algorithms with square-root time complexity are:
- Grover's algorithm for searching an unsorted database
- Point updates and range-sum queries via sqrt decomposition
- Baby-step giant-step algorithm
- Orthogonal range queries on a bidimensional k-d tree
- Range minimum/maximum query (RMQ) via sqrt decomposition
- Primality testing by trial division up to sqrt(n) (see the sketch after this list)
- Prime factorization by trial division
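A minimal C++ sketch of trial-division primality testing: any composite n has a factor no larger than sqrt(n), since factors pair up around sqrt(n), so the loop needs only O(sqrt(n)) iterations:

```cpp
// Returns true if n is prime, testing divisors only up to sqrt(n).
bool isPrime(long long n) {
    if (n < 2) return false;
    for (long long d = 2; d * d <= n; ++d)  // O(sqrt(n)) iterations
        if (n % d == 0) return false;       // found a factor: composite
    return true;
}
```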
Rank 5: Linear
The next best algorithms run in linear time, O(n). This means the running time grows in direct proportion to the input size. Linear time complexity typically corresponds to a single For-Next loop over the data. Examples of algorithms with linear complexity are:
- Traversing an array in one For-Next loop
- Traversing a linked list
- Boyer–Moore–Horspool algorithm
- Linear Search
- Deletion of a specific element in a Linked List (Not sorted)
- Comparing two strings
- Checking for a Palindrome (see the sketch after this list)
- Counting/Bucket Sort
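A minimal C++ sketch of the palindrome check: two indices walk toward each other, so each character is examined at most once, a single O(n) pass:

```cpp
#include <string>

// Returns true if s reads the same forwards and backwards.
bool isPalindrome(const std::string& s) {
    int i = 0, j = (int)s.size() - 1;
    while (i < j) {                    // one pass over the string: O(n)
        if (s[i] != s[j]) return false;
        ++i;
        --j;
    }
    return true;
}
```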
Rank 6: Log-Linear
The next best running time is O(n log(n)), log-linear. Divide-and-conquer algorithms, often obtained by optimizing an O(n^2) approach, typically produce this running time. Examples of algorithms with log-linear complexity (the first is sketched after the note below) are:
- Merge Sort
- Heap Sort
- Quick Sort (average case)
- Fast Fourier Transform (FFT)
Note: there are boundaries (crossover points) at which one algorithm overtakes another. For example, the Schönhage–Strassen algorithm for multiplying large integers runs in O(n log(n) log(log(n))) time; although this grows far more slowly than the O(n^2) schoolbook method or Karatsuba's O(n^1.585), its large constant factors mean it only wins in practice once the numbers are very large (tens of thousands of digits). If your data size is below the crossover point, the asymptotically slower algorithm may be the better choice.
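A minimal C++ sketch of merge sort, the classic divide-and-conquer example: the recursion is about log2(n) levels deep, and each level does O(n) merging work, giving O(n log(n)) overall. The interval is half-open, [lo, hi):

```cpp
#include <algorithm>
#include <vector>

// Sorts a[lo, hi). Call as mergeSort(a, 0, (int)a.size()).
void mergeSort(std::vector<int>& a, int lo, int hi) {
    if (hi - lo <= 1) return;              // 0 or 1 element: already sorted
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, lo, mid);                 // sort the left half
    mergeSort(a, mid, hi);                 // sort the right half

    std::vector<int> merged;               // merge the two sorted halves: O(n)
    merged.reserve(hi - lo);
    int i = lo, j = mid;
    while (i < mid && j < hi) merged.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i < mid) merged.push_back(a[i++]);
    while (j < hi)  merged.push_back(a[j++]);
    std::copy(merged.begin(), merged.end(), a.begin() + lo);
}
```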
Rank 7: Quadratic
Quadratic running time, O(n^2), is the next tier of good-enough algorithms. Quadratic complexity typically means traversing a simple 2D array, or a double For-Next loop over the same data. Examples of algorithms with quadratic complexity (the first is sketched below) are:
- Bubble Sort
- Insertion Sort
- Selection Sort
- Comparing every pair of elements in an array
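A minimal C++ sketch of bubble sort: the nested loops perform roughly n*(n-1)/2 comparisons, which is where the O(n^2) comes from:

```cpp
#include <utility>
#include <vector>

// Repeatedly bubbles the largest remaining element to the end: O(n^2).
void bubbleSort(std::vector<int>& a) {
    for (std::size_t i = 0; i + 1 < a.size(); ++i)
        for (std::size_t j = 0; j + 1 < a.size() - i; ++j)
            if (a[j] > a[j + 1])
                std::swap(a[j], a[j + 1]);  // put the adjacent pair in order
}
```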
Rank 8: Cubic
Cubic running time, O(n^3), and other low-order polynomial complexities are generally still tractable, and such algorithms can be considered good when no better alternative exists and the data size is limited. Examples of algorithms with cubic complexity are:
- Floyd–Warshall all-pairs shortest path algorithm (see the sketch after this list)
- Naive matrix multiplication (three nested loops)
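A minimal C++ sketch of Floyd–Warshall: three nested loops over the n vertices give O(n^3). Here dist is assumed to be the adjacency matrix of edge weights, with dist[i][i] = 0 and missing edges set to a large value (small enough that adding two of them cannot overflow):

```cpp
#include <algorithm>
#include <vector>

// All-pairs shortest paths, updating dist in place: O(n^3).
void floydWarshall(std::vector<std::vector<long long>>& dist) {
    int n = (int)dist.size();
    for (int k = 0; k < n; ++k)            // allow k as an intermediate vertex
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                dist[i][j] = std::min(dist[i][j], dist[i][k] + dist[k][j]);
}
```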
Rank 9: Exponential
In general, algorithms with exponential time complexity O(c^n) are considered bad in terms of running time. Examples of algorithms with exponential complexity are:
- Recursive calculation of Fibonacci numbers (see the sketch after this list)
- Brute-force search over all 2^n subsets of a set
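A minimal C++ sketch of the naive recursive Fibonacci: each call spawns two more, so the call tree has on the order of 1.618^n nodes, an exponential blow-up (memoization or a simple loop reduces this to O(n)):

```cpp
// Naive recursion: O(1.618^n) calls. Fine for n around 30, hopeless beyond.
long long fib(int n) {
    if (n < 2) return n;                 // base cases: fib(0)=0, fib(1)=1
    return fib(n - 1) + fib(n - 2);      // two recursive calls per node
}
```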
Rank 10: Factorial
The worst common time complexity is factorial, O(n!). Examples of algorithms with factorial complexity are:
- Listing all permutations of an array (of course; see the sketch after this list)
- Brute-force (naïve) solution of the Traveling Salesman Problem
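A minimal C++ sketch of listing all permutations: there are n! orderings of n distinct items, so merely visiting each one is already factorial time. std::next_permutation steps through them in lexicographic order:

```cpp
#include <algorithm>
#include <vector>

// Visits all n! permutations of a (n = a.size()).
void allPermutations(std::vector<int> a) {
    std::sort(a.begin(), a.end());       // start from the smallest ordering
    do {
        // ... process one of the n! permutations here ...
    } while (std::next_permutation(a.begin(), a.end()));
}
```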
Summary
In summary, if we take n = 1 million as our standard of comparison, the best algorithms have computational speed at least as fast as O(n log(n)): that is, O(log(log(n))), O(log(n)), O(sqrt(n)), O(n), or O(n log(n)). For n = 1 million, an O(n log(n)) algorithm performs on the order of 2 x 10^7 basic operations and finishes in at most seconds on modern hardware, whereas O(n^2) already requires about 10^12 operations. The sketch below tallies these counts.
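The operation counts quoted above are easy to verify; this short C++ sketch evaluates each growth function at n = 1 million:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    double n = 1e6;  // our standard of comparison
    std::printf("log(log(n)) ~ %.1f\n", std::log2(std::log2(n)));  // ~4.3
    std::printf("log(n)      ~ %.1f\n", std::log2(n));             // ~19.9
    std::printf("sqrt(n)     ~ %.0f\n", std::sqrt(n));             // 1000
    std::printf("n           ~ %.0e\n", n);                        // 1e+06
    std::printf("n log(n)    ~ %.1e\n", n * std::log2(n));         // ~2e+07
    std::printf("n^2         ~ %.0e\n", n * n);                    // 1e+12
    std::printf("n^3         ~ %.0e\n", n * n * n);                // 1e+18
    return 0;
}
```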
See also: Common Data structure Operation
This tutorial is copyrighted.
How to cite this tutorial:
Teknomo, Kardi. (2017) Analysis of Algorithm.
http://people.revoledu.com/kardi/tutorial/Algorithm/