
Let us take the classical example of sorting a list of numbers. Given a list of numbers, the goal is to put the list in order from the smallest to the largest. Many sorting algorithms serve this same purpose; to name a few, people have given them names such as Bucket Sort, Merge Sort, Heap Sort, Tree Sort, Splay Sort, Quick Sort, k-d Tree Sort, Bubble Sort, Insertion Sort, and Selection Sort. Each algorithm employs a specific type of data structure and a certain set of assumptions. Of course, you want to employ the best algorithm for your software. How do you compare them? How do you measure the complexity of an algorithm? One possibility is to measure the length of the program, such as its number of lines of code. However, this kind of measure is incorrect, because some algorithms are much longer in lines of code yet faster in actual running time.
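To see this concretely, here is a minimal Python sketch (not part of the original tutorial) comparing a short bubble sort against a longer merge sort. On a few thousand random numbers, the longer program typically finishes much faster, which is why line count is a poor complexity measure.

```python
import random
import time

def bubble_sort(a):
    """A short program, but it does on the order of n^2 comparisons."""
    a = a[:]  # work on a copy so the input is untouched
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def merge_sort(a):
    """A longer program, but it does on the order of n log n comparisons."""
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

data = [random.randint(0, 10**6) for _ in range(2000)]
for sort in (bubble_sort, merge_sort):
    start = time.perf_counter()
    result = sort(data)
    print(sort.__name__, round(time.perf_counter() - start, 4), "seconds")
```

Exact timings depend on your machine, but merge sort, despite being longer, consistently beats bubble sort as the list grows.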

For simplicity, we shall use running time as our complexity measure, because the memory-space bound is usually a linear function of the input size of the data.

If your data is very big, the running time of the algorithm will be longer than if the data is very small. Thus, the input size of the data matters. Let us denote the input size of the data by n. Obviously, it is a simplification to use only a single variable n to represent the data size. Suppose we have a matrix whose data size depends on the number of rows m and the number of columns n; then we can extend the input size to the pair (m, n). Similarly, in a network graph where the data size depends on the number of nodes n and the number of links m, we can again extend the input size to (n, m).

In general, we shall measure the running time of an algorithm as a function of the worst-case input data. We may be overly pessimistic, but this guarantees the actual performance. If the worst-case scenario rarely happens in practice, we can use average-case analysis instead. This is done by averaging the running time over all possible inputs of a given size. However, for the average-case analysis the probability distribution of the inputs matters, and it should be checked whether that distribution reflects reality.
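A minimal sketch of the worst-case versus average-case distinction, using linear search as the example (the function and constants here are illustrative, not from the original tutorial): searching a list of n items takes n comparisons in the worst case, but only about (n + 1)/2 on average when the target is uniformly distributed.

```python
import random

def linear_search(items, target):
    """Return (index, comparison count) for a simple left-to-right scan."""
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

n = 1000
items = list(range(n))

# Worst case: the target sits at the very end of the list.
_, worst = linear_search(items, n - 1)
print(worst)  # 1000 comparisons, i.e. n

# Average case: average over many uniformly random targets.
trials = 10000
avg = sum(linear_search(items, random.randrange(n))[1]
          for _ in range(trials)) / trials
print(round(avg))  # close to (n + 1) / 2 under a uniform distribution
```

If the targets were not uniformly distributed (say, the first few items were looked up most of the time), the average would be very different, which is why the distribution must be checked against reality.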

We only want an estimate of the relative running time of an algorithm as a function of the input size of the data. Remember, our purpose is to compare various algorithms that serve the same purpose so that we can select the best algorithm for our problem. The detailed running time is machine dependent (including OS, compiler, programming language, programmer, etc.), and that dependence appears as a constant factor in the analysis of the algorithm. Since we are interested in a machine-independent measure, we shall ignore the constant factors in the analysis of the algorithm.

Researchers in computer science have concluded that if the problem size is large enough, the relative efficiency of two algorithms depends on their running times as an asymptotic function of the input size, independent of constant factors [Tarjan, 1991]. The asymptotic function that reflects the running time of an algorithm is written with the notation O(.), and therefore it is often called big-oh notation. This big-oh notation measures the worst-case running time as a function of the input size. When we have

f(n) = O(g(n))

it means the function f(n) grows at most as fast as the function g(n), while both functions have the input data size n as their argument. (Formally, f(n) = O(g(n)) if there exist constants c > 0 and n0 such that f(n) <= c * g(n) for all n >= n0.) To understand the asymptotic function of the input size, let us give an example.

Say algorithm-A runs linearly. We give the asymptotic function of the input size the notation O(n). This notation means the larger the input data size, the longer the running time of algorithm-A. The actual function can be written as

f(n) = a * n + b

where a and b are constants. Since we ignore the constants to make our complexity measure machine independent, we only write it as n. To show that we are talking about the complexity of an algorithm, and not just any ordinary function, we write the above asymptotic function as O(n).

Now suppose you also know that algorithm-B, which solves the same problem as algorithm-A, runs quadratically. We give the asymptotic function of the input size the notation O(n^2). The actual quadratic function can be written as

f(n) = a * n^2 + b * n + c

where a, b, and c are constants. The term with the larger exponent, n^2, dominates the other terms when n is very large; therefore, after ignoring the constants and the non-dominant terms, we say that the quadratic function above can be written asymptotically as O(n^2).
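The dominance of the n^2 term can be checked numerically. In the short sketch below the constants a, b, and c are hypothetical values chosen only for illustration; the ratio f(n) / n^2 approaches the leading constant a as n grows, showing that the lower-order terms and constants become irrelevant.

```python
# Hypothetical constants, chosen only for illustration.
a, b, c = 3, 500, 10**6

def f(n):
    """A quadratic running-time function f(n) = a*n^2 + b*n + c."""
    return a * n**2 + b * n + c

for n in (10, 1000, 100000):
    # The ratio approaches the constant a = 3 as n grows.
    print(n, f(n) / n**2)
```

For small n the lower-order terms dominate the ratio, but by n = 100,000 it is already very close to 3, which is exactly why we summarize f(n) as O(n^2).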

Now we want to compare algorithm-A with complexity O(n) and algorithm-B with complexity O(n^2): which one is faster? Look at the table below. If your data size is 1000, and say one operation on your computer takes one millisecond, then algorithm-A will run for 1 second while algorithm-B will run for more than 16 minutes (1000 seconds) on the same computer.

Size, n     1     10     100       1000        10,000    100,000    1,000,000
A: O(n)     1     10     100       1000        10,000    100,000    1,000,000
B: O(n^2)   1     100    10,000    1,000,000   10^8      10^10      10^12
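The table above, and the one-millisecond example, can be reproduced with a short loop. As in the text, the conversion at the end assumes one operation takes one millisecond:

```python
# Reproduce the comparison table for O(n) versus O(n^2).
sizes = [1, 10, 100, 1000, 10**4, 10**5, 10**6]
print(f"{'Size, n':>12} {'A: O(n)':>12} {'B: O(n^2)':>16}")
for n in sizes:
    print(f"{n:>12} {n:>12} {n**2:>16}")

# Assume one operation = 1 millisecond, with data size n = 1000.
n = 1000
print(n / 1000, "seconds for algorithm-A")     # 1.0 seconds
print(n**2 / 1000, "seconds for algorithm-B")  # 1000.0 seconds, about 16.7 minutes
```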

It is clear that algorithm-A, which runs linearly, takes a much shorter running time when the input size is large. Thus, algorithm-A wins over algorithm-B.

You have seen through the example above that we could compare the two algorithms without even coding them. The comparison is also machine independent.

Thus, from now on, if you see someone propose an algorithm, the first thing you need to look at is its big-oh notation, so that you can compare it with other existing algorithms for the same purpose.

In the next section, you will learn the estimated running times of various asymptotic functions.

How to cite this tutorial:

Teknomo, Kardi. (2017) Analysis of Algorithm.
http://people.revoledu.com/kardi/tutorial/Algorithm/