By Kardi Teknomo, PhD.

Analysis of Algorithm


By this point, you already have a rough idea of what big-oh notation is about. In this section, we refine that idea by connecting asymptotic functions to estimated running times. Which big-oh notation gives the fastest running time? Is a factorial asymptotic function O(n!) faster than a polynomial one such as O(n^3)?

Let us compare several common asymptotic functions and estimate their running times. The table below shows the estimated running time if one step takes about a microsecond (= 1/1,000,000 second). The asymptotic functions are sorted from fastest to slowest. The table makes clear that, depending on the problem size at hand, any algorithm slower than O(n^2) is too slow for large data sets; some would take more than a millennium (1,000 years) to run. An algorithm with factorial running time cannot be used even for a problem size of 25 data points, and an algorithm with polynomial running time O(n^3) becomes impractical for problems larger than about 10,000 data points. Empty cells in the table mean the time is too large to be worth computing.

| function | n = 1 | n = 10 | n = 25 | n = 50 | n = 100 | n = 500 | n = 1,000 | n = 10,000 | n = 100,000 | n = 1,000,000 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s |
| log(log(n)) | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s |
| log(n) | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s |
| sqrt(n) | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s |
| n | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | 1 s |
| n log(n) | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | 6 s |
| n^2 | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | 1 s | 1.67 min | 2.78 hours | 1.65 weeks |
| n^3 | < 1 s | < 1 s | < 1 s | < 1 s | 1 s | 2.1 min | 16.7 min | 1.65 weeks | 3.17 decades | 31.7 millennia |
| n^log(n) | < 1 s | < 1 s | < 1 s | < 1 s | < 1 s | 19.2 s | 16.7 min | 317 years | 3.17 x 10^8 millennia | 3.17 x 10^19 millennia |
| 2^n | < 1 s | < 1 s | 33.5 s | 35 years | 4 x 10^13 millennia | 10^134 millennia | 3.4 x 10^284 millennia | | | |
| 3^n | < 1 s | < 1 s | 9.8 days | 2.3 x 10^7 millennia | 1.6 x 10^31 millennia | 10^222 millennia | | | | |
| n! | < 1 s | 3.6 s | 4.9 x 10^8 millennia | 9.6 x 10^47 millennia | 2.9 x 10^141 millennia | | | | | |

Clearly, the list of asymptotic functions above is not exhaustive. You can estimate the computational time of any asymptotic function yourself using a spreadsheet such as Excel. If one step takes c seconds (in the table above we set c = 10^-6 seconds), and the asymptotic function is f(n), then the formula to compute the computational time is simply T(n) = c * f(n).
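As a sketch, the formula above can be put into a few lines of Python. The one-microsecond step time and the base-10 logarithm match the table above; the function and variable names are my own choices, not from the original tutorial:

```python
import math

STEP_TIME = 1e-6  # one step takes a microsecond, as assumed in the table above

def estimated_time(f, n, step_time=STEP_TIME):
    """Estimated running time in seconds: T(n) = c * f(n)."""
    return step_time * f(n)

# a few of the asymptotic functions from the table (log taken as base 10)
functions = {
    "n": lambda n: n,
    "n log(n)": lambda n: n * math.log10(n),
    "n^2": lambda n: n ** 2,
    "n^3": lambda n: n ** 3,
}

for name, f in functions.items():
    print(f"{name:>9}: n = 1,000,000 takes about {estimated_time(f, 10**6):,.0f} s")
```

Running this reproduces the last column of the table: 1 s for n, 6 s for n log(n), about 10^6 s (1.65 weeks) for n^2, and about 10^12 s (31.7 millennia) for n^3.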

The number of seconds in each unit of time, shown in the table below, is useful for converting seconds into more readable time:

| Time | Number of seconds |
|---|---|
| A second | 1 |
| A minute | 60 |
| An hour | 3,600 |
| A day | 86,400 |
| A week | 604,800 |
| A month | 2,592,000 |
| A quarter | 7,776,000 |
| A semester | 15,552,000 |
| A year | 31,536,000 |
| A decade | 315,360,000 |
| A century | 3,153,600,000 |
| A millennium | 31,536,000,000 |
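The conversion table above can be turned into a small helper. This is a sketch using a subset of the units from the table (month, quarter, and semester are omitted, and the function name is mine):

```python
# seconds per unit, taken from the conversion table above (largest first)
UNITS = [
    ("millennia", 31_536_000_000),
    ("centuries", 3_153_600_000),
    ("decades", 315_360_000),
    ("years", 31_536_000),
    ("weeks", 604_800),
    ("days", 86_400),
    ("hours", 3_600),
    ("minutes", 60),
    ("seconds", 1),
]

def readable(seconds):
    """Express a duration in the largest unit that fits, as in the tables above."""
    if seconds < 1:
        return "< 1 s"
    for name, size in UNITS:
        if seconds >= size:
            return f"{seconds / size:.2f} {name}"

print(readable(0.5))        # -> < 1 s
print(readable(10**6))      # -> 1.65 weeks
```

For example, the O(n^2) entry for n = 1,000,000 in the first table (10^6 seconds) comes out as 1.65 weeks.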

 

The drastic change in running time due to the non-linear nature of an algorithm is sometimes difficult for programmers to perceive, because people tend to think in a linear fashion. Let us take an example. The Floyd–Warshall algorithm is well known to have a running time complexity of O(n^3), where n is the number of nodes in a network. Suppose for a moment that you either know nothing about the algorithm's complexity or simply neglect to compute it. When you run the Floyd–Warshall algorithm on 100 nodes, it finishes in about 1 second. You are happy. Next, you run the same algorithm on 500 nodes, and it takes about 2 minutes: you increased the input size 5 times, and the running time grew by a factor of 5^3 = 125. Thinking linearly, you might then expect that doubling the nodes to 1,000 would roughly double the time, to about 4 minutes. It does not happen that way: doubling n multiplies an O(n^3) running time by 2^3 = 8, so the run takes about 17 minutes. At this point, you can probably still bear it. But if you remain ignorant of the algorithm's complexity and multiply the number of nodes by 10, to 10,000, the same Floyd–Warshall algorithm will run for more than a week on the same computer. A city network typically has more than 100,000 nodes, and running this O(n^3) algorithm on it is clearly impractical on that computer, because it would take more than 3 decades to finish.
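The scaling argument above can be checked with a short sketch: given one measured running time, a cubic extrapolation predicts the time at larger sizes. The 1-second baseline at 100 nodes is taken from the text; the function name is hypothetical:

```python
def extrapolate_cubic(base_n, base_seconds, n):
    """Predict an O(n^3) running time at size n from one measurement."""
    return base_seconds * (n / base_n) ** 3

# baseline from the text: Floyd-Warshall on 100 nodes takes about 1 second
for n in (500, 1_000, 10_000, 100_000):
    t = extrapolate_cubic(100, 1.0, n)
    print(f"{n:>7} nodes -> about {t:,.0f} seconds")
```

This prints 125 seconds (about 2 minutes) for 500 nodes, 1,000 seconds (about 17 minutes) for 1,000 nodes, 10^6 seconds (more than a week) for 10,000 nodes, and 10^9 seconds (more than 3 decades) for 100,000 nodes, matching the table.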

The lesson is that if you do not compute the time complexity of an algorithm and instead rely on watching the computer run, you will end up wasting your time waiting for it. In fact, some algorithms would never finish within your whole lifetime.

In the next section, we identify some of the best algorithms based on the order of their time complexity.


 


This tutorial is copyrighted.

How to cite this tutorial:

Teknomo, Kardi. (2017) Analysis of Algorithm. http://people.revoledu.com/kardi/tutorial/Algorithm/