SE250:HTTB:WIP:Big-O


Note: Even though this section is named Big-O, other related notations, like Big-Θ and Big-Ω, are also covered here.

Note: the notations f(n) and g(n) are used interchangeably with f and g below.

Introduction

The Big-O, Big-Θ and Big-Ω notations describe the resource use of an algorithm as the input size increases. (Resource usually refers to computing time or memory space used - we're mainly interested in computing time.)

Since we only consider cases where n is large, Big-O notation is also called asymptotic notation. While Big-O describes how resource use scales as the input size increases, it does not give the actual times that an algorithm takes to run (see Limitations).


Big-O

We can say that "f is O(g)" if we can find values for n0 and C such that the inequality "f(n) ≤ C * g(n)" holds, provided some conditions are satisfied. The conditions are:

  • n is at least n0 (from n = n0 all the way to infinity - i.e. n ≥ n0),
  • C must be a positive constant (C > 0), and
  • n0 should be a positive integer (n0 ∈ N).


Formally:

f(n) is O(g(n)) for positive functions f(n) and g(n), if there exists a positive integer n0 and a constant C > 0 such that for all integers n ≥ n0, f(n) ≤ Cg(n).


Big-O gives an idea of the "worst case" running time of an algorithm (i.e. the "upper bound" of an algorithm's running time). From the definition above, we know that f(n) will always be less than or equal to a multiple of g(n), so g(n) describes the maximum increase in an algorithm's running time as the input size increases.
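
For example (a worked example of our own, not from the notes): take f(n) = 3n^2 + 5n. For all n ≥ 1 we have 5n ≤ 5n^2, so f(n) ≤ 3n^2 + 5n^2 = 8n^2. Taking C = 8 and n0 = 1 therefore shows that f is O(n^2).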


Example 1, Example 2


Big-Ω

Big-Ω gives an idea of the "best case" running time of an algorithm (i.e. the "lower bound" of an algorithm's running time). We can say that "f is Ω(g)" if "g is O(f)", or equivalently if f(n) ≥ C * g(n) for some C > 0 and all n ≥ n0 (the two conditions are equivalent). Big-Ω can be thought of as the "opposite" of Big-O.

Big-Ω gives an idea of how an algorithm's minimum running time increases as the input size increases.


Formally:

f(n) is Ω(g(n)) for positive functions f(n) and g(n), if there exists a positive integer n0 and a constant C > 0 such that for all integers n ≥ n0, f(n) ≥ Cg(n).
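
Continuing the example from the Big-O section: for f(n) = 3n^2 + 5n we have f(n) ≥ 3n^2 for all n ≥ 1, so taking C = 3 and n0 = 1 shows that f is Ω(n^2).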


Big-Θ

We can say that "f is Θ(g)" when "f is O(g)" AND "g is O(f)". That is, f(n) grows no faster than g(n) (to within a constant factor), and g(n) grows no faster than f(n) (to within a constant factor). So it makes sense to say that "f(n) is equivalent to g(n)" (to within a constant factor).

Formally:

f(n) is Θ(g(n)) for positive functions f(n) and g(n) if f(n) = O(g(n)) and f(n) = Ω(g(n)).
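
In the running example, 3n^2 ≤ 3n^2 + 5n ≤ 8n^2 for all n ≥ 1, so f(n) = 3n^2 + 5n is both O(n^2) and Ω(n^2), and is therefore Θ(n^2).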

Tables showing the Big-O and Big-Ω of common operations for some data structures.
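
For instance (an illustrative sample of typical worst-case bounds, not the original tables):

  Operation           Unsorted array   Sorted array   Linked list
  Access by index     O(1)             O(1)           O(n)
  Search              O(n)             O(logn)        O(n)
  Insert at front     O(n)             O(n)           O(1)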

Example 3

Left: Graph shows f is O(g). We know this because for all n > n0, f(n) does not go above the line c*g(n).

Middle: Graph shows f is Ω(g). We know this because for all n > n0, f(n) does not go below the line c*g(n).

Right: Graph shows that f is both O(g) and Ω(g) - in fact, f is Θ(g). This is because when n > n0, one constant multiple of g(n) is an upper bound of f(n) and another is a lower bound - that is, g(n) grows at the same rate as f(n), to within a constant factor.

<html> <a><img src="http://i279.photobucket.com/albums/kk124/stsa027/4520-chap3_1.jpg" border="0" alt="Photobucket"></a> <a><img src="http://i279.photobucket.com/albums/kk124/stsa027/4520-chap3_2.jpg" border="0" alt="Photobucket"></a> <a><img src="http://i279.photobucket.com/albums/kk124/stsa027/4520-chap3_3.jpg" border="0" alt="Photobucket"></a> </html>

Rules for Asymptotic Notation

Taken from lecture notes by Mark Wilson:

  1. (irrelevance of constant factors) if c > 0 is constant then cf is Θ(f).
  2. (transitivity) If f is O(g)/Θ(g)/Ω(g) and g is O(h)/Θ(h)/Ω(h), then f is O(h)/Θ(h)/Ω(h).
  3. (sum rule) If f1 is O(g1) and f2 is O(g2) then f1 + f2 is O(max{g1; g2}).

For example, if f1 is O(n^3) and f2 is O(n^2), then f1 + f2 is O(n^3). (A code sketch illustrating the sum and product rules follows this list.)

  4. (product rule) If f1 is O(g1) and f2 is O(g2) then f1f2 is O(g1g2).
  5. (limit rule) Suppose that L := lim(n->infinity) f(n)/g(n) exists.

Then:

  • if L = 0 then f is O(g) and f is not Ω(g);
  • if 0 < L < infinity then f is Θ(g);
  • if L = infinity then f is Ω(g) and f is not O(g).

L'Hôpital's rule is often useful for the application of the limit rule. Note that the limit may not exist at all.
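
For instance, applying the limit rule to f(n) = n and g(n) = n^2: lim(n->infinity) n/n^2 = 0, so n is O(n^2) but not Ω(n^2).

The following C sketch (our own illustration, not from the lecture notes) makes the sum and product rules concrete: the nested loops perform O(n) iterations of O(n) work each, giving O(n^2) by the product rule, and adding a separate O(n) phase afterwards leaves the total at O(max{n^2, n}) = O(n^2) by the sum rule.

  #include <stdio.h>

  /* Counts the basic steps performed for input size n. */
  long work(long n) {
      long steps = 0;
      /* Product rule: O(n) iterations, each doing O(n) work -> O(n^2). */
      for (long i = 0; i < n; i++)
          for (long j = 0; j < n; j++)
              steps++;
      /* Sum rule: this extra O(n) phase leaves the total O(n^2). */
      for (long i = 0; i < n; i++)
          steps++;
      return steps; /* exactly n^2 + n steps */
  }

  int main(void) {
      for (long n = 10; n <= 10000; n *= 10)
          printf("n = %6ld: %ld steps (n^2 = %ld)\n", n, work(n), n * n);
      return 0;
  }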

None of the notations - Big-O, Big-Θ and Big-Ω - includes any information about constant factors. We ignore them because the constants are implementation- and machine-specific, and knowing them would not provide much useful information. Also, when comparing two algorithms we are usually only interested in which one performs better under a high workload (i.e. large n): if the workload is low, it doesn't really matter which algorithm is more efficient, because the absolute difference in running time is small. Under a high workload, the important factor that sets algorithms apart is their scaling behaviour (e.g. logn vs. n^2), not their constant factors - see graphs below.

<html> <a><img src="http://i279.photobucket.com/albums/kk124/stsa027/no_coefficients_in.jpg" border="0" alt="no_coefficients_in"></a> <a><img src="http://i279.photobucket.com/albums/kk124/stsa027/no_coefficients_out.jpg" border="0" alt="no_coefficients_out"></a> </html>

Growth Rate of Functions

It's all very well to know that mathematical functions can describe the worst, best or average cases of an algorithm... but which is the most desirable? The following list ranks running times from best to worst:

constant, logn, (logn)^2, n^x (x<1), n, nlogn, n(logn)^2, n^x (x>1), x^n (x>1), n!, n^n
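
To make the ordering concrete, here is a small C sketch (our own illustration, not the file the graphs came from) that tabulates a few of these functions; note how 2^n dwarfs every polynomial long before n gets large (compile with -lm):

  #include <stdio.h>
  #include <math.h>

  /* Prints several growth-rate functions for a few input sizes. */
  int main(void) {
      double sizes[] = { 10.0, 100.0, 1000.0, 1000000.0 };
      printf("%9s %7s %11s %11s %11s\n", "n", "logn", "nlogn", "n^2", "2^n");
      for (int i = 0; i < 4; i++) {
          double n = sizes[i];
          printf("%9.0f %7.1f %11.0f %11.3g %11.3g\n",
                 n, log2(n), n * log2(n), n * n, pow(2.0, n));
      }
      return 0;
  }

(At n = 10^6, 2^n overflows a double and prints as inf, while nlogn is still only about 2*10^7.)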


Below is an incomplete graphical representation of the order just listed. Notice that the graphs are generally consistent with the list order - e.g. constant time is the most desirable running time for an algorithm, followed by logn and (logn)^2.

Note: If the ordering of functions suggested by the graphs is not consistent with the list order, that is because the graphs were heavily truncated, so we do not see the complete picture for larger n, where it becomes more obvious that some functions (e.g. the exponential x^n) have far faster increases in running time as n increases.

The file from which the graphs were generated.

<html> <a href="http://i279.photobucket.com/albums/kk124/stsa027/different_fns_large-1.jpg" target="_blank"><img src="http://i279.photobucket.com/albums/kk124/stsa027/different_fns-1.jpg" border="0" alt="different_f(n)s"></a> <a href="http://i279.photobucket.com/albums/kk124/stsa027/Different_fn_large.jpg" target="_blank"><img src="http://i279.photobucket.com/albums/kk124/stsa027/Different_fn-1.jpg" border="0" alt="different_f'(n)s"></a> </html>


Left graph:

A graph of various mathematical functions vs. n. Shows the behaviour of f(n) as n increases, and how the different functions compare with each other (e.g. which ones increase faster than others).

Right graph:

This is the derivative of the previous graph. It indicates how much additional time is required as n increases (i.e. the rate of increase in resource use).


Limitations - Why sometimes knowing Big-O, Big-Θ and Big-Ω is not enough

If two algorithms have running times that scale in the same manner (e.g. both O(n) or O(logn)), Big-O will not give any information about which is the better one, because no information is given about the constants they have. Other types of analysis will have to be performed.

Also, Big-O compares the performance of different algorithms as the input size grows towards infinity. Because of this, Big-O may not give us accurate information about real-world algorithm performance, since we may never be interested in extremely large inputs.

For example, an O(n) algorithm is normally better than an O(n^2) one. But what if the constant for the O(n) algorithm is 10^10, and for the O(n^2) one it's 10^-10? In this case, the input size has to reach 10^20 before the O(n) algorithm begins to outperform the O(n^2) one - an input size that is rarely met in practice.
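
To see where that crossover comes from: the running times are equal when 10^10 * n = 10^-10 * n^2, i.e. when n = 10^10 / 10^-10 = 10^20. Below that input size, the "worse" O(n^2) algorithm is actually the faster one.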


Basic Performance Measures

While not part of the topic of Big-O itself, the idea of performance measurement is closely connected.

The first, and perhaps most important, measure is correctness: does the code actually give the correct output for given (legal) inputs - or, more simply, does the program do what it's supposed to do? This is often quite hard to prove, as it is not possible to test all possible inputs. One example of testing for correctness is unit testing, where small sections of the code are tested on sample inputs that (hopefully) span a wide enough range to give the programmer an idea of how the code will perform. A set of unit tests is built up for the program, which can be reused later on as the program changes.
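
As a minimal sketch of the idea in C (using the standard assert macro; max_of is a hypothetical function made up for illustration, and a real unit-testing framework would be more elaborate):

  #include <assert.h>
  #include <stdio.h>

  /* Hypothetical function under test. */
  static int max_of(int a, int b) { return a > b ? a : b; }

  /* A unit test: sample inputs chosen to cover the distinct cases,
     since testing every possible input is infeasible. */
  static void test_max_of(void) {
      assert(max_of(1, 2) == 2);    /* second argument larger */
      assert(max_of(5, 3) == 5);    /* first argument larger  */
      assert(max_of(4, 4) == 4);    /* equal arguments        */
      assert(max_of(-7, -2) == -2); /* negative inputs        */
  }

  int main(void) {
      test_max_of();
      printf("all tests passed\n");
      return 0;
  }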

The second measure of performance is resource use, which can be broken down into two parts: memory usage and run time. The most noticeable of these is running time, especially nowadays, when the large amounts of memory available to computers make memory usage less restrictive. Memory usage can still be a crucial issue, however, when a large amount of data is being manipulated (obviously this limit depends on the specifications of the computer in question).

Ideally we want to minimize both of these elements, and the extent to which we can depends on programming skill, the machines used, and sometimes even the compiler used.
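
One simple way to measure the run-time side is with the standard clock() function. The sketch below (our own illustration; real measurements would need repeated runs on an otherwise idle machine) times an O(n^2) loop for doubling input sizes, so the reported time should roughly quadruple at each step:

  #include <stdio.h>
  #include <time.h>

  int main(void) {
      for (long n = 1000; n <= 8000; n *= 2) {
          clock_t start = clock();
          volatile long sum = 0; /* volatile so the loop isn't optimised away */
          for (long i = 0; i < n; i++)
              for (long j = 0; j < n; j++)
                  sum += i + j;
          double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
          printf("n = %5ld: %.3f s\n", n, secs);
      }
      return 0;
  }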

Quick Quiz

<html> Big-O gives an idea of the _______ running time of an algorithm

<button onclick='javascript:alert("Wrong")'>best case</button> <button onclick='javascript:alert("Wrong")'>average</button> <button onclick='javascript:alert("Correct")'>worst case</button> <button onclick='javascript:alert("Wrong")'>random</button>

Which of the following functions represents the algorithm with the best (i.e. least) running time?

<button onclick='javascript:alert("Correct")'>log(n)</button> <button onclick='javascript:alert("Wrong")'>(n)log(n)</button> <button onclick='javascript:alert("Wrong")'>n(logn)^2</button> <button onclick='javascript:alert("Wrong")'>(logn)^2</button>

If f1 is O(g1) and f2 is O(g2) then f1 + f2 is __________.

<button onclick='javascript:alert("Wrong")'>O(min{g1; g2})</button> <button onclick='javascript:alert("Wrong")'>(O(g1+g2})</button> <button onclick='javascript:alert("Wrong")'>O(g1)+O(g2)</button> <button onclick='javascript:alert("Correct")'>O(max{g1; g2})</button>

</html>


Resources

SOFTENG250 notes by Mark Wilson

http://www.cs.otago.ac.nz/cosc242/lecture02-4up.pdf

http://en.wikibooks.org/wiki/Data_Structures/Asymptotic_Notation#Big-O_Notation

3 Graphs are from here:

http://www.cs.gsu.edu/~cscguax/algorithms/4520-chapter3.ppt

Back to Work in Progress