/usr/bin/time -p ./a.out -r 200 -n 100000
real x.xx
user x.xx (look at this)
sys x.xx
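The gap between real (wall-clock) and user (CPU) time can also be observed from inside a program. Below is a minimal C sketch, assuming a POSIX system; the summing loop is just a placeholder workload:

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec start, end;
    clock_t cpu_start = clock();              /* CPU time, like "user" */
    clock_gettime(CLOCK_MONOTONIC, &start);   /* wall clock, like "real" */

    volatile long sum = 0;                    /* volatile: keep the loop from being optimized away */
    for (long i = 0; i < 100000000L; i++)
        sum += i;

    clock_gettime(CLOCK_MONOTONIC, &end);
    clock_t cpu_end = clock();

    double wall = (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
    double cpu  = (double)(cpu_end - cpu_start) / CLOCKS_PER_SEC;
    printf("wall: %.3f s, cpu: %.3f s\n", wall, cpu);
    return 0;
}
```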
Wall-clock time as a cost measure:
- applicable to a large class of programs (yes)
- independent of hardware (no)
- applicable to many types of resources (no)
- mathematically rigorous (no)
- useful to help us select algorithms (unclear)
Problems with counting steps directly:
- we don't know exactly how many machine-level steps happen inside one function call
- a loop guard is checked n+1 times for a loop that runs n times (see the sketch after this list)
- we don't know the computational cost associated with each different kind of step
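To see the n+1 guard checks concretely, a small sketch (the comma operator instrumenting the guard is purely for demonstration):

```c
#include <stdio.h>

int main(void) {
    int n = 5, checks = 0;
    /* The guard i < n is evaluated n times while true (entering the body)
       plus once more when it finally fails, so checks ends up as n + 1. */
    for (int i = 0; (checks++, i < n); i++)
        ;                                              /* body runs n times */
    printf("n = %d, guard checks = %d\n", n, checks);  /* prints 5 and 6 */
    return 0;
}
```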
Suppose function F(x) has running time f(n) and function G(x) has running time g(n).
Bad Definition: F(x) is better than G(x) if for all n, f(n)<=g(n)
- well, some functions handle small inputs better; we want algorithms that hold up under stress tests (large inputs).
Better Definition: F(x) is better than G(x) if there exists a natural number n0 such that for all n>n0, f(n)<=g(n)
Good Definition: F(x) is better than G(x) if there exist a natural number n0 and a real c>0 such that for all n>=n0, f(n)<=c*g(n)
(O(g(n)) is a set of functions where f(n) \in O(g(n)) iff there are some c\in \mathbb{R}^{+} and some n_0 \in \mathbb{N} s.t. (\forall n \geq n_0)(f(n)\leq c\times g(n)))
in other words, we are allowed to scale g(n) by a constant factor c so that c*g(n) eventually sits above f(n)
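A worked instance of the definition, taking f(n) = 3n+5 and g(n) = n: since 3n+5 \leq 3n+n = 4n for all n \geq 5, choosing c = 4 and n_0 = 5 witnesses 3n+5 \in O(n).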
O(g(n)) denotes the family of functions that grow no faster than some constant multiple of g(n) as n gets arbitrarily large (approaches infinity)
f \in O(g): f is no worse than g (f grows no faster than g)
O(g) is a set
O(n) is stated with respect to the named variable n; for an input measured by two variables w and h we write O(wh), not O(n^2), since n is undefined there
if f(n) and g(n) have the same complexity, then f \in O(g) as well as g \in O(f)
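For example, with f(n) = 2n^2 and g(n) = n^2: f(n) \leq 2g(n) (take c = 2) and g(n) \leq f(n) (take c = 1), so f \in O(g) and g \in O(f).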
O(1)<O(log(log(n)))<O(log(n))<O(log(n)^2)<O(n)<O(n*log(n))<O(n^2)<O(2^n)<O(n!)
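To see this ordering numerically, a quick C sketch that tabulates a few of these growth rates side by side (the range of n is arbitrary; compile with -lm):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Print several growth-rate functions side by side as n doubles. */
    printf("%6s %10s %12s %12s %22s\n", "n", "log n", "n log n", "n^2", "2^n");
    for (double n = 2; n <= 64; n *= 2)
        printf("%6.0f %10.2f %12.1f %12.0f %22.0f\n",
               n, log2(n), n * log2(n), n * n, pow(2, n));
    return 0;
}
```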
change of log base does not matter: log_a(n) = log_b(n) / log_b(a), so switching bases only changes the constant factor
(A outruns B / A outpaces B means A's running time grows faster, i.e., algorithm A is slower)
because max(x,y) <= x+y <= 2*max(x,y), we have O(max(x,y)) = O(x+y), so we just write O(x+y) (see the sketch below)
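As a sketch of where the sum rule shows up in code, two sequential loops whose costs add (the function work is hypothetical, for illustration only):

```c
#include <stdio.h>

/* Two sequential phases: the first costs O(x), the second O(y).
   Total work is x + y steps, and x + y <= 2 * max(x, y),
   so O(x + y) = O(max(x, y)). */
long work(long x, long y) {
    long steps = 0;
    for (long i = 0; i < x; i++) steps++;   /* phase 1: x steps */
    for (long j = 0; j < y; j++) steps++;   /* phase 2: y steps */
    return steps;
}

int main(void) {
    printf("%ld\n", work(1000, 250));       /* 1250 steps total */
    return 0;
}
```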