Extended Church-Turing Thesis: all reasonable models of computation are equivalent with respect to polynomial-time computability
The Random-Access Machine (RAM) model:
+, -, *, /, <, >: each takes 1 step on small (word-sized) numbers
indexed memory access: 1 step
Asymptotic Complexity: Big-O, Big-\Omega, \varTheta.
Big-O: f(n) \in O(g(n)) (little-o has its own meaning)
For f, g : \mathbb{R}^+ \rightarrow \mathbb{R}^+:
f(n) \in O(g(n)) \iff (\exists C, n_0 > 0)(\forall n \geq n_0)(f(n) \leq Cg(n))
// Exercise (Practice with big-O) // Exercise (Logarithms vs polynomials)
Big-\Omega: f(n) \in \Omega(g(n)) (little-\omega has its own meaning)
For f, g : \mathbb{R}^+ \rightarrow \mathbb{R}^+:
f(n) \in \Omega(g(n)) \iff (\exists c, n_0 > 0)(\forall n \geq n_0)(f(n) \geq cg(n))
// Exercise (Practice with big-Omega)
\varTheta: f(n) \in \varTheta(g(n))
For f, g : \mathbb{R}^+ \rightarrow \mathbb{R}^+:
f(n) \in \varTheta(g(n)) \iff f(n) \in O(g(n)) \land f(n) \in \Omega(g(n))
\iff (\exists c, C, n_0 > 0)(\forall n \geq n_0)(cg(n) \leq f(n) \leq Cg(n))
Logarithms in different bases: for any fixed base b > 1, \log_b n = \frac{\log_2 n}{\log_2 b} = \varTheta(\log n), since \frac{1}{\log_2 b} is a positive constant (take c = C = \frac{1}{\log_2 b} in the definition of \varTheta).
// Exercise (Practice with Theta)
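A quick numerical sanity check of these definitions (a sketch, not a proof: it only tests finitely many n, with the witnesses C, c, n_0 picked by hand):

```python
import math

def check_O(f, g, C, n0, N=10**6):
    """Evidence (not proof) that f(n) <= C*g(n) for all n >= n0."""
    return all(f(n) <= C * g(n) for n in range(n0, N, 997))

f = lambda n: 3 * n * n + 10 * n           # f(n) = 3n^2 + 10n
g = lambda n: n * n                        # g(n) = n^2

print(check_O(f, g, C=4, n0=10))   # True: f in O(n^2), witnesses C=4, n0=10
print(check_O(g, f, C=1, n0=1))    # True: g <= f, i.e. f in Omega(n^2)
# Both directions hold, so f in Theta(n^2).

# Base change: log_7 n is a constant multiple of log_2 n.
print(math.isclose(math.log(100, 7), math.log2(100) / math.log2(7)))  # True
```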
Length of Input: how many keystrokes it takes to write the input down. (If the input is a single number, the length of its binary encoding; if it is an array, the number of elements.)
Worst-Case Running Time: for algorithm A, T_A : \mathbb{N} \rightarrow \mathbb{N}, T_A(n) = \max\{\text{steps of } A \text{ on } x : \text{len}(x) = n\}
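To make the max over all inputs of one size concrete, here is a sketch that measures T_A(n) by brute force, counting loop iterations as steps (linear search is an assumption chosen just for illustration):

```python
from itertools import product

def linear_search_steps(xs, target):
    """Run linear search and count loop iterations ("steps")."""
    steps = 0
    for x in xs:
        steps += 1
        if x == target:
            break
    return steps

def worst_case_time(n):
    """T(n) = max steps over every 0/1 array of n elements."""
    return max(linear_search_steps(list(bits), 1)
               for bits in product([0, 1], repeat=n))

print([worst_case_time(n) for n in range(1, 6)])   # [1, 2, 3, 4, 5]: T(n) = n
```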
Constant Time: T(n) \in O(1)
Logarithmic Time: T(n) \in O(\log n)
Linear Time: T(n) \in O(n)
Quadratic Time: T(n) \in O(n^2)
Polynomial Time: (\exists k \in \mathbb{N})(T(n) \in O(n^k))
Exponential Time: (\exists k \in \mathbb{N})(T(n) \in O(2^{n^k}))
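To see how quickly these classes separate, plain arithmetic at a few input sizes:

```python
import math

# By n = 100, 2^n already dwarfs any reasonable polynomial.
for n in (10, 20, 50, 100):
    print(f"n={n:>3}  log2(n)={math.log2(n):5.2f}  n^2={n**2:>5}  2^n={2**n}")
```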
\mathtt{P}: the complexity class of all languages decidable in polynomial time.
Subroutine: a routine that does f(n) work can only produce an output string of length at most cf(n).
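This is why polynomial time composes: piping routine 1 (time f(n), output length at most cf(n)) into routine 2 (time g(m)) takes at most f(n) + g(cf(n)) steps, which is again a polynomial in n whenever f and g are polynomials.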
Algorithmic complexity: the asymptotic complexity of one particular algorithm that solves the problem
Intrinsic complexity: the asymptotic complexity of the most efficient algorithm that solves the problem (may not be well defined: a most efficient algorithm need not exist)
// Exercise (TM complexity of \{0^k 1^k : k \in \mathbb{N}\}) // Exercise (Is polynomial time decidability closed under concatenation?)
Integer Addition:
keep adding one: \Omega(2^n), since an n-bit input can encode a value exponential in n
elementary (grade-school) addition: \Theta(n)
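A sketch of the grade-school method (assuming little-endian bit lists, least significant bit first):

```python
# Grade-school binary addition: one pass with a carry bit, O(n) steps.
def add_binary(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))        # pad both numbers to n bits
    b = b + [0] * (n - len(b))
    out, carry = [], 0
    for i in range(n):
        s = a[i] + b[i] + carry       # column sum: 0, 1, 2, or 3
        out.append(s & 1)             # result bit
        carry = s >> 1                # carry into the next column
    if carry:
        out.append(1)
    return out

# 3 + 6 = 9: [1,1] is 3 and [0,1,1] is 6 (little-endian).
print(add_binary([1, 1], [0, 1, 1]))  # [1, 0, 0, 1], i.e. 9
```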
Multiplication:
elementary multiplication, division: O(len(A) \times len(B))
Can do T(n) \in O(n^{1+\epsilon}) for any \epsilon > 0
Fastest Known: Harvey and van der Hoeven (2019), O(n\log{n})
// Exercise (251st root)
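For a taste of how sub-quadratic multiplication is possible, a minimal sketch of Karatsuba's algorithm, which replaces 4 half-size products by 3 and runs in O(n^{\log_2 3}) \approx O(n^{1.585}) (operating on Python ints for brevity; the cutoff 16 is an arbitrary choice):

```python
def karatsuba(x, y):
    # Base case: small operands are multiplied directly.
    if x < 16 or y < 16:
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)     # x = xh * 2^m + xl
    yh, yl = y >> m, y & ((1 << m) - 1)     # y = yh * 2^m + yl
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    # One extra product recovers the cross terms: c = xh*yl + xl*yh.
    c = karatsuba(xh + xl, yh + yl) - a - b
    return (a << (2 * m)) + (c << m) + b

print(karatsuba(1234567, 7654321) == 1234567 * 7654321)   # True
```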
Matrix Multiplication
World Record (2020): O(n^{2.37286}) by Josh Alman and Virginia Vassilevska Williams
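These records descend from Strassen's observation that a 2 \times 2 block product needs only 7 multiplications instead of 8, giving O(n^{\log_2 7}) \approx O(n^{2.807}); a minimal sketch (assuming n is a power of two, matrices as lists of lists):

```python
# Strassen's algorithm: T(n) = 7*T(n/2) + O(n^2) = O(n^log2(7)).
def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def msub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def quadrants(A):
    n = len(A) // 2
    return ([r[:n] for r in A[:n]], [r[n:] for r in A[:n]],
            [r[:n] for r in A[n:]], [r[n:] for r in A[n:]])

def strassen(A, B):
    if len(A) == 1:
        return [[A[0][0] * B[0][0]]]
    A11, A12, A21, A22 = quadrants(A)
    B11, B12, B21, B22 = quadrants(B)
    # Seven recursive products instead of eight.
    M1 = strassen(madd(A11, A22), madd(B11, B22))
    M2 = strassen(madd(A21, A22), B11)
    M3 = strassen(A11, msub(B12, B22))
    M4 = strassen(A22, msub(B21, B11))
    M5 = strassen(madd(A11, A12), B22)
    M6 = strassen(msub(A21, A11), madd(B11, B12))
    M7 = strassen(msub(A12, A22), madd(B21, B22))
    C11 = madd(msub(madd(M1, M4), M5), M7)
    C12 = madd(M3, M5)
    C21 = madd(M2, M4)
    C22 = madd(madd(msub(M1, M2), M3), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```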
Exponential Time Cost in the Universe: we can trade time for energy
speeding up a calculation dissipates exponentially more heat
signaling at the speed of light requires energy
// TODO: Check Your Understanding
Friedman's \alpha: grows far too fast for actual computation on a digital computer
Physical Constraints: time, space, energy
reversible computation does not dissipate energy
so in principle, not all computation costs energy
Acceptance Language: L(M) = \{x \in \Sigma^* \mid C_x^{\text{init}} \xrightarrow[M]{*} C^{\text{yes}}\}
Time Complexity: T_M(x) = t \iff C_x^{\text{init}} \xrightarrow[M]{t} C^{\text{yes}} \text{ or } C^{\text{no}} (T_M(x) is always defined because we only consider decision problems, so M halts on every input)
Worst-Case Complexity: T_M(n) = \max\{T_M(x) \mid x \text{ has size } n\}
Algorithm Analysis: counts one operation of a practical model (register machines, random-access machines) as one step
Complexity Theory: counts one Turing machine transition as one step
Note: Turing machines with different numbers of tapes give the same algorithm different complexities
Under reasonable assumptions, the speed-up on a more realistic machine model versus a plain Turing machine is only a low-degree polynomial
Speed-up Theorem: one can always obtain a linear speed-up by enlarging the tape alphabet so that the machine reads 2 symbols in one step
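The trick in code form (a sketch: pack k = 2 adjacent symbols into one symbol of a larger alphabet, so each step covers two original cells):

```python
# Alphabet compression behind the linear speed-up theorem.
def compress(tape, k=2, blank='_'):
    padded = tape + blank * (-len(tape) % k)   # pad to a multiple of k
    return [padded[i:i + k] for i in range(0, len(padded), k)]

# A tape over {0,1} becomes a half-length tape over the product alphabet.
print(compress("01101"))   # ['01', '10', '1_']
```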
Time Complexity Class: TIME(f) = \{L(M) \mid M \text{ is a TM}, T_M(n) \in O(f(n))\}
Family of Time Complexity Classes: TIME(F) = \bigcup_{f \in F}TIME(f)
\mathtt{P} = TIME(\text{poly}): polynomial time
\mathtt{EXP}_k = \bigcup_{c > 0} TIME(2^{cn^k}): k-th order exponential time
\mathtt{EXP} = \bigcup_k \mathtt{EXP}_k: full exponential time
\mathtt{EEXP} = \bigcup_{c > 0} TIME(2^{2^{n^c}}): doubly exponential time
TIME(\alpha): time bounded by Friedman's self-avoiding-words function \alpha
Transducers: read-only input tape, write-only output tape, working tape. Used for combining and pipelining algorithms.
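A loose analogy in code (hypothetical stages; Python generators play the role of the tapes between the machines):

```python
# Pipelining two transducers: each stage reads its input stream (read-only)
# and yields an output stream (write-only) that feeds the next stage.
def to_binary(numbers):          # stage 1: integer -> binary string
    for n in numbers:
        yield format(n, 'b')

def popcount(bitstrings):        # stage 2: binary string -> number of 1s
    for s in bitstrings:
        yield s.count('1')

print(list(popcount(to_binary([3, 8, 15]))))   # [2, 1, 4]
```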
Properties of Complexity Classes:
Why We Use Polynomial Time as "Actually Computable"
We can almost always figure out the constants (a notable exception: algorithms obtained from the Robertson-Seymour theorem on graph minors, whose constants are non-constructive)
for a natural problem that turns out to be in polynomial time, the constants and the polynomial's degree are often small
example: Dijkstra's algorithm
Knuth's Lament: to lower the asymptotic complexity by a small amount, one often sacrifices constant factors so heavily that the result is practically inefficient.
Space Complexity Class: SPACE(f) = \{L(M) \mid M \text{ is a TM}, S_M(n) \in O(f(n))\}
Family of Space Complexity Classes: SPACE(F) = \bigcup_{f \in F}SPACE(f)
Constant Space (SPACE(1)) is equivalent to DFAs: it recognizes exactly the regular languages
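For instance (a minimal sketch; the language "even number of 1s" is chosen just for illustration):

```python
# SPACE(1) in action: a DFA uses a fixed amount of memory (here one bit)
# no matter how long its input is.
def even_ones(s):
    state = 0                 # the machine's entire memory: one parity bit
    for c in s:
        if c == '1':
            state ^= 1        # flip parity on each 1
    return state == 0         # accept iff the parity is even

print(even_ones("1101"))      # False: three 1s
print(even_ones("1100"))      # True:  two 1s
```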
Theorem: SPACE(f) for f(n) = o(\log\log n) is the same as constant space.
Trade-off between time and space
To use space, we have to spend time: a machine that runs for f(n) steps can visit at most f(n) cells, so TIME(f) \subseteq SPACE(f)
Trade off: f(n) \geq \log n \implies SPACE(f(n)) \subseteq TIME(2^{O(f(n))})
A machine that halts can never repeat a configuration, and a space-bounded machine has only boundedly many configurations; therefore we can bound time by space.
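Concretely (a standard count, assuming state set Q and one work tape over alphabet \Gamma): there are at most |Q| \cdot n \cdot f(n) \cdot |\Gamma|^{f(n)} = 2^{O(f(n))} configurations when f(n) \geq \log n, which yields SPACE(f(n)) \subseteq TIME(2^{O(f(n))}).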
Time Power (Time Hierarchy Theorem): let f be time constructible and g(n) = o(f(n)); then TIME(g(n)) \subsetneq TIME(f(n)\log f(n))
Space Power (Space Hierarchy Theorem): let f(n) \geq \log n be space constructible and g(n) = o(f(n)); then SPACE(g(n)) \subsetneq SPACE(f(n))