Lecture 008

Ada's Lecture

Fixing Undecidable Problem

Extended Church-Turing Thesis: all reasonable models of computation are equivalent with respect to polynomial-time computability

The Random-Access Machine (RAM) model: memory cells hold integers, each cell can be read or written in one step, and basic arithmetic, comparisons, and jumps also count as single steps.

Asymptotic Complexity: Big-O, Big-\Omega, \varTheta.

Big-O: f(n) \in O(g(n)) \iff (\exists c > 0)(\exists n_0 \in \mathbb{N})(\forall n \geq n_0)(f(n) \leq c \cdot g(n)) (little-o has its own meaning)

// Exercise (Practice with big-O)

// Exercise (Logarithms vs polynomials)

Big-\Omega: f(n) \in \Omega(g(n)) \iff (\exists c > 0)(\exists n_0 \in \mathbb{N})(\forall n \geq n_0)(f(n) \geq c \cdot g(n)) (little-\omega has its own meaning)

// Exercise (Practice with big-Omega)

\varTheta: f(n) \in \varTheta(g(n)) \iff f(n) \in O(g(n)) \land f(n) \in \Omega(g(n))
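A quick worked instance of these definitions (my own example, not from the lecture): 3n^2 + 5n \in \varTheta(n^2), since for all n \geq 1 we have 3n^2 \leq 3n^2 + 5n \leq 8n^2; taking c = 3 witnesses the \Omega(n^2) bound and c = 8 the O(n^2) bound, both with n_0 = 1.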

Logarithms in different bases: \log_b n = \frac{\log_2 n}{\log_2 b} = \frac{1}{\log_2 b} \log_2 n = \varTheta(\log n), since for fixed b > 1 the factor \frac{1}{\log_2 b} is a positive constant; changing the base only changes the constant factor.

// Exercise (Practice with Theta)

Length of Input: how many keystrokes it takes to write the input down. (If the input is a single number, this is the length of its binary encoding; if it is an array, it is the number of elements.)
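A minimal Python sketch (my own illustration, not from the lecture) of measuring input length as the number of bits rather than the numeric value:

```python
def input_length(x: int) -> int:
    """Length of the binary encoding of a nonnegative integer."""
    return max(1, x.bit_length())  # 0 still takes one symbol to write

# The value grows exponentially faster than its encoding length:
for x in [5, 1_000, 1_000_000, 2**64]:
    print(x, "has input length", input_length(x))
```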

Worst-Case Running Time: for an algorithm A, T_A: \mathbb{N} \rightarrow \mathbb{N}, T_A(n) = \max\{\text{number of steps } A \text{ takes on input } x : \text{len}(x) = n\}

Constant Time: T(n) \in O(1)

Logarithmic Time: T(n) \in O(\log n)

Linear Time: T(n) \in O(n)

Quadratic Time: T(n) \in O(n^2)

Polynomial Time: (\exists k \in \mathbb{N})(T(n) \in O(n^k))

Exponential Time: (\exists k \in \mathbb{N})(T(n) \in O(2^{n^k}))
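A small Python sketch (my own illustration) comparing how these bounds grow with the input length:

```python
import math

# Rough step counts for a few input lengths under logarithmic, quadratic, and exponential bounds.
for n in [10, 20, 40, 80]:
    print(f"n={n:3d}  log n={math.log2(n):5.1f}  n^2={n**2:6d}  2^n={2**n}")
```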

\mathtt{P}: the complexity class of all languages decidable in polynomial time.

Subroutine: if a routine runs in time f(n), then it can only produce an output string of length at most c \cdot f(n), since each step writes at most a constant number of symbols.
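For example (my own illustration): if routine A runs in time O(n^2), its output has length O(n^2); feeding that output into a routine B running in time O(m^3) on inputs of length m costs O((n^2)^3) = O(n^6) overall, still a polynomial. This is why polynomial-time subroutines compose without leaving \mathtt{P}.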

Algorithmic complexity: the asymptotic complexity of one particular algorithm that solves the problem.

Intrinsic complexity: the asymptotic complexity of the most efficient algorithm that solves the problem (this may not be well defined: a single most efficient algorithm need not exist).

// Exercise (TM complexity of \{0^k1^k : k \in \mathbb{N}\})

// Exercise (Is polynomial time decidability closed under concatenation?)

Integer Addition: grade-school addition of two n-bit integers takes O(n) bit operations.
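A minimal Python sketch of grade-school binary addition (my own illustration, not the lecture's code): each bit position is handled once, so the work is linear in the number of bits.

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings with the grade-school algorithm: O(n) bit operations."""
    i, j, carry = len(a) - 1, len(b) - 1, 0
    out = []
    while i >= 0 or j >= 0 or carry:
        s = carry + (int(a[i]) if i >= 0 else 0) + (int(b[j]) if j >= 0 else 0)
        out.append(str(s % 2))
        carry = s // 2
        i, j = i - 1, j - 1
    return "".join(reversed(out))

assert add_binary("1011", "110") == "10001"  # 11 + 6 = 17
```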

Multiplication: grade-school multiplication of two n-bit integers takes O(n^2) bit operations; Karatsuba's divide-and-conquer algorithm improves this to O(n^{\log_2 3}) \approx O(n^{1.585}).
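A sketch of Karatsuba multiplication in Python (my own illustration, not the lecture's code): three recursive multiplications on half-size numbers instead of four give the O(n^{\log_2 3}) bound.

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply nonnegative integers with Karatsuba's three-multiplication recursion."""
    if x < 16 or y < 16:                      # small base case: fall back to builtin multiply
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)       # split x into high and low halves
    yh, yl = y >> m, y & ((1 << m) - 1)
    a = karatsuba(xh, yh)                     # high * high
    b = karatsuba(xl, yl)                     # low * low
    c = karatsuba(xh + xl, yh + yl) - a - b   # cross terms via a single multiplication
    return (a << (2 * m)) + (c << m) + b

assert karatsuba(12345, 6789) == 12345 * 6789
```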

// Exercise (251st root)

Matrix Multiplication: the naive algorithm multiplies two n \times n matrices with O(n^3) arithmetic operations; Strassen's algorithm achieves O(n^{\log_2 7}) \approx O(n^{2.81}).
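A minimal Python sketch of the naive triple-loop algorithm (my own illustration): three nested loops of length n give the O(n^3) count.

```python
def mat_mul(A, B):
    """Naive matrix multiplication: O(n^3) scalar multiplications for n x n inputs."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
    return C

assert mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```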

Exponential Time Cost in the Universe: one can trade time for energy, but exponential-time computations remain physically infeasible at scale.

// TODO: Check Your Understanding

Sutner's Lecture

Resource Bounds

Friedman's \alpha: computations that cannot actually be carried out on any digital computer.

Physical Constraints: time, space, energy

Acceptance Language: L(M) = \{x \in \Sigma^* \mid C_x^{\text{init}} \xrightarrow[M]{} C^{\text{yes}}\}

Complexity

Time Complexity: T_M(x) = t \iff C_x^{\text{init}} \xrightarrow[M]{t} C^{\text{yes}} \text{ or } C^{\text{no}} (T_M(x) is always defined because we are only interested in decision problems, so M halts on every input)

Worst Case Complexity: T_M(n) = \max\{T_M(x) \mid x \text{ has size } n\}
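A small Python sketch (my own illustration) of the worst-case definition: run a step-counting decider on every input of size n and take the maximum.

```python
from itertools import product

def count_steps(x: str) -> int:
    """Step count of a hypothetical toy decider: a '0' costs one step, a '1' costs a full rescan."""
    return sum(1 if c == "0" else len(x) for c in x)

def worst_case(n: int) -> int:
    """T(n) = max number of steps over all inputs of size n."""
    return max(count_steps("".join(bits)) for bits in product("01", repeat=n))

print([worst_case(n) for n in range(1, 7)])  # [1, 4, 9, 16, 25, 36]: the all-1s input is worst
```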

Algorithm Analysis: analyze practical algorithms on register machines / random-access machines, where each machine operation counts as one step.

Complexity Theory: use Turing machines, where each TM transition counts as one step.

Speed-up Theorem: one can always obtain a linear (constant-factor) speed-up by enlarging the tape alphabet so that several symbols are read in one step.

Time Complexity Class: TIME(f) = \{L(M) \mid M \text{ is a TM}, T_M(n) \in O(f(n))\}

Family of Time Complexity Classes: TIME(F) = \bigcup_{f \in F} TIME(f)

Tractability

Transducers: read-only input tape, write-only output tape, and a working tape. Used for combining and pipelining algorithms.
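A loose Python analogy (my own illustration, not the lecture's formalism): composing transducers is like piping one stream-processing function into another, where each stage reads its input once and writes its output once.

```python
def double(xs):
    """First stage: reads the input stream once, writes one output symbol per input symbol."""
    for x in xs:
        yield 2 * x

def running_sum(xs):
    """Second stage: consumes the first stage's output as its own read-only input."""
    total = 0
    for x in xs:
        total += x
        yield total

# Pipelining: the output tape of `double` becomes the input tape of `running_sum`.
print(list(running_sum(double(range(5)))))  # [0, 2, 6, 12, 20]
```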

Properties of Complexity Classes:

Why We Use Polynomial Time as "Actually Computable": polynomials are closed under addition, multiplication, and composition, and \mathtt{P} is the same class under all reasonable machine models.

Arguing About \mathtt{P}

Dijkstra's algorithm: with a binary-heap priority queue it runs in O((n + m)\log n) time on a graph with n vertices and m edges.
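A compact Python sketch of Dijkstra with a binary heap (my own illustration): each of the O(n + m) heap operations costs O(\log n), giving the stated bound.

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances from source; adj[u] is a list of (v, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```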

Knuth's Lament: in order to improve the asymptotic complexity by a small amount, one often sacrifices constant factors so heavily that the resulting algorithms are practically inefficient.

Space Constraints

Space Complexity Class: SPACE(f) = \{L(M) \mid M \text{ is a TM}, S_M(n) \in O(f(n))\}

Family of Space Complexity Classes: SPACE(F) = \bigcup_{f \in F} SPACE(f)

Hierarchy of Complexity Classes

Trade-off between time and space

Time Power (Time Hierarchy Theorem): let f be time constructible and g(n) = o(f(n)); then TIME(g(n)) \subsetneq TIME(f(n)\log f(n)).

Space Power (Space Hierarchy Theorem): let f(n) \geq \log n be space constructible and g(n) = o(f(n)); then SPACE(g(n)) \subsetneq SPACE(f(n)).
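As a concrete instance (my own instantiation of the theorems): with g(n) = n and f(n) = n^2, Time Power gives TIME(n) \subsetneq TIME(n^2 \log n); with g(n) = \log n and f(n) = n, Space Power gives SPACE(\log n) \subsetneq SPACE(n). Strictly more time or space decides strictly more languages.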
