# Approximation Algorithms

Approximation algorithms and approximation schemes, approximation-preserving reductions, and gap reductions for proving (in)approximability bounds, with examples.

## Overview

For optimization problems, every feasible solution has an associated non-negative cost, which we either minimize or maximize. For the given instances (or a bounded subclass of instances), we try to approximate the cost of the best solution.

OPT(x) denotes the optimal cost for instance x (or, depending on context, an optimal solution itself). All solutions have polynomial length and their cost can be evaluated in polynomial time; NPO is the class of such optimization problems.

The decision version of an NPO problem asks, for an instance x and a value val:

min: is OPT(x) <= val? max: is OPT(x) >= val?

These decision problems are in NP, since a solution witnessing the bound has polynomial length and its cost can be checked in polynomial time.

Approximation: ALG is a c-approximation algorithm (c >= 1) if, for every instance x, min: cost(ALG(x)) / OPT(x) <= c; max: OPT(x) / cost(ALG(x)) <= c.
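As a concrete instance of this definition (a standard textbook algorithm, not specific to these notes), the maximal-matching heuristic for Minimum Vertex Cover is a 2-approximation; a minimal sketch:

```python
# Minimal sketch (standard example, not from these notes): the maximal-matching
# algorithm for Minimum Vertex Cover. Taking both endpoints of a maximal matching
# gives cost(ALG(x)) / OPT(x) <= 2, i.e. a 2-approximation for this minimization problem.

def vertex_cover_2apx(edges):
    """Return a vertex cover of size at most 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # (u, v) joins a maximal matching; any cover must contain u or v,
            # so adding both at most doubles the optimal cost.
            cover.add(u)
            cover.add(v)
    return cover

# Example: the path 1-2-3-4 has OPT = 2 (e.g. {2, 3}); the algorithm returns {1, 2, 3, 4}.
print(vertex_cover_2apx([(1, 2), (2, 3), (3, 4)]))
```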

## PTAS: polynomial-time approximation scheme

The algorithm takes an additional input eps > 0 and produces a (1+eps)-approximate solution, running in polynomial time for every fixed eps.

- PTAS = {NPO problems admitting such a scheme}
- F-APX = {NPO problems with a polynomial-time f(n)-approximation algorithm for some f in F}
- APX = O(1)-APX, Log-APX = O(log n)-APX
- PTAS is a strict subset of APX (assuming P != NP)
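A standard concrete example of such a scheme (not from these notes) is the dynamic-programming FPTAS for 0/1 knapsack, whose running time is polynomial in both n and 1/eps; a minimal sketch of how eps trades solution quality against running time:

```python
# Minimal sketch of an approximation scheme (standard 0/1-knapsack FPTAS, not from
# these notes). The extra input eps controls quality and running time: the returned
# value is at least (1 - eps) * OPT, in time polynomial in n and 1/eps.

def knapsack_fptas(values, weights, capacity, eps):
    n = len(values)
    vmax = max(values)
    if vmax == 0:
        return 0.0
    K = eps * vmax / n                        # scaling factor
    scaled = [int(v / K) for v in values]     # rounded-down item values
    total = sum(scaled)
    INF = float("inf")
    # best_weight[j] = minimum weight of a subset with scaled value exactly j;
    # best_value[j]  = original (unscaled) value of that subset.
    best_weight = [0.0] + [INF] * total
    best_value = [0.0] * (total + 1)
    for v, w, s in zip(values, weights, scaled):
        for j in range(total, s - 1, -1):
            cand = best_weight[j - s] + w
            if cand <= capacity and cand < best_weight[j]:
                best_weight[j] = cand
                best_value[j] = best_value[j - s] + v
    return max(best_value)

# Smaller eps -> better value, larger DP table.
print(knapsack_fptas([60, 100, 120], [10, 20, 30], 50, eps=0.1))   # OPT = 220
```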

## Approximation reductions

### Approximation-preserving reductions

To reduce problem A to problem B, map an instance x of A to an instance x' = f(x) of B. If we find a solution y' for x' in B, convert it back to a solution for A with a function g: y = g(x, y'). Both f and g must be computable in polynomial time.

#### PTAS reduction

Find a (1+eps)-approximation for A by finding a (1+d(eps))-approximation for B; f and g may depend on eps. If B is in PTAS then A is in PTAS; contrapositively, if A is not in PTAS then B is not in PTAS.

• The same implications also hold with APX in place of PTAS.

AP-reduction: d(eps) = O(eps); if B has an O(f)-approximation then A has an O(f)-approximation. Strict reduction: d(eps) = eps. APX-hard = there is a PTAS-reduction to the problem from every problem in APX. An APX-hard problem has no PTAS unless APX = PTAS; otherwise all problems in APX would have a PTAS.
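As an illustration of a strict reduction (a standard example, not from these notes): Minimum Vertex Cover reduces strictly to Minimum Set Cover, because f and g translate solutions back and forth without changing their cost.

```python
# Minimal sketch (standard example, not from these notes) of a strict reduction:
# Minimum Vertex Cover -> Minimum Set Cover. f builds the set-cover instance,
# g maps a set cover y' back to a vertex cover y = g(x, y') of identical cost,
# so the approximation ratio is preserved exactly (d(eps) = eps).

def f(vertices, edges):
    """x -> x': universe = edges, one set per vertex containing its incident edges."""
    universe = set(edges)
    sets = {v: {e for e in edges if v in e} for v in vertices}
    return universe, sets

def g(x, y_prime):
    """(x, y') -> y: the chosen sets correspond one-to-one to cover vertices."""
    # x is unused here because each chosen set is named by its vertex; |y| = |y'|.
    return set(y_prime)

# Any c-approximate set cover y' for f(x) is therefore a c-approximate vertex cover g(x, y').
```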

### L-reduction

An L-reduction from A to B requires two constants alpha, beta > 0 such that:

- OPT_B(x') <= alpha * OPT_A(x), i.e. OPT_B(x') = O(OPT_A(x))
- |cost_A(y) - OPT_A(x)| <= beta * |cost_B(y') - OPT_B(x')|, i.e. the absolute error on A is O(the absolute error on B)

This gives a PTAS reduction with d(eps) = alpha * beta * eps.

For minimization: cost_A(y) >= OPT_A(x), so the second condition reads cost_A(y) <= OPT_A(x) + beta * (cost_B(y') - OPT_B(x')).
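Chaining the two conditions gives the claimed d(eps) = alpha * beta * eps in the minimization case (a worked derivation, assuming y' is a (1+eps)-approximation for B):

```latex
\begin{align*}
\mathrm{cost}_A(y)
 &\le \mathrm{OPT}_A(x) + \beta\,\bigl(\mathrm{cost}_B(y') - \mathrm{OPT}_B(x')\bigr)
   && \text{(second condition)}\\
 &\le \mathrm{OPT}_A(x) + \beta\,\varepsilon\,\mathrm{OPT}_B(x')
   && \text{($y'$ is a $(1+\varepsilon)$-approximation for $B$)}\\
 &\le \mathrm{OPT}_A(x) + \alpha\,\beta\,\varepsilon\,\mathrm{OPT}_A(x)
   && \text{(first condition)}\\
 &= (1 + \alpha\beta\varepsilon)\,\mathrm{OPT}_A(x).
\end{align*}
```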

## Approximation classes

PO ⊆ FPTAS ⊆ PTAS ⊆ APX ⊆ NPO (FPTAS: approximation schemes whose running time is also polynomial in 1/eps), and all of these inclusions are strict iff P != NP. Relative inapproximability: problems in APX whose approximation quality cannot be improved beyond some constant. Absolute inapproximability: problems with no c-approximation algorithm for any finite c (the problems in NPO \ APX).

Gap-introducing reductions and gap amplification are the main tools: gap-introducing reductions can establish both relative and absolute inapproximability, while gap amplification is used to show absolute inapproximability.

## Gap-introducing reductions

Suppose B is a minimization problem and we want to rule out approximation algorithms of a certain quality for it. Take an NP-hard decision problem A with yes/no instances, and a polynomial-time function f that maps every yes-instance of A to an instance of B with OPT <= a, and every no-instance to an instance with OPT > b. This leaves an (a, b] gap between the two kinds of images. If such a reduction exists, then B has no approximation algorithm with ratio better than b/a (unless P = NP): any such algorithm would decide A in polynomial time.

Proof: Suppose an algorithm ALG with ratio strictly better than b/a existed. For any instance x of problem A, compute f(x), an instance of B, and run ALG on it. If x is a yes-instance, then OPT(f(x)) <= a, so cost(ALG(f(x))) < (b/a) * OPT <= b; output yes. If x is a no-instance, then cost(ALG(f(x))) >= OPT(f(x)) > b; output no. Comparing cost(ALG(f(x))) with b therefore decides A in polynomial time, contradicting its NP-hardness (unless P = NP).

## Gap-introducing reduction for MinBinPacking

We reduce from the NP-complete problem Partition: can a weighted set be split into two parts of equal total weight? Let f be the reduction from Partition to MinBinPacking: on input x = (A, g) of Partition, let f(x) = (S, w) in MinBinPacking (with bin capacity 1), where

- S = A
- weight function: w(a) = g(a) / C, where C is half of the total weight, C = 0.5 * sum of g(a) over a in A

Separating the outcomes to rule out a good approximation: if x = (A, g) is a yes-instance, the two parts each fill one bin exactly, so OPT(f(x)) = 2 (the total weight is 2, so fewer bins is impossible). If x is a no-instance, no subset of S has weight exactly 1, so two bins cannot both be filled completely and three bins are necessary: OPT(f(x)) >= 3 > 3 - eps for every eps in (0, 1]. With a = 2 and b = 3 - eps, the gap rules out any ((3 - eps)/2)-approximation; hence MinBinPacking has no approximation algorithm with ratio better than 3/2 unless P = NP.
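A minimal sketch of this reduction in code; `binpacking_apx` below stands for a hypothetical polynomial-time bin-packing algorithm with ratio better than 3/2, and the argument above shows such an algorithm would decide Partition:

```python
# Sketch of the reduction f from Partition to MinBinPacking and of how a hypothetical
# better-than-3/2 approximation algorithm `binpacking_apx` would decide Partition.
# Assumption: binpacking_apx(items) returns the number of unit-capacity bins it uses.

def partition_to_binpacking(weights):
    """f(x): S = A, w(a) = g(a) / C with C = half the total weight; bin capacity is 1."""
    half_total = sum(weights) / 2
    return [w / half_total for w in weights]

def decide_partition(weights, binpacking_apx):
    items = partition_to_binpacking(weights)
    bins_used = binpacking_apx(items)
    # Yes-instance: OPT = 2, so a better-than-3/2 approximation uses < 3 bins.
    # No-instance:  OPT >= 3, so any algorithm uses at least 3 bins.
    return bins_used < 3
```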