On Apr 25, 1996, we finished the analysis of the greedy algorithm, showing that the quantity we had been using as a pessimistic estimator does indeed have the defining properties of one. We introduced and proved a Hoeffding/Chernoff-type bound due to Raghavan -- this bound says that a sum of independent 0-1 random variables exceeds its expected value μ by a factor of (1+ε) with probability at most exp(-με²/3) (for 0 < ε < 1). The proof bounds the probability of failure by the expected value of an exponential function.
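In symbols (a standard restatement, not part of the lecture log: X = X_1 + ... + X_n with μ = E[X], and t = ln(1+ε) > 0), the proof runs

    \Pr[X \ge (1+\epsilon)\mu]
      = \Pr[e^{tX} \ge e^{t(1+\epsilon)\mu}]
      \le \frac{\mathrm{E}[e^{tX}]}{e^{t(1+\epsilon)\mu}}
      = \frac{\prod_i \mathrm{E}[e^{tX_i}]}{e^{t(1+\epsilon)\mu}}
      \le \left(\frac{e^{\epsilon}}{(1+\epsilon)^{1+\epsilon}}\right)^{\mu}
      \le e^{-\mu\epsilon^2/3},

where the first inequality is Markov's, the product uses independence, and the last inequality holds for 0 < ε < 1.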
On Apr 30, 1996, we introduced the non-concurrent multi-commodity flow problem and used oblivious randomized rounding to derive an approximation algorithm.
Our reference for this part of the course is the following paper, which includes the set-cover example and a generalization of the multi-commodity flow algorithm (a packing problem): "Randomized rounding without solving the linear program", Proc. SODA '95, pp. 170-178.
Linear programming is the study of when systems of linear inequalities are satisfiable, much as linear algebra is the study of when systems of linear equalities are satisfiable. A linear programming problem is specified by giving a multi-variate linear objective function to maximize or minimize and a set of linear constraints that any solution must satisfy. Optimal solutions to linear programs can be found in polynomial time.
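In symbols (one standard form, restated here for concreteness):

    \max\; c^T x \quad \text{subject to} \quad Ax \le b, \;\; x \ge 0,

where x ∈ R^n is the vector of variables, c gives the objective coefficients, and each of the m rows of Ax ≤ b is one linear constraint.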
Integer linear programming additionally allows constraining some of the variables to take integer values; finding optimal solutions to integer linear programs is NP-hard. Given an integer linear program, relaxing the program means removing all the "variables must take integer values" constraints, leaving a linear program.
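The integer program differs from its relaxation in exactly one line:

    \max\; c^T x \quad \text{subject to} \quad Ax \le b, \;\; x \ge 0, \;\; x_i \in \mathbb{Z} \text{ for each } i \in I;

relaxing means deleting the final constraint.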
Often one can formulate a combinatorial optimization problem as an integer linear programming problem. A crucial question is how the optimal solutions to the integer program relate to the optimal solutions to its relaxation. For some problems (for example, maximum flow with integer capacities) relaxing the constraints makes no difference: there are always optimal solutions that are integral. In general, though, relaxing the problem admits fractional solutions whose objective value can be much smaller (for minimization problems) or much larger (for maximization problems) than any integer solution's. The worst-case ratio between the best integer solution and the best fractional solution is called the integrality gap. Empirically, the integrality gap is often closely related to how well the original problem can be approximated in polynomial time.
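A concrete example (mine, not from the lecture): for vertex cover on the complete graph K_n, the natural integer program is

    \min\; \sum_v x_v \quad \text{subject to} \quad x_u + x_v \ge 1 \text{ for every edge } \{u,v\}, \quad x_v \in \{0,1\}.

The relaxation admits x_v = 1/2 for every v, with value n/2, while any integer cover must include at least n-1 vertices; the ratio (n-1)/(n/2) tends to 2, essentially matching the best approximation ratio known for vertex cover.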
The idea motivating randomized rounding is the following. To approximately solve an integer linear program, why not first solve the relaxed linear program and then convert the fractional solution into a "nearby" integer solution? Randomized rounding is a general technique for doing this conversion: use a random process (such as random sampling or biased coin flipping) to produce an integer solution from the fractional one. To show that the resulting solution is good with positive probability (or better, with high probability), use the probabilistic method. (A standard reference is the book The Probabilistic Method by Alon and Spencer, published by Wiley.)
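A minimal sketch of the rounding step in Python (my own illustration; it assumes a 0-1 program whose fractional solution is already in hand):

    import random

    def randomized_round(x):
        # Set each 0-1 variable to 1 independently with probability
        # equal to its fractional value, so each variable -- and hence
        # each linear constraint -- keeps its expected value.
        return [1 if random.random() < xi else 0 for xi in x]

    # The Chernoff-type bound above then controls how far each
    # constraint can drift from its expectation.
    print(randomized_round([0.5, 0.25, 1.0, 0.0]))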
The method of conditional probabilities is a technique for converting such a probabilistic existence proof into a constructive algorithm. Briefly, the idea is to emulate the random process by a deterministic one that does "as well" as the random process does. More specifically, it applies when the proof has the following form: "It suffices to find an object X such that the value F(X) of some specified function F is (for example) less than 1. The analysis of some random process producing a random X shows that on average F(X) is less than 1. Therefore the desired X must exist."
The random process is emulated by breaking it into a sequence of steps; each step is performed deterministically by making the choice that minimizes the conditional expectation of F(X), conditioned on the steps taken so far. We can picture this as a deterministic descent along a path in a tree representing all possible runs of the original random process, where each node is labeled with the expected value of F(X) if the remaining steps are made randomly from that point. Since each node's label is the average of its children's labels, some child's label is no larger, so the descent never increases the label; it ends at a leaf -- a fully determined X -- with F(X) at most the root's label, which is less than 1.
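As a small illustration (my own sketch, not from the lecture), here is the method applied to MAX-CUT, where the random process colors each vertex by a fair coin flip, F(X) is the number of uncut edges, and the conditional expectations are exactly computable:

    def derandomized_max_cut(n, edges):
        # Random process: each vertex joins side 0 or 1 by a fair coin
        # flip, so E[F] = m/2 (each edge is uncut with probability 1/2).
        # Conditioned on the vertices placed so far,
        #   E[F] = (#decided edges already uncut) + (#undecided edges)/2,
        # so placing each vertex on the side that creates fewer uncut
        # edges never increases the conditional expectation.  The final
        # cut therefore contains at least half the edges.
        side = {}
        for v in range(n):
            uncut_if = [0, 0]  # new uncut edges if v joins side 0 or 1
            for a, b in edges:
                u = b if a == v else a if b == v else None
                if u is not None and u in side:
                    uncut_if[side[u]] += 1
            side[v] = 0 if uncut_if[0] <= uncut_if[1] else 1
        return side

    # Example: on a triangle plus a pendant edge, at least 2 of the
    # 4 edges end up cut (here, 3 do).
    print(derandomized_max_cut(4, [(0, 1), (1, 2), (2, 0), (0, 3)]))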
Sometimes the conditional expectations in question are difficult to compute. In that case one can often find pessimistic estimators: efficiently computable upper bounds on the conditional expectations that retain the properties the analysis needs -- the initial estimate is less than 1, and at each step some choice keeps the estimate from increasing. These estimators are often implicit in the original probabilistic analysis.
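For example, in the Chernoff-bound setting above (a standard restatement of Raghavan's construction; notation as before), once the first k variables have been fixed at values x_1, ..., x_k, one can take the estimator

    \phi_k \;=\; e^{-t(1+\epsilon)\mu} \prod_{i \le k} e^{t x_i} \prod_{i > k} \mathrm{E}\!\left[e^{t X_i}\right].

By Markov's inequality, φ_k upper bounds the conditional probability that the sum exceeds (1+ε)μ; it is easy to evaluate; and choosing each x_{k+1} to minimize φ_{k+1} never increases it, since averaging φ_{k+1} over a random X_{k+1} gives back φ_k.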
Finally, if the randomized rounding experiment is designed correctly, it is (remarkably) possible to obtain pessimistic estimators that can be computed without knowing the optimal fractional solution -- only the constraints that such a solution would satisfy. In this case, I call the technique oblivious randomized rounding.
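For a taste of what this yields (my own sketch; per the SODA '95 paper cited above, minimizing a suitable pessimistic estimator for set cover -- one depending only on which elements remain uncovered -- recovers the classical greedy algorithm):

    def greedy_set_cover(universe, sets):
        # Repeatedly pick the set covering the most uncovered elements.
        # This choice is what falls out of minimizing a pessimistic
        # estimator that never consults a fractional LP solution.
        uncovered = set(universe)
        cover = []
        while uncovered:
            best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
            if not sets[best] & uncovered:
                raise ValueError("no cover exists")
            cover.append(best)
            uncovered -= sets[best]
        return cover

    print(greedy_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {3, 4}, {4, 5}]))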