Combinatorial Optimization Research Center

3rd COLUMBIA OPTIMIZATION DAY

Monday, November 3, 2003

9:00 AM, Schermerhorn 501



ABSTRACTS





       David Applegate
       AT&T

       Optimization of Demand Oblivious Routing

   Optimizing the routing of traffic through a network traditionally starts with an estimate of the traffic demands and knowledge of the network topology. However, traffic demands are only approximately known: they are difficult to measure, and they are dynamic on several different time scales. Demand oblivious routing provides a worst-case approach to traffic engineering in the presence of uncertainty in the traffic demands.

I will describe demand oblivious routing, present a linear programming approach to computing optimal demand oblivious routings, and report computational results on several real-world network topologies.
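
To make the objective concrete: a routing $f$ specifies, for every origin-destination pair, how a unit of flow between that pair is split across paths. Writing cost$(f,D)$ for the cost (e.g., maximum link utilization) of carrying the demand matrix $D$ with $f$, and OPT$(D)$ for the cost of the best routing tailored to $D$, a standard way to state the oblivious objective (a sketch; the talk's exact formulation may differ) is
$$\min_f \; \max_{D \ge 0} \; \frac{\mbox{cost}(f,D)}{\mbox{OPT}(D)}.$$
Although the inner maximization ranges over infinitely many demand matrices, it can be dualized, so an optimal oblivious routing is computable by a single polynomial-size linear program.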




       Egon Balas
       CMU

       The Vertex Separator Problem

   (Lecture based on joint work with Cid Carvalho de Souza, University of Campinas, Brazil.) The vertex separator (VS) problem in a graph G = (V,E) asks for a partition of V into nonempty subsets A, B, C, such that there is no edge between A and B, and |C| is minimized subject to a bound on max{|A|,|B|}. We give a mixed integer programming formulation of the problem and investigate the vertex separator polytope (VSP), the convex hull of incidence vectors of vertex separators.
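
As an illustration, one natural mixed 0-1 model along these lines (a sketch, not necessarily the exact formulation of the talk) uses binary variables $u_v, w_v$ with $u_v = 1$ iff $v \in A$ and $w_v = 1$ iff $v \in B$. With $b$ the bound on max{|A|,|B|}, minimizing |C| = |V| - |A| - |B| becomes
$$\max \sum_{v \in V} (u_v + w_v) \;\;\mbox{s.t.}\;\; u_v + w_v \le 1 \;\forall v \in V; \quad u_i + w_j \le 1,\; u_j + w_i \le 1 \;\forall (i,j) \in E; \quad 1 \le \sum_v u_v \le b; \quad 1 \le \sum_v w_v \le b.$$
The edge constraints forbid edges between A and B, and the lower bounds keep A and B nonempty.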

Central to our investigation is the relationship between separators and dominators. Several classes of valid inequalities are investigated, along with the conditions under which they are facet defining for the VSP. Some of our proofs combine projection with lifting in new ways. A branch-and-cut algorithm for the VS problem based on the inequalities discussed here was implemented and tested on a wide variety of VS problem instances drawn from the literature and inspired by various applications. Our computational experience sheds new light on the role of cut density as distinct from cut strength.




       Ravi Kannan
       Yale

       Sampling on the Fly from Massive Data

   We consider problems where the input is too large to be stored in RAM and must be read from external memory. Sampling is a natural method in this context. For many graph problems, such as the maximum cut problem, and combinatorial problems, such as the maximum 3-satisfiability problem, it has recently been shown that the induced sub-problem on a small random subset of the vertices or variables yields an estimate of the answer to the whole problem. Here, uniform sampling suffices.
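
As a toy illustration of the uniform-sampling idea (a sketch; the guarantees in question are additive in $n^2$ and hold for dense graphs), one can estimate the maximum cut by solving the induced subproblem on a small random vertex subset exactly and rescaling:

    import random

    def maxcut_on(adj, nodes):
        # exact max cut of the induced subgraph, by brute force over bipartitions
        best, k = 0, len(nodes)
        for mask in range(1 << (k - 1)):          # fix the last vertex's side
            side = [(mask >> i) & 1 for i in range(k - 1)] + [0]
            cut = sum(adj[nodes[a]][nodes[b]]
                      for a in range(k) for b in range(a + 1, k)
                      if side[a] != side[b])
            best = max(best, cut)
        return best

    def sampled_maxcut_density(adj, k, rng=random.Random(0)):
        # maxcut(G[S]) / k^2 estimates maxcut(G) / n^2 for a uniform k-subset S
        sample = rng.sample(range(len(adj)), k)
        return maxcut_on(adj, sample) / k**2

Multiplying the returned density by $n^2$ estimates the true maximum cut to within an additive $\epsilon n^2$ once $k$ is a suitable constant depending only on $\epsilon$, independent of $n$.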

There has been progress as well on using non-uniform sampling in the ``streaming'' model, which allows just one pass through the data; clustering and frequency estimation problems have been tackled in this way. We also consider problems where the data consists of a large matrix that can be stored in external memory and read a limited number of times. We show that a random subset of rows and columns, picked according to a certain probability distribution, can be used to derive a succinct approximation to the matrix; this approximation is sufficient to solve several problems.
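
One non-uniform scheme in this spirit is ``length-squared'' sampling, in which rows are picked with probability proportional to their squared norms and rescaled so that the sample is unbiased (a minimal sketch; the distribution used in the talk may differ):

    import numpy as np

    def length_squared_row_sample(A, s, rng=np.random.default_rng(0)):
        # pick s rows with probability proportional to their squared norms
        p = (A * A).sum(axis=1)
        p = p / p.sum()
        idx = rng.choice(A.shape[0], size=s, replace=True, p=p)
        # rescale so that E[S.T @ S] = A.T @ A
        return A[idx] / np.sqrt(s * p[idx])[:, None]

The small matrix returned fits in RAM, and its top singular vectors yield a succinct low-rank approximation that can stand in for the full matrix in several downstream problems.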




       Arkadi Nemirovski
Technion - Israel Institute of Technology

       A simple $O(1/t)$-converging method for extremely large-scale convex-concave games and eigenvalue optimization

   In all known polynomial time algorithms for nonlinear convex optimization, the computational effort per iteration grows nonlinearly with the design dimension $n$ of the problem (typically, as $n^3$). As a result, in the case of extremely large-scale convex programs (tens or hundreds of thousands of variables), a single iteration of a polynomial time algorithm can become too expensive to be practical. In these cases one is forced to use ``computationally cheap'' optimization techniques, whose per-iteration complexity is linear in $n$.

The bad news about ``cheap'' algorithms is that all known methods of this type possess a sublinear rate of convergence and thus cannot guarantee high-accuracy solutions. The good news is that with ``favourable geometry'' of a problem, properly chosen cheap algorithms demonstrate a {\sl dimension-independent} rate of convergence that is {\sl optimal in the large-scale case}, in a certain precise sense, which makes these methods well suited for finding moderate-accuracy solutions of really large-scale problems.

In this talk, we present a new simple method of this type aimed at solving problems of the form $\min\limits_{x\in X} \max\limits_{y\in Y} F(x,y)$, where $X$ and $Y$ are convex compact sets and $F$ is a convex-concave function with Lipschitz continuous gradient. When $X$ and $Y$ possess favourable geometry (e.g., are balls, simplices, or spectrahedra), the method exhibits a (nearly) dimension-independent $O(1/t)$ rate of convergence, optimal in the large-scale case.
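
For the special case of a matrix game $\min_{x} \max_{y} x^T A y$ over simplices, a prox-type method of this flavour can be sketched as follows (an illustration with entropic prox steps; the talk's method and step size rules may differ):

    import numpy as np

    def entropic_step(z, grad, eta):
        # mirror step on the simplex with the entropy prox-function
        w = z * np.exp(-eta * grad)
        return w / w.sum()

    def prox_method_game(A, iters=1000):
        n, m = A.shape
        x, y = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
        eta = 1.0 / (2.0 * np.abs(A).max())       # step size tied to the game's range
        x_avg, y_avg = np.zeros(n), np.zeros(m)
        for _ in range(iters):
            # extrapolation step at the current point
            xh = entropic_step(x, A @ y, eta)
            yh = entropic_step(y, -(A.T @ x), eta)
            # update using gradients taken at the extrapolated point
            x = entropic_step(x, A @ yh, eta)
            y = entropic_step(y, -(A.T @ xh), eta)
            x_avg += xh; y_avg += yh
        return x_avg / iters, y_avg / iters

Each iteration costs two matrix-vector products, i.e., is linear in the number of nonzeros of $A$, and the duality gap of the averaged iterates decays at the advertised $O(1/t)$ rate, with a constant depending on the dimensions only through logarithms.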

Our numerical examples include large-scale matrix games and computing, within absolute accuracy 1, the Lovász capacity of a graph (the largest graph we have processed has 1024 vertices and over 197,000 edges).




       James Orlin
       MIT

       The Complexity of Inverse and Partial Inverse Optimization Problems

   Given a combinatorial optimization problem and a feasible solution to it, the corresponding inverse optimization problem is to find a minimal adjustment of the cost function such that the given solution becomes optimal. A partial inverse optimization problem differs from an inverse optimization problem in that only part of an optimal solution is known.
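
In symbols (a generic statement; the norm and the precise variant depend on the application): given a feasible set $X$, a cost vector $c$, and a solution $x^0 \in X$, the inverse problem is
$$\min_d \; \|d - c\| \quad \mbox{subject to} \quad d^T x^0 \le d^T x \;\; \mbox{for all } x \in X,$$
i.e., find the cost vector nearest to $c$ under which $x^0$ is optimal. Under the $L_1$ or $L_\infty$ norm and a linear programming feasible set, LP duality turns this semi-infinite constraint into a polynomial-size linear program.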

Researchers have applied inverse optimization to geophysics (e.g., what seismological parameters are consistent with the observed behavior of an earthquake?), to transportation analysis (e.g., how can one adjust road tolls so as to lead to optimal traffic flows?), and to marketing (e.g., what is the best estimate of a person's utilities for product features, based on observations of their preferences for products?).

After formalizing the notion of an inverse problem and its variants, we present various methods for solving inverse problems efficiently and review the complexity of partial inverse problems. In general, if the original problem is solvable in polynomial time, then the inverse problem is also solvable in polynomial time. On the other hand, even for very simple network problems, such as the assignment problem, the shortest path problem, and the minimum cut problem, the partial inverse problem is NP-hard.




       Baruch Schieber
       IBM Research

       Approximating the Throughput in Resource Allocation and Real Time Scheduling

   We consider the problem of maximizing the throughput or minimizing the loss in resource allocation and real time scheduling. In this problem we have a set of activities competing for a reusable resource. Each activity utilizes a certain amount of the resource for the duration of its execution and frees it upon completion. The problem is to select a feasible subset of activities for execution, that is, a subset of activities such that the total amount of resource consumed by them simultaneously does not exceed the amount of resource available. The objective is to find a feasible subset so as to maximize the profit accrued or, in some cases, to minimize the profit lost due to unscheduled activities. Several problems can be cast in these terms, among them: (i) real-time scheduling of jobs on parallel machines; (ii) bandwidth allocation for sessions between two endpoints; (iii) general caching; (iv) dynamic storage allocation; (v) bandwidth allocation on optical line and ring topologies.

We consider several variants of this problem, all of which are NP-hard, and present efficient constant-factor approximation algorithms for computing the maximum throughput and minimum loss in these variants. Some of our algorithms are based on the local-ratio technique, sketched below on a simple special case.
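
To give the flavour of the technique on the simplest special case, one machine and one interval per activity (a toy sketch; the variants in the talk are more general): subtract the profit of the earliest-ending interval from every interval that overlaps it, recurse on the reduced profits, and then add that interval back if it still fits.

    def local_ratio_intervals(jobs):
        # jobs: list of (start, end, profit), intervals half-open [start, end)
        jobs = [j for j in jobs if j[2] > 0]              # drop non-positive profits
        if not jobs:
            return []
        s0, e0, p0 = min(jobs, key=lambda j: j[1])        # earliest-ending job
        overlaps = lambda s, e: s < e0 and e > s0
        # local-ratio decomposition: charge p0 to every job overlapping the chosen one
        reduced = [(s, e, p - p0) if overlaps(s, e) else (s, e, p)
                   for (s, e, p) in jobs]
        sol = local_ratio_intervals(reduced)              # solve the reduced instance
        # maximality step: add the chosen interval if it is disjoint from the rest
        if all(not overlaps(s, e) for (s, e) in sol):
            sol = [(s0, e0)] + sol
        return sol

Every interval that overlaps the earliest-ending one crosses its right endpoint, so any feasible schedule collects at most $p_0$ of the subtracted profit; this accounting is what yields the constant factors in the harder variants.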

Based on joint work with Amotz Bar-Noy, Reuven Bar-Yehuda, Don Coppersmith, Ari Freund, Sudipto Guha, Yoav Katz, Joseph (Seffi) Naor, Hadas Shachnai, and Maxim Sviridenko.