Optimal greedy algorithms for NP-hard problems

Greed, for lack of a better word, is good. One of the first algorithmic paradigms taught in an introductory algorithms course is the greedy approach. The greedy approach yields simple and intuitive algorithms for many problems in P. More interestingly, for some NP-hard problems the obvious and natural greedy/local algorithm achieves a (provably) optimal approximation factor (under suitable complexity-theoretic assumptions). A classic example is the Set Cover problem: a natural greedy algorithm gives an O(ln n) approximation factor, which is optimal unless P = NP.
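For concreteness, here is a minimal Python sketch of that natural greedy rule (the function name and input format are illustrative assumptions):

```python
def greedy_set_cover(universe, sets):
    """Greedy Set Cover: repeatedly pick the set covering the most
    uncovered elements; this achieves an H_n = O(ln n) approximation."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Pick the set that covers the largest number of uncovered elements.
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("infeasible: some element appears in no set")
        cover.append(best)
        uncovered -= best
    return cover

# Example: the greedy rule covers {1..5} with two sets here.
print(greedy_set_cover({1, 2, 3, 4, 5},
                       [{1, 2, 3}, {2, 4}, {4, 5}, {3, 5}]))
```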

Name some natural greedy/local algorithms for NP-hard problems that are provably optimal under suitable complexity-theoretic assumptions.

Shiva Kintali
Suresh (or) Ryan, can you add a tag called "hardness-of-approximation" and tag this question? I can't add new tags with my current reputation :( Also, since long tags (> 20 characters) are not allowed, I guess it should be hardness-of-approx.
Shiva Kintali
Hi Shiva, I added the approx-hardness tag as you suggested, but I personally think hardness-of-approximation sounds better and should be possible, since it is shorter than approximation-algorithms.
Kaveh
Well-chosen first sentence. ;)
AlcubierreDrive

Answers:

The method of conditional expectations (for derandomizing the "random assignment" algorithms for Max-Cut and Max-SAT) can be viewed as a greedy strategy: for i = 1, …, n, you choose the value of a variable x_i such that the expected number of satisfied constraints in the resulting reduced instance exceeds the expected number of satisfied constraints in the current instance. (In fact, the greedy algorithm for 1/2-approximating Max-Cut is the same as the "method of conditional expectations" algorithm for 1/2-approximating Max-Cut.)

Since the method also works for Max-E3-SAT and achieves a 7/8-approximation, this is an example of a greedy algorithm that is an optimal approximation unless P = NP (cf. the inapproximability results of Håstad and of Moshkovitz-Raz for Max-E3-SAT).
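A minimal sketch of the corresponding greedy 1/2-approximation for Max-Cut (input format and names are illustrative assumptions):

```python
def greedy_max_cut(n, edges):
    """Derandomized 1/2-approximation for Max-Cut via the method of
    conditional expectations: fix vertices one at a time, each on the
    side that cuts at least half of its edges to placed neighbors."""
    side = {}
    for v in range(n):
        placed = [0, 0]  # placed neighbors of v on side 0 / side 1
        for a, b in edges:
            if a == v and b in side:
                placed[side[b]] += 1
            elif b == v and a in side:
                placed[side[a]] += 1
        # Placing v opposite the heavier side cuts >= half of these
        # edges, so the final cut contains at least |E|/2 edges.
        side[v] = 1 if placed[0] >= placed[1] else 0
    cut_size = sum(1 for a, b in edges if side[a] != side[b])
    return side, cut_size

# Example: a 4-cycle, where the optimum cuts all four edges.
print(greedy_max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```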

Ryan Williams

Vertex Cover: the best constant-factor approximation algorithm involves (greedily) finding a maximal matching and choosing all the matched vertices as the approximate solution. This yields a 2-approximate solution, and no better constant-factor approximation is possible unless the Unique Games Conjecture is false.
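A minimal sketch, assuming the graph is given as an edge list (scanning the edges in any order yields a maximal matching):

```python
def vertex_cover_2approx(edges):
    """2-approximation for Vertex Cover: greedily build a maximal
    matching and take both endpoints of every matched edge."""
    cover = set()
    for u, v in edges:
        # (u, v) extends the matching iff neither endpoint is matched yet.
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Example: on the path 0-1-2-3 this may return {0, 1, 2, 3},
# twice the optimum cover {1, 2}.
print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3)]))
```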

Subhash Khot and Oded Regev, Vertex cover might be hard to approximate to within 2 − ε, JCSS 74(3), 2008.

Off-topic: I think this is a really cute approximation algorithm, especially since it is so trivial with the benefit of hindsight.

gphilip
Is the maximal matching algorithm really greedy?
Suresh Venkat
Yes, since it makes a locally optimal choice at each step. The algorithm actually makes a locally /feasible/ choice, but since the edges are unweighted this is also an optimal choice.
gphilip

Given a directed graph, find an acyclic subgraph with the maximum number of edges.

Trivial 2-approximation algorithm: Pick an arbitrary ordering of the vertices and take the larger of the forward-edge and backward-edge sets.
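A minimal sketch of this trivial algorithm, assuming vertices labeled 0..n-1 (with the identity ordering as the arbitrary one) and no self-loops:

```python
def acyclic_subgraph_2approx(n, edges):
    """Keep the larger of the forward and backward edge sets under a
    fixed vertex ordering; each set is acyclic, and together they
    contain every edge, so the larger one has >= |E|/2 edges."""
    forward = [(a, b) for a, b in edges if a < b]
    backward = [(a, b) for a, b in edges if a > b]
    return forward if len(forward) >= len(backward) else backward

# Example: a directed 3-cycle; any acyclic subgraph keeps <= 2 edges.
print(acyclic_subgraph_2approx(3, [(0, 1), (1, 2), (2, 0)]))
```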

Beating the 2-approximation is known to be Unique-games hard (although it might not be NP-hard).

  • Beating the Random Ordering is Hard: Inapproximability of Maximum Acyclic Subgraph. Venkatesan Guruswami, Rajsekar Manokaran, and Prasad Raghavendra.
Jagadish

Submodular maximization subject to a cardinality constraint has a (1 − 1/e) greedy approximation. The algorithm is due to Nemhauser, Wolsey, and Fisher. NP-hardness follows from the NP-hardness of Set Cover, since max-coverage is a special case of submodular maximization.
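A minimal sketch of that greedy rule, assuming f is a monotone submodular function given as a Python callable (the coverage instance is an illustrative choice):

```python
def greedy_submodular(ground, f, k):
    """Nemhauser-Wolsey-Fisher greedy: for monotone submodular f under
    the cardinality constraint |S| <= k, repeatedly adding the element
    of largest marginal gain gives a (1 - 1/e)-approximation."""
    S = set()
    for _ in range(min(k, len(ground))):
        # Element with the largest marginal gain f(S + e) - f(S).
        best = max(ground - S, key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

# Coverage is monotone and submodular, so it serves as an example f.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}

def coverage(S):
    covered = set()
    for name in S:
        covered |= sets[name]
    return len(covered)

print(greedy_submodular(set(sets), coverage, 2))  # e.g. {'a', 'c'}
```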

Ashwinkumar B V
The Nemhauser-Wolsey-Fisher analysis of the greedy algorithm is only for the case of maximizing subject to a simple cardinality constraint. Greedy gives only a 1/2-approximation even for a simple partition matroid. The (1 − 1/e)-approximation for maximizing a submodular function subject to an arbitrary matroid is a recent result due to Vondrak and others (including myself). It relies on several tools and is not a greedy algorithm.
Chandra Chekuri
Of course, my mistake. Edited the answer to reflect the correction.
Ashwinkumar B V

Greedy gives a (1-1/e) approximation to Max-k-cover, and this cannot be improved unless P=NP.

Lev Reyzin
I think this is the same problem as in @AshwinkumarBV's answer, which I guess was posted while I was typing mine...
Lev Reyzin

The k-center problem is a clustering problem in which we are given a complete undirected graph G = (V, E) with a distance d(i, j) ≥ 0 between each pair of vertices i, j ∈ V. The distances obey the triangle inequality and model similarity. We are also given an integer k.

In the problem, we have to find k clusters grouping together the vertices that are most similar. We choose a set S ⊆ V, |S| = k, of k cluster centers. Each vertex assigns itself to the closest cluster center, grouping the vertices into k different clusters. The objective is to minimize the maximum distance of a vertex to its cluster center. So geometrically, we want to find the centers of k different balls of the same radius r that cover all points, with r as small as possible.

The optimal algorithm is greedy, and also very simple and intuitive. We first pick a vertex i ∈ V arbitrarily and put it in our set S of cluster centers. We then pick the next cluster center to be as far away as possible from all the current cluster centers. So while |S| < k, we repeatedly find a vertex j ∈ V for which the distance d(j, S) is maximized and add it to S. Once |S| = k, we are done.
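A minimal sketch of this farthest-point rule, assuming points in the plane with the Euclidean metric (the point set is illustrative):

```python
import math

def gonzalez_k_center(points, k, dist):
    """Greedy 2-approximation for k-center: start from an arbitrary
    center, then repeatedly add the point farthest from the current
    set of centers."""
    centers = [points[0]]  # arbitrary first center
    while len(centers) < k:
        # Find the point maximizing d(j, S) over the chosen centers S.
        farthest = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(farthest)
    radius = max(min(dist(p, c) for c in centers) for p in points)
    return centers, radius

pts = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 5)]
print(gonzalez_k_center(pts, 2, math.dist))
```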

The described algorithm is a 2-approximation algorithm for the k-center problem. In fact, if there exists a ρ-approximation algorithm for the problem with ρ < 2, then P = NP. This can be shown easily by a reduction from the NP-complete dominating set problem: a graph has a dominating set of size at most k iff the instance of the k-center problem in which all distances are either 1 or 2 has optimal value 1. The algorithm and analysis are given by Gonzalez, Clustering to minimize the maximum intercluster distance, 1985. Another variant of a 2-approximation is given by Hochbaum and Shmoys, A best possible heuristic for the k-center problem, 1985.

Juho

Maybe this would also interest you (adapted from Methods to translate global constraints to local constraints)

Since greedy methods (more correctly, local methods) employ only local information to achieve global optimisation, if ways can be found to transform global conditions into conditions that can be checked using only local information, then greedy/local techniques alone yield a (globally) optimal solution.

References:

  1. Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds (Journal of Machine Learning Research 4 (2003))
  2. Global Optimization Using Local Information with Applications to Flow Control, Y. Bartal
  3. Why Natural Gradient?, S. Amari and S. C. Douglas
  4. Local Optimization of Global Objectives: Competitive Distributed Deadlock Resolution and Resource Allocation, B. Awerbuch and Y. Azar
  5. Learning with Local and Global Consistency
  6. Constraint Satisfaction Problems Solvable by Local Consistency Methods

There are a couple of references which tackle the problem of translating global evaluation functions (or constraints) to local ones (using local information) and their consistency (i.e., convergence to the same global optimum):

  1. Local Evaluation Functions and Global Evaluation Functions for Computational Evolution, HAN Jing, 2003
  2. Emergence from Local Evaluation Function, Han Jing and Cai Qingsheng, 2002

Abstract (from 1. above)

This paper presents a new look at computational evolution from the aspect of the locality and globality of evaluation functions for solving classical combinatorial problems: the k-Coloring Problem (decision problem) and the Minimum Coloring Problem (optimization problem). We first review current algorithms and model the coloring problem as a multi-agent system. We then show that the essential difference between traditional algorithms (Local Search, such as Simulated Annealing) and distributed algorithms (such as the Alife&AER model) lies in the evaluation function: Simulated Annealing uses global information to evaluate the whole system state, which is called the Global Evaluation Function (GEF) method; the Alife&AER model uses local information to evaluate the state of a single agent, which is called the Local Evaluation Function (LEF) method. We compare the performance of LEF and GEF methods for solving the k-Coloring Problem and the Minimum Coloring Problem. The computer experimental results show that the LEF is comparable to GEF methods (Simulated Annealing and Greedy); in many problem instances the LEF beats the GEF methods. At the same time, we analyze the relationship between GEF and LEF: consistency and inconsistency. The Consistency Theorem shows that the Nash equilibria of an LEF are identical to the local optima of a GEF when the LEF is consistent with the GEF. This theorem partly explains why the LEF can lead the system to a global goal. Some rules for constructing a consistent LEF are proposed. In addition to consistency, we give a picture of the LEF and GEF and show their differences in exploration heuristics.

Specifically, the paper addresses methods to determine whether a local function (LEF) is consistent with a global function (GEF), and methods to construct consistent LEFs from given GEFs (the Consistency Theorem).

Excerpt from Conclusion section (from 1. above)

This paper is just the beginning of LEF&GEF studies. In addition to the research report above, there is still a lot of future work: more experiments on LEF methods; analytical study on LEF; sufficiency of local information for LEF; and the existence of a consistent GEF for any LEF; Is the consistency concept sufficient? Since Genetic Algorithms also have an evaluation function (fitness function), can we apply LEF&GEF to Genetic Algorithms? … It is our intention to study and attempt to answer all of these questions

Nikos M.