Greed, for lack of a better word, is good. One of the first algorithmic paradigms taught in an introductory algorithms course is the greedy approach. The greedy approach yields simple and intuitive algorithms for many problems in P. More interestingly, for some NP-hard problems the obvious and natural greedy/local algorithm achieves a provably optimal approximation factor (under suitable complexity-theoretic assumptions). A classic example is the Set Cover problem, where a natural greedy algorithm gives an O(ln n) approximation factor, which is optimal unless P = NP.
Name some natural greedy/local algorithms for NP-hard problems that are provably optimal approximations under suitable complexity-theoretic assumptions.
Answers:
The method of conditional expectations (for derandomizing the "random assignment" algorithms for Max-Cut and Max-SAT) can be viewed as a greedy strategy: for i = 1, …, n, you choose the value of the variable x_i so that the expected number of constraints satisfied in the resulting reduced instance is at least the expected number of constraints satisfied in the current instance. (In fact, the greedy algorithm for 1/2-approximating Max-Cut is the same as the method-of-conditional-expectations algorithm for 1/2-approximating Max-Cut.)
Since the method also works for Max-E3-SAT and achieves a 7/8-approximation, this is an example of a greedy algorithm that is an optimal approximation unless P = NP (cf. Håstad's and Moshkovitz–Raz's inapproximability results for Max-E3-SAT).
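To make the Max-Cut case concrete, here is a minimal sketch in Python of the greedy 1/2-approximation obtained by derandomizing the random assignment (the function name `greedy_max_cut` is just illustrative): each vertex joins the side of the cut with fewer of its already-placed neighbours, so at least half of those edges get cut.

```python
def greedy_max_cut(n, edges):
    """Greedy 1/2-approximation for Max-Cut, i.e. the derandomized
    random-assignment algorithm (method of conditional expectations).

    n: number of vertices labelled 0..n-1; edges: list of (u, v) pairs.
    Each vertex joins the side with fewer already-placed neighbours, so
    every edge ends up cut with conditional probability at least 1/2 and
    the final cut contains at least |E|/2 edges.
    """
    side = {}
    for v in range(n):
        # sides of the neighbours of v that have already been assigned
        placed = [side[u] if w == v else side[w]
                  for u, w in edges
                  if (u == v and w in side) or (w == v and u in side)]
        # greedy / conditional-expectation choice: go opposite to the majority
        side[v] = 1 if placed.count(0) >= placed.count(1) else 0
    cut_size = sum(1 for u, w in edges if side[u] != side[w])
    return cut_size, side


# tiny example: a 4-cycle with one chord; 4 of the 5 edges end up cut
print(greedy_max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```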
Vertex Cover: the best constant-factor approximation algorithm involves (greedily) finding a maximal matching and taking all of the matched vertices as the approximate solution. This yields a 2-approximation, and no better constant-factor approximation is possible unless the Unique Games Conjecture is false (a short code sketch appears after this answer).
Subhash Khot and Oded Regev, Vertex cover might be hard to approximate to within 2 − ε, JCSS 74(3), 2008.
Off topic: I think this is a really cute approximation algorithm, especially since it is so trivial with the benefit of hindsight.
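For completeness, a minimal sketch of that maximal-matching heuristic (Python; the name `vertex_cover_2approx` is just illustrative):

```python
def vertex_cover_2approx(edges):
    """2-approximation for Vertex Cover via a greedily built maximal matching.

    edges: iterable of (u, v) pairs. Scan the edges; whenever an edge has
    both endpoints uncovered, add both endpoints to the cover. The chosen
    edges form a maximal matching, and any vertex cover must contain at
    least one endpoint of each matched edge, so |cover| <= 2 * OPT.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover


# example: the path 0-1-2-3; the optimum {1, 2} has size 2, we return 4 vertices
print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3)]))
```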
Maximum Acyclic Subgraph: a trivial 2-approximation algorithm is to pick an arbitrary ordering of the vertices and take either the forward edges or the backward edges, whichever set is larger.
Beating the 2-approximation is known to be Unique Games-hard (although it might not be NP-hard).
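A minimal sketch of this ordering trick (Python; the name `acyclic_subgraph_2approx` is illustrative), assuming, as above, that the problem is Maximum Acyclic Subgraph:

```python
def acyclic_subgraph_2approx(n, arcs):
    """Trivial 2-approximation: fix an arbitrary vertex ordering and keep the
    larger of the forward and backward arc sets.

    n: number of vertices labelled 0..n-1; arcs: list of directed (u, v).
    Both arc sets are acyclic (each respects the ordering or its reverse),
    and one of the two contains at least half of all arcs.
    """
    order = list(range(n))                      # any ordering works; take 0..n-1
    pos = {v: i for i, v in enumerate(order)}
    forward = [(u, v) for u, v in arcs if pos[u] < pos[v]]
    backward = [(u, v) for u, v in arcs if pos[u] > pos[v]]
    return forward if len(forward) >= len(backward) else backward


# example: a directed triangle; the algorithm keeps 2 of its 3 arcs
print(acyclic_subgraph_2approx(3, [(0, 1), (1, 2), (2, 0)]))
```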
Submodular maximization subject to a cardinality constraint has a (1 − 1/e) greedy approximation. The algorithm is due to Nemhauser, Wolsey, and Fisher. NP-hardness follows from the NP-hardness of Set Cover, since max-coverage is a special case of submodular maximization (a sketch of the greedy appears below).
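A minimal sketch of that cardinality-constrained greedy (Python; `greedy_submodular` and the coverage oracle are illustrative names, not from the paper): at each step, add the element with the largest marginal gain.

```python
def greedy_submodular(ground_set, f, k):
    """Greedy (1 - 1/e)-approximation for maximizing a monotone submodular
    set function f under the cardinality constraint |S| <= k.

    ground_set: candidate elements; f: callable mapping a set to its value;
    k: budget. At every step, add the element whose marginal gain
    f(S + e) - f(S) is largest.
    """
    chosen = set()
    for _ in range(k):
        gains = {e: f(chosen | {e}) - f(chosen)
                 for e in ground_set if e not in chosen}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] <= 0:          # no remaining element improves the value
            break
        chosen.add(best)
    return chosen


# example with a coverage function (max-coverage is a submodular special case)
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
coverage = lambda S: len(set().union(*(sets[i] for i in S)) if S else set())
print(greedy_submodular(sets.keys(), coverage, 2))   # picks {'a', 'c'}
```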
Greedy gives a (1-1/e) approximation to Max-k-cover, and this cannot be improved unless P=NP.
Finding a minimum-degree spanning tree. It is NP-hard, since finding a Hamiltonian path is a special case. A local search algorithm finds a spanning tree whose maximum degree is within an additive constant 1 of the optimum.
Reference
Martin Fürer and Balaji Raghavachari, Approximating the minimum-degree Steiner tree to within one of optimal
The k-center problem is a clustering problem in which we are given a complete undirected graph G = (V, E) with a distance d_ij ≥ 0 between each pair of vertices i, j ∈ V. The distances obey the triangle inequality and model similarity. We are also given an integer k.
In this problem, we have to find k clusters that group the most similar vertices together. We choose a set S ⊆ V, |S| = k, of k cluster centers. Each vertex assigns itself to the closest cluster center, which groups the vertices into k different clusters. The objective is to minimize the maximum distance of a vertex to its cluster center. Geometrically, we want to find the centers of k balls of the same radius r that cover all points, with r as small as possible.
The optimal algorithm is greedy, and also very simple and intuitive. We first pick a vertex i ∈ V arbitrarily and put it in our set S of cluster centers. We then pick the next cluster center to be as far away as possible from all the current cluster centers. While |S| < k, we repeatedly find a vertex j ∈ V for which the distance d(j, S) is maximized and add it to S. Once |S| = k we are done (a code sketch follows below).
The described algorithm is a 2-approximation algorithm for the k-center problem. In fact, if there exists a ρ-approximation algorithm for the problem with ρ < 2, then P = NP. This can be shown easily by a reduction from the NP-complete Dominating Set problem: there is a dominating set of size at most k iff an instance of the k-center problem in which all distances are either 1 or 2 has optimal value 1. The algorithm and analysis are given by Gonzalez, Clustering to minimize the maximum intercluster distance, 1985. Another 2-approximation is given by Hochbaum and Shmoys, A best possible heuristic for the k-center problem, 1985.
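A minimal sketch of that farthest-point greedy (Python; the name `greedy_k_center` is illustrative), using Euclidean points as a stand-in for the metric:

```python
import math


def greedy_k_center(points, k, dist=math.dist):
    """Greedy 2-approximation for k-center (farthest-point clustering).

    points: list of points in a metric space (2-D tuples here);
    k: number of centers; dist: the metric (Euclidean by default).
    Start from an arbitrary point, then repeatedly add the point that is
    farthest from the centers chosen so far.
    """
    centers = [points[0]]                          # arbitrary first center
    d = [dist(p, centers[0]) for p in points]      # distance to nearest center
    while len(centers) < k:
        i = max(range(len(points)), key=lambda j: d[j])   # farthest point
        centers.append(points[i])
        d = [min(d[j], dist(points[j], points[i])) for j in range(len(points))]
    return centers, max(d)                         # centers and covering radius


pts = [(0, 0), (1, 0), (10, 0), (11, 1), (5, 5)]
print(greedy_k_center(pts, 2))   # two centers and the resulting radius
```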
Graham's list scheduling procedure is optimal for precedence-constrained scheduling on identical machines (P|prec|Cmax), unless a new variant of the Unique Games Conjecture due to Bansal and Khot is false (a sketch of the procedure follows the reference below).
Ola Svensson, Conditional Hardness of Precedence Constrained Scheduling on Identical Machines
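A minimal sketch of list scheduling under precedence constraints (Python; the job/predecessor representation here is my own, not from the paper): whenever a machine becomes idle, it starts any not-yet-scheduled job whose predecessors have all completed.

```python
import heapq


def list_schedule(proc_time, preds, num_machines):
    """Greedy list scheduling for P|prec|Cmax (sketch).

    proc_time: dict job -> processing time; preds: dict job -> iterable of
    predecessor jobs; num_machines: number of identical machines.
    Whenever a machine becomes idle, it starts the first job in the list
    whose predecessors have all finished; if none is available, it waits
    for the next job completion. Returns the makespan and job start times.
    """
    start, finish = {}, {}
    machines = [(0, m) for m in range(num_machines)]   # (time machine is free, id)
    heapq.heapify(machines)
    remaining = list(proc_time)                        # the (arbitrary) priority list

    while remaining:
        free_at, m = heapq.heappop(machines)
        ready = [j for j in remaining
                 if all(p in finish and finish[p] <= free_at
                        for p in preds.get(j, ()))]
        if ready:
            j = ready[0]                               # greedy: first ready job
            remaining.remove(j)
            start[j], finish[j] = free_at, free_at + proc_time[j]
            heapq.heappush(machines, (finish[j], m))
        else:
            # nothing can start yet: idle this machine until some running job ends
            running = [t for t in finish.values() if t > free_at]
            if not running:
                raise ValueError("cyclic precedence constraints")
            heapq.heappush(machines, (min(running), m))

    return max(finish.values()), start


# example: chain a -> b plus two independent jobs on 2 machines; makespan 5
proc = {"a": 2, "b": 3, "c": 2, "d": 1}
preds = {"b": ["a"]}
print(list_schedule(proc, preds, 2))
```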
Maybe this would also interest you (adapted from Methods to translate global constraints to local constraints)
Since greedy methods (more correctly, local methods) employ only local information to achieve global optimisation, if ways are found to transform global conditions into conditions that can be checked using only local information, this provides a (globally) optimal solution to problems using only greedy/local techniques.
References:
There are a couple of references which tackle the problem of translating global evaluation functions (or constraints) into local ones (using local information) and their consistency (i.e., convergence to the same global optimum):
Abstract (from 1. above)
Specifically, the paper addresses methods to determine whether a local evaluation function (LEF) is consistent with a global evaluation function (GEF) and methods to construct consistent LEFs from given GEFs (Consistency theorem).
Excerpt from Conclusion section (from 1. above)