Efficient and simple randomized algorithms where determinism is difficult

33

I often hear that for many problems we know very elegant randomized algorithms, but no deterministic solutions, or only more complicated ones. However, I only know a few examples of this. Most prominently:

  • Randomized quickselect (and related geometric algorithms, e.g. for convex hulls)
  • Randomized mincut
  • Polynomial identity testing
  • Klee's measure problem

Among these, only polynomial identity testing seems to be genuinely hard without the use of randomness.

Do you know more examples of problems where a randomized solution is much more elegant or much more efficient than the deterministic ones? Ideally, the problems should be easy to motivate for laypeople (unlike, e.g., polynomial identity testing).

adrianN
10
Another example is primality testing. The Miller–Rabin and Solovay–Strassen probabilistic primality tests are very simple and efficient. It was a long-standing open problem to find an efficient deterministic primality test, which was resolved by Agrawal, Kayal, and Saxena. The AKS test is a deterministic polynomial-time primality test. However, it is neither as simple nor as efficient as the probabilistic tests.
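For reference, Miller–Rabin is short enough to sketch in a few lines (a standard textbook formulation, not code from the references above):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test.

    Returns False if n is definitely composite, and True if n is prime,
    with error probability at most 4**(-rounds) for composite inputs.
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness of compositeness
    return True
```

Note that the test never errs on prime inputs; the error probability applies only when n is composite.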
Yury
8
Randomized selection (median finding) is somewhat easier than deterministic. Randomized algorithms for approximately solving packing and covering LPs are faster (in the worst case) than their deterministic counterparts (KY07, GK95). Many online problems have randomized algorithms that are more competitive than any deterministic algorithm (FK91).
Neal Young
11
Computing the volume of a convex body in high dimensions admits a (1+ϵ)-approximation via randomization. It is known that no deterministic algorithm can give a good approximation, so randomization is essential here.
Chandra Chekuri
55
@ChandraChekuri that's a nice comment, and it would make an even better answer :)
Suresh Venkat
3
@ChandraChekuri in the oracle model; otherwise it would imply BPP ≠ P
Sasho Nikolov

Answers:

36

Sorting nuts and bolts

Rawlins suggested the following problem in 1992: suppose you are given a collection of n nuts and n bolts. Each bolt fits exactly one nut, and otherwise the nuts and bolts have distinct sizes. The sizes are too close to allow direct comparison between pairs of bolts or pairs of nuts. However, you can compare any nut with any bolt by trying to screw them together; in constant time you discover whether the bolt is too big, too small, or just right for the nut. Your task is to discover which bolt fits each nut, or equivalently, to sort the nuts and bolts by size.

A straightforward variant of randomized quicksort solves the problem in O(n log n) time with high probability. Pick a bolt at random; use it to partition the nuts; use the matching nut to partition the bolts; and recurse. However, finding a deterministic algorithm that even runs in o(n²) time is nontrivial. Deterministic O(n log n)-time algorithms were finally found in 1995 by Bradford and independently by Komlós, Ma, and Szemerédi. Under the hood, both algorithms use variants of the AKS parallel sorting network, so the hidden constant in the O(n log n) time bound is quite large; the hidden constant for the randomized algorithm is 4.
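The randomized quicksort variant can be sketched as follows (sizes are modeled as numbers for convenience, but note that the code only ever compares a nut against a bolt, never nut-nut or bolt-bolt, as the problem requires):

```python
import random

def match_nuts_and_bolts(nuts, bolts):
    """Match each nut to its bolt using only nut-vs-bolt comparisons.

    Returns a list of (nut, bolt) pairs. Expected running time is
    O(n log n), as in randomized quicksort.
    """
    if not nuts:
        return []
    pivot_bolt = random.choice(bolts)
    # Partition the nuts using the pivot bolt; this also locates the
    # nut that matches the pivot bolt.
    small_nuts = [n for n in nuts if n < pivot_bolt]
    big_nuts = [n for n in nuts if n > pivot_bolt]
    pivot_nut = next(n for n in nuts if n == pivot_bolt)
    # Partition the bolts using the matching nut.
    small_bolts = [b for b in bolts if b < pivot_nut]
    big_bolts = [b for b in bolts if b > pivot_nut]
    # Recurse on the two sides.
    return (match_nuts_and_bolts(small_nuts, small_bolts)
            + [(pivot_nut, pivot_bolt)]
            + match_nuts_and_bolts(big_nuts, big_bolts))
```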

  • Noga Alon, Manuel Blum, Amos Fiat, Sampath Kannan, Moni Naor, and Rafail Ostrovsky. Matching nuts and bolts. Proc. 5th Ann. ACM-SIAM Symp. Discrete Algorithms, 690–696, 1994.
  • Noga Alon, Phillip G. Bradford, and Rudolf Fleischer. Matching nuts and bolts faster. Inform. Proc. Lett. 59(3):123–127, 1996.
  • Phillip G. Bradford. Matching nuts and bolts optimally. Tech. Rep. MPI-I-95-1-025, Max-Planck-Institut für Informatik, 1995. http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-025
  • Phillip G. Bradford and Rudolf Fleischer. Matching nuts and bolts faster. Proc. 6th Int. Symp. Algorithms Comput., 402–408, 1995. Lecture Notes Comput. Sci. 1004.
  • János Komlós, Yuan Ma, and Endre Szemerédi. Matching nuts and bolts in O(n log n) time. SIAM J. Discrete Math. 11(3):347–372, 1998.
  • Gregory J. E. Rawlins. Compared To What? : An Introduction to The Analysis of Algorithms. Computer Science Press/W. H. Freeman, 1992.
Jeffε
2
This is a beautiful example, but it's an oracle problem. Is there some way to remove the oracle from it?
Peter Shor
Got a link to the 98 Szemeredi paper? How is this hard? In parallel compare each bolt to a unique nut and put each pair in sorted order, removing matched elements. In log(n) steps merge the sorted nut–bolt sequences, kicking out matches as they arise. EDIT: Yeah, the noncomparability of nut strings and bolt strings is annoying on the merge step.
Chad Brewbaker
@ChadBrewbaker Suppose in every pair but one, the bolt is smaller than the nut. (Yes, this is possible.) Now what does your algorithm do? In other words, "annoying" = "the whole problem".
Jeffε
I was looking for the Szemeredi paper and thinking out loud how it is hard. Yes, I concur that a merge based approach is nontrivial; but Vishkin's papers on parallel graph connectivity leave a gut feeling it is not impossible.
Chad Brewbaker
With each comparison from a nut and bolt you get a directed edge added to the graph or a match which removes both vertices. The goal would be to merge connected components in such a way that they collapse all matches between them with a linear amount of work and keep the storage size of edges in a connected component bounded to linear space.
Chad Brewbaker
17

Once you are not just talking about poly-time but rather look at the many models of computation we study, there are examples everywhere:

In Logspace: Undirected s-t connectivity (in RL since 1979, and in L only since 2005)

In NC: Finding a perfect matching in a bipartite graph in parallel (in RNC and still not known to be in NC)

In interactive proofs: deterministic ones give NP, while randomized ones can do PSPACE. Related: checking a proof deterministically requires looking at all the proof, while PCP proofs allow you to check only a constant number of bits.

In Algorithmic Mechanism Design: many randomized truthful approximation mechanisms with no deterministic counterpart.

In Communication complexity: the equality function requires linear communication deterministically but logarithmic (or, depending on exact model, constant) communication randomly.

In decision trees: evaluating an and-or tree requires linear queries deterministically but much less with randomization. This is essentially equivalent to alpha-beta pruning which gives a randomized sub-linear algorithm for game-tree evaluation.

In streaming models, distributed computing models: see previous answers.

Noam
12

Most streaming algorithms

In the streaming model of computation (AMS, book), an algorithm processes an online sequence of updates and is restricted to keep only sublinear space. At any point in time, the algorithm should be able to answer a query.

For many problems there exist sublinear-space randomized streaming algorithms, while provably no deterministic algorithm can solve the problem in sublinear space. This is related to gaps between randomized and deterministic communication complexity. A simple example is the distinct count problem: at each time step t the algorithm is given an integer i_t ∈ [n], and it should be able to approximate D_m = |{i_t : t = 1, …, m}|, i.e. the number of distinct integers seen up to time step m. It's relatively easy to show that any deterministic algorithm achieving constant approximation must use Ω(n) space (see e.g. lecture notes by Piotr Indyk). On the other hand, the clever sampling algorithm of Flajolet and Martin (simple analysis with limited randomness in the AMS paper linked above) achieves constant approximation in O(log n) bits. The latest work on the problem gives an optimal O(1/ϵ² + log n) algorithm that computes a 1 ± ϵ approximation.
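A minimal sketch of a Flajolet–Martin style estimator (simplified for illustration: the random modular hashing, averaging over independent hashes, and the 0.77351 bias-correction constant follow the standard textbook presentation, not the optimal algorithm cited above):

```python
import random

def fm_estimate(stream, num_hashes=64):
    """Flajolet-Martin style distinct-count estimate.

    For each random hash function, track the maximum number of trailing
    zero bits over all hashed stream items; 2^max is a crude estimate of
    the distinct count, and averaging over hashes reduces the variance.
    """
    p = (1 << 61) - 1  # a Mersenne prime for modular hashing
    hashes = [(random.randrange(1, p), random.randrange(p))
              for _ in range(num_hashes)]

    def trailing_zeros(x):
        return (x & -x).bit_length() - 1 if x else 61

    maxima = [0] * num_hashes
    for item in stream:
        for k, (a, b) in enumerate(hashes):
            h = (a * item + b) % p
            maxima[k] = max(maxima[k], trailing_zeros(h))
    # Average the exponents, then apply the standard FM bias correction.
    avg = sum(maxima) / num_hashes
    return 2 ** avg / 0.77351
```

A real streaming implementation would store only the num_hashes counters, i.e. O(log n) bits each, independent of the stream length.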

Sasho Nikolov
8

Finding a maximal independent set in a distributed network of n nodes with maximum degree Δ. There's a known lower bound [3] of Ω(min(log Δ, √(log n))) that holds for randomized and deterministic algorithms.

The following is a simple randomized distributed algorithm [1] that proceeds in synchronous rounds. (In a round, every node u can perform some local computation and send messages to its neighbors. These messages are guaranteed to be received before the start of the next round.)

  1. In every round, each active node u marks itself with probability 1/d_u, where d_u > 0 is the degree of u; if d_u = 0, u simply enters the independent set. (Initially, every node is active.)
  2. If u is the only marked node in its neighborhood, u enters the independent set, deactivates itself and notifies all of its neighbors to deactivate themselves. The degrees of the remaining active nodes are decreased accordingly, i.e., all edges to deactivated nodes are removed.
  3. Otherwise, if there is some neighboring node v that is also marked, the lower degree vertex unmarks itself and remains active.

It can be shown that this algorithm terminates in O(log n) rounds with high probability, by arguing that half of the remaining edges are deleted in every round. In contrast, the fastest known deterministic distributed algorithm [2] takes n^{O(1/√log n)} rounds and is considerably more complicated.
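A sequential simulation of the rounds above might look like this (a sketch; the distributed version runs each node's steps locally, and this uses the common degree-based tie-breaking for step 3, with node ids breaking exact ties):

```python
import random

def luby_mis(graph):
    """Simulate Luby's randomized maximal-independent-set algorithm.

    graph maps each node to the set of its neighbors. The loop mirrors
    the synchronous rounds; returns a maximal independent set.
    """
    active = set(graph)
    mis = set()
    while active:
        deg = {u: len(graph[u] & active) for u in active}
        marked = set()
        for u in active:
            if deg[u] == 0:
                mis.add(u)                   # isolated nodes join directly
            elif random.random() < 1.0 / deg[u]:
                marked.add(u)                # mark with probability 1/deg
        # A marked node wins if it beats every marked active neighbor
        # (compare by degree, break ties by node id).
        winners = {u for u in marked
                   if all((deg[v], v) < (deg[u], u)
                          for v in graph[u] & active if v in marked)}
        mis |= winners
        # Winners, their neighbors, and isolated nodes deactivate.
        removed = {u for u in active if deg[u] == 0}
        for u in winners:
            removed |= (graph[u] & active) | {u}
        active -= removed
    return mis
```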


[1] Michael Luby: A Simple Parallel Algorithm for the Maximal Independent Set Problem. SIAM J. Comput. 15(4): 1036-1053 (1986) http://dx.doi.org/10.1137/0215074

[2] Alessandro Panconesi, Aravind Srinivasan: On the Complexity of Distributed Network Decomposition. J. Algorithms 20(2): 356-374 (1996) http://dx.doi.org/10.1006/jagm.1996.0017

[3] Fabian Kuhn, Thomas Moscibroda, Roger Wattenhofer: Local Computation: Lower and Upper Bounds. CoRR abs/1011.5470 (2010) http://arxiv.org/abs/1011.5470

Peter
A recent algorithm (at PODC 2013), inspired by biological systems, achieves performance as good as Luby's by using a simple local feedback mechanism. arxiv.org/abs/1211.0235
András Salamon
6

Leader Election in an Anonymous Ring of Processes

Suppose that you have a ring network of processes that do not have ids and that communicate by message passing. Initially, every process is in the same state. You want to design a distributed algorithm such that eventually exactly 1 process enters the elected state and all other processes enter the non-elected state. This is the so called leader election problem which is one of the fundamental symmetry breaking tasks in a distributed system and has many applications.

There's a simple argument (e.g. [1]) that there is no deterministic leader election algorithm for an anonymous ring.

Model: We assume that the computation advances in synchronous rounds where, in each round, every process performs some local computation, sends messages to its neighbors in the ring, and receives messages from its neighbors.

For the sake of a contradiction, let's assume that there is such a deterministic leader election algorithm A. It is sufficient to show that, at the start of every round r ≥ 0, all processes are in the same state, since this implies that there cannot be exactly 1 process in the elected state. Since processes do not have ids and the network is symmetric, every process is in the same initial state, which provides the induction base.

For the induction step, consider some round r ≥ 0 and assume that every process is in the same state at the start of round r. Therefore, since algorithm A is deterministic, every process performs exactly the same computation and sends exactly the same messages during round r. This in turn implies that every process receives exactly the same messages during round r and, by the start of round r+1, is in the same state. Thus, no such algorithm A can exist.

If A is a randomized algorithm, on the other hand, and processes know the size of the ring n, there's an easy way to break symmetry: each process generates a random id from the range [1, n^4], which results in unique ids for all processes with high probability. A simple and naive algorithm then proceeds by letting every process send its id along the ring and instructing processes to forward only messages containing the largest id seen so far. This guarantees that only the process that generated the largest id will receive its own message back once it has traversed the entire ring, and it then elects itself as the leader.
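The naive algorithm can be simulated round by round as follows (a sketch; the synchronous message passing is modeled with per-position arrays):

```python
import random

def elect_leader(n):
    """Simulate randomized leader election on an anonymous ring.

    Each of the n processes draws a random id from [1, n^4] (unique with
    high probability) and sends it clockwise; a process forwards an
    incoming id only if it exceeds every id seen so far. The process
    whose own id survives a full trip elects itself. Returns the elected
    index, or None in the unlikely event of an id collision (retry).
    """
    ids = [random.randint(1, n ** 4) for _ in range(n)]
    if len(set(ids)) < n:
        return None                 # collision; rerun with fresh ids
    in_transit = list(ids)          # message held at each position
    best_seen = list(ids)           # largest id each process has seen
    for _ in range(n):              # n rounds suffice for a full trip
        nxt = [None] * n
        for i in range(n):
            msg = in_transit[i]
            if msg is None:
                continue
            j = (i + 1) % n         # clockwise neighbor receives msg
            if msg == ids[j]:
                return j            # own id came back: j is the leader
            if msg > best_seen[j]:
                best_seen[j] = msg
                nxt[j] = msg        # forward; smaller ids are dropped
        in_transit = nxt
    return None
```

Only the maximum id is never swallowed by a larger one, so exactly one process sees its own id return.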


[1] Dana Angluin: Local and Global Properties in Networks of Processors (Extended Abstract). STOC 1980: 82-93. http://doi.acm.org/10.1145/800141.804655

Peter
6

Majority problem in a query model.

Problem. We are given a set of n balls colored with two or more colors. The goal is to find a ball of the majority color (i.e., a color that occurs more than n/2 times), assuming such a color exists, using queries of the form "Do balls i and j have the same color?". The strategy must be oblivious, i.e., queries cannot depend on the results of previous queries.

Randomized Algorithm. Pick a random ball and check if more than n/2 balls have the same color. This algorithm runs in O(n) time in expectation.
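A sketch of this scheme, with the colors hidden behind an equality-query oracle (the same_color helper is illustrative; note this sequential version repeats on failure, which is where the "in expectation" comes from, since each random pick is a majority ball with probability greater than 1/2):

```python
import random

def find_majority_ball(colors):
    """Randomized majority-ball finder using only equality queries.

    colors[i] is the hidden color of ball i; the algorithm accesses it
    only through same_color(i, j). Assumes a majority color exists.
    Returns the index of a majority ball; expected query count is O(n).
    """
    n = len(colors)

    def same_color(i, j):
        return colors[i] == colors[j]

    while True:
        i = random.randrange(n)  # a majority ball with probability > 1/2
        count = 1 + sum(same_color(i, j) for j in range(n) if j != i)
        if count > n // 2:
            return i
```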

A deterministic O(n) algorithm is quite non-trivial and is a nice application of expanders.

F. R. K. Chung, R. L. Graham, J. Mao, and A. C. Yao, Oblivious and adaptive strategies for the Majority and Plurality problems, Proc. COCOON 2005, pp. 329–338.

Jagadish