Are there interesting cases of algorithms that have been published with proven bounds, and where strictly better bounds were published later? Not better algorithms with better bounds, obviously that has happened! But a better analysis leading to a better bound on an existing algorithm.
I thought matrix multiplication was an example of this, but I've convinced myself otherwise (perhaps incorrectly!) after trying to better understand Coppersmith–Winograd and friends.
ho.history-overview
analysis-of-algorithms
Rob Simmons
Answers:
The union-find algorithm, which Tarjan [1] showed has complexity $n\alpha(n)$, where $\alpha(n)$ is the inverse Ackermann function, had been analyzed previously by several people. According to Wikipedia, it was invented by Galler and Fischer [2], but this appears to be incorrect, since they did not have all the components of the algorithm needed to make it run that fast.
Based on brief scans of the papers, it seems the algorithm was invented by Hopcroft and Ullman [3], who gave an (incorrect) $O(n)$ time bound. Fischer [4] then found the error in the proof and gave an $O(n \log\log n)$ time bound. Hopcroft and Ullman [5] then gave an $O(n \log^* n)$ time bound, after which Tarjan [1] found the (optimal) $O(n\alpha(n))$ bound.
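For concreteness, here is a minimal sketch (mine, not taken from any of the papers cited below) of the variant these bounds apply to: a disjoint-set forest using both union by rank and path compression, the two ingredients the near-linear analyses rely on.

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path compression,
    the combination the inverse-Ackermann analysis applies to."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: after locating the root, point every node
        # on the search path directly at it.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        # Union by rank: attach the shallower tree below the deeper one.
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```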
[1] R. E. Tarjan, "Efficiency of a good but not linear set union algorithm" (1975).
[2] B. S. Galler and M. J. Fischer, "An improved equivalence algorithm" (1964).
[3] J. E. Hopcroft and J. D. Ullman, "A linear list merging algorithm" (1971).
[4] M. J. Fischer, "Efficiency of equivalence algorithms" (1972).
[5] J. E. Hopcroft and J. D. Ullman, "Set merging algorithms" (1973).
The Logjam attack mentions that the analysis of the descent step of the general number field sieve (as applied to computing discrete logarithms over $\mathbb{F}_p$) was tightened; see the top left of the 3rd page. As this is the only step that cannot be pre-computed (if $\mathbb{F}_p$ is fixed), its efficiency was instrumental to their attack.
The initial analysis appears to be in Gordon 92, where descent was costed similarly to precomputation, at $L_p(1/3, 3^{2/3})$. The tighter analysis seems to be from Barbulescu's thesis, where descent is costed at $L_p(1/3, 1.232)$, where:
$$L_p(v,c) = \exp\left((c + o(1))(\log p)^{v}(\log\log p)^{1-v}\right)$$
It's worth mentioning that this is technically an expected cost, and not an upper bound. This still seems in the spirit of the question to me, but I'll remove it if it's not what you're looking for.
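To get a rough feel for what the tighter constant buys, here is a small Python sketch (my own back-of-the-envelope, with the $o(1)$ term simply dropped, so the numbers are only ballpark exponents) that evaluates the $L_p$ formula above at a 512-bit $p$, the export-grade prime size Logjam targeted.

```python
import math

def L(p_bits, v, c):
    # L_p(v, c) with the o(1) term dropped:
    # exp(c * (log p)^v * (log log p)^(1 - v)), natural logs throughout.
    log_p = p_bits * math.log(2)
    return math.exp(c * log_p ** v * math.log(log_p) ** (1 - v))

p_bits = 512                                # export-grade prime size targeted by Logjam
gordon = L(p_bits, 1 / 3, 3 ** (2 / 3))     # original descent cost estimate
tighter = L(p_bits, 1 / 3, 1.232)           # Barbulescu's tighter estimate
print(f"Gordon:     ~2^{math.log2(gordon):.0f}")
print(f"Barbulescu: ~2^{math.log2(tighter):.0f}")
```

The only point is that a smaller constant inside the exponent moves the descent cost by many orders of magnitude at a fixed $p$.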
Recent work of Anupam Gupta, Euiwoong Lee, and Jason Li [1] shows that the Karger-Stein algorithm for the minimum $k$-cut problem has, in fact, asymptotic time complexity $O(n^{k+o(1)})$, improving on the original analysis, which gave $O(n^{2k-2})$ (and on previous work by the same authors, which obtained a different algorithm running in time $O(n^{1.98k+O(1)})$).
This is likely to be (near-)optimal, based on a conditional lower bound of $\Omega(n^{k})$.
Note: A talk by Jason Li (and the corresponding slides) can be found on the TCS+ website.
[1] The Karger-Stein Algorithm is Optimal for $k$-cut, Anupam Gupta, Euiwoong Lee, Jason Li. arXiv:1911.09165, 2019.
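For readers unfamiliar with the algorithm, here is a minimal sketch (mine, and deliberately simplified) of the random-contraction primitive at its heart: contract uniformly random edges until only $k$ supernodes remain, repeat many times, and keep the best cut found. The actual Karger-Stein algorithm organizes these contractions recursively; the improved bound above comes from a sharper analysis of the success probability, not from a new procedure.

```python
import random

def contract_to_k(n, edges, k):
    # One run of random contraction: process the edges in a uniformly random
    # order, merging endpoints until only k supernodes remain, then report the
    # number of edges crossing the resulting k-way partition.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    groups = n
    for u, v in random.sample(edges, len(edges)):
        if groups == k:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            groups -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

def min_k_cut(n, edges, k, trials=10000):
    # Repeat the contraction and keep the smallest k-cut seen; the number of
    # repetitions needed is governed by the per-run success probability that
    # the improved analysis bounds.
    return min(contract_to_k(n, edges, k) for _ in range(trials))
```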
The work function algorithm for the $k$-server problem was shown to be $(2k-1)$-competitive by Koutsoupias and Papadimitriou; the algorithm was known previously and had been analyzed only in special cases. It is conjectured to be $k$-competitive.
The $3$-Hitting Set problem had a few iterations of "better analysis" (see Fernau's papers [1] [2]).
The algorithm before these papers made some arbitrary choices (like "choose an edge"...), but when the choices are made in a specifically prescribed way, a more refined analysis becomes possible; that is where the improvement comes from. And I think the appendices in [1] give a more refined analysis (counting subproblems/substructures) leading to better recurrences and thus better runtimes.
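As a baseline for what those prescribed choices improve on, here is a minimal sketch (mine, not Fernau's algorithm) of the naive depth-bounded branching for 3-Hitting Set: pick any set that is not yet hit and branch on which of its at most three elements joins the solution, giving an $O^*(3^k)$ bound. The refined algorithms get better branching bases by fixing which set to branch on and adding reduction rules.

```python
def hitting_set_3(sets, k):
    # Naive O*(3^k) branching for 3-Hitting Set: `sets` is a list of sets, each
    # of size at most 3. Returns a hitting set of size <= k, or None if none exists.
    if not sets:
        return set()                       # every set is already hit
    if k == 0:
        return None                        # budget exhausted but sets remain
    for elem in sets[0]:                   # some element of sets[0] must be picked
        remaining = [s for s in sets if elem not in s]
        sub = hitting_set_3(remaining, k - 1)
        if sub is not None:
            return sub | {elem}
    return None

# Example: hit {1,2,3}, {3,4,5}, {5,6,1} with at most 2 elements.
print(hitting_set_3([{1, 2, 3}, {3, 4, 5}, {5, 6, 1}], 2))
```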
I think there are a number of such examples in the parameterized complexity literature, where adding another variable to the analysis can lead to improved runtimes, but I have been out of that game for several years now and cannot think of specific ones at the moment. There are a number of papers in the FPT and PTAS areas that come up when searching for "improved analysis" in paper titles.
If specifying choices counts as the same algorithm (like union-find's depth/rank heuristic), then the Edmonds-Karp algorithm is an 'improved analysis' of Ford-Fulkerson, and I'd imagine there are plenty of other problems whose algorithms have seen runtime improvements from new choice rules.
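To illustrate, here is a minimal sketch (a generic textbook-style implementation, not tied to any particular paper) of that choice rule: Ford-Fulkerson where the augmenting path is always a shortest path found by BFS, which is the restriction behind the $O(VE^2)$ bound. The dict-of-dicts graph representation is just for illustration.

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    # Ford-Fulkerson with the specific choice rule of always augmenting along a
    # shortest (BFS) path. `capacity[u][v]` is the remaining capacity on (u, v).
    # Ensure reverse edges exist so residual capacities can be tracked in place.
    for u in list(capacity):
        for v in list(capacity[u]):
            capacity.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in capacity[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Find the bottleneck along the path, then push flow along it.
        bottleneck = float('inf')
        v = sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, capacity[parent[v]][v])
            v = parent[v]
        v = sink
        while parent[v] is not None:
            capacity[parent[v]][v] -= bottleneck
            capacity[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# Example: max flow from 's' to 't' in a tiny network (answer: 15).
cap = {'s': {'a': 10, 'b': 5}, 'a': {'b': 15, 't': 10}, 'b': {'t': 10}}
print(edmonds_karp(cap, 's', 't'))
```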
Then there is a whole collection of amortized analyses of existing algorithms (I think union-find fits this description; here is another one: https://link.springer.com/article/10.1007/s00453-004-1145-7).