Fix a regular language L.
- Input: n triples (a_i, l_i, r_i), where a_i ∈ Σ and 1 ≤ l_i ≤ r_i ≤ n are integers.
- Output: is there a bijection f : {1, …, n} → {1, …, n} such that l_i ≤ f(i) ≤ r_i for all i, and a_{f⁻¹(1)} ⋯ a_{f⁻¹(n)} ∈ L?
Obviously, this problem is in NP, by guessing a bijection f.
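For concreteness, here is a minimal Python sketch of that verification step (the function names and the representation of the certificate are illustrative; membership in the fixed language L is abstracted as a callable):

```python
def verify(triples, f, in_L):
    """Check a candidate bijection f for triples (a_i, l_i, r_i).

    triples: list of (letter, l, r) with 1-based interval bounds.
    f: list where f[i] is the position assigned to triple i (values are 1-based).
    in_L: callable deciding membership of a word in the fixed language L.
    """
    n = len(triples)
    # f must be a bijection onto {1, ..., n}
    if sorted(f) != list(range(1, n + 1)):
        return False
    # every triple must land inside its interval
    if any(not (l <= f[i] <= r) for i, (_, l, r) in enumerate(triples)):
        return False
    # read off the word a_{f^-1(1)} ... a_{f^-1(n)} and test membership in L
    word = [None] * n
    for i, (a, _, _) in enumerate(triples):
        word[f[i] - 1] = a
    return in_L("".join(word))
```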
Some initial observations:
- It seems that similar problems have been studied in scheduling: we could see the problem as scheduling unit-cost tasks on a single machine, respecting release dates and deadlines. However, that problem is clearly in PTIME with a greedy approach, and I see nothing in the scheduling literature for the case where the tasks are labeled and we would like to achieve a word in a target regular language.
- Another way to see the problem is as a special case of maximum bipartite matching (between letters and positions), but again it is hard to express the constraint that the resulting word must fall in L.
- In the specific case where L is a language of the form u* for some fixed word u (e.g., (ab)*), the letter scheduling problem for L is in PTIME with an easy greedy algorithm: build the word from left to right, and at each position place one of the available letters that is correct with respect to L and has the smallest r_i. (If no available letter is correct, fail.) See the sketch after this list. However, this does not generalize to arbitrary regular languages L, because for such languages we can choose which kind of letter to use.
- It seems that a dynamic programming algorithm should work, but in fact it is not so simple: it looks like it would need to remember which set of letters has been used so far. Indeed, when building a word from left to right, once you have reached a position i, your state depends on the letters you have consumed so far. You cannot remember the whole set, because then there would be exponentially many states. But it is not so easy to "summarize" it (e.g., how many copies of each letter were used), because to know which copies you used, it seems you need to remember when you consumed them (the later you consumed them, the more letters were available). Even with a language like (ab|ba)*, there can already be complicated constraints on when you should choose to take an ab and when you should choose to take a ba, depending on which letters you will need later and when letters become available.
- However, since the regular language L is fixed and cannot store that much information, I am having trouble finding an NP-hard problem to reduce from.
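Here is a minimal Python sketch of the greedy algorithm mentioned above for L = u* (the representation and function name are mine; correctness is only claimed in the sense of the observation, not proved here):

```python
import heapq

def greedy_schedule_ustar(u, triples):
    """Greedy scheduling for L = u*: triples are (letter, l, r), 1-based.

    At each position t = 1..n, the required letter is u[(t-1) % len(u)];
    among the released, not-yet-used triples carrying that letter, take the
    one with the smallest deadline r, and fail if none fits.
    Returns a list f with f[i] = position of triple i, or None on failure.
    """
    n = len(triples)
    by_release = {}                     # release time l -> indices released then
    for i, (_, l, _) in enumerate(triples):
        by_release.setdefault(l, []).append(i)

    available = {}                      # letter -> heap of (deadline r, index)
    f = [None] * n
    for t in range(1, n + 1):
        for i in by_release.get(t, []):
            a, _, r = triples[i]
            heapq.heappush(available.setdefault(a, []), (r, i))
        need = u[(t - 1) % len(u)]      # letter forced at position t by L = u*
        heap = available.get(need, [])
        if not heap:
            return None                 # no available letter is correct
        r, i = heapq.heappop(heap)
        if r < t:
            return None                 # earliest deadline already passed
        f[i] = t
    return f
```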
Answers:
The problem is NP-hard for L = A*, where A is the finite language containing the following words:
The reduction is from the Graph Orientation problem, which is known to be NP-hard (see https://link.springer.com/article/10.1007/s00454-017-9884-9 ). In this problem, we are given a 3-regular undirected graph in which each vertex is labeled "{1}" or "{0,3}". The goal is to orient the edges of the graph so that the outdegree of each vertex lies in the set labeling that vertex.
The reduction takes as input a Graph Orientation instance and outputs a list of triples. In this reduction, the triples we produce will always satisfy certain constraints. These constraints are listed below, and we will refer to a list of triples as valid if and only if it satisfies these constraints:
Note the following lemma, proved at the end of this post.
Lemma: for a valid list of triples, the characters x, y, and c must be placed exactly as indicated by the triples, and for any pair of triples (0,l,r) and (1,l,r), the two characters for that pair must be placed at indices l and r.
The idea of the reduction is then as follows.
We use pairs of triples (0,l,r) and (1,l,r) to represent edges. The edge runs between the endpoints at index l and at index r. Assuming we produce a valid list of triples, the characters of these two triples must be placed at l and r, so we can treat the order in which they are placed as indicating the direction of the edge. Here 1 is the "head" of the edge and 0 is the "tail". In other words, if the 1 is placed at r then the edge points from l to r, and if the 1 is placed at l then the edge points from r to l.
To represent vertices, we place an x or y character at an index and use the next three characters as the endpoints of the three edges which touch the vertex. Note that if we place an x, all three edges at the vertex must point in the same direction (all into the vertex or all out of the vertex), simply due to the strings that are in the finite language A. Such vertices have outdegree 0 or 3, so we place an x exactly for the vertices labeled {0,3}. If we place a y, exactly one of the three edges at the vertex must point out of the vertex, again due to the strings in A. Such vertices have outdegree 1, so we place a y exactly for the vertices labeled {1}.
In some sense, we are done. In particular, the correspondence between solving this instance and solving the Graph Orientation instance should be clear. Unfortunately, the list of triples we produce may not be valid, and so the "edges" described may not work as intended. In particular, the list of triples might not be valid because the condition that the intervals from the triples must always contain each other might not hold: the intervals from two edges may overlap without one containing the other.
To combat this, we add some more infrastructure. In particular, we add "crossover vertices". A crossover vertex is a vertex of degree 4 whose edges are paired such that within each pair one edge must point into the crossover vertex and one out. In other words, a crossover vertex will behave the same as just two "crossing" edges. We represent a crossover vertex by placing the character c at some index i. Then note that the language A constrains the characters at i−1 and i+2 to be opposite (one 0 and one 1) and the characters at i−2 and i+1 to be opposite. Thus, if we use these indices as the endpoints for the four edges at the crossover vertex, the behavior is exactly as described: the four edges come in pairs, and within every pair one points in and one points out.
How do we actually place these crossovers? Well, suppose we have two intervals (l,r) and (l′,r′) which overlap. WLOG, l < l′ < r < r′. We add the crossover character into the middle (between l′ and r). (Let's say that all along we spaced everything out so far that there's always enough space, and at the end we will remove any unused space.) Let the index of the crossover character be i. Then we replace the four triples (0,l,r), (1,l,r), (0,l′,r′), and (1,l′,r′) with eight triples, two each (one with character 0 and one with character 1) for the following four intervals: (l,i−1), (i+2,r), (l′,i−2), (i+1,r′). Notice that the intervals don't overlap in the bad way anymore! (After this change, if two intervals overlap, one is strictly inside the other.) Furthermore, the edge from l to r is replaced by an edge from l to the crossover vertex followed by an edge from there to r; these two edges are paired at the crossover vertex in such a way that one points in and one points out; in other words, the two edges together behave just like the one edge they are replacing.
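As a minimal Python sketch of this replacement step (the triple representation, the chosen crossover index i, and the spacing assumption are illustrative):

```python
def insert_crossover(triples, e1, e2, i):
    """Uncross two overlapping edge intervals by inserting a crossover vertex.

    e1 = (l, r) and e2 = (l2, r2) overlap with l < l2 < r < r2, and i is a
    free index between l2 and r (assuming everything was spaced out enough
    that l2 < i-2 and i+2 < r).  triples is a list of (character, left, right).
    """
    (l, r), (l2, r2) = e1, e2
    assert l < l2 < i - 2 and i + 2 < r < r2
    # drop the two old edges (four triples)
    old = {(c, a, b) for c in ("0", "1") for (a, b) in (e1, e2)}
    new_triples = [t for t in triples if t not in old]
    # the crossover character itself, pinned at index i
    new_triples.append(("c", i, i))
    # each old edge becomes two edges meeting at the crossover vertex
    for (a, b) in [(l, i - 1), (i + 2, r), (l2, i - 2), (i + 1, r2)]:
        new_triples.append(("0", a, b))
        new_triples.append(("1", a, b))
    return new_triples
```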
In some sense, putting in this crossover vertex "uncrossed" two edges (whose intervals were overlapping). It is easy to see that adding the crossover vertex can't cause any additional edges to become crossed. Thus, we can uncross every pair of crossing edges by inserting enough crossover vertices. The end result still corresponds to the Graph Orientation instance, but now the list of triples is valid (the properties are all easy to verify now that we have "uncrossed" any crossing edges), so the lemma applies, the edges must behave as described, and the correspondence is actually an equivalence. In other words, this reduction is correct.
proof of lemma
Lemma: for a valid list of triples, the characters x, y, and c must be placed exactly as indicated by the triples, and for any pair of triples (0,l,r) and (1,l,r), the two characters for that pair must be placed at indices l and r.
proof:
We proceed by induction on the triples by interval length. In particular, our statement is the following: for any k, if some triple has interval length k, then the character in that triple must be placed as described in the lemma.
Base case: for k=1, the triple must place a character x, y, or c at the single index inside its interval. This is exactly as described in the lemma.
Inductive case: assume the statement holds for any k less than some k′. Now consider some triple with interval length k′ > 1. Then that triple must be of the form (i,l,r) with r = l + k′ − 1 and i ∈ {0,1}. The triple (1−i,l,r) must also be present. The number of triples (α′,l′,r′) with l ≤ l′ ≤ r′ ≤ r is exactly r − l + 1 = k′. These triples include the triples (0,l,r) and (1,l,r), but also k′ − 2 other triples of the form (α′,l′,r′) with l < l′ ≤ r′ < r. These other triples all have interval length smaller than k′, so they all must place their characters as specified in the lemma. The only way for this to occur is if these triples place characters in every index starting at index l+1 and ending at index r−1. Thus, our two triples (0,l,r) and (1,l,r) must place their characters at indices l and r, as described in the lemma, concluding the inductive case.
By induction, the lemma is correct.
@MikhailRudoy was the first to show NP-hardness, but Louis and I had a different idea, which I thought I could outline here since it works somewhat differently. We reduce directly from CNF-SAT, the Boolean satisfiability problem for CNFs. In exchange for this, the regular language L that we use is more complicated.
The key to show hardness is to design a language L′ that allows us to guess a word and repeat it multiple times. Specifically, for any number k of variables and number m of clauses, we will build intervals that ensure that all words w of L′ that we can form must start with an arbitrary word u of length k on the alphabet {0,1} (intuitively encoding a guess of the valuation of the variables), and then this word u is repeated m times (which we will later use to test that each clause is satisfied by the guessed valuation).
To achieve this, we will fix the alphabet A = {0, 1, #, 0′, 1′} and the language L′ := (0|1)* (#(00′|11′)*)* #(0|1)*. The formal claim is a bit more complicated:
Claim: For any numbers k, m ∈ ℕ, we can build in PTIME a set of intervals such that the words in L′ that can be formed with these intervals are precisely:

{ u (#(ũ ⋈ ũ′) #(u ⋈ u′))^m # ũ | u ∈ {0,1}^k }

where ũ denotes the result of reversing the order of u and swapping 0's and 1's, where u′ denotes the result of adding a prime to all letters of u, and where x ⋈ y, for two words x and y of length p, is the word of length 2p formed by alternately taking one letter from x and one letter from y.
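To make the notation concrete, here is a small Python sketch of these three operations and of the word from the claim (primed letters are encoded with a trailing apostrophe; these encoding choices are mine, not from the answer):

```python
def tilde(u):
    """Reverse u and swap 0's and 1's."""
    return "".join("1" if c == "0" else "0" for c in reversed(u))

def prime(u):
    """Add a prime to every letter of u (encoded as a trailing apostrophe).
    Returned as a list so that each primed letter still counts as one letter."""
    return ["{}'".format(c) for c in u]

def bowtie(x, y):
    """Interleave two equal-length words, alternating letters of x and y."""
    assert len(x) == len(y)
    return "".join(a + b for a, b in zip(x, y))

def target_word(u, m):
    """The word u (#(~u bowtie ~u') #(u bowtie u'))^m # ~u from the claim."""
    t = tilde(u)
    block = "#" + bowtie(t, prime(t)) + "#" + bowtie(u, prime(u))
    return u + block * m + "#" + t
```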
Here's an intuitive explanation of the construction that we use to prove this. We start with intervals that encode the initial guess of u. Here is the gadget for n=4 (left), and a possible solution (right):
It's easy to show the following observation (ignoring L′ for now): the possible words that we can form with these intervals are exactly u#ũ for u ∈ {0,1}^k. This is shown essentially like the Lemma in @MikhailRudoy's answer, by induction from the shortest intervals to the longest ones: the center position must contain #, the two neighboring positions must contain one 0 and one 1, etc.
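Since the figure is not reproduced here, the following Python sketch is only one plausible reading of the gadget, consistent with the observation above (the exact layout in the original picture is an assumption):

```python
def guess_gadget(k):
    """A guessing gadget over positions 1..2k+1 (layout assumed, not taken
    from the original figure): a unit interval forcing '#' at the center
    position k+1, and for each j = 1..k a nested pair of triples
    (0, j, 2k+2-j) and (1, j, 2k+2-j).  By the nesting argument, each pair's
    characters land at the two endpoints, so the formable words are u # ~u."""
    triples = [("#", k + 1, k + 1)]
    for j in range(1, k + 1):
        triples.append(("0", j, 2 * k + 2 - j))
        triples.append(("1", j, 2 * k + 2 - j))
    return triples
```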
We have seen how to make a guess, now let's see how to duplicate it. For this, we will rely on L′, and add more intervals. Here's an illustration for k=3:
For now take L := (0|1)* (#(00′|11′)*)* #(0′|1′)*. Observe how, past the first #, we must alternately enumerate an unprimed and a primed letter. So, on the un-dashed triangle of intervals, our observation above still stands: even though it seems like these intervals have more space to the right of the first #, only one position out of two can be used. The same claim holds for the dashed intervals. Now, L further enforces that, when we enumerate an unprimed letter, the primed letter that follows must be the same. So it is easy to see that the possible words are exactly: u #(ũ ⋈ ũ′) # u′ for u ∈ {0,1}^k.
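As a quick sanity check under the same apostrophe encoding of primed letters as before, membership in this intermediate language can be tested with a regular expression; this snippet is only illustrative:

```python
import re

# the intermediate language (0|1)* (#(00'|11')*)* #(0'|1')*, with primed
# letters encoded by a trailing apostrophe
INTERMEDIATE = re.compile(r"[01]*(?:#(?:00'|11')*)*#(?:0'|1')*$")

# for u = 011 we have ~u = 001, so the word u #(~u bowtie ~u') # u' is:
assert INTERMEDIATE.match("011#00'00'11'#0'1'1'")
```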
Now, to show the claim, we simply repeat this construction m times. Here's an example for k=3 and m=2, using now the real definition of L′ above the statement of the claim:

As before, we could show (by induction on m) that the possible words are exactly the following: u (#(ũ ⋈ ũ′) #(u ⋈ u′))^2 # ũ for u ∈ {0,1}^k. So this construction achieves what was promised by the claim.
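Continuing the same encoding, the full word of the claim can be checked against L′ directly; the literal below is target_word("011", 2) from the helper sketched after the claim (again purely illustrative):

```python
import re

# L' = (0|1)* (#(00'|11')*)* #(0|1)*, primes encoded as a trailing apostrophe
L_PRIME = re.compile(r"[01]*(?:#(?:00'|11')*)*#[01]*$")

# k = 3, m = 2, u = 011, i.e. u (#(~u bowtie ~u') #(u bowtie u'))^2 # ~u
word = "011#00'00'11'#00'11'11'#00'00'11'#00'11'11'#001"
assert L_PRIME.match(word)
```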
Thanks to the claim, we know that we can encode a guess of a valuation for the variables, and repeat the valuation multiple times. The only missing thing is to explain how to check that the valuation satisfies the formula. We will do this by checking one clause per occurrence of u. To do this, we observe that without loss of generality we can assume that each letter of the word is annotated by some symbol provided as input. (More formally: we could assume that in the problem we also provide as input a word w of length n, and we ask whether the intervals can form a word u such that w ⋈ u is in L.)

The reason why we can assume this is that we can double the size of each interval, and add unit intervals (at the bottom of the picture) at odd positions to carry the annotation of the corresponding even position:
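A minimal Python sketch of this transformation (the representation is an assumption; w is the annotation word provided as input):

```python
def add_annotations(triples, w):
    """Double every interval so original letters end up on even positions,
    and pin annotation w[j-1] on odd position 2j-1 with a unit interval.

    triples: list of (letter, l, r) over positions 1..n (1-based).
    w: annotation word of length n (one annotation per original position).
    The unit intervals occupy all odd positions, so the doubled triples are
    forced onto even positions, interleaving w with the formed word.
    """
    n = len(w)
    doubled = [(a, 2 * l - 1, 2 * r) for (a, l, r) in triples]
    annotations = [(w[j - 1], 2 * j - 1, 2 * j - 1) for j in range(1, n + 1)]
    return doubled + annotations
```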
Thanks to this observation, to check clauses, we will define our regular language L to be the intersection of two languages (regular languages are closed under intersection, so L is indeed regular). The first language enforces that the sub-word on even positions is a word of L′, i.e., if we ignore the annotations then the word must be in L′; so we can just use the construction of the claim and add some annotations. The second language L″ will check that the clauses are satisfied. To do this, we add three letters to our alphabet, to be used as annotations: +, −, and ϵ. For each clause 1 ≤ i ≤ m, we add unit intervals to annotate by + the positions in the i-th repetition of u corresponding to variables occurring positively in clause i, and to annotate by − the positions corresponding to negatively occurring variables. We annotate everything else by ϵ. It is now clear that L″ can check that the guessed valuation satisfies the formula, by verifying that, between each pair of consecutive # symbols that contains an occurrence of u (i.e., one pair out of two), there is some literal that satisfies the clause, i.e., there must be one occurrence of the subword +1 or of the subword −0.
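Here is an illustrative Python sketch of the condition that L″ checks, on an annotated word as built above (the convention that the u-carrying blocks are every second #-delimited block is my reading of the claim, not spelled out in the answer):

```python
def check_clause_blocks(word):
    """Check the L'' condition on the annotated, interleaved word: in every
    block between consecutive '#' symbols that carries an occurrence of u
    (assumed to be blocks 2, 4, ... when splitting on '#'), there must be a
    satisfied literal, i.e. a subword '+1' or '-0'."""
    blocks = word.split("#")
    # blocks[0] is the initial guess u, blocks[-1] is the final ~u;
    # among the inner blocks, every second one carries u bowtie u' (assumption)
    u_blocks = blocks[2:-1:2]
    return all(("+1" in b) or ("-0" in b) for b in u_blocks)
```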
This concludes the reduction from CNF-SAT and shows NP-hardness of the letter scheduling problem for the language L.