Connection between stabiliser generators and parity check matrices in the Steane code


I'm working through Mike and Ike (Nielsen and Chuang) for self-study, and I'm reading about stabilizer codes in Chapter 10. I'm an electrical engineer with some background in classical information theory, but I am by no means an expert in algebraic coding theory. My abstract algebra is essentially just a bit more than what is in the appendix.

I think I fully understand the Calderbank-Shor-Steane construction, in which two classical linear codes are used to construct a quantum code. The Steane code is built using C1 (the code protecting against bit flips) as the [7,4,3] Hamming code, and C2 (the code protecting against phase flips) as the same code. The parity check matrix of the [7,4,3] code is:

[ 0 0 0 1 1 1 1 ]
[ 0 1 1 0 0 1 1 ]
[ 1 0 1 0 1 0 1 ].

The stabiliser generators for the Steane code can be written as:

Name | Operator
g1   | I I I X X X X
g2   | I X X I I X X
g3   | X I X I X I X
g4   | I I I Z Z Z Z
g5   | I Z Z I I Z Z
g6   | Z I Z I Z I Z
where, for the sake of my sanity, IIIXXXX = I⊗I⊗I⊗X⊗X⊗X⊗X, and so on.

It is noted in the book that the Xs and Zs are in the same positions as the 1s in the original parity check matrix. Exercise 10.32 asks the reader to verify that the codewords of the Steane code are stabilized by this set. Obviously I could just plug this in and check it by hand. However, it is claimed that with the observation about the similarities between the parity check matrix and the generators, the exercise is "self-evident".

I have seen this fact stated elsewhere ( http://www-bcf.usc.edu/~tbrun/Course/lecture21.pdf ), but I'm missing some (probably obvious) intuition. I think I'm missing some additional connection from the classical codewords to the quantum codes, other than how they are used to index basis elements in the construction of the code (i.e., Section 10.4.2).

Travis C Cuvelier

Answers:


There are a few conventions and intuitions at play here, which it might help to have spelled out:

  • Sign bits versus {0,1} bits

    The first step is to make what is sometimes called the "great notational shift", and to think of bits (even classical bits) as being encoded in signs. This is productive if what you care about most is the parity of bit strings, because bit flips and sign flips then act in essentially the same way. We map 0 ↦ +1 and 1 ↦ −1, so that, for instance, the bit sequence (0,0,1,0,1) would be represented by the sign sequence (+1,+1,−1,+1,−1).

    Parities of bit sequences then correspond to products of sign sequences. For example, just as we would recognise 0⊕0⊕1⊕0⊕1 = 0 as a parity computation, we can recognise (+1)·(+1)·(−1)·(+1)·(−1) = +1 as representing the same parity computation using the sign convention.

    Exercise. Compute the 'parity' of (−1,−1,+1,−1) and of (+1,−1,+1,+1). Are these the same?
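To make the correspondence concrete, here is a minimal Python sketch (the helper names `to_sign`, `xor_parity`, and `sign_parity` are my own, not from the answer) showing that XOR-parity of {0,1} bits and products of sign bits compute the same thing:

```python
def to_sign(bit):
    """Map bit 0 -> +1 and bit 1 -> -1 (the 'great notational shift')."""
    return 1 - 2 * bit

def xor_parity(bits):
    """Classical parity: XOR of all bits (0 = even, 1 = odd)."""
    p = 0
    for b in bits:
        p ^= b
    return p

def sign_parity(signs):
    """Sign-bit parity: product of all signs (+1 = even, -1 = odd)."""
    prod = 1
    for s in signs:
        prod *= s
    return prod

bits = [0, 0, 1, 0, 1]
signs = [to_sign(b) for b in bits]
assert signs == [1, 1, -1, 1, -1]
# Even parity in both conventions: XOR gives 0, product gives +1.
assert xor_parity(bits) == 0 and sign_parity(signs) == 1
```

The same helpers answer the exercise above: the two sign sequences each contain an odd number of −1s, so both have 'parity' −1.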

  • Parity checks using sign bits

    In the {0,1}-bit convention, parity checks have a nice representation as a dot product of two boolean vectors, so that we can carry out complicated parity computations as linear transformations. By shifting to sign bits, we have unavoidably lost that connection to linear algebra at a notational level, because we are taking products instead of sums. At the computational level, because this is just a change in notation, we don't really have to worry too much. But at a purely mathematical level, we now have to think a little harder about what we are doing with parity check matrices.

    When we use sign bits, we can still represent a 'parity check matrix' as a matrix of 0s and 1s, rather than of signs ±1. Why? One answer is that a row vector describing a parity check on bits is of a different type than the bit sequence itself: it describes a function on the data, not the data itself. The matrix of 0s and 1s now just requires a different interpretation: rather than linear coefficients in a sum, its entries correspond to exponents in a product. If we have sign bits (s1, s2, …, sn) ∈ {−1,+1}^n, and we want to compute a parity check given by a row vector (b1, b2, …, bn) ∈ {0,1}^n, the parity check is then computed by

    (s1)^{b1} · (s2)^{b2} ⋯ (sn)^{bn} ∈ {−1,+1},
    where recall that s^0 = 1 for all s. As with {0,1} bits, you can think of the row (b1, b2, …, bn) as simply representing a 'mask' which determines which bits sj make a non-trivial contribution to the parity computation.

    Exercise. Compute the result of the parity check (0,1,0,1,0,1,0) on (+1,−1,−1,−1,−1,+1,−1).
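A small Python sketch of this convention (the function name `parity_check` is my own, hypothetical choice): the {0,1} row vector acts as a mask of exponents, and since s^0 = 1, unmasked positions drop out of the product:

```python
def parity_check(mask, signs):
    """Compute prod_i signs[i] ** mask[i].

    mask  : list of 0/1 'exponents' (the parity-check row vector)
    signs : list of +1/-1 sign bits
    Positions where mask is 0 contribute s**0 == 1, i.e. nothing.
    """
    assert len(mask) == len(signs)
    result = 1
    for b, s in zip(mask, signs):
        result *= s ** b
    return result

# The exercise from the text: mask (0,1,0,1,0,1,0) on (+1,-1,-1,-1,-1,+1,-1).
# Only positions 2, 4, 6 contribute: (-1) * (-1) * (+1) = +1.
print(parity_check([0, 1, 0, 1, 0, 1, 0], [1, -1, -1, -1, -1, 1, -1]))  # → 1
```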

  • Eigenvalues as parities.

    The reason why we would want to encode bits in signs in quantum information theory comes from the way information is stored in quantum states, or more to the point, the way we can describe access to that information. Specifically, we can talk all we like about the standard basis, but the reason it is meaningful is that we can extract that information by measuring an observable.

    That observable could be the projector |1⟩⟨1|, where |0⟩ has eigenvalue 0 and |1⟩ has eigenvalue 1, but it is often useful to prefer to describe things in terms of the Pauli matrices. In that case, we would talk about the standard basis as the eigenbasis of the Z operator, in which case we have |0⟩ as the +1-eigenvector of Z and |1⟩ as the −1-eigenvector of Z.

    So: we have the appearance of sign bits (in this case, eigenvalues) representing the information stored in a qubit. And better still, we can do this in a way which is not specific to the standard basis: we can talk about the information stored in the 'conjugate' basis simply by considering whether the state is an eigenstate of X, and which eigenvalue it has. But more than this, we can talk about the eigenvalues of a multi-qubit Pauli operator as encoding parities of multiple bits: the tensor product Z⊗Z represents a way of accessing the product of the sign bits, that is the parity, of two qubits in the standard basis. In this sense, the eigenvalue of a state with respect to a multi-qubit Pauli operator, if that eigenvalue is defined at all (i.e., in the case that the state is an eigenvector of the Pauli operator), is in effect the outcome of a parity computation of information stored in some choice of basis for each of the qubits.

    Exercise. What is the parity of the state |11⟩ with respect to Z⊗Z? Does this state have a well-defined parity with respect to X⊗X?

    Exercise. What is the parity of the state |+⟩|−⟩ with respect to X⊗X? Does this state have a well-defined parity with respect to Z⊗Z?

    Exercise. What is the parity of |Φ⁺⟩ = (1/√2)(|00⟩ + |11⟩) with respect to Z⊗Z and X⊗X?
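These exercises can be checked numerically. The following NumPy sketch (my own illustration, not from the answer) verifies that |11⟩ has Z⊗Z-parity +1 but no well-defined X⊗X-parity, while |Φ⁺⟩ has a well-defined parity of +1 with respect to both operators:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
ZZ = np.kron(Z, Z)   # two-qubit Z(x)Z parity observable
XX = np.kron(X, X)   # two-qubit X(x)X parity observable

ket11 = np.array([0, 0, 0, 1.0])                    # |11>
phi_plus = np.array([1, 0, 0, 1.0]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)

# |11> has Z(x)Z-parity +1: the two -1 eigenvalues multiply to +1 ...
assert np.allclose(ZZ @ ket11, +1 * ket11)
# ... but no well-defined X(x)X-parity, as it is not an eigenstate of X(x)X:
assert not np.allclose(XX @ ket11, ket11) and not np.allclose(XX @ ket11, -ket11)
# |Phi+> is a +1-eigenstate of both Z(x)Z and X(x)X:
assert np.allclose(ZZ @ phi_plus, phi_plus)
assert np.allclose(XX @ phi_plus, phi_plus)
```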

  • Stabiliser generators as parity checks.

    We are now in a position to appreciate the role of stabiliser generators as being analogous to a parity check matrix. Consider the case of the 7-qubit CSS code, with generators

    Generator | Tensor factors
              | 1 2 3 4 5 6 7
    g1        |       X X X X
    g2        |   X X     X X
    g3        | X   X   X   X
    g4        |       Z Z Z Z
    g5        |   Z Z     Z Z
    g6        | Z   Z   Z   Z
    I've omitted the identity tensor factors above, as one might sometimes omit the 0s from a {0,1} matrix, and for the same reason: in a given stabiliser operator, the identity matrix corresponds to a tensor factor which is not included in the 'mask' of qubits for which we are computing the parity. For each generator, we are only interested in those tensor factors which are being acted on somehow, because those contribute to the parity outcome.

    Now, the 'codewords' (the encoded standard basis states) of the 7-qubit CSS code are given by

    |0_L⟩ ∝ |0000000⟩ + |0001111⟩ + |0110011⟩ + |0111100⟩ + |1010101⟩ + |1011010⟩ + |1100110⟩ + |1101001⟩ = Σ_{y∈C} |y⟩,
    |1_L⟩ ∝ |1111111⟩ + |1110000⟩ + |1001100⟩ + |1000011⟩ + |0101010⟩ + |0100101⟩ + |0011001⟩ + |0010110⟩ = Σ_{y∈C} |y ⊕ 1111111⟩,
    where C is the code generated by the bit-strings 0001111, 0110011, and 1010101. Notably, these bit-strings correspond to the positions of the X operators in the generators g1, g2, and g3. While those are stabilisers of the code (and represent parity checks as I've suggested above), we can also consider their action as operators which permute the standard basis. In particular, they will permute the elements of the code C, so that the terms involved in |0_L⟩ and |1_L⟩ will just be shuffled around.

    The generators g4, g5, and g6 above are all describing the parities of information encoded in standard basis states. The encoded basis states you are given are superpositions of codewords drawn from a linear code, and those codewords all have even parity with respect to the parity-check matrix from that code. As g4 through g6 just describe those same parity checks, it follows that the eigenvalue of the encoded basis states is +1 (corresponding to even parity).
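If you would like to confirm all of this numerically rather than by hand, here is a NumPy sketch (the helper names are my own) which builds the six generators from the rows of the Hamming parity check matrix, constructs |0_L⟩ as the uniform superposition over C, and checks that every generator fixes it:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

H_rows = ["0001111", "0110011", "1010101"]  # Hamming [7,4,3] parity checks

def pauli_string(P, mask):
    """Tensor product with P on positions where mask has '1', I elsewhere."""
    op = np.array([[1.0]])
    for c in mask:
        op = np.kron(op, P if c == "1" else I2)
    return op

# X-type generators g1..g3 and Z-type generators g4..g6, sharing the same masks.
gens = [pauli_string(X, m) for m in H_rows] + [pauli_string(Z, m) for m in H_rows]

# The classical code C: all mod-2 sums of the three rows, encoded as integers.
rows = [int(m, 2) for m in H_rows]
C = {r1 ^ r2 ^ r3 for r1 in (0, rows[0]) for r2 in (0, rows[1]) for r3 in (0, rows[2])}

# |0_L> as the (normalised) uniform superposition over the codewords of C.
ket0L = np.zeros(2 ** 7)
for y in C:
    ket0L[y] = 1.0
ket0L /= np.linalg.norm(ket0L)

for g in gens:
    assert np.allclose(g @ ket0L, ket0L)   # each generator stabilises |0_L>
```

The X generators pass the check because they permute the 8 basis terms of |0_L⟩ among themselves; the Z generators pass because every codeword in C satisfies the classical parity checks.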

    This is the way in which

    'with the observation about the similarities between the parity check matrix and the generator the exercise is "self evident"'

    — because the stabilisers either manifestly permute the standard basis terms in the two 'codewords', or manifestly are testing parity properties which by construction the codewords will have.

  • Moving beyond codewords

    The list of generators in the table you provide represents the first steps in a powerful technique, known as the stabiliser formalism, in which states are described using no more or less than the parity properties which are known to hold of them.

    Some states, such as standard basis states, conjugate basis states, and the perfectly entangled states |Φ⁺⟩ ∝ |00⟩ + |11⟩ and |Ψ⁻⟩ ∝ |01⟩ − |10⟩ can be completely characterised by their parity properties. (The state |Φ⁺⟩ is the only one which is a +1-eigenvector of X⊗X and Z⊗Z; the state |Ψ⁻⟩ is the only one which is a −1-eigenvector of both these operators.) These are known as stabiliser states, and one can consider how they are affected by unitary transformations and measurements by tracking how the parity properties themselves transform. For instance, a state which is stabilised by X⊗X before applying a Hadamard on qubit 1 will be stabilised by Z⊗X afterwards, because (H⊗I)(X⊗X)(H⊗I) = Z⊗X. Rather than transform the state, we transform the parity property which we know to hold of that state.
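The conjugation rule (H⊗I)(X⊗X)(H⊗I) = Z⊗X is easy to verify directly. A small NumPy sketch (my own illustration):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

HI = np.kron(H, I2)   # Hadamard on qubit 1 only

# Conjugating X(x)X by a Hadamard on qubit 1 yields Z(x)X:
assert np.allclose(HI @ np.kron(X, X) @ HI, np.kron(Z, X))

# Consequently a state stabilised by X(x)X, such as |Phi+>, is stabilised
# by Z(x)X after the Hadamard is applied:
phi_plus = np.array([1, 0, 0, 1.0]) / np.sqrt(2)
psi = HI @ phi_plus
assert np.allclose(np.kron(Z, X) @ psi, psi)
```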

    You can use this also to characterise how subspaces characterised by these parity properties will transform. For instance, given an unknown state in the 7-qubit CSS code, I don't know enough about the state to tell you what state you will get if you apply Hadamards on all of the qubits, but I can tell you that it is stabilised by the generators g_j' = (H^{⊗7}) g_j (H^{⊗7}), which consist of

    Generator | Tensor factors
              | 1 2 3 4 5 6 7
    g1'       |       Z Z Z Z
    g2'       |   Z Z     Z Z
    g3'       | Z   Z   Z   Z
    g4'       |       X X X X
    g5'       |   X X     X X
    g6'       | X   X   X   X
    This is just a permutation of the generators of the 7-qubit CSS code, so I can conclude that the result is also a state in that same code.

    There is one thing about the stabiliser formalism which might seem mysterious at first: you aren't really dealing with information about the states that tells you anything about how they expand as superpositions of the standard basis. You're just dealing abstractly with the generators. And in fact, this is the point: you don't really want to spend your life writing out exponentially long superpositions all day, do you? What you really want are tools to allow you to reason about quantum states which require you to write things out as linear combinations as rarely as possible, because any time you write something as a linear combination, you are (a) making a lot of work for yourself, and (b) privileging some basis in a way which may prevent you from noticing some useful property which you can access using a different basis.

    Consider, for instance, the effect which H^{⊗7} might have on the codespace of the 7-qubit code. What should one do instead of writing out superpositions?

    The answer is to describe these states in terms of observables — in terms of parity properties — to fix those states. For instance, just as |0⟩ is the +1-eigenstate of Z, we can characterise the logical state |0_L⟩ of the 7-qubit CSS code as the +1-eigenstate of

    Z_L = Z⊗Z⊗Z⊗Z⊗Z⊗Z⊗Z
    and similarly, |1_L⟩ as the −1-eigenstate of Z_L. (It is important that Z_L = Z^{⊗7} commutes with the generators {g1,…,g6}, so that it is possible to be a +1-eigenstate of Z_L at the same time as having the parity properties described by those generators.) This also allows us to move swiftly beyond the standard basis: using the fact that X^{⊗7} anticommutes with Z^{⊗7} the same way that X anticommutes with Z, and also that X^{⊗7} commutes with the generators g_i, we can describe |+_L⟩ as being the +1-eigenstate of
    X_L = X⊗X⊗X⊗X⊗X⊗X⊗X,
    and similarly, |−_L⟩ as the −1-eigenstate of X_L. We may say that the encoded standard basis is, in particular, encoded in the parities of all of the qubits with respect to Z operators; and the encoded 'conjugate' basis is encoded in the parities of all of the qubits with respect to X operators.

    By fixing a notion of encoded operators, and using this to indirectly represent encoded states, we may observe that

    (H^{⊗7}) X_L (H^{⊗7}) = Z_L,   (H^{⊗7}) Z_L (H^{⊗7}) = X_L,
    which is the same relation as obtains between X and Z with respect to conjugation by Hadamards; which allows us to conclude that for this encoding of information in the 7-qubit CSS code, H^{⊗7} not only preserves the codespace but is an encoded Hadamard operation.
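The algebraic facts claimed here (that Z_L and X_L commute with all six generators, anticommute with each other, and are exchanged by conjugation with H^{⊗7}) can be checked by brute force on 128x128 matrices. A NumPy sketch, with helper names of my own:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    return reduce(np.kron, ops)

H_rows = ["0001111", "0110011", "1010101"]
def pauli_string(P, mask):
    return kron_all([P if c == "1" else I2 for c in mask])

gens = [pauli_string(X, m) for m in H_rows] + [pauli_string(Z, m) for m in H_rows]

ZL = kron_all([Z] * 7)   # logical Z
XL = kron_all([X] * 7)   # logical X
H7 = kron_all([H] * 7)   # transversal Hadamard

for g in gens:
    assert np.allclose(ZL @ g, g @ ZL)   # Z_L commutes with each generator
    assert np.allclose(XL @ g, g @ XL)   # X_L commutes with each generator
assert np.allclose(XL @ ZL, -ZL @ XL)    # X_L and Z_L anticommute (7 odd factors)
assert np.allclose(H7 @ XL @ H7, ZL)     # H^{(x)7} X_L H^{(x)7} = Z_L
assert np.allclose(H7 @ ZL @ H7, XL)     # H^{(x)7} Z_L H^{(x)7} = X_L
```

The commutation checks succeed because every generator overlaps Z_L (or X_L) on exactly four positions, an even number of anticommuting single-qubit pairs.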

Thus we see that the idea of observables as a way of describing information about quantum states in the form of sign bits — and in particular tensor products as a way of representing information about parities of bits — plays a central role in describing how the CSS code generators represent parity checks, and also in how we can describe properties of error correcting codes without reference to basis states.

Niel de Beaudrap
After having read this I still don't get how it is obvious that X4X5X6X7 (I take this example) stabilizes the code space. In your answer you seemed to use the fact that you know what the code space looks like, noticing that X4X5X6X7 applied on |0_L⟩ gives |1_L⟩. What perturbs me is that if we talked about Z4Z5Z6Z7 operators, I would directly see the link with the parity check matrix, as their eigenvalues give the parity of bits 4 to 7 exactly like the first line of the parity check matrix. But the X operators are not diagonal in the 0/1 basis. So I don't get...
StarBucK
While it isn't ideal to fuss with the state-vectors for the code-words, from the expansion above we can see that X4X5X6X7 permutes the standard basis components of |0_L⟩, and similarly for the standard-basis components of |1_L⟩. While this picture involves non-trivial transformations of parts of the state, the overall effect is stabilisation. To see how to view the X stabilisers as parity-checks, the way you do with Z-stabilisers, maybe you could consider the effect of the X stabilisers on |+_L⟩ and |−_L⟩, and in the conjugate basis.
Niel de Beaudrap
Thank you for your answer. OK, so to be sure: do you agree that if we don't look at the basis of the code space but only at the parity check matrix and the generators, the only thing we can directly understand is that the Z generators can be read off from the parity check matrix? But for the X generators, even if it appears they follow a similar structure, it is not obvious to understand why without further calculation? Because in Nielsen & Chuang the way it is presented is as if it were obvious. So I wondered if I missed something?
StarBucK

One way that you could construct the codeword is to project onto the +1 eigenspace of the generators,

|C⟩ = (1/2^6) ∏_{i=1}^{6} (I + g_i) |0000000⟩.
Concentrate, to start with, on the first 3 generators
(I + g1)(I + g2)(I + g3).
If you expand this out, you'll see that it creates all the terms in the group {I, g1, g2, g3, g1g2, g1g3, g2g3, g1g2g3}. Corresponding these to binary strings, the action of multiplying two terms (since X² = I) is just like addition modulo 2. So, contained within the code are all of the words generated by the parity check matrix (and this is a group, with group operation of addition modulo 2).

Now, if you multiply by one of the X stabilizers, that's like doing the addition modulo 2 on the corresponding bit strings. But, because we've already generated the group, by definition every group element is mapped to another (unique) group element. In other words, if I do

g1 × {I, g1, g2, g3, g1g2, g1g3, g2g3, g1g2g3} = {g1, I, g1g2, g1g3, g2, g3, g1g2g3, g2g3},
I get back the set I started with (using g1² = I, and the commutation of the stabilizers), and therefore I'm projecting onto the same state. Hence, the state is stabilized by g1 to g3.
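This group-shuffling argument can be sketched symbolically in Python (my own illustration, not notation from the answer): represent each product of the commuting, self-inverse generators as a set of indices, so that multiplication becomes symmetric difference, and observe that multiplying the whole group by g1 merely permutes it:

```python
from itertools import combinations

# Products of g1, g2, g3 as frozensets of indices: frozenset() is I,
# frozenset({1, 2}) is g1*g2, and so on. There are 2**3 = 8 elements.
group = [frozenset(s) for r in range(4) for s in combinations((1, 2, 3), r)]

def mul(a, b):
    """Multiply two products: symmetric difference, since g_i**2 = I."""
    return a ^ b

g1 = frozenset({1})
shifted = {mul(g1, el) for el in group}
assert shifted == set(group)   # g1 * group == group, so the projection is unchanged
```

Because left-multiplication by a group element is a bijection of the group onto itself, the sum of all eight terms (and hence the projected state) is left invariant.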

You can effectively make the same argument for g4 to g6. I prefer to think about first applying a Hadamard to every qubit, so the Xs are changed to Zs and vice versa. The set of stabilizers are unchanged, so the code is unchanged, but the Z stabilizer is mapped to an X stabilizer, about which we have already argued.

DaftWullie

What follows perhaps doesn't exactly answer your question, but instead aims to provide some background to help it become as 'self-evident' as your sources claim.

The Z operator has eigenstates |0⟩ (with eigenvalue +1) and |1⟩ (with eigenvalue −1). The Z⊗Z operator on two qubits therefore has eigenstates

|00⟩, |01⟩, |10⟩, |11⟩.

The eigenvalues for these depend on the parity of the bit strings. For example, with |00⟩ we multiply the +1 eigenvalues of the individual Z operators to get +1. For |11⟩ we multiply the −1 eigenvalues together and also get +1 for Z⊗Z. So both these even parity bit strings have eigenvalue +1, as does any superposition of them. For both odd parity states (|01⟩ and |10⟩) we must multiply a +1 with a −1, and get a −1 eigenvalue for Z⊗Z.

Note also that superpositions of bit strings with fixed parity (such as α|00⟩ + β|11⟩) are also eigenstates, and have the eigenvalue associated with their parity. So measuring Z⊗Z would not collapse such a superposition.

This analysis remains valid as we go to higher numbers of qubits. So if we want to know about the parity of qubits 1, 3, 5, and 7 (to pick a pertinent example), we could use the operator Z⊗I⊗Z⊗I⊗Z⊗I⊗Z. If we measure this and get the outcome +1, we know that this subset of qubits has a state represented by an even parity bit string, or a superposition thereof. If we get −1, we know that it is an odd parity state.
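As a quick numerical illustration (my own, with a hypothetical helper name `basis`), the operator Z⊗I⊗Z⊗I⊗Z⊗I⊗Z indeed assigns eigenvalue −1 to any superposition of basis states whose bits 1, 3, 5, 7 have odd parity:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.array([[1, 0], [0, -1]])
# Z on qubits 1, 3, 5, 7; identity on qubits 2, 4, 6.
ZIZIZIZ = reduce(np.kron, [Z, I2, Z, I2, Z, I2, Z])

def basis(bits):
    """Computational basis state |bits> on 7 qubits as a length-128 vector."""
    v = np.zeros(2 ** 7)
    v[int(bits, 2)] = 1.0
    return v

# Both terms have odd parity on positions 1, 3, 5, 7 (bits 1,1,1,0 and 0,0,0,1),
# so the superposition is a -1-eigenstate and measurement would not collapse it.
psi = (basis("1010100") + basis("0000001")) / np.sqrt(2)
assert np.allclose(ZIZIZIZ @ psi, -psi)
```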

This allows us to write the [7,4,3] Hamming code using the notation of quantum stabilizer codes. Each parity check is turned into a stabilizer generator which has I on every qubit not involved in the check, and Z on every qubit that is. The resulting code will protect against errors that anticommute with Z (and therefore have the effect of flipping bits).

Of course, qubits do not restrict us to working only in the Z basis. We could encode our qubits for a classical Hamming code in the |+⟩ and |−⟩ states instead. These are the eigenstates of X, so you just need to replace Z with X in everything I've said to see how this kind of code works. It would protect against errors that anticommute with X (and so have the effect of flipping phase).

The magic of the Steane code, of course, is that it does both at the same time and protects against everything.

James Wootton