There are some conventions and intuitions at work here which it might help to spell out:
Sign bits versus {0,1}-bits
The first step is to make what is sometimes called the "Great Notational Shift", and to think of bits (even classical bits) as being encoded in signs. This is productive if what you are mostly interested in is the parity of bit strings, because bit flips and sign flips then act in essentially the same way. We map 0 ↦ +1 and 1 ↦ −1, so that, for example, the bit sequence (0,0,1,0,1) would be represented by the sign sequence (+1,+1,−1,+1,−1).
Parities of bit sequences correspond to products of sign sequences. For example, just as we would recognise 0⊕0⊕1⊕0⊕1 = 0 as a parity computation, we can recognise (+1)⋅(+1)⋅(−1)⋅(+1)⋅(−1) = +1 as representing the same parity computation using the sign convention.
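To make the correspondence concrete, here is a small Python sketch (my own illustration, not part of the original argument) computing the same parity both ways:

```python
from functools import reduce
from operator import mul, xor

bits  = [0, 0, 1, 0, 1]
signs = [(-1) ** b for b in bits]   # the map 0 -> +1, 1 -> -1

parity_bit  = reduce(xor, bits)     # 0 xor 0 xor 1 xor 0 xor 1 = 0
parity_sign = reduce(mul, signs)    # (+1)(+1)(-1)(+1)(-1) = +1

# Even parity (0) corresponds to the sign +1, odd parity (1) to -1
assert (-1) ** parity_bit == parity_sign
```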
Exercise. Compute the 'parity' of (−1,−1,+1,−1) and of (+1,−1,+1,+1). Are these the same?
Parity checks using sign bits
In the {0,1}-bit convention, parity checks have a nice representation as a dot product of two boolean vectors, so that we can realise complicated parity computations as linear transformations. In changing to sign bits, we have unavoidably lost the connection to linear algebra at the level of notation, because we are taking products rather than sums. At a computational level, because this is just a change in notation, we don't really have to worry much. But at a purely mathematical level, we now have to think a little harder about what we are doing with parity-check matrices.
When we use sign bits, we can still represent a 'parity-check matrix' as a matrix of 0s and 1s, rather than of signs ±1. Why? One answer is that a row vector describing a parity check on bits is of a different type than the bit sequence itself: it describes a function on the data, not the data itself. The matrix of 0s and 1s now just requires a different interpretation: rather than linear coefficients in a sum, its entries correspond to exponents in a product. If we have sign bits (s1,s2,…,sn) ∈ {−1,+1}^n, and we want to compute a parity check given by a row vector (b1,b2,…,bn) ∈ {0,1}^n, the parity check is then computed by
(s1)^b1 ⋅ (s2)^b2 ⋅ ⋯ ⋅ (sn)^bn ∈ {−1,+1},
where recall that s^0 = 1 for all s. Just as with {0,1}-bits, you can think of the row (b1,b2,…,bn) as simply representing a 'mask' which determines which bits sj make a non-trivial contribution to the parity computation.
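A minimal Python sketch of this 'exponents in a product' reading (the function name is my own):

```python
def sign_parity_check(signs, mask):
    """Evaluate the parity check given by the {0,1} row vector `mask`
    on the sign bits `signs`: the product of s_j ** b_j over all j."""
    result = 1
    for s, b in zip(signs, mask):
        result *= s ** b    # b = 0 contributes s**0 = 1; b = 1 contributes s
    return result

# The mask (1,1,1,1,1) recovers the total parity of the earlier example
print(sign_parity_check([+1, +1, -1, +1, -1], [1, 1, 1, 1, 1]))   # prints 1
```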
Exercise. Compute the result of the parity check (0,1,0,1,0,1,0) on (+1,−1,−1,−1,−1,+1,−1).
Eigenvalues as parities.
The reason why we would want to encode bits in signs in quantum information theory is because of the way that information is stored in quantum states, or more to the point, the way that we can describe access to that information. Specifically, we can talk a lot about the standard basis, but the reason why it is meaningful is because we can extract that information by measuring an observable.
This observable could be the projector |1⟩⟨1|, for which |0⟩ has eigenvalue 0 and |1⟩ has eigenvalue 1, but it is often more useful to describe things in terms of the Pauli matrices. In that case, we would speak of the standard basis as the eigenbasis of the Z operator, in which case we have |0⟩ as the +1-eigenvector of Z and |1⟩ as the −1-eigenvector of Z.
So: we have the appearance of sign bits (in this case, eigenvalues) representing the information stored in a qubit. Better still, we can do this in a way which is not specific to the standard basis: we can talk about the information stored in the 'conjugate' basis simply by considering whether the state is an eigenstate of X, and what eigenvalue it has. But more than this, we can talk about the eigenvalues of a multi-qubit Pauli operator as encoding parities of multiple bits: the tensor product Z⊗Z represents a way of accessing the product of the sign bits, that is the parity, of two qubits in the standard basis. In this sense, the eigenvalue of a state with respect to a multi-qubit Pauli operator, if that eigenvalue is defined (i.e. in the case that the state is an eigenvector of the Pauli operator), is in effect the result of a parity computation of information stored in some choice of basis for each of the qubits.
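As a sketch (using NumPy; my own illustration), one can check directly that the Z⊗Z eigenvalue of each standard-basis state is exactly the parity of the corresponding bit string:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
ZZ = np.kron(Z, Z)                     # diag(+1, -1, -1, +1)

for bits in ["00", "01", "10", "11"]:
    v = np.zeros(4)
    v[int(bits, 2)] = 1.0              # the standard-basis state |bits>
    eigenvalue = v @ ZZ @ v            # well-defined: |bits> is an eigenstate
    assert eigenvalue == (-1) ** bits.count("1")
```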
Exercise. What is the parity of the state |11⟩ with respect to Z⊗Z? Does this state have a well-defined parity with respect to X⊗X?
Exercise. What is the parity of the state |+−⟩ with respect to X⊗X? Does this state have a well-defined parity with respect to Z⊗Z?
Exercise. What is the parity of |Φ+⟩ = (|00⟩+|11⟩)/√2 with respect to Z⊗Z and X⊗X?
Stabiliser generators as parity checks.
We are now in a position to appreciate the role of stabiliser generators as being analogous to a parity check matrix.
Consider the case of the 7-qubit CSS code, with generators
Generator:        g1   g2   g3   g4   g5   g6
Tensor factor 1:  X              Z
Tensor factor 2:       X              Z
Tensor factor 3:  X    X         Z    Z
Tensor factor 4:            X              Z
Tensor factor 5:  X         X    Z         Z
Tensor factor 6:       X    X         Z    Z
Tensor factor 7:  X    X    X    Z    Z    Z
I've omitted the identity tensor factors above, as one might sometimes omit the 0s from a {0,1} matrix, and for the same reason: in a given stabiliser operator, the identity matrix corresponds to a tensor factor which is not included in the 'mask' of qubits for which we are computing the parity. For each generator, we are only interested in those tensor factors which are being acted on somehow, because those contribute to the parity outcome.
Now, the 'codewords' (the encoded standard basis states) of the 7-qubit CSS code are given by
|0L⟩ ∝ |0000000⟩ + |0001111⟩ + |0110011⟩ + |0111100⟩ + |1010101⟩ + |1011010⟩ + |1100110⟩ + |1101001⟩ = ∑_{y∈C} |y⟩,
|1L⟩ ∝ |1111111⟩ + |1110000⟩ + |1001100⟩ + |1000011⟩ + |0101010⟩ + |0100101⟩ + |0011001⟩ + |0010110⟩ = ∑_{y∈C} |y ⊕ 1111111⟩,
where C is the code generated by the bit-strings 0001111, 0110011, and 1010101. Notably, these bit-strings correspond to the positions of the X operators in the generators g1, g2, and g3. While those are stabilisers of the code (and represent parity checks as I've suggested above), we can also consider their action as operators which permute the standard basis. In particular, they will permute the elements of the code C, so that the terms involved in |0L⟩ and |1L⟩ will just be shuffled around.
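The 'shuffling' claim can be verified mechanically; the following Python sketch (my own) builds C from its generators and checks that adding any generator (mod 2) merely permutes the codewords:

```python
from itertools import product

gens = ["0001111", "0110011", "1010101"]

def xor_strings(a, b):
    """Bitwise XOR of two equal-length bit strings."""
    return "".join(str(int(x) ^ int(y)) for x, y in zip(a, b))

# Build C: all mod-2 sums of subsets of the generators
C = set()
for coeffs in product([0, 1], repeat=3):
    word = "0000000"
    for c, g in zip(coeffs, gens):
        if c:
            word = xor_strings(word, g)
    C.add(word)

assert len(C) == 8
# Adding any generator to every codeword just permutes the set C
for g in gens:
    assert {xor_strings(y, g) for y in C} == C
```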
The generators g4, g5, and g6 above are all describing the parities of information encoded in standard basis states.
The encoded basis states you are given are superpositions of codewords drawn from a linear code, and those codewords all have even parity with respect to the parity-check matrix from that code.
As g4 through g6 just describe those same parity checks, it follows that the eigenvalue of the encoded basis states is +1 (corresponding to even parity).
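This too is easy to confirm directly; a Python sketch (mine, with the check rows read off the table of generators above):

```python
# Supports of the Z-type generators g4, g5, g6, read off the table above
checks = ["1010101", "0110011", "0001111"]

codewords = ["0000000", "0001111", "0110011", "0111100",
             "1010101", "1011010", "1100110", "1101001"]

for y in codewords:
    for h in checks:
        # A Z-type generator acts on |y> with the sign (-1)^(h . y mod 2)
        parity = sum(int(a) * int(b) for a, b in zip(h, y)) % 2
        assert parity == 0    # even parity: eigenvalue +1 on every codeword
```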
This is the way in which
'with the observation about the similarities between the parity check matrix and the generator the exercise is "self evident"'
— because the stabilisers either manifestly permute the standard basis terms in the two 'codewords', or manifestly are testing parity properties which by construction the codewords will have.
Moving beyond codewords
The list of generators in the table you provide represents the first steps in a powerful technique, known as the stabiliser formalism, in which states are described using no more and no less than the parity properties which are known to hold of them.
Some states, such as standard basis states, conjugate basis states, and the perfectly entangled states |Φ+⟩∝|00⟩+|11⟩ and |Ψ−⟩∝|01⟩−|10⟩ can be completely characterised by their parity properties. (The state |Φ+⟩ is the only one which is a +1-eigenvector of X⊗X and Z⊗Z; the state |Ψ−⟩ is the only one which is a −1-eigenvector of both these operators.) These are known as stabiliser states, and one can consider how they are affected by unitary transformations and measurements by tracking how the parity properties themselves transform. For instance, a state which is stabilised by X⊗X before applying a Hadamard on qubit 1, will be stabilised by Z⊗X afterwards, because (H⊗I)(X⊗X)(H⊗I)=Z⊗X.
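The conjugation claim at the end can be checked numerically; a short NumPy sketch (my own):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

HI = np.kron(H, I)    # Hadamard on qubit 1 only; note HI is its own inverse

# A state stabilised by X(x)X is, after the Hadamard, stabilised by Z(x)X
assert np.allclose(HI @ np.kron(X, X) @ HI, np.kron(Z, X))
```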
Rather than transform the state, we transform the parity property which we know to hold of that state.
You can also use this to characterise how subspaces defined by these parity properties will transform.
For instance, given an unknown state in the 7-qubit CSS code, I don't know enough about the state to tell you what state you will get if you apply Hadamards on all of the qubits, but I can tell you that it is stabilised by the generators g′j=(H⊗7)gj(H⊗7), which consist of
Generator:        g′1  g′2  g′3  g′4  g′5  g′6
Tensor factor 1:  Z              X
Tensor factor 2:       Z              X
Tensor factor 3:  Z    Z         X    X
Tensor factor 4:            Z              X
Tensor factor 5:  Z         Z    X         X
Tensor factor 6:       Z    Z         X    X
Tensor factor 7:  Z    Z    Z    X    X    X
This is just a permutation of the generators of the 7-qubit CSS code, so I can conclude that the result is also a state in that same code.
There is one thing about the stabiliser formalism which might seem mysterious at first: you aren't really dealing with information about the states that tells you anything about how they expand as superpositions of the standard basis. You're just dealing abstractly with the generators. And in fact, this is the point: you don't really want to spend your life writing out exponentially long superpositions all day, do you? What you really want are tools to allow you to reason about quantum states which require you to write things out as linear combinations as rarely as possible, because any time you write something as a linear combination, you are (a) making a lot of work for yourself, and (b) preferring some basis in a way which may prevent you from noticing some useful property that you could access using a different basis.
Consider, for instance, the effect that H⊗7 might have on the codespace of the 7-qubit code. What should one do instead of writing out superpositions?
The answer is to describe these states in terms of observables — in terms of parity properties — to fix those states. For instance, just as |0⟩ is the +1-eigenstate of Z, we can characterise the logical state |0L⟩ of the 7-qubit CSS code as the +1-eigenstate of
ZL=Z⊗Z⊗Z⊗Z⊗Z⊗Z⊗Z
and similarly, |1L⟩ as the −1-eigenstate of ZL.
(It is important that ZL=Z⊗7 commutes with the generators {g1,…,g6}, so that it is possible to be a +1-eigenstate of ZL at the same time as having the parity properties described by those generators.)
This also allows us to move swiftly beyond the standard basis: using the fact that X⊗7 anticommutes with Z⊗7 in the same way that X anticommutes with Z, and also that X⊗7 commutes with the generators gi, we can describe |+L⟩ as being the +1-eigenstate of
XL=X⊗X⊗X⊗X⊗X⊗X⊗X,
and similarly, |−L⟩ as the −1-eigenstate of XL.
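These commutation properties can be confirmed with a small NumPy sketch (mine; the generator strings are read off the table above):

```python
import numpy as np
from functools import reduce

P = {"I": np.eye(2),
     "X": np.array([[0.0, 1.0], [1.0, 0.0]]),
     "Z": np.diag([1.0, -1.0])}

def pauli(word):
    """Tensor product of single-qubit Paulis named by `word`."""
    return reduce(np.kron, [P[c] for c in word])

gens = [pauli(w) for w in ["XIXIXIX", "IXXIIXX", "IIIXXXX",   # g1, g2, g3
                           "ZIZIZIZ", "IZZIIZZ", "IIIZZZZ"]]  # g4, g5, g6

ZL = pauli("ZZZZZZZ")
XL = pauli("XXXXXXX")

for g in gens:
    assert np.allclose(ZL @ g, g @ ZL)    # Z_L commutes with every generator
    assert np.allclose(XL @ g, g @ XL)    # X_L does too
assert np.allclose(XL @ ZL, -(ZL @ XL))   # ...while X_L and Z_L anticommute
```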
We may say that the encoded standard basis is, in particular, encoded in the parities of all of the qubits with respect to Z operators; and the encoded 'conjugate' basis is encoded in the parities of all of the qubits with respect to X operators.
By fixing a notion of encoded operators, and using this to indirectly represent encoded states, we may observe that
(H⊗7)XL(H⊗7)=ZL,(H⊗7)ZL(H⊗7)=XL,
which is the same relation as obtains between X and Z with respect to conjugation by Hadamards; which allows us to conclude that for this encoding of information in the 7-qubit CSS code, H⊗7 not only preserves the codespace but is an encoded Hadamard operation.
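Again, this relation can be verified directly (a NumPy sketch of my own):

```python
import numpy as np
from functools import reduce

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

H7 = reduce(np.kron, [H] * 7)   # Hadamard on all seven qubits
XL = reduce(np.kron, [X] * 7)
ZL = reduce(np.kron, [Z] * 7)

# H(x)7 exchanges the encoded X and Z, just as H exchanges X and Z
assert np.allclose(H7 @ XL @ H7, ZL)
assert np.allclose(H7 @ ZL @ H7, XL)
```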
Thus we see that the idea of observables as a way of describing information about quantum states in the form of sign bits (and in particular of tensor products as a way of representing information about parities of bits) plays a central role in describing how the CSS code generators represent parity checks, and also in how we can describe properties of error correcting codes without reference to basis states.
One way that you could construct the codeword is to project onto the +1-eigenspace of the generators,
|C1⟩ = 2^(−6) ∏_{i=1}^{6} (I + gi) |0000000⟩.
Concentrate, to start with, on the first 3 generators
(I+g1)(I+g2)(I+g3).
If you expand this out, you'll see that it creates all the terms in the group {I, g1, g2, g3, g1g2, g1g3, g2g3, g1g2g3}. Identifying these with binary strings, the action of multiplying two terms (since X²=I) is just like addition modulo 2. So, contained within the code are all of the words generated by the parity check matrix (and this is a group, with addition modulo 2 as the group operation).
Now, if you multiply by one of the X stabilizers, that's like doing the addition modulo 2 on the corresponding bit strings. But, because we've already generated the group, by definition every group element is mapped to another (unique) group element. In other words, if I do
g1 × {I, g1, g2, g3, g1g2, g1g3, g2g3, g1g2g3} = {g1, I, g1g2, g1g3, g2, g3, g1g2g3, g2g3},
I get back the set I started with (using g1² = I, and the commutation of the stabilizers), and therefore I'm projecting onto the same state. Hence, the state is stabilized by g1 to g3.
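The whole projection argument can also be carried out numerically; the sketch below (mine) applies the projectors (I+gi)/2 to |0000000⟩ and checks that the normalised result is the uniform superposition over C, stabilised by every generator:

```python
import numpy as np
from functools import reduce

P = {"I": np.eye(2),
     "X": np.array([[0.0, 1.0], [1.0, 0.0]]),
     "Z": np.diag([1.0, -1.0])}

def pauli(word):
    """Tensor product of single-qubit Paulis named by `word`."""
    return reduce(np.kron, [P[c] for c in word])

gens = ["XIXIXIX", "IXXIIXX", "IIIXXXX",   # g1, g2, g3
        "ZIZIZIZ", "IZZIIZZ", "IIIZZZZ"]   # g4, g5, g6

state = np.zeros(2 ** 7)
state[0] = 1.0                              # |0000000>

# Apply the projector (I + g_i)/2 for each generator in turn
for w in gens:
    state = (np.eye(2 ** 7) + pauli(w)) @ state / 2

state /= np.linalg.norm(state)              # normalise the projection

# The result is fixed by every generator ...
for w in gens:
    assert np.allclose(pauli(w) @ state, state)

# ... and is the uniform superposition over the eight codewords of |0_L>
expected = np.zeros(2 ** 7)
for y in ["0000000", "0001111", "0110011", "0111100",
          "1010101", "1011010", "1100110", "1101001"]:
    expected[int(y, 2)] = 1.0
expected /= np.linalg.norm(expected)
assert np.allclose(state, expected)
```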
You can effectively make the same argument for g4 to g6. I prefer to think about first applying a Hadamard to every qubit, so that the Xs are changed to Zs and vice versa. The set of stabilizers is unchanged, so the code is unchanged, but each Z stabilizer is mapped to an X stabilizer, about which we have already argued.
What follows perhaps doesn't exactly answer your question, but instead aims to provide some background to help it become as 'self-evident' as your sources claim.
The Z operator has eigenstates |0⟩ (with eigenvalue +1) and |1⟩ (with eigenvalue −1). The ZZ operator on two qubits therefore has eigenstates
|00⟩,|01⟩,|10⟩,|11⟩ .
The eigenvalues for these depend on the parity of the bit strings. For example, with |00⟩ we multiply the +1 eigenvalues of the individual Z operators to get +1 . For |11⟩ we multiply the −1 eigenvalues together and also get +1 for ZZ . So both these even parity bit strings have eigenvalue +1 , as does any superposition of them. For both odd parity states (|01⟩ and |10⟩ ) we must multiply a +1 with a −1 , and get a −1 eigenvalue for ZZ .
Note also that superpositions of bit strings with fixed parity (such as α|00⟩ + β|11⟩) are also eigenstates, and have the eigenvalue associated with their parity. So measuring ZZ would not collapse such a superposition.
This analysis remains valid as we go to higher numbers of qubits. So if we want to know about the parity of qubits 1, 3, 5, and 7 (to pick a pertinent example), we could use the operator ZIZIZIZ. If we measure this and get the outcome +1, we know that this subset of qubits has a state represented by an even parity bit string, or a superposition thereof. If we get −1, we know that it is an odd parity state.
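A quick NumPy check of this example (my own sketch):

```python
import numpy as np
from functools import reduce

Id = np.eye(2)
Z = np.diag([1.0, -1.0])
ZIZIZIZ = reduce(np.kron, [Z, Id, Z, Id, Z, Id, Z])

for y in ["0001111", "1110000", "1000000", "0100000"]:
    v = np.zeros(2 ** 7)
    v[int(y, 2)] = 1.0                               # the basis state |y>
    outcome = v @ ZIZIZIZ @ v                        # eigenvalue of |y>
    parity = sum(int(b) for b in y[0::2]) % 2        # parity of bits 1, 3, 5, 7
    assert outcome == (-1) ** parity
```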
This allows us to write the [7,4,3] Hamming code using the notation of quantum stabilizer codes. Each parity check is turned into a stabilizer generator which has I on every qubit not involved in the check, and Z on every qubit that is. The resulting code will protect against errors that anticommute with Z (and therefore have the effect of flipping bits).
Of course, qubits do not restrict us to only working in the Z basis. We could encode our qubits for a classical Hamming code in the |+⟩ and |−⟩ states instead. These are the eigenstates of X, so you just need to replace Z with X in everything I've said to see how this kind of code works. It would protect against errors that anticommute with X (and so have the effect of flipping phase).
The magic of the Steane code, of course, is that it does both at the same time and protects against everything.