Module bulletproofs::notes::r1cs_proof

This module contains notes on how and why constraint system proofs work.

Constraint system

A constraint system is a collection of arithmetic constraints over a set of variables.

The constraint system we use is specifically a rank-1 constraint system or R1CS. Such a system consists of two sets of constraints: multiplication gates and linear constraints in terms of the variables.

Variables

There are two kinds of variables in the constraint system:

  1. \(m\) high-level variables, the input secrets \(\mathbf{v}\),
  2. \(n\) low-level variables, the internal inputs and outputs of the multiplication gates: \(\mathbf{a}_L, \mathbf{a}_R, \mathbf{a}_O\).

Multiplication gates

Each multiplication gate takes two input variables and multiplies them to get an output. That relation for all multiplication gates is represented by \(n\) constraints:

\[ \mathbf{a}_L \circ \mathbf{a}_R = \mathbf{a}_O, \] where \(\circ\) denotes the entry-wise (Hadamard) product, and \(\mathbf{a}_L\), \(\mathbf{a}_R\), \(\mathbf{a}_O\) are the length-\(n\) vectors of left inputs, right inputs and outputs of the gates respectively.

Linear constraints

Linear constraints are expressed as \(q\) equations over linear combinations of the variables:

\[ \mathbf{W}_L \cdot \mathbf{a}_L + \mathbf{W}_R \cdot \mathbf{a}_R + \mathbf{W}_O \cdot \mathbf{a}_O = \mathbf{W}_V \cdot \mathbf{v} + \mathbf{c}, \] where \(\mathbf{W}_L, \mathbf{W}_R, \mathbf{W}_O\) are \(q \times n\) matrices of weights for the low-level variables, \(\mathbf{W}_V\) is a \(q \times m\) matrix of weights for the high-level variables, and \(\mathbf{c}\) is a vector of \(q\) constant terms.
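
As a toy instance of this encoding (our example, not from the paper): to prove \(v_1 \cdot v_2 = v_3\) we use \(m = 3\) high-level variables, a single multiplication gate (\(n = 1\)) and \(q = 3\) linear constraints tying the gate's wires to the committed inputs:

\[ a_{L,0} = v_1, \quad a_{R,0} = v_2, \quad a_{O,0} = v_3, \] which in matrix form reads \(\mathbf{W}_L = (1,0,0)^\top\), \(\mathbf{W}_R = (0,1,0)^\top\), \(\mathbf{W}_O = (0,0,1)^\top\), \(\mathbf{W}_V = I_3\), \(\mathbf{c} = \mathbf{0}\), while the gate itself enforces \(a_{L,0} \cdot a_{R,0} = a_{O,0}\).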

Building constraints

The Bulletproofs framework allows building constraint systems on the fly, without a trusted setup. This allows instantiating constraints from a family of constraint systems parametrized by public values and commitments to the secret inputs. As a result, the instantiation can be thought of as a challenge generated by a verifier to a prover.

The prover starts out by committing to its secret inputs \(\mathbf{v}\) and obtaining \(m\) variables representing these inputs.

Then, the prover performs a combination of the following operations to generate the constraints (a toy sketch follows the list):

  1. Allocate a multiplier: a new multiplication gate is added, represented by three low-level variables \(a_L, a_R, a_O\), for left input, right input and output value respectively.
  2. Add a constraint: a linear combination of any number of variables is encoded into appropriate positions in matrices \(\mathbf{W}_L, \mathbf{W}_R, \mathbf{W}_O, \mathbf{W}_V\) and a vector of constants \(\mathbf{c}\).
  3. Request a challenge scalar: a random scalar returned in response to committed high-level variables.
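
To make the flow concrete, here is a minimal, self-contained sketch of the first two operations (the challenge step is omitted), using `u64` arithmetic modulo a toy prime in place of curve scalars. The types and names are ours for illustration; this is not the crate's actual API.

```rust
// A hypothetical, simplified model of building a constraint system;
// it captures the bookkeeping only, not the cryptography.
const P: u64 = 1_000_003; // toy prime modulus (an assumption for illustration)

#[derive(Clone, Copy, Debug)]
#[allow(dead_code)]
enum Var {
    V(usize),  // high-level variable v_j
    AL(usize), // left input of multiplier i
    AR(usize), // right input of multiplier i
    AO(usize), // output of multiplier i
}

/// A linear constraint: a sum of (weight, variable) terms plus a constant = 0.
#[allow(dead_code)]
struct Constraint {
    terms: Vec<(u64, Var)>,
    constant: u64,
}

#[derive(Default)]
struct ToyCs {
    a_l: Vec<u64>,
    a_r: Vec<u64>,
    a_o: Vec<u64>,
    constraints: Vec<Constraint>,
}

impl ToyCs {
    /// Operation 1: allocate a multiplier and return its three wires.
    fn multiply(&mut self, l: u64, r: u64) -> (Var, Var, Var) {
        let i = self.a_l.len();
        self.a_l.push(l % P);
        self.a_r.push(r % P);
        self.a_o.push(l % P * (r % P) % P);
        (Var::AL(i), Var::AR(i), Var::AO(i))
    }

    /// Operation 2: record weights destined for W_L, W_R, W_O, W_V and c.
    fn constrain(&mut self, terms: Vec<(u64, Var)>, constant: u64) {
        self.constraints.push(Constraint { terms, constant });
    }
}

fn main() {
    let mut cs = ToyCs::default();
    // Prove 3 * 4 = 12: one gate, plus a constraint pinning the output.
    let (_l, _r, o) = cs.multiply(3, 4);
    cs.constrain(vec![(1, o)], P - 12); // o - 12 = 0 (mod P)
    println!("gates: {}, constraints: {}", cs.a_l.len(), cs.constraints.len());
}
```

In the real protocol the wire assignments live only on the prover's side; the weights and constants are what both parties share.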

Gadgets

Gadgets are building blocks of a constraint system that map to some functions in a higher-level protocol. Gadgets receive some variables as inputs, may allocate more variables for internal use, and produce constraints involving all these variables.

Examples include a shuffle gadget, which proves that one list of secret values is a permutation of another (described below), and a range proof gadget, which proves that a secret value fits in a given number of bits.

Low-level variables

Often a gadget needs a variable to connect with another gadget, or to implement its internal logic, without requiring a distinct high-level variable commitment \(V_i\) for it. Such low-level variables are created from left and right variables \(a_L, a_R\) of additional multiplication gates. Output variables \(a_O\) are not used for this purpose because they are implicitly constrained by a multiplication gate and cannot be used as independent uncommitted variables.

Important: uncommitted variables are so named because they lack individual commitments \(V_i\), but they are still committed collectively with all low-level variables using a single vector Pedersen commitment \(A_I\) as required by the underlying proof protocol. The distinction is important when building constraints using challenges, which are bound only to the high-level variables, but not to the low-level variables (hence, “uncommitted”).

Gadget as a challenge

Intermediate challenge scalars can be used to construct gadgets more efficiently.

For example, a shuffle gadget (“proof of permutation”) can be implemented by proving equality of two polynomials sampled at a challenge point, where the roots of each polynomial represent the secret values of the corresponding side of the permutation:

\[ \{a,b\} = \{c,d\} \iff (a-x)\cdot(b-x) = (c-x)\cdot(d-x), \] where \(x\) is a random challenge, sampled after all values \(a,b,c,d\) are committed.

Making a proof of permutation using a static gadget (without challenge values) may require building a sorting network that would use significantly more multiplication gates.
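
As a quick numeric sanity check of the identity, here is a toy sketch; a real gadget uses curve scalars and a transcript-derived challenge, while here we simply pick a modulus and a challenge by hand:

```rust
// Toy check of the 2-shuffle identity:
// {a,b} = {c,d} iff (a-x)(b-x) == (c-x)(d-x) for a random x.
const P: u64 = 1_000_003; // toy prime modulus (an assumption for illustration)

/// Evaluate the monic polynomial with the given roots at x, mod P.
fn poly_at(roots: &[u64], x: u64) -> u64 {
    roots.iter().fold(1, |acc, &r| acc * ((x + P - r) % P) % P)
}

fn main() {
    let x = 123_456; // stand-in for a challenge sampled after committing a,b,c,d
    // A permuted pair matches at the challenge point...
    assert_eq!(poly_at(&[5, 9], x), poly_at(&[9, 5], x));
    // ...while a non-permutation fails with overwhelming probability.
    assert_ne!(poly_at(&[5, 9], x), poly_at(&[5, 8], x));
    println!("shuffle identity holds at x = {}", x);
}
```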

Important: challenges are bound to the high-level variables and the committed portion of low-level variables. The remaining low-level variables are uncommitted and must be uniquely determined by the committed variables and the challenge scalars in order for gadgets to be locally sound. To facilitate this, the constraint system API prevents use of challenges before all freely chosen variables are committed.

Representation of constraints

The matrices \(\mathbf{W}_L, \mathbf{W}_R, \mathbf{W}_O, \mathbf{W}_V\) are typically very sparse because most constraints apply only to a few variables. As a result, constraints are represented as short lists of pairs \((i, w)\) where \(i\) is an index of a variable, and \(w\) is its (non-zero) weight.

Multiplication of a matrix by a vector is implemented by multiplying each weight \(w\) by the scalar at the corresponding index \(i\) in the vector, so all zero-weight terms are skipped automatically.
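
Here is a sketch of this representation and of flattening a weight matrix into \(\mathbf{w} = z \mathbf{z}^q \cdot \mathbf{W}\) (toy `u64` arithmetic; the names and the modulus are ours for illustration):

```rust
const P: u64 = 1_000_003; // toy prime modulus (an assumption for illustration)

/// One row of a weight matrix, stored sparsely as (index, weight) pairs.
type SparseRow = Vec<(usize, u64)>;

/// Flatten rows into a dense vector w of length n:
/// w[i] = sum over constraints k of z^(k+1) * W[k][i], skipping zero weights.
fn flatten(rows: &[SparseRow], z: u64, n: usize) -> Vec<u64> {
    let mut w = vec![0u64; n];
    let mut z_pow = z % P; // z^1 for the first constraint
    for row in rows {
        for &(i, weight) in row {
            w[i] = (w[i] + z_pow * weight) % P;
        }
        z_pow = z_pow * z % P;
    }
    w
}

fn main() {
    // Two constraints over n = 3 low-level variables, mostly zero weights.
    let rows: Vec<SparseRow> = vec![vec![(0, 1)], vec![(2, 5)]];
    let w = flatten(&rows, 7, 3);
    assert_eq!(w, vec![7, 0, 5 * 49]); // z*1 at index 0, z^2*5 at index 2
    println!("{:?}", w);
}
```

Only the stored (index, weight) pairs contribute, so the cost is proportional to the number of non-zero weights rather than \(q \cdot n\).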

Constraint system proof

A constraint system proof allows a verifier to specify the constraints for which a prover is asked to generate a valid proof. The resulting proof is zero-knowledge: the constraints are known to both the prover and the verifier, but the variable assignments remain secret.

The proving system expresses all the constraints in terms of a single inner product and uses the efficient inner-product protocol to prove it. The following notes describe in detail how this is accomplished.

Constraint system notation

Dimensions of vectors: \(\mathbf{a}_L, \mathbf{a}_R, \mathbf{a}_O\) and the blinding vectors have length \(n\) (the number of multiplication gates), \(\mathbf{v}\) has length \(m\) (the number of high-level variables), and there are \(q\) linear constraints.

In the Bulletproofs paper, matrices are labeled as \(\mathbf{W}_L, \mathbf{W}_R, \mathbf{W}_O, \mathbf{W}_V\). We keep this notation, but caution the reader not to mistake \(\mathbf{W}_{L,R,O,V}\) for vectors of points: they are matrices of scalar weights.

We will use the general notation described above and, for consistency, rename two more variables from the paper: \[ \begin{aligned} \beta &\xrightarrow{} \tilde{o} \\ Q &\xrightarrow{} q \\ \end{aligned} \]

Combining statements using challenge variables

We can rewrite the statement about multiplication gates into an inner product equation, using the challenge variable \(y\). We can do this for a random challenge \(y\) because \({\mathbf{b}} = {\mathbf{0}}\) if and only if¹ \({\langle {\mathbf{b}}, {\mathbf{y}}^{n} \rangle} = 0\). The equation \(\mathbf{a}_L \circ \mathbf{a}_R = \mathbf{a}_O\) becomes:

\[ \langle \mathbf{a}_L \circ \mathbf{a}_R - \mathbf{a}_O , \mathbf{y}^n \rangle = 0 \]

We can rewrite the statement about the linear constraints into an inner product equation, using the challenge variable \(z\). We can do this for a random challenge \(z\), for the same reason as above. The equation \( \mathbf{W}_L \cdot \mathbf{a}_L + \mathbf{W}_R \cdot \mathbf{a}_R + \mathbf{W}_O \cdot \mathbf{a}_O = \mathbf{W}_V \cdot \mathbf{v} + \mathbf{c} \) becomes:

\[ \langle z \mathbf{z}^q, \mathbf{W}_L \cdot \mathbf{a}_L + \mathbf{W}_R \cdot \mathbf{a}_R + \mathbf{W}_O \cdot \mathbf{a}_O - \mathbf{W}_V \cdot \mathbf{v} - \mathbf{c} \rangle = 0 \]

We can combine these two inner product equations, since they are offset by different powers of the challenge variable \(z\): the statement about multiplication gates is multiplied by \(z^0\), while the linear constraints are multiplied by powers of \(z\) from \(z^1\) to \(z^q\). Combining the two equations gives us:

\[ \langle \mathbf{a}_L \circ \mathbf{a}_R - \mathbf{a}_O , \mathbf{y}^n \rangle + \langle z \mathbf{z}^q, \mathbf{W}_L \cdot \mathbf{a}_L + \mathbf{W}_R \cdot \mathbf{a}_R + \mathbf{W}_O \cdot \mathbf{a}_O - \mathbf{W}_V \cdot \mathbf{v} - \mathbf{c} \rangle = 0 \]

Before we proceed, we factor out “flattened constraints” from the terms that involve weight matrices:

\[ \langle \mathbf{a}_L \circ \mathbf{a}_R - \mathbf{a}_O , \mathbf{y}^n \rangle + \langle \mathbf{w}_L, \mathbf{a}_L \rangle + \langle \mathbf{w}_R, \mathbf{a}_R \rangle + \langle \mathbf{w}_O, \mathbf{a}_O \rangle - \langle \mathbf{w}_V, \mathbf{v} \rangle - w_c = 0 \] \[ \begin{aligned} \mathbf{w}_L &= z \mathbf{z}^q \cdot \mathbf{W}_L, \\ \mathbf{w}_R &= z \mathbf{z}^q \cdot \mathbf{W}_R, \\ \mathbf{w}_O &= z \mathbf{z}^q \cdot \mathbf{W}_O, \\ \mathbf{w}_V &= z \mathbf{z}^q \cdot \mathbf{W}_V, \\ w_c &= \langle z \mathbf{z}^q, \mathbf{c} \rangle, \\ \end{aligned} \] where each of \(\mathbf{w}_L, \mathbf{w}_R, \mathbf{w}_O\) has length \(n\) and \(\mathbf{w}_V\) has length \(m\).

Rearranging into a single inner product statement

We want to work towards expressing the constraints in terms of a single inner product, so that we can use the inner product argument to represent it in a more compact and efficient-to-verify form. To do that we will rearrange the above equation so that terms involving \({\mathbf{a}}_{L}\) and \({\mathbf{a}}_{O}\) appear only on the left-hand side, terms involving \({\mathbf{a}}_{R}\) appear only on the right-hand side, and non-secret terms (which the verifier can compute on its own) are factored out into a new term \(\delta(y, z) \).

This arrangement will allow us to verify relations between the resulting inner product, its vectors and the commitments to high-level and low-level variables.

The choice of placing \({\mathbf{a}}_{O}\) on the same side with \({\mathbf{a}}_{L}\) is arbitrary: the proof would still work if \({\mathbf{a}}_{O}\) was rearranged on the right-hand side instead.

If we reorder terms, we get:

\[ w_c + \langle \mathbf{w}_V, \mathbf{v} \rangle = \langle \mathbf{a}_L \circ \mathbf{a}_R, \mathbf{y}^n \rangle - \langle \mathbf{a}_O, \mathbf{y}^n \rangle + \langle \mathbf{w}_L, \mathbf{a}_L \rangle + \langle \mathbf{w}_R, \mathbf{a}_R \rangle + \langle \mathbf{w}_O, \mathbf{a}_O \rangle \]

Merge the statements containing \(\mathbf{a}_O \):

\[ w_c + \langle \mathbf{w}_V, \mathbf{v} \rangle = \langle \mathbf{a}_L \circ \mathbf{a}_R, \mathbf{y}^n \rangle + \langle \mathbf{a}_L, \mathbf{w}_L \rangle + \langle \mathbf{a}_O, -\mathbf{y}^n + \mathbf{w}_O \rangle + \langle \mathbf{a}_R, \mathbf{w}_R \rangle \]

Rearrange \(\langle \mathbf{a}_L \circ \mathbf{a}_R, \mathbf{y}^n \rangle\) into \(\langle \mathbf{a}_L, \mathbf{y}^n \circ \mathbf{a}_R \rangle\):

\[ w_c + \langle \mathbf{w}_V, \mathbf{v} \rangle = \langle \mathbf{a}_L, \mathbf{y}^n \circ \mathbf{a}_R \rangle + \langle \mathbf{a}_L, \mathbf{w}_L \rangle + \langle \mathbf{a}_O, -\mathbf{y}^n + \mathbf{w}_O \rangle + \langle \mathbf{a}_R, \mathbf{w}_R \rangle \]

Multiply the \( \langle \mathbf{a}_R, \mathbf{w}_R \rangle \) term by \(\mathbf{y}^n\) on one side of the inner product and by \(\mathbf{y}^{-n}\) on the other side:

\[ w_c + \langle \mathbf{w}_V, \mathbf{v} \rangle = \langle \mathbf{a}_L, \mathbf{y}^n \circ \mathbf{a}_R \rangle + \langle \mathbf{a}_L, \mathbf{w}_L \rangle + \langle \mathbf{a}_O, -\mathbf{y}^n + \mathbf{w}_O \rangle + \langle \mathbf{y}^n \circ \mathbf{a}_R, \mathbf{y}^{-n} \circ \mathbf{w}_R \rangle \]

Merge the statements containing \(\mathbf{y}^n \circ \mathbf{a}_R\):

\[ w_c + \langle \mathbf{w}_V, \mathbf{v} \rangle = \langle \mathbf{a}_L + \mathbf{y}^{-n} \circ \mathbf{w}_R, \mathbf{y}^n \circ \mathbf{a}_R \rangle + \langle \mathbf{a}_L, \mathbf{w}_L \rangle + \langle \mathbf{a}_O, -\mathbf{y}^n + \mathbf{w}_O \rangle \]

Add \(\delta(y, z) = \langle \mathbf{y}^{-n} \circ \mathbf{w}_R, \mathbf{w}_L \rangle \) to both sides:

\[ \begin{aligned} w_c + \langle \mathbf{w}_V, \mathbf{v} \rangle + \delta(y, z) &= \langle \mathbf{a}_L + \mathbf{y}^{-n} \circ \mathbf{w}_R, \mathbf{y}^n \circ \mathbf{a}_R \rangle + \langle \mathbf{a}_L, \mathbf{w}_L \rangle \\ &+ \langle \mathbf{a}_O, -\mathbf{y}^n + \mathbf{w}_O \rangle + \langle \mathbf{y}^{-n} \circ \mathbf{w}_R, \mathbf{w}_L \rangle \end{aligned} \]

Merge the terms containing \(\mathbf{w}_L\):

\[ \begin{aligned} w_c + \langle \mathbf{w}_V, \mathbf{v} \rangle + \delta(y, z) &= \langle \mathbf{a}_L + \mathbf{y}^{-n} \circ \mathbf{w}_R, \mathbf{y}^n \circ \mathbf{a}_R \rangle + \langle \mathbf{a}_L + \mathbf{y}^{-n} \circ \mathbf{w}_R, \mathbf{w}_L \rangle + \langle \mathbf{a}_O, -\mathbf{y}^n + \mathbf{w}_O \rangle \end{aligned} \]

Merge the terms containing \(\mathbf{a}_L + \mathbf{y}^{-n} \circ \mathbf{w}_R\):

\[ w_c + \langle \mathbf{w}_V, \mathbf{v} \rangle + \delta(y, z) = \langle \mathbf{a}_L + \mathbf{y}^{-n} \circ \mathbf{w}_R, \mathbf{y}^n \circ \mathbf{a}_R + \mathbf{w}_L \rangle + \langle \mathbf{a}_O, -\mathbf{y}^n + \mathbf{w}_O \rangle \]

We want to combine a sum of two inner products \(\langle a, b \rangle + \langle c, d \rangle\) into one inner product. To do that, we will take a term of a polynomial formed by a linear combination of these products with respect to a challenge scalar \(x\). Specifically, the 2nd degree term of \(\langle a \cdot x + c \cdot x^2, b \cdot x + d \cdot x^0 \rangle\) is equal to \( \langle a, b \rangle + \langle c, d \rangle\).
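Expanding the product makes this explicit: \[ \langle a \cdot x + c \cdot x^2, b \cdot x + d \cdot x^0 \rangle = \langle a, d \rangle \cdot x + \big( \langle a, b \rangle + \langle c, d \rangle \big) \cdot x^2 + \langle c, b \rangle \cdot x^3, \] so the desired sum appears as the 2nd-degree coefficient, while the cross terms \(\langle a, d \rangle\) and \(\langle c, b \rangle\) are pushed into the 1st and 3rd degrees, where they can be committed separately.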

To apply this technique to the above equation we assign \(a, b, c, d\) the following values:

\[ \begin{aligned} a &= \mathbf{a}_L + \mathbf{y}^{-n} \circ \mathbf{w}_R \\ b &= \mathbf{y}^n \circ \mathbf{a}_R + \mathbf{w}_L\\ c &= \mathbf{a}_O \\ d &= -\mathbf{y}^n + \mathbf{w}_O \end{aligned} \]

Next, we combine \(a, b, c, d\) using the equation \( \langle a \cdot x + c \cdot x^2, b \cdot x + d \cdot x^0 \rangle \). When we take its 2nd-degree term, we recover a single inner product, which was our original goal:

\[ w_c + \langle \mathbf{w}_V, \mathbf{v} \rangle + \delta(y, z) = \text{2nd degree of } \langle (\mathbf{a}_L + \mathbf{y}^{-n} \circ \mathbf{w}_R) \cdot x + \mathbf{a}_O \cdot x^2, (\mathbf{y}^n \circ \mathbf{a}_R + \mathbf{w}_L) \cdot x + (-\mathbf{y}^n + \mathbf{w}_O) \cdot x^0 \rangle \]

Distribute the \(x\) values:

\[ w_c + \langle \mathbf{w}_V, \mathbf{v} \rangle + \delta(y, z) = \text{2nd degree of } \langle \mathbf{a}_L \cdot x + \mathbf{y}^{-n} \circ \mathbf{w}_R \cdot x + \mathbf{a}_O \cdot x^2, \mathbf{y}^n \circ \mathbf{a}_R \cdot x + \mathbf{w}_L \cdot x - \mathbf{y}^n + \mathbf{w}_O \rangle \]

This is equivalent to the equation we started with, but has a single inner product with \({\mathbf{a}}_{L}\) and \({\mathbf{a}}_{O}\) on the left, \({\mathbf{a}}_{R}\) on the right, and non-secret terms factored out.

Blinding the inner product

In the current form, the vectors in the inner-product reveal information about the values \({\mathbf{a}}_{L}\), \({\mathbf{a}}_{R}\) and \({\mathbf{a}}_{O}\), which in turn reveal the values \(\mathbf{v}\). And since the inner-product argument is not zero-knowledge, the vectors cannot be used in the inner-product argument either. To solve this problem, the prover chooses two vectors of blinding factors

\[ {\mathbf{s}}_{L}, {\mathbf{s}}_{R} \;{\xleftarrow{\$}}\; {\mathbb Z_p}^{n}, \]

and uses them to blind \(\mathbf{a}_L\) and \(\mathbf{a}_R\) within left and right sides of the inner product respectively.

\[ \begin{aligned} \mathbf{a}_{L} &\leftarrow \mathbf{a}_{L} + \mathbf{s}_{L} \cdot x^2 \\ \mathbf{a}_{R} &\leftarrow \mathbf{a}_{R} + \mathbf{s}_{R} \cdot x^2 \end{aligned} \]

The blinding factors are multiplied by \(x^2\) so that when the substitution is made into the \(\mathbf{l}(x)\) and \(\mathbf{r}(x)\) equations, \({\mathbf{s}}_{L}\) appears in the 3rd degree of \(x\) in \(\mathbf{l}(x)\), and \({\mathbf{s}}_{R}\) appears in the 3rd degree of \(x\) in \(\mathbf{r}(x)\). As a result, the blinding factors will not interfere with the value \(t_2\), which is the 2nd degree of \(\langle {\mathbf{l}}(x), {\mathbf{r}}(x) \rangle\).

Note: multiplication outputs \(\mathbf{a}_O\) do not need their own blinding factors: they are automatically blinded by \(\mathbf{s}_{L}\) since both \(\mathbf{a}_L\) and \(\mathbf{a}_O\) are terms in the same (left) side of the inner product.

We now construct vector polynomials \({\mathbf{l}}(x)\) and \({\mathbf{r}}(x)\), which represent the left and right sides of the input to the inner-product equation, with these new definitions: \[ \begin{aligned} {\mathbf{l}}(x) &= (\mathbf{a}_L + \mathbf{s}_L \cdot x^2) \cdot x + \mathbf{y}^{-n} \circ \mathbf{w}_R \cdot x + \mathbf{a}_O \cdot x^2 \\ &= \mathbf{a}_L \cdot x + \mathbf{s}_L \cdot x^3 + \mathbf{y}^{-n} \circ \mathbf{w}_R \cdot x + \mathbf{a}_O \cdot x^2 \\ {\mathbf{r}}(x) &= \mathbf{y}^n \circ (\mathbf{a}_R + \mathbf{s}_R \cdot x^2) \cdot x + \mathbf{w}_L \cdot x - \mathbf{y}^n + \mathbf{w}_O \\ &= \mathbf{y}^n \circ \mathbf{a}_R \cdot x + \mathbf{y}^n \circ \mathbf{s}_R \cdot x^3 + \mathbf{w}_L \cdot x - \mathbf{y}^n + \mathbf{w}_O \end{aligned} \]

When we take the inner product of \({\mathbf{l}}(x)\) and \({\mathbf{r}}(x)\), we get:

\[ \begin{aligned} t(x) &= {\langle {\mathbf{l}}(x), {\mathbf{r}}(x) \rangle} \\ &= t_{1} x + t_{2} x^{2} + t_{3} x^{3} + t_{4} x^{4} + t_{5} x^{5} + t_{6} x^{6} \\ &= \sum_{i=1}^{6} t_i x^i \end{aligned} \]
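
Each \(t_i\) is a sum of inner products of coefficient vectors of \(\mathbf{l}(x)\) and \(\mathbf{r}(x)\): \(t_i = \sum_{j+k=i} \langle \mathbf{l}_j, \mathbf{r}_k \rangle\). A toy sketch of this convolution (plain `u64` arithmetic modulo a prime of our choosing):

```rust
const P: u64 = 1_000_003; // toy prime modulus (an assumption for illustration)

fn inner(a: &[u64], b: &[u64]) -> u64 {
    a.iter().zip(b).fold(0, |acc, (&x, &y)| (acc + x * y) % P)
}

/// Multiply vector polynomials given as coefficient vectors indexed by
/// degree, returning the scalar coefficients t_0 .. t_{deg l + deg r}.
fn t_coeffs(l: &[Vec<u64>], r: &[Vec<u64>]) -> Vec<u64> {
    let mut t = vec![0u64; l.len() + r.len() - 1];
    for (j, lj) in l.iter().enumerate() {
        for (k, rk) in r.iter().enumerate() {
            t[j + k] = (t[j + k] + inner(lj, rk)) % P;
        }
    }
    t
}

fn main() {
    // l(x) has degrees 1..=3 (its degree-0 coefficient vector is zero),
    // r(x) has degrees 0, 1 and 3, so t(x) has degrees 1..=6 and t_0 = 0.
    let zero = vec![0u64; 2];
    let l = vec![zero.clone(), vec![1, 2], vec![3, 4], vec![5, 6]];
    let r = vec![vec![7, 8], vec![9, 1], zero.clone(), vec![2, 3]];
    let t = t_coeffs(&l, &r);
    assert_eq!(t[0], 0);
    // t_2 collects exactly <l_1, r_1> + <l_2, r_0>.
    assert_eq!(t[2], (inner(&l[1], &r[1]) + inner(&l[2], &r[0])) % P);
    println!("t = {:?}", t);
}
```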

Notice that the second degree of \(t(x)\) does not include any blinding factors (because the blinding factors end up in the third or greater degrees of \(t(x)\)) and only contains the inner product forms of the initial arithmetic gate statements that we are trying to prove:

\[ \begin{aligned} t_2 &= \text{2nd degree of } \langle {\mathbf{l}}(x), {\mathbf{r}}(x) \rangle \\ &= w_c + \langle \mathbf{w}_V, \mathbf{v} \rangle + \delta(y, z) \\ &= \langle \mathbf{a}_L \circ \mathbf{a}_R, \mathbf{y}^n \rangle - \langle \mathbf{a}_O, \mathbf{y}^n \rangle + \langle \mathbf{w}_L, \mathbf{a}_L \rangle + \langle \mathbf{w}_R, \mathbf{a}_R \rangle + \langle \mathbf{w}_O, \mathbf{a}_O \rangle + \delta(y, z) \end{aligned} \]

Proving that \(t_2\) is correct

The prover first forms a commitment to the coefficients of \(t(x)\), then convinces the verifier that these commitments are to the correct \(t(x)\) by evaluating the polynomial at a challenge point \(x\). This proves that \(t(x)\) is well-formed and satisfies the following equations:

\[ \begin{aligned} t(x) &= \sum_{i=1}^{6} x^i t_{i} \\ t_2 &= w_c + \langle \mathbf{w}_V, \mathbf{v} \rangle + \delta(y, z) \\ \end{aligned} \]

We define \(\mathbf{V}\) as the vector of Pedersen commitments to \(\mathbf{v}\), and \(T_i\) as the Pedersen commitment to \(t_i\) for \(i \in [1, 3, 4, 5, 6]\):

\[ \begin{aligned} V_j &= v_j \cdot B + \tilde{v}_j \cdot \widetilde{B} \quad \forall j \in [1, m] \\ T_i &= t_i \cdot B + \tilde{t}_i \cdot \widetilde{B} \quad \forall i \in [1, 3, 4, 5, 6] \\ \end{aligned} \]

The prover forms these commitments, and sends them to the verifier. These commitments are related to each other and to \(t(x)\) by the following diagram:

\[ \begin{aligned} t(x) B &\quad &= \quad & x^2 \langle \mathbf{w}_V, \mathbf{v} \rangle \cdot B & \quad &+ \quad & x^2 \big(w_c + \delta(y,z)\big) B &\quad &+\quad & \sum_{i = 1,3,4,5,6} &x^i t_{i} B \\ + &\quad & \quad & + & \quad & \quad & + &\quad & \quad & & + \\ {\tilde{t}}(x) {\widetilde{B}} &\quad &= \quad & x^2 \langle \mathbf{w}_V, \tilde{\mathbf{v}} \rangle \cdot \widetilde{B} & \quad &+ \quad & 0 {\widetilde{B}} &\quad &+\quad & \sum_{i = 1,3,4,5,6} &x^i \tilde{t_{i}} {\widetilde{B}} \\ \shortparallel &\quad & \quad & \shortparallel & \quad & \quad & \shortparallel &\quad & \quad & & \shortparallel \\ &\quad &= \quad & x^2 \langle \mathbf{w}_V, \mathbf{V} \rangle & \quad &+ \quad & x^2 \big(w_c + \delta(y,z)\big) B &\quad &+\quad & \sum_{i = 1,3,4,5,6} &x^i T_{i} \end{aligned} \]

Notice that the sum of each column is a commitment to the variable in the top row using the blinding factor in the second row. The sum of all of the columns is \(t(x) B + {\tilde{t}}(x) {\widetilde{B}}\), a commitment to the value of \(t\) at the point \(x\), using the synthetic blinding factor²: \[ {\tilde{t}}(x) = x^2 \langle \mathbf{w}_V, \tilde{\mathbf{v}} \rangle + \sum_{i = 1,3,4,5,6} x^i \tilde{t}_{i} \]

To convince the verifier that \(t(x) = \sum_{i=1}^{6} x^i t_{i}\) with \(t_2\) as claimed, the prover sends the opening \(t(x), {\tilde{t}}(x)\) to the verifier, who uses the bottom row of the diagram to check consistency: \[ t(x) B + {\tilde{t}}(x) {\widetilde{B}} \stackrel{?}{=} x^2 \langle \mathbf{w}_V, \mathbf{V} \rangle + x^2 \big(w_c + \delta(y,z)\big) B + \sum_{i = 1,3,4,5,6} x^i T_{i} \]
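
To see the diagram's algebra in isolation, here is a toy check where each “point” is replaced by its pair of exponents \((\text{coefficient of } B, \text{coefficient of } \widetilde{B})\), i.e. we pretend discrete logs are known. All numbers are made up; this demonstrates only that the columns sum correctly, not any cryptographic property:

```rust
const P: u64 = 1_000_003; // toy prime modulus (an assumption for illustration)

/// A "point" as its exponent pair (value part, blinding part) over (B, B~).
#[derive(Clone, Copy, PartialEq, Debug)]
struct Pt(u64, u64);

fn add(a: Pt, b: Pt) -> Pt { Pt((a.0 + b.0) % P, (a.1 + b.1) % P) }
fn mul(s: u64, a: Pt) -> Pt { Pt(s * a.0 % P, s * a.1 % P) }

fn inner(a: &[u64], b: &[u64]) -> u64 {
    a.iter().zip(b).fold(0, |acc, (&x, &y)| (acc + x * y) % P)
}

fn main() {
    // Arbitrary coefficients t_i and blindings for i in {1,3,4,5,6} (index 2 unused).
    let t = [0u64, 11, 0, 13, 14, 15, 16];
    let tb = [0u64, 21, 0, 23, 24, 25, 26];
    let (w_v, v, vb) = ([2u64, 3], [5u64, 7], [8u64, 9]);
    let (w_c, delta, x) = (4u64, 6u64, 10u64);

    let mut x_pows = [1u64; 7];
    for i in 1..7 { x_pows[i] = x_pows[i - 1] * x % P; }

    // t_2 and its blinding are determined by the high-level commitments.
    let t2 = (w_c + inner(&w_v, &v) + delta) % P;
    let t2_b = inner(&w_v, &vb);

    // Left side of the check: t(x)B + t~(x)B~.
    let (mut tx, mut tbx) = (x_pows[2] * t2 % P, x_pows[2] * t2_b % P);
    for i in [1, 3, 4, 5, 6] {
        tx = (tx + x_pows[i] * t[i]) % P;
        tbx = (tbx + x_pows[i] * tb[i]) % P;
    }
    let lhs = Pt(tx, tbx);

    // Right side: x^2 <w_V, V> + x^2 (w_c + delta) B + sum_{i != 2} x^i T_i.
    let mut rhs = mul(x_pows[2] * ((w_c + delta) % P) % P, Pt(1, 0));
    for j in 0..2 { rhs = add(rhs, mul(x_pows[2] * w_v[j] % P, Pt(v[j], vb[j]))); }
    for i in [1, 3, 4, 5, 6] { rhs = add(rhs, mul(x_pows[i], Pt(t[i], tb[i]))); }

    assert_eq!(lhs, rhs);
    println!("t(x) commitment check passed");
}
```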

Proving that \(\mathbf{l}(x)\), \(\mathbf{r}(x)\) are correct

We want to relate \({\mathbf{l}}(x)\) and \({\mathbf{r}}(x)\) to commitments to \({\mathbf{a}}_{L}\), \({\mathbf{a}}_{R}\), \({\mathbf{s}}_{L}\), and \({\mathbf{s}}_{R}\). Since \[ {\mathbf{r}}(x) = \mathbf{y}^n \circ \mathbf{a}_R \cdot x + \mathbf{y}^n \circ \mathbf{s}_R \cdot x^3 + \mathbf{w}_L \cdot x - \mathbf{y}^n + \mathbf{w}_O \] we need commitments to \({\mathbf{y}}^{n} \circ {\mathbf{a}}_{R}\) and \({\mathbf{y}}^{n} \circ {\mathbf{s}}_{R}\). However, the prover must form commitments before receiving the verifier’s challenge \(y\), so they can only commit to \({\mathbf{a}}_{R}\) and \({\mathbf{s}}_{R}\), requiring the verifier to transmute the prover’s commitment over \( ({\mathbf{a}}_{L},{\mathbf{a}}_{R}, {\widetilde{a}}) \) into a commitment over \( ({\mathbf{a}}_{L}, {\mathbf{y}}^{n} \circ {\mathbf{a}}_{R}, {\widetilde{a}}) \) (and similarly for \({\mathbf{s}}_{R}\)).

To do this, notice that \[ \begin{aligned} \text{commitment over }({\mathbf{a}}_{L}, {\mathbf{a}}_{R}, {\widetilde{a}}) &= {\langle {\mathbf{a}}_{L}, {\mathbf{G}} \rangle} + {\langle {\mathbf{a}}_{R}, {\mathbf{H}} \rangle} + {\widetilde{a}} {\widetilde{B}} \\ &= {\langle {\mathbf{a}}_{L}, {\mathbf{G}} \rangle} + {\langle {\mathbf{y}}^{n} \circ {\mathbf{a}}_{R}, {\mathbf{y}}^{-n} \circ {\mathbf{H}} \rangle} + {\widetilde{a}} {\widetilde{B}}, \end{aligned} \] so that by changing generators to \(\hat{\mathbf{H}} = {\mathbf{y}}^{-n} \circ {\mathbf{H}}\), the point which is a commitment to \(({\mathbf{a}}_{L}, {\mathbf{a}}_{R}, {\widetilde{a}})\) with respect to \(({\mathbf{G}}, {\mathbf{H}}, {\widetilde{a}})\) is transmuted into a commitment to \(({\mathbf{a}}_{L}, {\mathbf{y}}^{n} \circ {\mathbf{a}}_{R}, {\widetilde{a}})\) with respect to \(({\mathbf{G}}, \hat{\mathbf{H}}, {\widetilde{a}})\).

We define the following commitments over the components of \({\mathbf{l}}(x)\) and \({\mathbf{r}}(x)\):

\[ \begin{aligned} A_I &= \widetilde{B} \cdot \tilde{a} + \langle \mathbf{G} , \mathbf{a}_L \rangle + \langle \mathbf{H}, \mathbf{a}_R \rangle \\ A_O &= \widetilde{B} \cdot \tilde{o} + \langle \mathbf{G} , \mathbf{a}_O \rangle \\ W_L &= \langle \mathbf{y}^{-n} \circ \mathbf{w}_L, \mathbf{H} \rangle \\ W_R &= \langle \mathbf{y}^{-n} \circ \mathbf{w}_R, \mathbf{G} \rangle \\ W_O &= \langle \mathbf{y}^{-n} \circ \mathbf{w}_O, \mathbf{H} \rangle \\ S &= \widetilde{B} \cdot \tilde{s} + \langle \mathbf{G} , \mathbf{s}_L \rangle + \langle \mathbf{H}, \mathbf{s}_R \rangle \end{aligned} \]

The prover forms these commitments, and sends them to the verifier.

For reference, here are the equations for \({\mathbf{l}}(x)\) and for \({\mathbf{r}}(x)\) multiplied by \(\mathbf{y}^{-n}\):

\[ \begin{aligned} \mathbf{l}(x) &= \mathbf{a}_L \cdot x + \mathbf{s}_L \cdot x^3 + \mathbf{y}^{-n} \circ \mathbf{w}_R \cdot x + \mathbf{a}_O \cdot x^2 \\ \mathbf{y}^{-n} \circ \mathbf{r}(x) &= \mathbf{a}_R \cdot x + \mathbf{s}_R \cdot x^3 + \mathbf{y}^{-n} \circ \mathbf{w}_L \cdot x - \mathbf{1}^n + \mathbf{y}^{-n} \circ \mathbf{w}_O \end{aligned} \]

To relate the prover’s commitments to \({\mathbf{l}}(x)\) and \({\mathbf{r}}(x)\), we use the following diagram:

\[ \begin{aligned} {\langle {\mathbf{l}}(x), {\mathbf{G}} \rangle} &\quad &= \quad & {\langle {\mathbf{a}}_L \cdot x, {\mathbf{G}} \rangle} & \quad &+ \quad & {\langle {\mathbf{a}}_O \cdot x^2, {\mathbf{G}} \rangle} & \quad &+ \quad& \langle \mathbf{y}^{-n} \circ \mathbf{w}_R \cdot x , \mathbf{G} \rangle &\quad &+\quad & \langle \mathbf{s}_L \cdot x^3 , \mathbf{G} \rangle \\ + &\quad & \quad & + & \quad & \quad & + & \quad & \quad& + &\quad & \quad & + \\ {\langle {\mathbf{r}}(x), \hat{\mathbf{H}} \rangle} &\quad &= \quad & \langle \mathbf{a}_R \cdot x, {\mathbf{H}} \rangle & \quad &+ \quad & - \langle \mathbf{1}, \mathbf{H} \rangle & \quad &+ \quad& \langle \mathbf{y}^{-n} \circ (\mathbf{w}_L \cdot x + \mathbf{w}_O), \mathbf{H} \rangle &\quad &+\quad & \langle \mathbf{s}_R \cdot x^3 , \mathbf{H} \rangle \\ + &\quad & \quad & + & \quad & \quad & + & \quad & \quad& + &\quad & \quad & + \\ \tilde{e} \cdot \widetilde{B} &\quad &= \quad & \tilde{a} \cdot x \cdot \widetilde{B} & \quad &+ \quad & \tilde{o} \cdot x^2 \cdot \widetilde{B} & \quad &+ \quad& 0 &\quad &+\quad & \tilde{s} \cdot x^3 \cdot \widetilde{B} \\ \shortparallel &\quad & \quad & \shortparallel & \quad & \quad & \shortparallel & \quad & \quad& \shortparallel &\quad & \quad & \shortparallel \\ &\quad &= \quad & x \cdot A_I & \quad &+ \quad & x^2 \cdot A_O - \langle \mathbf{1}, \mathbf{H} \rangle & \quad &+ \quad& W_L \cdot x + W_R \cdot x + W_O &\quad &+\quad & x^3 \cdot S \end{aligned} \]

We can interpret the rows and columns similarly to the previous diagram: the sum of each column is a vector Pedersen commitment with left and right halves from the first and second rows respectively and blinding factor from the third row. The sum of all of the columns is a vector Pedersen commitment to \({\mathbf{l}}(x)\) and \({\mathbf{r}}(x)\) with synthetic blinding factor \({\widetilde{e}}\).

To convince the verifier that \(t(x) = {\langle {\mathbf{l}}(x), {\mathbf{r}}(x) \rangle}\), the prover sends \({\widetilde{e}}\) to the verifier, who uses the bottom row to compute \[ \begin{aligned} P &= -{\widetilde{e}} {\widetilde{B}} + x \cdot A_I + x^2 \cdot A_O - \langle \mathbf{1}, \mathbf{H} \rangle + W_L \cdot x + W_R \cdot x + W_O + x^3 \cdot S \\ \end{aligned} \] If the prover is honest, this is \(P = {\langle {\mathbf{l}}(x), {\mathbf{G}} \rangle} + {\langle {\mathbf{r}}(x), \hat{\mathbf{H}} \rangle}\), so the verifier uses \(P\) and \(t(x)\) as inputs to the inner product protocol to verify that \(t(x) = {\langle {\mathbf{l}}(x), {\mathbf{r}}(x) \rangle}\).

Verifying challenge-based constraints

Some gadgets can be made more efficient if they can sample random challenges when building a constraint system, provided certain variables are committed. For instance, a shuffle gadget can use just \(2(k-1)\) multipliers for \(k\) inputs and \(k\) outputs to implement the check that two polynomials are equal up to a permutation of their roots, but it is only sound as long as the roots of the polynomials (the inputs/outputs to the shuffle) are fixed before the evaluation point is chosen.

To facilitate this, we split the vectors of multipliers and their blinding factors into two subvectors of lengths \(n'\) and \(n''\):

\[ \begin{aligned} n &= n' + n'' \\ {\mathbf{a}}_L &= {\mathbf{a}}_L' || {\mathbf{a}}_L'' \\ {\mathbf{a}}_R &= {\mathbf{a}}_R' || {\mathbf{a}}_R'' \\ {\mathbf{a}}_O &= {\mathbf{a}}_O' || {\mathbf{a}}_O'' \\ {\mathbf{s}}_L &= {\mathbf{s}}_L' || {\mathbf{s}}_L'' \\ {\mathbf{s}}_R &= {\mathbf{s}}_R' || {\mathbf{s}}_R'' \\ \end{aligned} \]

The vectors of flattened weights are split accordingly:

\[ \begin{aligned} \mathbf{w}_L &= {\mathbf{w}}_L' || {\mathbf{w}}_L'' \\ \mathbf{w}_R &= {\mathbf{w}}_R' || {\mathbf{w}}_R'' \\ \mathbf{w}_O &= {\mathbf{w}}_O' || {\mathbf{w}}_O'' \\ \end{aligned} \]

The statements of each slice of the vectors \(\mathbf{l}(x), \mathbf{r}(x)\) become:

\[ \begin{aligned} \mathbf{l}'(x) &= \mathbf{a}'_L \cdot x + \mathbf{s}_L' \cdot x^3 + \mathbf{y}^{-n'} \circ \mathbf{w}_R' \cdot x + \mathbf{a}_O' \cdot x^2\\ \mathbf{l}''(x) &= \mathbf{a}''_L \cdot x + \mathbf{s}_L'' \cdot x^3 + y^{-n'}\mathbf{y}^{-n''} \circ \mathbf{w}_R'' \cdot x + \mathbf{a}_O'' \cdot x^2\\ \mathbf{y}^{-n'} \circ \mathbf{r}'(x) &= \mathbf{a}'_R \cdot x + \mathbf{s}_R' \cdot x^3 + \mathbf{y}^{-n'} \circ \mathbf{w}_L' \cdot x - \mathbf{1}^{n'} + \mathbf{y}^{-n'} \circ \mathbf{w}_O'\\ y^{-n'} \mathbf{y}^{-n''} \circ \mathbf{r}''(x) &= \mathbf{a}''_R \cdot x + \mathbf{s}_R'' \cdot x^3 + y^{-n'}\mathbf{y}^{-n''} \circ \mathbf{w}_L'' \cdot x - \mathbf{1}^{n''} + y^{-n'}\mathbf{y}^{-n''} \circ \mathbf{w}_O''\\ \end{aligned} \]

Now we need to express the statements above using independent commitments to the subvectors \(\mathbf{a}'_{L,R,O}\) and \(\mathbf{a}''_{L,R,O}\). The commitments must be independent because the second subvectors are computed using challenges generated only after the first subvectors are determined and committed.

To do that, we split the vectors of generators and form two statements: the first in terms of the commitments to the first subvectors, and the second in terms of the commitments to the second subvectors.

\[ \begin{aligned} \mathbf{G} &= \mathbf{G}' || \mathbf{G}'' \\ \mathbf{H} &= \mathbf{H}' || \mathbf{H}'' \\ \end{aligned} \]

\[ \begin{aligned} {\langle \mathbf{l}'(x), {\mathbf{G}'} \rangle} &\quad &= \quad & x \cdot {\langle \mathbf{a}'_L, \mathbf{G}' \rangle} & \quad &+ \quad x^2 \cdot {\langle \mathbf{a}'_O, \mathbf{G}' \rangle} & \quad &+ \quad \langle x \cdot \mathbf{y}^{-n'} \circ \mathbf{w}_R', \mathbf{G}' \rangle &\quad &+\quad x^3 \cdot \langle \mathbf{s}'_L , \mathbf{G}' \rangle \\ + &\quad & \quad & + & \quad & \quad \quad + & \quad & \quad \quad + &\quad & \quad \quad + \\ {\langle \mathbf{y}^{-n'} \circ \mathbf{r}'(x), {\mathbf{H}'} \rangle} &\quad &= \quad & x \cdot {\langle \mathbf{a}'_R, \mathbf{H}' \rangle} & \quad &- \quad \langle \mathbf{1}, \mathbf{H}' \rangle & \quad &+ \quad \langle \mathbf{y}^{-n'} \circ (x \cdot \mathbf{w}_L' + \mathbf{w}_O'), \mathbf{H}' \rangle &\quad &+\quad x^3 \cdot \langle \mathbf{s}'_R , \mathbf{H}' \rangle \\ &\quad & \quad & & \quad & \quad \quad & \quad & \quad \quad &\quad & \quad \quad \\ {\langle \mathbf{l}''(x), {\mathbf{G}''} \rangle} &\quad &= \quad & x \cdot {\langle \mathbf{a}''_L, \mathbf{G}'' \rangle} & \quad &+ \quad x^2 \cdot {\langle \mathbf{a}''_O, \mathbf{G}'' \rangle} & \quad &+ \quad \langle x \cdot y^{-n'} \mathbf{y}^{-n''} \circ \mathbf{w}_R'', \mathbf{G}'' \rangle &\quad &+\quad x^3 \cdot \langle \mathbf{s}''_L , \mathbf{G}'' \rangle \\ + &\quad & \quad & + & \quad & \quad \quad + & \quad & \quad \quad + &\quad & \quad \quad + \\ {\langle y^{-n'} \mathbf{y}^{-n''} \circ \mathbf{r}''(x), {\mathbf{H}''} \rangle} &\quad &= \quad & x \cdot {\langle \mathbf{a}''_R, \mathbf{H}'' \rangle} & \quad &- \quad \langle \mathbf{1}, \mathbf{H}'' \rangle & \quad &+ \quad \langle y^{-n'} \mathbf{y}^{-n''} \circ (x \cdot \mathbf{w}_L'' + \mathbf{w}_O''), \mathbf{H}'' \rangle &\quad &+\quad x^3 \cdot \langle \mathbf{s}''_R , \mathbf{H}'' \rangle \\ \end{aligned} \]

We need to combine the above statements into one in order to obtain an expression for the complete vectors \(\mathbf{l}(x), \mathbf{r}(x)\). For that we multiply the second statement by a random challenge \(u \in {\mathbb Z_{p}^{\times}}\) and add it to the first statement.

\[ \begin{aligned} {\langle \mathbf{l}'(x), {\mathbf{G}'} \rangle} &\quad &= \quad & x \cdot {\langle \mathbf{a}'_L, \mathbf{G}' \rangle} & \quad &+ \quad x^2 \cdot {\langle \mathbf{a}'_O, \mathbf{G}' \rangle} & \quad &+ \quad \langle x \cdot \mathbf{y}^{-n'} \circ \mathbf{w}_R', \mathbf{G}' \rangle &\quad &+\quad x^3 \cdot \langle \mathbf{s}'_L , \mathbf{G}' \rangle \\ + &\quad & \quad & + & \quad & \quad \quad + & \quad & \quad \quad + &\quad & \quad \quad + \\ {\langle \mathbf{y}^{-n'} \circ \mathbf{r}'(x), {\mathbf{H}'} \rangle} &\quad &= \quad & x \cdot {\langle \mathbf{a}'_R, \mathbf{H}' \rangle} & \quad &- \quad \langle \mathbf{1}, \mathbf{H}' \rangle & \quad &+ \quad \langle \mathbf{y}^{-n'} \circ (x \cdot \mathbf{w}_L' + \mathbf{w}_O'), \mathbf{H}' \rangle &\quad &+\quad x^3 \cdot \langle \mathbf{s}'_R , \mathbf{H}' \rangle \\ + &\quad & \quad & + & \quad & \quad \quad + & \quad & \quad \quad + &\quad & \quad \quad + \\ {\langle u \cdot \mathbf{l}''(x), {\mathbf{G}''} \rangle} &\quad &= \quad & u \cdot x \cdot {\langle \mathbf{a}''_L, \mathbf{G}'' \rangle} & \quad &+ \quad u \cdot x^2 \cdot {\langle \mathbf{a}''_O, \mathbf{G}'' \rangle} & \quad &+ \quad u \cdot \langle x \cdot y^{-n'} \mathbf{y}^{-n''} \circ \mathbf{w}_R'', \mathbf{G}'' \rangle &\quad &+\quad u \cdot x^3 \cdot \langle \mathbf{s}''_L , \mathbf{G}'' \rangle \\ + &\quad & \quad & + & \quad & \quad \quad + & \quad & \quad \quad + &\quad & \quad \quad + \\ {\langle u \cdot y^{-n'} \mathbf{y}^{-n''} \circ \mathbf{r}''(x), {\mathbf{H}''} \rangle} &\quad &= \quad & u \cdot x \cdot {\langle \mathbf{a}''_R, \mathbf{H}'' \rangle} & \quad &- \quad u \cdot \langle \mathbf{1}, \mathbf{H}'' \rangle & \quad &+ \quad u \cdot \langle y^{-n'} \mathbf{y}^{-n''} \circ (x \cdot \mathbf{w}_L'' + \mathbf{w}_O''), \mathbf{H}'' \rangle &\quad &+\quad u \cdot x^3 \cdot \langle \mathbf{s}''_R , \mathbf{H}'' \rangle \\ \end{aligned} \]

The commitments to the components of \(\mathbf{l}(x)\) and \(\mathbf{r}(x)\) are then split up as follows:

\[ \begin{aligned} A_I' &= \langle \mathbf{G}' , \mathbf{a}_L' \rangle + \langle \mathbf{H}', \mathbf{a}_R' \rangle + \widetilde{B} \cdot \tilde{a}' \\ A_I'' &= \langle \mathbf{G}'' , \mathbf{a}_L'' \rangle + \langle \mathbf{H}'', \mathbf{a}_R'' \rangle + \widetilde{B} \cdot \tilde{a}'' \\ A_O' &= \langle \mathbf{G}' , \mathbf{a}_O' \rangle + \widetilde{B} \cdot \tilde{o}' \\ A_O'' &= \langle \mathbf{G}'' , \mathbf{a}_O'' \rangle + \widetilde{B} \cdot \tilde{o}'' \\ S' &= \langle \mathbf{G}' , \mathbf{s}_L' \rangle + \langle \mathbf{H}', \mathbf{s}_R' \rangle + \widetilde{B} \cdot \tilde{s}' \\ S'' &= \langle \mathbf{G}'' , \mathbf{s}_L'' \rangle + \langle \mathbf{H}'', \mathbf{s}_R'' \rangle + \widetilde{B} \cdot \tilde{s}'' \\ \end{aligned} \]

We can now relate the above commitments to \({\mathbf{l}}(x)\) and \({\mathbf{r}}(x)\) using a new diagram:

\[ \begin{aligned} {\langle {\mathbf{l}}(x), \hat{\mathbf{G}} \rangle} &\quad &= \quad & x \cdot \big( {\langle \mathbf{a}'_L, \mathbf{G}' \rangle} + u \cdot {\langle \mathbf{a}''_L, \mathbf{G}'' \rangle} \big) & \quad &+ \quad x^2 \cdot \big( {\langle \mathbf{a}'_O, \mathbf{G}' \rangle} + u \cdot {\langle \mathbf{a}''_O, \mathbf{G}'' \rangle} \big) & \quad &+ \quad \langle \mathbf{y}^{-n} \circ (\mathbf{w}_R' || u \cdot \mathbf{w}_R'') \cdot x , \mathbf{G} \rangle &\quad &+\quad x^3 \cdot \big( \langle \mathbf{s}'_L , \mathbf{G}' \rangle + u \cdot \langle \mathbf{s}''_L , \mathbf{G}'' \rangle \big) \\ + &\quad & \quad & + & \quad & \quad \quad + & \quad & \quad \quad + &\quad & \quad \quad + \\ {\langle {\mathbf{r}}(x), \hat{\mathbf{H}} \rangle} &\quad &= \quad & x \cdot \big( {\langle \mathbf{a}'_R, \mathbf{H}' \rangle} + u \cdot {\langle \mathbf{a}''_R, \mathbf{H}'' \rangle} \big) & \quad &- \quad \langle \mathbf{1}, \mathbf{H}' \rangle - u \cdot \langle \mathbf{1}, \mathbf{H}'' \rangle & \quad &+ \quad \langle \mathbf{y}^{-n} \circ (\mathbf{w}_L' || u \cdot \mathbf{w}_L'') \cdot x + (\mathbf{w}_O' || u \cdot \mathbf{w}_O''), \mathbf{H} \rangle &\quad &+\quad x^3 \cdot \big( \langle \mathbf{s}'_R , \mathbf{H}' \rangle + u \cdot \langle \mathbf{s}''_R , \mathbf{H}'' \rangle \big) \\ + &\quad & \quad & + & \quad & \quad \quad + & \quad & \quad \quad + &\quad & \quad \quad + \\ \tilde{e} \cdot \widetilde{B} &\quad &= \quad & x \cdot \big( \tilde{a}' \cdot \widetilde{B} + u \tilde{a}'' \cdot \widetilde{B} \big) & \quad &+ \quad x^2 \cdot \big( \tilde{o}' \cdot \widetilde{B} + u \cdot \tilde{o}'' \cdot \widetilde{B} \big) & \quad &+ \quad 0 &\quad &+\quad x^3 \cdot \big( \tilde{s}' \cdot \widetilde{B} + u \tilde{s}'' \cdot \widetilde{B} \big) \\ \shortparallel &\quad & \quad & \shortparallel & \quad & \quad \quad \shortparallel & \quad & \quad \quad \shortparallel &\quad & \quad \quad \shortparallel \\ &\quad &= \quad & x \cdot \big(A_I' + u \cdot A_I'') & \quad &+ \quad x^2 \cdot \big(A_O' + u \cdot A_O'' \big) - \langle \mathbf{1}, \mathbf{H}' \rangle - u \cdot \langle \mathbf{1}, \mathbf{H}'' \rangle & \quad &+ \quad x \cdot \langle \mathbf{w}_L, \hat{\mathbf{H}} \rangle + x \cdot \langle \mathbf{w}_R, \hat{\mathbf{G}} \rangle + \langle \mathbf{w}_O, \hat{\mathbf{H}} \rangle &\quad &+\quad x^3 \cdot (S' + u \cdot S'') \end{aligned} \] with generators transmuted using challenges \(y\) and \(u\): \[ \begin{aligned} \hat{\mathbf{G}} &= \mathbf{G}' || (u \cdot \mathbf{G}''), \\ \hat{\mathbf{H}} &= \mathbf{y}^{-n} \circ \big( \mathbf{H}' || (u \cdot \mathbf{H}'') \big). \\ \end{aligned} \]

The sum of each column is a vector Pedersen commitment with left and right halves from the first and second rows respectively and blinding factor from the third row. The sum of all of the columns is a vector Pedersen commitment to \({\mathbf{l}}(x)\) and \({\mathbf{r}}(x)\) with synthetic blinding factor \(\widetilde{e} = (\tilde{a}' + u \tilde{a}'') \cdot x + (\tilde{o}' + u \tilde{o}'') \cdot x^2 + (\tilde{s}' + u \tilde{s}'') \cdot x^3\).

To convince the verifier that \(\mathbf{l}(x)\) and \(\mathbf{r}(x)\) are computed correctly, the prover can send the evaluations \(\mathbf{l}(x), \mathbf{r}(x)\) along with \(\tilde{e}\) to the verifier, who uses the bottom row of the diagram to check the following statement:

\[ \begin{aligned} {\langle {\mathbf{l}}(x), \hat{\mathbf{G}} \rangle} + {\langle {\mathbf{r}}(x), \hat{\mathbf{H}} \rangle} \stackrel{?}{=} &-{\widetilde{e}} {\widetilde{B}} + x \cdot (A_I' + u \cdot A_I'') + x^2 \cdot (A_O' + u \cdot A_O'')\\ &- \langle \mathbf{1}, \mathbf{H}' \rangle - u \cdot \langle \mathbf{1}, \mathbf{H}'' \rangle + x \cdot \langle \mathbf{w}_L, \hat{\mathbf{H}} \rangle + x \cdot \langle \mathbf{w}_R, \hat{\mathbf{G}} \rangle + \langle \mathbf{w}_O, \hat{\mathbf{H}} \rangle + x^3 \cdot (S' + u \cdot S'') \\ \end{aligned} \]

Compressing the proof with an inner product argument

Once the verifier has checked correctness of \(t(x)\), \(\mathbf{l}(x)\) and \(\mathbf{r}(x)\), they can directly compute the inner product to check the relation \(t(x) \stackrel{?}{=} {\langle {\mathbf{l}}(x), {\mathbf{r}}(x) \rangle}\). This, however, would require transmitting \(2n\) 32-byte elements representing the vectors \(\mathbf{l}(x)\) and \(\mathbf{r}(x)\).

To make the proof smaller, the prover will use the inner product argument to indirectly prove the inner product relation using \(t(x)\) and the vectors represented by a commitment \(P = {\langle {\mathbf{l}}(x), \hat{\mathbf{G}} \rangle} + {\langle {\mathbf{r}}(x), \hat{\mathbf{H}} \rangle}\).

The verifier checks the inner product proof with \(P\) computed using the bottom row of the diagram, which proves that the vectors \(\mathbf{l}(x), \mathbf{r}(x)\) are computed correctly: \[ \begin{aligned} P = &-{\widetilde{e}} {\widetilde{B}} + x \cdot (A_I' + u \cdot A_I'') + x^2 \cdot (A_O' + u \cdot A_O'') \\ &- \langle \mathbf{1}, \mathbf{H}' \rangle - u \cdot \langle \mathbf{1}, \mathbf{H}'' \rangle + x \cdot \langle \mathbf{w}_L, \hat{\mathbf{H}} \rangle + x \cdot \langle \mathbf{w}_R, \hat{\mathbf{G}} \rangle + \langle \mathbf{w}_O, \hat{\mathbf{H}} \rangle + x^3 \cdot (S' + u \cdot S'') \\ \end{aligned} \]

If the inner product proof with such \(P\) is correct, the verifier is convinced of two facts: that \(t(x) = {\langle {\mathbf{l}}(x), {\mathbf{r}}(x) \rangle}\), and \(\mathbf{l}(x), \mathbf{r}(x)\) are correct.

Padding \(\mathbf{l}(x)\) and \(\mathbf{r}(x)\) for the inner product proof

The above discussion placed no restriction on the value \(n\). However, the inner product argument requires the input vectors to have a power-of-two length \(n^{+} = 2^k\). To resolve this mismatch, we need to specify how to pad the vectors \(\mathbf{l}(x), \mathbf{r}(x)\) and the related commitments before we can use the inner product argument.

Our goal is to translate the padding of the constraint system into the padding of proof data, so we can keep the constraint system small and perform fewer computations during proving and verification.

We will use the following notation for the padding:

\[ \begin{aligned} n^{+} &= 2^{\lceil \log_2 n \rceil} \\ \mathbf{0} &= [0,...,0] \end{aligned} \]
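
In Rust the padded size is one standard-library call away (a trivial sketch):

```rust
fn main() {
    // n+ = 2^(ceil(log2 n)): round the multiplier count up to a power of two.
    for n in [1usize, 3, 8, 21] {
        let n_plus = n.next_power_of_two();
        assert!(n_plus.is_power_of_two() && n_plus >= n);
        println!("n = {:2} -> n+ = {:2}", n, n_plus);
    }
}
```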

We start by padding the entire constraint system: multipliers are padded with all-zero assignments \(a_{L,j}, a_{R,j}, a_{O,j}\), all-zero blinding factors \(s_{L,j}, s_{R,j}\), and all-zero weights \(W_{R,i,j}, W_{L,i,j}, W_{O,i,j}\), for all constraints \(i \in [0, q)\) and all additional multipliers \(j \in [n,n^{+})\):

\[ \begin{aligned} \mathbf{a}_L^{+} &= \mathbf{a}_L \hspace{0.1cm} || \hspace{0.1cm} \mathbf{0} \\ \mathbf{a}_R^{+} &= \mathbf{a}_R \hspace{0.1cm} || \hspace{0.1cm} \mathbf{0} \\ \mathbf{a}_O^{+} &= \mathbf{a}_O \hspace{0.1cm} || \hspace{0.1cm} \mathbf{0} \\ \mathbf{s}_L^{+} &= \mathbf{s}_L \hspace{0.1cm} || \hspace{0.1cm} \mathbf{0} \\ \mathbf{s}_R^{+} &= \mathbf{s}_R \hspace{0.1cm} || \hspace{0.1cm} \mathbf{0} \\ \mathbf{W}_L^{+} &= \mathbf{W}_L \hspace{0.1cm} || \hspace{0.1cm} [\mathbf{0}, ..., \mathbf{0}] \\ \mathbf{W}_R^{+} &= \mathbf{W}_R \hspace{0.1cm} || \hspace{0.1cm} [\mathbf{0}, ..., \mathbf{0}] \\ \mathbf{W}_O^{+} &= \mathbf{W}_O \hspace{0.1cm} || \hspace{0.1cm} [\mathbf{0}, ..., \mathbf{0}] \\ \end{aligned} \]

As a result, we have to take larger slices of the vectors of generators \(\mathbf{G},\mathbf{H}\) and more powers of the challenge \(y\):

\[ \begin{aligned} \mathbf{G}^{+} &= \mathbf{G} \hspace{0.1cm} || \hspace{0.1cm} [G_n,...,G_{n^{+}-1}] \\ \mathbf{H}^{+} &= \mathbf{H} \hspace{0.1cm} || \hspace{0.1cm} [H_n,...,H_{n^{+}-1}] \\ \mathbf{y}^{n^{+}} &= \mathbf{y}^n \hspace{0.1cm} || \hspace{0.1cm} [y^n,...,y^{n^{+}-1}] \\ \end{aligned} \]

The low-level variables are padded with zeroes, which is equivalent to padding only the second-phase variables (since we pad the combined vector of all variables on the right end). If there are no second-phase variables, we can consider their vector to be of length 0 before padding, but still apply padding to that vector, instead of the vector of first-phase variables.

Since we never pad first-phase variables, it is obvious that the commitments \(A_I', A_O', S'\) remain unchanged.

Commitments to the second-phase low-level variables remain unchanged since the padding scalars are zeroes:

\[ \begin{aligned} {A_I''}^{+} &= \widetilde{B} \cdot \tilde{a}'' + \langle {\mathbf{G}''}^{+}, {\mathbf{a}''}_L^{+} \rangle + \langle {\mathbf{H}''}^{+}, {\mathbf{a}''}_R^{+} \rangle \\ &= \widetilde{B} \cdot \tilde{a}'' + \langle {\mathbf{G}''}, \mathbf{a}_L'' \rangle + \langle {\mathbf{H}''}, \mathbf{a}_R'' \rangle + \langle [G_n, ..., G_{n^{+}-1}], \mathbf{0} \rangle + \langle [H_n, ..., H_{n^{+}-1}], \mathbf{0} \rangle \\ &= \widetilde{B} \cdot \tilde{a}'' + \langle {\mathbf{G}}'', \mathbf{a}_L'' \rangle + \langle {\mathbf{H}}'', \mathbf{a}_R'' \rangle + 0 \\ &= A_I'' \\ \end{aligned} \]

Similarly, \(A_O''\) and \(S''\) are unchanged:

\[ \begin{aligned} {A_O''}^{+} &= A_O'' \\ {S''}^{+} &= S'' \end{aligned} \]

The flattened weight vectors \(\mathbf{w}_{L,R,O}\) are padded with zeroes because the corresponding weights are padded with zeroes: \[ \begin{aligned} \mathbf{w}_L^{+} &= z \mathbf{z}^q \cdot \mathbf{W}_L^{+} &{}={}& (z \mathbf{z}^q \cdot \mathbf{W}_L) || (z \mathbf{z}^q \cdot \mathbf{0}) &{}={}& \mathbf{w}_L || \mathbf{0}, \\ \mathbf{w}_R^{+} &= z \mathbf{z}^q \cdot \mathbf{W}_R^{+} &{}={}& (z \mathbf{z}^q \cdot \mathbf{W}_R) || (z \mathbf{z}^q \cdot \mathbf{0}) &{}={}& \mathbf{w}_R || \mathbf{0}, \\ \mathbf{w}_O^{+} &= z \mathbf{z}^q \cdot \mathbf{W}_O^{+} &{}={}& (z \mathbf{z}^q \cdot \mathbf{W}_O) || (z \mathbf{z}^q \cdot \mathbf{0}) &{}={}& \mathbf{w}_O || \mathbf{0}. \\ \end{aligned} \]

The term \(\delta(y,z)\) remains unchanged because the padded weights are zeroes:

\[ \begin{aligned} \delta(y, z)^{+} &= \langle \mathbf{y}^{-n^{+}} \circ \mathbf{w}_R^{+}, \mathbf{w}_L^{+} \rangle \\ &= \langle \mathbf{y}^{-n} \circ \mathbf{w}_R, \mathbf{w}_L \rangle + \langle [y^{-n},...,y^{-(n^{+}-1)}] \circ \mathbf{0}, \mathbf{0} \rangle \\ &= \langle \mathbf{y}^{-n} \circ \mathbf{w}_R, \mathbf{w}_L \rangle + 0 \\ &= \delta(y, z) \end{aligned} \]

Vector polynomial \(\mathbf{l}(x)\) is padded with zeroes:

\[ \begin{aligned} \mathbf{l}(x)^{+} &= \mathbf{a}_L^{+} \cdot x + \mathbf{s}_L^{+} \cdot x^3 + \mathbf{y}^{-n^{+}} \circ \mathbf{w}_R^{+} \cdot x + \mathbf{a}_O^{+} \cdot x^2 \\ &= \mathbf{a}_L \cdot x + \mathbf{s}_L \cdot x^3 + \mathbf{y}^{-n} \circ \mathbf{w}_R \cdot x + \mathbf{a}_O \cdot x^2 \\ & \hspace{0.5cm} || \hspace{0.1cm} \mathbf{0} \cdot x + \mathbf{0} \cdot x^3 + [y^{-n},...,y^{-(n^{+}-1)}] \circ \mathbf{0} \cdot x + \mathbf{0} \cdot x^2 \\ &= \mathbf{l}(x) || \mathbf{0} \\ \end{aligned} \]

Vector polynomial \(\mathbf{r}(x)\) is padded with additional (negated) powers of \(y\):

\[ \begin{aligned} \mathbf{r}(x)^{+} &= \mathbf{y}^{n^{+}} \circ \mathbf{a}_R^{+} \cdot x + \mathbf{y}^{n^{+}} \circ \mathbf{s}_R^{+} \cdot x^3 + \mathbf{w}_L^{+} \cdot x - \mathbf{y}^{n^{+}} + \mathbf{w}_O^{+} \\ &= \mathbf{y}^n \circ \mathbf{a}_R \cdot x + \mathbf{y}^n \circ \mathbf{s}_R \cdot x^3 + \mathbf{w}_L \cdot x - \mathbf{y}^n + \mathbf{w}_O \\ & \hspace{0.5cm} || \hspace{0.1cm} [y^n,...,y^{n^{+}-1}] \circ \mathbf{0} \cdot x + [y^n,...,y^{n^{+}-1}] \circ \mathbf{0} \cdot x^3 + \mathbf{0} \cdot x - [y^n,...,y^{n^{+}-1}] + \mathbf{0} \\ &= \mathbf{r}(x) || [-y^n,...,-y^{n^{+}-1}] \end{aligned} \]

Transmuted generators are padded as follows:

\[ \begin{aligned} \hat{\mathbf{G}}^{+} &= \mathbf{G}' \hspace{0.1cm} || \hspace{0.1cm} u \cdot \mathbf{G}'' \hspace{0.1cm} || \hspace{0.1cm} u \cdot [G_n,...,G_{n^{+}-1}] \\ \hat{\mathbf{H}}^{+} &= \mathbf{y}^{-n^{+}} \circ \big( \mathbf{H}' \hspace{0.1cm} || \hspace{0.1cm} u \cdot \mathbf{H}'' \hspace{0.1cm} || \hspace{0.1cm} u \cdot [H_n,...,H_{n^{+}-1}]\big) \\ \end{aligned} \]

The commitment \(P\) is also padded:

\[ \begin{aligned} P^{+} &=& &-{\widetilde{e}} {\widetilde{B}} + x \cdot (A_I' + u \cdot A_I'') + x^2 \cdot (A_O' + u \cdot A_O'') \\ && &- \langle \mathbf{1}, \mathbf{H}' \rangle - u \cdot \langle \mathbf{1}, {\mathbf{H}''}^{+} \rangle + x \cdot \langle \mathbf{w}_L || \mathbf{0}, \hat{\mathbf{H}}^{+} \rangle + x \cdot \langle \mathbf{w}_R || \mathbf{0}, \hat{\mathbf{G}}^{+} \rangle + \langle \mathbf{w}_O || \mathbf{0}, \hat{\mathbf{H}}^{+} \rangle + x^3 \cdot (S' + u \cdot S'') \\ &=& &P - u \cdot \langle \mathbf{1}, [H_n,...,H_{n^{+}-1}] \rangle \end{aligned} \]

The inner product \(t(x)\) remains unchanged because the non-zero padding in the right vector gets multiplied with the zero padding in the left vector:

\[ \begin{aligned} t(x)^{+} &= {\langle {\mathbf{l}}(x)^{+}, {\mathbf{r}}(x)^{+} \rangle} \\ &= {\langle {\mathbf{l}}(x), {\mathbf{r}}(x) \rangle} + {\langle \mathbf{0}, [y^n,...,y^{n^{+}-1}] \rangle} \\ &= {\langle {\mathbf{l}}(x), {\mathbf{r}}(x) \rangle} + 0 \\ &= t(x) \end{aligned} \]

This implies that the coefficients \(t_{1}, \dots, t_{6}\) also remain unchanged.
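
A toy check that the zero left-padding cancels the non-zero right-padding (again plain `u64` arithmetic modulo a prime of our choosing):

```rust
const P: u64 = 1_000_003; // toy prime modulus (an assumption for illustration)

fn inner(a: &[u64], b: &[u64]) -> u64 {
    a.iter().zip(b).fold(0, |acc, (&x, &y)| (acc + x * y) % P)
}

fn main() {
    let (n, y) = (3usize, 7u64);
    let n_plus = n.next_power_of_two(); // 4
    let (mut l, mut r) = (vec![2u64, 3, 4], vec![5u64, 6, 8]);
    let t = inner(&l, &r);

    // Pad l(x) with zeroes and r(x) with -y^j for j in [n, n+).
    let mut y_pow = (0..n).fold(1u64, |acc, _| acc * y % P); // y^n
    for _ in n..n_plus {
        l.push(0);
        r.push(P - y_pow); // -y^j mod P
        y_pow = y_pow * y % P;
    }
    assert_eq!(inner(&l, &r), t); // t(x)+ == t(x)
    println!("t(x) unchanged by padding: {}", t);
}
```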

Aggregated Constraint System Proof

(Under development.)

Range proofs can be aggregated naturally, with each statement kept independent. For constraint system proofs, two options exist:

  1. each party can prove satisfiability of their own constraint system (systems can be distinct);
  2. parties can collaborate to prove satisfiability of a single constraint system without having to reveal secrets to each other.

The aggregation of distinct proofs can be done in a very similar way to the aggregation of range proofs and is useful purely for the space savings (just like with the range proofs).

The collaborative construction of a proof of a single constraint system requires a different framework, but is very useful for computations that increase privacy for each party, e.g. by allowing them to mix their inputs, without requiring them to share secrets with each other.


  1. This is because the polynomial in terms of \(y\) is zero at every point if and only if every term of it is zero. The verifier is going to sample a random \(y\) after the prover commits to all the values forming the terms of that polynomial, making the probability that the prover cheated negligible. This trick allows implementing logical AND with any number of terms. 

  2. The blinding factor is synthetic in the sense that it is synthesized from the blinding factors of the other commitments.