Overview

TACEO is creating the Compute Layer Security (CLS) protocol to make blockchain computation encrypted by default. The CLS enables every application to compute on private shared state. At its core will be an MPC-VM capable of producing collaborative SNARKs.

MPC Primer

MPC is a subfield of cryptography that enables multiple parties to jointly compute a function over their combined inputs while keeping those inputs private. Let's dissect that. Similar to ZK, the evaluated function is usually represented as an arithmetic circuit. The inputs are secret-shared among the parties. Every party evaluates the circuit locally on their shares and communicates with the other participants when necessary. After the parties finish their computation, they reconstruct the result of the function without ever revealing the secret inputs to anyone. In practice, the parties computing the function are not necessarily the same parties that provided the inputs.
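To make this concrete, here is a toy additive secret-sharing sketch in Python (the modulus and share count are illustrative only; real protocols work over large cryptographic fields):

```python
# Toy additive secret sharing over a small prime field (illustrative only;
# real MPC protocols use large fields such as BN254's scalar field).
import random

P = 2**61 - 1  # a Mersenne prime, standing in for a cryptographic field modulus

def share(secret: int, n: int = 3) -> list:
    """Split `secret` into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list) -> int:
    """Recombine the shares; fewer than all n shares reveal nothing about the secret."""
    return sum(shares) % P

shares = share(42)
assert reconstruct(shares) == 42
```

Each individual share is a uniformly random field element; only the sum of all shares carries the secret.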

Collaborative SNARKs Primer

Collaborative SNARKs are a very new line of work, emerging in 2022, that combines the best of MPC and SNARKs. They enable multiple parties to jointly create a proof of the correctness of a computation while keeping their individual inputs private. This approach solves the problem of computing on private shared state without relying on a trusted third party. Each party contributes to a distributed protocol that generates a single SNARK proof, attesting to the correctness of their combined computation without revealing any individual inputs. The resulting proof is succinct and can be verified efficiently by anyone, ensuring the integrity of the computation without exposing any sensitive data.

DeFi

Decentralized Finance (DeFi) is transforming the traditional financial landscape by offering open, permissionless, and transparent services. However, privacy and security remain significant challenges. Public on-chain information about a user's financial records, trade intentions, or identity can be used against them, leaving the user with worse terms and economic disadvantages.

For instance, publishing a large limit order on-chain immediately reveals a user's intention to buy or sell a certain asset in high quantities. Market participants know how to use this new piece of information to their advantage. Traditional finance faced the same issue with public stock markets. In response, it introduced so-called Dark Pools, which keep the price impact of large-quantity trades as low as possible by matching trades privately.

In TradFi, Dark Pools work because market participants trust a central entity to run the service. Blockchains aim to get rid of exactly these kinds of intermediaries. However, running a Dark Pool via a public Smart Contract would not work, since all sensitive information would immediately be leaked.

We need to make sure that sensitive information (e.g. a limit order: "I want to buy 10 BTC @ a price of $100k") is kept private. Yet, at some point multiple private inputs need to be combined so that a buy order can be matched with a sell order.

co-SNARKs can compute on multiple encrypted user inputs without leaking any information about the inputs themselves. For the example of on-chain Dark Pools, users could submit their intent to trade large quantities in encrypted form to the MPC network, which processes and matches the trades. Market participants get no informational head start they could exploit for their own advantage, and users get the best price possible.

AI

Bringing AI to blockchains was, until recently, an unfulfilled promise. Verified offchain computation has opened up this design space for the first time. Here, the inference of ML models takes place offchain and only the results, alongside a Zero Knowledge Proof (zk proof), are posted back onchain. The zk proof guarantees the correct execution of the (correct) ML model, so that onchain contracts can rely on the output just as if the execution had taken place directly onchain.

A premise of zk-powered ML inference is that whoever runs the actual computation and generates the zk proof must have all the data at hand. As long as this entity also owns the IP rights to the (trained) model and the user inputs contain no sensitive information, zk alone is sufficient.

In cases where users send prompts to the ML model that shouldn't be shared with anyone, additional encryption techniques need to be added on top of zk. With co-SNARKs, user inputs can be fully shielded from AI service providers without restricting the services they offer.

In general, zk proof generation is hardware-intensive and often requires specialized equipment. ZK cloud providers or proof markets offer proof computation as a service. As a side effect, all data needs to be shared with the prover, which could leak proprietary IP, such as model weights. co-SNARKs could mitigate this issue by splitting the sensitive data (e.g. model weights) into secret shares and distributing them among the MPC network.

Data Ownership

Data per se doesn't contain much value; value is created only when data can be combined and computed on. With handling data, though, comes responsibility. Data protection laws around the globe are getting stricter, exposing data processors to regulatory risks.

Instead of handling data in the plain, co-SNARKs can be used to perform computations on encrypted (secret-shared) data only. As seen in the case of Worldcoin, which handles highly sensitive data in the form of biometric iris scans, the MPC used in co-SNARKs removes the need to store the data itself. The required computation, namely "is this iris hash already part of the dataset?", can still be performed.

Data can remain under the user's control while still contributing to economic activity through computations on that data. co-SNARKs enable composable private onchain state. Our demo application, the Max Pick Challenge, demonstrates how user input is kept private while still being used and compared in the collaborative guessing game. The highest unique guess can only be determined when all guesses are compared to each other. However, the game only works if the actual guesses are not leaked to the public. With today's blockchain solutions one wouldn't be able to build these types of applications; co-SNARKs open up an entirely new design space for how data can be brought onchain and used there.

Gaming

Blockchain gaming aims to combine the benefits of decentralization and user empowerment with a great in-game experience. The fully transparent and auditable nature of blockchains can contribute a lot to fair game designs. However, certain games require a degree of privacy, or the game doesn't work as intended. If an onchain transaction contains a player's next move, it is accessible to everyone else, including the player's opponent. If the game design then gives the opponent the chance to respond to that move before it is fully executed, the first player is at a big disadvantage, essentially leaking their plan to the opponent.

A simple example is the pen & paper game Battleship. Both players secretly position their battleships on a grid. Each round, one player "attacks" a position on the opponent's grid, trying to hit a battleship. If the positioning of the ships took place via public onchain transactions, all information would already be leaked, and the game couldn't be played.

Consequently, information like this must be kept secret while still being "computable". co-SNARKs provide both. The players would secret-share the ship positions with the MPC nodes. Every "attack" is also sent to the nodes, which collaboratively compute whether a ship was hit or not.

From simple pen & paper to highly sophisticated strategy games, co-SNARKs can be the missing piece to boost the creation and adoption of onchain games.

Quick Start

Collaborative Circom is an implementation of collaborative SNARKs, with a focus on the Circom framework. In contrast to traditional SNARKs, which are run by a single prover, collaborative SNARKs are executed using a multiparty computation protocol.

If you just want to get your hands dirty as fast as possible, here is a run-down on how to collaboratively prove the Multiplier2 example from the Circom documentation.

First of all, here is the relevant Circom file:

pragma circom 2.0.0;

/*This circuit template checks that c is the multiplication of a and b.*/

template Multiplier2 () {

   // Declaration of signals.
   signal input a;
   signal input b;
   signal output c;

   // Constraints.
   c <== a * b;
}
component main{public [b]} = Multiplier2();

This circuit proves that we know two numbers that factor the output number c. We also reveal one of the numbers we used to factor c. This is not really impressive, but we stick to the classics for explanations! Copy the code and put it in a file named multiplier2.circom.

Compile the Circuit

In the first step, we compile an .r1cs file using Circom and create a verification/proving key using SnarkJS. To compile the .r1cs file open your terminal (after installing Circom) and type:

circom multiplier2.circom --r1cs

You will find a file called multiplier2.r1cs in your working folder. To create the keys you can either follow the Circom documentation, or download the two keys from our GitHub, where we created the keys already (you will need multiplier2.zkey and verification_key.json).

Split the Input

Ok, now that we have finished the setup, we need to prepare the inputs for the witness extension. If you have read the Circom documentation (or used Circom in the past), you will remember a step between compiling the circuit and the actual proving: the witness extension (or "computing the witness", as Circom calls it).

We prepare an input file and call it input.json:

{"a": "3", "b": "11"}

Remember that b is a public input, as defined by our circuit.

As we want to execute an MPC protocol, we have to split the input for the parties. At the moment we support 3 parties for the witness extension. To do that, execute the following command:

$ mkdir out

$ co-circom split-input --circuit multiplier2.circom --input input.json --protocol REP3 --curve BN254 --out-dir out/
INFO co_circom: 275: Wrote input share 0 to file out/input.json.0.shared
INFO co_circom: 275: Wrote input share 1 to file out/input.json.1.shared
INFO co_circom: 275: Wrote input share 2 to file out/input.json.2.shared
INFO co_circom: 277: Split input into shares successfully

$ ls out/
input.json.0.shared
input.json.1.shared
input.json.2.shared

This command secret shares the private inputs (everything that is not explicitly public) and creates a .json file for each of the three parties, containing the shared and the public values.
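Conceptually, split-input behaves like the following Python sketch (simplified: the real tool uses the REP3 replicated format, so each party actually receives two additive shares; the modulus below is the BN254 scalar field):

```python
import random

# BN254 scalar field modulus
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def split_input(inputs: dict, public: set, n: int = 3) -> list:
    """Secret-share private inputs; copy public inputs to every party in plain."""
    party_files = [dict() for _ in range(n)]
    for name, value in inputs.items():
        if name in public:
            for f in party_files:
                f[name] = value          # public values are replicated as-is
        else:
            shares = [random.randrange(P) for _ in range(n - 1)]
            shares.append((int(value) - sum(shares)) % P)
            for f, s in zip(party_files, shares):
                f[name] = s              # each party sees only one random-looking share
    return party_files

files = split_input({"a": "3", "b": "11"}, public={"b"})
assert all(f["b"] == "11" for f in files)
assert sum(int(f["a"]) for f in files) % P == 3
```

No single output file reveals anything about a; only all shares together reconstruct it.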

Witness Extension

Now we have to compute the extended witness. In a real-world setting you would have to send the input files from the previous step to the parties.

To achieve that we need another config file for every party, namely the network config (you can read an in-depth explanation of the config here). You can copy-paste the config from here and call it party0.toml for party 0, and so on:

my_id = 0
bind_addr = "0.0.0.0:10000"
key_path = "data/key0.der"
[[parties]]
id = 0
dns_name = "localhost:10000"
cert_path = "data/cert0.der"
[[parties]]
id = 1
dns_name = "localhost:10001"
cert_path = "data/cert1.der"
[[parties]]
id = 2
dns_name = "localhost:10002"
cert_path = "data/cert2.der"

You can download the TLS certificates from our GitHub and put them under data/.

We move the .toml files to configs/ and execute the following command (for every party).

$ co-circom generate-witness --input out/input.json.0.shared --circuit multiplier2.circom --protocol REP3 --curve BN254 --config configs/party0.toml --out out/witness.wtns.0.shared

INFO co_circom: 365: Witness successfully written to out/witness.wtns.0.shared

For brevity we only showed the command for the 0th party. You have to call it for all three parties in parallel.

After all parties finished successfully, you will have three witness files in your out/ folder. Each one of them contains a share of the extended witness.

Prove the Circuit

We need another MPC step to finally get our co-SNARK proof. We can reuse TLS certificates and the network config from the previous step. Also, we finally need the proving key from the very first step! In your terminal execute the following command:

$ co-circom generate-proof --witness out/witness.wtns.0.shared --zkey multiplier2.zkey --protocol REP3 --curve BN254 --config configs/party0.toml --out proof.0.json --public-input public_input.json

INFO co_circom: 418: Wrote proof to file proof.0.json
INFO co_circom: 438: Proof generation finished successfully

Again, for brevity, we only gave the command for party 0. You know the drill, all at the same time.

The three proofs produced by the separate parties are equivalent and valid Groth16 proofs - Congratulations, you did it 🎉

You will find another file, namely public_input.json. This file contains all public information necessary to verify the proof, which, in our case, means:

["33","11"]

This is the factored number and the public input b.

Verify the Proof

To verify we can either use snarkjs or the co-circom binary.

$ co-circom verify --proof proof.0.json --vk verification_key.json --public-input public_input.json --curve BN254
co_circom: 483: Proof verified successfully

$ snarkjs groth16 verify verification_key.json public_input.json proof.0.json
[INFO]  snarkJS: OK!

For a full shell script executing all of the commands at once, have a look at our GitHub. In this folder you find this exact example, and some more.

And now you can dive into the rest of the book 🦀

Installation

This section will help you set up the co-circom toolchain.

Prerequisites

To use co-circom, you need to install Rust, Circom, and SnarkJS. Here's a brief overview of why each tool is necessary:

  • Rust: Required for building and running components of co-circom.
  • Circom: Needed to compile a circuit into a .r1cs file.
  • SnarkJS: Used to create the proving and verification keys.

Follow these steps to install the required tools:

  • Install Rust: Visit the official Rust site for detailed installation instructions.
  • Install Circom and SnarkJS: Refer to the circom documentation for guidance on installing Circom and SnarkJS.

These resources will provide the necessary information to get your environment set up for using co-circom.

Compile from Source

First, download the source from GitHub. We tested the compilation on Ubuntu 22.04.

git clone git@github.com:TaceoLabs/collaborative-circom.git

After downloading the source, build the toolchain simply by typing:

cargo build --release

You can find the co-circom binary under target/release/.

Download Binary from Release

  1. You can find the latest release here.

  2. Download the binary for your operating system.

  3. Extract the binary from the archive

    tar xf co-circom-YOUR_ARCHITECTURE.tar.gz
    
  4. Make the binary executable (if necessary):

    chmod +x co-circom
    

Usage

This section is empty at the moment 😭

It will be updated in the course of the next weeks, so please be patient!

For the time being we recommend checking out the Quick Start Guide or the examples folder on our GitHub, where we provide different bash scripts to prove some Circom files.

Additionally, have a look at the CLI commands and the additional material!

co-circom CLI

Usage: co-circom <COMMAND>

Commands:
  split-witness       Splits an existing witness file generated by Circom into secret shares for use in MPC
  split-input         Splits a JSON input file into secret shares for use in MPC
  merge-input-shares  Merge multiple shared inputs received from multiple parties into a single one
  generate-witness    Evaluates the extended witness generation for the specified circuit and input share in MPC
  translate-witness   Translates the witness generated with one MPC protocol to a witness for a different one
  generate-proof      Evaluates the prover algorithm for the specified circuit and witness share in MPC
  verify              Verification of a Circom proof
  help                Print this message or the help of the given subcommand(s)

Options:
  -h, --help     Print help
  -V, --version  Print version

split-input

The aim of the split-input command is to take a traditional circom input.json file and secret-share it to a number of participants.

Example

co-circom split-input --circuit test_vectors/poseidon/circuit.circom --link-library test_vectors/poseidon/lib --input test_vectors/poseidon/input.json --protocol REP3 --curve BN254 --out-dir test_vectors/poseidon

The above command takes the input test_vectors/poseidon/input.json for the circom circuit defined in test_vectors/poseidon/circuit.circom, with additional required circom library files in test_vectors/poseidon/lib, and secret shares them using the REP3 MPC protocol. This produces 3 shares input.json.0.shared, input.json.1.shared, input.json.2.shared in the output directory.

These shares can be handed to the 3 different MPC parties for the witness generation phase.

Reference

$ co-circom split-input --help
Splits a JSON input file into secret shares for use in MPC

Usage: co-circom split-input [OPTIONS] --input <INPUT> --circuit <CIRCUIT> --protocol <PROTOCOL> --curve <CURVE> --out-dir <OUT_DIR>

Options:
      --input <INPUT>                The path to the input JSON file
      --circuit <CIRCUIT>            The path to the circuit file
      --link-library <LINK_LIBRARY>  The path to Circom library files
      --protocol <PROTOCOL>          The MPC protocol to be used [possible values: REP3, SHAMIR]
      --curve <CURVE>                The pairing friendly curve to be used [possible values: BN254, BLS12-381]
      --out-dir <OUT_DIR>            The path to the (existing) output directory
  -h, --help                         Print help

merge-input-shares

The aim of the merge-input-shares command is to take input shares originating from multiple parties and merge them into a single input share file to be used for witness generation.

A use case for this would be to have multiple parties provide different parts of the input to the MPC computation parties.

Example

co-circom merge-input-shares --inputs test_vectors/multiplier2/input0.json.0.shared --inputs test_vectors/multiplier2/input1.json.0.shared --protocol REP3 --curve BN254 --out test_vectors/multiplier2/input.json.0.shared

The above command takes the two input shares input0.json.0.shared and input1.json.0.shared (note both are intended for party 0) and combines them into a single input share input.json.0.shared.

Reference

$ co-circom merge-input-shares --help
Merge multiple shared inputs received from multiple parties into a single one

Usage: co-circom merge-input-shares [OPTIONS] --protocol <PROTOCOL> --curve <CURVE> --out <OUT>

Options:
      --inputs <INPUTS>      The path to the input JSON file
      --protocol <PROTOCOL>  The MPC protocol to be used [possible values: REP3, SHAMIR]
      --curve <CURVE>        The pairing friendly curve to be used [possible values: BN254, BLS12-381]
      --out <OUT>            The output file where the merged input share is written to
  -h, --help                 Print help

split-witness

The aim of the split-witness command is to take a traditional circom witness.wtns witness file and secret-share it to a number of participants.

Example

co-circom split-witness --witness test_vectors/poseidon/witness.wtns --r1cs test_vectors/poseidon/poseidon.r1cs --protocol REP3 --curve BN254 --out-dir test_vectors/poseidon

The above command takes the witness file test_vectors/poseidon/witness.wtns for the circom circuit defined in test_vectors/poseidon/circuit.circom, with corresponding R1CS file test_vectors/poseidon/poseidon.r1cs and secret shares it using the REP3 MPC protocol. This produces 3 shares witness.wtns.0.shared, witness.wtns.1.shared, witness.wtns.2.shared in the output directory.

These shares can be handed to the 3 different MPC parties for the proof generation phase.

Reference

$ co-circom split-witness --help
Splits an existing witness file generated by Circom into secret shares for use in MPC

Usage: co-circom split-witness [OPTIONS] --witness <WITNESS> --r1cs <R1CS> --protocol <PROTOCOL> --curve <CURVE> --out-dir <OUT_DIR>

Options:
      --witness <WITNESS>          The path to the input witness file generated by Circom
      --r1cs <R1CS>                The path to the r1cs file, generated by Circom compiler
      --protocol <PROTOCOL>        The MPC protocol to be used [possible values: REP3, SHAMIR]
      --curve <CURVE>              The pairing friendly curve to be used [possible values: BN254, BLS12-381]
      --out-dir <OUT_DIR>          The path to the (existing) output directory
  -t, --threshold <THRESHOLD>      The threshold of tolerated colluding parties [default: 1]
  -n, --num-parties <NUM_PARTIES>  The number of parties [default: 3]
  -h, --help                       Print help

generate-witness

The aim of the generate-witness command is to generate a secret-shared witness file in MPC using secret shares of the input.

Example

co-circom generate-witness --input test_vectors/poseidon/input.json.0.shared --circuit test_vectors/poseidon/circuit.circom --link-library test_vectors/poseidon/lib --protocol REP3 --curve BN254 --config configs/party1.toml --out test_vectors/poseidon/witness.wtns.0.shared

The above command takes a shared input file input.json.0.shared for the circuit circuit.circom with required circom library files in test_vectors/poseidon/lib with the network config and outputs the witness share to test_vectors/poseidon/witness.wtns.0.shared.

Reference

$ co-circom generate-witness --help
Evaluates the extended witness generation for the specified circuit and input share in MPC

Usage: co-circom generate-witness [OPTIONS] --input <INPUT> --circuit <CIRCUIT> --protocol <PROTOCOL> --curve <CURVE> --config <CONFIG> --out <OUT>

Options:
      --input <INPUT>                The path to the input share file
      --circuit <CIRCUIT>            The path to the circuit file
      --link-library <LINK_LIBRARY>  The path to Circom library files
      --protocol <PROTOCOL>          The MPC protocol to be used [possible values: REP3, SHAMIR]
      --curve <CURVE>                The pairing friendly curve to be used [possible values: BN254, BLS12-381]
      --config <CONFIG>              The path to MPC network configuration file
      --out <OUT>                    The output file where the final witness share is written to
  -h, --help                         Print help

translate-witness

The aim of the translate-witness command is to take a witness file witness.wtns generated with one MPC protocol and translate it to a witness file for a different MPC protocol.

Example

co-circom translate-witness --witness test_vectors/poseidon/witness.wtns --src-protocol REP3 --target-protocol SHAMIR --curve BN254 --config configs/party1.toml --out test_vectors/poseidon/shamir_witness.wtns

The above command takes the witness file test_vectors/poseidon/witness.wtns which was generated with the source MPC protocol REP3 and translates it to the witness file test_vectors/poseidon/shamir_witness.wtns which is suitable for the target MPC protocol SHAMIR. The translation process requires network interaction, thus a networking config is required as well.

Reference

$ co-circom translate-witness --help
Translates the witness generated with one MPC protocol to a witness for a different one

Usage: co-circom translate-witness --witness <WITNESS> --src-protocol <SRC_PROTOCOL> --target-protocol <TARGET_PROTOCOL> --curve <CURVE> --config <CONFIG> --out <OUT>

Options:
      --witness <WITNESS>
          The path to the witness share file
      --src-protocol <SRC_PROTOCOL>
          The MPC protocol that was used for the witness generation [possible values: REP3, SHAMIR]
      --target-protocol <TARGET_PROTOCOL>
          The MPC protocol to be used for the proof generation [possible values: REP3, SHAMIR]
      --curve <CURVE>
          The pairing friendly curve to be used [possible values: BN254, BLS12-381]
      --config <CONFIG>
          The path to MPC network configuration file
      --out <OUT>
          The output file where the final witness share is written to
  -h, --help
          Print help

generate-proof

The aim of the generate-proof command is to run proof generation in MPC using the provided public inputs and witness shares.

Example

co-circom generate-proof --witness test_vectors/poseidon/witness.wtns.0.shared --zkey test_vectors/poseidon/poseidon.zkey --protocol REP3 --config configs/party1.toml --out proof.json --public-input public_input.json

The above command takes a witness share test_vectors/poseidon/witness.wtns.0.shared, a traditional Circom .zkey file and a networking config and produces a Circom-compatible proof proof.json, with a Circom-compatible public input file public_input.json.

Reference

$ co-circom generate-proof --help
Evaluates the prover algorithm for the specified circuit and witness share in MPC

Usage: co-circom generate-proof [OPTIONS] --witness <WITNESS> --zkey <ZKEY> --protocol <PROTOCOL> --curve <CURVE> --config <CONFIG>

Options:
      --witness <WITNESS>            The path to the witness share file
      --zkey <ZKEY>                  The path to the proving key (.zkey) file, generated by snarkjs setup phase
      --protocol <PROTOCOL>          The MPC protocol to be used [possible values: REP3, SHAMIR]
      --curve <CURVE>                The pairing friendly curve to be used [possible values: BN254, BLS12-381]
      --config <CONFIG>              The path to MPC network configuration file
      --out <OUT>                    The output file where the final proof is written to. If not passed, this party will not write the proof to a file
      --public-input <PUBLIC_INPUT>  The output JSON file where the public inputs are written to. If not passed, this party will not write the public inputs to a file
  -t, --threshold <THRESHOLD>        The threshold of tolerated colluding parties [default: 1]
  -h, --help                         Print help

verify

The aim of the verify command is to verify a Groth16 Circom proof using the provided verification key and public inputs.

Example

co-circom verify --proof proof.json --vk test_vectors/multiplier2/verification_key.json --public-input public_input.json --curve BN254

The above command verifies the proof in proof.json using the verification key test_vectors/multiplier2/verification_key.json and public input public_input.json.

Reference

$ co-circom verify --help
Verification of a Circom proof

Usage: co-circom verify --proof <PROOF> --curve <CURVE> --vk <VK> --public-input <PUBLIC_INPUT>

Options:
      --proof <PROOF>                The path to the proof file
      --curve <CURVE>                The pairing friendly curve to be used [possible values: BN254, BLS12-381]
      --vk <VK>                      The path to the verification key file
      --public-input <PUBLIC_INPUT>  The path to the public input JSON file
  -h, --help                         Print help

Network Configuration

co-circom requires a network configuration file for establishing connections to other MPC parties for the generate-witness and generate-proof commands.

The network configuration file is a TOML file with the following structure:

my_id = 0
bind_addr = "0.0.0.0:10000"
key_path = "data/key0.der"
[[parties]]
id = 0
dns_name = "localhost:10000"
cert_path = "data/cert0.der"
[[parties]]
id = 1
dns_name = "localhost:10001"
cert_path = "data/cert1.der"
[[parties]]
id = 2
dns_name = "localhost:10002"
cert_path = "data/cert2.der"

See the example configuration in the collaborative-circom/examples/configs folder, with pre-generated certificates and keys in the collaborative-circom/examples/data folder.

Keys

  • my_id is the party id of the party executing the co-circom binary using the configuration file.
  • bind_addr is the local socket address this party is binding to and listening for incoming connections from other parties.
  • key_path is a path to a DER encoded PKCS8 private key file corresponding to the public key used in the certificate for our party.
  • parties is an array of tables containing the public information of each MPC party.
    • id: the party id of the MPC party
    • dns_name: the hostname/port combination where the party is publicly reachable. The hostname must be a valid CN or SNI in the used certificate.
    • cert_path: a path to the DER encoded certificate (chain) file that is used to authenticate the connection with the party and is used to establish the secure communication channel.

MPC-VM

Design Choices

Problematic Circom Operations

The Circom language was designed for zero-knowledge circuits, and while its internal model is pretty similar to the arithmetic circuit model that is native to MPC, the Circom language makes a few assumptions that pose issues for execution in MPC. Most of them arise from conditional execution of code in Circom. While Circom places some conditions on conditional code (it is not allowed to modify the structure of the circuit, i.e., the same circuit has to be produced regardless of the input), it allows conditional execution of unconstrained code. Unconstrained code produces helper variables that may later be used to constrain actual signal values. A common example is the bit-decomposition of a number: it is computed using unconstrained code, and correctness is later proven by adding constraints that the sum of the individual bits, multiplied by their respective powers of two, equals the original value, as this is much cheaper in zero-knowledge than directly computing the bit-decomposition in constraints.
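As a sketch of that pattern, here is the prover-side logic in Python pseudocode (illustrative names; in Circom the decomposition runs as unconstrained code and only the recomposition check becomes constraints):

```python
def decompose(x: int, n_bits: int) -> list:
    """Unconstrained helper: compute the bits of x (cheap outside the circuit)."""
    return [(x >> i) & 1 for i in range(n_bits)]

def check_recomposition(x: int, bits: list) -> bool:
    """The constraints actually added to the circuit:
    each bit is 0 or 1, and the weighted sum of bits equals x."""
    booleanity = all(b * (b - 1) == 0 for b in bits)
    recompose = sum(b << i for i, b in enumerate(bits)) == x
    return booleanity and recompose

bits = decompose(13, 8)          # 13 = 0b1101, little-endian bit order
assert bits[:4] == [1, 0, 1, 1]
assert check_recomposition(13, bits)
```

The circuit never computes the decomposition itself; it only verifies that the hinted bits are consistent with the original value.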

Conditional Branches

Ternary Operators

A special case of conditional branches is the ternary operator.

Division in inactive branches

One further complication of executing both inactive and active branches is that all operations must be computable in both branches. A common operation that poses problems is field division, or more concretely, the associated inversion of the divisor, as this fails if the divisor is 0. We solve this by conditionally loading either the real input or 1, depending on whether the current branch is active, using a conditional multiplexer gate in the MPC circuit.
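On clear values, the multiplexer trick looks as follows (a toy Python sketch with an illustrative small field; in the MPC-VM the condition, divisor, and product are of course all secret shared):

```python
P = 101  # toy field modulus, illustrative only

def safe_divisor(divisor: int, active: int) -> int:
    """Conditional multiplexer: active is 1 in the taken branch, 0 otherwise.
    In the inactive branch the inversion then runs on 1, which never fails."""
    return (active * divisor + (1 - active) * 1) % P

def field_div(a: int, b: int, active: int) -> int:
    d = safe_divisor(b, active)
    return (a * pow(d, P - 2, P)) % P  # Fermat inversion; result is discarded if inactive

assert (field_div(10, 5, active=1) * 5) % P == 10  # real division in the active branch
assert field_div(10, 0, active=0) == 10            # inactive branch divides by 1 instead of 0
```

The result in the inactive branch is meaningless and simply discarded, but the operation itself can no longer abort the whole MPC execution.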

Circom

We refer to the Circom documentation at https://docs.circom.io/.

Secure Multiparty Computation (MPC)

Currently, proof generation is supported with two different MPC protocols:

  • 3-party replicated secret sharing (based on ABY3) with semi-honest security
  • N-party Shamir secret sharing (based on DN07) with semi-honest security

Notation

With $[x] = (x_1, x_2, \dots, x_n)$ we denote that $x$ is additively secret shared amongst $n$ parties, such that $x = \sum_{i=1}^{n} x_i$ and party $P_i$ holds $x_i$. With $[x]^B = (x_1, x_2, \dots, x_n)$ we denote that $x$ is binary secret shared amongst $n$ parties, such that $x = x_1 \oplus x_2 \oplus \dots \oplus x_n$ and party $P_i$ holds $x_i$. With $[x]^S$ we denote that $x$ is Shamir secret shared amongst $n$ parties, such that party $P_i$ holds $x_i = p(i)$ and such that $p$ is a polynomial of degree $d$ with $p(0) = x$. Similarly, for a group element $G$ in an additive group (e.g., elliptic curve groups) we denote by $[G]$ its additive secret sharing amongst $n$ parties, and by $[G]^S$ its Shamir sharing. Furthermore, indices of shares are taken modulo the number of parties.
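To make the Shamir notation concrete, here is a toy sketch in Python (small setting: 3 parties, degree-1 polynomial; field size and all names are purely illustrative):

```python
import random

P = 2**61 - 1  # toy prime modulus

def shamir_share(secret: int, n: int = 3, d: int = 1) -> list:
    """Party i receives the evaluation p(i) of a random degree-d polynomial with p(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(d)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P for i in range(1, n + 1)]

def shamir_reconstruct(points: list) -> int:
    """Lagrange interpolation at 0 from (i, p(i)) pairs."""
    total = 0
    for i, yi in points:
        num, den = 1, 1
        for j, _ in points:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = shamir_share(42)
# any d + 1 = 2 shares suffice to reconstruct
assert shamir_reconstruct([(1, shares[0]), (2, shares[1])]) == 42
```

Fewer than $d + 1$ evaluation points are consistent with every possible secret, which is exactly where the threshold security comes from.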

3-Party Replicated Sharing

Replicated secret sharing is based on additive secret sharing, with the twist that each party has multiple additive shares. Thus, in the 3-party case a secret $x$ is shared the following way. First, $x$ is split into three random shares $x_1, x_2, x_3$ such that $x = x_1 + x_2 + x_3$, and each party gets two shares:

$$P_1: (x_1, x_3), \qquad P_2: (x_2, x_1), \qquad P_3: (x_3, x_2)$$
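A toy Python sketch of this sharing (we assume the convention that party $P_i$ holds $(x_i, x_{i-1})$ with indices mod 3; the modulus is illustrative):

```python
import random

P = 2**61 - 1  # toy prime modulus

def rep3_share(x: int) -> list:
    """Split x into additive shares x1, x2, x3 and hand each party two of them."""
    x1, x2 = random.randrange(P), random.randrange(P)
    x3 = (x - x1 - x2) % P
    s = [x1, x2, x3]
    # party i holds (x_i, x_{i-1}), indices mod 3
    return [(s[i], s[(i - 1) % 3]) for i in range(3)]

def rep3_reconstruct(parties: list) -> int:
    """Any two parties together hold all three additive shares."""
    (a, b), (c, _) = parties[0], parties[1]
    return (a + b + c) % P

assert rep3_reconstruct(rep3_share(42)) == 42
```

A single party sees only two random field elements and learns nothing; any two parties can reconstruct.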

Rng Setup

Random values are required during many parts of MPC executions. For cheaper randomness generation, correlated random number generators are set up before protocol execution.

Random Values

In order to create random shares $[r]^A$, random additive shares of zero (i.e., $z_1 + z_2 + z_3 = 0$), or random binary shares of zero (i.e., $z_1 \oplus z_2 \oplus z_3 = 0$) without interaction, Rep3 sets up correlated random number generators during the setup phase. Each party $P_i$ chooses a seed $s_i$ and sends it to the next party $P_{i+1}$. Thus, each party has two seeds ($s_{i-1}$ and $s_i$) and can set up two RNGs, where two consecutive parties are able to create the same random numbers: $P_i$ and $P_{i+1}$ both hold $s_i$ and thus share an RNG seeded with it.

Binary To Arithmetic Conversion

For the binary to arithmetic conversion, we need correlated randomness as well. The goal is to set up two RNGs, RNG1 and RNG2, such that:

  • $P_1$ and $P_3$ can use RNG1 to create the same field element $r_1$, while all parties can sample valid shares for it.
  • $P_1$ and $P_2$ can use RNG2 to create the same field element $r_2$, while all parties can sample valid shares for it.

This setup can be achieved by sampling seeds from the already set up RNGs for shared random values and resharing the seeds correctly.

Supported operations

Linear Operations

Due to being based on additive secret sharing, linear operations can be performed on shares without party interaction.

  • Constant addition: $[x]^A + c = (x_1 + c, x_2, x_3)$, performed locally by the parties holding $x_1$
  • Share addition: $[x]^A + [y]^A = (x_1 + y_1, x_2 + y_2, x_3 + y_3)$
  • Constant Multiplication: $c \cdot [x]^A = (c \cdot x_1, c \cdot x_2, c \cdot x_3)$

Similarly, linear operations can be computed locally in the binary domain as well, where addition corresponds to XOR and constant multiplication to AND.

  • Constant addition: $[x]^B \oplus c = (x_1 \oplus c, x_2, x_3)$
  • Share addition: $[x]^B \oplus [y]^B = (x_1 \oplus y_1, x_2 \oplus y_2, x_3 \oplus y_3)$
  • Constant Multiplication: $c \wedge [x]^B = (c \wedge x_1, c \wedge x_2, c \wedge x_3)$

Multiplications

One main advantage of replicated secret sharing is the presence of a simple multiplication protocol. First, due to having two additive shares, each party can calculate an additive share of the result without interaction. For $c = a \cdot b$, $c_i = a_i \cdot b_i + a_i \cdot b_{i+1} + a_{i+1} \cdot b_i$ is a valid additive share of $c$, since $\sum_{i=1}^{3} c_i = \sum_{i,j} a_i \cdot b_j = a \cdot b$.

Thus, multiplications involve a local operation followed by a resharing of the result to translate the additive share to a replicated share. Resharing requires randomizing the share so as not to leak any information to the receiving party. For that reason, a fresh random share of zero is added, which can be sampled locally without party interaction (see RNG setup).

Thus, party $P_i$ calculates: $c_i = a_i \cdot b_i + a_i \cdot b_{i+1} + a_{i+1} \cdot b_i + z_i$, where $z_i$ is $P_i$'s share of a fresh random sharing of zero.

The resharing, thereby, is simply implemented as $P_i$ sending $c_i$ to $P_{i-1}$, such that afterwards each party again holds two additive shares.
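The local step of the multiplication can be sketched in plaintext (illustrative small field; `z` plays the role of the party's locally sampled share of zero):

```rust
const P: u64 = 65537; // illustrative small prime field

// Rep3 sharing: party i holds (x_i, x_{i+1}), indices mod 3.
fn share(x: u64, r1: u64, r2: u64) -> [(u64, u64); 3] {
    let x3 = (x % P + 2 * P - r1 - r2) % P;
    [(r1, r2), (r2, x3), (x3, r1)]
}

// Local multiplication: party i computes its additive share of a*b,
// re-randomized with its share z_i of a zero sharing (z1+z2+z3 = 0 mod P).
fn local_mul(a: (u64, u64), b: (u64, u64), z: u64) -> u64 {
    (a.0 * b.0 % P + a.0 * b.1 % P + a.1 * b.0 % P + z) % P
}
```

Summing the three local results yields $a \cdot b$; in the protocol each party instead sends its result to its neighbor to restore the replicated form.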

AND

AND gates follow directly from multiplications: $c_i = (a_i \wedge b_i) \oplus (a_i \wedge b_{i+1}) \oplus (a_{i+1} \wedge b_i) \oplus z_i$, where $z_i$ is $P_i$'s share of a fresh random binary sharing of zero.

Arithmetic to Binary Conversion

In replicated sharing over rings (e.g., ABY3), arithmetic to binary conversion of a share is implemented by first locally splitting the shares into valid binary sharings, i.e., $[x_1]^B = (x_1, 0, 0)$, $[x_2]^B = (0, x_2, 0)$, and $[x_3]^B = (0, 0, x_3)$, and combining them in MPC using binary addition circuits. Then $[x_1]^B + [x_2]^B + [x_3]^B$ is a valid binary sharing of $x$. This approach works because binary addition circuits implicitly perform modular reductions mod $2^k$.

Thus, in $\mathbb{F}_p$ we have to include the mod reductions manually in the circuit. For improved performance, we use the following protocol to translate $[x]^A = (x_1, x_2, x_3)$ to $[x]^B$:

  • $P_1$ samples $[y_1]^B$ to be a new random binary sharing of $y_1 = (x_1 + x_2) \bmod p$, as it holds both $x_1$ and $x_2$: the shares $y_{1,1}$ and $y_{1,2}$ are sampled from the RNGs shared with $P_3$ and $P_2$, respectively.
  • Set $y_2 = x_3$.
  • $P_1$ calculates $y_{1,3} = y_1 \oplus y_{1,1} \oplus y_{1,2}$.
    • It follows that $(y_{1,1}, y_{1,2}, y_{1,3})$ is a valid binary sharing of $y_1$.
  • $P_1$ sends the share $y_{1,3}$ to $P_2$ and $P_3$.
    • Each party now has a valid binary replicated sharing of $y_1$. A replicated sharing of $y_2 = x_3$ is directly available as $(0, 0, x_3)$, since $P_2$ and $P_3$ both hold $x_3$.
  • The parties compute a binary adder circuit of $k$ bits (the bitlength of $p$) to sum up $[y_1]^B$ and $[y_2]^B$ to get $[z]^B$ with $k+1$ bits (including overflow bit).
  • The parties compute the subtraction of $p$ inside a binary circuit to get $[z - p]^B$.
  • The $(k+1)$-th bit of $z - p$ indicates an overflow of the subtraction, i.e., that $z < p$. If an overflow occurs, the result should be the first $k$ bits of $z$, otherwise the first $k$ bits of $z - p$. This if statement can be computed with a CMUX gate.

Binary to Arithmetic Conversion

For the binary to arithmetic conversion, the general strategy for replicated sharing is the following. We sample random binary shares $[r_1]^B$ and $[r_2]^B$ using the special correlated randomness, such that $P_1$ gets both values in addition to its shares, while $P_3$ and $P_2$ get $r_1$ and $r_2$ in clear respectively. Then we compute a binary circuit to add $y = x + r_1 + r_2$. Finally, we open $y$ to $P_2$ and $P_3$. The arithmetic shares then are $(x_1, x_2, x_3) = (-r_1, -r_2, y)$.

To account for modular reductions in finite fields, we follow a similar strategy as for the arithmetic to binary conversion to translate $[x]^B$ to $[x]^A$:

  • $P_1$ samples $[r]^B$ to be a new random binary sharing of $r = (r_1 + r_2) \bmod p$ as follows:
  • $P_2$ samples $c_2$ using RNG2 and sets its first share to $c_2$;
  • $P_3$ samples $c_1$ using RNG1 and sets its second share to $c_1$;
  • $P_1$ samples $c_1$ using RNG1 and $c_2$ using RNG2, and sets $c_1$, $c_2$, and $c_3 = r \oplus c_1 \oplus c_2$;
    • It follows that $(c_1, c_2, c_3)$ is a valid binary sharing of $r$
  • $P_1$ sends the share $c_3$ to $P_2$ and $P_3$
    • Each party now has a valid binary replicated sharing of $r$
  • The parties compute a binary adder circuit of $k$ bits (the bitlength of $p$) to sum up $[x]^B$ and $[r]^B$ to get $[z]^B$ with $k+1$ bits (including overflow bit).
  • The parties compute the subtraction of $p$ inside a binary circuit to get $[z - p]^B$.
  • The $(k+1)$-th bit of $z - p$ indicates an overflow of the subtraction, i.e., that $z < p$. If an overflow occurs, $y$ should be the first $k$ bits of $z$, otherwise the first $k$ bits of $z - p$. This if statement can be computed with a CMUX gate.
  • We open $y$ to $P_2$ and $P_3$
  • The final sharing is $[x]^A = (x_1, x_2, x_3) = (-r_1, -r_2, y)$, and each party has (only) access to the shares it requires for the replication.

Bit Injection

To translate a single shared bit $[b]^B$ to an arithmetic sharing $[b]^A$, we perform share splitting and create valid arithmetic shares of the shares: $[b_1]^A = (b_1, 0, 0)$, $[b_2]^A = (0, b_2, 0)$, and $[b_3]^A = (0, 0, b_3)$. Then, we combine the shares by calculating arithmetic XORs: $[b]^A = [b_1]^A \oplus [b_2]^A \oplus [b_3]^A$, where the XOR of two bits $u, v$ is computed arithmetically as $u \oplus v = u + v - 2 \cdot u \cdot v$.
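The arithmetic XOR can be sketched in plaintext (illustrative small field; the inputs are assumed to be bits embedded in the field):

```rust
const P: u64 = 65537; // illustrative small prime field

// For u, v in {0, 1}: u XOR v = u + v - 2*u*v.
// In MPC this costs one multiplication per XOR.
fn arith_xor(u: u64, v: u64) -> u64 {
    (u + v + 2 * P - 2 * u * v % P) % P
}
```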

Binary Addition Circuits

As mentioned in the arithmetic to binary conversions, we need binary addition circuits. Since the bitlength of the used prime fields is large, we use depth-optimized carry-lookahead adders for the conversions. Currently, we implement Kogge-Stone adders, since these can be computed efficiently using shifts and AND/XOR gates without extracting specific bits.

The general structure of a Kogge-Stone adder to add two binary values $a$ and $b$ is to first compute the propagate bits $p = a \oplus b$ and the generate bits $g = a \wedge b$, where $p_i$ denotes the $i$-th bit of $p$. Then, $p$ and $g$ are combined using a circuit with logarithmic depth (in the bitsize). This circuit is implemented in the kogge_stone_inner function.
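The structure can be sketched in plaintext on 64-bit words (illustrative word size; the MPC circuit evaluates the same shift/AND/XOR pattern on binary shares):

```rust
// Kogge-Stone addition: depth-logarithmic carry computation using only
// shifts, AND, and XOR.
fn kogge_stone_add(a: u64, b: u64) -> u64 {
    let mut p = a ^ b; // propagate bits
    let mut g = a & b; // generate bits
    let mut d = 1;
    while d < 64 {
        // Carries now propagate over distance d.
        g ^= p & (g << d);
        p &= p << d;
        d *= 2;
    }
    // Sum bits: original propagate XORed with the incoming carries.
    (a ^ b) ^ (g << 1)
}
```

The loop runs only $\log_2 64 = 6$ times, which is what makes the adder depth-optimized compared to a ripple-carry design.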

For binary subtraction circuits, we basically compute an addition circuit with the two's complement of $b$. Thus, we essentially compute $a - b = a + \bar{b} + 1$, where $\bar{b}$ denotes the bitwise negation of $b$. If $b$ is public, the two's complement can directly be computed and the result is just fed into the Kogge-Stone adder. If $b$ is shared, we invert all bits and set the carry-in flag for the Kogge-Stone adder. This simulates the two's complement calculation. The set carry-in flag has the following effects: First, $g$ must additionally be XORed by $p \wedge 1$, i.e., the carry-in is absorbed into the first generate bit. Finally, the LSB of the result of the Kogge-Stone circuit needs to be flipped.

Reconstruction

Reconstruction of a value $x$ is implemented as each party $P_i$ sending its share $x_i$ to the next party $P_{i+1}$. Then each party has all three shares.

Security

Our implementation provides semi-honest security with honest majority, i.e., the scheme is secure if all parties follow the protocol honestly and no two of the three servers collude.

Shamir Secret Sharing

Shamir secret sharing is a different way of instantiating a linear secret sharing scheme, based on polynomials. To share a value $x$, one first has to sample a random polynomial $f$ of degree $d$, where $x$ is the constant term. I.e., one samples $a_1, a_2, \dots, a_d$ randomly from $\mathbb{F}_p$ and sets: $f(X) = x + a_1 \cdot X + a_2 \cdot X^2 + \dots + a_d \cdot X^d$. The share of party $P_i$ then is $x_i = f(i)$. In other words, $[x]^S = (x_1, x_2, \dots, x_n)$, where $x_i = f(i)$.

Reconstruction then works via Lagrange interpolation of any $d + 1$ shares: $x = f(0) = \sum_{i \in S} \lambda_i \cdot x_i$ for a set $S$ of $d + 1$ shares, where $\lambda_i$ is the corresponding Lagrange coefficient.
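A toy instantiation with $d = 1$ and $n = 3$ parties (illustrative small field; `c1` stands in for the random coefficient):

```rust
const P: u64 = 65537; // illustrative small prime field

// f(X) = x + c1*X; party i receives f(i).
fn share(x: u64, c1: u64) -> [u64; 3] {
    [(x + c1) % P, (x + 2 * c1) % P, (x + 3 * c1) % P]
}

// Lagrange interpolation at zero from the shares of parties 1 and 2;
// the coefficients for the points {1, 2} are 2 and -1.
fn reconstruct(s1: u64, s2: u64) -> u64 {
    (2 * s1 % P + P - s2) % P
}
```

Any two of the three shares suffice here, matching $d + 1 = 2$; a single share reveals nothing about $x$.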

Supported operations

Linear Operations

Shamir's secret sharing allows, similar to additive sharing, to compute linear functions locally without party interaction:

  • Constant addition: $[x]^S + c$: each party computes $x_i + c$
  • Share addition: $[x]^S + [y]^S$: each party computes $x_i + y_i$
  • Constant Multiplication: $c \cdot [x]^S$: each party computes $c \cdot x_i$

Multiplications

Shamir secret sharing comes with a native multiplication protocol: $x_i \cdot y_i$ is a valid share of $x \cdot y$. However, $x_i \cdot y_i$ is a point on a polynomial with degree $2d$. In other words, the degree doubles after a multiplication, and twice as many parties ($2d + 1$) are required to reconstruct the secret $x \cdot y$. Thus, one needs to perform a degree reduction step in MPC for further computations. In DN07, this is done by sampling a random value $r$, which is shared as a degree-$d$ ($[r]_d$) and degree-$2d$ ($[r]_{2d}$) polynomial. Then, the parties open $[x \cdot y]_{2d} - [r]_{2d}$ to $P_1$, who reconstructs it to $w = x \cdot y - r$. Then, $P_1$ shares $w$ as a fresh degree-$d$ share $[w]_d$ to all parties, who calculate $[x \cdot y]_d = [w]_d + [r]_d$.
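The degree doubling can be illustrated in plaintext with $d = 1$ and $n = 3$ (illustrative small field): the share-wise products lie on a degree-2 polynomial, so reconstruction needs all three shares with the Lagrange coefficients 3, -3, 1 for the points {1, 2, 3}.

```rust
const P: u64 = 65537; // illustrative small prime field

// Degree-1 Shamir sharing: f(X) = x + c1*X, party i receives f(i).
fn share(x: u64, c1: u64) -> [u64; 3] {
    [(x + c1) % P, (x + 2 * c1) % P, (x + 3 * c1) % P]
}

// Interpolation at zero of a degree-2 polynomial through points {1, 2, 3}.
fn reconstruct_deg2(s: [u64; 3]) -> u64 {
    (3 * s[0] % P + 3 * (P - s[1]) % P + s[2]) % P
}
```

Note that two shares no longer suffice: applying the degree-1 coefficients 2 and -1 to the first two product shares yields a wrong result, which is exactly why the degree reduction step is needed.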

Rng Setup

For the degree reduction step after a multiplication we require degree-$d$ and degree-$2d$ sharings of the same random value $r$. We generate these values following the techniques proposed in DN07, to generate $n - d$ random pairs at once:

  • Each party $P_i$ generates a random value $s_i$ and shares it as degree-$d$ share $[s_i]_d$ and degree-$2d$ share $[s_i]_{2d}$ to the other parties.
  • After receiving all shares, one sets $\vec{s}_d = ([s_1]_d, [s_2]_d, \dots, [s_n]_d)$ and $\vec{s}_{2d} = ([s_1]_{2d}, [s_2]_{2d}, \dots, [s_n]_{2d})$.
  • Calculate $([r_1]_d, \dots, [r_{n-d}]_d)^T = M \cdot \vec{s}_d^{\,T}$ and $([r_1]_{2d}, \dots, [r_{n-d}]_{2d})^T = M \cdot \vec{s}_{2d}^{\,T}$, where $M$ is an $(n - d) \times n$ Vandermonde matrix.
  • The pairs $([r_i]_d, [r_i]_{2d})$ for $1 \le i \le n - d$ are then valid random shares which can be used for resharing.

For simplicity we use a standard Vandermonde matrix with entries $M_{i,j} = j^{\,i-1}$.

Reconstruction

Reconstruction is currently implemented as each party $P_i$ sending its share $x_i$ to the next $d$ parties. Then, each party has $d + 1$ shares to reconstruct the secret.

Security

Our implementation provides semi-honest security with honest majority, i.e., the scheme is secure if all parties follow the protocol honestly and at most $d$ servers collude. $d$ can, thereby, be chosen to be any value $d < \frac{n}{2}$.

MPC for group operations

So far, we only discussed MPC for field elements $x \in \mathbb{F}_p$. However, one can easily extend it to MPC over group elements. W.l.o.g. we will use additive notation for the group operation (e.g., as in elliptic curve groups). A secret share $x_i$ can be translated to a share of a group element by computing $x_i \cdot G$, where $G$ is a generator of $\mathbb{G}$. Then, $x_i \cdot G$ is a valid share of $x \cdot G$. Linear operations directly follow from the used linear secret sharing scheme: e.g., share addition is $[A]^A + [B]^A = (A_1 + B_1, \dots, A_n + B_n)$ for shared group elements $A, B$. Shared scalar multiplications also follow from the secret sharing scheme: for a shared scalar $[y]^A$ and a public point $G$, $(y_1 \cdot G, \dots, y_n \cdot G)$ is a valid sharing of $y \cdot G$.

Shamir vs Rep3

Shamir and Rep3 are both linear secret sharing schemes which provide semi-honest security with honest-majority. However, they have some important differences.

  • Shamir can be instantiated with $n \ge 2d + 1$ parties, while Rep3 is limited to $3$ parties.
  • In Shamir, each share is just one field element $x_i = f(i)$, while in Rep3 each share is composed of two field elements.
  • In Shamir, the overhead on the CPU is significantly smaller compared to Rep3, where each operation is applied to two shares.
  • Rep3 allows efficient arithmetic-to-binary conversions.

Witness Extension

Due to not having an efficient arithmetic to binary conversion for Shamir sharing, we do not have a witness extension implementation for it at the moment. However, we provide a bridge implementation, which can translate Rep3 shares to 3-party Shamir shares (with threshold/poly-degree $d = 1$).

This bridge works by first letting each party $P_i$ translate its first additive share $x_i$ to a Shamir share by dividing it by the corresponding Lagrange coefficient: $x_i^S = x_i \cdot \lambda_i^{-1}$. This, however, creates a 3-party Shamir sharing with threshold/poly-degree $d = 2$. Thus, we perform the same degree-reduction step which is also required after a Shamir multiplication.
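A toy sketch of the bridge over a small field (the Lagrange coefficients at zero for the evaluation points {1, 2, 3} are 3, -3, 1; all constants are illustrative):

```rust
const P: u64 = 65537; // illustrative small prime field
const LAMBDA: [u64; 3] = [3, P - 3, 1]; // 3, -3, 1 mod P

fn pow_mod(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1;
    while e > 0 {
        if e & 1 == 1 {
            acc = acc * b % P;
        }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

fn inv(x: u64) -> u64 {
    pow_mod(x, P - 2)
}

// Each party divides its additive share by its Lagrange coefficient; the
// points (i, bridged[i-1]) then lie on a degree-2 polynomial with x at 0.
fn bridge(additive: [u64; 3]) -> [u64; 3] {
    let mut shamir = [0u64; 3];
    for i in 0..3 {
        shamir[i] = additive[i] % P * inv(LAMBDA[i]) % P;
    }
    shamir
}

// Lagrange interpolation at zero using all three points.
fn reconstruct(shamir: [u64; 3]) -> u64 {
    let mut x = 0;
    for i in 0..3 {
        x = (x + LAMBDA[i] * shamir[i]) % P;
    }
    x
}
```

A degree reduction, as after a Shamir multiplication, then brings the sharing back to degree 1 so that any two parties can reconstruct.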

Zero Knowledge Proofs

For a comprehensive introduction to Zero-Knowledge Proofs, we recommend reading the following book by Justin Thaler: "Proofs, Arguments, and Zero-Knowledge".

Furthermore, here are some MPC-friendly ZK proof systems:

Collaborative SNARKs

In this document we list some literature for collaborative SNARKs.

Experimenting with Collaborative zk-SNARKs: Zero-Knowledge Proofs for Distributed Secrets

This is the first paper in this space. It experiments with the feasibility of evaluating SNARKs in MPC and implements Groth16 and Plonk using SPDZ and GSZ (maliciously secure variants of additive secret sharing and Shamir secret sharing, respectively).

EOS: efficient private delegation of zkSNARK provers

This paper uses a delegator to speed up MPC computations and investigates using the SNARK as an error-detecting computation to implement cheaper malicious security.

zkSaaS: Zero Knowledge SNARKs as a service

This paper uses packed secret sharing (PSS), i.e., a variant of Shamir secret sharing where multiple secrets are embedded into the same sharing polynomial, to speed up MPC computation. However, they encounter some problems with FFTs, since these cannot be implemented naively with the SIMD semantics of PSS.

Scalable Collaborative zk-SNARK: Fully Distributed Proof Generation and Malicious Security

This paper is a followup to zkSaaS which replaces the used SNARK with GKR, which is better suited for PSS.

Scalable Collaborative zk-SNARK and Its Application to Efficient Proof Outsourcing

This paper is essentially an updated version of "Scalable Collaborative zk-SNARK: Fully Distributed Proof Generation and Malicious Security", which includes semi-honest protocols for collaborative HyperPlonk, additional optimizations, and new experiments.

Confidential and Verifiable Machine Learning Delegations on the Cloud

This paper implements GKR in MPC using the well-known MP-SPDZ library. It focuses on efficient matrix multiplications, but provides a generic construction as well.

How to prove any NP statement jointly? Efficient Distributed-prover Zero-Knowledge Protocols

This paper provides new security notions for distributed prover zero-knowledge (DPZK) and a generic compiler to realize distributed proof generation for any zero-knowledge proof system built from the interactive oracle proof (IOP) paradigm.