Overview
TACEO is creating the Compute Layer Security (CLS) protocol to make blockchain computation encrypted by default. The CLS enables every application to compute on private shared state. At its core will be an MPCVM capable of producing collaborative SNARKs.
MPC Primer
MPC is a subfield of cryptography that enables multiple parties to jointly compute a function over their combined inputs while keeping these inputs private. OK, let's dissect that. Similar to ZK, the evaluated function is usually represented as an arithmetic circuit. The inputs are secret-shared among the parties. Every party evaluates the circuit locally on its shares and communicates with the other participants when necessary. After the parties finish their computation, they reconstruct the result of the function $f$ without ever revealing the secret inputs to anyone. In practice, the parties computing the function are not necessarily the same parties that provided the inputs.
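The flow above can be sketched with plain additive secret sharing. This is an illustrative toy, not TACEO's implementation; the Mersenne prime modulus and the function $f(x, y) = x + y$ are arbitrary choices for the demo.

```python
# Illustrative sketch: three parties jointly compute f(x, y) = x + y
# on additive shares; each party works locally and only the result is reconstructed.
import random

p = 2**61 - 1  # an arbitrary prime modulus for the sketch

def share(v):
    """Split v into 3 random additive shares summing to v mod p."""
    v1, v2 = random.randrange(p), random.randrange(p)
    return [v1, v2, (v - v1 - v2) % p]

x_shares, y_shares = share(3), share(4)  # inputs from (possibly different) input owners

# each party adds its two shares locally; no secrets are exchanged
result_shares = [(xi + yi) % p for xi, yi in zip(x_shares, y_shares)]

assert sum(result_shares) % p == 7  # only the output f(x, y) is revealed
```

Addition is a linear operation, so no communication is needed here; multiplications would require an interaction step, as described in the MPC protocol section later in this book.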
Collaborative SNARKs Primer
Collaborative SNARKs are a very new line of work, emerging in 2022, that combines the best of MPC and SNARKs. They enable multiple parties to jointly create a proof of the correctness of a computation while keeping their individual inputs private. This approach solves the problem of computing on private shared state without relying on a trusted third party. Each party contributes to a distributed protocol that generates a single SNARK proof, attesting to the correctness of their combined computation without revealing any individual inputs. The resulting proof is succinct and can be verified efficiently by anyone, ensuring the integrity of the computation without exposing any sensitive data.
DeFi
Decentralized Finance (DeFi) is transforming the traditional financial landscape by offering open, permissionless, and transparent services. However, privacy and security remain significant challenges. Public onchain information relating to a user’s financial records, trade intentions, or identity can be misused, leaving the user with worse terms and economic disadvantages.
For instance, publishing a large limit order onchain immediately reveals a user's intention to buy or sell a certain asset in high quantities. Market participants know how to use this new piece of information to their advantage. Traditional finance faced the same issue with public stock markets. As a response, it introduced so-called Dark Pools, which keep the price impact of large-quantity trades as low as possible by matching trades privately.
In TradFi, Dark Pools work because market participants trust a central entity to run the service. Blockchains aim to get rid of exactly these types of intermediaries. However, running a Dark Pool via a public Smart Contract would not work, since all sensitive information would immediately be leaked.
We need to make sure that sensitive information (e.g. Limit order: I want to buy 10 BTC @ price $100k) is kept private. Yet, at some point multiple private inputs need to be combined so that a buy order can be matched with a sell order.
coSNARKs have the ability to compute on multiple encrypted user inputs without leaking any information about the inputs themselves. For the example of onchain Dark Pools, users could submit their desire to trade large quantities in encrypted form to the MPC network, which processes and matches trades with each other. Market participants don't get any information head start they could use to their advantage, and users get the best price possible.
AI
Bringing AI to blockchains was a so-far unfulfilled promise. Verified offchain computation opens up this new design space for the first time. Here, the inference of ML models takes place offchain and only the results, alongside a Zero-Knowledge proof (zk proof), are posted back onchain. The zk proof guarantees the correct execution of the (correct) ML model, so that onchain contracts can rely on the output just as if the execution had taken place directly onchain.
A premise of zk-powered ML inference is that whoever runs the actual computation and generates the zk proof must have all data at hand. As long as this entity is the same one that owns the IP rights to the (trained) model, and user inputs contain no sensitive information, zk is sufficient.
In cases where users send prompts to the ML model that shouldn’t be shared with anyone, additional encryption techniques need to be added on top of zk. With coSNARKs, user inputs can be fully shielded from AI service providers, without restricting them in offering their services.
In general, zk proof generation is hardware-intense and often requires specialized equipment. ZK cloud providers or proof markets offer proof computation as a service. As a side effect, all data needs to be shared with the prover, which could leak proprietary IP, like model weights. coSNARKs could mitigate this issue by splitting up the sensitive data (e.g., model weights) into secret shares and distributing them among the MPC network.
Data Ownership
Data per se doesn’t contain much value. Only when data can be combined and computed on is value created. With handling data, though, comes responsibility. Data protection rights around the globe are getting stricter, exposing data processors to regulatory risks.
Instead of handling data in plain, coSNARKs can be used to perform computations on encrypted (secret shared) data only. As seen in the case of Worldcoin, which handles highly sensitive data in the form of biometric iris scans, MPC used in coSNARKs removes the need for storing the data itself. The required computations, namely “is this iris hash already part of the dataset?”, can still be performed.
Data can remain under the user’s control while still contributing to economic activities through computation on this data. coSNARKs enable composable private onchain state. Our demo application, the Max Pick Challenge, demonstrates how user input is kept private while still being used and compared in the collaborative guessing game. The highest unique guess can only be determined when all guesses are compared to each other. However, the game only works if the actual guess is not leaked to the public. With today’s blockchain solutions one wouldn’t be able to build these types of applications; coSNARKs open up an entirely new design space for how data can be brought to and used onchain.
Gaming
Blockchain gaming aims to combine the benefits of decentralization and user empowerment with a great in-game experience. The fully transparent and auditable nature of blockchains can contribute a lot to fair game designs. However, certain games require a degree of privacy or they don’t work as expected. If an onchain transaction contains the user’s next move, it is accessible to everyone else, including the player’s opponent. If the game design now gives the opponent the chance to respond to the next move before it is fully executed, the first player has a big disadvantage, having essentially leaked their plan to the opponent.
A simple example is the pen & paper game Battleship. Both players secretly position their battleships on a grid. Every round, one player “attacks” a position on the opponent’s grid, trying to hit a battleship. If the positioning of the ships took place via public onchain transactions, all information would already be leaked, and the game couldn’t be played.
As a consequence, information like this must be kept secret while still being “computable”. coSNARKs can provide both. The players secret-share the ship positions with the MPC nodes. Every “attack” is also sent to the nodes, which compute collaboratively whether a ship was hit or not.
From simple pen & paper to highly sophisticated strategy games, coSNARKs can be the missing piece to boost the creation and adoption of onchain games.
Quick Start
Collaborative Circom is an implementation of collaborative SNARKs, with a focus on the Circom framework. In contrast to traditional SNARKs, which are run by a single prover, collaborative SNARKs are executed using a multi-party computation protocol.
If you just want to get your hands dirty as fast as possible, here is a rundown on how to collaboratively prove the Multiplier2 example from the Circom documentation.
First of all, here is the relevant Circom file:
pragma circom 2.0.0;
/*This circuit template checks that c is the multiplication of a and b.*/
template Multiplier2 () {
// Declaration of signals.
signal input a;
signal input b;
signal output c;
// Constraints.
c <== a * b;
}
component main{public [b]} = Multiplier2();
This circuit proves that we know two numbers that factor the output number c. We also reveal one of the numbers we used to factor c. This is not really impressive, but we stick to the classics for explanations! Copy the code and put it in a file named `multiplier2.circom`.
Compile the Circuit
In the first step, we compile an `.r1cs` file using Circom and create a verification/proving key using SnarkJS. To compile the `.r1cs` file, open your terminal (after installing Circom) and type:
circom multiplier2.circom --r1cs
You will find a file called `multiplier2.r1cs` in your working folder. To create the keys, you can either follow the Circom documentation or download the two keys from our GitHub, where we created the keys already (you will need `multiplier2.zkey` and `verification_key.json`).
Split the Input
Ok, after we finished the setup, we need to prepare the inputs for the witness extension. If you have read the Circom documentation (or used Circom in the past), you will remember a step between compiling the circuit and the actual proving: the witness extension (or "computing the witness", as Circom calls it).
We prepare an input file and call it `input.json`:
{"a": "3", "b": "11"}
Remember that `b` is a public input, as defined by our circuit.
As we want to execute an MPC protocol, we have to split the input for the parties. At the moment we support 3 parties for the witness extension. To do that, execute the following commands:
$ mkdir out
$ co-circom split-input --circuit multiplier2.circom --input input.json --protocol REP3 --curve BN254 --out-dir out/
INFO co_circom: 275: Wrote input share 0 to file out/input.json.0.shared
INFO co_circom: 275: Wrote input share 1 to file out/input.json.1.shared
INFO co_circom: 275: Wrote input share 2 to file out/input.json.2.shared
INFO co_circom: 277: Split input into shares successfully
$ ls out/
input.json.0.shared
input.json.1.shared
input.json.2.shared
This command secret shares the private inputs (everything that is not explicitly public) and creates a `.json` file for each of the three parties, containing the shared and the public values.
Witness Extension
Now we have to compute the extended witness. In a real-world setting you would have to send the input files from the previous step to the parties.
To achieve that, we need another config file for every party, namely the network config (you can read an in-depth explanation of the config here). You can copy-paste the config from here and call it `party0.toml` for party 0, and so on:
my_id = 0
bind_addr = "0.0.0.0:10000"
key_path = "data/key0.der"
[[parties]]
id = 0
dns_name = "localhost:10000"
cert_path = "data/cert0.der"
[[parties]]
id = 1
dns_name = "localhost:10001"
cert_path = "data/cert1.der"
[[parties]]
id = 2
dns_name = "localhost:10002"
cert_path = "data/cert2.der"
You can download the TLS certificates from our GitHub and put them under `data/`.
We move the `.toml` files to `configs/` and execute the following command (for every party):
$ co-circom generate-witness --input out/input.json.0.shared --circuit multiplier2.circom --protocol REP3 --curve BN254 --config configs/party0.toml --out out/witness.wtns.0.shared
INFO co_circom: 365: Witness successfully written to out/witness.wtns.0.shared
For brevity we only showed the command for the 0th party. You have to call it for all three parties in parallel.
After all parties have finished successfully, you will have three witness files in your `out/` folder. Each one of them contains a share of the extended witness.
Prove the Circuit
We need another MPC step to finally get our coSNARK proof. We can reuse TLS certificates and the network config from the previous step. Also, we finally need the proving key from the very first step! In your terminal execute the following command:
$ co-circom generate-proof --witness out/witness.wtns.0.shared --zkey multiplier2.zkey --protocol REP3 --curve BN254 --config configs/party0.toml --out proof.0.json --public-input public_input.json
INFO co_circom: 418: Wrote proof to file proof.0.json
INFO co_circom: 438: Proof generation finished successfully
Again, for brevity, we only gave the command for party 0. You know the drill, all at the same time.
The three proofs produced by the separate parties are equivalent and valid Groth16 proofs. Congratulations, you did it 🎉
You will find another file, namely `public_input.json`. This file contains all public information necessary to verify the proof, which, in our case, means:
["33","11"]
This is the factored number and the public input `b`.
Verify the Proof
To verify, we can either use snarkjs or the `co-circom` binary.
$ co-circom verify --proof proof.0.json --vk verification_key.json --public-input public_input.json --curve BN254
co_circom: 483: Proof verified successfully
$ snarkjs groth16 verify verification_key.json public_input.json proof.0.json
[INFO] snarkJS: OK!
For a full shell script executing all of the commands at once, have a look at our GitHub. In this folder you will find this exact example, and some more.
And now you can dive into the rest of the book 🦀
Installation
This section will help you set up the co-circom toolchain.
Prerequisites
To use co-circom, you need to install Rust, Circom, and SnarkJS. Here's a brief overview of why each tool is necessary:
- Rust: Required for building and running components of co-circom.
- Circom: Needed to compile a circuit into a `.r1cs` file.
- SnarkJS: Used to create the proving and verification keys.
Follow these steps to install the required tools:
- Install Rust: Visit the official Rust site for detailed installation instructions.
- Install Circom and SnarkJS: Refer to the Circom documentation for guidance on installing Circom and SnarkJS.
These resources will provide the necessary information to get your environment set up for using co-circom.
Compile from Source
First, download the source from GitHub. We tested the compilation on Ubuntu 22.04.
git clone git@github.com:TaceoLabs/collaborative-circom.git
After downloading the source, build the toolchain simply by typing:
cargo build --release
You can find the `co-circom` binary under `target/release/`.
Download Binary from Release
You can find the latest release here. Download the binary for your operating system, then extract it from the archive:
tar xf co-circom-YOUR_ARCHITECTURE.tar.gz
Make the binary executable (if necessary):
chmod +x co-circom
Usage
This section is empty at the moment 😭
It will be updated in the course of the next weeks, so please be patient!
For the time being we recommend checking out the Quick Start Guide or the examples folder on our GitHub, where we provide different bash scripts to prove some Circom files.
Additionally, have a look at the CLI commands and the additional material!
co-circom CLI
Usage: co-circom <COMMAND>
Commands:
  split-witness       Splits an existing witness file generated by Circom into secret shares for use in MPC
  split-input         Splits a JSON input file into secret shares for use in MPC
  merge-input-shares  Merge multiple shared inputs received from multiple parties into a single one
  generate-witness    Evaluates the extended witness generation for the specified circuit and input share in MPC
  translate-witness   Translates the witness generated with one MPC protocol to a witness for a different one
  generate-proof      Evaluates the prover algorithm for the specified circuit and witness share in MPC
  verify              Verification of a Circom proof
  help                Print this message or the help of the given subcommand(s)
Options:
  -h, --help     Print help
  -V, --version  Print version
split-input
The aim of the `split-input` command is to take a traditional Circom `input.json` file and secret-share it to a number of participants.
Example
co-circom split-input --circuit test_vectors/poseidon/circuit.circom --link-library test_vectors/poseidon/lib --input test_vectors/poseidon/input.json --protocol REP3 --curve BN254 --out-dir test_vectors/poseidon
The above command takes the input `test_vectors/poseidon/input.json` for the Circom circuit defined in `test_vectors/poseidon/circuit.circom`, with additional required Circom library files in `test_vectors/poseidon/lib`, and secret shares them using the `REP3` MPC protocol. This produces 3 shares `input.json.0.shared`, `input.json.1.shared`, `input.json.2.shared` in the output directory.
These shares can be handed to the 3 different MPC parties for the witness generation phase.
Reference
$ co-circom split-input --help
Splits a JSON input file into secret shares for use in MPC
Usage: co-circom split-input [OPTIONS] --input <INPUT> --circuit <CIRCUIT> --protocol <PROTOCOL> --curve <CURVE> --out-dir <OUT_DIR>
Options:
      --input <INPUT>                The path to the input JSON file
      --circuit <CIRCUIT>            The path to the circuit file
      --link-library <LINK_LIBRARY>  The path to Circom library files
      --protocol <PROTOCOL>          The MPC protocol to be used [possible values: REP3, SHAMIR]
      --curve <CURVE>                The pairing friendly curve to be used [possible values: BN254, BLS12-381]
      --out-dir <OUT_DIR>            The path to the (existing) output directory
  -h, --help                         Print help
merge-input-shares
The aim of the `merge-input-shares` command is to take input shares originating from multiple parties and merge them into a single input share file to be used for witness generation.
A use case for this would be to have multiple parties provide different parts of the input to the MPC computation parties.
Example
co-circom merge-input-shares --inputs test_vectors/multiplier2/input0.json.0.shared --inputs test_vectors/multiplier2/input1.json.0.shared --protocol REP3 --curve BN254 --out test_vectors/multiplier2/input.json.0.shared
The above command takes the two input shares `input0.json.0.shared` and `input1.json.0.shared` (note both are intended for party 0) and combines them into a single input share `input.json.0.shared`.
Reference
$ co-circom merge-input-shares --help
Merge multiple shared inputs received from multiple parties into a single one
Usage: co-circom merge-input-shares [OPTIONS] --protocol <PROTOCOL> --curve <CURVE> --out <OUT>
Options:
      --inputs <INPUTS>      The path to the input JSON file
      --protocol <PROTOCOL>  The MPC protocol to be used [possible values: REP3, SHAMIR]
      --curve <CURVE>        The pairing friendly curve to be used [possible values: BN254, BLS12-381]
      --out <OUT>            The output file where the merged input share is written to
  -h, --help                 Print help
split-witness
The aim of the `split-witness` command is to take a traditional Circom `witness.wtns` witness file and secret-share it to a number of participants.
Example
co-circom split-witness --witness test_vectors/poseidon/witness.wtns --r1cs test_vectors/poseidon/poseidon.r1cs --protocol REP3 --curve BN254 --out-dir test_vectors/poseidon
The above command takes the witness file `test_vectors/poseidon/witness.wtns` for the Circom circuit defined in `test_vectors/poseidon/circuit.circom`, with corresponding R1CS file `test_vectors/poseidon/poseidon.r1cs`, and secret shares it using the `REP3` MPC protocol. This produces 3 shares `witness.wtns.0.shared`, `witness.wtns.1.shared`, `witness.wtns.2.shared` in the output directory.
These shares can be handed to the 3 different MPC parties for the proof generation phase.
Reference
$ co-circom split-witness --help
Splits an existing witness file generated by Circom into secret shares for use in MPC
Usage: co-circom split-witness [OPTIONS] --witness <WITNESS> --r1cs <R1CS> --protocol <PROTOCOL> --curve <CURVE> --out-dir <OUT_DIR>
Options:
      --witness <WITNESS>              The path to the input witness file generated by Circom
      --r1cs <R1CS>                    The path to the r1cs file, generated by the Circom compiler
      --protocol <PROTOCOL>            The MPC protocol to be used [possible values: REP3, SHAMIR]
      --curve <CURVE>                  The pairing friendly curve to be used [possible values: BN254, BLS12-381]
      --out-dir <OUT_DIR>              The path to the (existing) output directory
  -t, --threshold <THRESHOLD>          The threshold of tolerated colluding parties [default: 1]
  -n, --num-parties <NUM_PARTIES>      The number of parties [default: 3]
  -h, --help                           Print help
generate-witness
The aim of the `generate-witness` command is to generate a secret-shared witness file in MPC using secret shares of the input.
Example
co-circom generate-witness --input test_vectors/poseidon/input.json.0.shared --circuit test_vectors/poseidon/circuit.circom --link-library test_vectors/poseidon/lib --protocol REP3 --curve BN254 --config configs/party1.toml --out test_vectors/poseidon/witness.wtns.0.shared
The above command takes a shared input file `input.json.0.shared` for the circuit `circuit.circom`, with required Circom library files in `test_vectors/poseidon/lib` and the given network config, and outputs the witness share to `test_vectors/poseidon/witness.wtns.0.shared`.
Reference
$ co-circom generate-witness --help
Evaluates the extended witness generation for the specified circuit and input share in MPC
Usage: co-circom generate-witness [OPTIONS] --input <INPUT> --circuit <CIRCUIT> --protocol <PROTOCOL> --curve <CURVE> --config <CONFIG> --out <OUT>
Options:
      --input <INPUT>                The path to the input share file
      --circuit <CIRCUIT>            The path to the circuit file
      --link-library <LINK_LIBRARY>  The path to Circom library files
      --protocol <PROTOCOL>          The MPC protocol to be used [possible values: REP3, SHAMIR]
      --curve <CURVE>                The pairing friendly curve to be used [possible values: BN254, BLS12-381]
      --config <CONFIG>              The path to the MPC network configuration file
      --out <OUT>                    The output file where the final witness share is written to
  -h, --help                         Print help
translate-witness
The aim of the `translate-witness` command is to take a witness file `witness.wtns` generated with one MPC protocol and translate it into a witness file for a different MPC protocol.
Example
co-circom translate-witness --witness test_vectors/poseidon/witness.wtns --src-protocol REP3 --target-protocol SHAMIR --curve BN254 --config configs/party1.toml --out test_vectors/poseidon/shamir_witness.wtns
The above command takes the witness file `test_vectors/poseidon/witness.wtns`, which was generated with the source MPC protocol `REP3`, and translates it into the witness file `test_vectors/poseidon/shamir_witness.wtns`, which is suitable for the target MPC protocol `SHAMIR`. The translation process requires network interaction, thus a networking config is required as well.
Reference
$ co-circom translate-witness --help
Translates the witness generated with one MPC protocol to a witness for a different one
Usage: co-circom translate-witness --witness <WITNESS> --src-protocol <SRC_PROTOCOL> --target-protocol <TARGET_PROTOCOL> --curve <CURVE> --config <CONFIG> --out <OUT>
Options:
      --witness <WITNESS>                  The path to the witness share file
      --src-protocol <SRC_PROTOCOL>        The MPC protocol that was used for the witness generation [possible values: REP3, SHAMIR]
      --target-protocol <TARGET_PROTOCOL>  The MPC protocol to be used for the proof generation [possible values: REP3, SHAMIR]
      --curve <CURVE>                      The pairing friendly curve to be used [possible values: BN254, BLS12-381]
      --config <CONFIG>                    The path to the MPC network configuration file
      --out <OUT>                          The output file where the final witness share is written to
  -h, --help                               Print help
generate-proof
The aim of the `generate-proof` command is to run proof generation in MPC using the provided public inputs and witness shares.
Example
co-circom generate-proof --witness test_vectors/poseidon/witness.wtns.0.shared --zkey test_vectors/poseidon/poseidon.zkey --protocol REP3 --config configs/party1.toml --out proof.json --public-input public_input.json
The above command takes a witness share `test_vectors/poseidon/witness.wtns.0.shared`, a traditional Circom `.zkey` file, and a networking config, and produces a Circom-compatible proof `proof.json` with a Circom-compatible public input file `public_input.json`.
Reference
$ co-circom generate-proof --help
Evaluates the prover algorithm for the specified circuit and witness share in MPC
Usage: co-circom generate-proof [OPTIONS] --witness <WITNESS> --zkey <ZKEY> --protocol <PROTOCOL> --curve <CURVE> --config <CONFIG>
Options:
      --witness <WITNESS>            The path to the witness share file
      --zkey <ZKEY>                  The path to the proving key (.zkey) file, generated by the snarkjs setup phase
      --protocol <PROTOCOL>          The MPC protocol to be used [possible values: REP3, SHAMIR]
      --curve <CURVE>                The pairing friendly curve to be used [possible values: BN254, BLS12-381]
      --config <CONFIG>              The path to the MPC network configuration file
      --out <OUT>                    The output file where the final proof is written to. If not passed, this party will not write the proof to a file
      --public-input <PUBLIC_INPUT>  The output JSON file where the public inputs are written to. If not passed, this party will not write the public inputs to a file
  -t, --threshold <THRESHOLD>        The threshold of tolerated colluding parties [default: 1]
  -h, --help                         Print help
verify
The aim of the `verify` command is to verify a Groth16 Circom proof using the provided verification key and public inputs.
Example
co-circom verify --proof proof.json --vk test_vectors/multiplier2/verification_key.json --public-input public_input.json --curve BN254
The above command verifies the proof in `proof.json` using the verification key `test_vectors/multiplier2/verification_key.json` and public input `public_input.json`.
Reference
$ co-circom verify --help
Verification of a Circom proof
Usage: co-circom verify --proof <PROOF> --curve <CURVE> --vk <VK> --public-input <PUBLIC_INPUT>
Options:
      --proof <PROOF>                The path to the proof file
      --curve <CURVE>                The pairing friendly curve to be used [possible values: BN254, BLS12-381]
      --vk <VK>                      The path to the verification key file
      --public-input <PUBLIC_INPUT>  The path to the public input JSON file
  -h, --help                         Print help
Network Configuration
`co-circom` requires a network configuration file for establishing connections to the other MPC parties for the `generate-witness` and `generate-proof` commands.
The network configuration file is a TOML file with the following structure:
my_id = 0
bind_addr = "0.0.0.0:10000"
key_path = "data/key0.der"
[[parties]]
id = 0
dns_name = "localhost:10000"
cert_path = "data/cert0.der"
[[parties]]
id = 1
dns_name = "localhost:10001"
cert_path = "data/cert1.der"
[[parties]]
id = 2
dns_name = "localhost:10002"
cert_path = "data/cert2.der"
See the example configuration in the `collaborative-circom/examples/configs` folder, with pre-generated certificates and keys in the `collaborative-circom/examples/data` folder.
Keys
- `my_id` is the party id of the party executing the `co-circom` binary using the configuration file.
- `bind_addr` is the local socket address this party binds to and listens on for incoming connections from other parties.
- `key_path` is a path to a DER encoded PKCS8 private key file corresponding to the public key used in the certificate for our party.
- `parties` is an array of tables containing the public information of each MPC party.
  - `id`: the party id of the MPC party
  - `dns_name`: the hostname/port combination where the party is publicly reachable. The hostname must be a valid CN or SNI in the used certificate.
  - `cert_path`: a path to the DER encoded certificate (chain) file that is used to authenticate the connection with the party and to establish the secure communication channel.
MPCVM
Design Choices
Problematic Circom Operations
The Circom language was designed for zero-knowledge circuits, and while its internal model is pretty similar to the arithmetic circuit model that is native to MPC, there are a few assumptions the Circom language makes that pose issues for execution in MPC. Most of them arise from conditional execution of code in Circom. While there are some conditions placed on conditional code in Circom (it is not allowed to modify the structure of the circuit, i.e., the same circuit has to be produced regardless of the input), it allows conditional execution of unconstrained code. Unconstrained code produces helper variables that may later be used to constrain actual signal values. A common example is the bit-decomposition of a number, which is computed using unconstrained code; later on, it is proven by adding constraints that the sum of the individual bits, multiplied by their respective powers of two, equals the original value, as this is much cheaper in zero-knowledge than directly computing the bit-decomposition.
Conditional Branches
Ternary Operators
A special case of conditional branches is the ternary operator.
Division in inactive branches
One further complication of executing both inactive and active branches is that all operations must be computable in both branches. A common operation that poses problems is field division, or more concretely, the associated inversion of the divisor, as this may fail if the divisor is 0. We solve this by conditionally loading the real input or 1, depending on if the current branch is active or not, using a conditional multiplexer gate in the MPC circuit.
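The multiplexer trick can be sketched in plain (non-shared) field arithmetic. The names `safe_divisor` and `branch_active` are ours for illustration; in the real MPC circuit the branch flag would itself be secret-shared and the mux would be a multiplication gate:

```python
# Sketch of the multiplexer trick for field inversion in inactive branches.
p = 2**61 - 1  # arbitrary prime for the sketch

def safe_divisor(divisor, branch_active):
    """Mux: load the real divisor in the active branch, 1 otherwise,
    so the inversion below never hits 0 in an inactive branch."""
    b = 1 if branch_active else 0
    return (b * divisor + (1 - b) * 1) % p

def inv(x):
    return pow(x, p - 2, p)  # Fermat inversion; undefined for x == 0

assert inv(safe_divisor(0, branch_active=False)) == 1      # inactive branch divides by 1
assert (5 * inv(safe_divisor(5, branch_active=True))) % p == 1
```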
Circom
We refer to the Circom documentation at https://docs.circom.io/.
Secure Multiparty Computation (MPC)
Currently, proof generation is supported with two different MPC protocols:
- 3-party replicated secret sharing (based on ABY3^{1}) with semi-honest security
- N-party Shamir secret sharing^{2} (based on DN07^{3}) with semi-honest security
Notation
With $[x]$ we denote that $x∈F_{p}$ is additively secret-shared amongst $n$ parties, such that $[x]=(x_{1},x_{2},...,x_{n})$ and $x=∑_{i}x_{i}$. With $[x]_{B}$ we denote that $x∈F_{p}$ is binary secret-shared amongst $n$ parties, such that $[x]_{B}=(x_{1},x_{2},...,x_{n})$ and $x=x_{1}⊕x_{2}⊕...⊕x_{n}$. With $[x]_{t}$ we denote that $x∈F_{p}$ is Shamir secret-shared amongst $n$ parties, such that $[x]_{t}=(x_{1},x_{2},...,x_{n})$ and $x_{i}=Q(i)$, where $Q(X)$ is a polynomial of degree $t$ with $x=Q(0)$. Similarly, for a group element $X∈G$ in an additive group $G$ (e.g., elliptic curve groups) we denote by $[X]$ its additive secret sharing amongst $n$ parties and by $[X]_{t}$ its Shamir sharing, such that $[X]=(X_{1},X_{2},...,X_{n})$. Furthermore, indices of shares are taken modulo the number of parties.
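The $[x]_{t}$ notation can be made concrete with a minimal (illustrative, not hardened) Shamir sharing; the small Mersenne prime is an arbitrary stand-in for the curve's scalar field:

```python
# Sketch of degree-t Shamir sharing: x_i = Q(i) with Q(0) = x.
import random

p = 2**61 - 1

def shamir_share(x, n=3, t=1):
    coeffs = [x] + [random.randrange(p) for _ in range(t)]  # Q(0) = x
    Q = lambda X: sum(c * pow(X, j, p) for j, c in enumerate(coeffs)) % p
    return [Q(i) for i in range(1, n + 1)]                  # party i gets Q(i)

def reconstruct(points):
    """Lagrange interpolation at 0 over (i, x_i) points."""
    res = 0
    for i, xi in points:
        num, den = 1, 1
        for j, _ in points:
            if j != i:
                num = num * (-j) % p
                den = den * (i - j) % p
        res = (res + xi * num * pow(den, p - 2, p)) % p
    return res

shares = shamir_share(123, n=3, t=1)
# any t+1 = 2 shares suffice to reconstruct x = Q(0)
assert reconstruct(list(zip([1, 2], shares[:2]))) == 123
```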
3Party Replicated Sharing
Replicated secret sharing is based on additive secret sharing, with the twist that each party holds multiple additive shares. Thus, in the 3-party case a secret $x∈F_{p}$ is shared the following way. First, $x$ is split into three random shares $x_{1},x_{2},x_{3}∈F_{p}$ such that $x=x_{1}+x_{2}+x_{3}modp$, and each party gets two shares:
$P_{1}:(x_{1},x_{3})$
$P_{2}:(x_{2},x_{1})$
$P_{3}:(x_{3},x_{2})$
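A quick sanity check of this layout (illustrative Python with an arbitrary prime): any two parties together hold all three additive shares and could reconstruct $x$.

```python
# Replicated 3-party layout: party i holds (x_i, x_{i-1}).
import random

p = 2**61 - 1
x = 42
x1, x2 = random.randrange(p), random.randrange(p)
x3 = (x - x1 - x2) % p

parties = [(x1, x3), (x2, x1), (x3, x2)]  # P_1, P_2, P_3

# e.g. P_1 and P_2 jointly know x1, x3 and x2, and can reconstruct x
assert (parties[0][0] + parties[0][1] + parties[1][0]) % p == x
```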
Rng Setup
Random values are required during many parts of MPC executions. For cheaper randomness generation, correlated random number generators are set up before protocol execution.
Random Values
In order to create random shares $(r_{i},r_{i−1})$, random additive shares of $0$ (i.e., $r_{i}−r_{i−1}$), or random binary shares of $0$ (i.e., $r_{i}⊕r_{i−1}$) without interaction, Rep3 sets up correlated random number generators during the setup phase. Each party $P_{i}$ chooses a seed $s_{i}$ and sends it to the next party $P_{i+1}$. Thus, each party holds two seeds and can set up two RNGs, where two parties are able to create the same random numbers:
$P_{1}:(RNG_{1},RNG_{3})$
$P_{2}:(RNG_{2},RNG_{1})$
$P_{3}:(RNG_{3},RNG_{2})$
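The zero-share trick can be checked directly: since both holders of seed $s_{i}$ derive the same stream, the differences $r_{i}=RNG_{i}−RNG_{i−1}$ telescope to zero (sketch with an arbitrary prime and fixed seeds):

```python
# Correlated RNGs yield additive shares of zero without interaction.
import random

p = 2**61 - 1
# seed s_i is known to P_i and P_{i+1}; both derive the same stream,
# so modeling each stream once here is enough for the sketch
rngs = [random.Random(s) for s in (1, 2, 3)]
draws = [rng.randrange(p) for rng in rngs]

# party i computes r_i = RNG_i() - RNG_{i-1}(); the sum telescopes to 0
r = [(draws[i] - draws[i - 1]) % p for i in range(3)]
assert sum(r) % p == 0
```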
Binary To Arithmetic Conversion
For the binary to arithmetic conversion, we need correlated randomness as well. The goal is to set up RNGs such that:
$P_{1}:(RNG1_{1},RNG1_{3}),(RNG2_{1},RNG2_{2},RNG2_{3})$
$P_{2}:(RNG1_{1},RNG1_{2},RNG1_{3}),(RNG2_{2},RNG2_{1})$
$P_{3}:(RNG1_{1},RNG1_{2},RNG1_{3}),(RNG2_{1},RNG2_{2},RNG2_{3})$
In other words, $P_{2}$ and $P_{3}$ can use RNG1 to create the same field element, while all parties can sample valid shares for it. Similarly, $P_{1}$ and $P_{3}$ can use RNG2 to create the same field element, while all parties can sample valid shares for it. This setup can be achieved by sampling seeds from the already set-up RNGs for shared random values and resharing the seeds accordingly.
Supported operations
Linear Operations
Due to being based on additive secret sharing, linear operations can be performed on shares without party interaction.
 Constant addition: $[x]+y=(x_{1}+y,x_{2},x_{3})$
 Share addition: $[x]+[y]=(x_{1}+y_{1},x_{2}+y_{2},x_{3}+y_{3})$
 Constant Multiplication: $[x]⋅y=(x_{1}⋅y,x_{2}⋅y,x_{3}⋅y)$
Similarly, linear operations can be computed locally in the binary domain as well.
 Constant addition: $[x]_{B}⊕y=(x_{1}⊕y,x_{2},x_{3})$
 Share addition: $[x]_{B}⊕[y]_{B}=(x_{1}⊕y_{1},x_{2}⊕y_{2},x_{3}⊕y_{3})$
 Constant Multiplication: $[x]_{B}∧y=(x_{1}∧y,x_{2}∧y,x_{3}∧y)$
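These local rules are easy to check in the arithmetic domain with a small Python sketch. One subtlety worth showing: a constant is folded into the component $x_{1}$, which both $P_{1}$ and $P_{2}$ store, so it must be added in both places (example prime, illustrative names):

```python
import random

p = 97  # small example prime

def share(x):
    x1, x2 = random.randrange(p), random.randrange(p)
    x3 = (x - x1 - x2) % p
    return [(x1, x3), (x2, x1), (x3, x2)]  # P1, P2, P3

def reconstruct(sh):
    return sum(s[0] for s in sh) % p

# [x] + y: the constant is folded into x1, which P1 and P2 both store
def add_const(sh, y):
    (a1, b1), (a2, b2), (a3, b3) = sh
    return [((a1 + y) % p, b1), (a2, (b2 + y) % p), (a3, b3)]

# [x] + [y] and [x] * y are purely component-wise
def add_shares(sx, sy):
    return [((a + c) % p, (b + d) % p) for (a, b), (c, d) in zip(sx, sy)]

def mul_const(sh, y):
    return [((a * y) % p, (b * y) % p) for a, b in sh]

sx, sy = share(10), share(20)
assert reconstruct(add_const(sx, 5)) == 15
assert reconstruct(add_shares(sx, sy)) == 30
assert reconstruct(mul_const(sx, 7)) == 70
```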
Multiplications
One main advantage of replicated secret sharing is the presence of a simple multiplication protocol. First, due to having two additive shares, each party can calculate an additive share of the result without interaction. For $[z]=[x]⋅[y]$, $z_{i}=x_{i}⋅y_{i}+x_{i}⋅y_{i−1}+x_{i−1}⋅y_{i}$ is a valid additive share of $z$.
Thus, a multiplication involves a local operation followed by a resharing of the result, which translates the additive share back into a replicated share. Resharing requires randomizing the share so as not to leak any information to the other parties. For that reason, a fresh random share of zero is added, which can be sampled locally without party interaction (see RNG setup).
Thus, party $P_{i}$ calculates:
$r_{i}=RNG_{i}−RNG_{i−1}$
$z_{i}=x_{i}⋅y_{i}+x_{i}⋅y_{i−1}+x_{i−1}⋅y_{i}+r_{i}$
$z_{i−1}=SendReceive(z_{i})$
The resharing, thereby, is simply implemented as $P_{i}$ sending $z_{i}$ to $P_{i+1}$.
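The whole multiplication (local cross terms, rerandomization with a zero share, resharing) can be simulated in a few lines of Python. The zero shares are sampled directly here rather than via the correlated RNGs; names and the example prime are illustrative:

```python
import random

p = 97  # small example prime

def share(x):
    x1, x2 = random.randrange(p), random.randrange(p)
    x3 = (x - x1 - x2) % p
    return [(x1, x3), (x2, x1), (x3, x2)]  # P1, P2, P3

def zero_shares():
    """r_i = RNG_i - RNG_{i-1}; in the real protocol this is interaction-free."""
    r = [random.randrange(p) for _ in range(3)]
    return [(r[i] - r[i - 1]) % p for i in range(3)]

def mul(sx, sy):
    r = zero_shares()
    # local step: each party computes a randomized additive share of z
    z = [(a * c + a * d + b * c + r[i]) % p
         for i, ((a, b), (c, d)) in enumerate(zip(sx, sy))]
    # resharing: P_i sends z_i to P_{i+1}, so P_i also learns z_{i-1}
    return [(z[i], z[i - 1]) for i in range(3)]

sx, sy = share(6), share(7)
assert sum(s[0] for s in mul(sx, sy)) % p == 42
```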
AND
AND gates follow directly from multiplications:
$r_{i}=RNG_{i}⊕RNG_{i−1}$
$z_{i}=(x_{i}∧y_{i})⊕(x_{i}∧y_{i−1})⊕(x_{i−1}∧y_{i})⊕r_{i}$
$z_{i−1}=SendReceive(z_{i})$
Arithmetic to Binary Conversion
In replicated sharing over rings $Z_{2^{k}}$ (e.g., ABY3), arithmetic to binary conversion of a share $[x]$ is implemented by first locally splitting the shares into valid binary sharings, i.e., $[x_{1}]_{B}=(x_{1},0,0)$, $[x_{2}]_{B}=(0,x_{2},0)$, and $[x_{3}]_{B}=(0,0,x_{3})$, and combining them in MPC using binary addition circuits. Then $[x]_{B}=BinAdd(BinAdd([x_{1}]_{B},[x_{2}]_{B}),[x_{3}]_{B})$ is a valid binary sharing of $x$. This approach works because binary addition circuits implicitly perform modular reductions mod $2^{k}$.
In $F_{p}$, on the other hand, we have to include the mod $p$ reductions manually in the circuit. For improved performance, we use the following protocol to translate $[x]$:
 $P_{i}$ samples $r_{i}$ to be a new random binary share of $0$
 Set $[x_{3}]_{B}=(0,0,x_{3})$
 $P_{2}$ calculates $t=(x_{1}+x_{2} mod p)⊕r_{2}$
 It follows that $(r_{1},t,r_{3})$ is a valid binary sharing of $(x_{1}+x_{2} mod p)$
 $P_{i}$ sends its share of $(r_{1},t,r_{3})$ to $P_{i+1}$
 Each party now has a valid binary replicated sharing $[x_{1,2}]_{B}$ of $(x_{1}+x_{2} mod p)$
 The parties compute a binary adder circuit of $k=⌈log_{2}(p)⌉$ bits to sum up $[x_{3}]_{B}$ and $[x_{1,2}]_{B}$ to get $[t_{1}]_{B}$ with $k+1$ bits (including the overflow bit).
 The parties compute the subtraction of $[t_{1}]_{B}−p$ inside a binary circuit to get $[t_{2}]_{B}$.
 The $(k+1)$th bit of $[t_{2}]_{B}$ indicates an overflow of the subtraction. If an overflow occurs, the result should be the first $k$ bits of $[t_{1}]_{B}$, otherwise the first $k$ bits of $[t_{2}]_{B}$. This if statement can be computed with a CMUX gate.
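The mod-$p$ correction logic of these steps can be checked in the clear with plain integers. In the sketch below everything runs on public values; in the real protocol every intermediate value is binary-shared and the adder/subtractor are Kogge-Stone circuits (names are illustrative):

```python
import random

p = 97               # small example prime
k = p.bit_length()   # k = ceil(log2(p)) = 7

def a2b_select(x1, x2, x3):
    """Mod-p correction logic of the A2B conversion, run in the clear."""
    x12 = (x1 + x2) % p              # computed by P2, then binary-shared
    t1 = x12 + x3                    # (k+1)-bit binary adder
    t2 = (t1 - p) % (1 << (k + 1))   # (k+1)-bit two's-complement subtraction
    overflow = (t2 >> k) & 1         # the (k+1)-th bit of t2
    # overflow set -> t1 < p, keep t1; otherwise keep t1 - p (i.e., t2)
    return (t1 if overflow else t2) & ((1 << k) - 1)

x = random.randrange(p)
x1, x2 = random.randrange(p), random.randrange(p)
x3 = (x - x1 - x2) % p
assert a2b_select(x1, x2, x3) == x
```

Since $x_{1,2}$ and $x_{3}$ are both below $p$, their sum is either $x$ or $x+p$, and the overflow bit of the subtraction selects the correct branch.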
Binary to Arithmetic Conversion
For the binary to arithmetic conversion, the general strategy for replicated sharing is the following. We sample random binary shares $[x_{2}]_{B}$ and $[x_{3}]_{B}$ using the special correlated randomness, such that $P_{3}$ gets both values in addition to its shares, while $P_{1}$ and $P_{2}$ get $x_{3}$ and $x_{2}$ in clear, respectively. Then we compute a binary circuit to add $[x_{1}]_{B}=BinAdd(BinAdd([x]_{B},[x_{2}]_{B}),[x_{3}]_{B})$. Finally, we open $[x_{1}]_{B}$ to $P_{1}$ and $P_{2}$. The arithmetic shares then are $[x]=(x_{1},−x_{2},−x_{3})$.
To account for modular reductions in finite fields, we follow a similar strategy as for the arithmetic to binary conversion to translate $[x]_{B}$:
 $P_{i}$ samples $r_{i}$ to be a new random binary share of $0$
 $P_{1}$ samples $x_{3}$ using RNG2 and sets $x_{3}=−x_{3}$;
 $P_{2}$ samples $x_{2}$ using RNG1 and sets $x_{2}=−x_{2}$;
 $P_{3}$ samples $x_{2}$ using RNG1 and $x_{3}$ using RNG2, sets $x_{2}=−x_{2}$, $x_{3}=−x_{3}$, and $t=(x_{2}+x_{3} mod p)⊕r_{3}$;
 It follows that $(r_{1},r_{2},t)$ is a valid binary sharing of $(x_{2}+x_{3} mod p)$
 $P_{i}$ sends its share of $(r_{1},r_{2},t)$ to $P_{i+1}$
 Each party now has a valid binary replicated sharing $[x_{2,3}]_{B}$ of $(x_{2}+x_{3} mod p)$
 The parties compute a binary adder circuit of $k=⌈log_{2}(p)⌉$ bits to sum up $[x]_{B}$ and $[x_{2,3}]_{B}$ to get $[t_{1}]_{B}$ with $k+1$ bits (including the overflow bit).
 The parties compute the subtraction of $[t_{1}]_{B}−p$ inside a binary circuit to get $[t_{2}]_{B}$.
 The $(k+1)$th bit of $[t_{2}]_{B}$ indicates an overflow of the subtraction. If an overflow occurs, $[x_{1}]_{B}$ should be the first $k$ bits of $[t_{1}]_{B}$, otherwise the first $k$ bits of $[t_{2}]_{B}$. This if statement can be computed with a CMUX gate.
 We open $[x_{1}]_{B}$ to $P_{1}$ and $P_{2}$
 The final sharing is $[x]=(x_{1},x_{2},x_{3})$ and each party has (only) access to the shares it requires for the replication.
Bit Injection
To translate a single shared bit $[x]_{B}$ to an arithmetic sharing, we perform share splitting and create valid arithmetic shares of the shares: $[x_{1}]=(x_{1},0,0)$, $[x_{2}]=(0,x_{2},0)$, and $[x_{3}]=(0,0,x_{3})$. Then, we combine the shares by calculating arithmetic XORs: $[x]=AXor(AXor([x_{1}],[x_{2}]),[x_{3}])$, where $AXor(a,b)=a+b−2⋅a⋅b$.
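The arithmetic XOR used here agrees with Boolean XOR whenever its inputs are bits, which is easy to verify exhaustively over all share combinations (Python sketch, example prime):

```python
from itertools import product

p = 97  # small example prime

def axor(a, b):
    """Arithmetic XOR: a + b - 2ab equals a ^ b whenever a, b are bits."""
    return (a + b - 2 * a * b) % p

# x is binary-shared as x = x1 ^ x2 ^ x3; after share splitting, the
# arithmetic combination must reproduce the same bit as a field element
for x1, x2, x3 in product((0, 1), repeat=3):
    assert axor(axor(x1, x2), x3) == x1 ^ x2 ^ x3
```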
Binary Addition Circuits
As mentioned in the arithmetic to binary conversions, we need binary addition circuits. Since the bitlength of the used prime fields is large, we use depth-optimized carry-lookahead adders for the conversions. Currently, we implement Kogge-Stone adders, since these can be computed efficiently using shifts and AND/XOR gates without extracting specific bits.
The general structure of a Kogge-Stone adder adding two binary values $x,y$ is to first compute $p[i]=x[i]⊕y[i]$ and $g[i]=x[i]∧y[i]$, where $x[i]$ is the $i$-th bit of $x$. Then, $p$ and $g$ are combined using a circuit of logarithmic depth (in the bitsize). This circuit is implemented in the kogge_stone_inner function.
For binary subtraction circuits, we essentially compute an addition circuit with the two's complement of $y$, i.e., $2^{k}+x−y$. If $y$ is public, $2^{k}−y$ can be computed directly and the result is fed into the Kogge-Stone adder. If $y$ is shared, we invert all $k$ bits and set the carry-in flag for the Kogge-Stone adder, which simulates the two's complement calculation. The set carry-in flag has the following effects: first, $g[0]$ must additionally be XORed with $p[0]$; finally, the LSB of the result of the Kogge-Stone circuit needs to be flipped.
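A word-level Kogge-Stone adder using only shifts, AND and XOR — the same gate types available on shared values — can be sketched in Python as follows. This is a plain-integer model of the construction, not the kogge_stone_inner implementation itself:

```python
def kogge_stone(x, y, k):
    """k-bit Kogge-Stone adder using only shifts, AND and XOR."""
    mask = (1 << k) - 1
    p = x ^ y  # propagate bits
    g = x & y  # generate bits (disjoint from p, so XOR acts as OR below)
    d = 1
    while d < k:
        g = g ^ (p & (g << d))  # pull in carries generated d positions below
        p = p & (p << d)        # extend propagate runs to distance 2d
        d <<= 1
    # a carry generated at position i enters the sum at position i+1
    return (x ^ y ^ (g << 1)) & mask

assert kogge_stone(13, 29, 8) == 42
```

The loop runs $⌈log_{2}(k)⌉$ times, which is why the adder has logarithmic depth.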
Reconstruction
Reconstruction of a value is implemented as $P_{i}$ sending $z_{i−1}$ to $P_{i+1}$. Then each party has all shares.
Security
Our implementation provides semi-honest security with honest majority, i.e., the scheme is secure if all parties follow the protocol honestly and no two servers collude.
Shamir Secret Sharing
Shamir secret sharing is a different way of instantiating a linear secret sharing scheme, based on polynomials. To share a value $x∈F_{p}$, one first samples a random polynomial of degree $t$ whose constant term is $x$. I.e., one samples $a_{1},a_{2},...,a_{t}$ randomly from $F_{p}$ and sets $Q(X)=x+a_{1}⋅X+a_{2}⋅X^{2}+...+a_{t}⋅X^{t}$. The share of party $i$ then is $Q(i)$. In other words, $[x]_{t}=(x_{1},x_{2},...,x_{n})$, where $x_{i}=Q(i)$.
Reconstruction then works via Lagrange interpolation from any $t+1$ shares: $x=∑_{i}λ_{i}x_{i}$, where $λ_{i}$ is the corresponding Lagrange coefficient.
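Sharing and Lagrange reconstruction can be sketched in a few lines of Python (example prime, illustrative names; `pow(·, -1, p)` computes a modular inverse):

```python
import random

p = 2**61 - 1  # example prime

def shamir_share(x, t, n):
    """Degree-t polynomial with constant term x; party i gets Q(i)."""
    coeffs = [x] + [random.randrange(p) for _ in range(t)]
    def Q(X):
        return sum(c * pow(X, j, p) for j, c in enumerate(coeffs)) % p
    return [Q(i) for i in range(1, n + 1)]

def lagrange_at_zero(points):
    """Interpolate Q(0) from t+1 points (i, Q(i))."""
    x = 0
    for i, yi in points:
        li = 1  # Lagrange coefficient lambda_i evaluated at 0
        for j, _ in points:
            if j != i:
                li = li * j % p * pow(j - i, -1, p) % p
        x = (x + yi * li) % p
    return x

shares = shamir_share(42, t=1, n=3)
# any t+1 = 2 of the 3 shares suffice for reconstruction
assert lagrange_at_zero(list(enumerate(shares, start=1))[:2]) == 42
```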
Supported operations
Linear Operations
Shamir's secret sharing allows, similar to additive sharing, to compute linear functions locally without party interaction:
 Constant addition: $[x]_{t}+y=(x_{1}+y,x_{2}+y,x_{3}+y)$
 Share addition: $[x]_{t}+[y]_{t}=(x_{1}+y_{1},x_{2}+y_{2},x_{3}+y_{3})$
 Constant Multiplication: $[x]_{t}⋅y=(x_{1}⋅y,x_{2}⋅y,x_{3}⋅y)$
Multiplications
Shamir secret sharing comes with a native multiplication protocol: $z_{i}=x_{i}⋅y_{i}$ is a valid share of $[z]_{2t}=[x]_{t}⋅[y]_{t}$. However, $z_{i}$ is a point on a polynomial of degree $2t$. In other words, the degree doubles with each multiplication, and twice as many parties ($2t+1$) are required to reconstruct the secret $z$. Thus, one needs to perform a degree reduction step in MPC before further computations. In DN07, this is done by sampling a random value which is shared both as a degree-$t$ ($[r]_{t}$) and a degree-$2t$ ($[r]_{2t}$) polynomial. Then, the parties open $[z]_{2t}+[r]_{2t}$ to $P_{1}$, who reconstructs it to $z′=z+r$. Then, $P_{1}$ shares $z′$ as a fresh degree-$t$ share to all parties, who calculate $[z]_{t}=[z′]_{t}−[r]_{t}$.
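The whole multiply-and-reduce flow can be simulated locally. In this Python sketch ($n=3$, $t=1$) the correlated pair $([r]_{t},[r]_{2t})$ is generated directly rather than via the RNG setup, and all names are illustrative:

```python
import random

p = 2**61 - 1  # example prime
n, t = 3, 1

def share(x, deg):
    """Shamir-share x with a random polynomial of the given degree."""
    coeffs = [x] + [random.randrange(p) for _ in range(deg)]
    return [sum(c * pow(i, j, p) for j, c in enumerate(coeffs)) % p
            for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at zero."""
    x = 0
    for i, yi in points:
        li = 1
        for j, _ in points:
            if j != i:
                li = li * j % p * pow(j - i, -1, p) % p
        x = (x + yi * li) % p
    return x

sx, sy = share(6, t), share(7, t)
z2t = [a * b % p for a, b in zip(sx, sy)]  # degree-2t shares of x*y

# DN07 degree reduction with a correlated pair ([r]_t, [r]_2t)
r = random.randrange(p)
rt, r2t = share(r, t), share(r, 2 * t)
masked = [(z + rr) % p for z, rr in zip(z2t, r2t)]
z_prime = reconstruct(list(enumerate(masked, start=1)))  # opened to P1
fresh = share(z_prime, t)                                # P1 reshares
zt = [(f - rr) % p for f, rr in zip(fresh, rt)]
assert reconstruct(list(enumerate(zt, start=1))[:t + 1]) == 42
```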
RNG Setup
For the degree reduction step after a multiplication we require degree$t$ and degree$2t$ sharings of the same random value $r$. We generate these values following the techniques proposed in DN07, to generate $t$ random pairs at once:
 Each party $P_{i}$ generates a random value $s_{i}$ and shares it as a degree-$t$ share $[s_{i}]_{t}$ and a degree-$2t$ share $[s_{i}]_{2t}$ with the other parties.
 After receiving all shares, each party sets $[s]_{t}=([s_{1}]_{t},[s_{2}]_{t},...,[s_{n}]_{t})^{T}$ and $[s]_{2t}=([s_{1}]_{2t},[s_{2}]_{2t},...,[s_{n}]_{2t})^{T}$.
 Calculate $([r_{1}]_{t},[r_{2}]_{t},...,[r_{t}]_{t})^{T}=M⋅[s]_{t}$ and $([r_{1}]_{2t},[r_{2}]_{2t},...,[r_{t}]_{2t})^{T}=M⋅[s]_{2t}$, where $M$ is a Vandermonde matrix over $F_{p}$.
 The pairs $([r_{i}]_{t},[r_{i}]_{2t})$ for $1≤i≤t$ are then valid random shares which can be used for resharing.
For simplicity we use the following Vandermonde matrix:
$$M=\begin{pmatrix}1&1&1&\cdots&1\\1&2&3&\cdots&n\\1&2^{2}&3^{2}&\cdots&n^{2}\\\vdots&\vdots&\vdots&\ddots&\vdots\\1&2^{t}&3^{t}&\cdots&n^{t}\end{pmatrix}$$
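A local simulation of this randomness extraction (Python sketch with $n=3$, $t=1$, applying the first $t$ Vandermonde rows; all names are illustrative): since $M$ is applied share-wise, each party only performs local linear algebra, and both resulting sharings open to the same random value.

```python
import random

p = 2**61 - 1  # example prime
n, t = 3, 1

def share(x, deg):
    coeffs = [x] + [random.randrange(p) for _ in range(deg)]
    return [sum(c * pow(i, j, p) for j, c in enumerate(coeffs)) % p
            for i in range(1, n + 1)]

def reconstruct(points):
    x = 0
    for i, yi in points:
        li = 1
        for j, _ in points:
            if j != i:
                li = li * j % p * pow(j - i, -1, p) % p
        x = (x + yi * li) % p
    return x

# every party contributes one random value, shared at both degrees
s = [random.randrange(p) for _ in range(n)]
st = [share(v, t) for v in s]       # st[i][k] = P_{k+1}'s share of s_i
s2t = [share(v, 2 * t) for v in s]

# first t rows of the Vandermonde matrix: M[l][i] = (i+1)^l
M = [[pow(i + 1, l, p) for i in range(n)] for l in range(t)]

# each party applies M locally to its column of shares
rt = [[sum(M[l][i] * st[i][k] for i in range(n)) % p for k in range(n)]
      for l in range(t)]
r2t = [[sum(M[l][i] * s2t[i][k] for i in range(n)) % p for k in range(n)]
       for l in range(t)]

# both sharings of each pair open to the same random value
for l in range(t):
    a = reconstruct(list(enumerate(rt[l], start=1))[:t + 1])
    b = reconstruct(list(enumerate(r2t[l], start=1)))
    assert a == b
```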
Reconstruction
Reconstruction is currently implemented as $P_{i}$ sending its share $x_{i}$ to the next $t$ parties. Then, each party has $t+1$ shares to reconstruct the secret.
Security
Our implementation provides semi-honest security with honest majority, i.e., the scheme is secure if all parties follow the protocol honestly and at most $t$ servers collude. $t$ can, thereby, be chosen as $t≤(n−1)/2$.
MPC for group operations
So far, we only discussed MPC for field elements in $F_{p}$. However, one can easily extend it to MPC over elements of a group $G$. W.l.o.g. we will use the notation for additive groups $G$ (e.g., elliptic curve groups). A secret share $[x]$ of $x∈F_{p}$ can be translated to a shared group element by $[X]=[x]⋅G$, where $G$ is a generator of $G$. Then, $[X]=(X_{1},X_{2},...,X_{n})$ is a valid share of $X=x⋅G$. Linear operations directly follow from the used linear secret sharing scheme: $[Z]=a⋅[X]+b⋅[Y]+C=(a⋅[x]+b⋅[y]+c)⋅G$. Shared scalar multiplications also follow from the secret sharing scheme: $[Z]=[x]⋅[Y]=[x]⋅[y]⋅G$.
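As a toy check of the lifting $[X]=[x]⋅G$, one can stand in for an elliptic-curve group with the additive group $Z_{p}$ and an arbitrary generator; a real deployment would of course use actual curve points (Python sketch, illustrative names):

```python
import random

p = 97  # example prime: the scalar field, and, as a toy stand-in for an
        # elliptic-curve group, also the additive group Z_p
G = 5   # arbitrary generator of the toy group

def share(x):
    x1, x2 = random.randrange(p), random.randrange(p)
    return [x1, x2, (x - x1 - x2) % p]

# lifting an additive sharing of x to a sharing of X = x*G is purely local
x = 42
xs = share(x)
Xs = [xi * G % p for xi in xs]
assert sum(Xs) % p == x * G % p
```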
Shamir vs Rep3
Shamir and Rep3 are both linear secret sharing schemes which provide semi-honest security with honest majority. However, they have some important differences.
 Shamir can be instantiated with $n≥3$ parties, while Rep3 is limited to $n=3$ parties.
 In Shamir, each share is just one field element $∈F_{p}$, while in Rep3 each share is composed of two field elements.
 In Shamir, the overhead on the CPU is significantly smaller compared to Rep3, where each operation is applied to two shares.
 Rep3 allows efficient arithmetictobinary conversions.
Witness Extension
Since Shamir sharing lacks an efficient arithmetic to binary conversion, we do not have a witness extension implementation for it at the moment. However, we provide a bridge implementation which can translate Rep3 shares to 3-party Shamir shares (with threshold/poly-degree $t=1$).
This bridge works by first letting $P_{i}$ translate its first additive share $x_{i}$ to a Shamir share by dividing it by the corresponding Lagrange coefficient. This, however, creates a 3-party Shamir sharing with threshold/poly-degree $t=2$. Thus, we perform the same degree-reduction step that is also required after a Shamir multiplication.
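The first step of the bridge — each party dividing its additive share by its Lagrange coefficient — can be verified in Python. The degree-reduction step that follows is omitted here; names and the example prime are illustrative:

```python
import random

p = 2**61 - 1  # example prime

def lagrange_at_zero_coeffs(ids):
    """Lagrange coefficients lambda_i at X=0 for the given party ids."""
    coeffs = []
    for i in ids:
        li = 1
        for j in ids:
            if j != i:
                li = li * j % p * pow(j - i, -1, p) % p
        coeffs.append(li)
    return coeffs

# P_i holds the additive share x_i (first component of its Rep3 share)
x = 42
x1, x2 = random.randrange(p), random.randrange(p)
adds = [x1, x2, (x - x1 - x2) % p]

# each party divides by its Lagrange coefficient; the points (i, x_i/lambda_i)
# lie on some degree-2 polynomial whose constant term is x
lam = lagrange_at_zero_coeffs([1, 2, 3])
shamir = [xi * pow(l, -1, p) % p for xi, l in zip(adds, lam)]
assert sum(l * s for l, s in zip(lam, shamir)) % p == x
```

Since $∑_{i}λ_{i}⋅(x_{i}/λ_{i})=∑_{i}x_{i}=x$, interpolating all three points at zero recovers the secret, but all three points are needed, hence $t=2$ and the subsequent degree reduction.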
Zero Knowledge Proofs
For a comprehensive introduction to Zero-Knowledge Proofs, we recommend reading the following book by Justin Thaler: "Proofs, Arguments, and Zero-Knowledge".
Furthermore, here are some MPC-friendly ZK proof systems:
 Groth16: https://eprint.iacr.org/2016/260.pdf
 Plonk: https://eprint.iacr.org/2019/953.pdf
 HyperPlonk: https://eprint.iacr.org/2022/1355.pdf
 Marlin: https://eprint.iacr.org/2019/1047.pdf
 Halo2: https://zcash.github.io/halo2/
Collaborative SNARKs
In this document we list some literature for collaborative SNARKs.
Experimenting with Collaborative zkSNARKs: Zero-Knowledge Proofs for Distributed Secrets
This is the first paper^{1} in this space. It experiments with the feasibility of evaluating SNARKs in MPC and implements Groth16^{2} and Plonk^{3} using SPDZ^{4} and GSZ^{5} (maliciously secure variants of additive secret sharing and Shamir secret sharing, respectively).
EOS: Efficient Private Delegation of zkSNARK Provers
This paper^{6} uses a delegator to speed up MPC computations and investigates using the SNARK as errordetecting computation to implement cheaper malicious security.
zkSaaS: Zero-Knowledge SNARKs as a Service
This paper^{7} uses packed secret sharing (PSS)^{8}, i.e., a variant of Shamir secret sharing where multiple secrets are embedded into the same sharing polynomial, to speed up MPC computation. However, they encounter some problems with FFTs, since they cannot be implemented with the SIMD semantics of PSS naively.
Scalable Collaborative zkSNARK: Fully Distributed Proof Generation and Malicious Security
This paper^{9} is a follow-up to zkSaaS which replaces the used SNARK with GKR^{10}, which is better suited for PSS.
Scalable Collaborative zkSNARK and Its Application to Efficient Proof Outsourcing
This paper^{11} is essentially an updated version of "Scalable Collaborative zkSNARK: Fully Distributed Proof Generation and Malicious Security", which includes semi-honest protocols for collaborative HyperPlonk^{12}, additional optimizations, and new experiments.
Confidential and Verifiable Machine Learning Delegations on the Cloud
This paper^{13} implements GKR in MPC using the well-known MP-SPDZ^{14} library. It focuses on efficient matrix multiplications, but provides a generic construction as well.
MP-SPDZ: https://github.com/data61/MP-SPDZ, paper: https://eprint.iacr.org/2020/521.pdf
How to prove any NP statement jointly? Efficient Distributed-prover Zero-Knowledge Protocols
This paper^{15} provides new security notions for distributed-prover zero-knowledge (DPZK) and a generic compiler to realize distributed proof generation for any zero-knowledge proof system built from the interactive oracle proofs (IOP) paradigm.